The company that owns Facebook and Instagram, Meta, has said that it would put together a team to combat misleading artificial intelligence (AI) material ahead of the June EU elections.
It is concerned about the potential use of generative AI, technology that can fabricate images, audio, and video, to deceive voters. Separately, Home Secretary James Cleverly told the Times that some individuals want to manipulate a general election with AI-generated spoofs.
However, an industry expert said the plans may prove “weak.” The BBC has asked Meta whether it has similar plans for the upcoming US and UK elections. The announcement comes two weeks after Meta and other major tech companies committed to combating such content.
Voting in this year’s European Parliament elections takes place on 6–9 June. Rival social media platform TikTok said in February that it would introduce “Election Centres”—housing official information—within its app for each of the 27 EU nations, each in its own language.
The company, which also owns WhatsApp and Threads, said in a blog post by Marco Pancini, its head of EU affairs, that it would establish “an EU-specific Elections Operations Centre” to “identify potential threats and put specific mitigations in place across our apps and technologies in real-time.”
“Since 2016, we’ve invested more than $20bn (£15.7bn) into safety and security and quadrupled the size of our global team working in this area to around 40,000 people,” he stated. “This includes 15,000 content reviewers who review content across Facebook, Instagram, and Threads in more than 70 languages including all 24 official EU languages.”
Meta’s Response to AI-Generated Misinformation
However, Deepak Padmanabhan of Queen’s University Belfast, who co-wrote a paper on elections and AI, argues the announcement is flawed. “Most of its planned strategy could be observed to lack teeth in substantive ways,” he said. One of his concerns is how the company intends to handle AI-generated photos, an approach he says “may be intrinsically unworkable.”
He posed the question of what might happen if photographs produced by realistic artificial intelligence appeared to depict demonstrators fighting with police.
“Proving it to be fake requires that we are sure that there was no such attack by the policemen pictured on the farmers pictured – this may be infeasible both for technology and for human experts,” he said. “How can this be classified as true or fraudulent by any technology? Thus, it is not very clear how effective Meta’s generative AI strategy could be – at the very least, there are serious limitations.”
To counter the danger, Meta, which presently collaborates with 26 fact-checking organizations around the EU, said that it will add three new partners with offices in Slovakia, France, and Bulgaria.
These organizations’ role is to debunk content that spreads false information, including posts with AI-generated elements. They do not handle posts that aim to suppress voting, which are prohibited outright. According to Mr. Pancini, content flagged as false will not be permitted in advertisements and will instead carry warning labels and be made less visible.
Advertisements are not allowed to cast doubt on the validity of the vote, declare victory prematurely, or dispute “the methods and processes of election”. Mr. Pancini said the firm’s work was the result of cooperation and that more coordination would be needed going forward. “Since AI-generated content appears across the internet, we’ve also been working with other companies in our industry on common standards and guidelines,” he said.
“This work is bigger than any one company and will require a huge effort across industry, government, and civil society.”