Meta is launching a special task force dedicated to tackling disinformation and abusive AI-generated content in the lead-up to the EU elections in June.
The power of social media to influence voting is well documented. But the rapid rise of AI — which can generate “deepfake” images, text, and videos at the push of a button — has triggered new fears that the technology will be used to disrupt major elections across the world this year.
Led by a team of intelligence experts from within the company, Meta’s new “operations centre” has been set up to “swiftly identify potential threats” and implement “real-time mitigation strategies,” said the firm’s head of EU affairs, Marco Pancini.
The announcement comes just weeks after TikTok outlined its own preparations for the EU elections, which are set to be the world’s second-largest democratic vote this year, behind India’s.
Under the EU’s new Digital Services Act (DSA), online platforms with more than 45 million average monthly active users in the EU — like Facebook and TikTok — are obliged to take measures against disinformation and election manipulation.
What is Meta doing?
Meta said it will remove content from its platforms Facebook, Instagram, and Threads that could “contribute to imminent violence or physical harm, or that is intended to suppress voting.”
Besides removing such content, Meta will expand its network of independent fact-checkers, adding three new partners in Bulgaria, France, and Slovakia.
When content is “debunked” by these fact-checkers, Meta attaches warning labels and reduces its distribution in the feed so people are less likely to see it. When one of these labels is placed on a post, 95% of people don’t click through to view it, the company claims.
“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognise that speed is especially important during breaking news events,” Pancini said.
The threat of AI-generated content
As part of its efforts to address AI risks, Meta will add a new feature that lets users disclose when they share AI-generated video or audio. The company said it could even impose penalties for noncompliance, although it did not specify what these would entail.
Advertisers who run ads related to social issues, elections, or politics on Meta platforms will also have to disclose if they use a photorealistic image, video, or audio that has been AI-generated.
Earlier this month, 20 tech companies, including Meta, Google, Microsoft, X, Amazon, and TikTok, signed a pledge to crack down on AI content designed to mislead voters.
The firms aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms.
The power of AI to disrupt elections has already come under the spotlight.
In the US, a political ad published by the Republican Party last year depicted a dystopian scenario should President Joe Biden be re-elected: explosions in Taipei as China invades, waves of migrants causing panic in the US, and martial law imposed in San Francisco.
In November, a recording of Mayor of London Sadiq Khan circulated on social media. It called for Armistice Day commemorations to be postponed to allow a pro-Palestinian march to go ahead instead.
Both the ad and the recording were AI-generated fakes. Khan later warned that deepfakes could swing a close UK election.
“The era of deepfake and AI-generated content to mislead and disrupt is already in play,” British Home Secretary James Cleverly told The Times yesterday.
The home secretary warned that criminals and “malign actors” working on behalf of malicious states could use AI-generated deepfakes to hijack the general election.
This warning comes amid the biggest election year in world history. It is estimated that 2 billion people around the globe will vote in national elections throughout 2024, including in the UK, US, India, South Africa, and 60 other countries.