AI content made up less than 1% of election-related misinformation in 2024, Meta says
Date: 4/12/2024 - Time: Updated 12:57:36 pm
Courtesy: The Indian Express
The social media giant also admitted to content moderation overreach during the COVID-19 pandemic.

Meta has found that AI-generated content made up less than one per cent of the misinformation that was fact-checked during major elections held in over 40 countries this year, including in India. The finding was based on the social media giant's analysis of content posted on its platforms during elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

"While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content," Nick Clegg, the global affairs president at Meta, wrote in a blog post published on Tuesday, December 3.

Meta's claims suggest that previously raised concerns about the role of AI in spreading propaganda and disinformation did not play out on its platforms such as Facebook, WhatsApp, Instagram, and Threads.

Meta also said that it prevented foreign interference in elections by taking down over 20 new "covert influence operations." "We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI," it said.

The company also noted that it rejected over 5,90,000 requests from users to create election-related deepfakes such as AI-generated images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden on Imagine, its AI image generator tool.

Recently, Meta's Nick Clegg said that the company regrets its aggressive approach to content moderation during the COVID-19 pandemic. "No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We're acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content," Clegg was quoted as saying by The Verge.

The top executive also acknowledged that Meta's moderation error rates were "still too high which gets in the way of the free expression that we set out to enable." "Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly," he said.