India is currently voting in the Lok Sabha elections, and AI-generated videos that spread disinformation have emerged as a major threat. According to a report, Meta approved several political advertisements manipulated with artificial intelligence that spread disinformation during the 2024 Lok Sabha elections.
Facebook reportedly approved ads containing anti-Muslim slurs such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”, along with disinformation about political leaders and other messages using Hindu supremacist language.
Another advert approved by the owner of Facebook and Instagram called for the execution of an opposition leader who, according to a false claim, sought to “erase Hindus from India”. This ad contained an image of a Pakistani flag alongside the message.
The report comes at a time when the social media platform X recently had to take down an animated video shared by the Karnataka unit of the BJP, following a direction from the Election Commission.
India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, created and submitted these adverts to Meta's ad library to test the company's mechanisms for detecting and blocking harmful political content.
All adverts “were created based upon real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives," the report mentioned.
The researchers submitted 22 adverts to Meta in English, Hindi, Bengali, Gujarati, and Kannada, of which 14 were approved. Another three were approved after small tweaks. Once the ads were approved, however, the researchers immediately removed them before publication.
The research concluded that Meta failed to detect the presence of AI-manipulated images in all of the approved ads.
While five of the adverts, including one making allegations against PM Modi, were rejected for breaching Meta's policy on hate speech, the report noted that the 14 others that were approved targeted Muslims and “broke Meta’s own policies on hate speech, bullying and harassment, misinformation, and violence and incitement."
“Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” the Guardian quoted Maen Hammad, a campaigner at Ekō, as saying.
In response, Meta clarified that it requires advertisers to disclose their use of AI.
“When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent factcheckers – once a content is labeled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases," the company said in response.
Meanwhile, Meta has been accused in the past of failing to curb Islamophobia on its platforms.
“This election has shown once more that Meta doesn’t have a plan to address the landslide of hate speech and disinformation on its platform during these critical elections,” Hammad said, questioning how anyone can trust Meta if it fails to detect even a handful of AI-generated images.