An influential American Senator has asked US social media companies what preparations they have made for elections in India
Published Date – 16 March 2024, 10:03 AM
Washington: An influential American Senator on Friday asked US social media companies what preparations they have made for elections in India, where social media platforms, including Meta-owned WhatsApp, have a long track record of amplifying misleading and false content.
The letter, written by Senator Michael Bennet, a member of the Senate Intelligence and Rules Committees, the latter of which has oversight over US elections, comes on the eve of the announcement of elections in India by the Election Commission of India (ECI).
The letter, addressed to the leaders of Alphabet, Meta, TikTok, and X, seeks information from these companies about their preparations for elections in various countries, including India.
“The dangers your platforms pose to elections are not new – users deployed deepfakes and digitally altered content in previous contests – but now, artificial intelligence (AI) models are poised to exacerbate risks to both the democratic process and political stability. The proliferation of sophisticated AI tools has reduced earlier barriers to entry by allowing almost anyone to generate alarmingly realistic images, video, and audio,” Bennet wrote.
With over 70 countries holding elections and more than two billion people casting ballots this year, 2024 is the “year of democracy”.
Australia, Belgium, Croatia, the European Union, Finland, Ghana, Iceland, India, Lithuania, Namibia, Mexico, Moldova, Mongolia, Panama, Romania, Senegal, South Africa, the United Kingdom, and the United States are expected to hold major electoral contests this year.
In his letter to Elon Musk of X, Mark Zuckerberg of Meta, Shou Zi Chew of TikTok and Sundar Pichai of Alphabet, Bennet requested information on the platforms’ election-related policies, content moderation teams, including the languages covered and the number of moderators on full-time or part-time contracts, and tools adopted to identify AI-generated content.
“Democracy’s promise – that people rule themselves – is fragile,” Bennet continued.
“Disinformation and misinformation poison democratic discourse by muddying the distinction between fact and fiction. Your platforms should strengthen democracy, not undermine it,” he wrote.
“In India, the world’s largest democracy, the country’s dominant social media platforms – including Meta-owned WhatsApp – have a long track record of amplifying misleading and false content. Political actors that fan ethnic resentment for their own benefit have found easy access to disinformation networks on your platforms,” the Senator wrote.
Bennet then asked for details of the new policies and personnel the companies have put in place for the Indian elections. “What, if any, new policies have you put in place to prepare for the 2024 Indian election? How many content moderators do you currently employ in Assamese, Bengali, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Sindhi, Tamil, Telugu, Urdu, Bodo, Santhali, Maithili, and Dogri?” he asked.
“Of these, please provide a breakdown between full-time employees and contractors,” Bennet said.
The Senator told the social media CEOs that beyond their failures to effectively moderate misleading AI-generated content, their platforms also remain unable to stop more traditional forms of false content.
“China-linked actors used malicious information campaigns to undermine Taiwan’s January elections. Facebook allowed the spread of disinformation campaigns that accused Taiwan and the United States of collaborating to create bioweapons, while TikTok permitted coordinated Chinese-language content critical of President-elect William Lai’s Democratic Progressive Party to proliferate across its platform,” the letter said.
According to the Senator, he has heard from the heads of the US Intelligence Community that the Russian, Chinese, and Iranian governments may attempt to interfere in US elections.
“As these and other actors threaten peoples’ right to exercise popular sovereignty, your platforms continue to allow users to distribute fabricated content, discredit electoral integrity, and deepen social distrust,” he wrote.