OpenAI, Amazon.com Inc., Google and 17 other major players in artificial intelligence technology have formed a consortium to try to prevent AI from being used to deceive voters in upcoming global elections.
The companies announced the agreement at the Munich Security Conference on Friday, making a series of commitments to try to detect AI-enabled election misinformation, respond to it and raise public awareness about the potential for deception.
The proliferation of AI has made producing realistic fake images, audio and videos widely accessible, raising fears that the technology will be used to mislead voters in a year when elections will determine the leadership of 40% of the world’s population. Last month, an AI-generated audio message that sounded like President Joe Biden attempted to dissuade Democrats from voting in the New Hampshire primary election.
In the agreement, named the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” the companies pledged to use technology to mitigate the risks of AI-generated election content. They also committed to sharing information with each other about how to address bad actors.
“AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” Microsoft Corp.’s President Brad Smith said in the press release announcing the accord.
Adobe Inc., ByteDance Ltd.’s TikTok and International Business Machines Corp., as well as startups such as Anthropic and Inflection AI, were among the signers. The agreement also included social media companies Meta Platforms Inc., X and Snap Inc.
“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the accord said.
In light of the rise of realistic fakes of candidates’ voices and likenesses, the new agreement will try to curtail digital content that falsifies the words or actions of political candidates and other key figures in elections.
Even so, many tech companies say they are on edge about the potential for political misuse of AI-generated content, and their detection tools remain limited. A system developed by Meta will initially be able to detect only fake images, not audio or video, and could miss images that have been stripped of watermarks.
“Our mind is not at ease,” OpenAI’s Chief Executive Officer Sam Altman said last month at a Bloomberg event at the World Economic Forum. “We’re going to have to watch this incredibly closely this year.”