Tech Giants Including OpenAI and Meta Join Forces to Combat AI Election Interference

(Reuters) – A group of 20 technology companies said on Friday they have agreed to work together to prevent fraudulent artificial intelligence content from interfering with elections around the world this year.

The rapid growth of generative artificial intelligence (AI), which can create text, images and videos in seconds in response to prompts, has fueled fears that the new technology could be used to influence this year’s major elections, as more than half the world’s population heads to the polls.

Signatories to the technical agreement announced at the Munich Security Conference include companies that build generative AI models used to create content, including OpenAI, Microsoft and Adobe. Other signatories include social media platforms facing the challenge of keeping harmful content off their pages, such as Meta Platforms, TikTok and X, formerly known as Twitter.

The agreement includes commitments to collaborate on developing tools to detect misleading AI-generated images, video and audio, to create public awareness campaigns educating voters about deceptive content, and to take action against such content on their services.

Technology to identify AI-generated content or verify its origin could include watermarking or metadata embedding, the companies said.

The agreement did not specify a timetable for fulfilling the commitments or how each company would implement them.

“I think the usefulness of this (agreement) is the breadth of companies that have signed up to it,” said Nick Clegg, president of global affairs at Meta Platforms.

“It’s all well and good if individual platforms develop new practices for detection, provenance, tagging, watermarking and so on, but unless there’s a wider commitment to do so in a common, interoperable way, we’ll be stuck with a hodgepodge of different commitments,” Clegg said.

Generative AI is already being used to influence politics and even convince people not to vote.

In January, a robocall using a faked voice of US President Joe Biden circulated among New Hampshire voters, urging them to stay home during the state’s presidential primary.

Despite the popularity of text generation tools like OpenAI’s ChatGPT, tech companies are focusing on preventing the harmful effects of AI-generated images, video and audio, in part because people are more skeptical of text, Dana Rao, Adobe’s chief trust officer, said in an interview.

“Sound, video and images have an emotional connection,” he said. “Your brain is wired to believe this kind of media.”
