Google will require election advertisers to disclose all artificial intelligence-generated messages. (AP)

Google to Require Prominent Disclosures for AI-Generated Political Ads

Google, a subsidiary of Alphabet Inc., will soon require election advertisers to disclose when their ads have been altered or generated using artificial intelligence tools.

Effective in mid-November, the policy update requires election advertisers across Google’s platforms to alert viewers when their ads contain images, video or audio made with generative artificial intelligence — software that can create or edit content from a simple prompt. Advertisers must include prominent language such as “This voice is computer generated” or “This image does not depict actual events” in altered election ads on Google platforms, the company said in a notice to advertisers. The policy does not apply to minor adjustments, such as resizing or brightening an image.

The update improves Google’s election ad transparency measures, the company said, especially given the growing prevalence of artificial intelligence tools — including Google’s — that generate synthetic content. “It will continue to help support responsible political advertising and provide voters with the information they need to make informed decisions,” said Michael Aciman, a Google spokesman.

Google’s new policy does not apply to videos uploaded to YouTube that are not paid advertising, even if they are uploaded by political campaigns, the company said. Meta Platforms Inc., which owns Instagram and Facebook, and X, formerly known as Twitter, do not have specific disclosure rules for AI-generated political ads. Meta said it receives feedback from its fact-checking partners about misinformation produced by artificial intelligence and updates its policies accordingly.

Like other digital advertising services, Google has had to fight misinformation across all its platforms, including false claims about elections and voting that can undermine trust and participation in the democratic process. In 2018, Google began requiring election advertisers to go through an identity verification process, and a year later it tightened targeting restrictions on election ads and extended those requirements to state-level candidates and officeholders, political parties and ballot initiatives. The company also touts its ad transparency center, where the public can check who bought election ads, how much they spent and how many impressions the ads received across the company’s platforms, including its search engine and its video platform YouTube.

Yet the problem of misinformation has persisted, especially on YouTube. Although YouTube enforced a separate policy for election ads on the platform in 2020, it said regular videos spreading false claims of widespread election fraud did not violate its policies; those videos were reportedly viewed more than 137 million times in the week of November 3. YouTube changed its rules only after the so-called safe harbor deadline expired on December 8, 2020, the date by which all state-level election challenges, such as recounts and audits, had to be completed.

In June of this year, YouTube announced it would stop removing content that promotes false claims of widespread election fraud in the 2020 and other previous US presidential elections.

Google said YouTube’s community guidelines, which prohibit digitally manipulated content that could pose a serious risk of egregious harm, apply to all video content uploaded to the platform. And the company said it has enforced its ads policies in previous years; in 2022 it blocked or removed 5.2 billion ads that violated its policies, including 142 million for violating its misrepresentation policies.
