Google's Gemini AI app drew criticism over errors, including images of ethnically diverse Nazi troops. (Google)

Google Gemini's flawed AI racial images seen as a warning of tech giants' power

Attendees at a cutting-edge tech festival viewed the scandal surrounding the Google Gemini chatbot's creation of images depicting Black and Asian Nazi soldiers as a cautionary tale about the power artificial intelligence puts in the hands of tech giants. Google CEO Sundar Pichai recently criticized his company's Gemini AI app for its "completely unacceptable" mistakes, including the production of images featuring ethnically diverse Nazi soldiers, which led Google to temporarily stop the app from generating images of people.

Social media users mocked and criticized Google for historically inaccurate images, such as one showing a Black female US senator from the 19th century, when the first Black female senator was not elected until 1992.

“We definitely messed up on the image generation,” Google co-founder Sergey Brin said at a recent artificial intelligence “hackathon,” adding that the company should have tested Gemini more thoroughly.

People interviewed at the popular South by Southwest arts and technology festival in Austin said the Gemini stumble underscores the disproportionate power a handful of companies have over AI platforms poised to change the way people live and work.

“Basically, it was too ‘woke,'” said lawyer and tech entrepreneur Joshua Weaver, meaning that Google had gone overboard in trying to project inclusion and diversity.

Google quickly corrected its mistake, but the underlying problem remains, said Charlie Burgoyne, director of Valkyrie’s applied science lab in Texas.

He likened Google’s Gemini patch to a Band-Aid.

While Google has long had the luxury of honing its products, it is now locked in an AI race with Microsoft, OpenAI, Anthropic and others, Weaver noted, adding, “They’re moving faster than they know how to move.”

Mistakes made in the pursuit of cultural sensitivity become flashpoints, especially given the tense political divisions in the United States, a situation exacerbated by Elon Musk’s X platform, formerly Twitter.

“People on Twitter are very happy to celebrate all the embarrassing things that happen in tech,” Weaver said, adding that the reaction to Gemini’s Nazi images was “overblown.”

Still, the mishap raised questions about how much control those who use AI tools have over information, he argued.

In the coming decade, the amount of information — or misinformation — generated by AI could dwarf the amount generated by humans, meaning those who manage AI safeguards will have a huge impact on the world, Weaver said.

Bias in, bias out

Karen Palmer, an award-winning mixed-reality creator at Interactive Films Ltd., said she could imagine a future in which someone gets into a robo-taxi and, “if the AI scans you and believes you’ve had significant infractions … you’re taken to the local police station,” not your intended destination.

AI has been trained on mountains of data and can be put to work on a growing number of tasks, from creating an image or sound to determining who gets a loan or whether a medical scan detects cancer.

But that data comes from a world full of cultural bias, disinformation, and social inequality—not to mention online content that can include random conversations between friends or intentionally exaggerated and provocative posts—and AI models can reproduce those flaws.

With Gemini, Google engineers tried to balance the algorithms to produce results that better reflect the diversity of people.

The effort backfired.

“It can be really tricky, nuanced and subtle to figure out where the bias is and how it’s embedded,” said tech lawyer Alex Shahrestani of Promise Legal, a law firm serving tech companies.

He and others believe that even well-intentioned engineers involved in AI training can’t help but bring their own life experiences and subconscious biases to the process.

Valkyrie’s Burgoyne also criticized big tech companies for keeping the inner workings of generative AI hidden in “black boxes,” so users can’t detect hidden biases.

“The capabilities of the outputs have far exceeded our understanding of the methodology,” he said.

Experts and activists are calling for more diversity in the teams that create AI and related tools, as well as greater transparency in how they operate—especially when algorithms rewrite user requests to “enhance” results.

The challenge is how to appropriately build in the perspectives of the world’s many diverse communities, said Jason Lewis of the Indigenous Futures Resource Center and related groups.

At Indigenous AI, Lewis works with remote Indigenous communities to design algorithms that use their data ethically while reflecting their worldviews, something he said he does not always see in the arrogance of big tech leaders.

His own work, he told the group, stands out as “such a contrast to the rhetoric of Silicon Valley, which has this top-down ‘Oh, we’re doing this because we’re benefiting all of humanity, right?’ bullshit.”
