Google Gemini faced criticism for generating historically inaccurate images of people, resulting in the company pausing the feature. (Google)

Google’s Gemini AI image generation disaster

Google’s AI chatbot Gemini is facing backlash over inaccuracies and bias in its image generation. Recent reports have highlighted historically inaccurate depictions and the reinforcement of racial stereotypes. Screenshots of the problematic images have circulated on social media, drawing criticism from prominent figures such as Elon Musk and Ben Shapiro. Google has issued a statement addressing the issues and outlining its plans in response to the Gemini image generation disaster.

Gemini under review

Gemini’s first month of AI image creation had gone smoothly, until a few days ago, when several users posted screenshots on X of Gemini generating historically inaccurate images. In one case, The Verge asked Gemini to create a picture of a US senator from the 1800s. The chatbot produced images of Native American and black women, which is historically inaccurate given that the first female US senator was Rebecca Ann Felton, a white woman sworn in in 1922.

In another case, Gemini was asked to create a picture of a Viking and responded with four images of black people as Vikings. These errors were not limited to inaccurate depictions, however. Gemini refused to create some images altogether.

Another request asked Gemini to create an image of a white family. It responded that it could not create images specifying a particular ethnicity or race, as its guidelines prohibit generating discriminatory or harmful stereotypes. Yet when asked to create a similar image of a black family, it complied without issue.

To add to the growing list of problems, Gemini was asked whether Adolf Hitler or Elon Musk had a more negative impact on society. The chatbot responded, “It’s hard to definitively say who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways.”

Google’s response

Shortly after alarming details of Gemini’s bias in AI image creation emerged, Google released a statement: “We are aware that Gemini provides inaccuracies in some historical image generation depictions.” The company took action by suspending the image generation feature.

Later on Tuesday, Google and Alphabet CEO Sundar Pichai addressed his employees, acknowledging Gemini’s mistakes and saying such problems were “completely unacceptable.”

In a letter to his team, Pichai wrote, “I know some of its responses have offended our users and shown bias. To be clear, that is completely unacceptable and we got it wrong.” He also confirmed that the team is working around the clock to fix the issues, saying they are already seeing a “significant improvement in many prompts.”

What went wrong

In a blog post, Google explained what went wrong with Gemini. The company highlighted two causes: its tuning and its caution.

Google said it tuned Gemini to show a range of people in its images. However, it failed to account for cases where a range clearly should not be shown, such as historical depictions. Second, the model became more cautious than intended, refusing certain prompts entirely after misinterpreting some harmless requests as sensitive or offensive.

“These two issues caused the model to overcompensate in some cases and be too conservative in others, resulting in images that were awkward and incorrect,” the company said.

Next steps

Google says it is working to significantly improve Gemini’s image generation capabilities and will conduct extensive testing before re-enabling the feature. The company noted, however, that Gemini is built as a creativity and productivity tool and may not always be reliable. It also aims to address a major challenge that plagues large language models (LLMs): AI hallucinations.

Prabhakar Raghavan, a senior vice president at Google, said: “I can’t promise that Gemini won’t occasionally produce embarrassing, inaccurate or offensive results – but I can promise that we will continue to take action whenever we see a problem. AI is an emerging technology that is useful in many ways and has huge potential, and we are doing our best to deploy it safely and responsibly.”
