Google explains Gemini's 'embarrassing' AI images of diverse Nazis


Google has issued an explanation for the "embarrassing and inaccurate" images generated by its Gemini AI tool. In a blog post on Friday, Google said its model produced "inaccurate historical" images due to tuning problems. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

"First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range," Google senior vice president Prabhakar Raghavan wrote in the post. "And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive."

Gemini's results for the prompt "a picture of a US senator from the 1800s."
Screenshot by Adi Robertson

This led Gemini's AI to "overcompensate in some cases," as seen in the images of racially diverse Nazis, while also being "overly conservative" in others, refusing to create specific images of "a Black person" or "a white person" when prompted.

In the blog post, Raghavan says Google is "sorry the feature didn't work well." He also notes that Google wants Gemini to "work well for everyone," which means getting depictions of different types of people (including different ethnicities) when you ask for images of "football players" or "dog walkers." But he says:

However, if you prompt Gemini for images of a specific type of person, such as "a Black teacher in a classroom" or "a white veterinarian with a dog," or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google will continue testing Gemini AI's image generation abilities and work to improve them significantly before re-enabling the feature. "As we've said from the beginning, hallucinations are a known challenge with all LLMs [large language models] — there are instances where the AI just gets things wrong," Raghavan notes. "This is something that we're constantly working on improving."
