
Google said Thursday that it will “pause” its Gemini chatbot’s image generation tool after it was widely mocked on social media for creating “diverse” images that are not historically or factually accurate, such as black Vikings, Native American popes and female NHL players.

Users described Gemini as “absurdly woke” and “unusable”, while requests to create representative images for subjects such as America’s Founding Fathers resulted in oddly revisionist images.

“We are already working to fix recent issues with the image creation feature of Gemini,” Google said in a statement posted on X, adding that it would pause the feature and release an “improved version soon.”

Examples include an AI rendering of a black man representing George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed as a pope, despite the fact that all 266 popes throughout history have been white men.

One social media user described the Gemini tool as “unusable.”

In one shocking example spotted by The Verge, Gemini even created “diverse” depictions of Nazi-era German soldiers in 1943, including an Asian woman and a black man in uniform.

Google had previously admitted that the chatbot’s erratic behavior needed to be fixed.

“We are working to improve these kinds of depictions immediately,” the company said.

Google admitted that Gemini “missed the mark.”

“Gemini’s AI image generation does generate a wide range of people, and that’s generally a good thing because people around the world use it,” the statement continued. “But it’s missing the mark here.”

The Post has reached out to Google for further comment.

It was a significant misstep for Google, which earlier this month rebranded its main AI chatbot product as Gemini and introduced a slew of new features, including image generation.

Google Gemini has been mocked online for producing “woke” versions of historical figures.

The blunder comes just days after OpenAI, which runs the popular ChatGPT, unveiled a new AI tool called Sora that creates videos based on users’ text prompts.

Because Google hasn’t published the settings that govern its Gemini chatbot’s behavior, it’s difficult to get a clear explanation for why it invents different versions of historical figures and events.


When asked by The Post to provide its trust and security guidelines, Gemini acknowledged that they are “not made public due to technical complexities and intellectual property considerations.”

The chatbot also acknowledged that it is aware of “criticism that Gemini’s preference for forced diversity in its image generation may lead to historically inaccurate portrayals.”

“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate results.”