Google Gemini’s Inaccurate AI Images Disappoint, But Shouldn’t Surprise

Adobe Firefly-created image of person screaming at computer.

Text-to-image generation tools have been an incredible asset, enabling me to transform conceptual musings into visually captivating artwork for dissemination across my social media platforms, blog, and newsletter. As someone with limited design expertise, artificial intelligence has significantly streamlined my content creation process. However, I’m aware that outputs from platforms like Adobe Firefly, DALL-E, Midjourney, and others aren’t flawless. Therefore, it came as no surprise when I learned that Google Gemini produced historically inaccurate images.

The company acknowledged on Feb. 23 that its AI model “missed the mark” after numerous users reported instances where the image generator depicted certain historical figures as individuals of various ethnic backgrounds and genders. Google had been criticized for “anti-white bias” and for being “woke.”

As the story snowballs across publishing outlets and social media, we shouldn’t be shocked by the results. This scenario has occurred before and will undoubtedly occur again. Maybe not with Google, but possibly with OpenAI’s Sora or another platform.

Diversity bias is a frequent concern with AI, especially face recognition software. Users complained the technology failed to detect people with darker skin tones and, in some cases, mislabeled Black people as animals. Research into these failures ultimately led Joy Buolamwini and Timnit Gebru to develop the Gender Shades audit.

That work consequently led Google to incorporate a 10-shade skin tone standard to reduce bias in its data sets. The announcement at Google I/O 2022 was celebrated because people could now edit photos or find images that matched their skin tone and gender; no longer was there a one-size-fits-all model.

And let’s not forget Microsoft’s chatbot Tay. Without adequate guardrails on the data fed into it, users convinced the bot to parrot the worst of the internet on social media.

All of this is a reminder that LLMs need to consume more data to minimize such bias — and that companies must implement safety measures to ensure the most accurate and traceable information is being used.
