
Google’s Bard AI Chatbot Is Already Making Mistakes

A preview of Google's Bard intruding on normal search results.

This week, Google announced its AI (artificial intelligence) chatbot tool, “Bard,” to take on OpenAI’s ChatGPT. Google hopes the tool will revolutionize search, but Bard is off to a terrible start and is already making mistakes.

Google is racing to take on Microsoft and Bing, and this new technology will certainly face several hurdles. As we all already know, ChatGPT isn’t all that reliable yet, can’t stop lying, and isn’t intelligent the way we want AI to be.

Unfortunately, we’re already seeing something similar from Google’s AI chatbot. During Bard’s launch and first demonstration, Google’s fancy tool said something factually incorrect, and that’s just the tip of the iceberg. Sure, Google already removed the video from YouTube, but the mistake is still clearly visible in the announcement blog post.

Google's AI chatbot Bard making a mistake.

When asked, “what new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard quickly did a Google search, used some AI magic, and spit out three facts about the James Webb Telescope.

However, the last of those results claims that the JWST took the very first picture of a planet outside our solar system, which is wrong. According to NASA, and as a slew of astronomers on Twitter quickly pointed out, the first photo of its kind was taken in 2004 by the European Southern Observatory's Very Large Telescope at the Paranal Observatory in Chile.

That might not seem like a huge mistake, but Bard confidently shared incorrect information. The internet was quick to bash Google over the error, but again, it’s important to remember this is new technology.

Following the mistake, a Google spokesman told The Verge: “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information.”

Unfortunately, Bard needs a little more work and is already starting to confirm our worst fears. Google has not explained how Bard was trained to generate answers and summaries, but it will hopefully keep to its AI principles and continue working to improve the technology.

via Telegraph

Cory Gunther
Cory Gunther has been writing about phones, Android, cars, and technology in general for over a decade. He's a staff writer for Review Geek covering roundups, EVs, and news. He's previously written for GottaBeMobile, SlashGear, AndroidCentral, and InputMag, and he's written over 9,000 articles. Read Full Bio »