Google’s AI Bard Made a Mistake in Reporting About Ceasefire in Israel

Google’s AI Bard continues to struggle with the same problem it has had since its initial release on March 21, 2023, and its stable release on September 27, 2023: producing false information that can sometimes be racist and is often perceived as unreliable. Before the stable release, its biggest update, on September 19, 2023, gave the chatbot access to Google’s full suite of tools.

Users are advised to avoid asking Google’s Bard about the latest conflict between Israel and Hamas. Just last week, when asked about the Israel-Hamas conflict, the chatbot falsely claimed that there was a ceasefire in Israel. There was not. It even appeared to predict a future death toll: Bard reported that the death toll had surpassed “1300” on October 11, a date that had not yet arrived when the question was asked.

The exact cause of these errors is not known, but the broader problem has a name: AI hallucination. AI hallucination is a phenomenon in which large language models (LLMs), typically AI chatbots, generate output that lacks support from known facts, making up claims and reports that sound true when they are not. Google insiders are aware of the problem and have questioned the chatbot’s usefulness.

In a discussion concerning Bard’s fabrications, Warkentin, a product manager, emphasized that Google had made progress since the AI tool’s debut. “We are very focused on reducing hallucination and increasing factuality; it’s one of our key success metrics,” he said. “We’ve improved quite a bit since launch, but it’s ongoing work, so please keep trying and sending us feedback when something is not right!”

Rabiej, a Bard product manager, said that Bard has no true understanding of the text it ingests and only responds to users’ prompts. He reminded users that any large language model is generative: it is not looking up facts or information online to summarize for the user; it is generating text.

This issue also occurs in other chatbots, including Microsoft’s Bing and ChatGPT Plus, which appear to mix accurate statements with inaccurate or fabricated details about current events. The questions about the Israel-Hamas conflict were posed by Insider.
