Since the launch of OpenAI’s ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular around the world. The technology puts the world’s information just a prompt away, to tailor as you please. You no longer need to open Google Search, type in a query, and sift through results; simply ask an AI chatbot and it will present the answer in a flash. However, the content that AI chatbots present is not always factual. In a recent case, two very popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the Israel-Hamas conflict.
Let’s take a deep dive into it.
AI chatbots report false information
According to a Bloomberg report, Google’s Bard and Microsoft’s AI-powered Bing Chat were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that a ceasefire was in place. In a newsletter, Bloomberg’s Shirin Ghaffary reported, “Google’s Bard told me on Monday, ‘both sides are committed’ to keeping the peace. Microsoft’s AI-powered Bing Chat similarly wrote on Tuesday that ‘the ceasefire signals an end to the immediate bloodshed.’”
Google Bard also got the death toll wrong. When asked about the conflict on October 9, it reported that the death toll had surpassed 1,300 as of October 11, a date that had not yet arrived.
What is causing these errors?
While the exact cause of these inaccuracies isn’t known, AI chatbots have long been known to twist facts from time to time, a problem known as AI hallucination. For the unaware, AI hallucination occurs when a Large Language Model (LLM) makes up information and reports it as absolute truth. This isn’t the first time an AI chatbot has fabricated facts. In June, there were reports of OpenAI being sued for libel after ChatGPT falsely accused a man of a crime.
This problem has persisted for some time, and even the people behind these AI chatbots are aware of it. Speaking at an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth.”
At a time when misinformation is already rampant, inaccurate news reporting by AI chatbots raises serious questions about the technology’s reliability.