Shocking reason why you need Google Search if you use Google Bard

At a time when conversations around the ‘hallucinations’ of artificial intelligence are becoming increasingly frequent, Debbie Weinstein, Google VP and managing director of Google UK, made a shocking statement in an interview: the company’s AI chatbot, Google Bard, is not meant to be used for accurate information, and users should turn to Google Search for that instead. She also emphasized that Google Bard is an ‘experiment’ and is not intended as a source of specific information.

In an interview on BBC’s Today programme, Weinstein was responding to a question about AI hallucinations in Google’s in-house chatbot Bard. She said, “I would say Bard is used differently from how Google Search is traditionally used. Bard is, first of all, an experiment in how you can collaborate with a large language model. It is really for collaboration around problem-solving, ideation, and creating new ideas. It is not really the place you go for specific information”.

Google Bard’s unreliability issue

Answering another question about the unreliability of Google’s AI chatbot, and whether the company feels comfortable having a product in the public domain that can give out misleading information, she said, “At the bottom of it, you have the option to give us feedback and give it a thumbs up or thumbs down when it is not delivering for you. We also have a link that says Google It. We know that people count on Google for accurate information and we are encouraging people to actually use Google Search to reference information they found”.

Google does, in fact, provide such a disclaimer. On the Google Bard homepage, it says, “I have limitations and won’t always get it right, but your feedback will help me to improve”. However, there is no mention that users should cross-check the chatbot’s answers against Google Search.

AI chatbots: Handle with care?

The issue of AI hallucination is not exclusive to Google Bard. On occasion, both Bing AI and ChatGPT have also given incorrect or misleading information. The problem appears to be consistent across different large language models (LLMs) and foundational AI models.

So, is there even a point in using them? The Google VP asserts that Bard can be a good collaborative tool that refines your search experience and helps with problem-solving and ideation. And that seems to be true to an extent. Most chatbots do a decent job of analyzing large chunks of text, suggesting improvements to an essay, and handling subjective queries such as tips, how-tos, and the like.

But for more objective questions, where specific information is needed, you are better off with a search engine; at least, that is what the Google VP believes.
