Tag: AI hallucination

5 things about AI you may have missed today: Gen AI set to boost India’s GDP, Boots to unveil AI ‘Personal Shopper’
Technology

Gen AI set to boost India's GDP by $1.2-1.5 trillion by 2030, EY report reveals; AI chatbots and humans share the hallucination experience; federal agencies lag in AI management, GAO report reveals; Bihar Police exam to deploy AI to prevent cheating for 1,275 sub-inspector posts. This and more in our daily roundup. Let us take a look.

1. Gen AI set to boost India's GDP by $1.2-1.5 trillion by 2030, EY report reveals

Generative artificial intelligence (Gen AI) could contribute $1.2-1.5 trillion to India's GDP by FY30, according to an EY India report. Titled 'The AIdea of India,' the report outlines Gen AI's potential to accelerate India's digital transformation. It forecasts a $359-438 billion addition in FY 2029-30 alone, with business services, financial services, education, retail, an...
5 things about AI you may have missed today: Bing AI gets election info wrong, AI risk to financial systems, more
Technology

Today, December 15, the artificial intelligence space was filled with twists and turns, and AI hallucination, the familiar foe of emerging technology, made a comeback. In the first incident, Microsoft's AI chatbot Bing AI made a major blunder while answering questions about elections in Germany and Switzerland. In other news, the US Financial Stability Oversight Council has found that the rapid adoption of AI can pose new risks to the country's financial system. This and more in today's AI roundup. Let us take a closer look.

Microsoft's Bing AI wrongly answers election questions

Research from European nonprofits AI Forensics and AlgorithmWatch indicates that Microsoft's Bing AI chatbot, now called Microsoft Copilot, provided inaccurate answers to 1 in 3 basic questions about...
AI hallucination: What is it, how does it affect AI chatbots, and how are tech firms dealing with it?
Technology

Generative artificial intelligence (AI) is a transformative technology with untapped potential, and many experts believe we are still just scratching its surface. Not only is it being used as a standalone model, but various AI tools, including AI chatbots, are being created to use the technology in creative ways. However, a major bottleneck in its integration and adoption remains AI hallucination, something even companies such as Google, Microsoft, and OpenAI have struggled with and continue to struggle with. So, what exactly is it, how does it impact AI chatbots, and how are tech firms navigating this challenge? Let us take a look.

What is AI hallucination?

AI hallucinations are essentially incidents when an AI chatbot gives an incorrect or nonsensical response to a qu...
AI hallucination? Bard chatbot claims it is a woman, prefers to be called Sofia
Technology

AI hallucination is the phenomenon in which a generative AI chatbot says something incorrect, misleading, or something it is not supposed to say. Many leading AI researchers have warned about AI hallucinations, as they can have a harmful impact on society. However, the responses are not always harmful; some instead create a sense of awe and fascination. Most LLMs are built on the poor-quality data available on the internet, so asking them about things that occur in the outside world can result in AI hallucinations. So, we tried something different. Instead of asking Google Bard whether it knew something about the world, we probed how well it knew itself. And here is how it went.

Bard claims to be a woman named Sofia

Below, I'm sharing my promp...
UK govt’s use of AI for immigration, crime flagged as discriminatory
Technology

The artificial intelligence (AI) wave has swept the world, with nearly every sector adopting the technology. Over the last few months, we've seen several use cases of AI in fields such as education, finance, healthcare and even agriculture. While the technology is proving its mettle in some areas by getting work done more efficiently, it has also led to a series of issues pertaining to false information and hallucinations. While world governments are drafting regulations around this technology, it is still being used widely, leading to discriminatory results.

Use of AI leading to discriminatory results

According to a report by the Guardian, UK government officials are leveraging AI for various tasks, from flagging up sham marriages to deciding which pensioners get ...
AI chatbot hallucination problem is huge, here is how tech companies are facing the challenge
Technology

There is no doubt that generative artificial intelligence (AI) has proven itself to be a revolutionary technology, but we are still scratching the surface of what it is capable of. Like any technology, it is bound to become more powerful and impactful with further research and integration into existing technologies. However, one of the major challenges facing both AI researchers and the tech companies building AI tools is AI hallucination, which is slowing adoption and reducing the trust users place in these tools.

What is AI hallucination?

AI hallucinations are essentially incidents when an AI chatbot gives an incorrect or nonsensical response to a question. Sometimes the hallucinations are blatant; for example, recently, Google Bard and Micros...
Google Bard, Bing Search make huge mistakes, inaccurately report ceasefire in Israel
Technology

Since the emergence of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular around the world. The technology puts the world's information just a prompt away, to tailor as you please. You no longer even need to get on Google Search and enter your query to find the answer you've been looking for; simply ask an AI chatbot and it will present the answer in a flash. However, the content that AI chatbots present is not always factual and true. In a recent case, two very popular AI chatbots, Google Bard and Microsoft Bing Chat, were accused of providing inaccurate reports on the Israel-Hamas conflict. Let's take a deep dive into it.

AI chatbots report false information

According to a Bloomberg report, Google's Bard and Microsoft's AI-...
Shocking study claims ChatGPT has a “significant and systematic political bias”
Technology

Since its inception, OpenAI's ChatGPT has faced many allegations of spreading misinformation, fake news, and inaccurate information, though over time the chatbot's algorithm has improved on these issues significantly. Another criticism leveled at ChatGPT in its very early days was that the platform displayed signs of political bias; some people alleged that the chatbot leaned liberal in its responses to certain questions. Just days after the allegations first surfaced, however, people found that OpenAI's chatbot refused to answer any political questions, something it still does today. Now, a new study claims that ChatGPT still holds a political bias.

Researchers from the University of East Anglia in the UK conducted a survey wh...
Shocking reason why you need Google Search if you use Google Bard
Technology

At a time when conversations around 'hallucinations' of artificial intelligence are becoming increasingly frequent, Google VP and managing director of Google UK Debbie Weinstein gave a shocking statement in an interview: the company's AI chatbot Google Bard is not meant to be used for accurate information, and users should instead turn to Google Search for that. She also highlighted that Google Bard is an 'experiment' and is not meant for specific information.

In an interview with BBC's Today program, Weinstein was responding to a question about AI hallucinations in Google's in-house chatbot Bard. She said, "I would say Bard is used differently from how Google Search is traditionally used. Bard is, first of all, an experiment in how you can collaborate with l...
Google, one of AI’s biggest backers, warns own staff about chatbots
Technology

Alphabet Inc is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information. The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk. Alphabet also alerted its engineers to avoid direct use ...