Generative artificial intelligence (AI) is a transformative technology with untapped potential, and many experts believe we are still only scratching its surface. Not only is it being used as a standalone model, but various AI tools, including AI chatbots, are being built to apply the technology creatively. However, a major bottleneck in its integration and adoption remains AI hallucination, something even companies such as Google, Microsoft, and OpenAI have struggled with and continue to grapple with. So, what exactly is it, how does it affect AI chatbots, and how are tech firms navigating this challenge? Let us take a look.
What is AI hallucination?
AI hallucinations are essentially incidents in which an AI chatbot gives an incorrect or nonsensical response to a question. Sometimes the hallucinations are blatant: recently, for example, Google Bard and Microsoft’s Bing AI falsely claimed that a ceasefire had been reached in the ongoing Israel-Hamas conflict. At other times, they can be subtle enough that users without expert-level knowledge end up believing them. Another example comes from Bard, where asking “What country in Africa starts with a K?” generated the response “There are actually no countries in Africa that begin with the letter K”.
The root cause of AI hallucinations
AI hallucinations can occur in large language models (LLMs) for various reasons. One of the primary culprits appears to be the huge amounts of unfiltered data fed to AI models during training. Since this data is sourced from fiction novels, unreliable websites, and social media, it is bound to carry biased and incorrect information. Processing such information can lead an AI chatbot to treat it as the truth.
Another issue lies in how the AI model processes and categorizes data in response to a prompt, which often comes from users with little knowledge of AI. Poor-quality prompts can generate poor-quality responses if the AI model is not built to handle them correctly.
How are tech firms dealing with the challenge?
Right now, there is no playbook for dealing with AI hallucinations. Every company is testing its own methods and systems to reduce the occurrence of inaccuracies significantly. Recently, Microsoft published an article on the topic in which it highlighted that “models pre-trained to be sufficiently good predictors (i.e., calibrated) may require post-training to mitigate hallucinations on the type of arbitrary facts that tend to appear once in the training set”.
However, there are certain things both tech firms and developers building on these tools can do to keep the issue in check. IBM recently published a detailed post on the problem of AI hallucination, in which it lists six ways to fight this challenge. These are as follows:
1. Using high-quality training data – IBM highlights, “In order to prevent hallucinations, ensure that AI models are trained on diverse, balanced and well-structured data”. Data sourced from the open internet typically comes with biases, misleading information, and inaccuracies, so filtering the training data can help reduce such instances.
2. Defining the purpose your AI model will serve – “Spelling out how you will use the AI model—as well as any limitations on the use of the model—will help reduce hallucinations. Your team or organization should establish the chosen AI system’s responsibilities and limitations; this will help the system complete tasks more effectively and minimize irrelevant, ‘hallucinatory’ results,” IBM states.
3. Using data templates – Data templates offer teams a predetermined format, enhancing the chances of an AI model generating outputs in line with set guidelines. Relying on these templates keeps outputs consistent and reduces the risk of the model producing inaccurate results (a prompt-level sketch of this idea follows the list below).
4. Limiting responses – AI models may hallucinate due to a lack of constraints on possible outcomes. To enhance consistency and accuracy, it is recommended to establish boundaries for AI models using filtering tools or by setting clear probabilistic thresholds (a decoding-level sketch of this follows the list below).
5. Testing and refining the system continually – Thoroughly testing and continuously evaluating an AI model are crucial to preventing hallucinations. These practices enhance the overall performance of the system, allowing users to adapt or retrain the model as data evolves over time.
6. Last but not least, IBM highlights human oversight as the best method to reduce the impact of AI hallucinations.
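For developers building on these tools, the data-template advice above can be applied at the prompt level. Below is a minimal Python sketch of that idea; the template wording, the field names, and the build_prompt helper are illustrative assumptions rather than anything taken from IBM's post.

```python
# Minimal sketch of the "data template" idea: the model is asked to fill a
# fixed, predetermined structure instead of writing free-form text, and it is
# explicitly allowed to answer "UNKNOWN" rather than guess.
# All names and wording here are illustrative, not from IBM's post.

ANSWER_TEMPLATE = """You are a factual assistant.
Answer ONLY by filling in the fields below. If you are not confident of the
answer, write UNKNOWN in the 'answer' field instead of guessing.

question: {question}
answer:
sources:
confidence (low/medium/high):
"""


def build_prompt(question: str) -> str:
    """Fill the predetermined template with the user's question."""
    return ANSWER_TEMPLATE.format(question=question)


if __name__ == "__main__":
    print(build_prompt("What country in Africa starts with a K?"))
```

The fixed fields give the model a narrow structure to fill in, and the explicit "UNKNOWN" option gives it a sanctioned way to avoid guessing.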
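The response-limiting advice can likewise be approached at the decoding level by capping sampling parameters. The snippet below is a sketch that assumes the OpenAI Python client (openai 1.x) with an API key set in the environment; the model name and the specific values are examples, not recommendations from IBM, OpenAI, or this article.

```python
# Sketch: constraining an LLM's output space by lowering the sampling
# temperature and top_p and capping response length. Assumes the OpenAI
# Python client (openai >= 1.0) with OPENAI_API_KEY set in the environment;
# the model name and the specific values are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "Answer concisely. If you are unsure, say you do not know."},
        {"role": "user",
         "content": "What country in Africa starts with a K?"},
    ],
    temperature=0.2,  # low temperature sharpens the token distribution
    top_p=0.5,        # nucleus sampling threshold discards unlikely tokens
    max_tokens=100,   # bound the length of the response
)

print(response.choices[0].message.content)
```

Lower temperature and top_p push the model toward its highest-probability tokens, and capping max_tokens bounds how far a response can wander; this reduces, though does not eliminate, fabricated detail.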
At present, this is an ongoing challenge that is unlikely to be solved simply by changing the algorithms or the structure of LLMs. A solution is expected to emerge as the technology itself matures and such problems are understood at a deeper level.