
Grounding and hallucinations in AI: Taming the wild imagination of artificial intelligence

Gordon Gibson
Director, Applied Machine Learning
AI & Automation | 8 min read

We've witnessed remarkable breakthroughs in artificial intelligence — many that have revolutionized industries across the board. From healthcare to finance to customer service, AI has become an indispensable tool for businesses looking to streamline operations and enhance experiences.

However, as with any powerful technology, AI comes with its own set of challenges. One of the most perplexing is the phenomenon of AI hallucinations.

It could go a little something like this: You're chatting with an AI agent, asking about a company's return policy. Suddenly, the AI confidently informs you that you can return any item within 90 days, no questions asked. Sounds great, right? There's just one tiny problem – the actual return policy is only 30 days. You've just experienced an AI hallucination.

There’s no magic solution to eliminating hallucinations, but there are ways to circumvent them in your own AI agents. One of the most notable hallucination prevention techniques is called “grounding.” Let’s explore what grounding is in the context of hallucinations in AI.

Hallucinations: When AI gets a little too creative

Let's start with the basics: What are AI hallucinations? Simply put, hallucinations occur when an AI model generates false or misleading information and presents it as fact.

It's like that one friend who always embellishes their stories at parties – except this friend is a highly sophisticated language model with access to vast amounts of information.

Hallucinations are more common than you might think. According to recent studies, AI hallucinations can occur in anywhere from 3% to 10% of responses generated by large language models. Some estimates even suggest that chatbots may hallucinate up to 27% of the time.

So why do these hallucinations happen? It comes down to how the AI models are trained.

AI models learn by analyzing massive amounts of data and identifying patterns. Sometimes, in their eagerness to provide a coherent response, they may fill in gaps with information that seems plausible but isn't necessarily true. It's like playing a high-stakes game of Mad Libs, where the AI is desperately trying to complete the sentence.

Grounding: AI's reality check

Now that we've established the problem, let's talk more about one of the solutions: grounding. In the world of AI, grounding is like giving your model a solid foundation in reality. It's the process of connecting AI outputs to verifiable sources of information, making it more likely that the model's responses are anchored in fact rather than fiction.

Think of grounding as putting a leash on your AI agent’s imagination. It's not about stifling creativity — it's about channeling that creativity in a way that's actually useful and accurate. After all, we want our AI assistants to be more like helpful librarians and less like unreliable narrators.

Let's explore some of the most effective grounding techniques that are helping to tame the wild imaginations of AI models.

Retrieval Augmented Generation (RAG)

RAG is like an AI model’s personal research assistant. Instead of relying solely on its pre-trained knowledge, RAG allows the model to pull information from external, verified sources in real-time.

Here's how it works:

  1. The AI receives a query
  2. It searches through a curated database of reliable information
  3. It retrieves relevant facts and context
  4. It uses this information to generate a response
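
Here's a minimal sketch of that loop in Python. The tiny knowledge base, the naive keyword retrieval, and the call_llm placeholder are all illustrative assumptions; a production agent would use an embedding model, a vector store, and a real language model client in their place.

```python
# A toy version of the retrieve-then-generate loop described above.
# The knowledge base and the scoring are deliberately simplistic; swap in an
# embedding model and a vector store for anything beyond a demo.

KNOWLEDGE_BASE = [
    "Returns are accepted within 30 days of purchase with a receipt.",
    "Shipping is free on orders over $50.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Steps 2 and 3: rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for whatever language model client you actually use."""
    raise NotImplementedError("plug in your LLM of choice here")

def answer_with_rag(query: str) -> str:
    """Step 4: generate a response grounded only in the retrieved context."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using ONLY the context below. If the answer is not there, "
        "say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```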

The beauty of RAG is that it significantly reduces the chance of hallucination by grounding the AI's responses in up-to-date, accurate information. It's like having a fact-checker on speed dial.

When an AI agent uses RAG, it’s able to cross-verify information across multiple reliable sources and use various verification mechanisms to ensure the accuracy of its responses.

Prompt engineering

In the world of AI, how you ask a question can make a huge difference in the output. Prompt engineering is the practice of optimizing a prompt (input instructions and context) to achieve a desired outcome. Ideally, good prompts produce accurate and relevant outputs. When giving instructions to your AI agent, be mindful of how you craft your directions, just as you would be when giving an employee feedback: it should be constructive and provide clear direction on how to move forward.

Here are some key strategies for prompt engineering your AI agent, all of which come together in the example prompt after this list:

  • Be specific and clear in your requests: When you're communicating with AI, vague prompts can lead to ambiguous or incorrect outputs. By being specific and clear in your request, you provide the AI with a well-defined task. For example, instead of asking your AI agent to "Be nice to customers," you could coach it with more specific feedback, like "Ensure you greet each customer politely before responding to their query." This reduces the room for interpretation and helps the AI act on your feedback accurately.
  • Provide context and examples: Contextual information can significantly enhance the quality of AI responses. In the same way you train employees on your company’s goals and the expectations of their role, provide your AI with context. Including relevant background details or examples within your prompt can guide the AI to generate more accurate and contextually appropriate answers. For instance, if you need the AI to ask for customer information to complete a request, you might include examples of polite responses that specify the tone and style you want.
  • Use constraints to limit the AI's potential responses: Constraints can help narrow down the AI's output to the most relevant information. By setting boundaries, you can prevent the AI from generating off-topic or incorrect responses. For example, you wouldn't want your AI agent making small talk with customers instead of giving helpful answers. To prevent this, clearly articulate to your AI agent what is and isn't appropriate, then run pre- and post-validation to make sure it isn't violating your policies (see the validation sketch at the end of this section). This focused approach ensures that the AI stays within the desired scope.
  • Provide escape hatch instructions: Tell the AI agent how to act when it has no relevant information to draw on. If it can't find anything relevant to an inquiry, instruct it to ask a clarifying question or connect the customer with a human agent.
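
Put together, those strategies might look something like the following system prompt for a customer service AI agent. The policy details and wording here are illustrative assumptions, not a template from any particular platform.

```python
# An illustrative system prompt that combines the four strategies above.
# The policy details (30-day returns, allowed topics) are made-up examples.

SYSTEM_PROMPT = """
You are a customer service agent for an online retailer.

Be specific and clear:
- Greet each customer politely before responding to their query.

Provide context and examples:
- Our return policy allows returns within 30 days of purchase with a receipt.
- Example of the expected tone:
    Customer: "Where is my order?"
    Agent: "Happy to help! Could you share your order number so I can check?"

Use constraints:
- Only answer questions about orders, shipping, and returns.
- Do not make small talk or discuss topics outside company policy.

Escape hatch:
- If you cannot find relevant information for a request, ask a clarifying
  question or offer to connect the customer with a human agent. Never guess.
""".strip()
```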

By optimizing our prompts, we can help steer the AI away from hallucinations and towards factual responses.
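
The pre- and post-validation mentioned in the constraints strategy can be as lightweight as a guardrail check that runs on a drafted answer before it reaches the customer. Here is a minimal sketch; the banned topics and the return-window check are hypothetical examples, not a complete policy engine.

```python
import re

# Hypothetical post-validation: check a drafted answer against simple policies
# before it is sent to the customer. Real guardrails are usually richer
# (classifiers, citation checks), but the shape is the same.

BANNED_TOPICS = ("legal advice", "medical advice")
RETURN_WINDOW_DAYS = 30  # the documented policy from the earlier example

def validate_response(draft: str) -> tuple[bool, str]:
    """Return (is_valid, reason); reject drafts that violate policy."""
    lowered = draft.lower()
    for topic in BANNED_TOPICS:
        if topic in lowered:
            return False, f"response strays into {topic}"
    # Flag any return window that doesn't match the documented policy.
    for days in re.findall(r"(\d+)\s*days?", lowered):
        if int(days) != RETURN_WINDOW_DAYS:
            return False, f"claims a {days}-day window; policy says {RETURN_WINDOW_DAYS}"
    return True, "ok"

print(validate_response("You can return any item within 90 days!"))
# -> (False, 'claims a 90-day window; policy says 30')
```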

The future of grounded AI: A more trustworthy AI agent

As we continue to refine these grounding techniques, we're moving towards a future where AI can be a more reliable and trustworthy partner in our daily lives and business operations. In fact, despite concerns about AI usage, 65% of consumers still trust businesses that employ AI technology.

But let's not get ahead of ourselves. While we're making great strides in taming AI hallucinations, it's important to remember that no system is perfect.

AI hallucinations are a real challenge, but with clever grounding techniques, we can keep our artificial friends firmly rooted in reality. It's an ongoing process of refinement and improvement, but the potential benefits are enormous.

The guide to AI hallucinations

Go deeper. Discover more tips for prevention and get actionable insight on how to quickly identify and correct them.

Get the guide