
How Can We Counteract Generative AI’s Hallucinations?


Karim Lakhani, Harvard Business School Professor and Chair of the D^3 (Digital, Data, Design) Institute at Harvard, shares his expertise on why ChatGPT and other generative AI tools hallucinate, as well as how to prompt these tools to do better.

Because ChatGPT generates its responses from patterns in its training data and has no built-in fact-checking mechanism, it can “hallucinate” responses that are factually incorrect or misleading.

Through careful prompting, users can take several steps to minimize hallucinations and misinformation when interacting with ChatGPT or other generative AI tools:

  1. Request sources or evidence. When asking for factual information, specifically request reliable sources or evidence to support the response. For example, you can ask, “What are the sources for that information?” or “Can you provide evidence to support your answer?” This can encourage the model to provide more reliable and verifiable information.
  2. Use multiple prompts or iterative refinement. If the initial response from the model seems dubious or insufficient, try rephrasing or providing additional prompts to get a more accurate or comprehensive answer. Iterative refinement of the conversation can help in obtaining better results; the code sketch after this list illustrates this pattern.
  3. Ask for explanations or reasoning. Instead of simply asking for a direct answer, ask the model to explain its reasoning or provide a step-by-step explanation. This can help uncover any potential flaws or biases in the generated response.
  4. Double-check information independently. Don’t solely rely on the model’s responses. Take the responsibility to fact-check and verify the information independently using trusted sources or references. Cross-referencing information can help identify and correct any misinformation generated by the model.
  5. Address biases by seeking multiple perspectives. Generative AI models are ultimately human-made, and therefore reflect pre-existing biases that may lead to unintended impacts. Instead of asking, “Is this response biased?,” we can assume that the answer is “Yes.” This calls for ethical consideration in how we prompt these tools and use their outputs. To evaluate generated responses for accuracy and fairness, we must become increasingly aware of our blind spots, ask which perspectives may not be represented, and both value and seek out multiple perspectives.
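To make the first three techniques concrete, here is a minimal sketch of how they might look through the OpenAI Python SDK. This is our assumption for illustration only; the same prompting patterns work just as well in the chat interface, and the model name and example question are placeholders.

```python
# A minimal sketch of techniques 1-3, assuming the OpenAI Python SDK
# (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable.
# The model name "gpt-4o" and the question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Techniques 1 and 3: request sources and step-by-step reasoning up front.
messages = [
    {
        "role": "user",
        "content": (
            "What were the main causes of the 2008 financial crisis? "
            "Cite reliable sources for each claim and explain your "
            "reasoning step by step."
        ),
    }
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content
print(answer)

# Technique 2: iterative refinement. Feed the answer back and probe its
# weak spots instead of accepting the first response at face value.
messages.append({"role": "assistant", "content": answer})
messages.append(
    {
        "role": "user",
        "content": (
            "Which of those claims are you least certain about, and what "
            "evidence would I need to verify them independently?"
        ),
    }
)
follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)
```

Keep in mind that any sources the model cites still need to be checked by hand (technique 4); models can fabricate plausible-looking citations just as readily as any other text.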

Remember that even with careful prompting, generative AI models can still produce inaccurate or misleading information. It’s essential to exercise critical thinking, ask questions, and seek credible sources for important or sensitive matters.

For more expert insights on this topic, watch a sneak peek inside a Harvard Business School classroom, where Karim Lakhani guides Harvard faculty on generative AI practices in teaching and learning.

Please visit our Generative AI Observatory to join our future conversations on generative AI topics.

Join our Discord for future generative AI event updates and community connections.


Join Our Community

Ready to dive deeper with the Digital Data Design Institute at Harvard? Subscribe to our newsletter, contribute to the conversation, and begin to invent the future for yourself, your business, and society as a whole.