[Teaching Problem Solving with AI] PL Reflection - Reducing Hallucinations

What additional strategies for reducing AI hallucinations do you want to highlight for your students?

This discussion question is from the Self-Paced Professional Learning for Teaching Problem Solving with AI.

While not a strategy, I would like to discuss the term hallucination. This term gives AI a personification… I think that is the right term at least :slight_smile:

The chatbot is really just trying to answer a question by matching patterns in its data as closely as it can. When it can’t, it can get stuck and report false information. It didn’t ‘think’ that data up; it is just answering a question.
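For anyone who wants to make that point concrete with students who code, here is a minimal, purely hypothetical sketch of the idea. The tiny “training data” and the `toy_answer` function are invented for illustration and are nothing like how a real chatbot is built; the point is only that a system built to always produce a plausible-looking answer will confidently answer even questions it has never seen.

```python
import random

# Tiny "training data": question fragments mapped to answers the toy
# model has "seen" before. Purely hypothetical, for illustration only.
seen = {
    "capital of france": ["Paris", "Paris", "Paris"],
    "capital of italy": ["Rome", "Rome", "Milan"],  # noisy data
}

def toy_answer(question: str) -> str:
    """Always return *something* plausible-looking, never 'I don't know'."""
    key = question.lower().strip("?")
    if key in seen:
        # Known question: pick the most common answer (usually right).
        return max(set(seen[key]), key=seen[key].count)
    # Never seen this question: find the closest-looking known question
    # (most shared words) and answer from it anyway. This is the
    # "confident but wrong" failure mode we call a hallucination.
    closest = max(seen, key=lambda k: len(set(k.split()) & set(key.split())))
    return random.choice(seen[closest])

print(toy_answer("capital of france"))  # "Paris" -- correct
print(toy_answer("capital of spain"))   # confidently wrong -- it never says "I don't know"
```

The toy model never “makes something up” on purpose; it just follows its matching rule to the end, which is exactly why the output looks confident even when it is wrong.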

This really shows the importance of double-checking everything the AI reports to us: we do not have to be experts in the data it gives us, but we must be jacks-of-all-trades who know a little bit about everything so we can call out an AI bluff.

I think more information helps with the AI hallucination problem, but I think the larger problem is students not realizing or understanding that the answers they receive from AI may be wrong. Do any of you give prompts that you know the AI will get wrong and expect the students to catch the error or research it? Some of my students don’t even question calculator answers when we do accounting. They can’t comprehend that they make mistakes, so how would they understand that AI can be incorrect?