[Teaching Problem Solving with AI] PL Reflection - Reducing Hallucinations

What additional strategies for reducing AI hallucinations do you want to highlight for your students?

This discussion question is from the Self-Paced Professional Learning for Teaching Problem Solving with AI.

While not a strategy, I would like to discuss the term hallucination itself. This term gives AI a personification…I think that is the right term, at least :slight_smile:

The chatbot is really just trying to answer a question by matching patterns in its data as closely as it can. When it can't find a good match, it can still report false information. It didn't 'think' that data up; it is just answering the question as best it can.
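To make this concrete for students, here is a minimal Python sketch. It is a toy illustration, not how a real chatbot actually works, and the word lists and probabilities are made up for the example. The point it shows: the model always picks the most likely continuation, so it confidently answers even when its data gives it no real basis to do so.

```python
# Toy "language model": for each prompt, a table of next-word probabilities.
# (Hypothetical data for illustration; a real model learns these patterns
# from enormous amounts of text rather than a hand-written table.)
next_word_probs = {
    # Well supported by data: one answer dominates.
    "the capital of France is": {"Paris": 0.95, "Lyon": 0.05},
    # A made-up country the "model" has no good data about, so its
    # probabilities are little more than uniform guesses.
    "the capital of Freedonia is": {"Paris": 0.34, "Springfield": 0.33, "Oz": 0.33},
}

def answer(prompt: str) -> str:
    """Return the most probable next word -- note it never says 'I don't know'."""
    probs = next_word_probs[prompt]
    return max(probs, key=probs.get)

print(answer("the capital of France is"))     # "Paris" -- grounded in data
print(answer("the capital of Freedonia is"))  # a confident guess: a 'hallucination'
```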

This framing really shows the importance of double-checking everything the AI reports to us. We do not have to be experts in the topic it is covering, but we do need to be jacks-of-all-trades who know enough to call out an AI bluff.