In Lesson 2, Level 4, when students ask the AI how many students responded to the survey, they are all getting wildly different answers. Just wondering if this is an example of AI hallucination, or if there is something obvious I am missing. Thanks!
Update: OK, my suspicion is that the AI is taking a random sample of the dataset and treating that sample size as the total number of students, which would explain why each student gets a different answer. I might have missed a note about this in the lesson plan, though, and my suspicion might be wrong.
Ah yep, that’s unit 6 of the AIF course.
I agree that it does not accurately count the number of records. When I did this lesson, I needed a back-and-forth conversation over several prompts to get the AI bot to give me information that was both accurate and useful. In Unit 2 of AIF, students discuss AI hallucinations and learn that we can't always take what AI outputs as truth. This feels like an example of that: we can see how the AI could be useful, but (at least for this level of AI) there still needs to be a human in the loop to evaluate and re-request. That said, even with re-prompting I have yet to get it to compute the right number, or come anywhere close to the right tallies.
Given that the goal of this lesson is to use AI to accurately analyze the data, that part is a bit of a letdown! I wonder if there's another way to get it to accurately synthesize the info.
I ended up telling my students to pretend that the chatbot is correct and that whatever sample size it presented was accurate. I told them that would let us explore the different ways an AI could be used, instead of just hunting for one best answer for the entire class.