My experiment: Letting students use AI for CSD Unit 3 assignments

Background

I’ve discovered this year that some of my high school students are using ChatGPT to generate code for assignments in CSD’s Unit 3, Interactive Animations and Games. For the most part, it’s easy to spot, since ChatGPT leans on syntax we never teach in Game Lab. (For example, keyDown(LEFT_ARROW) instead of keyDown("left").)
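
For anyone who doesn’t teach with Game Lab, here is a rough sketch of the tell. The first version uses the p5.js-style constant ChatGPT tends to reach for; the second is the string-argument style we actually teach. (This is my own illustration, not a student’s submission.)

    // ChatGPT-flavored: p5.js-style constant, never covered in Unit 3
    if (keyDown(LEFT_ARROW)) {
      player.x = player.x - 5;
    }

    // Game Lab as taught in CSD Unit 3: string key names
    if (keyDown("left")) {
      player.x = player.x - 5;
    }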

So this got me thinking about questions I’ve been avoiding for a while:

  • If AI is the way of the future (or present?), then should I be teaching students to use it in my coding courses?
  • If you can’t code something yourself, then are you really equipped to evaluate an LLM’s output?
  • Therefore, does allowing AI code assistants in my introductory coding courses short-circuit student learning?
  • What would happen if I gave them the green light?

My Experiment

I’ve decided to pilot unrestricted AI use with a couple of my students. One is a strong student who, so far, has been able to code everything himself. The other is a struggling student who hasn’t turned anything in all trimester and is generally unmotivated.

I’ve created a sandbox for these students using magicschool.ai. This allows me to read the conversations they’re having with the code assistant LLM there.

Will the struggling student get better? Will my strong student become less capable? I hope to find out.

Initial Observations

After just a couple of days, here’s what I’ve seen:

  • My strong student was able to use it successfully to complete an assessment level in one of the lessons (I think it was Lesson 19, on velocity; a rough sketch of that kind of code follows this list). The code, as mentioned above, looked fishy, but the functionality of the program was all there. He had to do some hand editing to get the behavior to be exactly what he wanted.
  • My struggling student has become much more motivated. He’s far more willing to create with the assistance of the AI. Without it, he’s terminally distracted.
  • The LLM seems most helpful for getting a creative project started. It’s harder to get it to produce something that meets the strict requirements of a rubric.
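
For context, here is roughly the kind of program the velocity lesson asks for, written in plain Game Lab style (again, my own minimal sketch, not the student’s code):

    // Minimal Game Lab sketch in the style of the Unit 3 velocity lessons
    var ball = createSprite(200, 200);
    ball.velocityX = 2; // drift right 2 pixels per frame

    function draw() {
      background("white");
      // arrow keys flip the direction by changing the velocity
      if (keyDown("left")) {
        ball.velocityX = -2;
      }
      if (keyDown("right")) {
        ball.velocityX = 2;
      }
      drawSprites();
    }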

I don’t yet have enough data to remark on the effects on student learning. I will follow up on this post in a couple of weeks.

My prediction is that I will ultimately allow AI for some lessons and not for others. I believe the middle path will turn out to be the right one.

Your Experiences?

  • Have any of you done similar experimentation in your classes?
  • What do you believe has been the effect on student learning?

Follow-up

  • AI doesn’t seem to short-circuit student learning, as long as students already have a foundation. If they have no idea what they’re asking the LLM to produce, they stay stuck at that point of no knowledge.
  • Going forward, I will allow AI coding assistants only for students who earn it by demonstrating a basic foundation of coding ability.