posts@fersebas: cat ai-proof-homework.md



How to assign student work that survives AI and remote learning

I love learning and education as much as I love software. That's why I work as a part-time school teacher.

Recently I came across the following scenarios:

  1. In one of my coding classes, a guy sent me JavaScript code with comments, functions, recursion, and even HTML DOM manipulation (a hypothetical sketch of the gap follows this list).

    At the time, we had only dealt with variables and printing to the screen.

  2. Another one, in my history class (I love history too), confidently answered an impossible question:

    "Which pope commissioned Shakespeare to paint the Sistine Chapel?"

    Yeah. I had been suspicious of this guy for a while.

  3. On a test, a couple of kids in particular answered the questions with some similarly articulate responses:

    “Frankly, it’s a nuanced topic.” “Now, let’s see how ... impacted ... in a catastrophic way.” “Now, let me go over ... and explain how ...”

    I know these kids well. They’re the ones who say ‘I don’t know’ when I call on them at random in class.
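
To give you an idea of the gap in that first scenario, here is a hypothetical sketch (not the student's actual code) contrasting what we had covered in class with the kind of thing he turned in:

```js
// What the class had covered so far: variables and printing to the screen.
let studentName = "Ana"; // hypothetical example value
console.log("Hello, " + studentName);

// Roughly the kind of thing the submission contained: a commented,
// recursive function plus HTML DOM manipulation we had never discussed.
function factorial(n) {
  // Base case: 0! and 1! are both 1
  if (n <= 1) return 1;
  // Recursive case: n! = n * (n - 1)!
  return n * factorial(n - 1);
}

// Writes the result into the page (assumes an element with id "result" exists).
document.getElementById("result").textContent = "5! = " + factorial(5);
```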

I bet you spotted the AI involvement faster than a kid could ask, ‘Could you repeat the question?’

I confronted some. Others were not worth it.

It was fun, to say the least.

Still, as a learning enthusiast, a software professional, and a teacher, I find that this raises concerns in all three of those areas.

The quiet rise of AI cheating

Naturally, the first thought that crossed my mind was, “AI will make cheating easier”—a common concern shared by many teachers today.

From conversations with colleagues at both school and university levels, it’s clear that this challenge is becoming increasingly frequent.

Yet, despite this growing presence, many teachers remain unaware when students use large language models (LLMs).

Teachers of core subjects that depend heavily on writing—whether it’s poems, essays, or summaries—are the most affected. This includes language arts, social studies, and history.

Next in line are subjects where questions have only one correct answer, like science.

Math has remained relatively safe so far due to the nature of large language models, but I wouldn’t count on that lasting for long.

Rethinking cheating prevention

Because of the above, I found myself asking, “How can I make it harder for my students to cheat?”

My first idea was simply to look for things that AI can't do (or can't do well) yet.

One solution I found was increasing the number of oral pop quizzes. In the age of Work From Home, you can’t tell which other tabs are open (Wikipedia, Google, ChatGPT), so asking questions by surprise could give the teacher a (very slight) edge.

Another solution was to have students do more "artistic" work. Drawing, to be exact. AI is currently mainly text-based (LLMs). Yet, there are also diffusion models, which can create images from prompts. So far, we can still tell when an image was "AI generated", but the gap between real and AI-generated images is closing quickly.

Perhaps, once that happens, we should have students build models out of cardboard or clay.

None of those approaches proved very effective, to say the least, so I reached out to a teacher I know for an outside perspective.

He told me he made his homework “super hard” by outlining a long list of detailed criteria—more than 20 bullet points, in fact. He believes this makes it harder for his students to cheat.

But in reality, he’s making their lives easier. Those neat, specific bullet points become perfect prompts to copy and paste. So the key isn’t more questions, greater difficulty, or increased precision.

After all of that, I realized I was attempting to solve the wrong problem.

A new kind of student work

Our current methods—essays, multiple-choice tests, and short-answer questions—are no longer effective, let alone efficient.

AI is a tool, just as computers and calculators are.

Tools are meant to replace labor.

Almost every adult nowadays uses a computer or calculator every single day.

We don't shame accountants for not adding every digit by hand. If one did, we would think that person had lost their mind.

NASA used to have PhDs calculate, by hand, the numbers that put Apollo 11 on the Moon. Now those PhDs have computers crunch the numbers and focus on something else.

In the age of AI, there needs to be a shift from labor-based student work toward tinkering-based student work.

The real question is not how to prevent AI use, but how to properly evaluate and grade it.

AI is a tool, and tools are as good as what you do with them.

So I guess we should do the same thing NASA did: give our engineers (students) computers (free use of AI) and evaluate them on how cleverly they use them.

Where do we go from here?

Learning happens through doing and repetition, so student work will always be necessary.

How can we do it effectively in our time?

Come up with ambiguous work that requires students to ask questions back to you, work that makes them formulate the problem instead of you stating it for them, work that gets their own opinions involved.

I will continue exploring this topic and perhaps post a future article on some rudimentary methods to evaluate thinking in the age of AI.


posts@fersebas: