AI Detection as a Transparency Tool Can Help Students Think for Themselves

By Max Spero, CEO and Co-Founder of Pangram, and Bradley Emi, fellow Stanford alum

Reframing AI detection in student writing as a transparency tool opens fertile ground for helping students think for themselves.

As long as there have been assignments, some students have tried to cut corners or outright cheat. Meanwhile, teachers have tried to stay one step ahead of them, keeping up with the newest trends so they can remain confident their students are learning to think for themselves.

Viewing this balancing act as a cat-and-mouse game can be shortsighted, though, and may miss a valuable opportunity. Easy access to large language models (LLMs) is driving both increased use of AI tools by students and increased use of AI detectors.

It is exceedingly difficult to inspire students to express their own thinking when ChatGPT, Claude, and Gemini are integrated into document tools or available for free on the web. Nonetheless, we are convinced that changing the mindset about how LLMs and AI detection work together opens up real instructional potential.

These thoughts came together very clearly for our team after a six-week pilot at a Southern California school where teachers experimented with our AI detection technology. Originally, our intent with the pilot was to improve the product interface, but instead we learned something far more compelling and helpful, and we’re sharing it here.

Lesson 1: Think about AI detection differently

We often hear from teachers that they tend to develop an ear for each student’s writing pretty early in the school year. It’s usually not difficult to tell if a student has had some kind of help from AI, whether it’s because they are suddenly using vocabulary outside their usual range, writing in a different voice or structure, or just using a lot of bland phrases common to AI. AI detection can alert teachers that something is up, but so can a teacher’s instinct.

Telling students that AI detection software is being used and even showing them examples of it working can be enough to stop most students from taking shortcuts with ChatGPT. Gather a few sample texts to put through the detector in a whole-class exercise. Turn it into a game and ask students to vote on which samples they think are AI-generated and which they think are by humans, then run everything through an AI detector to see who’s right.

One participant in our pilot said, “Once I caught [students using AI], I noticed a difference. The amount of students that were trying to get away with it completely disappeared. And then their writing actually improved because…they started putting in more effort and turning out better quality work because they didn’t have that crutch anymore.”

Lesson 2: Use the uniformity of AI writing as a counterpoint to original thought

AI detection works because large language models don’t write the way humans typically do, arranging ideas and then expressing them. LLMs are probability machines, using statistical analysis to guess the next most likely word. This results in quirks that are distinctly unlike human writing. LLMs are getting more sophisticated, but according to our engineers and other researchers, the models are developing clear styles and “voices” that make them even easier to detect.
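
To make that concrete, here is a minimal sketch of next-word prediction using the small open-source GPT-2 model and Hugging Face’s transformers library. It only illustrates how an LLM ranks candidate next words by probability; it is not how our detector, or any production detector, works.

```python
# Minimal sketch: how an LLM assigns probabilities to the next word.
# Uses the small open-source GPT-2 model via Hugging Face's transformers
# library; an illustration only, not a detection method.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The students turned in their"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token, given the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>12}  {prob.item():.3f}")
```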

LLMs also tend to use recognizable flowery words and phrases like “in the realm,” “paradigm shift,” or “a tapestry of” far more often than human writers. Likewise, the “not just x, but y” construction appears frequently, as do bland syntax, stock phrasing, and repeated sentence structures. AI writing tends to devolve into lists of general examples rather than personal anecdotes or specific details. Human writers sprinkle in sentences of varying length and sophistication and make idiosyncratic errors, neither of which is often found in AI writing.
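
As a classroom-style illustration of how uniform prose leaves measurable traces, the toy function below counts a few of the stock phrases mentioned above and measures how much sentence length varies. The phrase list and the measurements are our own hypothetical examples for demonstration; real detectors are far more sophisticated than this.

```python
# Toy heuristic inspired by the telltale signs above: count stock AI
# phrases and measure sentence-length variety. A demonstration only,
# not how Pangram or any serious detector works.
import re
import statistics

STOCK_PHRASES = ["in the realm", "paradigm shift", "a tapestry of", "not just"]

def uniformity_report(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    lowered = text.lower()
    return {
        "stock_phrase_hits": sum(lowered.count(p) for p in STOCK_PHRASES),
        # Low variation in sentence length suggests uniform, list-like prose.
        "sentence_length_stdev": statistics.pstdev(lengths) if lengths else 0.0,
    }

sample = ("In the realm of education, AI is not just a tool but a paradigm "
          "shift. It offers a tapestry of possibilities. It changes how we learn.")
print(uniformity_report(sample))
```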

The peccadillos of AI writing aren’t just a sign that an LLM wrote a paper, however. They are also bad habits we want students to avoid when writing on their own. Try giving students a segment of AI-generated text and asking them to rewrite it so it no longer triggers an AI detector. To do so, they’ll need to edit the text to include more interesting and varied vocabulary and diverse sentence structures that pull the reader through. They’ll also need to add examples from their own lived experience, imbuing the writing with curiosity and individual humanity.

Just as peer editing is an excellent learning experience, so too is seeing this process play out with an AI detector.

Lesson 3: Treat AI detection as a conversation opener, not a case closer

One great benefit of finding AI in a paper is that it allows teachers to begin a conversation with a student rather than end one with a judgment call. Instead of approaching a student to say you think they cheated, explain how the software flagged a section of their work as AI-generated. From there, the conversation can turn to why the student thinks that happened. If they admit to using AI, the questions become about why they sought help. Perhaps an exercise in comparing their AI-generated text with a new version written on their own is in order.

Often, students resort to AI because it offers a quick fix when they are short on time. Similarly, students for whom English is a second language might use AI to compensate for a lack of fluency. Without conceding that taking AI shortcuts on an assignment or test is acceptable, shift the conversation toward something more positive and productive. Detecting AI’s presence then becomes less about ethics and punishment and more about helping students develop the ability to express their original thoughts.

When we surveyed the teachers in the pilot about how students and teachers benefit from reframing AI detection as AI transparency, one teacher summed up the concept perfectly: “I think the biggest value in a tool like this is that it helps ensure that students are the ones doing the thinking work. It’s a way to open up the conversation with students about how to ethically use AI in a way that doesn’t rob them of their chance to learn, but as an assistant to help them learn better.”
