
6 tips to detect AI-generated student work


Although AI has some educators worried, it is possible to distinguish AI-generated student work from human-generated work


As the school year starts, excitement and anxiety about generative AI have K-12 teachers and university faculty collectively focused on these new tools and their potential impact on instruction. A recent professional development meeting about AI at a midwestern university set a new attendance record for such events.

There is no sure-fire way to identify text as AI-generated, and some of the early tools offering to do so have been shown to be only somewhat effective, or have been withdrawn from public use for not meeting their developers' standards. A number of AI detectors are available, including CopyLeaks, Content at Scale, and GPTZero, but most note that it is important to consider their results in conjunction with a conversation with the student involved. Asking a student to explain a complex or confusing portion of a submission may be more effective than any of the AI detectors.

Instructors at all levels should consider the following criteria to help them determine whether a text-based submission was written by a student or generated by AI (a simple, illustrative sketch showing how a few of these signals could be checked appears after the list):

1. Look for typos. AI-generated text tends not to include typos, so the small errors that make writing human are often a sign that a submission was created by a person.

2. A lack of personal experience and reliance on generalized examples are another potential sign of AI-generated writing. For instance, “My family went to the beach in the car” is more likely to be AI-generated than “Mom, Betty, and Rose went to the 3rd Street beach to swim.”

3. AI-generated text is produced by finding patterns in large samples of text. As a result, very common words such as “the,” “it,” and “is,” along with stock words and phrases, tend to be over-represented in AI-generated submissions.

4. Instructors should look for unusual or overly complex phrases that a student would not normally employ. A high school student writing about a “lacuna” in his school records, for example, might be a sign the paper was AI-generated.

5. Inconsistent style, tone, or tense changes may be a sign of AI-derived material. Inaccurate citations are also common in AI-generated papers: the format is correct, but the author, title, and journal information have simply been thrown together and do not correspond to an actual article. These and other kinds of inaccurate information from a generative AI tool are often called hallucinations.

6. Current generative AI tools tend to be trained on materials developed no later than 2021, so text that references events from 2022 or later is less likely to be AI-generated. Of course, this will continue to change as AI engines are improved and retrained.
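To make the word-frequency and date signals above more concrete, here is a minimal, illustrative Python sketch. It is not a real detector and is unrelated to any of the tools named earlier; the COMMON_WORDS list, the rough_signals function, and the sample text are hypothetical choices made purely for demonstration, and its output should never be treated as evidence on its own.

# Illustrative sketch only: a toy heuristic inspired by tips 1, 3, and 6 above.
# It is NOT a reliable AI detector; the word list and checks are made up
# purely for demonstration.
import re
from collections import Counter

COMMON_WORDS = {"the", "it", "is", "a", "an", "of", "and", "to", "in"}

def rough_signals(text: str) -> dict:
    """Return a few crude, advisory-only signals for a piece of text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1

    # Tip 3: share of very common function words (blander prose scores higher).
    common_share = sum(counts[w] for w in COMMON_WORDS) / total

    # Tip 6: does the text mention a year after 2021? Older training data rarely does.
    mentions_recent_year = bool(re.search(r"\b202[2-9]\b|\b20[3-9]\d\b", text))

    # Tip 1: a very crude stand-in for "typos" -- letter runs with no vowels at all.
    odd_tokens = [w for w in words if not re.search(r"[aeiouy]", w)]

    return {
        "common_word_share": round(common_share, 3),
        "mentions_post_2021_year": mentions_recent_year,
        "odd_token_count": len(odd_tokens),
    }

if __name__ == "__main__":
    sample = "Mom, Betty, and Rose went to the 3rd Street beach to swim in 2023."
    print(rough_signals(sample))

Run on a short passage, the sketch simply reports the share of very common words, whether any year after 2021 is mentioned, and a crude count of odd tokens. As with the commercial detectors, such signals should only prompt a conversation with the student, never serve as a verdict.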

This article is not intended to dissuade instructors from using AI detection software, but to encourage awareness of the limits of such tools.

In the end, as with any other student issue, speaking with the student is the best way to determine whether they are submitting their own work or that of a machine. One potential method is to randomly ask one or two students per assignment to orally explain how they developed their submissions. This oral exam method might go far in encouraging students to be prepared to defend their own work rather than rely on AI.



Steven M. Baule, Ed.D., Ph.D.

