What if you’re a student and you are falsely accused of plagiarism using ChatGPT? Despite being proven innocent, you remain under suspicion, writes Pepijn Stoop. It happened to three of his fellow students. “Information on what students can do in case of AI plagiarism accusations is completely lacking in plagiarism regulations and e-modules.”
Imagine the following scenario. You get your assignment back. Although you thought you did well, you are shocked to see that you received zero points for a heavily weighted question. You scroll to the teacher’s comments with a pounding heart. Then comes the second shock: the teacher accuses you of plagiarism using ChatGPT. But you didn’t do it. What follows is a back-and-forth of e-mails with the professor to prove your innocence, but how do you do that?
This happened to three students in the AI master’s course ML1. Given the sensitivity of the matter, they wish to remain anonymous. In conversations with me, they explained that the professor accused them, on several questions and without involving the Examination Board, of violating UvA plagiarism rules. They all had to prove their innocence. One succeeded; the others were awarded a minimal number of points and remained “under suspicion.” In an announcement published later, the professor claimed that a “significant” number of answers had been copied “directly” from ChatGPT.
These students are undoubtedly not the only ones to be accused of plagiarism since AI became a fixture at the UvA. The university’s education policy discourages the use of AI, but the UvA does not actively detect it. The option to detect AI content with the plagiarism checker Turnitin has been disabled since August 2023 due to concerns about its “reliability.” In an e-module for lecturers, the UvA explains that they must take suspicions of AI plagiarism to the Examination Board and that mere “suspicions” are insufficient evidence of fraud.
In the case of the master’s students, such suspicions do seem to have been used as “evidence.” This is problematic, because research shows that students who are non-native speakers of English, as is the case for the three master’s students, are more likely to be falsely accused of AI plagiarism. These students tend to have a smaller English vocabulary and use less complex syntax and grammar than native speakers, which can affect the flow and tone of their writing and give it the appearance of AI-generated text. In addition, short academic texts with a fixed structure, such as the master’s students’ answers, are more likely to be incorrectly attributed to AI.
Students can thus be falsely accused on the basis of unconscious biases, with dire consequences. In the announcement, the instructor of ML1 threatened to go to the Examination Board for subsequent assignments. The possible consequence? Failing the course. It remains to be seen whether the Examination Board could assess such work completely objectively: even the language experts in the research cited above regularly judged texts incorrectly.
Although the UvA considers automatic detection of AI content “unreliable,” nowhere does the university explicitly state when teachers’ suspicions might be unreliable. Information on what students can do in the event of AI plagiarism accusations is completely lacking in the plagiarism regulations and e-modules.
Until this is communicated to faculty and students, I believe all faculty should stop making potentially false accusations of AI plagiarism and be made aware of possible biases. No student should have to prove themselves against an algorithm or a teacher who checks like a machine, prejudice or not.