ChatGPT has become an integral part of the average student’s life. Whether it’s writing summaries or creating a study schedule, the chatbot may seem like a helpful solution. But is this AI program really such an innocent tool, or will an entire generation of students soon graduate having forgotten how to think critically?
A little helper for brainstorming, writing or summarising: over the past few years, generative AI programmes such as ChatGPT have rapidly become part of many students’ lives. Figures from Statistics Netherlands show that young and highly educated people in particular make eager use of the chatbot, and last summer a UvA student was even sanctioned for using ChatGPT.
Whether all this is desirable is highly debatable. If students no longer bother to read through an article themselves in search of relevant information, or stop writing their own essays and rely on a chatbot instead, doesn’t the degree programme defeat its purpose? After all, how well do you truly understand the material when you outsource a large part of the thinking to ChatGPT? And perhaps most importantly at a university: does all this AI use hinder the ability to think independently and critically?
Brain activity
Giving a clear-cut answer to these questions is difficult, as research in this area is still in its infancy, UvA brain scientist Iris Groen explains. Nevertheless, more and more studies are emerging that attempt to determine the impact of AI on our cognitive functions. “For example, last June a study, not yet peer-reviewed, was published comparing the brain activity of two groups of students. One group used ChatGPT while writing an essay, and the other did not. The researchers claim to have observed a difference in brain activity,” Groen says. According to the study, the more participants relied on AI, the fewer neural connections were formed in the brain.
Although she emphasises that such research into brain activity is still preliminary, and that many additional studies are needed before firm conclusions can be drawn, the educational outcomes in the study were telling as well: participants who used ChatGPT to write their essays retained the material less well and produced lower-quality texts.
Forklift
These results are in line with those of other studies in the field, and together they seem to reveal the first emerging trends: “We can very cautiously conclude that it largely depends on how you use it,” says Groen. “If it is used in a supportive way, AI can be enriching, but when you let ChatGPT take over the entire task, it makes sense that you learn far less yourself. It’s like going to the gym but using a forklift to lift the weights. You’re not training your own muscles that way.”
And that is the crux of the matter: “At university, you train your brain to think critically, but when you outsource that, you may well regress,” Groen explains. Although clear evidence that brain activity truly diminishes without training is not yet readily available, the reverse is well established, she emphasises. “If someone becomes very skilled at something – for example, chess – you can pinpoint exactly which areas of the brain show increased activity directly linked to that skill.”
A greater risk than students losing the skill of critical thinking, then, seems to be that they never properly acquire it in the first place. And the consequences of that are indeed visible in the brain. “That is why it is so important that we ensure that this learning process actually takes place,” Groen says.
She compares the issue to children who must first learn to do arithmetic on paper before they are allowed to use a calculator. The same applies to academic work, according to Groen: students must first master certain skills themselves before it makes sense to ask ChatGPT for help. “Otherwise, it becomes impossible to review the output critically and correct any mistakes in it. You need to develop a framework to deal with all the pitfalls, so that you don’t blindly accept everything it produces.”
Illusion of competence
But in order to develop that framework, practice is essential, says UvA education scientist Maurice Schols. He specialises in the use of technology in education and notes that critically evaluating ChatGPT output is far from straightforward. “Students see a result on the screen and assume it must be correct. It’s presented neatly and ready-made, so why scrutinise it? That illusion of competence, which ChatGPT is very good at creating, is something we need to counter in education.”
According to Schols, it is evident that students are using AI, and the numbers confirm this. Research by DUB, Utrecht University’s news platform, for instance, showed that a majority of students use ChatGPT or similar chatbots, mostly for brainstorming, though generating full texts with ChatGPT is also common.
Homogeneous essays that are clearly written by a chatbot, however, are still rare, Schols says. “It’s extremely difficult to prove something like that; detectors can produce false positives. But when there are suspicions of AI use, we do need to start that conversation.” He recalls a discussion he once had with a student who admitted to relying heavily on ChatGPT for an assignment. “I wanted to see her chat transcript and understand how she used it and why. The whole process needs to become much more transparent.”
Stimulating critical thinking
Simply slamming on the brakes is not the solution, Schols argues. “We mustn’t look away. We know that learning takes place when friction and resistance are experienced. If that no longer happens naturally, then as a university we must build that friction and resistance back into the system. For example, by focusing less on the final product and more on the process.”
He does not see investing in better plagiarism detectors as the way forward: “You want to stay ahead of that moment by teaching students during their studies to look critically at ChatGPT’s output. The educational system must learn to think differently so that we continue to stimulate that critical attitude.”
“That’s why we must take a good look at the curriculum,” Schols continues. “Many faculties now rely on summative assessments once per semester. I think that needs to change. We should move towards interim assessment moments where we examine the process and engage in dialogue with students. They must learn how to make responsible choices when using AI. Critical evaluation is a key element in that, so it needs to be firmly embedded. That critical attitude must be measured and monitored, separate from the final product.”
If that does not happen, he fears that students’ ability to think critically will indeed be at risk. “I am concerned, because many departments currently take a highly restrictive approach. Then you know it’s a losing battle, because students will use AI anyway. Universities must handle this proactively. If we take that seriously in the coming period, I’m not at all afraid that we’ll end up sending masses of students into the world with dulled minds.”