Columnist Han van der Maas finds it incomprehensible that there are still academics who oppose the use of AI. “With a ban, academia will be sidelined from the biggest technological revolution of the last fifty years.”
This summer, hundreds of colleagues called for a halt to the "uncritical introduction" of AI in academic education and research, a phrase I read as a euphemism for a ban. This call surprised me. With a ban, academia will find itself on the sidelines of the biggest technological revolution of the last fifty years, an innovation that it itself set in motion. Burying our heads in the sand will not steer this revolution in the right direction.
Tobacco industry
I don’t find the opponents’ arguments very convincing either. The energy cost of my Large Language Model (LLM) use is negligible within my total energy consumption, and it is likely to fall further in the near future. Nor do I consider the improper use of training data to be particularly serious: I, too, continuously train my own neural networks on data from others without having obtained explicit permission to do so. And the fact that these systems are commercially exploited is not purely a problem. The authors even go so far as to equate AI companies with the tobacco industry.
According to the appeal, it has been demonstrated that AI impairs learning and hinders critical thinking. However, this claim rests on an unpublished MIT study with a sample size that is far too small. Presenting such a result as unequivocally demonstrated is not exactly a model of critical thinking. It is evident that students can misuse LLMs to complete assignments and all kinds of writing tasks; those who use one of the better LLMs and check the references will not be caught out. This forces us to thoroughly rethink our assessment model, which is annoying. But it is equally evident that students and academics can benefit enormously from this new technology.
Labour productivity
According to the appeal, LLMs are useless, but that is not my experience. I use them every day for all kinds of tedious tasks: writing letters and emails in proper English, summarising criticism of certain scientific methods or theories, answering rather stupid mathematical questions (favourite prompt: "I still don’t understand, explain it again differently"), generating surprisingly good programming code, finding articles on topics I am not familiar with, and critically evaluating my texts. My productivity has roughly doubled in the last year and the quality of my work has improved. I have also learned a great deal. My research sits at the intersection of mathematical psychology, biology and physics, and all kinds of mathematical theorems, models and methods that I only half understood before are now much clearer to me thanks to my favourite prompt.
Confirmation and disconfirmation
LLMs tend to confirm whatever the user suggests, sometimes invent information when none is available, cannot count the letters in words and cannot play chess (a mortal sin). The latter two limitations will soon disappear as AI technologies are combined; the former I can live with. It is essential to check LLM output and also to ask explicitly for disconfirmation. This applies to all information systems (including the teacher).
The intrinsically interested user, and therefore also the student, suddenly has access to a source of information that is always available and, when used properly, can be genuinely valuable. What exactly constitutes proper use is still unclear and changes rapidly with each new update. But that is precisely what we need to discuss. Banning AI in the classroom and in our existing software systems would be a missed opportunity.
What strikes me is that some of my colleagues hardly use LLMs, while others use them a lot, each with their own views. It reminds me of the Covid vaccination debate. I hope we don’t end up in the same polarisation.
Here are some comments from ChatGPT 5 on this column:
“You argue that ‘uncritical introduction’ is a euphemism for a ban. That’s a bit harsh. Some people want to regulate more critically or integrate in a controlled manner, not impose a total ban. If you take a more subtle approach, you avoid caricaturing your opponents.”
“You compare energy consumption to your personal consumption, but opponents often think in terms of global scalability and ecological footprint. Your argument works better if you acknowledge that it is currently small on an individual level, but that the question is what happens with mass use.”
“You write that you yourself also train models on other people’s data without permission. That makes your point clear, but it almost sounds like a justification for violations. Perhaps it would be stronger to say: ‘Academic practices such as training on public datasets are similar to what companies do, but legitimate discussions about boundaries and consent are needed.’”
“You state that their claims are ‘not strong’, but the nuance that there are indeed real concerns (privacy, bias, surveillance, dependence on Big Tech) could be more prominent. Otherwise, you give the impression that you reject all criticism, when in fact you are advocating critical use.”
“Your personal productivity leap is convincing, but the risk is that readers will think, ‘Yes, but you are a highly educated, intrinsically motivated user.’ Perhaps explicitly acknowledge that not everyone uses it that way and that guidance and education are needed.”
“The conclusion is strong, but the comparison with the vaccination debate is also loaded. It can be distracting or polarising. Perhaps you could keep the comparison but phrase it a little more mildly: ‘It reminds me of previous debates that became less fruitful due to strong polarisation’.”
Han van der Maas is a full professor of Psychological Methods at the UvA.