UvA scientists outsource expensive and boring tasks to AI – how responsible is that?
Photo: Marc Kolle


Pepijn Stoop,
25 February 2025, 10:07

UvA scientists are saving time and money by using AI in their research. But critics warn that this is dangerous: it is impossible to know how AI like ChatGPT arrives at its answers. Is the new chatbot DeepSeek, which claims to be much more transparent, the solution? 

Thanks to ChatGPT, Petter Törnberg, a researcher in Computational Social Sciences at the University of Amsterdam, was able to automate one of his most expensive and slowest research tasks: analyzing social media messages. 
 
He studies the behavior of politicians on social media such as X, for example by looking to see if they spread disinformation. He believes this contains “a wealth of information about human behavior and our society”, but before this data can be used, the content of each message must be converted from text to research data. 
 
Before the advent of AI, this was a costly and error-prone manual job, Törnberg explains: “I had to hire student assistants or ‘coders’ for tens of thousands of euros, who spent months reviewing mind-numbing messages for my research.” Now ChatGPT can do this work for him in “fifteen minutes for three euros, with fewer errors than my students.”
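The article does not show Törnberg’s actual prompts or label set; a minimal, self-contained sketch of this kind of LLM-assisted coding (with an illustrative label set and a stub standing in for the chat model) might look like:

```python
# Hypothetical sketch: converting social-media posts into coded research data
# with an LLM. The categories and prompt wording are assumptions for
# illustration, not Törnberg's actual setup.

LABELS = ["disinformation", "factual", "opinion", "other"]  # assumed categories

def build_prompt(message: str) -> str:
    """Ask the model to assign exactly one label to one post."""
    return (
        "Classify the following social-media post into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n"
        f"Post: {message}\n"
        "Answer with the category name only."
    )

def parse_label(reply: str) -> str:
    """Map a raw model reply onto the label set; fall back to 'other'."""
    cleaned = reply.strip().lower()
    return cleaned if cleaned in LABELS else "other"

# In practice build_prompt's output would be sent to a chat-completion API;
# here a stub stands in for the model so the sketch runs on its own.
def fake_model(prompt: str) -> str:
    return "opinion"

coded = parse_label(fake_model(build_prompt("Taxes are theft!")))
```

The essential step is the last one: the model’s free-text reply is normalized back into a fixed category scheme, which is what turns message text into analyzable research data.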
 
Although Törnberg is extremely happy with his new research assistant, he also has a critical comment: the model was developed by the American tech company OpenAI. Törnberg: “It is problematic when we, as scientists, become dependent on such powerful platform companies.” 

Petter Törnberg (Photo: UvA)

AI in science
Törnberg is not the only UvA scientist who uses AI for time-consuming tasks. Since last year, historians have been using Transkribus to quickly digitize handwritten documents, and the chemistry department has a ChatGPT-controlled robot that optimizes the synthesis of ten to twenty molecules per week – something that would take a PhD student months. 
 
Jelle Zuidema, associate professor of explainable AI, warns against the “proliferation of AI applications” and the risks of dependence on tech companies such as OpenAI, which offer little transparency. “Science discovers the truth step by step, so the sources used must remain traceable. AI cannot do this when it performs important steps,” says Zuidema.

Jelle Zuidema (Photo: UvA)

Sandro Pezzelle, assistant professor of responsible AI, explains that science does not know exactly how AI models such as ChatGPT work: “We do not know what data it has been trained on, or whether it applies techniques to make answers factually correct and harmless.” 
Zuidema argues that this means that the factual accuracy of the decisions of AI language models can never be guaranteed. 
 
Törnberg acknowledges these concerns but points out that people make mistakes too: “That is why we always validate our results against other coders, regardless of whether the results were produced by humans or AI.” In his research, he applies procedures to make AI results “factually correct and reproducible”, just as he did before the advent of AI. 
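Validating against other coders typically means comparing labels from two coders (human or AI) on the same messages. The article does not name a metric, so as an illustrative sketch, two standard summaries are raw percent agreement and Cohen’s kappa, which corrects for agreement expected by chance:

```python
# Illustrative validation of one coder's labels against another's.
# Neither metric is named in the article; both are standard choices.
from collections import Counter

def percent_agreement(a: list[str], b: list[str]) -> float:
    """Share of items on which two coders assign the same label."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement (undefined when expected agreement is 1)."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

human = ["x", "x", "y", "y"]   # e.g. a student coder's labels
model = ["x", "x", "y", "x"]   # e.g. an LLM's labels on the same posts
agreement = percent_agreement(human, model)  # 0.75
kappa = cohens_kappa(human, model)           # 0.5
```

A kappa well below the raw agreement, as here, signals that part of the apparent agreement is what two coders would reach by guessing from the label frequencies alone.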


DeepSeek as an alternative  
DeepSeek is an interesting newcomer to the debate on the reliability of AI in scientific research. 
 
This Chinese AI model, launched at the end of last month, can compete with OpenAI’s models in terms of performance, but has an important advantage: it gives researchers more insight into how the algorithm works. Moreover, they can run the model entirely themselves, including data storage, without having to share data with a tech company. Could this be a solution to the criticism that it is dangerous for scientists not to know how a chatbot arrives at its answers? 
 
Zuidema sees “open” models such as DeepSeek as a “good opportunity” to improve AI transparency: “We should switch to ‘open’ AI for important research steps as much as possible, so that we can better understand how these models reach their conclusions.” However, he warns: “We do not yet have the tools to fully understand ‘open’ models.” Pezzelle adds: “Because AI changes so quickly, we are constantly in need of new techniques to understand it.” Despite their critical comments, both are enthusiastic about open models such as DeepSeek. 
 
Törnberg is skeptical about the innovation that DeepSeek supposedly brings: “The fact that you can use it entirely on your own computer does indeed give you more control over your data, but that was already possible with other AI models.” 
 
He also points out that although DeepSeek is “open”, it is indirectly controlled by the Chinese government, a fact that quickly becomes clear when you ask it about sensitive issues such as Tiananmen Square or Taiwan. 

Sandro Pezzelle (Photo: UvA)

The future of AI in science  
Zuidema expects AI to become indispensable in science: “The possibilities are too tempting and the convenience too great.” He thinks AI will be used more and more often for analysis and as a research technique. 
 
Nevertheless, he emphasizes that scientists must remain responsible: “We must always ask ourselves: is there a human scientist who guarantees this work?” 
 
He criticizes the lack of UvA guidelines for AI in research: “I am also on the AI in Education Taskforce; there, too, it takes a long time before recommendations are fully developed and implemented. We are now in a transition phase towards more AI in research, so the University of Amsterdam must quickly formulate a policy on this.” 
 
Despite the discussions, Törnberg remains optimistic about the impact of AI: “Concerns about the reliability of AI are well known, but AI also democratizes academia. It enables young scientists to conduct research that only a few years ago required a well-funded lab. That’s exciting.”  
 