Technological determinism in AI education.
Photo: Marc Kolle.
Opinion

Current AI education is too one-sidedly focused on technology

Frank Wildenburg
4 June 2025 - 12:30

The rise of ‘modern’ artificial intelligence is not only bringing about a change in the curriculum. The norms and values implicit in the programme, the ‘worldview’ that is being passed on to students, are also changing, argues artificial intelligence lecturer Frank Wildenburg.

With the rise of ‘modern’ artificial intelligence (AI) in almost all aspects of society, AI education, too, has to adapt. To understand the new AI models, students need more knowledge of mathematics and programming, while ‘traditional’ AI courses on psychology and logic seem less and less relevant.


But these changes result in more than just a change of curriculum. The norms and values implicit in the programme, the world view presented to the students, change with it.


Innocent decision
The first part of a ‘representative’ Dutch AI bachelor’s programme consists of imparting the technological and scientific basis necessary for the field. Think of practical knowledge such as programming skills, but also of fundamental theories. This sounds like an innocent decision – after all, one must understand how AI systems work before being able to analyse them.


But the choice of these theories, without giving students the knowledge to critically analyse them, means that these foundations are presented to students as ‘inevitable’ and ‘unchangeable’. Yet this needn’t be the case: the choice to base AI models on precisely these theories is, after all, contingent on how the field happened to develop.

Much of the ‘practical experience’ within the AI programme consists of learning to program rather than acquiring ‘soft’ ethical knowledge

Moreover, the choice to start with fundamental technological theories is not a value-free one! Technical knowledge of the models is necessary to (re)produce AI systems, but why has the decision been made to teach students how to code first, and only inform them about the ethical risks afterwards? It’s as if we hand students a match first, and a bucket of water only two years later.


After the students have learned about the fundamental theories, they learn how these are applied in current or future AI systems. For the students, this is a popular topic: at last they can focus on the ‘real’ AI models they encounter in the news and in their personal lives.


But implicitly, this presents these technologies as a ‘logical’ consequence of the foundations the students have seen before, without reflecting on the values that caused precisely these technologies to emerge from them. Furthermore, the prioritisation of technological over ethical aspects creates the image of ‘a real scientist’ as someone who only wants to create, without worrying about potential downsides. Or, as I have heard more than once: “I just want to tinker with the models, and someone else can think about the ethical questions.”


Side track
Only at the end of the programme, or as a ‘side track’ alongside the ‘real’ courses, do students come into contact with the ‘non-technological’ parts of their field: ethics, policy, and society. In practice, these topics are covered by only a few courses, separate from the rest of the curriculum.


What’s more, these courses aren’t always applicable to students’ experiences. Because these few courses must cover the entire ‘ethical’ part of the programme, they often consist of abstract theories and hypothetical visions of the future. Useful knowledge, but not relevant to the here and now, or to how we could adjust current developments.


The practicalities of the programme don’t help. Much of students’ contact with ‘practice’ consists of encounters with employees of tech companies who are more concerned with programming skills than with ‘soft’ ethical knowledge. And then there’s the fact that AI development increasingly takes place within big tech companies like Google, Meta and OpenAI, which means that purely academic, theoretical knowledge has lost some of its former prestige.


The result: at the end of their programme, students are convinced that analysing and regulating the development of their field is something only ‘others’ should concern themselves with, and not relevant to their own professional future.


Technological determinism

The message implicit in the above is clear: the foundations of the field are fixed, all current developments are logical consequences of them, and it is the students’ role to facilitate these developments. Everything ‘around’ this – ethics, policy, regulation – can be delegated to other disciplines, or thrown out entirely.


This is a message of technological determinism: the philosophical theory which holds that technological development progresses according to its own internal logic of efficiency. The norms and values of cultures are determined by technological developments, but not vice versa. Or, to put it bluntly: technological development cannot be influenced, so society had better adapt.

AI development is increasingly taking place at large AI companies such as Google, Meta and OpenAI, resulting in academic and theoretical knowledge becoming less prestigious

Atomic bomb

This idea has often been criticised within modern philosophy. Consider, for example, the fact that technology is shaped not only by an innovation push but also by a demand pull from society. Would the atomic bomb have been invented if there hadn’t been a Second World War? Furthermore, societal values influence what those demands are, and what is seen as a ‘logical’ consequence. Despite this, technologically determinist views are hard to dislodge within the tech industry – and what is described above shows how AI education reproduces this view.


And this is a great risk! Technological determinism can cause scientists to dismiss the ethical consequences of their developments (because ‘someone else would do it anyway’), to disregard alternatives (because ‘one thing follows from another’), or to believe that all scientific problems can be solved with technology (because the ‘internal efficiency’ will take care of it).


With the above, I don’t mean to say that a ‘modern’, technical AI curriculum is a bad thing. But we can emphasise to students that this is not the only part of AI that matters. For example, we could present the societal aspects of AI as just as fundamental as the technological ones. Or, in other words, make clear to students that programming at Google needn’t be their only end goal, and that a job as a policy maker, ethicist or legal expert is just as valid as a substantively technical career. Hopefully, this way we can ensure that we keep control of AI, and not the other way around.


Frank Wildenburg is a logician and lecturer in artificial intelligence at the Faculty of Science at the UvA.
