Photo: Marc Kolle.

Pepijn Stoop | Let’s teach humans to put out AI fires

Pepijn Stoop, 2 February 2024, 11:31

Worryingly, many people cannot distinguish fake images created with the help of artificial intelligence from real ones, argues Pepijn Stoop. “A missed opportunity, because although these pictures seem perfect, AI regularly makes mistakes.”

Last week, images of a burning Eiffel Tower caused a flood of concerned posts on social media. Small detail: nothing was actually on fire, because the images were generated by AI. Creating fake images with AI is becoming increasingly simple, but recognizing them remains a challenge for many people. AI researchers should focus more on this human aspect, rather than trying to solve the problem with more AI.


Deepfakes

Five years ago, I had my first introduction to AI-generated videos, also called deepfakes. An acquaintance at the time showed me how he used his impressive computer set-up to replace the head of a ranting Gordon Ramsay in a Hell's Kitchen episode with that of my fraternity mate. Although it was a joke and the quality left something to be desired, I marveled at it for weeks.


Since then, there has been a huge shift. Any layman with a smartphone can set the Eiffel Tower on fire, and although the Gordon Ramsay morph never went global, AI photos and videos now often do. Unfortunately, not all of these images are meant to be as funny as the Hell's Kitchen video. This week, for example, the platform X had to intervene after AI-generated nude photos of Taylor Swift were shared en masse, and during last year's Paris riots, fake images of Macron spread targeted disinformation.


TikTok

AI science's solution to this problem is more AI. Fake-image detectors are often cited as a remedy in the fight against fake images. Yet these do not always work, for example on lower-quality photos, and such detection is not used by platforms like TikTok that give these fake images a stage.


Meanwhile, plenty of attention is paid to fake nude photos of pop stars and other deepfakes: the images are eagerly shared, with all the consequences that entails. Yet in my experience, people outside the AI field often don't know how to distinguish these images from "real" ones. A missed opportunity, because although these images seem perfect, AI regularly makes mistakes.


For example, AI remains bad at creating hands: those fake photos of President Macron stood out because he sometimes had twelve fingers. The Eiffel Tower fire was exposed as fake because the images looked too polished, since AI has trouble mimicking texture.


Human perception

In the end, all these tips are media literacy in new clothes. Just as we all know by now that you shouldn't choose "welcome123" as your password, I think people outside AI should learn what fake images look like and how to recognize them for themselves. AI experts and scientists, who after all brought these techniques into the world, should take the lead here. Current research focuses too much on developing detection and too little on how humans perceive AI images, or on developing training programs, for example.