Photo: Bètabreak

Panel discussion on racism in AI: “Artificial intelligence holds up a mirror to us”

Sija van den Beukel,
17 March 2023, 09:12

How do algorithms contribute to racism? And what are the consequences? These questions were addressed during a panel discussion Wednesday afternoon at Science Park. “We need to create a ‘safe space’ where companies dare to be transparent without being punished right away.” 

It happened to her several times during exams she took for the VU with the proctoring software Proctorio, says bioinformatics student Robin Pocornie in the lobby of building 904 at Science Park, where about 50 students had gathered for the discussion: she was not allowed into the exam because the software did not recognize her face.

“Why wasn't the technology problem in the software just fixed?”

When she raised the problem with the VU, the university put it down to a bad Internet connection. Pocornie suspected an error in the software, which did not recognize her dark skin color because the algorithm had been trained mostly on white faces. Pocornie was not vindicated until she took the case to the Netherlands Institute for Human Rights. Presuming discrimination, the Institute issued an interim ruling last December.
  
Not a technology problem 
“Why wasn't the technology problem in the software just fixed?” someone in the audience asks. The panel, moderated by Robin Hardy of Bètabreak and Diversity Officer Machiel Keestra, has a quick answer to that. Panelist Katrin Schulz, who researches how humans make predictions at the Institute for Logic, Language and Computation (ILLC), says: “Bias in AI is mostly framed as a technology problem. But it's also a social problem, precisely because it's about people.”
 
Pocornie says: “We do a lot of research on how software can become more inclusive. But institutions have a responsibility to verify that the technology they're using is appropriate for the population they're applying it to. If you as a university claim to be inclusive and diverse, you must act accordingly.” 
  
Democratic
A man who introduces himself as a postdoctoral sociology researcher from Miami asks how we can make AI more democratic. “How are we going to work with the companies developing these technologies so we can intervene before the problems get too big?”

We should continue to develop AI, Liem believes, because AI holds up a mirror to us

That starts with not developing everything that can be developed, says panelist Cynthia Liem, associate professor of Computer Science at TU Delft. “As a researcher, I sometimes turn down proposals for predictive algorithms I don't want to develop.”
  
Liem is referring to the development of algorithms that screen job applicants. “I don't think it is possible to make such an algorithm neutral. People have prejudices; there's no way around that. The issue is what the consequences are. Systematically recording how you select candidates is problematic, because there is always bias in that selection.”
 
Public scandal 
Would it help to boycott companies that abuse AI? Or to rein them in with bad publicity? Liem responds: “That's a dilemma. On the one hand, the discussion only comes up when it becomes a public scandal, as in Pocornie's case. At the same time, AI development faces complex challenges; even with a very comprehensive protocol, things can still go wrong sometimes. We need to create a ‘safe space’ in which companies dare to be transparent and can apologize when something goes wrong, without being punished right away.”
 
Someone in the audience brings up the example of the shooting at Michigan State University, where ChatGPT wrote the press release. Wouldn't it be better to ban that? That goes too far, Liem thinks: “I wouldn't be in favor of banning the tool, because on the flip side, ChatGPT can provide inclusion for people who have trouble writing.”
  
Mirror 
On the contrary, we should continue to develop AI, Liem believes. “Artificial intelligence holds up a mirror to us. In that way, it can help us enormously to make discrimination visible. We should use it as a reflection tool.”

“A lot of you are being trained to develop these technologies later. So it's up to your generation to do something” 

Finally, the panel offers some tips and advice on how to limit discrimination in AI. Pocornie comments: “With the Institute's ruling in December, there's a precedent for future cases. I hope people will start using it.” She advocates for an opt-out option. “As a student, I was required to use the software; otherwise I couldn't take my exam. We as people need to retain our autonomy and build in an option not to participate.”
  
When building software, keep talking to the customer, Liem advises: show them exactly what's possible so they can decide whether that's what they want. Schulz envisions a testing ground where companies can test their products for diversity and inclusion before taking them to market. In addition, it is the university's job to train the people who will work at these companies. “We need to make sure our student populations are diverse enough so that they make good decisions in development.” Keestra turns to the audience: “A lot of you are being trained to develop these technologies later. So it's up to your generation to do something about this.”