Lively and in-depth discussions of city news, politics, science, entertainment, the arts, and more.
Hosted by Larry Mantle
Airs Weekdays 10 a.m.-12 p.m.

As predictive algorithms become widespread, how do we approach machine bias?

Maschinenmensch (machine-human) on display at a preview of the Science Museum's Robots exhibition.
Ming Yeung/Getty Images


Ideally, predictive algorithms are stone-cold, rational, big-data-crunching tools that can assist humans in their flawed decision-making. The caveat is that they often reflect the biases of their creators.
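
To make that concrete, here is a minimal Python sketch of how a model trained on biased historical decisions can reproduce that bias. The loan records, groups, and 50% threshold below are entirely invented for illustration; they are not data discussed on the show.

```python
# A minimal sketch (hypothetical data) of how a model trained on biased
# historical decisions reproduces that bias.

from collections import defaultdict

# Historical loan decisions: (applicant_group, credit_score, approved).
# Group "B" applicants were approved less often at identical scores.
history = [
    ("A", 700, True), ("A", 650, True), ("A", 600, False), ("A", 720, True),
    ("B", 700, False), ("B", 650, False), ("B", 600, False), ("B", 720, True),
]

# "Train" by memorizing the approval rate per group -- a stand-in for
# any model that picks up group membership as a predictive signal.
totals, approvals = defaultdict(int), defaultdict(int)
for group, _score, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predict(group):
    """Approve if the group's historical approval rate exceeds 50%."""
    return approvals[group] / totals[group] > 0.5

for group in ("A", "B"):
    print(group, "approved?", predict(group))
# Prints: A approved? True / B approved? False -- the historical
# disparity survives even though the score distributions are identical.
```
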

According to Laura Hudson in her FiveThirtyEight piece “Technology Is Biased Too. How Do We Fix It?”, algorithmic bias is a growing problem, as organizations increasingly use algorithms as a factor in deciding whether to give someone a loan, offer someone a job, or even convict a defendant or grant them parole.

But fixing these algorithms presents a philosophical quandary: how do we define fairness? And if bias is impossible to avoid entirely, which biases are less harmful than others?
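
The quandary is easy to demonstrate. The sketch below, again using invented numbers rather than anything from the segment, compares two common formal definitions of fairness (equal true positive rates versus demographic parity) and shows that even a perfectly accurate predictor can satisfy one while violating the other whenever the groups' underlying base rates differ.

```python
# A hedged sketch of the "which fairness?" quandary, using two common,
# mutually incompatible definitions. All numbers are invented.

def demographic_parity_gap(preds_a, preds_b):
    """Difference in positive-prediction rates between the two groups."""
    rate = lambda p: sum(p) / len(p)
    return abs(rate(preds_a) - rate(preds_b))

def tpr(preds, labels):
    """True positive rate: fraction of actual positives predicted positive."""
    pos = [p for p, y in zip(preds, labels) if y]
    return sum(pos) / len(pos)

# Toy parole-style data: 1 = "low risk". Base rates differ by group.
labels_a = [1, 1, 1, 0]          # group A: 75% actually low risk
labels_b = [1, 0, 0, 0]          # group B: 25% actually low risk
preds_a  = [1, 1, 1, 0]          # a perfectly accurate predictor
preds_b  = [1, 0, 0, 0]

print("TPR gap:", abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b)))  # 0.0
print("Demographic parity gap:",
      demographic_parity_gap(preds_a, preds_b))                          # 0.5
# Even a perfect predictor violates demographic parity when base rates
# differ -- so whether it is "fair" depends on which definition you pick.
```
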

So how are problematic algorithms already being used today? How, if at all, can they be made “fair”? And how can we use algorithms responsibly?

Guest:

Suresh Venkatasubramanian, professor in the School of Computing at the University of Utah and member of the board of directors of the ACLU of Utah; he studies algorithmic fairness