Lively and in-depth discussions of city news, politics, science, entertainment, the arts, and more.
Hosted by Larry Mantle
Airs Weekdays 10 a.m.-12 p.m.

Let’s all vote: should we crowdsource the morality of driverless cars?

A biker passes a pilot model of the Uber self-driving car on September 13, 2016 in Pittsburgh, Pennsylvania.
AFP/Getty Images

If a driverless car is hurtling toward a pedestrian, and the only way to swerve clear would kill its passenger, what should it do?

What if there are two passengers and only one pedestrian? What if the pedestrian is a child? It’s a twist on the Philosophy 101 trolley problem, but it’s a dilemma that driverless cars may one day encounter.
In an attempt to create a moral framework for these decisions, MIT researchers set up a site called the Moral Machine, where visitors decide who lives or dies in hypothetical driverless car accident scenarios. In partnership with researchers at Carnegie Mellon University, the MIT team then used the resulting data to build an artificial intelligence that learns from those votes and makes similar ethical decisions.
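At its core, that voting-based approach aggregates many individual preferences into a single decision. Here is a minimal sketch of the idea in Python, assuming hypothetical outcome labels and a simple head-to-head tally of recorded votes; the researchers' published system is more sophisticated, learning a preference model for each voter and running a virtual election among those models, which this simplification skips.

```python
from collections import Counter
from typing import List, Tuple

# A vote records which outcome a participant preferred in a
# head-to-head comparison: (preferred_outcome, rejected_outcome).
Vote = Tuple[str, str]

def majority_choice(votes: List[Vote], options: List[str]) -> str:
    """Pick the option preferred in the most recorded head-to-head votes."""
    tally: Counter = Counter()
    for preferred, rejected in votes:
        if preferred in options and rejected in options:
            tally[preferred] += 1
    # Fall back to the first option if no relevant votes were recorded.
    return max(options, key=lambda o: tally[o]) if tally else options[0]

# Hypothetical example: three crowd votes on a swerve-or-stay dilemma.
votes = [
    ("swerve", "stay_course"),
    ("stay_course", "swerve"),
    ("swerve", "stay_course"),
]
print(majority_choice(votes, ["swerve", "stay_course"]))  # -> "swerve"
```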

But is crowdsourcing morality the best way to create an ethical guideline for driverless cars? Or is this an example of the tyranny of the majority? How should we code the morality of driverless cars?

Want to be a part of the Moral Machine? Try it out here.

Guests:

Pradeep Ravikumar, associate professor in the Machine Learning Department at Carnegie Mellon University; he is one of the researchers who developed a voting-based system for ethical decision making

James Grimmelmann, professor of law at Cornell Tech; he studies how laws regulating software affect freedom, wealth and power