
What’s your car thinking?

Ethical dilemmas under the hood of driverless vehicles

Imagine a time in the not-so-distant future. You’re heading down the road in your driverless car when another car comes barreling at you head-on.

What does your autonomous vehicle do? Does it swerve off the road and risk hurting pedestrians? Does it crash into the other car, possibly injuring the occupants?

As the prospect of driverless vehicles everywhere on our roads gets closer to reality, some worry that the ethical questions behind them are not being addressed.

“Autonomous vehicles provide us an example where we are really offloading human judgment onto machines,” explains Donald Hubin, director of Ohio State’s Center for Ethics and Human Values.

Today, drivers are the ones making those split-second decisions about how to react when facing potential danger, Hubin says. But in the future, it will be up to programmers to decide how autonomous vehicles will react.

“We’re designing the vehicles to make choices about how to respond in those situations,” he says. “Those are obviously morally loaded choices.”

Hubin says the problem is that there is no consensus on what factors programmers should weigh when they set up autonomous vehicles. For example, is the goal simply to minimize harm? Hubin describes another scenario in which the vehicle may have to decide whether to crash into a motorcyclist who’s wearing a helmet or one who isn’t. “There seem to be good public policy reasons to say we’re not going to make you a more likely victim if you’re being responsible and wearing a helmet,” he says.
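To see how such a choice might look once it becomes code, here is a deliberately simplified, hypothetical sketch in Python. The cost function, the helmet discount and all of the numbers are invented for illustration; they do not come from Hubin or from any real vehicle software.

```python
# Hypothetical illustration only: a toy "minimize expected harm" rule.
# Every name and weight below is invented for this example.

def expected_harm(option):
    """Crude estimate of the harm a candidate maneuver would cause."""
    harm = option["injury_risk"] * option["people_at_risk"]
    # A programmer might discount risk for a helmeted rider on the theory
    # that a helmet reduces injury severity, precisely the kind of
    # morally loaded choice Hubin describes.
    if option.get("helmeted"):
        harm *= 0.7  # arbitrary illustrative weight
    return harm

options = [
    {"name": "swerve toward the helmeted rider",
     "injury_risk": 0.5, "people_at_risk": 1, "helmeted": True},
    {"name": "swerve toward the unhelmeted rider",
     "injury_risk": 0.5, "people_at_risk": 1, "helmeted": False},
]

# A naive harm-minimizing policy steers toward the helmeted rider,
# the outcome Hubin flags as a public-policy problem.
print(min(options, key=expected_harm)["name"])
```

The point is not the arithmetic; it is that each weight is a value judgment someone has to write down.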

Hubin is calling for a broader conversation about these issues that will lead to some type of federal policy. “There need to be coordinated solutions,” he says.

Professor Bryan Choi views the issue through the lens of the legal challenges to come.

Choi is jointly appointed at Ohio State’s Moritz College of Law and the Department of Computer Science and Engineering. He focuses on legal issues in technology and computing.

He argues our attention should be on whether programmers have written and trained the code in a “crashworthy” way that prevents undue harm when accidents happen, the way we expect seatbelts and airbags to work.

Choi explains that what courts care about is unreasonable harm, not ethical dilemmas in which all choices are equally bad.

“It’s impossible to prevent 100 percent of accidents, but you have to put best efforts into mitigating the worst of those,” he says. “I think that’s where the ethics questions should go. When has the software manufacturer done ‘enough’ to anticipate and mitigate foreseeable failures?”