The Virtue of Selfishness for Robots

One of the biggest selling points for self-driving vehicles is safety: after all, robots don't get drunk, make phone calls, or do their makeup at 70 miles an hour. But what happens when things go wrong? Via Tyler Cowen at Marginal Revolution, the New Yorker recently examined that question:

Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

Cowen and others point to at least three possible models of machine ethics: a personal choice model, a market for risk, or a regulatory mandate.

[Image: Google driverless SUV (Treehugger), http://www.treehugger.com/cars/googles-self-driving-car-reaches-300000-miles-without-accident.html]

A personal choice model would allow drivers to select how to allocate risk between themselves and others. An altruist might weight their own risks as less important than those of others; that is, they'd prefer to swerve off the bridge rather than hit even one other person. Conversely, a completely self-interested user would choose to protect themselves at all costs, even if it means sending a bus full of kids careening into the river. As a middle ground, a user might adopt a moderate rule, perhaps prioritizing the safety of people in their own vehicle over that of an equal number of people in another car, but accepting risk when more 'outside' lives are at stake.
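To make the personal choice model concrete, here's a minimal sketch of what a single user-tunable dial might look like. Everything in it is hypothetical: the names (`Maneuver`, `choose_maneuver`, `altruism_weight`) and the probabilities are invented for illustration, not drawn from any real vehicle's decision logic.

```python
# Hypothetical sketch of a user-tunable risk weighting; all numbers invented.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_occupants: float  # expected fatalities inside the car
    harm_to_others: float     # expected fatalities outside the car

def choose_maneuver(maneuvers, altruism_weight: float) -> Maneuver:
    """Pick the maneuver minimizing a weighted sum of expected harm.

    altruism_weight = 0.0 counts only the occupants (fully selfish);
    altruism_weight = 1.0 counts every life equally (fully altruistic);
    values in between give the 'moderate' rules described above.
    """
    def score(m: Maneuver) -> float:
        return m.harm_to_occupants + altruism_weight * m.harm_to_others
    return min(maneuvers, key=score)

# The bridge scenario from the quote, with made-up probabilities:
options = [
    Maneuver("swerve off bridge", harm_to_occupants=0.9, harm_to_others=0.0),
    Maneuver("keep going", harm_to_occupants=0.1, harm_to_others=4.0),
]

print(choose_maneuver(options, altruism_weight=1.0).name)   # swerve off bridge
print(choose_maneuver(options, altruism_weight=0.05).name)  # keep going
```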

We might prefer this model for the freedom it gives people to pursue whatever norms they're comfortable with, but a utilitarian would object that permitting selfish behavior may have negative externalities: it not only shifts risk onto others, but also increases overall risk, so we might all be safer if we collectively agreed to give our cars altruistic ethics. While this logic can be hard to swallow when it's your life at risk, the argument suggests that we should at least consider some other options.

What if we let people choose how much risk to bear, but required that the selfish compensate the altruists, perhaps through higher insurance premiums? Different people have different tolerances for risk, and those who value their own safety more should be willing to pay those who are more cavalier about death by fiery auto crash for the privilege of acting selfishly when disaster strikes. Cowen paints an amusing picture of this kind of market: “And over here, at a price discount, is the Peter Singer Utilitarian Model. The Roark costs $800 more.”
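One could imagine an insurer pricing that trade directly. Here's a toy sketch of such a premium schedule; the harm curve, the function names, and the dollar figures (including the roughly $9 million value of a statistical life, in the neighborhood of what US regulators use) are all assumptions for illustration:

```python
# Toy premium schedule: surcharge proportional to the extra expected harm
# to others that a driver's chosen setting imposes, relative to the fully
# altruistic baseline. All numbers are invented.

def external_harm(altruism_weight: float) -> float:
    """Expected fatalities per year imposed on others (a made-up curve)."""
    return 5e-5 + 1e-4 * (1.0 - altruism_weight)

def annual_surcharge(altruism_weight: float,
                     value_of_statistical_life: float = 9_000_000) -> float:
    """Price the marginal external risk against the altruistic baseline."""
    marginal_risk = external_harm(altruism_weight) - external_harm(1.0)
    return marginal_risk * value_of_statistical_life

for w in (0.0, 0.5, 1.0):
    print(f"altruism_weight={w}: ${annual_surcharge(w):,.0f}/year")
# 0.0 -> $900/year, 0.5 -> $450/year, 1.0 -> $0/year
```

With these invented numbers the fully selfish setting happens to land near Cowen's $800 joke, but the real schedule would be an empirical question for actuaries.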

This seems like a sensible way to allocate risk: everyone still gets to choose how much to protect themselves versus others, but those who impose more risk on others also compensate those who impose less. But, of course, there's an objection: “What about people who can't afford to pay the premium? Why should the poor be forced to take greater risks, while the rich can easily afford to choose the ‘selfish’ car?” This is something of a contingent argument, since its force depends on the price of the selfishness premium; but if avoiding the riskiest policy were out of reach for many drivers regardless of their risk tolerance, we should take this criticism as a serious problem.

The conventional liberal answer to this sort of situation is “Regulate it!” We could simply require that all vehicles be sold with programming that maximizes everyone's safety: in the bridge case, the car won't hesitate to swerve off toward certain death for its occupants if doing so saves more lives in total. From an aggregate perspective this is good, but isn't there something chilling about knowing your car will turn on you and send you to your death for the greater good?

[Image: And it probably won't even look this cool. (Reuters), https://i0.wp.com/blogs.reuters.com/wp-content/uploads/2006/04/stunt300.jpg]

It seems as if we're left with a difficult trilemma between freedom, market values, and the social good; each option leaves something to be desired. However, there's another way to judge these options, as David Levinson points out:

Determining the strategy for self-preservation will inevitably be easier than determining the strategy for what others are doing, as the others (a crowd of people, other cars) are much less predictable. If everyone assumes the other will do self-preservation, that is more stable than me trying to predict what you will do to avoid hitting me while you try to predict what I will do, ad infinitum.

Our trilemma, it turns out, hinges on the assumption that different ethical rules are equally easy to follow. But action in a life-and-death situation has to happen very quickly; more complicated models mean slower response times, and slower responses mean more accidents. In particular, a model that has to guess at and process many different kinds of behavior from different vehicles will be far more complicated than one with a uniform model of what others will do. To Levinson, this points toward an ethic of self-preservation, but perhaps the more important point is that, on safety, standardized, predictable responses will trump individual choice every time. Once a standard is set, risks will be lowest if all users follow it, so whatever system first enters the market on a broad scale is likely to become locked in.
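A toy illustration of Levinson's point (purely illustrative; the policy names and counts are invented): when every car follows one published standard, predicting a neighbor is a single fixed hypothesis, but when each car may follow any of several private policies, every level of 'I think that you think...' multiplies the hypotheses to check, and milliseconds matter.

```python
# Illustrative only: hypothesis count for predicting another vehicle,
# under a shared standard vs. heterogeneous private policies.

POLICIES = ["self_preserving", "altruistic", "utilitarian"]  # invented set

def hypotheses_uniform(depth: int) -> int:
    """Everyone follows the same published rule: one hypothesis, any depth."""
    return 1

def hypotheses_heterogeneous(depth: int, n: int = len(POLICIES)) -> int:
    """Each level of mutual prediction multiplies the branches."""
    return 1 if depth == 0 else n * hypotheses_heterogeneous(depth - 1, n)

for depth in range(1, 6):
    print(depth, hypotheses_uniform(depth), hypotheses_heterogeneous(depth))
# depth 5: 1 hypothesis vs. 243, before the car has even chosen its own move
```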

While this consideration simplifies the problem in some ways, it doesn't mean that ethicists can call it a day just yet. Different models of selfishness versus altruism might still fit within the constraint that consequences must be processed quickly, and, crucially, whatever ethical framework we build into the machines around us may be difficult and costly to change. It's important to get it right the first time, and to do so in a way that weighs both the best ethical framework and the dangers of implementing it. If ethicists want a say in whose lives matter to our vehicles, the time to start is now, and the place to start is with the engineers.
