Moral Autonomy in the Age of the Driverless Car

Imagine you have overslept and are running late for work. You rush through your morning routine, get out of the house, and start your car. A quick glance at the dashboard clock reveals that it may be possible to make your first meeting, but it will require the drive to the office be completed in record time.

The DMV has set very specific rules governing how drivers should conduct themselves on the road: come to a complete stop for three seconds at each stop sign, yield to whoever arrives first at a four-way intersection, and never exceed the posted speed limit. The reality is far more ambiguous. Drivers learn through experience which rules can be bent without dramatically increasing the chances of an accident or drawing attention from law enforcement. That inherent wiggle room lets drivers justify breaking the rules every once in a while, especially when they are late and facing the prospect of an angry boss.

Over two and a half million vehicles are involved in automobile accidents each year in the United States.

Owners may also specify how aggressively their car will drive based on how late or impatient they are (in fact, Google engineers have already implemented such a feature). Although these features may benefit individuals by allowing them to reach destinations faster, they increase the risk of accidents for everyone on the road.

Bryant Smith, a researcher at Stanford Law School’s Center for Internet and Society, has proposed a more extreme version of the same scenario:

Imagine you are driving down a narrow mountain road between two big trucks. Suddenly, the brakes on the truck behind you fail, and it rapidly gains speed. If you stay in your lane, you will be crushed between the trucks. If you veer to the right, you will go off a cliff. If you veer to the left, you will strike a motorcyclist. What do you do? In short, who dies? (Citation)

Before long, it may be more appropriate to ask a different question: What would my car do? Right now, these decisions are split-second judgments, but before the end of the decade the decision process could change. Engineers at Google have designed a car capable of driving itself. The cars have “driven” 300,000 miles accident-free without any human intervention, and Google co-founder Sergey Brin has promised the cars will be available to ordinary people within five years. If our cars are controlled by artificial intelligence, decisions for scenarios like the one above will have to be made in advance, as pre-programmed moral choices.

There are two distinct frameworks an autonomous car could use to make decisions in potentially lethal situations. One is minimizing total injury without favoritism: if every car were programmed as a utilitarian, a significant number of lives might be saved over time. On the other hand, it would certainly be unsettling to place your life in the hands of an autonomous vehicle that may not act in your best interest in an emergency. The other option is having each vehicle prioritize the safety of its passengers above everything else. If market forces are left unchecked, manufacturers will likely program their vehicles this way to maximize sales; most human drivers follow a similar model and would want their autonomous vehicle to do the same.
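The contrast between these two frameworks can be sketched in a few lines of code. This is a toy illustration only, not real autonomous-vehicle software: the maneuver names are drawn from Smith's mountain-road scenario, but the harm scores and both policy functions are invented for the example.

```python
# Hypothetical severity-weighted harm scores for each available maneuver in
# Smith's mountain-road scenario. The numbers are invented for illustration.
ACTIONS = {
    "stay_in_lane": {"passengers": 6, "others": 0},  # crushed between the trucks
    "veer_right":   {"passengers": 8, "others": 0},  # over the cliff
    "veer_left":    {"passengers": 2, "others": 7},  # strikes the motorcyclist
}

def utilitarian_choice(actions):
    """Minimize total harm, with no favoritism toward the car's passengers."""
    return min(actions, key=lambda a: actions[a]["passengers"] + actions[a]["others"])

def passenger_first_choice(actions):
    """Minimize harm to passengers first; break ties by harm to others."""
    return min(actions, key=lambda a: (actions[a]["passengers"], actions[a]["others"]))

print(utilitarian_choice(ACTIONS))      # stay_in_lane (total harm 6 vs 8 vs 9)
print(passenger_first_choice(ACTIONS))  # veer_left (passenger harm 2 vs 6 vs 8)
```

With these particular numbers the two policies diverge: the utilitarian car sacrifices its own passengers to spare the motorcyclist, while the passenger-first car does the opposite, which is exactly the tension the scenario is meant to expose.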

The best solution is likely a blend of these two extremes. The main difficulty is ensuring that every manufacturer's software maintains similar standards. If cars could communicate with one another and coordinate their movements despite running different software, it would go a long way toward further reducing accidents. A cooperative agreement could also ensure that each separate piece of software controlling our cars factors public safety into its decision making to the same degree. This kind of regulation would have to come from the government and would likely be difficult to design and pass, but it is certainly worth debating.
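One simple way to picture such a mandated common standard is a single industry-wide weight applied to harm outside the vehicle. The sketch below is purely illustrative: the weight `w`, the harm scores, and the `blended_choice` function are hypothetical, not anything that has actually been proposed or implemented.

```python
# Hypothetical blended policy: a regulated weight w on harm to others.
# w = 1.0 reduces to a pure utilitarian policy (all harm counts equally);
# w = 0.0 roughly reduces to a passenger-first policy. Scores are invented.
ACTIONS = {
    "stay_in_lane": {"passengers": 6, "others": 0},
    "veer_right":   {"passengers": 8, "others": 0},
    "veer_left":    {"passengers": 2, "others": 7},
}

def blended_choice(actions, w):
    """Pick the maneuver minimizing passenger harm plus w times harm to others."""
    return min(actions, key=lambda a: actions[a]["passengers"] + w * actions[a]["others"])

print(blended_choice(ACTIONS, 1.0))  # stay_in_lane: utilitarian extreme
print(blended_choice(ACTIONS, 0.0))  # veer_left: passenger-first extreme
print(blended_choice(ACTIONS, 0.5))  # veer_left: a middle-ground standard
```

The point of the sketch is that a regulator would only need to agree on one number, not on every manufacturer's internal logic, for all cars to weigh public safety to the same degree.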

Once every car on the street is autonomous, the AI and sensors that control our cars will likely navigate the roads with ease, and accidents may be all but eliminated. As long as humans remain on the road, however, it will be a fundamentally unpredictable environment. The first generation of autonomous cars will inevitably face some tough decisions. It is important that we think carefully about how we want our cars to behave and the effect those choices will have on individual and public safety.