Imagine there is a runaway trolley and you are standing at the lever that controls the track. Ahead, the track diverges: one branch leads to a group of five pedestrians, the other to a single pedestrian. You can pull the lever and save the five pedestrians while allowing the one pedestrian to die, or you can do nothing and let the five die while the one survives. This is the trolley problem, and there is no right or wrong answer. Some would suggest pulling the lever, since saving five people is better than saving one. Others may disagree: the lone person may be an important political figure whose work matters more than the lives of five people.
We can transpose this problem onto automated cars. If you buy an automated car for your daily commute, would you feel comfortable knowing the car may opt to let you die because an algorithm weighed the occupants of other vehicles as more important? How do we decide what is fair in these situations? The answer isn't easy, nor will everyone agree on it, but it is an issue that will take center stage as more and more autonomous vehicles hit the road. The sooner we start debating this problem, the faster we can ease society's concerns.
Scenario One: We Save As Many Lives as Possible
One idea is to save as many people as possible in a crash. This isn't necessarily easy to do, but we could program self-driving cars to calculate which maneuver is most likely to save the most people in an imminent crash. Imagine a self-driving car losing control of its lane and swerving into oncoming traffic. We'll assume the swerve was beyond the program's control: bad road conditions caused an anomaly in the car's steering. We could program the vehicle to recognize that if it swerves back into its lane, it has a high probability of swiping the cars in the opposite lane, potentially killing their occupants. If it instead continues into the divider, its own passenger(s) have a high probability of dying, but more lives are saved among the oncoming traffic.
This scenario works well until the passenger becomes someone society considers valuable or precious. What if the swerving car carries a pregnant mother and her two-year-old child? Is the husband expected to accept that the car allowed his wife and two children to die because the oncoming traffic had more passengers at risk? This isn't to say the logic doesn't make sense; it just becomes less palatable. The problem with this solution is that it refuses to create a contract to protect the passengers on board and instead tries to appease society as a whole. That becomes less effective when the victims of a crash are treasured members of society or are considered morally more valuable than the other passengers.
Scenario Two: It Is Every Person for Themselves
Another idea is to make it the vehicle's objective to protect its own passengers. In the same swerving-car scenario, the car would swerve back into its lane, swiping the car in the oncoming lane and killing its occupants, but keeping its own passengers alive. We can also assume that the car being swiped did its best to protect itself and tried to get out of the swerving car's way. What makes this scenario easier to swallow is that both cars in the crash did their best to save their own passengers.
This scenario still has its flaws, however. If the car swerving into the other lane carries a 90-year-old passenger and our pregnant mother and child are the ones being swiped, is it fair that the 90-year-old walks away with his life? One could argue that he has lived his time and that the pregnant mother and child deserve to live. Still, this approach may be an easier pill for the victims' families to swallow: at least they know the product they bought did its best to protect their loved ones, even if that doesn't make the reality itself easier to accept.
Scenario Three: Every Car Has a Weight
A more complex option is to give each car a score, or weight. This weight would be compiled from many attributes of the passengers, such as age, health, life expectancy, current illnesses, and criminal record. Each passenger's attributes produce an individual score, and the car's software combines those numbers into an overall score for the vehicle. In the event of an imminent accident, the network of automated vehicles would compare their scores and determine whom to try to save and whom to let die.
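To make the mechanism concrete, here is a minimal sketch of how such scoring might be compiled. The attributes, weights, and penalty below are illustrative assumptions, not a proposed real-world policy:

```python
from dataclasses import dataclass

@dataclass
class Passenger:
    age: int
    life_expectancy: int  # estimated total lifespan in years (assumed attribute)
    criminal_record: bool

def passenger_score(p: Passenger) -> float:
    # Crude "years remaining" proxy; clamped so it never goes negative.
    score = max(p.life_expectancy - p.age, 0)
    if p.criminal_record:
        score *= 0.8  # arbitrary penalty, purely for illustration
    return score

def vehicle_score(passengers: list[Passenger]) -> float:
    # A vehicle's weight is the sum of its passengers' individual scores.
    return sum(passenger_score(p) for p in passengers)

car_a = [Passenger(age=30, life_expectancy=80, criminal_record=False)]
car_b = [Passenger(age=70, life_expectancy=82, criminal_record=False),
         Passenger(age=68, life_expectancy=80, criminal_record=True)]

# In an imminent crash, the network compares scores to decide which car to protect.
protect = "car_a" if vehicle_score(car_a) > vehicle_score(car_b) else "car_b"
```

Even this toy version shows the slipperiness: every weight in it encodes a value judgment someone had to make.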
This scenario can lead down a slippery slope. While it may make sense to keep important political figures alive, it seems unfair to let an otherwise ordinary person die simply because they are a cigarette smoker. We can argue that everyone has the basic human right to try to survive, so buying a product that decides whether you are worth saving based on circumstances that may be out of your control is likely to be an unpopular solution.
Scenario Four: Every Person for Themselves, but Smarter
My argument is that when you buy a product, especially a car, you expect it to protect you in an accident, whether caused by your vehicle or someone else's. Every person, regardless of their perceived worth, is entitled to try to survive, so it seems unfair to allow software to play God and save other people instead of the ones it was paid to protect. My programming would use the following logic:
- Is the car empty?
- If the car is empty, have it sacrifice itself to save all others in traffic.
- If the car is not empty, are there people in the car, or only an animal (non-human)?
- If it carries only an animal (non-human), have the car save all others in traffic.
- Otherwise, try to protect the passengers in your own vehicle.
In this case we ask ourselves whether we are dealing with people or just the car itself. It is normal for someone to send their empty car to the mechanic or the car wash, and in that case it makes sense to allow the empty car to drive itself out of other vehicles' way in order to save their passengers. This prevents mishaps where the empty car survives an accident but another car's passengers do not. This logic also sacrifices non-human animals: if the car is empty except for a dog, then it unfortunately makes more sense to save human lives instead of a pet's life. We could also prioritize animals over empty cars. If a car carrying a cat is set to collide with an empty car, we save the cat over the empty car. Again, this isn't a perfect solution, but it helps prevent scenarios where no one had to die yet bad programming caused unnecessary death.
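The decision logic above, including the refinement that a car carrying only animals outranks an empty car but still yields to humans, could be sketched as follows. The occupant categories and function name are assumptions made for illustration:

```python
def crash_priority(occupants: list[str]) -> str:
    """Return which party the car should try to protect in an imminent crash.

    Occupants are labeled "human" or "animal" (an illustrative simplification).
    """
    if not occupants:
        # An empty car sacrifices itself to save everyone else on the road.
        return "others"
    if all(o == "animal" for o in occupants):
        # A car carrying only animals still yields to human lives,
        # though it would outrank an empty car in a collision between the two.
        return "others"
    # Any human on board: the car's duty is to its own passengers.
    return "own_passengers"
```

The point of the sketch is how little the car needs to know: it never weighs who the humans are, only whether there are any.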
Life Isn't Fair, Nor Is It Simple
Sometimes reality is blunt: it may make sense to save as many people as possible, but we cannot forget about the individual. It's hard to accept that your loved one died because their score was lower than the other car's, or because the other car carried more important people. It is easier to accept that your loved one died because their car simply couldn't save them in the scenario it was put in. That doesn't make it fair or less painful, but it makes it easier to accept over time.