Our grandparents dreamed of a future filled with jet packs and flying cars. That world has yet to arrive, but we are about to get the next best thing: self-driving cars. The technology is real and already on the roads. While the cars aren't yet available for consumers to purchase, many companies are developing them, and Uber is even testing them in Pittsburgh. These cars come with exciting possibilities and hard questions. They could save energy by reducing the number of cars on the road, and they could do away with personal car insurance. On the other hand, they could also increase how much people drive; driving becomes less of a chore when you are always a passenger. And liability might not fall to the driver, but someone would still have to hold it. The questions are new, and they will test the ability of our lawmakers to keep pace with advances in technology.
One area where the law may have a large impact is the ethics of the AIs that drive these cars. In general, autonomous vehicles promise to be much safer than conventional cars because they avoid human error. They are, however, just as vulnerable to mechanical failure as any other car. If, say, the brakes failed, the computer driving the car, rather than a human driver, would have to choose what to do. But the computer isn't really choosing. As with anything an AI does, its rules will have been written in advance by a human.
Those humans work for companies, and like companies in every industry, the makers of self-driving cars pay close attention to market research. MIT's Moral Machine presents scenarios in which a self-driving car with failed brakes must decide who dies: it poses different combinations of pedestrians and passengers, and users taking the survey are asked to direct the car in the way they find most ethical. In general, people want cars to follow a utilitarian brand of ethics: the car should act so that the fewest people die. But that means the car will sometimes choose to kill its own driver. As I said, these decisions will be made by people working for companies, and companies want people to buy their cars. While people may want hypothetical cars to act on utilitarian principles, they don't want to buy a car that might kill them. Left to their own devices, car companies will build the cars people want to buy: cars that protect the driver even at the cost of other people's lives.
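To make that difference concrete, here is a minimal, purely illustrative sketch in Python. No manufacturer's actual decision logic is public, so the scenario, the casualty numbers, and the function names are all hypothetical; the point is only that the gap between the two ethical policies comes down to a few lines someone chooses to write.

```python
# Purely illustrative: contrasts two rules a programmer could write
# for an unavoidable crash. Nothing here reflects any real car's code.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted casualties (hypothetical)."""
    maneuver: str
    pedestrian_deaths: int
    occupant_deaths: int

def utilitarian_choice(outcomes):
    # Minimize total deaths, even if that sacrifices the car's own occupants.
    return min(outcomes, key=lambda o: o.pedestrian_deaths + o.occupant_deaths)

def occupant_first_choice(outcomes):
    # Protect the people inside the car first; only then minimize other deaths.
    return min(outcomes, key=lambda o: (o.occupant_deaths, o.pedestrian_deaths))

if __name__ == "__main__":
    scenario = [
        Outcome("stay in lane", pedestrian_deaths=3, occupant_deaths=0),
        Outcome("swerve into barrier", pedestrian_deaths=0, occupant_deaths=1),
    ]
    print(utilitarian_choice(scenario).maneuver)     # swerve into barrier
    print(occupant_first_choice(scenario).maneuver)  # stay in lane
```

Given the same scenario, the utilitarian rule swerves and sacrifices the car's occupant, while the occupant-first rule stays in its lane and kills three pedestrians. The moral difference is one line of code.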
Who are these other people? They are probably not riding in self-driving cars themselves. They are pedestrians, or the drivers of older, less high-tech cars, and on average they will be poorer than the owners of self-driving cars. At least initially, self-driving cars will be very expensive, and even when prices fall within reach of the middle class, they will not be within reach of everyone. Even a car you have to drive yourself is out of reach for many Americans: in 2009, nine percent of American households did not own a car. People go without cars for many reasons, but many living in poverty rely on walking and public transportation instead. These are the people a self-driving car would kill. If we do not take the high road of utilitarian ethics, we will be programming computers to save their rich owners at the cost of the lives of the poor.
This can be stopped. Car companies will not have unlimited power if our government writes ethics into the regulation of self-driving cars, and it is our responsibility to hold our lawmakers accountable for doing so.
Allison Mollenkamp is a junior majoring in English and theatre. Her column runs biweekly.