The morals of self-driving cars

The latest developments in technology have led to the creation of machines that work by themselves and minimize human input. The most famous recent example is the self-driving car. But before getting there, let's go back in time a little. Since 1885, when Karl Benz built his first automobile, the design, the safety measures, the engines and all the other components that make a car work have steadily improved and diversified. Different models, versions and uses of cars are now readily available on the market. We went from Karl Benz and the first car in 1885 to Elon Musk today with his sleek, innovative Tesla. Self-driving cars are the next level: they represent the next step in the history of automobile technology because they do not need human input to function. Google, with its self-driving car project, and Tesla, with Autopilot, pioneered the testing and deployment of driverless technology, and they have lately been joined by Uber. Autonomous cars are expected to improve road safety by relying on an intricate system of sensors, ranging from GPS and odometry to radar and computer vision, and are thus supposed to limit the number of accidents that occur on the road every day. In fact, some estimates suggest that the use of self-driving cars could reduce road accidents by up to 90%. They do seem a very attractive purchase.

While the technological advances are within reach, these autonomous cars raise concerns from a legal as well as a moral standpoint. From the legal point of view, it is not immediately clear who is to blame in case of an accident, since no human is technically driving the car. Opinions clash: some experts believe that the car manufacturer should be held responsible in case of a technical failure. This view rests on the fact that the manufacturer is able to trace the fault in the car, so liability would oblige it to find and fix the problem and thereby improve the quality and safety of these vehicles. Others believe that the owner of the car should be held responsible, because the owner accepts the risks of operating the car and would thus be incentivized to understand the risks that come with the vehicle.

Besides the question of legal liability, special attention should be directed towards the moral aspects of these machines. The moral question arises in emergency situations, when the lives of the passengers or of others are at stake. What should the driverless car do? One of the most prominent research institutions in this field is the Massachusetts Institute of Technology (hereafter MIT). Its researchers have developed the Moral Machine, a platform that depicts different scenarios machine intelligence could face and collects human perspectives on what one should do in each situation. The scenarios are variants of the trolley problem: the car has to choose the lesser of two evils, for instance whether to sacrifice its two passengers or five pedestrians. Visitors decide what they would do in each situation, and their judgments are collected.
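As a rough illustration of how such a platform gathers judgments (a minimal sketch; the scenario encoding and names here are hypothetical, not MIT's actual code), a dilemma can be stored as a pair of options and the visitors' choices tallied:

```python
from collections import Counter

# Hypothetical encoding of one Moral Machine-style dilemma: each
# option describes who would be harmed if the car chose it.
scenario = {
    "id": "trolley-variant-1",
    "options": {
        "A": "swerve: the two passengers die",
        "B": "stay on course: the five pedestrians die",
    },
}

# Simulated respondents' choices; the real platform gathers millions
# of such judgments from visitors around the world.
responses = ["A", "A", "B", "A", "B", "A"]

votes = Counter(responses)
for option, count in votes.most_common():
    print(f"{option}: {count} votes  ({scenario['options'][option]})")
```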

Here two big issues arise: first, we need to decide on which principles the machine should base its actions, and second, how to implement that morality in software. With regard to the first issue, two main branches of moral philosophy stand in contrast: utilitarianism and deontology. Utilitarianism prescribes maximizing utility, which here translates into minimizing the number of people who do not survive the accident; the decision would therefore change according to the circumstances. Supporters of the deontological branch, on the contrary, believe that the machine should follow strict rules that apply in every situation, which would make its behavior more predictable and thus safer. Experts seem to agree that cars, like humans, would not stick to only one principle but would try to weigh several of them against each other in order to achieve a relatively moral outcome.
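To make the contrast concrete, here is a minimal Python sketch of the three decision rules just described. It is purely illustrative: the outcome encoding, the swerve penalty and the numbers are assumptions for the sake of the example, not part of any actual autonomous-driving system.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action the car can take and its predicted cost."""
    action: str
    expected_deaths: float  # predicted casualties for this action
    swerves: bool           # True if the car actively changes course

def utilitarian_choice(outcomes):
    # Utilitarianism: pick whichever action minimizes expected deaths,
    # regardless of how those deaths come about.
    return min(outcomes, key=lambda o: o.expected_deaths)

def deontological_choice(outcomes):
    # Deontology (one possible fixed rule): never actively swerve into
    # people; stay the course even if more lives are lost as a result.
    staying = [o for o in outcomes if not o.swerves]
    return staying[0] if staying else outcomes[0]

def weighted_choice(outcomes, swerve_penalty=2.0):
    # A blend of principles: deaths count fully, but actively
    # redirecting harm carries an extra moral cost (the penalty
    # weight is an arbitrary assumption).
    def cost(o):
        return o.expected_deaths + (swerve_penalty if o.swerves else 0.0)
    return min(outcomes, key=cost)

# Trolley-style scenario from the text: stay and hit five pedestrians,
# or swerve and sacrifice the two passengers.
scenario = [
    Outcome("stay on course", expected_deaths=5, swerves=False),
    Outcome("swerve into barrier", expected_deaths=2, swerves=True),
]

print(utilitarian_choice(scenario).action)    # swerve into barrier
print(deontological_choice(scenario).action)  # stay on course
print(weighted_choice(scenario).action)       # swerve (2 + 2 < 5)
```

The weighted rule mirrors the experts' position: casualties still dominate the decision, but actively redirecting harm carries an extra moral cost, and choosing that penalty is exactly the kind of value judgment the debate is about.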

All in all, beyond the technological refinements that automated cars still need to further improve and guarantee their safety, science has to face the challenge of resolving the moral dilemmas that can occur in emergencies, and of translating that resolution into working code that allows for a more humane way of driving a car.
