Self-driving cars are just around the corner, but developing the rules that will govern them has proved a daunting task. What should a car do: adopt a human point of view, which often favors its own interests, or act for the common good? It turns out that letting people program autonomous vehicles themselves narrows the gap between self-interest and the greater good.
Ethics of self-driving cars
Work in this area has focused on the most urgent moral dilemmas: how should an autonomous car behave when lives are at stake? A 2016 study found that people broadly support utilitarian programming, which saves the greatest number of lives even at the expense of the driver's. Yet the same people openly admitted they would be less willing to buy a car that could sacrifice them to save others.
A few months ago the same group of scientists published a global survey of attitudes toward autonomous cars, which showed that the moral principles people think should be programmed into them vary considerably from country to country.
More recently, a paper appeared in PNAS devoted to more everyday social dilemmas: situations that involve no mortal danger but still pit individual interests against collective ones. Drivers navigate such situations every day; slowing down to let someone in adds a few seconds to your own trip, but if everyone does it, traffic flows more smoothly.
The authors acknowledge that decisions about how to program autonomous cars to handle these situations will not rest with the owner alone; manufacturers and regulators are likely to play a large role. Still, they wanted to find out how making these decisions in advance, rather than on the fly, would affect people's choices.
A significant body of research already shows that involving people in decision-making ahead of time leads to fairer and less selfish choices. The new study shows that this also applies to programming autonomous cars.
The researchers developed a computer experiment based on the classic prisoner's dilemma, in which players must choose between cooperating and defecting. Groups of four participants recruited on Amazon Mechanical Turk each controlled a car and, every time the car stopped, had to choose whether to leave the air conditioner on or switch it off.
Switching off the air conditioner was framed as the collective good, since it cuts fuel consumption and environmental harm. Financial rewards varied depending on how many players cooperated or defected in each round, and they were structured so that defecting was always individually more profitable, yet if everyone defected, the collective outcome was worse than under universal cooperation.
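This kind of payoff structure can be sketched as a standard public-goods game. The numbers below are purely hypothetical (the article does not give the study's actual reward values); they are chosen only so that running the air conditioner (defecting) always pays more individually, while everyone cooperating beats everyone defecting.

```python
def payoffs(choices, endowment=10, multiplier=1.6):
    """Hypothetical public-goods payoffs for one stop.

    choices: list of booleans, one per player;
             True = cooperate (switch the AC off).
    Each cooperator contributes an endowment to a common pool;
    the pool is multiplied and split equally among all players,
    while defectors also keep their endowment.
    """
    n = len(choices)
    pool = multiplier * endowment * sum(choices)
    share = pool / n
    return [share + (0 if cooperated else endowment) for cooperated in choices]

# Everyone cooperates: each player gets 16.
print(payoffs([True, True, True, True]))    # [16.0, 16.0, 16.0, 16.0]
# One player defects: the defector gets 22, the cooperators only 12,
# so switching to defection is always individually profitable...
print(payoffs([False, True, True, True]))   # [22.0, 12.0, 12.0, 12.0]
# ...but if everyone defects, each gets only 10, worse than mutual cooperation.
print(payoffs([False, False, False, False]))  # [10.0, 10.0, 10.0, 10.0]
```

With `multiplier` between 1 and the number of players, defection dominates at every stop even though it is collectively worse, which is exactly the tension the experiment was designed to create.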
In each game the car stopped 10 times, but while half the participants made a decision at every stop, as if driving themselves, the other half made all 10 decisions up front, as if programming their car in advance.
Across several experiments, the scientists found that people who programmed their cars in advance were consistently more cooperative than those who decided on the fly.
To figure out why, the scientists ran tests in which the game's interface emphasized different aspects of the problem (focusing on oneself versus the team, or on monetary rewards versus the environment) and analyzed the participants' self-reports.
The results showed that programming the vehicle in advance made participants less focused on short-term financial rewards. Remarkably, in another experiment where participants could reprogram their cars after each round, they still cooperated more than those who decided directly. This matters, the scientists say, because manufacturers will likely let buyers adjust their car's settings as they gain driving experience.
Research of this kind may seem quite abstract, and the specific rewards and incentives used in the experiment are far removed from actual driving. But the fundamental conclusion is that removing people from immediate decision-making, something a self-driving car will certainly do, makes them more cooperative. That matters as we come to rely more and more on machines to decide for us.
Almost everyone agrees that self-driving cars will, on average, be safer, greener, and more efficient. But recent reports that self-driving cars may cruise around the city at low speed rather than park highlight potential pitfalls ahead.
And what would you decide: sacrifice your own interests, or serve the needs of society? Tell us in our chat on Telegram.