Autonomous Driving and Ethical Challenges

Entrepreneur
Dec 30, 2020

Autonomous driving is becoming more complicated than ever. According to "The Moral Machine Experiment," published in Nature, different countries and cultures prioritize different factors when deciding whose lives an autonomous vehicle should save.

The strongest global preferences are:

  • Save the young before the old
  • Save humans before animals
  • Save more lives rather than fewer

Data from over 40 million decisions by respondents in 233 countries and territories reveals striking differences. In China and Japan, sparing the young is not a top priority, but sparing the law-abiding is. In Argentina and the Maldives, respondents want the AI to prioritize women over men, far more so than respondents in Sweden and Australia. Overall, the responses cluster into Western, Eastern, and Southern country groups.

Over 1.2 million people die in traffic accidents every year, and roughly 90% of crashes involve human error. Solving the problems of autonomous driving would therefore be a huge step toward saving lives.

But to achieve that, engineers need to understand human psychology like never before. Putting a self-driving car on the road requires knowledge of biology, not just computing. If somebody approaches the road, the vehicle needs to judge whether this is a 70-year-old, a 17-year-old, or a 7-year-old, and to anticipate the different behavior of a child, a teenager, and an adult. This kind of intrinsically biological information is something current machine learning systems have a hard time interpreting correctly.
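To make this concrete, here is a minimal, hypothetical sketch of the kind of pipeline the paragraph describes: a perception step that guesses a pedestrian's age group from simple cues, and a prediction step that maps that guess to different behavioral assumptions. The class names, thresholds, and numbers are all illustrative assumptions, not any manufacturer's actual system.

```python
# Hypothetical sketch only: estimate a pedestrian's age group from crude
# perception cues, then map it to assumed motion behaviour. All values
# below are made-up illustrations, not production logic.

from dataclasses import dataclass
from enum import Enum


class AgeGroup(Enum):
    CHILD = "child"        # e.g. ~7 years old
    TEENAGER = "teenager"  # e.g. ~17 years old
    ADULT = "adult"        # e.g. ~70 years old


@dataclass
class PedestrianTrack:
    estimated_height_m: float   # assumed output of the perception stack
    gait_irregularity: float    # 0.0 = steady gait, 1.0 = highly erratic


def classify_age_group(track: PedestrianTrack) -> AgeGroup:
    """Crude stand-in for the biological inference the article says
    current machine-learning systems struggle with."""
    if track.estimated_height_m < 1.3:
        return AgeGroup.CHILD
    if track.gait_irregularity > 0.5:
        return AgeGroup.TEENAGER
    return AgeGroup.ADULT


def assumed_behaviour(group: AgeGroup) -> dict:
    """Map age group to motion-prediction assumptions (illustrative numbers)."""
    return {
        AgeGroup.CHILD:    {"max_speed_mps": 3.0, "darting_risk": 0.8},
        AgeGroup.TEENAGER: {"max_speed_mps": 4.5, "darting_risk": 0.4},
        AgeGroup.ADULT:    {"max_speed_mps": 2.0, "darting_risk": 0.1},
    }[group]


if __name__ == "__main__":
    track = PedestrianTrack(estimated_height_m=1.1, gait_irregularity=0.7)
    group = classify_age_group(track)
    print(group.value, assumed_behaviour(group))
```

In a real system the classification step would be a learned model and the behavioral assumptions would feed a motion-prediction module; the toy version only shows how much of the pipeline hinges on inferring biological facts about a person.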

The world is at a point it has never been before. With the inevitable adoption of autonomous transportation around the world, computers will be responsible for making decisions about the lives of passengers and other road users, whether they are other passengers riding in autonomous vehicles, pedestrians crossing the street, or animals.

Policymakers must have a clear understanding of the issues that could arise from the full-scale deployment of autonomous driving technology, so that they can develop an encompassing policy framework to guide manufacturers, software companies, and other stakeholders in the autonomous driving value chain.

According to the Moral Machine project, respondents expressed three main preferences they want to see in autonomous vehicles' decision-making process; a toy sketch of how such preferences might be encoded follows the list below.

These include:

  • Preference for sparing human lives
  • Preference for sparing more lives
  • Preference for sparing young lives
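
As a purely illustrative sketch, these three preferences could be encoded as weights in a scoring function that ranks candidate outcomes. The weights, the Outcome fields, and the example scenario below are assumptions made up for this article; they are not a decision policy proposed by the Moral Machine paper or used by any manufacturer.

```python
# Toy illustration: encode the three surveyed preferences as weights and
# rank two hypothetical outcomes. Weights and numbers are assumptions.

from dataclasses import dataclass


@dataclass
class Outcome:
    humans_spared: int
    animals_spared: int
    young_spared: int  # humans spared who are young


def preference_score(o: Outcome) -> float:
    # Assumed weights reflecting the three reported preferences:
    # humans over animals, more lives over fewer, young over old.
    w_human, w_animal, w_young = 10.0, 1.0, 3.0
    return (w_human * o.humans_spared
            + w_animal * o.animals_spared
            + w_young * o.young_spared)


if __name__ == "__main__":
    swerve = Outcome(humans_spared=1, animals_spared=0, young_spared=1)
    stay = Outcome(humans_spared=2, animals_spared=0, young_spared=0)
    best = max([swerve, stay], key=preference_score)
    print("preferred outcome:", best)
```

Even this toy version makes the hard part visible: someone has to choose the weights, and the survey data suggests those choices would differ across cultures.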

The findings suggest that teaching machines to make intelligent and wise decisions in life-or-death situations is possible. Despite variations in culture, belief, economics, and social background, people worldwide have reached something close to a consensus on the ethical principles that should guide autonomous vehicles. That shows some promise for the future adoption of self-driving vehicles.

While there are advances in making intelligent machines better at ethical decision-making, it remains unclear how programmers and autonomous vehicle manufacturers will compensate for human dispositions such as fear, prejudice, disagreements, and internal conflicts in the decision-making process of these sophisticated machines.

One day, you might need to update your car's algorithms before entering another country, and those updates may encode choices and decisions that do not reflect your values while you sit in the back seat.
