How do we decide how to program autonomous vehicles?

Autonomous cars are almost here and, love them or loathe them, they're going to revolutionise the way we travel. Before this can happen, however, there are a few ethical creases that need ironing out.

In the event of a collision, an autonomous vehicle can make split-second decisions thanks to its 360-degree awareness and lightning-fast processing. Unlike humans, who are likely to panic, autonomous vehicles can make rational choices that improve the chances of survival for those involved.

The Trolley Problem

Let's imagine a situation in which an autonomous vehicle carrying four passengers is on course for a fatal collision, and the only way to avoid it is to swerve into the path of a pedestrian crossing the road. This is a modern reincarnation of the famous Trolley Problem. You might argue that diverting the car, saving the four passengers but killing the one pedestrian, would be the most ethical thing to do. In moral philosophy this is known as utilitarianism: in this situation, it is concerned with saving the greatest number of lives.

Visit Moralmachine.mit.edu to test your moral preferences against everyone else's

The pedestrian crossing the road, however, was not originally involved. Programming a car to intervene in such a situation could be considered murder, and would therefore be an immoral action in and of itself. This side of the argument is known as deontology, which assesses the moral value of an action according to moral rules and norms. In this instance, deontology argues that it is murder to kill somebody who would otherwise have survived.

This might look like an extreme situation that wouldn't arise often enough to be of any serious concern, but the same moral dilemma factors into every decision a car makes, usually in a far more nuanced form - passing a cyclist, for example. When we drive, we have to decide how much space to give the cyclist versus how much to leave for oncoming vehicles. At its core, this is the Trolley Problem all over again, played out in small margins on every journey.

Every trip, an autonomous vehicle has to make thousands of decisions that all have moral implications.

Autonomous vehicles may well know how to drive, but it is very difficult to teach them to think. Morality is subjective, and computers are not good at handling that kind of ambiguity: computer code ultimately deals in clear-cut rules. The car can only act on a predetermined set of parameters, which means it is incapable of imagining and weighing up the various moral dilemmas produced by opting to kill a pedestrian to save its occupants. We therefore have to do the thinking for the car, ahead of time, at the programming stage.
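To make that concrete, here is a minimal, purely hypothetical sketch of what "doing the thinking ahead of time" can look like: the ethical trade-off is reduced to fixed weights chosen by a programmer, and the car simply evaluates them at run time. The function, the weights and the numbers are illustrative assumptions, not anything a real manufacturer has published.

```python
# Hypothetical illustration: the "ethics" are baked in as fixed parameters
# chosen at the programming stage, not reasoned about by the car itself.

# Weights decided in advance by whoever programs the car (the contentious part).
OCCUPANT_WEIGHT = 1.0     # value placed on each occupant's survival
PEDESTRIAN_WEIGHT = 1.0   # value placed on each pedestrian's survival

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest expected weighted harm."""
    def expected_harm(o):
        return (OCCUPANT_WEIGHT * o["occupants"] * o["occupant_risk"]
                + PEDESTRIAN_WEIGHT * o["pedestrians"] * o["pedestrian_risk"])
    return min(options, key=expected_harm)

# The trolley-style dilemma from above, expressed as estimated fatality risks.
options = [
    {"name": "stay_on_course", "occupants": 4, "occupant_risk": 0.9,
     "pedestrians": 1, "pedestrian_risk": 0.0},
    {"name": "swerve", "occupants": 4, "occupant_risk": 0.1,
     "pedestrians": 1, "pedestrian_risk": 0.9},
]
print(choose_manoeuvre(options)["name"])  # 'swerve' with these equal weights
```

With the equal weights shown here the sketch behaves like a utilitarian and chooses to swerve; a deontological rule, by contrast, would simply forbid any manoeuvre that kills an otherwise uninvolved person, regardless of the arithmetic - which is exactly the disagreement described above.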

This opens yet another can of worms, because we then have to decide who sets those parameters. In other words, who gets to play God and decide whose life should be prioritised in the event of a collision? If individuals were given control, everybody would simply prioritise their own safety, so that gets us nowhere. If we let manufacturers decide, it opens up the possibility of paying a premium for enhanced survivability, which would disadvantage people on lower incomes.

What is the solution?

It is possible to remove many of the ethical dilemmas in autonomous driving by simply adhering strictly to the rules of the road. If pedestrians know that an autonomous vehicle is not going to swerve to save them if they step out into the road, then the car's brain is never faced with making the right moral choice. We must therefore think of the car as being on rails. This would, however, most likely mean that pedestrians would have to cross at designated places rather than 'jaywalking'.
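As a rough sketch of that "on rails" idea, assuming the only permitted emergency response is braking within the car's own lane (the names and numbers below are invented for illustration, not drawn from any real self-driving system):

```python
# Hypothetical sketch of the "car on rails" approach: the vehicle never swerves
# off its lawful path; its only emergency response is to brake in its own lane.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance ahead along the car's own lane
    in_lane: bool       # is the obstacle actually on the car's path?

def plan_action(speed_mps: float, obstacles: list[Obstacle],
                max_braking_mps2: float = 8.0) -> str:
    """Return 'continue' or 'emergency_brake'; swerving is never an option."""
    stopping_distance = speed_mps ** 2 / (2 * max_braking_mps2)
    for obs in obstacles:
        # Brake only for obstacles on the car's own predictable path,
        # with a 50% safety margin on the calculated stopping distance.
        if obs.in_lane and obs.distance_m <= stopping_distance * 1.5:
            return "emergency_brake"
    return "continue"

# A pedestrian stepping out 12 m ahead of a car doing 13 m/s (~47 km/h):
print(plan_action(13.0, [Obstacle(distance_m=12.0, in_lane=True)]))  # emergency_brake
```

Because the car's behaviour is entirely predictable, the moral weighing largely disappears: the vehicle brakes but never swerves, and the responsibility shifts to crossing in the right place, as described above.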

There is also increasing talk about autonomous vehicles being able to communicate with pedestrians, for example by signalling that it is safe to cross in front of the vehicle or that it is giving way. This reintroduces a human dimension to autonomous vehicle design and should hopefully improve safety for everyone.

A possible way in which autonomous vehicles could communicate with pedestrians.

What does the future look like for autonomous vehicles?

I think it is wrong to expect autonomous vehicles to be perfect. We humans are far from perfect. In fact, human error is a contributing factor in 96% of road accidents, so surely removing the driver from the equation will lead to better road safety? As long as autonomous vehicles can exceed the performance of a human driver, there is a moral case for their implementation. Arguably this is already the case, because computers are not prone to distraction, fatigue or intoxication.

The moral questions raised by autonomous vehicles may be far from answered, but as testing continues hopefully we will build a more complete picture. No transport revolution has been without its faults, so we shouldn't expect to know all the answers just yet.
