Australasian Science: Australia's authority on science since 1938

Who to Kill? An Ethical Dilemma for Driverless Cars

By AusSMC

A study published in Science has found that people generally approve of autonomous cars that have been programmed to sacrifice their passengers if it will save others, yet these same people aren’t keen to ride in such “utilitarian” vehicles themselves.

Given that driverless cars are less than a decade away, we need to work out, as a society, how we program such systems. Unlike in the past, when a driver who survived an accident could be brought before the courts for driving irresponsibly, we will now have to program computers in advance with behaviours that determine how they react in such situations.
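The point that such behaviours must be fixed ahead of time can be illustrated with a deliberately simplified sketch. Everything here — the scenario fields, the `utilitarian_policy` function and its rule — is a hypothetical illustration of the idea, not how any real vehicle is actually programmed:

```python
from dataclasses import dataclass

@dataclass
class CrashScenario:
    """A hypothetical summary of an unavoidable-crash situation."""
    passengers_at_risk_if_swerve: int   # expected deaths if the car swerves
    pedestrians_at_risk_if_stay: int    # expected deaths if it stays on course

def utilitarian_policy(s: CrashScenario) -> str:
    """A crude 'minimise total deaths' rule, decided in advance.

    Unlike a human driver, who is judged after the fact, this choice
    is baked into the software before any accident ever occurs.
    """
    if s.passengers_at_risk_if_swerve < s.pedestrians_at_risk_if_stay:
        return "swerve"  # sacrifice the passengers to save more pedestrians
    return "stay"

# One passenger versus five pedestrians: the utilitarian rule swerves.
print(utilitarian_policy(CrashScenario(1, 5)))  # swerve
```

The sketch makes the ethical commitment explicit: whoever writes (or regulates) that `if` statement has decided the outcome for every future occupant of the car.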

I would, however, be cautious about the conclusions that can be drawn from studies like these, conducted on Amazon Mechanical Turk, where participants were not themselves in any danger and had plenty of time to decide what the system should do. This may not reflect how we would act, as drivers of cars, in such moments of crisis.

Nevertheless, it is good to see such work, for the uptake of driverless cars will have a profound benefit for society, greatly reducing road deaths and liberating many groups, such as the elderly and the disabled, who are currently denied personal mobility.

Prof Toby Walsh is the Research Leader of the Optimisation Research Group at NICTA.


The study sheds some light on the state of public sentiment on this ethical issue. It shows that aligning moral AI driving algorithms with human values is a major challenge – there is no easy answer!

What I found interesting in this research is that participants were reluctant to accept government regulation of utilitarian autonomous vehicles (AVs). In fact, the surveys showed that participants would be less likely to consider purchasing an AV with such regulation than without it. To me this raises an even bigger challenge:

  1. deciding whether governments should regulate such algorithms; and
  2. determining what tests and procedures should be put in place to ensure that the algorithms are compliant.

After all, these crash dilemmas are very rare events, not routine occurrences. Lacking a large set of examples, they are therefore relatively resistant to training or programming.

We also need to recognise that mobility and travel (whether by car, train, bus, aircraft etc.) are inherently risky and can never be made completely safe. We take calculated risks every time we travel.

With AVs, however, there seem to be hyped expectations that they should be perfectly safe. They won't be, and I believe the situation is not as complicated as it seems.

For example, plenty of ethical decisions are already being made in automotive engineering today. Inherent in airbags, for example, is the assumption that they are going to save a substantial number of lives, and only kill a few.

Some people have even gone further to suggest that, given the number of fatal traffic accidents involving human error today, it could be considered unethical to introduce self-driving technology too slowly.

The biggest ethical question then becomes: how quickly should we move towards full automation given that we have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill a few? Should they be allowed on our roads, even if they make such mistakes?
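The trade-off behind this question can be made concrete with back-of-the-envelope expected-value arithmetic. The rates and fleet size below are purely hypothetical placeholders for illustration, not figures from the study:

```python
# Hypothetical annual road deaths per 100,000 vehicles (illustrative only).
human_fatality_rate = 12.0   # assumed rate with human drivers
av_fatality_rate = 3.0       # assumed rate with imperfect self-driving cars
fleet_size = 1_000_000       # number of vehicles switched to automation

# Expected lives saved per year if the fleet switches, despite AV mistakes.
lives_saved = (human_fatality_rate - av_fatality_rate) * fleet_size / 100_000
print(lives_saved)  # 90.0
```

Under these made-up numbers the AVs still kill some people, yet the net effect is strongly positive. That asymmetry is the core of the argument that introducing the technology too slowly could itself be unethical.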

It is difficult to speculate what this would mean for Australia, because the results are based on surveys of US residents. I suspect that the public sentiment would be similar though.

We do need to engage with the public on this. I feel there is a leadership vacuum in this public policy space in Australia, and we need better engagement with the community to clarify the issues, concerns and expectations, and lead in informing and shaping the future policies in this space.

Hussein Dia is an Associate Professor in Transport Engineering at the Swinburne University of Technology.


We have a built-in instinct to fear the unknown and to distrust science, whatever the expert opinion and scientific studies say. This is one of the reasons why, when the Docklands Light Railway was introduced in 1987, fears about autonomous operation meant each train carried a safety operator. Driverless trains already operate in many cities and can be seen at most international airports, connecting us between terminals. We already have autonomous pizza delivery systems.

Autonomous vehicles will reach a tipping point where advances in science, the economic arguments and the technology converge with a shift in public consciousness that "it is going to happen". First we will see a series of small steps: watch for Uber's autonomous taxis in Pittsburgh, autonomous ships, or autonomous cargo planes.

We always fear the future, but without science and advancement we would still be in the cave and the wheel would not have been invented.

A/Prof Ian Yeoman is MTM Director of the School of Management at Victoria University of Wellington.

Original study published in Science.