The Moral Machine

Credit: the_lightwriter/adobe

By Guy Nolch

How can we program autonomous vehicles to make life-or-death decisions when our own moral values vary according to factors such as age, gender, socioeconomic status and culture?

Few of us will ever face a split-second life-or-death decision, yet many such decisions are made on our roads every day. In that instant, how does a driver choose the least devastating outcome when swerving left, swerving right or not swerving at all will each end in tragedy?

Would you choose to spare the greatest number of lives? Or make a value judgement, saving a mother pushing a pram instead of an elderly couple? Would you save a businessman over a homeless man, a police officer over a drug dealer, an athlete over a slob, or simply a woman over a man?

In many instances we won’t have time to rationalise this split-second decision. Even if we did, each of us would bring a different moral matrix to bear on it. This won’t be the case when driverless vehicles inevitably take over our roads.

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” wrote an international team of researchers whose “Moral Machine” project was published in Nature (https://goo.gl/vh66h9). “We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday...

The full text of this article can be purchased from Informit.