
The Moral Machine

Credit: the_lightwriter/adobe


By Guy Nolch

How can we program autonomous vehicles to make life-or-death decisions when our own moral values vary according to factors such as age, gender, socioeconomic status and culture?

Few of us will ever face a split-second life-or-death decision, yet many such decisions are made on our roads every day. In that instant, how does a driver choose the least devastating outcome when swerving left, swerving right or staying on course will each end in tragedy?

Would you choose to spare the greatest number of lives? Or make a value judgement, saving a mother pushing a pram instead of an elderly couple? Would you save a businessman over a homeless man, a police officer over a drug dealer, an athlete over a slob, or simply a woman over a man?

In many instances we won’t have time to rationalise this split-second decision. Even if we did, each of us would have a different moral matrix prejudicing our decision. This won’t be the case when driverless vehicles inevitably take over our roads.

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” wrote an international team of researchers whose “Moral Machine” project was published in Nature (https://goo.gl/vh66h9). “We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

Driverless vehicles will rely on an array of sensors to perceive their surroundings, respond to changing traffic conditions, and react to direct threats to the vehicle as well as the threat the vehicle poses to others. While artificial intelligence will quickly determine a course of action, and transmit that information to nearby autonomous vehicles so they can respond in turn, the algorithms that compute the vehicle’s course of action will be programmed by humans.
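
To make the point concrete, the pipeline described above (perceive, decide, broadcast) can be sketched in a few lines of code. The sketch below is purely illustrative: the class names, harm scores and weighting scheme are invented for this example rather than drawn from any manufacturer’s actual system, but it shows where the human-made moral judgement lives – in the weights that decide whose harm counts for how much.

```python
# A minimal, hypothetical sketch of a perceive -> decide -> broadcast loop.
# All names, weights and harm scores here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str                # "stay", "swerve_left" or "swerve_right"
    harm_to_occupants: float   # 0.0 (no harm) to 1.0 (certain fatality)
    harm_to_others: float

def choose_action(outcomes, occupant_weight=1.0, others_weight=1.0):
    """Pick the action with the lowest weighted harm.

    The two weights are the human-programmed moral judgement: whether the
    car values its occupants and other road users equally is set here.
    """
    return min(outcomes,
               key=lambda o: occupant_weight * o.harm_to_occupants
                             + others_weight * o.harm_to_others)

def broadcast(decision):
    # Stand-in for vehicle-to-vehicle messaging so nearby cars can react.
    print(f"V2V: intending to {decision.action}")

# Example: three possible manoeuvres scored by the perception system.
options = [Outcome("stay", 0.9, 0.1),
           Outcome("swerve_left", 0.2, 0.8),
           Outcome("swerve_right", 0.4, 0.4)]
broadcast(choose_action(options))
```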

This means that vehicle manufacturers and their AI subcontractors need to make moral judgements that will determine who will live and who will die when a fatality is inevitable. How can they possibly make such decisions on behalf of all of us when so many of us would be paralysed by indecision if given the time to ponder such a lose–lose scenario? How many of us will have the same binary values as the computer scientists? Indeed, how much will our moral framework differ depending on factors such as age, gender, nationality or socioeconomic background?

“Decisions about the ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists,” the Moral Machine researchers wrote. “Even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality.”

While driverless cars will inevitably make tragic choices, humans in the same situation would be likely to make choices that are no better, and sometimes worse, according to A/Prof James Harland of RMIT University. “It is certainly true that ethical questions such as this will be increasingly important as technology advances,” he says. “It should also be noted that driverless cars may possibly have more ‘herd intelligence’ available than a human driver, such as statistics showing that a particular area has a higher incidence of collisions, and hence slowing down or taking other precautionary action well before the impossible choice arises.”

To get to that stage, road infrastructure will need to become tech-savvy too. “Future roads may not be the same roads we are using today,” says Prof Hossein Sarrafzadeh of Unitec Institute of Technology in Auckland. “Even if we use similar roads they will be heavily sensored, intelligent roads.” Pedestrian safety may not be such an issue because “there may be no humans walking across the roads that autonomous vehicles travel in”.

The Moral Majority

The Moral Machine study, led by MIT researchers, attempted to gauge global public morality by creating an online simulation that captured 40 million decisions made by millions of people in 233 countries and territories. “In the main interface of the Moral Machine, users are shown unavoidable accident scenarios with two possible outcomes, depending on whether the autonomous vehicle swerves or stays on course. They then click on the outcome that they find preferable,” the researchers explained.

The Moral Machine explored nine choices: “sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status).” Additional characters were included in some scenarios (for example, criminals, pregnant women or doctors) who were not linked to any of these nine factors.
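
To see how such scenarios might translate into data, imagine a simple record for each dilemma: who dies if the car stays on course, who dies if it swerves, and which outcome the respondent clicked. The field names below are assumptions made for illustration, not the project’s actual data schema.

```python
# Purely illustrative representation of one Moral Machine-style dilemma;
# the field names are assumptions, not the study's real schema.
scenario = {
    "factor": "young_vs_elderly",                      # which of the nine choices is tested
    "dies_if_stay": ["elderly man", "elderly woman"],  # outcome if the car stays on course
    "dies_if_swerve": ["girl", "boy"],                 # outcome if the car swerves
}

def record_response(scenario, clicked):
    """Store the respondent's preferred outcome ('stay' or 'swerve') and who it spares."""
    spared = scenario["dies_if_swerve"] if clicked == "stay" else scenario["dies_if_stay"]
    return {"factor": scenario["factor"], "choice": clicked, "spared": spared}

# A respondent who clicks 'swerve' prefers the outcome in which the children
# die, so the record shows the elderly couple as the ones spared.
print(record_response(scenario, clicked="swerve"))
```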

“Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences for egalitarianism,” the study reported.

A separate survey collected demographic data such as participants’ geolocation, gender, age, income and education (but not race), as well as religious and political attitudes, and found that none of these had a “sizable impact”. Even “the most notable effects”, which were “driven by gender and religiosity”, were slight: males, for instance, were just 0.06% less likely to spare females. Nevertheless, participants in all countries were more likely to favour females, particularly in nations where women’s health and survival prospects were greater and hence “males are seen as more expendable”.

Most notable was a clustering of responses along geographical and cultural/historical ties. The Western cluster “contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups”; the Eastern cluster contains Far Eastern Confucianist nations and Islamic countries; and the Southern cluster includes the Latin American countries of Central and South America as well as countries with French influences. Australia and New Zealand’s moral compass was closely aligned with the UK’s in the Western cluster.

The study found that “clusters largely differ in the weight they give to some preferences. For example, the preference to spare younger characters rather than older characters is much less pronounced for countries in the Eastern cluster, and much higher for countries in the Southern cluster. The same is true for the preference for sparing higher status characters. Similarly, countries in the Southern cluster exhibit a much weaker preference for sparing humans over pets, compared to the other two clusters. Only the (weak) preference for sparing pedestrians over passengers and the (moderate) preference for sparing the lawful over the unlawful appear to be shared to the same extent in all clusters.”
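
One way to build intuition for what “weight” means here is to tally, cluster by cluster, the share of young-versus-elderly dilemmas in which respondents spared the younger characters. The sketch below does that with made-up numbers; the study itself fits a more sophisticated statistical model, so this is an illustration of the idea only.

```python
# Illustration only: per-cluster preference strength as a simple share of
# responses. The response data below is invented for the example.
from collections import defaultdict

# (cluster, spared_young) pairs from dilemmas pitting young against elderly.
responses = [("Western", True), ("Western", True), ("Western", False),
             ("Eastern", True), ("Eastern", False), ("Eastern", False),
             ("Southern", True), ("Southern", True), ("Southern", True)]

totals = defaultdict(lambda: [0, 0])   # cluster -> [times young spared, total dilemmas]
for cluster, spared_young in responses:
    totals[cluster][1] += 1
    if spared_young:
        totals[cluster][0] += 1

for cluster, (spared, total) in totals.items():
    print(f"{cluster}: spared the young in {spared}/{total} dilemmas ({spared/total:.0%})")
```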

The researchers believe that these differences can be attributed to whether participants were from individualistic cultures, which emphasise the distinctive value of each individual and have a stronger preference for sparing a greater number of characters, or from collectivistic cultures, which emphasise the respect that is due to older members of the community and therefore have a weaker preference for sparing younger characters. “This split between individualistic and collectivistic cultures may prove an important obstacle for universal machine ethics,” the researchers say.

Another insight was that “participants from countries that are poorer and suffer from weaker [legal] institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation”. Furthermore, participants “from countries with less economic equality between the rich and poor also treat the rich and poor less equally,” possibly because of “regular encounters with inequality seeping into people’s moral preferences”.

The Price of Morality

While the Moral Machine study found distinct regional clusters, Prof Toby Walsh of CSIRO’s Data61 warned: “The values we give machines should not be some blurred average of a particular country or countries. In fact, we should hold machines to higher ethical standards than humans for many reasons: because we can, because this is the only way humans will trust them, because they have none of our human weaknesses, and because they will sense the world more precisely and respond more quickly than humans possibly can.”

A/Prof Iain MacGill of the University of NSW warns that “even with sufficient societal consensus on what we would like these vehicles to do in the case of unavoidable accidents, we still face the challenge of shaping the rules which these vehicles must follow (for example, trading off speed and hence user convenience against safety), as well as coding ‘ethics’ to determine how they choose in matters of life and death. And then persuading people to buy vehicles that explicitly put the safety of other road users at the same or perhaps even higher priority than themselves – something that human drivers don’t have to do.

“It doesn’t help that we have companies racing to bring these vehicles to market with what seems to be insufficient regard to the societal risks invariably involved with new technology deployment. And can we trust the companies driving this, some with significant questions about their own ‘winner takes all’ business ethics, to appropriately program socially agreed ethics into their products?”

Prof Mary-Anne Williams, who is Director of Disruptive Innovation at the University of Technology Sydney, questions whether automotive manufacturers can be trusted to balance the moral responsibilities of their AI-enabled vehicles while protecting themselves from the commercial consequences they will face when things go wrong. “In order to minimise liability, car companies may design cars that slow down in wealthy neighbourhoods, or that kill humans rather than cause more expensive serious injuries,” she says.

Williams can see a future where “cars that always sacrifice the passengers might sell for 10% of cars that preserve them. Wealthy people may be happy to subsidise the technology to obtain guarantees of protection. One can imagine a new insurance industry built on the need to service people who can pay for personal security on the roads and as pedestrians – a subscription service that prioritises life according to the magnitude of premiums.”

Williams warns that car-makers are likely to pull down the garage rollerdoor to protect the decisions made by an AI-enabled vehicle if a crash investigation looks like ending in court. “Since AI algorithms today cannot provide sufficient details to explain their behaviour, it would be difficult to prove cars are taking actions to kill people to reduce legal expenses,” she says. “Who will have access to the data in an autonomous vehicle’s black box? Will loved ones have the right to know all the autonomous car decisions?”

A/Prof Jay Katupitiya of the University of NSW draws an uncomfortable parallel. “What would we think if, in a court proceeding, a driver testified that ‘I steered left because I could save a young person’s life and I knew it would kill the frail old person, and it unfortunately did, that was the best I could do’? In my opinion, programming these intentions is more immoral than not.”

It’s also fraught with the potential for error. Williams asks: “Will autonomous cars negotiate the outcome of multi-vehicle accidents? How will they resolve their inconsistent human life-preserving strategies during an accident? Without coordination, many more people may die unnecessarily.”

Legal Complications

Walsh says that the Moral Machine study’s conclusions should be treated with caution. “How people say they will behave is not necessarily how they will actually do so in the heat of a moment. I completed their survey and deliberately tried to see what happened if I killed as many people as possible. As far as I know, they didn’t filter out my killer results.”

The Moral Machine researchers admit that, as an online-only survey that would have attracted citizen scientists with a disproportionate interest in AI, “our sample is self-selected, and not guaranteed to exactly match the socio-demographics of each country… But the fact that our samples are not guaranteed to be representative means that policymakers should not embrace our data as the final word on societal preferences – even if our sample is arguably close to the internet-connected, tech-savvy population that is interested in driverless car technology, and more likely to participate in early adoption.”

The Moral Machine group concludes that “manufacturers and policymakers should be, if not responsive, at least cognizant of moral preferences in the countries in which they design artificial intelligence systems and policies. Whereas the ethical preferences of the public should not necessarily be the primary arbiter of ethical policy, the people’s willingness to buy autonomous vehicles and tolerate them on the roads will depend on the palatability of the ethical rules that are adopted.” After all, baby boomers may not rush to buy cars that will sacrifice them in order to save millennials.

Nevertheless, achieving a workable framework from which autonomous vehicles can make moral decisions isn’t an impossible task. “We might not reach universal agreement: even the strongest preferences expressed through the Moral Machine showed substantial cultural variations,” they wrote. “But the fact that broad regions of the world displayed relative agreement suggests that our journey to consensual machine ethics is not doomed from the start. Attempts at establishing broad ethical codes for intelligent machines… often recommend that machine ethics should be aligned with human values. These codes seldom recognize, though, that humans experience inner conflict, interpersonal disagreements, and cultural dissimilarities in the moral domain.”

While we usually forgive human frailties in difficult circumstances, this may not be the case when an AI-enabled vehicle makes a tragic decision that conflicts with our basic instinct for a fair go. “The law tends to be pretty forgiving of people who respond instinctively to sudden emergencies,” says A/Prof Colin Gavaghan of the University of Otago’s Faculty of Law. “The possibility of programming ethics into a driverless car, though, takes this to another level. Some of the preferences expressed in this research would be hard to square with our approaches to discrimination and equality – favouring lives on the basis of sex or income, for instance, really wouldn’t pass muster here.

“At what point does a ‘child’ cross the threshold to having a less ‘valuable’ life? 16? 18? Is an infant’s life more precious than a toddler’s? An 8-year-old’s? Expressed like that, the prospect of building a preference for ‘young’ lives looks pretty challenging.

“One preference that might be easier to understand and to accommodate is for the car to save as many lives as possible. Sometimes, that might mean ploughing ahead into the logging truck rather than swerving into the group of cyclists. Most of us might recognise that as the ‘right’ thing to do, but would we buy a car that sacrificed our lives – or the lives of our loved ones – for the good of the many?

“Which brings us to the role of law in all this. Maybe it just shouldn’t be legal to buy a car that would discriminate on protected grounds, or that would sacrifice other people to preserve our own safety. But in that case, how many people would buy a driverless car at all?

“Maybe the biggest issue is this: over a million people die on the roads every year. Driverless cars have the potential to reduce this dramatically. It’s important to think about these rare ‘dilemma’ cases, but getting too caught up with them might see us lose sight of the real, everyday safety gains that this technology can offer.”


Guy Nolch is the Editor of Australasian Science.