What If Computers Have Feelings, Too?

By Michael Cook

If software becomes intelligent, what are the ethics of creating, modifying and deleting it from our hard drives?

Most bioethical discourse deals with tangible situations like surrogate mothers, stem cells, abortion, assisted suicide or palliative care. After all, the “bio” in bioethics comes from the Greek word bios, meaning corporeal life. Historically the field has grappled with the ethical dilemmas of blood and guts.

But there is a theoretical avant garde in bioethics, and it’s a bit more like science fiction than ER. Theoretical bioethics tries to anticipate ethical issues that could arise if advanced technology becomes available. There are always a lot of ifs – but these are what bring joy to an academic’s heart.

The other day an intriguing example lobbed into my in-box. Writing in the Journal of Experimental & Theoretical Artificial Intelligence, Oxford bioethicist Anders Sandberg asks whether software can suffer. If so, what are the ethics of creating, modifying and deleting it from our hard drives?

We’re all familiar with software that makes us suffer because of corrupted files and crashes. But whimpering, yelping, whingeing software?

This is a bit more plausible than it sounds at first. There are at least two massive “brain mapping” projects under way. The US$1.6 billion Human Brain Project funded by the European Commission is being compared to the Large Hadron Collider in its importance. The United States has launched its own US$100 million brain mapping initiative. The idea of both projects is to build a computer model of the brain, doing for our grey matter what the Human Genome Project did for genetics.

Theoretically, the knowledge gained from these projects could be used to emulate the brains of animals and humans. No one knows whether this is possible, but it is tantalising for scientists who are seeking a low-cost way to conduct animal experiments.

As one theorist has written: “the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like … mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits.”

This implies that a being – is it too much to call it a person? – is alive on the hard drive. And building on the ethics of animal experimentation, it could be argued that tweaking the software to emulate pain would be wrong.

How would we know whether the software is suffering? That is a philosophical conundrum. Sandberg believes that the best option is to “assume that any emulated system could have the same mental properties as the original system and treat it correspondingly”. In other words, software brains should be treated with the same respect as experimental animals; virtual mistreatment would be just as wrong as real mistreatment in a laboratory.

How about the most difficult of all bioethical issues, euthanasia? For animals, death is death. But if there are identical copies of the software, is the emulated being really dead? On the other hand, would we be respecting the software’s dignity if we kept deleting copies?

Even trickier problems crop up with emulations of the human brain. What if a virus turns software schizophrenic or anorexic? “If we are ethically forbidden from pulling the plug of a counterpart biological human,” writes Sandberg, “we are forbidden from doing the same to the emulation. This might lead to a situation where we have a large number of emulation ‘patients’ requiring significant resources, yet not contributing anything to refining the technology nor having any realistic chance of a ‘cure’.”

And what about software “rights”? Could the emulations demand a right to be run from time to time? How will their privacy rights be protected? What legal redress will they have if they are hacked?

The imaginative dilemmas projected by Sandberg and his fellow futurists cannot be falsified because they haven’t happened yet. My bet is that they will never happen.

But there is a take-away. If human beings are not unique and if our respect for beings is proportional to their consciousness, then we stumble into huge (and unnecessary) dilemmas. Radical animal rights activists claim that not only primates and dogs but also animals with less consciousness, like mice, should not be experimented upon. Any being that can suffer deserves protection and respect.

The same reasoning leads, as Sandberg demonstrates, to the notion of suffering software and enforceable rights for software. It is this reductio ad absurdum which ought to make us question whether we have properly understood the notion of “animal rights”.

Michael Cook is editor of BioEdge, an online newsletter about bioethics.