Australasian Science: Australia's authority on science since 1938

What If Computers Have Feelings, Too?

By Michael Cook

If software becomes intelligent, what are the ethics of creating, modifying and deleting it from our hard drives?

The full text of this article can be purchased from Informit.

Most bioethical discourse deals with tangible situations like surrogate mothers, stem cells, abortion, assisted suicide or palliative care. After all, the “bio” in bioethics comes from the Greek word bios, meaning corporeal life. Historically the field has grappled with the ethical dilemmas of blood and guts.

But there is a theoretical avant-garde in bioethics, and it’s a bit more like science fiction than ER. Theoretical bioethics tries to anticipate ethical issues that could arise if advanced technology becomes available. There are always a lot of ifs – but these are what bring joy to an academic’s heart.

The other day an intriguing example lobbed into my in-box. Writing in the Journal of Experimental & Theoretical Artificial Intelligence, Oxford bioethicist Anders Sandberg asks whether software can suffer. If so, what are the ethics of creating, modifying and deleting it from our hard drives?

We’re all familiar with software that makes us suffer because of corrupted files and crashes. But whimpering, yelping, whingeing software?

This is a bit more plausible than it sounds at first. There are at least two massive “brain mapping” projects under way. The US$1.6 billion Human Brain Project funded by the European Commission is being compared to the Large Hadron Collider in its importance. The United States has launched its own US$...