Australasian Science: Australia's authority on science since 1938

Masters or Slaves of AI?

By Michael Cook

If neural lacing enables our brains to be networked, we could easily be hacked or become the tools of Google or government.

In the 19th century and for a good part of the 20th century, many people feared that humanity was destined to become lapdogs of bloated industrialists. The world would be divided between the haves and the have-nots, the capitalists and the proletariat.

The fear persists, but nowadays capitalists like Mark Zuckerberg wear hoodies and tennis shoes like the rest of us. Apart from North Korea, there is universal agreement that “to get rich is glorious”.

So the great divide of the 21st century and beyond will be based not on money, but on intelligence. And the superior intellects will not even be humans, but machines. Or at least that’s what Elon Musk, one of the leading figures in Silicon Valley and one of America’s richest men, says.

Musk is a co-founder of PayPal, and he has since launched visionary projects like Tesla electric cars and a plan to send settlers to Mars on a one-way ticket. One of his visions, though, is a nightmare in which computers take over from humanity and begin to think for themselves with ever-increasing speed and sophistication. One of Google’s gurus, Ray Kurzweil, calls this “The Singularity” and predicts it will arrive in 2045.

To forestall this existential risk, Musk has launched yet another project, OpenAI, to create “friendly” artificial intelligence. “The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat,” he told a programmers’ conference recently. “I don’t love the idea of being a house cat.”

Another of his strategies for the pet problem is intriguing. He plans to invest in technology that will integrate the brain with the internet. Although this might sound preposterous, he recently identified a promising technology: the neural lace.

In a recent issue of Nature Nanotechnology, researchers reported that they have injected a fine mesh of wire and plastic into the brains of mice, where it integrates into brain tissue and “eavesdrops” on neural chatter. This could help the disabled or enhance performance. “We have to walk before we can run, but we think we can really revolutionize our ability to interface with the brain,” says a co-author of the article, Charles Lieber of Harvard University.

The novelty of neural lace is that it can be injected into the bloodstream through an artery, and does not require complicated surgery. “Somebody’s gotta do it, I’m not saying I will. If somebody doesn’t do it then I think I should probably do it,” Musk told the conference.

The idea was received with gasps of wonder among the digerati. The medical benefits are obvious. The blind could see, the deaf hear and the lame walk.

But Musk’s ultimate goal is not to make disabled people normal. It is to make normal people superhuman, so there are also some serious ethical issues to work through.

Take privacy. If you can hack a computer, you can hack a brain. Integrating your memories and cognitive activity with the internet allows other people to see what you are doing and thinking 24/7 — a kind of upscale parole bracelet.

Take autonomy. In our culture, this is the most cherished of our personal values. But once brains are integrated into an information network, they can be manipulated in increasingly sophisticated ways. And since technology always serves its owner, we could easily become the tools of Google or the government.

Take responsibility. There might be no crime, as the neural lace could shut down the “hardware” whenever passions threaten to overwhelm social norms – as defined by the network.

Musk and others are asking: should we hack the brain so that we can be the masters and not the slaves of artificial intelligence? But this is the wrong question.

They should be asking whether there is anything that makes us uniquely human that AI can never replicate. If not, the issues of privacy, autonomy and responsibility are meaningless.

A lot in bioethics depends on initial premises. If we agree with Musk that humans are basically just inferior computers, bioethics becomes irrelevant.

Michael Cook is editor of the online bioethics newsletter BioEdge.