Australasian Science: Australia's authority on science since 1938

Can We Program Safe AI?

Credit: iStockphoto


By Steve Omohundro

Tomorrow’s software will compute with meaning and be much more autonomous. But a thought experiment with a chess robot shows that we will also need to carefully include human values.

Steve Omohundro is President of Self-Aware Systems.

The full text of this article can be purchased from Informit.

Technology is rapidly advancing. Moore’s law, first proposed in 1965, says that the number of transistors on a chip doubles roughly every 2 years. The law has held ever since, and it extends back to 1900 when older computing technologies are included.
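The doubling trend the article cites is easy to quantify. A minimal sketch (the function name and parameters are my own illustration, not from the article):

```python
# Hypothetical illustration of Moore's-law growth: capacity
# doubles every `doubling_period` years, so the growth factor
# after `years` is 2 raised to the number of doublings.
def projected_growth(years, doubling_period=2):
    """Multiplicative growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Over two decades at a 2-year doubling period, capacity
# grows by a factor of 2**10 = 1024, i.e. roughly a thousandfold.
factor = projected_growth(20)
```

This exponential compounding is why the article can project that human-brain-scale computing may become cheap within a few decades.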

The rapid rise in the power of computing hardware, and the fall in its price, have led to its integration into every aspect of our lives. There are now one billion PCs, five billion mobile phones and more than a trillion web pages connected to the Internet. If Moore’s law continues to hold, systems with the computational power of the human brain will be cheap and ubiquitous within the next few decades.

While hardware has been advancing rapidly, today’s software is still plagued by many of the same problems it had half a century ago. It is often buggy, full of security holes, expensive to develop and hard to adapt to new requirements. Today’s popular programming languages are bloated messes built on old paradigms.

The problem is that today’s software still just manipulates bits without understanding the meaning of the information it acts on. Without meaning, it has no way to detect and repair bugs and security holes.
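One way to picture the difference between manipulating bits and acting on meaning is software whose values carry their own semantics. In this hypothetical sketch (the `Quantity` class is my own illustration, not the author's system), values know their units, so a category error is detected instead of silently producing a wrong number:

```python
# A minimal, assumed sketch of "software that acts on meaning":
# each value carries a unit, and operations check that the units
# are compatible, turning a silent bug into a detected error.
class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Adding kilometres to hours is meaningless; plain floats
        # would happily add them, but a meaning-aware value refuses.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

distance = Quantity(5.0, "km")
duration = Quantity(2.0, "h")
total = distance + Quantity(3.0, "km")   # fine: 8.0 km
# distance + duration raises TypeError: the bug is caught, not hidden.
```

With bare floats the mistaken addition would go unnoticed; attaching even this much meaning lets the software catch the error itself, which is the capability the article argues today's bit-level software lacks.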

We are developing a new kind of software that acts directly on meaning. This kind of software will enable a wide range of improved functionality, including semantic searching, semantic...
