Australasian Science: Australia's authority on science since 1938

Risky Bias in Artificial Intelligence

By Mary-Anne Williams

Machine learning is intrinsically biased, but what can be done about it?

The full text of this article can be purchased from Informit.

Why does the digital assistant Alexa giggle and speak of its own volition at random times throughout the day and night? Alexa is clueless as to why it is doing it, and Amazon cannot explain this bizarre – some say creepy – behaviour either.

Welcome to your AI-enabled future.

Artificial intelligence that can enhance and scale human expertise is profoundly changing our social and working lives, controlling how we perceive and interact with the physical and digital world.

We live in the “Age of AI”. It’s a time of unprecedented and unstoppable disruption and opportunity, where individuals, businesses, governments and the global economy progressively rely on the perceptions, decisions and actions of AI.

Machine learning, the dominant approach to AI today, faces several scientific challenges that hold it back from widespread adoption and from truly transforming life as we know it. One of them is the “opacity problem”. Machine learning cannot explain itself. It lacks awareness of its own processes, and therefore cannot explain its decisions and actions. Not being able to ask “why” is a serious and escalating problem as machine learning algorithms continue to profoundly impact our lives and future opportunities.
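How a model can be both biased and unable to explain itself is easy to see in miniature. The sketch below, in Python with entirely hypothetical names and numbers, trains a naive frequency-based "model" on skewed historical hiring decisions; it learns to reject qualified applicants from one group, and nothing in it can say why.

```python
# A toy illustration of how a learned model inherits bias from its
# training data. All groups and numbers here are hypothetical.

from collections import Counter

# Hypothetical historical hiring records: (group, qualified, hired).
# Past decisions favoured group "A" even among equally qualified people.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 30 + [("B", True, False)] * 70
)

# A naive "model": predict hire if the historical hire rate for the
# applicant's group exceeds 50%. It offers no account of *why* it decides.
hired = Counter(group for group, qualified, was_hired in history if was_hired)
total = Counter(group for group, qualified, was_hired in history)

def predict(group: str) -> bool:
    return hired[group] / total[group] > 0.5

print(predict("A"))  # True  - a qualified applicant from group A is hired
print(predict("B"))  # False - an equally qualified applicant from B is not
```

Real systems are vastly more complex, but the mechanism is the same: the model faithfully reproduces the patterns, including the prejudices, in its training data, while exposing no reasoning that a person could interrogate.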

We must develop robust solutions to the opacity problem because machine learning algorithms have been found to be biased – indeed outright...
