
Risky Bias in Artificial Intelligence

By Mary-Anne Williams

Machine learning is intrinsically biased, but what can be done about it?

Why does the digital assistant Alexa giggle and speak of its own volition at random times throughout the day and night? Alexa is clueless as to why it is doing it, and Amazon cannot explain this bizarre – some say creepy – behaviour either.

Welcome to your AI-enabled future.

Artificial intelligence that can enhance and scale human expertise is profoundly changing our social and working lives, controlling how we perceive and interact with the physical and digital world.

We live in the “Age of AI”. It’s a time of unprecedented and unstoppable disruption and opportunity, where individuals, businesses, governments and the global economy progressively rely on the perceptions, decisions and actions of AI.

Machine learning, the dominant approach to AI today, faces several scientific challenges that hold it back from widespread adoption and from truly transforming life as we know it. One of them is the “opacity problem”. Machine learning cannot explain itself. It lacks awareness of its own processes, and therefore cannot explain its decisions and actions. Not being able to ask “why” is a serious and escalating problem as machine learning algorithms continue to profoundly affect our lives and future opportunities.

We must develop robust solutions to the opacity problem because machine learning algorithms have been found to be biased – indeed outright racist and sexist in some cases. The ability to discriminate sensory information is critical for intelligence, but at the same time bias can lead to unethical or illegal outcomes.

Machine learning systems learn to be biased. They learn to discriminate between inputs, for example distinguishing images that show melanoma from those that do not, and can outperform humans in accuracy and scale.

Machine learning models simply encapsulate the data they are presented with. Without a well-designed bias that leads to accurate prediction, machine learning makes critical mistakes: false positives, such as predicting melanoma where there is none; and false negatives, such as failing to predict melanoma when it is present.
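
To make the two error types concrete, here is a minimal Python sketch. The labels are invented purely for illustration, not drawn from any real melanoma data; it simply counts false positives and false negatives in a set of predictions.

    # Minimal sketch: counting the two error types for a hypothetical
    # melanoma classifier. The labels below are invented for illustration.
    true_labels = [1, 0, 0, 1, 0, 1, 0, 0]   # 1 = melanoma present, 0 = absent
    predictions = [1, 1, 0, 0, 0, 1, 0, 1]   # the classifier's output

    false_positives = sum(1 for t, p in zip(true_labels, predictions) if t == 0 and p == 1)
    false_negatives = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 0)

    print("False positives (melanoma predicted where there is none):", false_positives)
    print("False negatives (melanoma present but missed):", false_negatives)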

There are three primary sources of bias in machine learning: the data, the training process and the algorithm itself. The data used to train a model is often biased; this can happen when human bias is embedded in the assumptions behind, or the history of, how the data sets were selected and prepared.

This also happens when the data set is too small, too narrow in scope or too unrepresentative to build a robust model. Machine learning can then amplify the bias inherent in the data by over-focusing on it.
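
A small sketch can show how a skewed sample becomes a biased model. The scenario and numbers below are entirely invented: two groups with the same true outcome rate, but a training set that under-samples one of them and over-represents its failures.

    # Minimal sketch: bias inherited from a non-representative data set.
    # All numbers are invented for illustration.
    # In the "real world", groups A and B both have an 80% positive-outcome rate.
    true_rates = {"A": 0.8, "B": 0.8}

    # The training data, however, under-samples group B and over-represents
    # its negative outcomes.
    training_data = ([("A", 1)] * 80 + [("A", 0)] * 20 +
                     [("B", 1)] * 20 + [("B", 0)] * 30)

    def learned_rate(group):
        outcomes = [y for g, y in training_data if g == group]
        return sum(outcomes) / len(outcomes)

    for group in ("A", "B"):
        print(group, "true rate:", true_rates[group],
              "rate learned from the data:", round(learned_rate(group), 2))
    # The model now "believes" group B succeeds at a rate of 0.4 rather than 0.8,
    # a bias it inherited entirely from how the data set was assembled.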


Currently, machine learning’s predilection for bias can make it dangerous because it may not be clear when machine learning algorithms will fail. Sometimes failures occur in weird and mysterious ways, such as confusing images of dogs with muffins, towels or fried chicken.

Such failures have led to innovations like adaptive adversarial machine learning algorithms that learn by competing against each other. This technique was used to train the deep learning system AlphaGo Zero, which surpassed the versions of AlphaGo that beat the world’s best Go players. From a computational complexity perspective, Go is much harder than chess.

AlphaGo Zero is notable because it was not trained with a database of human moves, but by playing against itself over a period of three days. Adversarial techniques can also be used for malicious purposes to “fool” machine learning algorithms. Cybersecurity risks arise if a malicious adversarial algorithm learns to manipulate the data fed into other algorithms by exploiting their vulnerabilities, compromising the security of an entire system.
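
The core idea behind such attacks can be shown with a toy example. The sketch below uses a made-up linear classifier, not a real system: a small, deliberate nudge to each input feature, in the direction that most changes the score, is enough to flip the classification.

    # Minimal sketch: an adversarial input flips a toy linear classifier.
    # Weights and inputs are invented for illustration.
    weights = [0.9, -0.5, 0.3]
    bias = -0.1

    def classify(x):
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        return "positive" if score > 0 else "negative"

    x = [0.2, 0.5, 0.1]   # original input: classified "negative"

    # Nudge each feature slightly in the direction that raises the score
    # (the sign of its weight), the same idea used by gradient-based attacks.
    epsilon = 0.2
    x_adv = [xi + epsilon * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

    print("original input:   ", classify(x))      # negative
    print("adversarial input:", classify(x_adv))  # positive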

The risks associated with machine learning in terms of scope, scale, severity and likelihood are high, and they amplify the urgent need for explainable AI (XAI). Having recognised the need for “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing”, the European Union has introduced new laws, under the General Data Protection Regulation, that protect a person’s right to an explanation.


Distinguished Professor Mary-Anne Williams FTSE is Director of the Magic Lab at the University of Technology Sydney, a Fellow in the Centre for Legal Informatics, and Co-Founder of the AI Policy Hub at Stanford University.