Smart bots out-game human hunters

Increasingly, those who venture into any computer-driven environment will experience a diminishing ability to tell if they are dealing with another human being, or with an artifice constructed from machine code.

Another milestone has been achieved in the seemingly unstoppable march by artificial intelligence to rival, and perhaps exceed, human intelligence. Thanks to a competition conceived and organised by Associate Professor Philip Hingston at Edith Cowan University’s School of Computer and Security Science, true anthropomorphism in the virtual world of gaming is here to stay.

What this means is that, increasingly, those who venture into any computer-driven environment will experience a diminishing ability to tell if they are dealing with another human being, or with an artifice constructed from machine code.

“I think we’ve achieved at least one element of the Turing test,” Philip says. “Two of our competitors in the latest BotPrize competition built game bots that the judges were unable to distinguish from human players.”

Alan Turing, famous for his code-breaking efforts at Bletchley Park during the Second World War, bypassed philosophical musings on the meaning of ‘intelligence’ by devising a simple test, which endures today. He proposed that if someone communicating with another party could not tell if that party was human or a machine, then the machine had passed the test of human-equivalent intelligence.

“Language is very difficult for computers, and no machine has yet passed the Turing test based on an extended (text) chat between human judges and a computer,” Philip says.

“We set off on a slightly different path, using the first-person shooter game Unreal Tournament 2004 (UT2K4) as our test platform. This is one of those games where you go into a virtual world and try to shoot your opponents before they shoot you. There is no language involved, but because the game can be played by multiple players over the Internet, we were able to develop a competition where the judges played against opponents that were either operated by human competitors, or operated by computer programs (bots).”
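The article leaves the scoring informal. As a rough illustration, a "humanness" rating can be computed as the fraction of encounters in which judges labelled a player human. The Python sketch below is an assumption for illustration only; the function name, data layout and any pass mark are hypothetical, not the actual BotPrize rules.

```python
from collections import defaultdict

def humanness_ratings(judgements):
    """Fraction of encounters in which each player was judged human.

    `judgements` is a list of (player_id, judged_human) pairs, one per
    judged encounter. This simple scoring rule is an illustrative
    assumption, not the competition's actual protocol.
    """
    votes = defaultdict(lambda: [0, 0])  # player_id -> [human votes, total]
    for player, judged_human in judgements:
        votes[player][0] += int(judged_human)
        votes[player][1] += 1
    return {p: human / total for p, (human, total) in votes.items()}

# Hypothetical round: each tuple records one judge's verdict on one opponent.
sample = [("bot_A", True), ("bot_A", True), ("bot_A", False),
          ("human_1", True), ("human_1", False)]
print(humanness_ratings(sample))  # {'bot_A': 0.666..., 'human_1': 0.5}
```

On a scheme like this, a bot "passes" when judges label it human about as often as they label the real human players human.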

Last September, in the fifth year of a competition that has seen 14 teams from nine countries take part, two winners emerged: one an individual, the other a small team from the University of Texas at Austin. Both fooled the judges by, to some extent, mimicking the judges’ own behaviour so that their actions appeared familiar, normal and human-like.

“We’ve known for a long time that humans like to play against other humans, because even their irrational acts – like pursuing an opponent into danger to settle a grudge – are preferred to the robotic perfection of an opponent who never misses and never makes a mistake,” Philip says.

“The competitors solved more basic problems, such as point-to-point movement and bots getting stuck in corners, early in the evolution of the game. Now they’re working with much more subtle systems, borrowing from psychological theories of cognition and applying developmental techniques such as evolving responses and behaviours across generations of bots.

“The bots have to be able to react to the geometry and events of their virtual world in the same way that a human being would – intelligently, with good but not perfect reflexes, making occasional errors, and moving through the space as if they were experiencing emotion.”
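The article describes these humanising touches only in outline. As one concrete, and entirely hypothetical, example of the kind of trick a competitor might use, a bot can blur its machine-perfect inputs with noisy aim and a human-scale reaction delay. The names and parameter values below are illustrative guesses, not taken from any BotPrize entry.

```python
import random

def humanized_aim(target_angle, skill=0.8):
    """Aim near, not exactly at, the target.

    A perfect bot snaps to `target_angle`; a believable one adds Gaussian
    wobble (larger at lower skill) and occasionally flubs the shot
    outright, the way a startled human player might.
    """
    jitter = random.gauss(0.0, (1.0 - skill) * 5.0)   # degrees of wobble
    if random.random() < 0.05:                         # rare bad miss
        jitter += random.choice((-1, 1)) * random.uniform(10.0, 20.0)
    return target_angle + jitter

def reaction_delay_ms(skill=0.8):
    """Delay before the bot responds to a new threat.

    Human reaction times cluster around 200-300 ms; an undisguised
    machine's would be near zero, which judges notice immediately.
    """
    return max(150.0, random.gauss(250.0, 60.0) * (2.0 - skill))
```

Tuned carefully, such deliberate imperfections read as personality rather than error.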

Now that this benchmark has been set, Philip is working on how to evolve the game to a new stage to make it more demanding. “I’m thinking of perhaps making it a team game, where the players are part of a small team that works together and reacts to other members of the team as well as to what their opponents are doing,” he says. “The judges who are immersed in the game as players will have to work out, once again, if they’re playing against human or machine-driven teams, so there’s much greater dynamic complexity involved.”

Though the judging platform is a game, the consequences give pause for thought. Philip has explored them in a new book he has edited, Believable Bots: Can Computers Play Like People?, which draws together contributions from a wide range of researchers.

“The idea that we humans would one day share the Earth with a rival intelligence is as old as science fiction,” he writes in the preface. “That day is speeding towards us. Our rivals (or will they be our companions?) will not come from another galaxy, but out of our own strivings and imaginings. The bots are coming; chatbots, robots, gamebots.”

He points to our lack of knowledge about how people will react to interacting with bots in environments such as aged care or education. “Will people be happier if they can’t differentiate between people and bots, or will it make them uneasy?” he asks. “So far we don’t know, because with today’s restricted systems, like talking satellite navigation tools, it’s obvious we’re just dealing with programming.

“But when the Turing test has been passed at all levels, we will be on the verge of a very different world.”

Having just taken over as international chairman of the Games Technical Committee of the Computational Intelligence Society within the Institute of Electrical and Electronics Engineers, Philip will be in a better position than most to see that world dawning.