Unreal bots beat Turing test: AI players are officially more human than gamers
Two AI bots have been judged to be more human than several human opponents in the video game equivalent of the Turing Test.
Sponsored by 2K Games, the BotPrize is an annual challenge in which international teams of programmers create ‘intelligent’ bots capable of passing themselves off as human players. The bots take part in a series of Unreal Tournament 2004 matches against an equal number of human players.
Every participant competes to win, but they also have a judging gun with which to tag their opponents as bots or humans. In this year’s event, two bots achieved a ‘humanness rating’ of 52 per cent, exceeding the humanness rating of 40 per cent achieved by the actual human competitors. In short, the entrants have achieved the motto of Blade Runner’s Tyrell Corporation: more human than human.
The winning bots were UT^2, programmed by Risto Miikkulainen, professor of computer science in the College of Natural Sciences at the University of Texas at Austin, and Mirror Bot, by Romanian computer scientist Mihai Polceanu. According to New Scientist, both won through a strategy of mimicking the human players, watching behaviours and copying them in a sort of mirrored social interchange that the judges clearly found convincing. UT^2 was apparently slightly more complex because it also employed strategies recorded from dozens of real-life UT matches, linked with a learning model named neuroevolution in which successful strategies were bred to produce even more convincing human behaviours.
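The mirroring idea is easy to sketch. The snippet below is a toy illustration of the approach as described, not Mirror Bot’s actual code: it buffers an opponent’s observed actions and replays them after a short delay. The action names, the delay and the class itself are invented for illustration.

```python
from collections import deque

class MirrorPolicy:
    """Toy mirror bot: echo an opponent's actions back after a delay."""

    def __init__(self, delay=30):
        # Hold at most `delay` observed actions; older ones are replayed first.
        self.buffer = deque(maxlen=delay)

    def observe(self, opponent_action):
        # Record what the opponent just did.
        self.buffer.append(opponent_action)

    def act(self, fallback="patrol"):
        # Replay the oldest buffered action; do something neutral until
        # we've seen the opponent do anything at all.
        return self.buffer.popleft() if self.buffer else fallback

policy = MirrorPolicy(delay=3)
for action in ["strafe_left", "jump", "fire"]:
    policy.observe(action)
first = policy.act()  # "strafe_left" comes back first
```

The delay is what sells the illusion: an instant echo would look robotic, while a lagged one reads as a player reacting to what they see.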
Phys.org puts it thus: “Networks that thrive in a given environment are kept, and the less fit are thrown away. The holes in the population are filled by copies of the fit ones and by their ‘offspring,’ which are created by randomly modifying (mutating) the survivors. The simulation is run for as many generations as are necessary for networks to emerge that have evolved the desired behavior”.
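The cycle Phys.org describes – evaluate, keep the fit, refill the population with mutated copies – can be sketched in a few lines. Everything below (the flat-weight genome encoding, population size, mutation parameters and the toy fitness function) is an illustrative assumption, not UT^2’s actual implementation.

```python
import random

POP_SIZE = 20    # illustrative population size
GENOME_LEN = 8   # a "genome" here is just a flat list of network weights
MUT_RATE = 0.1   # per-weight chance of mutation

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Offspring are copies of survivors with small random perturbations.
    return [w + random.gauss(0, 0.2) if random.random() < MUT_RATE else w
            for w in genome]

def evolve(fitness, generations=50):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Networks that thrive are kept; the less fit are thrown away.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Holes in the population are filled by mutated copies ("offspring").
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

# Toy stand-in for "humanness": reward weights close to a target vector.
target = [0.5] * GENOME_LEN
best = evolve(lambda g: -sum((w - t) ** 2 for w, t in zip(g, target)))
```

In UT^2 the fitness signal would come from how convincingly a network’s behaviour matched recorded human play, rather than from a fixed target vector.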
The BotPrize has been running since 2008, but this year’s humanness ratings were twice those achieved in 2011. “When this ‘Turing test for game bots’ competition was started, the goal was 50 percent humanness,” Miikkulainen told Phys.org. “It took us five years to get there, but that level was finally reached last week, and it’s not a fluke.”
Of course, the winning entries at BotPrize are clever things, but in this context, the term AI is slightly disingenuous – they’re designed to be believable rather than smart. In this sense, they’re quite close to standard game AI agents, which usually aim to achieve three basic abilities – navigation, fallibility and avoidance – as cheaply as possible, via scripting, pathfinding and finite state machine algorithms. The term artificial stupidity is often used to describe how developers attempt to make bots more vulnerable by limiting their abilities. This can be achieved by the implementation of heuristic functions so that agents experience the game world in the same way as players – i.e. they’re not given any extra data on where the player is apart from what they can ‘see’ or ‘hear’ within the constraints of their current position. The programmers of UT^2, for example, introduced constraints to ensure that the bot’s accuracy was degraded during fast movements or while shooting over distances.
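A degraded-accuracy constraint of the kind described might look like the toy sketch below. The function names, penalty constants and formula are assumptions for illustration, not UT^2’s real code.

```python
import random

def hit_probability(base_accuracy, speed, distance,
                    speed_penalty=0.02, range_penalty=0.01):
    """Chance to hit falls off as the bot moves faster or shoots further.

    All constants are illustrative; a real game would tune them per weapon.
    """
    penalty = speed * speed_penalty + distance * range_penalty
    return max(0.05, base_accuracy - penalty)  # floor: never perfectly blind

def shot_hits(base_accuracy, speed, distance):
    # Roll against the degraded accuracy to decide whether the shot lands.
    return random.random() < hit_probability(base_accuracy, speed, distance)

# A stationary close-range shot is far more reliable than a sprinting snipe.
standing = hit_probability(0.9, speed=0, distance=5)     # 0.85
sprinting = hit_probability(0.9, speed=20, distance=40)  # 0.10
```

The point of the floor value is part of the same artificial-stupidity trade-off: a bot that can never hit anything is as unconvincing as one that never misses.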
More intriguing perhaps was the fourth-placed entrant. Programmed by Zafeirios Fountas, an AI research student at Imperial College London, Neurobot is actually an attempt to simulate human consciousness. “I was using the competition as a testbed for my research, so from my point of view mimicry isn’t as scientifically interesting,” he tells us. “In our group we’re using spiking neural networks. These are mathematical models that simulate the way real neurons in the brain work, so we’re trying to make models of the brain functionality that are important for different reasons. In this case I was testing a theory which tries to explain how consciousness works in terms of a mechanism. I relied on the assumption that if a system becomes conscious then it might exhibit more human-like behaviour – I wanted to test this.”
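The simplest of the spiking-neuron models Fountas alludes to is the leaky integrate-and-fire neuron: membrane potential leaks toward a resting level, integrates incoming current, and emits a spike when it crosses a threshold. The sketch below is a minimal textbook version with parameters chosen for illustration, not Neurobot’s implementation.

```python
def simulate_lif(input_current, steps=100, dt=1.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron under constant input current.

    Returns the time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t in range(steps):
        # Potential leaks toward rest while integrating the input current.
        dv = (-(v - v_rest) + input_current) * (dt / tau)
        v += dv
        if v >= v_thresh:       # threshold crossed: fire and reset
            spikes.append(t)
            v = v_rest
    return spikes

# Sub-threshold input never fires; stronger input fires more often.
silent = simulate_lif(0.5)
weak = simulate_lif(1.2)
strong = simulate_lif(3.0)
```

Unlike the weighted-sum units of conventional neural networks, information here is carried in the timing and rate of spikes, which is what makes these models both more biologically faithful and far more expensive to run.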
So what’s the benefit of testing a neural network within a game environment? “Well, first-person shooters provide very realistic and robust real-time environments – they are very similar to reality in many ways. And of course there’s interaction with real people, which is extremely important to us. We’re trying to create real intelligence.”
Fountas reckons that research into realistic human behaviours will be useful in a lot of situations, for example crowd behaviour simulators, which can be used in public safety planning, or in designing evacuation strategies for new buildings. “And more optimistically, there’s the potential for personal companion robots,” he adds. As for use in games, neural networks have largely been rejected in the past as far too complex and computationally hungry for practical use. But that may well change as processors become more powerful. “Isn’t it important for games to create opponents that can understand you?” asks Fountas. “Simulating consciousness has way bigger potential because it is scalable – once we have found the mechanisms that are responsible for intelligence in our brains, we can add neurons and enlarge the network to create smarter agents. This isn’t possible with mimicry.”
On the subject of where research into consciousness is going, Fountas pointed us toward the Blue Brain Project at the École Polytechnique Fédérale de Lausanne, which is attempting to completely map the human brain’s neural network within five years. Sadly, we suspect the first implementation of this momentous discovery won’t be more realistic enemies in Modern Warfare 5.