July 27, 2019

How to win against an AI


In the game OpenRA there is an option to play against an artificial intelligence. It works with a scripted AI, and the AI is cheating: not in the sense that it plays too well, but the opposite. The built-in feature of the AI is that the programmers have deliberately reduced its performance. Otherwise the human player would always lose.
Somebody may think it's easy to win against a computer-generated force, because the human player can think abstractly and the AI cannot. The problem with modern AI systems is that they have many built-in modules which can plan ahead, and in addition the AI can execute far more keystrokes per minute, so overall the human player has no chance.
From a strategic point of view, the question is how to win against an opponent who is stronger and nearly unbeatable. The first step is to measure the performance of the AI precisely. A newbie may think the AI has an advantage of 30% or maybe 40%, and that as a result it's possible to overcome the gap with a few tricks. The sad fact is that the advantage of the AI is much greater. A normally programmed AI is at least 10 times stronger, and a well-realized AI is up to 100 times stronger. This advantage can't be measured in a relative contest similar to the Elo score in a chess tournament; it's an absolute value determined by how many actions the AI can execute each minute and how few errors it makes during runtime. A single AI can easily play against 10 human players at the same time.
How likely is it to win against such a player? Right, the chance is zero. Even if the human player trains hard, he can't beat a computer program; he runs into a physical limit. A human player can press a mouse button and trigger an action around 60 times per minute. In contrast, a computer-generated force can click 600 times or more in the same time. Especially when the situation becomes complex, the computer has the advantage, because the same algorithm can handle hundreds of detail problems in the game and decide each issue with maximum efficiency.
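The gap described above can be expressed as a simple ratio of action budgets. The APM figures (60 vs. 600) are the ones quoted in the text; the error rates are made-up example values, just to show how "how few errors it makes" widens the gap further:

```python
# Rough illustration of the action-budget gap between human and AI.
# The APM numbers come from the text; the error rates are invented
# example values for demonstration.

def effective_apm(apm: float, error_rate: float) -> float:
    """Actions per minute that actually help, discounting wasted actions."""
    return apm * (1.0 - error_rate)

human = effective_apm(apm=60, error_rate=0.2)    # 48 useful actions/min
ai = effective_apm(apm=600, error_rate=0.01)     # 594 useful actions/min

print(f"human: {human:.0f} useful APM")
print(f"AI:    {ai:.0f} useful APM")
print(f"advantage factor: {ai / human:.1f}x")
```

Under these assumed numbers the AI ends up with more than a tenfold advantage, which matches the "at least 10 times stronger" estimate above.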
Some years ago, during the last AI winter, there was a time in which philosophers were optimistic that computers can't do everything. They imagined that the strength of computers lies in repetitive tasks, while creative, complex tasks can't be handled by machines. At that time the philosophers were right, because in the early 1990s there was no example available of a well-playing game AI. As long as programmers struggle to build such an automated player, it is easy to maintain the position that the human is always superior to the machine. Unfortunately, the gap is only a technical detail. If somebody programs an AI, then the AI is able to beat the human.
The only chance a human player has to win against a game AI that plays OpenRA is if no such AI is available. That means the human acts in an environment in which game AIs are not highly developed, because AI experts are not available or are not able to solve the problem. The programmer takes a short look at the problem, comes to the conclusion that the state space of OpenRA is too complex, and then admits that it's not possible to create a simulated force.
Putting the AI into a weaker position
A normal game of OpenRA starts with equal conditions: the AI begins with the same amount of resources as the human. In a short amount of time, the AI uses its strength to increase the gap to the human. That means it will build more units and conquer a larger share of the map. The result is a very strong AI which occupies more resources, while the human player has become much weaker.
But what will happen if the AI is put into a weaker position, somewhere in the middle? That means the AI has fewer units and fewer resources, while the human controls the map. Right, then the game is fair, which means the AI has to fight hard to beat the human. The AI needs all its power to overcome the weaker starting position. From a philosophical point of view, such experiments are called “AI in a box”, because the aim is not to play a match against the AI; the idea is to put the AI into a prison and then observe what the system does with its limited resources. The human player doesn't compete with the AI but forms an environment for it.
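The boxed starting position can be sketched in a few lines. This is a hypothetical model, not OpenRA's actual configuration format (OpenRA is configured through its own map and rule files); the names `Player` and `boxed_scenario`, and all of the numbers, are invented for illustration:

```python
# Hypothetical sketch of an "AI in a box" starting position:
# the AI begins far behind while the human dominates the map.
# All names and values here are illustrative, not from OpenRA.

from dataclasses import dataclass

@dataclass
class Player:
    name: str
    credits: int      # resources available at the start
    units: int        # units on the map at the start
    territory: float  # fraction of the map controlled

def boxed_scenario() -> tuple[Player, Player]:
    """Unlike a fair start, the AI gets only a fraction of everything."""
    human = Player("human", credits=10_000, units=40, territory=0.7)
    ai = Player("AI", credits=1_000, units=5, territory=0.1)
    return human, ai

human, ai = boxed_scenario()
print(f"{ai.name} starts with {ai.units} units against {human.units}")
```

The point of the experiment is then to watch whether the AI's speed and precision are enough to climb out of this hole.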
The situation is comparable to a computer chess match in which the AI has only the king, while the human player dominates the match with the entire collection of powerful pieces: queen, bishops, and so on. If the AI becomes too powerful, the experiment is stopped.