July 20, 2019

Measuring Artificial Intelligence


The problem with most AI related projects is that newcomers don't know what the goal is. Sure, Artificial Intelligence is the goal, but how exactly is it defined? A possible answer is to define Artificial Intelligence on a time scale: AI means running a problem solver very fast.
Let us take a small example. The task is to solve the first level of a Sokoban game. To do so, the robot has to push some boxes to their target squares. It can plan the pushes by building a graph of all possible movements, and a common algorithm for doing so is RRT (Rapidly-exploring Random Tree). After the software is started, the CPU runs at 100% and after a while the program has found the solution. In the case of Sokoban, the required game tree is very large. Even though RRT is a nice algorithm, it will consume a lot of CPU resources.
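To make the size of that search concrete, here is a minimal sketch of a plain uninformed solver for a tiny Sokoban level: a breadth-first search over (player position, box positions) states. The level encoding and the function name solve_sokoban are illustrative choices for this example, not part of any particular library.

```python
# Minimal sketch: brute-force breadth-first search over Sokoban states.
# The state encoding and helper names are illustrative, not from a real library.
from collections import deque

def solve_sokoban(walls, goals, start_player, start_boxes):
    """walls/goals: sets of (row, col); start_boxes: set of (row, col)."""
    start = (start_player, frozenset(start_boxes))
    queue = deque([(start, [])])
    visited = {start}
    moves = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}
    while queue:
        (player, boxes), path = queue.popleft()
        if boxes <= goals:                      # every box rests on a goal square
            return path
        for name, (dr, dc) in moves.items():
            nxt = (player[0] + dr, player[1] + dc)
            if nxt in walls:
                continue
            new_boxes = boxes
            if nxt in boxes:                    # pushing a box
                behind = (nxt[0] + dr, nxt[1] + dc)
                if behind in walls or behind in boxes:
                    continue
                new_boxes = (boxes - {nxt}) | {behind}
            state = (nxt, frozenset(new_boxes))
            if state not in visited:
                visited.add(state)
                queue.append((state, path + [name]))
    return None                                 # no solution found

# Tiny example level: player at (1,1), one box at (1,2), goal at (1,3), walls around.
walls = {(r, c) for r in range(4) for c in range(5) if r in (0, 3) or c in (0, 4)}
print(solve_sokoban(walls, {(1, 3)}, (1, 1), {(1, 2)}))   # -> ['R']
```

Every reachable game state becomes a node in this search, which is exactly why the memory and CPU demand explodes on real levels.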
Can we increase the performance? Exactly here Artificial Intelligence comes into play. RRT itself can be called AI, because it is a vanilla graph search algorithm. It works a bit faster than a brute-force solver, but not by much. If the Sokoban problem is slightly more complicated, the solver will fail. That means a standard computer has too few CPU resources to build the tree in real time; it will take many hours until the sequence of robot moves has been generated. AI related problem solving means identifying a faster technique. The aim is to solve the Sokoban puzzle, but with less computational effort.
This definition has the advantage that it explains what the goal is. The goal is more than just controlling a robot, because the RRT algorithm can already do that quite well. The aim is to use as few CPU resources as possible, that is, to invent an algorithm which uses the underlying hardware more efficiently. Somebody may argue that apart from graph search there is no alternative available, which would mean that games like Sokoban or Lemmings can't be solved. But this ignores that there are AI heuristics which can do the job much faster. The number of potential speed-up techniques is large, and it's up to the programmer to identify the best one.
The interesting aspect of the RRT algorithm is that in theory the technique is well suited to solve any kind of Sokoban level. But in reality it can't. The problem is that after a few movements in the game, the game tree becomes very large, and even the RRT algorithm isn't able to handle this complexity. That means everything works fine in principle, but the CPU of the underlying hardware needs too many resources. This brings the computer to its physical limit, and there is a need to invent something which works better. The question is not how to program an AI; the question is how to make the RRT algorithm faster.
Let us take a closer look at RRT. The algorithm itself works great. It uses a graph to store the movements explored so far, and it extends the graph in a clever way. Programming an RRT algorithm is a bit more complicated than programming the Sokoban game itself, but it's possible in every programming language. The difficulty is that RRT doesn't solve the core problem, which is to find a path through the game tree. It is simply a scaling problem: RRT works well for problems with 1,000 up to 10,000 nodes, and beyond that it becomes difficult. At first glance this insight sounds surprising, because RRT is one of the most efficient graph search algorithms available; it is sometimes described as faster than A*. Unfortunately, this advantage is not enough, because Sokoban is more complicated than what the RRT algorithm has to offer.
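For comparison, a minimal RRT sketch in a plain 2D state space might look like this. The step size, bounds and helper names are assumptions made for the example; the point is only to show the sample / nearest / extend loop that makes the tree grow.

```python
# Minimal RRT sketch in a 2D continuous space; all parameters are illustrative.
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iter=10000, bounds=(0, 10)):
    """Grow a tree from `start` toward `goal`; `is_free(p)` reports collision-free points."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        # nearest existing node (linear scan; a k-d tree would be faster)
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d <= step:
            new = sample
        else:
            new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:     # reconstruct the path back to the root
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Example: empty space, so every sample is collision-free.
path = rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)
```

The random sampling biases the tree toward unexplored regions, which is why RRT expands a graph so efficiently; it does nothing, however, to shrink the state space itself.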
All serious AI techniques try to overcome the bottleneck of classical graph search algorithms. They are trying to find a path in a large state space, and they do so with heuristics and domain knowledge. What we can say for sure is that plain graph search algorithms like A* or RRT are not able to solve harder problems. Hard means that the map on which the path planning takes place is huge, or that the state space of the robot is huge. This is a surprising insight, because to the newcomer RRT looks like a powerful tool for solving all kinds of planning problems. The bottleneck is not to improve RRT a bit, for example by using a multicore CPU; the problem is that solving Sokoban needs a speedup of a million percent, which amounts to a completely new algorithm apart from graph search.
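As a small illustration of what a heuristic contributes, here is a minimal sketch of A* on a grid, where the Manhattan distance acts as cheap domain knowledge that steers the search toward the goal. The grid and the function name are illustrative; for Sokoban-class problems, far stronger domain knowledge than this simple lower bound is needed.

```python
# Sketch of A* on a grid with a Manhattan-distance heuristic; the grid is illustrative.
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = wall; start/goal: (row, col)."""
    def h(p):                                   # domain knowledge: a cheap lower bound
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != '#':
                g = cost + 1
                if g < best.get((r, c), float('inf')):
                    best[(r, c)] = g
                    heapq.heappush(frontier, (g + h((r, c)), g, (r, c), path + [(r, c)]))
    return None

grid = ["....#....",
        ".##.#.##.",
        "....#....",
        ".#######.",
        "........."]
print(astar(grid, (0, 0), (4, 8)))
```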
What's wrong with RRT?
RRT is one of the most powerful search algorithms available. The reason is the combination of building a graph and extending it efficiently with new nodes. Creating a graph is important because it avoids exploring the same sequence of actions twice, and adding new nodes with the RRT sampling technique saves a lot of resources.
RRT outperforms A* a little bit. Not so much that A* becomes obsolete, but enough to use RRT as the standard search algorithm for path planning and action planning in games. The only problem is that RRT itself will fail on most real problems. Graph search plus an optimized node-adding method isn't enough to handle a state space which contains millions of nodes. If we try to use RRT for solving computer chess, Sokoban, Lemmings or a robotics task, we will notice that the algorithm fails: it takes too long to find an answer. This has nothing to do with a certain programming language, for example Python vs. C++; the problem is that a speedup of around 1,000 up to 1 million is needed, which RRT can't provide.
The answer to the problem is to modify the problem itself. Instead of searching in the state space of low-level actions, the idea is to search in a symbolic state space. If RRT is used for sampling PDDL actions, it becomes a very powerful algorithm. The catch is that searching over symbolic actions has nothing to do with plain graph search; it is a heuristic which goes beyond the RRT idea. The advantage of RRT is at the same time its bottleneck: searching in a graph works for all kinds of problems, but it results in poor performance.
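To show what searching in a symbolic state space can look like, here is a toy STRIPS-style forward search over hand-written high-level actions. The facts and action names are invented for the example and are not generated from a real PDDL file; the point is that each node is now a set of symbolic facts such as "box 1 is on its goal" instead of a raw board position.

```python
# Toy STRIPS-style forward search over symbolic actions; the domain below is made up
# to illustrate the idea, it is not parsed from a real PDDL file.
from collections import deque

# action name -> (preconditions, add effects, delete effects)
actions = {
    "push_box1_to_goal": ({"at_box1_start", "player_near_box1"}, {"at_box1_goal"}, {"at_box1_start"}),
    "walk_to_box1":      (set(),                                 {"player_near_box1"}, set()),
    "push_box2_to_goal": ({"at_box2_start", "player_near_box2"}, {"at_box2_goal"}, {"at_box2_start"}),
    "walk_to_box2":      (set(),                                 {"player_near_box2"}, set()),
}

def plan(init, goal):
    """Breadth-first search in the space of fact sets instead of grid positions."""
    queue = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while queue:
        state, path = queue.popleft()
        if goal <= state:
            return path
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None

# Prints a four-step plan such as
# ['walk_to_box1', 'push_box1_to_goal', 'walk_to_box2', 'push_box2_to_goal'].
print(plan({"at_box1_start", "at_box2_start"}, {"at_box1_goal", "at_box2_goal"}))
```

The symbolic state space here has only a handful of nodes, whereas the underlying low-level state space would contain every possible board configuration.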
Let us go back to the Sokoban problem. Suppose the RRT solver is analyzing the state space. After a while it is obvious that it takes too much time. We can cancel the operation and note that RRT won't solve the problem; that means RRT is useless for Sokoban. We could switch back to the normal brute-force solver, which has nearly the same performance. In other words, a highly optimized RRT algorithm might need one year to search the game tree, and a normal brute-force algorithm might take 100 years. Neither result is practical, because the software has to find the next move in under a second.
What I want to explain is that for serious problems we can ignore RRT and use a normal brute-force solver without any graph building strategy. The bottleneck isn't how to search fast in a graph; the problem is how to map a task onto a graph. Suppose there is a graph given which contains 2,000 nodes and the aim is to find the shortest path. Under these constraints RRT makes total sense, because it is able to find the path very fast. Unfortunately, the Sokoban game doesn't provide a graph with 2,000 nodes; it provides no graph at all, and it's up to the programmer to identify a suitable data structure in the game.
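The following sketch illustrates that point: once an explicit graph with a handful of nodes exists, finding the shortest path is the easy part. The graph and node names below are made up; the hard work in Sokoban is producing such a graph from the game rules in the first place.

```python
# If the graph is already given explicitly, the search itself is the easy part.
import heapq

def dijkstra(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, cost)."""
    frontier, best = [(0, start, [start])], {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, w in graph.get(node, []):
            if cost + w < best.get(nxt, float('inf')):
                best[nxt] = cost + w
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

# Hypothetical graph, not derived from any game.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A", "D"))   # -> (3, ['A', 'B', 'C', 'D'])
```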
A well known technique for speeding up the search in the state space is called "reactive planning". These are non-sampling techniques which need little or no computational effort. Usually they work in a hierarchical fashion. For example, it makes sense to divide a map into smaller maps and use a quadtree planner to find the path; a sketch of this hierarchical idea is shown below. This is only the most common introduction to the subject of reactive planning; there are many more strategies available which reduce the state space drastically.
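The sketch below uses a fixed block size as a stand-in for a real quadtree, which keeps the example short: the planner first searches over coarse blocks and then refines the path only inside the corridor of blocks it has selected. All names and parameters are illustrative.

```python
# Hierarchical decomposition sketch: coarse plan first, fine plan only inside the corridor.
from collections import deque

def bfs(start, goal, passable, neighbors):
    """Generic breadth-first search; `passable` and `neighbors` are callbacks."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        pos, path = queue.popleft()
        if pos == goal:
            return path
        for n in neighbors(pos):
            if passable(n) and n not in seen:
                seen.add(n)
                queue.append((n, path + [n]))
    return None

def hierarchical_plan(grid, start, goal, block=4):
    """Plan over coarse blocks first, then refine only inside the chosen corridor."""
    rows, cols = len(grid), len(grid[0])
    free = lambda p: grid[p[0]][p[1]] != '#'
    coarse = lambda p: (p[0] // block, p[1] // block)
    crows, ccols = (rows + block - 1) // block, (cols + block - 1) // block

    def neighbors(bounds):
        def inner(p):
            return [(p[0] + dr, p[1] + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= p[0] + dr < bounds[0] and 0 <= p[1] + dc < bounds[1]]
        return inner

    def coarse_free(c):
        # a coarse block is passable if any fine cell inside it is free
        return any(free((r, col))
                   for r in range(c[0] * block, min((c[0] + 1) * block, rows))
                   for col in range(c[1] * block, min((c[1] + 1) * block, cols)))

    corridor = bfs(coarse(start), coarse(goal), coarse_free, neighbors((crows, ccols)))
    if corridor is None:
        return None
    allowed = set(corridor)
    # fine search restricted to the corridor; a practical planner would widen the
    # corridor or fall back to a full search if this restricted refinement fails
    return bfs(start, goal, lambda p: free(p) and coarse(p) in allowed, neighbors((rows, cols)))
```

Because the refinement only looks inside the coarse corridor, it explores a tiny fraction of the full map, which is exactly where the large speedup over a flat search comes from.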
Artificial Intelligence is grouped around alternatives to classical RRT planning and uses knowledge, reactive planning and heuristics to speed up the search in the game tree. The goal is to reduce the computational load to a minimum, which allows very complex problems to be solved on mainstream CPUs.
At first glance, quadtree-like planners don't look very powerful. But compared to a vanilla RRT algorithm they can speed up the search by a factor of 10,000 and more. That means that after starting the algorithm, the result is presented within milliseconds. The good news is that octree planners, reactive planning and cognitive architectures don't contain Artificial Intelligence in themselves; what they do is search the game tree faster. They produce the same action sequence for a robot as the vanilla RRT algorithm, but in a shorter amount of time. The initial problem is not how to make the robot smart; the robot is smart already, but the solver needs 100 days until it has determined the next move. AI means reducing the computation time to zero or almost zero.
