May 28, 2019

Search algorithms as an AI technique


In classical computer science, a search algorithm is used for graph traversal. The same idea can be utilized for all sorts of Artificial Intelligence algorithms; the only question is how to get better performance. Let me give an example. Solving a game of chess is a search problem, controlling a robot is also a search problem, and recognizing an image can be treated as a search problem too. The reason why controlling a robot is treated as an AI problem is that no search algorithm is available which can handle the task. A vanilla search algorithm which tests out all possible actions in the state space will result in a poor runtime: there are billions of potential trajectories, and only a handful of them will solve the issue.
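To make this concrete, here is a minimal sketch of such a vanilla search in Python: a breadth-first search that tests out all reachable positions in a toy maze. The 4x4 maze layout and all names are invented for illustration.

```python
from collections import deque

# A made-up 4x4 maze: S = start, G = goal, # = wall, . = free cell.
MAZE = [
    "S.#.",
    ".##.",
    "....",
    "#.#G",
]

def neighbors(pos):
    """Yield the follow-up positions of the four primitive moves."""
    r, c = pos
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] != "#":
            yield (nr, nc)

def vanilla_search(start, goal):
    """Breadth-first search over the raw state space."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        for nxt in neighbors(pos):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(vanilla_search((0, 0), (3, 3)))
```

On a 4x4 grid this finishes instantly, but the number of visited states grows rapidly with the size of the state space, which is exactly the runtime problem described above.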
The question isn't "what is AI?"; the question is how to improve the performance of a search algorithm. Some highly effective strategies are model-based search, macro actions and hierarchical search. All of them are able to reduce the search space drastically. They do not realize AI directly, but they speed up the search algorithm. That means a vanilla search algorithm for determining the trajectory of a robot may take 2 hours, while the same idea realized with a model-based search algorithm may take only 1 second of runtime.
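As a toy illustration of one of these strategies, the following sketch shows macro actions: several primitive moves are bundled into one search step, so the planner explores a shallower tree. The macro names and offsets are made up for this example.

```python
def make_macro(*moves):
    """Compose primitive (row, col) offsets into a single macro action."""
    def macro(pos):
        r, c = pos
        for dr, dc in moves:
            r, c = r + dr, c + dc
        return (r, c)
    return macro

# Hypothetical macros: each one covers two cells in a single search step,
# halving the depth at which the planner finds a plan.
go_down_2 = make_macro((1, 0), (1, 0))
go_right_2 = make_macro((0, 1), (0, 1))

print(go_down_2((0, 0)))   # (2, 0)
print(go_right_2((2, 0)))  # (2, 2)
```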
STRIPS
A well-known technique for increasing the speed of a robot path planner is STRIPS. The idea is to model high-level actions in a PDDL file. What STRIPS has in common with macro actions and hierarchical search is that it builds a model. A model can be understood as a game around the original game. Let me give an example. A robot should navigate in a maze; this is the description. Solving this game with a normal search algorithm isn't possible, because the state space of this game is too large. The answer is to invent a derivative game which has a small state space. This new game is the model. STRIPS is one possible technique for creating such a model. A STRIPS model consists of subactions which can be executed and which bring the model into a follow-up state.
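A hedged sketch of the idea in Python: a STRIPS action is defined by preconditions, an add list and a delete list over a set of facts. The fact and action names here are invented; a real planner would parse them from a PDDL file.

```python
def apply_action(state, preconditions, add_list, delete_list):
    """Return the follow-up state if the subaction is applicable, else None."""
    if not preconditions <= state:
        return None
    return (state - delete_list) | add_list

# Invented facts: the robot stands in room A and the door is open.
state = {"at-room-a", "door-open"}

# High-level subaction: move from room A to room B through the open door.
successor = apply_action(
    state,
    preconditions={"at-room-a", "door-open"},
    add_list={"at-room-b"},
    delete_list={"at-room-a"},
)
print(successor)  # {'door-open', 'at-room-b'}
```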
In the domain of game programming the term model isn't used. Instead the programmer calls it a game engine or rule engine. This module determines which actions are possible in the game: according to the game engine, the player can move up, down, left and right. If somebody knows the game engine of a game, he can solve it. So the problem is how to invent a game engine which fits reality and which has a small state space. Most AI-related projects discuss this single question. Without a model it is not possible to search the state space.
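A minimal sketch of such a rule engine, assuming a hypothetical 4x4 board: given a state, it reports which of the four moves are legal and what the follow-up state would be.

```python
GRID = 4  # a hypothetical 4x4 board

def legal_actions(pos):
    """The rule engine: which moves are possible in the current state?"""
    r, c = pos
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    return {name: p for name, p in moves.items()
            if 0 <= p[0] < GRID and 0 <= p[1] < GRID}

print(legal_actions((0, 0)))  # only 'down' and 'right' are legal in the corner
```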
Search for a plan
If an artificial life agent wants to execute actions, it needs a plan. A plan is a sequence of actions. The plan is executed inside a model. The model provides potential actions like up, left and right, and the plan is a sequence of these steps. An optimal AI system is able to determine the plan at runtime, but it can also change the model at runtime. Let us describe the idea in detail.
Everything starts with a challenge, for example that the robot should reach the goal of a maze. On top of the challenge a model is constructed on the fly. The model contains high-level actions. For the model a plan is generated, and the plan is executed by the robot, as in the sketch below.
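Putting the pieces together, here is a sketch of a planner that searches the hypothetical legal_actions() engine from the previous example for a sequence of named actions; the start and goal positions are made up.

```python
from collections import deque

GRID = 4  # the same hypothetical 4x4 board as above

def legal_actions(pos):
    """The rule engine: map action names to follow-up positions."""
    r, c = pos
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    return {name: p for name, p in moves.items()
            if 0 <= p[0] < GRID and 0 <= p[1] < GRID}

def plan(start, goal):
    """Search the model for a sequence of action names reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        pos, actions = frontier.popleft()
        if pos == goal:
            return actions
        for name, nxt in legal_actions(pos).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

# The resulting plan is what the robot executes step by step.
print(plan((0, 0), (3, 3)))  # ['down', 'down', 'down', 'right', 'right', 'right']
```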
A possible strategy for automatic model learning is "learning from demonstration". A human teacher demonstrates the actions. The robot does not simply repeat the actions; instead it creates a model from the demonstration. This allows the robot to search the model for a plan.
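A sketch of that idea in Python, with an invented demonstration: the robot records which action led from which state to which follow-up state, and this transition table becomes the model that the planner searches.

```python
# Invented teacher demonstration: (state, action, follow-up state) triples.
demonstration = [
    ((0, 0), "right", (0, 1)),
    ((0, 1), "down", (1, 1)),
    ((1, 1), "down", (2, 1)),
]

# Build the model: the robot does not store the action sequence itself,
# but the observed transitions.
model = {}
for state, action, successor in demonstration:
    model[(state, action)] = successor

# A planner can now query the learned model instead of the real robot.
print(model[((0, 0), "right")])  # (0, 1)
```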