August 04, 2019

What is the technology behind expert systems?


The literature on expert systems and general problem solving is large and mentions many ideas, for example the Lisp programming language, cognitive architectures and rule-based systems. The problem is to identify which subjects are actually important for realizing Artificial Intelligence, and which are not but are better described as a fashion of the 1980s and the subjective preferences of researchers.
The main problem with early AI research in the 1960s was that no modern desktop computers were available. If somebody in the 1960s or early 1970s wanted to run a piece of software, he simply couldn't do so. Operating systems like Windows 95 had not been invented, and interpreted languages like Python did not exist. Most of the early AI literature is a mixture of AI principles and computer science in general. A typical example is the LISP language, which was used for anything and nothing: Lisp was an operating system, self-modifying code, a programming language and an interactive environment. The first thing to do is to sort out one's tools. So we should ask directly: what is the basic principle of an expert system?
Basically speaking, it's not an algorithm but a text adventure which simulates something. The programmer has to construct a game engine, which is equal to a rule engine, and then he can send commands to the text adventure, either manually or with an automatic solver. This brings the game into a goal state. Understanding an expert system as a text adventure is the core idea of symbolic AI. It helps to simplify a problem into smaller tasks. The advantage is that any text adventure can be solved by a solver.
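To make the idea concrete, here is a minimal sketch of such a rule engine in Python. The gripper domain, the state variables and the function names are invented for illustration; the point is only that the "game engine" maps a (state, command) pair to a successor state.

```python
# Minimal rule engine: the expert system as a text adventure.
# State is a symbolic description; commands either produce a new
# state or are rejected as not applicable.

STATE = {"gripper": "open", "ball": "on_table"}

def apply_command(state, command):
    """Return the successor state for a command, or None if illegal."""
    state = dict(state)  # do not mutate the caller's state
    if command == "close_gripper" and state["gripper"] == "open":
        state["gripper"] = "closed"
        if state["ball"] == "on_table":
            state["ball"] = "in_gripper"
        return state
    if command == "open_gripper" and state["gripper"] == "closed":
        state["gripper"] = "open"
        return state
    return None  # command not applicable in this state

# Manual play: a human (or later a solver) sends a command.
print(apply_command(STATE, "close_gripper"))
```

Whether the commands come from a keyboard or from a solver makes no difference to the engine; that separation is exactly what makes the text adventure solvable.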
The next question is how to convert a given domain (for example a robot arm) into a text adventure. The answer has to do with human-machine interaction. A human operator is able to control the robot, and while he is doing so, his actions can be observed in a psychological experiment. Such studies go beyond computer science, because a psychological experiment has a lot to do with humans but only little with Turing machines. From a computer science perspective, such an experiment is equal to generating a dataset: a database with the recorded game log of the experiment. It is then up to the AI engineer to convert the game log into an expert system, which is equal to a text adventure.
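In code, such a dataset can be as simple as a list of timestamped (state, action) records. This is a hypothetical sketch; the states and actions are invented, and a real experiment would of course log richer sensor data.

```python
# Recording a human demonstration as a "game log" dataset.
import time

game_log = []

def record(state, action):
    """Append one observed step of the operator to the dataset."""
    game_log.append({"t": time.time(), "state": state, "action": action})

# During the experiment, every operator input is logged:
record({"arm": "home"}, "move_to_ball")
record({"arm": "at_ball"}, "grasp")

print(len(game_log), "steps recorded")
```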
Unfortunately, the rules of a human-machine interaction task are hidden in the dataset. They are not available as machine-readable instructions but are based on experience, natural-language instructions and general problem-solving capabilities. Creating an expert system doesn't mean inventing an AI algorithm; the algorithm is available by default. The more important goal is to invent a space in which actions can be executed. From a technical point of view, this attempt is called "model induction"; sometimes the result is called a forward model because it describes how the system works. Using a solver to bring an existing forward model into a goal state is not very hard. The algorithms are known, and in most cases a simple graph search technique is fast enough. The more demanding problem is that for most tasks the forward model is not known.
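The following sketch shows why the solver side is the easy part. Given any forward model as a successor function, plain breadth-first search finds an action sequence to the goal. The toy number domain and the pruning bound are invented for illustration.

```python
# A generic solver over a forward model: breadth-first graph search.
from collections import deque

def successors(state):
    """Toy forward model: yields (action, next_state) pairs."""
    yield ("inc", state + 1)
    yield ("double", state * 2)

def solve(start, goal):
    """Return a shortest action sequence from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            # prune states beyond the goal (valid in this toy domain only)
            if nxt not in visited and nxt <= goal:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(solve(1, 10))  # a shortest plan of inc/double actions
```

The solver never inspects how `successors` is implemented; that is the sense in which the algorithm is "available by default" while the forward model is the real work.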
The process of converting a human demonstration into a text adventure is called grounding. It's the core problem in AI because it allows improved human-machine communication. A grounded problem can be understood by both sides: for the computer, the text adventure consists of symbols which can be stored and manipulated in memory, and for the human, the game represents reality.
At first glance, the most important question for an expert systems programmer is how the expert system works internally. This question has a surprisingly simple answer: the internal working is not important. It works with a graph search algorithm, or a similar algorithm, which brings the current state into the goal state. In a primitive expert system the search algorithm consists of only 20 lines of code which test out the entire state space, and programming such an algorithm is not very complicated. The more demanding question is how an expert system perceives the environment. That means the human operator performs a task with the robot and the expert system monitors the actions. How exactly does the expert system identify a subaction, and what is shown on the screen as the detected event? A well-programmed expert system is first and foremost an activity recognition engine. It translates human activities into a machine-readable description.
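A very small activity recognition engine might look like the following sketch. The event names, the two-event window and the subaction labels are all invented assumptions; a realistic system would work on sensor streams, not clean symbols.

```python
# Hypothetical activity recognition: map low-level event pairs
# to named subactions shown to the user.

RULES = {
    ("gripper_close", "lift"): "pick up object",
    ("move", "gripper_open"): "place object",
}

def detect(events):
    """Slide a window of two events over the stream and label matches."""
    detected = []
    for pair in zip(events, events[1:]):
        if pair in RULES:
            detected.append(RULES[pair])
    return detected

events = ["move", "gripper_close", "lift", "move", "gripper_open"]
print(detect(events))  # ['pick up object', 'place object']
```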
Frameworks are not available
Even if some techniques for constructing expert systems are known, for example the CLIPS shell, the LISP programming language, the PDDL domain definition standard and means-end analysis for searching the state space, none of these techniques is needed in a robotics project. They can be called less important details and are not here to stay. But if Lisp, PDDL and all the other techniques are useless, which kind of framework, programming language or algorithm can be utilized for developing an expert system? Unfortunately, there is no such thing as a framework, but there is a software engineering workflow which consists of three simple steps:
1. create a simulation which is controlled by a human operator
2. create a plan recognition system
3. create a fully autonomous solver
The first step is easy to solve because it's equal to normal game programming. The idea is to use an existing programming language like C# and an existing game engine like Unity3d to create a standard game which takes the input of a human operator. Steps 2 and 3 are more complicated to realize. In most cases they have to do with observing humans who are doing a task and trying to formalize the steps in a text adventure. This text adventure is used to parse a demonstration, but it is the baseline for the automatic solver as well. Instead of recommending a concrete programming language or an algorithm, the better idea is to understand steps 2 and 3 as part of a software engineering process. They are handled with version control systems like git and get visualized with UML notation.
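Step 2, plan recognition, can be sketched as replaying a recorded demonstration through the game model and checking that every action is legal. The three-state domain below is invented; the same transition table would later serve as the forward model for the solver of step 3.

```python
# Sketch of plan recognition: parse a demonstration against the
# text-adventure model (an invented state -> {action: next_state} table).

MODEL = {
    "start":   {"approach": "near"},
    "near":    {"grasp": "holding"},
    "holding": {"lift": "done"},
}

def parse_demonstration(actions, state="start"):
    """Replay the observed actions; report the final state or the first error."""
    for step, action in enumerate(actions):
        if action not in MODEL.get(state, {}):
            return f"illegal action '{action}' at step {step}"
        state = MODEL[state][action]
    return f"demonstration ends in state '{state}'"

print(parse_demonstration(["approach", "grasp", "lift"]))
```

Because the parser and the solver share the same model, a demonstration that parses cleanly is by construction reproducible by the autonomous system.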