August 11, 2019

The symbol grounding problem is overestimated


A classic expert system works well when the facts are defined precisely. One fact might be that the robot is near the box; another might be that the box is oriented at 0 degrees. The expert system takes these facts as input and applies operators to them. Not every rule can fire at any given time, only the subset whose preconditions are satisfied. In game AI the same concept is known as a GOAP planner, because the solver searches for a sequence of actions that brings the system into a goal state.
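The idea can be illustrated with a minimal sketch in Python. The fact names, the two operators, and the goal below are invented for this example; they stand in for whatever symbols a real planner would use.

```python
# Minimal GOAP-style sketch: facts are symbols, operators have
# preconditions and effects, and the planner searches for a sequence
# of operators that reaches the goal state.

facts = {"robot_near_box": False, "box_angle_zero": True}

operators = [
    {
        "name": "move_to_box",
        "pre": {"robot_near_box": False},
        "effect": {"robot_near_box": True},
    },
    {
        "name": "grasp_box",
        "pre": {"robot_near_box": True, "box_angle_zero": True},
        "effect": {"box_grasped": True},
    },
]

goal = {"box_grasped": True}


def applicable(op, state):
    """An operator can fire only if all its preconditions hold."""
    return all(state.get(k) == v for k, v in op["pre"].items())


def plan(state, goal, depth=5):
    """Naive depth-limited forward search over operator sequences."""
    if all(state.get(k) == v for k, v in goal.items()):
        return []
    if depth == 0:
        return None
    for op in operators:
        if applicable(op, state):
            next_state = {**state, **op["effect"]}
            rest = plan(next_state, goal, depth - 1)
            if rest is not None:
                return [op["name"]] + rest
    return None


print(plan(facts, goal))  # ['move_to_box', 'grasp_box']
```

A production planner would use a heuristic search instead of this blind recursion, but the loop is the same: match preconditions, apply effects, repeat until the goal state is reached.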
According to some computer scientists, something is missing from that loop. They ask where the expert system gets all of its facts. In the literature this question is called the symbol grounding problem, because it concerns the connection between the environment and the facts in the expert system. But is this problem really so important? In most cases the transition from perception to the fact database is not very complicated. A sensor measures a value and the value is converted into a fact. Whether the robot is near the box can be determined by a single line of code, as the sketch below shows. Calling this transition a bottleneck that prevents expert systems from becoming useful tools is an exaggeration. The real problem is not converting a variable back and forth; the difficulty is drawing inferences from the given facts. Instead of focusing on the environment-to-sensor workflow, the more important part of the overall architecture is the expert system itself.
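The grounding step really can be a one-liner. In the sketch below the distance reading and the 0.5 meter threshold are assumed values, chosen only for illustration; a real robot would take them from its range sensor and its configuration.

```python
# Hypothetical sensor reading in meters; on a real robot this would come
# from a range sensor or the localization module.
distance_to_box = 0.34
NEAR_THRESHOLD = 0.5  # assumed threshold, for illustration only

facts = {}
# The entire "grounding" step: a numeric measurement becomes a symbolic fact.
facts["robot_near_box"] = distance_to_box < NEAR_THRESHOLD

print(facts)  # {'robot_near_box': True}
```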
