July 12, 2019

Understanding cognitive models and cognitive architectures


Anyone who has taken a closer look at Artificial Intelligence will have noticed the large body of literature on cognitive architectures. These are abstract models that describe human behavior. At first glance a cognitive architecture doesn't make much sense, because its internal structure works differently from ordinary computer science. In computer science the idea is to program a robot to do a task, which is done by implementing an algorithm in software. In a cognitive model, by contrast, there is no Python code and no algorithm at all.
Bridging this gap is not very hard. The key phrase for understanding cognitive models is “plan recognition in teleoperation”. The idea is not to program a robot with an algorithm, but to record a human's actions during a teleoperation task. A cognitive model was invented to track human-level intelligence, which means the precondition is that the human operator already has the ability to store information in memory, recognize objects, and make decisions.
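As a rough illustration of what plan recognition over such a recorded trace could look like, the following Python sketch matches an observed action sequence against a small library of plan templates. The plan library, the action names, and the recognize_plan function are invented for this example and are not taken from any particular cognitive architecture.

# Minimal sketch: infer the operator's goal from a recorded action trace.
# Each plan maps a goal name to the action sequence that achieves it.
PLAN_LIBRARY = {
    "stack_two_bricks": ["grasp_brick", "lift", "move_over_target", "release"],
    "clear_table": ["grasp_brick", "lift", "move_to_bin", "release"],
}

def recognize_plan(observed_actions):
    """Return the goal whose plan template overlaps most with the trace."""
    def overlap(plan):
        return sum(1 for a, b in zip(plan, observed_actions) if a == b)
    return max(PLAN_LIBRARY, key=lambda goal: overlap(PLAN_LIBRARY[goal]))

# The trace is recorded from the human operator, not written as an algorithm.
trace = ["grasp_brick", "lift", "move_over_target", "release"]
print(recognize_plan(trace))  # -> stack_two_bricks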
Let me explain the overall structure from the experiment's point of view. First, a human operator uses a teleoperated robot arm to stack bricks. The scene is monitored with cameras and converted into a gamelog. This information is fed into the cognitive model, which acts as a scene recognition and behavior recognition system for understanding the human behavior. There is no need to program an Artificial Intelligence at all, because the human operator provides the example decisions.
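A hypothetical sketch of this pipeline is given below: a camera observation is converted into a gamelog entry, and the entry is handed to a cognitive model that labels the behavior. The GamelogEntry fields, the frame_to_entry function, and the CognitiveModel class are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class GamelogEntry:
    timestamp: float
    gripper_pos: tuple     # (x, y, z) of the teleoperated arm
    brick_positions: list  # positions of the bricks in the scene
    command: str           # operator command, e.g. "close_gripper"

def frame_to_entry(frame, timestamp):
    """Stand-in for the vision step that turns a camera frame into a log entry."""
    return GamelogEntry(timestamp, frame["gripper"], frame["bricks"], frame["command"])

class CognitiveModel:
    """Consumes gamelog entries and labels the observed behavior."""
    def observe(self, entry):
        # a real system would run scene and plan recognition here
        return "grasping" if entry.command == "close_gripper" else "reaching"

model = CognitiveModel()
frame = {"gripper": (0.1, 0.2, 0.3), "bricks": [(0.4, 0.2, 0.0)], "command": "close_gripper"}
print(model.observe(frame_to_entry(frame, 0.0)))  # -> grasping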
When a human operator plays a game, a gamelog is created in the background. It contains information such as the player's position, the position of the enemy, and which key was pressed. The task of the cognitive architecture is to store this raw data in a buffer and then analyze the information on a semantic level.
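The two stages can be sketched as follows, assuming a simple game with 2D positions and WASD movement keys; the field names, the distance threshold, and the semantic labels are made up for the example.

import math

raw_buffer = []  # stage 1: store the raw gamelog entries

def log_event(player_pos, enemy_pos, key_pressed):
    raw_buffer.append({"player": player_pos, "enemy": enemy_pos, "key": key_pressed})

def semantic_events(buffer, near=2.0):
    """Stage 2: interpret the raw entries on a semantic level."""
    events = []
    for entry in buffer:
        if math.dist(entry["player"], entry["enemy"]) < near:
            events.append("player engages enemy")
        elif entry["key"] in ("w", "a", "s", "d"):
            events.append("player moves")
        else:
            events.append("player waits")
    return events

log_event((0, 0), (5, 5), "w")
log_event((4, 4), (5, 5), "space")
print(semantic_events(raw_buffer))  # -> ['player moves', 'player engages enemy']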