November 18, 2019

Execution monitoring as human-robot interaction

The main reason why the productivity of robots in real-world applications is perceived as low is that the human-robot interaction doesn't work well. The robot needs a human operator, and this prevents the human from doing a different task while the robot is online. Because a human operator produces costs, the robot doesn't improve the situation.

The issue can be solved with a clear definition of when a human operator is needed and when not. The decision is made by an execution monitoring unit. That is a device which recognizes whether the robot is working correctly or not. The monitor knows two different states: either the robot needs a human operator, or the robot works autonomously. The clear distinction between the two states reduces the costs drastically. The human operator produces costs only while the robot needs him. How often a human intervention is needed is up to the robot control software. If the software works well, the human operator is needed only seldom.
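To make this concrete, such a two-state monitor can be written down in a few lines. The following Python sketch is only an illustration; the names ExecutionMonitor, update and task_succeeded are invented for this post and don't come from any specific robot framework.

    class ExecutionMonitor:
        """Two-state monitor: the robot either runs autonomously or needs the operator."""
        AUTONOMOUS = "autonomous"
        OPERATOR_NEEDED = "operator needed"

        def __init__(self):
            self.state = self.AUTONOMOUS

        def update(self, task_succeeded):
            # As soon as a task check fails, request the human operator;
            # as long as everything succeeds, stay in the autonomous state.
            self.state = self.AUTONOMOUS if task_succeeded else self.OPERATOR_NEEDED
            return self.state

The robot control loop would call update() after every task and alert the operator only when the state switches to "operator needed".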

The distinction between the two situations makes sense because it allows us to formalize what the real productivity of a machine is. If a human operator is needed all the time, the productivity of the robot is low, because the human could do the task by himself much faster. The robot becomes productive only while no human intervention is needed, and this exact time span has to be measured.
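One simple way to measure that span is to log every working interval together with the information whether a human was needed, and then compute the autonomous fraction. The snippet below is only a sketch of this bookkeeping under the assumption that such a log exists; it is not an established metric from the literature.

    # Sketch: productivity as the fraction of time without human intervention.
    def autonomous_fraction(intervals):
        # intervals: list of (duration_seconds, human_needed) tuples
        total = sum(duration for duration, _ in intervals)
        autonomous = sum(duration for duration, human in intervals if not human)
        return autonomous / total if total > 0 else 0.0

    # Example: 50 minutes autonomous, 10 minutes with the operator -> about 0.83
    print(autonomous_fraction([(3000, False), (600, True)]))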

It's an illusion to imagine that a robot is able to do a task fully autonomously, because robotics tasks have the tendency to become complex and current software contains bugs. As a result, every robot ever designed by engineers will produce errors during runtime. It's not possible to invent a robot which doesn't need human intervention anymore. What is possible instead is a robot which knows whether it needs help or not. Every task can be monitored. It is possible to detect whether a pick&place task succeeded or not, and if not, something is wrong with the robot, the object or the software. It's important to understand that a robot can be highly productive but also unproductive. Low productivity means that the robot isn't working autonomously but a human operator has to adjust some knobs. Or, to explain it from the other point of view: there is no need to build fully autonomous robots. It's ok if the robot sometimes needs a human operator as support. The only thing that is needed is a clear distinction between both situations. Only if the human operator is allowed to stay away from the robot is the machine highly productive.
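Such a success check for pick&place can be as simple as comparing the observed object position with the goal position after the action. The following sketch assumes that some perception module already delivers the object position; the function name and the tolerance value are made up for illustration.

    # Hypothetical check whether a pick&place action succeeded.
    def pick_and_place_succeeded(object_position, goal_position, tolerance=0.02):
        # object_position comes from perception (e.g. a camera), goal_position from the plan.
        distance = sum((a - b) ** 2 for a, b in zip(object_position, goal_position)) ** 0.5
        return distance <= tolerance   # True -> keep running, False -> raise the warning

    # Example: the object ended up 1 cm away from the goal -> success
    print(pick_and_place_succeeded((0.50, 0.21, 0.05), (0.50, 0.20, 0.05)))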

Literature

The literature uses the term “execution monitoring” for evaluating whether planned robot actions can be executed in reality. The idea is to create a higher instance above the normal robot control system in order to increase robustness. This might be an interesting objective, but the more important reason for execution monitoring is to connect a robot system with a human operator. A robot system is never autonomous; it is embedded in a real-world scenario. That means the robot is doing something and at the same time a human operator observes what the robot is doing. The operator has the opportunity to stop the robot at any time.

Formal execution monitoring improves the human-robot interaction. It results in a robot that works in the background. This feature is important if a robot should be perceived as a tool which doesn't need constant human interaction. Let us describe the situation from the perspective of a human user. He starts the robot, and after five minutes a warning signal becomes visible. This means that something went wrong with the robot. The warning signal of the execution monitor is part of the human-machine interaction. It triggers a time slice in which a higher amount of human attention is needed. After the warning has been cleared, the robot operates in background mode again, that means, no human operator is needed.
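This cycle between background mode and attention mode can be written as a small control loop. The sketch below reuses the hypothetical ExecutionMonitor from above; notify_operator and operator_cleared stand in for whatever signaling the real system has, for example the warning light and an acknowledge button.

    # Sketch of the background/attention cycle driven by the monitor.
    def run_with_monitoring(tasks, monitor, notify_operator, operator_cleared):
        for task in tasks:
            ok = task()                        # execute one robot task
            if monitor.update(ok) == monitor.OPERATOR_NEEDED:
                notify_operator()              # warning signal: human attention required
                operator_cleared()             # blocks until the human has fixed the problem
                monitor.state = monitor.AUTONOMOUS
            # otherwise: background mode, no human operator is needed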

From the standpoint of maximizing productivity, the human operator is interested in not getting alarmed by the robot. The desired situation is that the robot works alone and the human operator can do something different. In such a case the human operator and the robot are disconnected.

Stack light

Wikipedia knows the term “stack light” https://en.wikipedia.org/wiki/Stack_light for describing the status LED of an industrial machine. The interesting point is that the red, green and yellow lights don't improve the working of the CNC machine itself, but they signal its state to humans. The human operator knows in which case he needs to investigate a failure. The related term “andon light” comes not from a technical but from a management perspective of how to organize the workflow at the assembly line.
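From the perspective of an execution monitor, the stack light is just an output device: every monitor state is mapped to a color. The mapping below is hypothetical, including the assumption that yellow is used for a machine which is idle and waiting.

    # Hypothetical mapping from monitor state to stack light color.
    STACK_LIGHT_COLORS = {
        "autonomous": "green",        # machine runs, no attention needed
        "operator needed": "red",     # failure, the operator must investigate
        "waiting": "yellow",          # e.g. machine idle, waiting for material (assumption)
    }

    def stack_light_color(state):
        return STACK_LIGHT_COLORS.get(state, "red")   # unknown state -> show red to be safe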

In a tutorial about stack lights it was explained why these additional signals are needed. A normal CNC machine is equipped with a console which allows the human operator to start, stop and adjust the machine. The assumption behind a console is that the human operator stands in front of the machine; that is, it's an interactive device. The problem is that, from a productivity point of view, a human operator can't sit near the console 24/7; after a while he wants to go away.