Most robotics projects have failed. The engineers have developed a sophisticated planning system which does something, but in reality the robot isn't even able to grasp an object. The vision of a helpful robot that increases productivity can't be realized with today's technology, and the engineer has to admit that he has invented nothing and that AI is out of reach.
What is missing in the development path is an incremental improvement from a low-tech system that works to an advanced system that works better. A possible innovation chain consists of the following steps:
1. Activity recognition (cameras track a scene, no robot is available)
2. Teleoperation (a human operator controls a robot arm with a joystick)
3. Fully autonomous robot (no human intervention is needed)
The first step on the list is easy to realize. It is a normal video surveillance camera which recognizes objects in the scene. If an apple is visible on the table, the system generates the string “apple on the table”. Such a system differs from the usual understanding of a robot, because if the apple has to be grasped, a human has to do the job with his own hands. There is no robot at all and productivity stays the same.
The reason why it makes sense to develop a recognition-only system first is that the technical requirements are lower. In most cases, an OpenCV module plus a neural network is enough to implement an activity parser. And if the software makes a mistake, nobody cares, because the task of grasping the object is still done by humans.
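How small such a recognition-only pipeline can be is shown in the following minimal sketch. It assumes the apple can be found by simple color thresholding; a real activity parser would plug a trained neural network into the same place, but the overall structure stays the same.

# Minimal activity parser sketch: detect a red, apple-like blob in a camera
# frame and emit a textual event describing the scene.
import cv2

def parse_frame(frame):
    """Return an event string describing what is visible in the frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in HSV, so two ranges are combined.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    if cv2.countNonZero(mask) > 500:        # enough red pixels -> object present
        return "apple on the table"
    return "no object detected"

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)               # the surveillance camera
    ok, frame = cap.read()
    if ok:
        print(parse_frame(frame))
    cap.release()

If the threshold fires wrongly, the worst case is a wrong string on the screen; the human who does the actual grasping is not affected.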
The next step is a bit more advanced. In the teleoperation mode, a robot arm is available. This arm can't move on its own but has to be controlled by a human, very similar to what the operator of a hydraulic excavator does. If he moves the joystick forward, the end effector follows with a delay. Such a system is a bit more complicated: the scene must be understood by the vision system, and the joystick signals have to be transmitted to the hydraulic arm.
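A rough sketch of the joystick-to-arm coupling is given below. It assumes pygame for reading the joystick; send_velocity() is a hypothetical stand-in for whatever bus (serial, CAN, a ROS topic) the real arm controller listens on.

# Teleoperation sketch: map two joystick axes to end-effector velocities.
import time
import pygame

def send_velocity(vx, vy):
    """Hypothetical placeholder: forward the command to the hydraulic arm."""
    print(f"end-effector velocity: vx={vx:+.2f} vy={vy:+.2f}")

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

for _ in range(200):                      # run for about 10 seconds at 20 Hz
    pygame.event.pump()                   # refresh the joystick state
    vx = -stick.get_axis(1) * 0.1         # forward/back axis -> m/s (inverted)
    vy = stick.get_axis(0) * 0.1          # left/right axis -> m/s
    send_velocity(vx, vy)
    time.sleep(0.05)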
Only in the last step is a fully autonomous robot realized. The teleoperation setup gets improved by an additional software module which removes the need for human intervention. A fully autonomous robot is a kind of advanced teleoperation system: the software has the built-in activity parser from step 1, the joystick-controlled robot arm from step 2, and on top a planner which determines the next steps.
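The layering can be made explicit in code. In the sketch below, perception and actuation are passed in as callables, so the hypothetical parse_frame() and send_velocity() from the earlier sketches can be plugged in unchanged; only the toy planner is new.

# Autonomy sketch: the planner replaces the human operator.

def plan_next_action(event):
    """Toy rule set: map a recognized scene event to a motion command."""
    if event == "apple on the table":
        return (0.1, 0.0)       # drive the end effector toward the table
    return (0.0, 0.0)           # nothing recognized -> stay put

def autonomous_step(frame, parse_frame, send_velocity):
    """One cycle of the loop: perceive (step 1), plan, act (step 2)."""
    event = parse_frame(frame)            # activity parser from step 1
    vx, vy = plan_next_action(event)      # planner instead of a joystick
    send_velocity(vx, vy)                 # same actuation path as teleoperation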
An interesting question is how to go from the teleoperated robot of step 2 to the autonomous robot of step 3. One option is to use “learning from demonstration” (LfD) [1]. LfD amounts to recording and playing back a human control sequence. The idea is that subtasks like grasp-object or reach-object are already given as a model, and the imitation learning module has to repeat the task. Let us give an easy example. In the demonstration phase, a human user drives an RC car along a certain path. The trajectory is stored, and in replay mode this trajectory is used as the goal for the automatic controller. The robot will follow the same path without human intervention.
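In its plainest record-and-replay form, this fits into a few lines. The sketch below assumes hypothetical get_pose() and send_velocity() callbacks for the vehicle; [1] describes a much richer deep-learning variant of the same idea.

# Learning from demonstration as record and playback: the human-driven
# trajectory becomes the reference for a proportional controller.
import math

recorded_path = []              # trajectory captured during the demonstration

def record(x, y):
    """Demonstration phase: store each pose the human operator drives to."""
    recorded_path.append((x, y))

def replay(get_pose, send_velocity, gain=1.0, tol=0.05):
    """Replay phase: a proportional controller chases every stored waypoint."""
    for tx, ty in recorded_path:
        x, y = get_pose()
        while math.hypot(tx - x, ty - y) > tol:   # not yet at the waypoint
            send_velocity(gain * (tx - x), gain * (ty - y))
            x, y = get_pose()
    send_velocity(0.0, 0.0)                       # stop at the end of the path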
If the idea of path following is scaled up to manipulation tasks, the concept becomes more powerful. An object manipulation also produces a path, because the joints of the robot arm trace out a spline in the joint coordinate system. If the robot arm moves upward, the angle of the servo motor grows from 10 degrees to 70 degrees.
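Seen this way, a manipulation is just a sequence of joint-angle samples over time. The 10 to 70 degree sweep below is the assumed toy lifting motion from the paragraph above; a real arm would store all joints per timestep.

# Joint-space view of a recorded manipulation.
import numpy as np

timesteps = np.linspace(0.0, 1.0, 7)              # normalized time
shoulder_deg = 10.0 + (70.0 - 10.0) * timesteps   # linear ramp from 10 to 70 degrees
trajectory = list(zip(timesteps.round(2), shoulder_deg.round(1)))
print(trajectory)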
An interesting intermediate step between teleoperation and autonomy are teleoperation systems which are passive and have a time delay. An example is a crane whose end effector swings chaotically depending on the wind. If the operator pushes the joystick forward, the crane reacts with a delay and in most cases not precisely. The result is that the coordination between the human and the crane becomes more complicated. This amounts to a need for an intelligent teleoperation system.
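Such a coupling can be modeled as a dead time followed by a first-order lag plus a wind disturbance. Delay length, inertia and the wind term in the sketch are assumed values for illustration only.

# Toy model of the delayed, imprecise crane response.
from collections import deque

class DelayedCrane:
    def __init__(self, delay_steps=10, inertia=0.9):
        self.buffer = deque([0.0] * delay_steps)   # dead time of the transmission
        self.inertia = inertia                     # how sluggish the end effector is
        self.velocity = 0.0

    def step(self, joystick_cmd, wind=0.0):
        """Feed in the current command, get back what the crane actually does."""
        self.buffer.append(joystick_cmd)
        delayed_cmd = self.buffer.popleft()        # command from delay_steps ticks ago
        self.velocity = (self.inertia * self.velocity
                         + (1.0 - self.inertia) * (delayed_cmd + wind))
        return self.velocity

Driving this model with a constant forward command shows the operator's problem: nothing happens for ten ticks, and afterwards the motion drifts with the wind, which is exactly why either operator training or an intelligent assistance layer is needed.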
[1] Zhang, Tianhao, et al. "Deep imitation learning for complex manipulation tasks from virtual reality teleoperation." 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.
Truly intelligent robots need a Volition module controlling a Motorium module.
@Mentifex “plan recognition” can be extended with a cognitive architecture which provides information about beliefs, goals and volitional actions. In contrast to AI planning which is handled with the PDDL syntax, cognitive architectures aren't standardized. The concept remains fuzzy.