Until around the year 2010, a certain paradigm about the inner workings of a robot was widespread in the philosophy of computer science. The idea was derived from the science fiction novels of Isaac Asimov and was based on the notion of an independent robot that is not under the control of a human operator but makes its own decisions. In most if not all science fiction stories about humanoid robots, the robots have their own brain which allows them to analyze a situation, make decisions and take actions. These fictional robots have much in common with animals in nature, which are also independent beings with a will of their own.
Engineers of the past tried to realize this idea in technology, namely in hardware and software. The goal was to program a closed system which makes decisions on its own. Concrete realizations can be seen in early self-driving cars and early maze-solving robots that operate in autonomous mode.
Despite the large amount of effort invested in these robots, the concept of autonomous robotics has failed. The typical autonomous car programmed before the year 2010 was powered by millions of lines of code but wasn't able to solve simple navigation tasks. The bottleneck is not located in a particular software architecture; it has to do with the idea of autonomy itself. This idea prevents the development of advanced artificial intelligence, which does not work independently of a human operator but assumes teleoperation, and especially text-based teleoperation.
Solving the so-called "instruction following" task in robotics is much easier than implementing autonomous robots. Instruction following basically means that the robot gets its instructions from a human. For example, the robot grasps the ball because the human operator presses the button for "grasp the ball".
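To make this concrete, here is a minimal sketch of what such an instruction-following loop could look like in code. The `Robot` class and its methods are purely hypothetical placeholders, not a real robot API; the point is only that every action is triggered by a human-issued command rather than by the robot's own decision making.

```python
# Minimal instruction-following sketch. The Robot class and its methods
# are hypothetical placeholders, not a real robot API.

class Robot:
    def grasp(self, obj):
        print(f"grasping the {obj}")

    def release(self, obj):
        print(f"releasing the {obj}")

    def move_to(self, place):
        print(f"moving to the {place}")


def dispatch(robot, instruction):
    """Map a human instruction (button press or typed command) to a robot action."""
    command, _, argument = instruction.partition(" ")
    actions = {
        "grasp": robot.grasp,
        "release": robot.release,
        "moveto": robot.move_to,
    }
    if command in actions:
        actions[command](argument)
    else:
        print(f"unknown instruction: {instruction}")


if __name__ == "__main__":
    robot = Robot()
    # The human operator presses the button for "grasp the ball":
    dispatch(robot, "grasp ball")
    dispatch(robot, "moveto table")
    dispatch(robot, "release ball")
```

The entire "intelligence" sits in the lookup table and in the human who chooses which command to send, which is exactly the point of the paradigm described here.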
Such a remote-controlled robot can't be called intelligent anymore; it is a tool, similar to a crane, which is also operated by levers pressed by a human. The goal of building autonomous robots only makes sense in science fiction novels, but it is bad advice for implementing robots in reality. Real robotics is based on teleoperation and voice commands.
The beginning of modern teleoperated robotics can be traced back to a single talk, held by Edwin Olson in 2010.[1] He explained to the perplexed audience that his robots do not work with software or algorithms, but are teleoperated with a joystick. Olson claims that such a control paradigm is harder to realize than classical algorithm-based robot control.
To understand why the audience at this 2010 talk was upset, we have to listen to what Olson said exactly. In the introduction he made a joke about former attempts at realizing robotics, especially the idea of writing large amounts of software to implement algorithms. These large-scale software-based robots were seen by most computer scientists as the paradigm that was here to stay, and questioning it in the year 2010 was blasphemy. In simpler words, Olson basically said that all the sophisticated motion planning algorithms, developed in thousands of lines of code with an endless amount of man-hours, are useless, and that his robots are controlled by a joystick, which is more efficient. Some people in the audience assumed that Edwin Olson is not a computer scientist but a comedian, and perhaps they are right.
Edwin Olson didn't mention natural language as a source for robot control in his talk; he focused only on joystick control. His talk concentrates on the difference between autonomous robots and teleoperated robots.
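For readers who want a concrete picture of what joystick teleoperation means in software, the following sketch maps joystick axes directly to velocity commands. The `read_joystick` and `send_velocity` functions are hypothetical stand-ins for whatever driver and drive interface is actually in use, and the velocity limits are assumed values; the key property is that there is no planner or world model in the loop, only the human operator.

```python
import time

# Hypothetical teleoperation loop: joystick axes are mapped directly
# to velocity commands, with no planning or autonomy in between.

MAX_LINEAR = 0.5   # m/s, assumed limit of the (hypothetical) robot base
MAX_ANGULAR = 1.0  # rad/s, assumed limit

def read_joystick():
    """Placeholder for a real joystick driver; returns two axes in [-1, 1]."""
    return 0.0, 0.0

def send_velocity(linear, angular):
    """Placeholder for the robot's drive interface."""
    print(f"cmd: linear={linear:.2f} m/s, angular={angular:.2f} rad/s")

def teleop_loop(rate_hz=20, duration_s=1.0):
    steps = int(rate_hz * duration_s)
    for _ in range(steps):
        forward_axis, turn_axis = read_joystick()
        # Scale the raw axis values to the robot's velocity limits.
        send_velocity(forward_axis * MAX_LINEAR, turn_axis * MAX_ANGULAR)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    teleop_loop()
```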
[1] Edwin Olson: "Winning the MAGIC 2010 Autonomous Robotics Competition", https://www.youtube.com/watch?v=OuOQ--CyBwc
December 03, 2025
The myth of autonomous robotics
Labels: Teleoperation