November 04, 2019

How to build a household robot

At first glance, it looks like a purely technical problem which can be solved with servo motors, an onboard microcontroller, a vision system and an advanced artificial intelligence controlled by a neural network. A closer look shows that the technical side isn't the bottleneck; it is a social issue. A household robot is not a machine, but a human who has been trained to do the dishes.

The question is not how to program software which can do the job autonomously, but how to motivate humans to become household service robots. What is the motivation of an individual to obey the commands of a higher authority? Which strategies prevent humans from cleaning up the kitchen?

These questions can be answered with sociological and psychological models of human thought. The phenomenon of human robots is seldom researched because most visions assume that mechanical household robots will be available within 10 years. No, they won't. Because of the productivity paradox it's not possible to construct a machine which is able to work in the kitchen. The only thing that is possible is to educate humans to do the job manually. The underlying technology is neither an expert system nor a compiled language like C++; it is a simple paper-based schedule, group management and individual rewards for humans who are motivated to do repetitive tasks.
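The "technology" named above, a paper-based schedule plus group management plus individual rewards, can be sketched as a toy data structure. This is a hypothetical illustration; the task names, member names and point values are invented for the example, not taken from any real system:

```python
from collections import defaultdict

# Toy chore rota: tasks rotate round-robin among household members,
# and finished tasks earn points (the "individual rewards").
TASKS = ["do the dishes", "clean the kitchen", "take out the trash"]
MEMBERS = ["Alice", "Bob", "Carol"]

def weekly_schedule(week):
    """Assign each task to a member, rotating by week number."""
    offset = week % len(MEMBERS)
    return {task: MEMBERS[(i + offset) % len(MEMBERS)]
            for i, task in enumerate(TASKS)}

points = defaultdict(int)

def complete(task, week):
    """Record a finished task and credit the assigned member."""
    member = weekly_schedule(week)[task]
    points[member] += 1
    return member
```

The point of the sketch is how little machinery is involved: a rotation rule and a score counter replace the entire robotics stack.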

This kind of description stands in contrast to how classical technology is described. A hammer, for example, is defined as a tool which extends the abilities of a human. With a hammer he can do more interesting tasks than without such a device. The naive assumption is that the same human-machine relationship can be transferred to upcoming robots. That means a household robot would be sold like a vacuum cleaner, and every household could buy such a device. Unfortunately, this outlook is not realistic. It has to do with the man-machine interface.

Let me give the details. Suppose a company has developed a household robot and sells it to the masses. The bottleneck of the machine is communicating with a human about the next task. By definition, a household robot is a very complex machine, far more complicated than a hammer, and this makes the communication much harder. Talking to a mechanical hammer is not hard; talking to a household robot is a different subject.

Suppose the requirement for a household robot is that the machine parses natural language and converts it into actions in the real world. The human owner can enter the command "clean the room" and the robot will do so. Programming such an interface is a complicated task, and only a humanoid robot needs such a high-level interface. The interesting point is that without such an interface, the robot won't fulfill the requirement of being a household robot. And here the problem is located: the complexity of the human-machine interface prevents the machine from helping the human. There are some examples of laboratory household robots which are able to clean up the kitchen, but none of these machines works the right way. What they have in common is that they do not act as autonomous systems; what the engineers have built instead are robots which teach humans to clean the kitchen. They are educational machines, not robots.
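To see why even the most minimal version of such an interface is already a sense-making problem, consider this sketch. The function names and the command table are hypothetical; a real robot would need vastly more than a keyword lookup, which is exactly the point:

```python
# A minimal command interface: a fixed table maps phrases to action
# routines. Anything outside the table is already a failure case,
# and that is where the interface complexity explodes.
def clean_room():
    return "navigating, grasping, wiping ..."

COMMANDS = {
    "clean the room": clean_room,
    "do the dishes": lambda: "loading the dishwasher ...",
}

def execute(utterance):
    action = COMMANDS.get(utterance.strip().lower())
    if action is None:
        # The robot cannot make sense of the human's request.
        return "error: command not understood"
    return action()
```

Everything interesting about a household robot lives outside this table, in the unbounded space of utterances that fall through to the error branch.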

This bottleneck can't be overcome by better technology. No matter which software is used to program a household robot, the human-machine interface will look the same. The interface is a sense-making system: it connects the world of humans with the world of machines.

To describe this issue in detail we have to take a step back. In most cases, the engineers want to increase productivity. The idea is that the human is no longer forced to clean the kitchen himself but can delegate the task to a machine; the robot is only a tool for increasing productivity. So far, that is the vision. The problem is that from a technical perspective a household robot has to look like a robot. If the device has no vision system and no legs, it won't be able to do the job properly. A machine which looks like a robot needs a complex interface to control it. This produces complexity the engineers never wanted. They try to design the interface in a certain way, hoping to improve man-machine communication, but they will fail. In the end, the household robot becomes a household game in which the robot explains to the human what the world looks like and what to do next.

It's very similar to a situation in which a dog has trained its owner to do everything for it. Dogs are very good at this, and robots can do the same. Let us take a look at the latest experiments with self-driving cars. What we can observe is that lots of humans are devoted to the machine. They create marketing campaigns, improve the software and explain to the public how wonderful the future will be. The self-driving car forces humans to become robot drivers. That is the only thing the technology can achieve.

Real world examples

Many robots are available in the year 2019. Notable examples are self-driving cars, Lego Mindstorms robots, UAVs, legged robots which can walk, and factory robots. None of these devices improves productivity. That means, from an economic standpoint, it makes no sense to use these machines. Not because the technology is not developed enough; the latest legged robots are equipped with advanced hardware and software. Rather, the productivity paradox has to do with the human-machine interface. Simply put, it is not possible to give a robot a command the way one gives a command to a human. What advanced Artificial Intelligence has to offer is the opposite: it creates a virtual cave for the human owner.

Let me give an example. Suppose the end user has bought the Mindstorms EV3 kit in the hope of doing something useful with the system. From a purely technical perspective the robot is great: lots of tutorials are available on the Internet, and it comes with well-working servo motors. The question is how exactly the human has to interact with the robot. It is not the case that the robot can simply be programmed to serve; rather, the robot is an environment in which the human operates. The EV3 brick can be programmed in a certain way, and the hardware can be utilized for certain needs. The good news is that for Mindstorms robots the manufacturer doesn't hide the fact that it is an educational robot. That means the main idea is that the human owner will spend his time learning how to program the machine. Once he has figured out all the details, the project is over.
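To make the "environment" character of the kit concrete: programming an EV3 amounts to scripting motor commands, roughly as in the following sketch. This is a simulated stand-in written for this article, not the real firmware API; on actual hardware one would use a library such as ev3dev's Python bindings:

```python
# Simulated EV3-style motor. The owner's "useful" project is really
# an exercise in learning an API like this one, which is the
# educational point the manufacturer openly admits.
class Motor:
    def __init__(self, port):
        self.port = port
        self.position = 0  # accumulated rotation in degrees

    def run_to_rel_pos(self, degrees):
        """Rotate the motor by a relative number of degrees."""
        self.position += degrees
        return self.position

left = Motor("outA")
right = Motor("outB")

def drive_forward(degrees):
    """Drive straight by turning both wheels the same amount."""
    return left.run_to_rel_pos(degrees), right.run_to_rel_pos(degrees)
```

Nothing here cleans a kitchen; the program's only output is the owner's growing familiarity with the programming model.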

The same interaction pattern is visible for self-driving cars. The only difference is that these cars are labeled not as educational robots but as professional robots which can improve traffic. No, they can't. Similar to Lego Mindstorms, the owner of a self-driving car will waste his time. He has to read through the manual, and once he has understood the car completely, the project comes to an end.

Educational robots work on the principle that the owner invests lots of money and lots of hours and gets nothing in return. The reason is not hard to explain: a complex robot results in a complex human-machine interface, and a complex interface means lower productivity.