November 30, 2019
Receiving downvotes is not funny
The Stackexchange software which also powers the SE.AI forum allows users to upvote answers, but the opposite is available as well. Sometimes so-called serial voting takes place. That means not just a single posting receives a negative rating, but a handful of them. Unfortunately, it's not possible to trace back the origin of these rapid clicks; perhaps it was a bot, perhaps a real user. Nobody cares.
This is the dominant reason why Wikipedia doesn't support upvoting of articles. The added value of a poll about information is low, and my advice for Stackexchange is to deactivate the voting feature altogether. Perhaps I can tell a story from my own blog. The Wordpress system also has a polling feature: users can vote on an article and decide whether its quality was poor or very good.
November 28, 2019
Was the inverse kinematics problem solved?
At first glance, the inverse control of a robot arm is one of the easier problems to tackle within Artificial Intelligence. The desired joint angles can be determined with ordinary sine and cosine calculations, and legs and arms can be controlled with the same algorithm. The more interesting question is what the robot should do once the inverse kinematics is working.
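To make the claim about sine and cosine concrete, here is a minimal sketch of an analytic IK solver for a planar arm with two links. It is an illustration only: the link lengths l1 and l2 and the target coordinates are assumptions, and a real 6-DOF arm needs a more general solver.

import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    # Analytic inverse kinematics for a planar 2-link arm (elbow-down solution).
    # Returns the joint angles (theta1, theta2) in radians for the target (x, y).
    d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(d) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(d)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(ik_2link(1.2, 0.8))  # point the gripper at (1.2, 0.8)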
A short look into a robotics forum shows that inverse kinematics can become more complicated than imagined. The same question about inverse kinematics is asked many times a week, each time the answer is a bit different, and the bugs can't be fixed easily. It seems that inverse kinematics deserves a higher priority than expected, and at the same time the value of a fully working inverse kinematics is higher than expected.
If a robot arm is able to point towards a desired point in space, most of the planning problem is solved. Let us imagine a 6-DOF robot arm which provides nothing but an accurate IK solver. Such a robot arm can be used for a variety of tasks: it can become a pick-and-place robot, a tennis-ball-playing robot, a robot leg, a finger and much more.
A naive literature search for the so-called inverse kinematics problem shows that many researchers have investigated the details in the past. A lot of papers have been written about this seemingly simple problem, and the number of possible solver techniques is amazing. Perhaps inverse kinematics is the core problem in robotics? What we can say for sure is that a simple FAQ which gives a formula for controlling a robot arm won't treat the problem with the priority it needs. Apart from simple IK solvers there are many approaches in which an underactuated robot arm points towards a position in 3D space, and the solver technique is far more complicated than calculating a simple sine equation.
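As a hedged illustration of what a more general solver looks like, the following sketch applies the Jacobian-transpose method to the same 2-link toy arm. The step size alpha and the iteration count are assumptions; the point is only that iterative numerical solvers replace the single sine formula.

import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    # Forward kinematics: end-effector position for the joint angles theta.
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def jacobian(theta, l1=1.0, l2=1.0):
    t1, t2 = theta
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)]])

def ik_numeric(target, theta=None, alpha=0.05, iters=2000):
    # Jacobian-transpose iteration: nudge the joints a small step in the
    # direction that reduces the end-effector error.
    theta = np.zeros(2) if theta is None else theta
    for _ in range(iters):
        theta = theta + alpha * jacobian(theta).T @ (target - fk(theta))
    return theta

print(ik_numeric(np.array([1.2, 0.8])))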
I don't want to claim that an IK solver alone is able to control a robot, but the IK problem is a kind of initial obstacle before somebody can develop the software for a robot. If the IK problem remains unsolved, the robot project as a whole will fail. To get an impression of the challenge, we have to estimate how long it takes until an IK solver is programmed. Is it a project for a single day, for a month, or can it only be handled as a large-scale programming challenge with lots of scientists? The last one is the right answer: programming an IK solver takes many programmers working together for years on the same issue.
GUI development with Linux
The most convenient way to create graphical applications for a Linux desktop is the Python Tkinter framework. In around 50 lines of code, it's possible to create a hello world window which includes a button and a text entry widget. The interesting fact is that the potential alternatives to this technology are much more complicated.
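A minimal sketch of such a hello world window is shown below; it assumes a desktop system where the Tk bindings for Python (python3-tk) are installed.

import tkinter as tk

def on_click():
    # Read the entry widget and greet whatever name was typed in.
    label.config(text="Hello, %s!" % (entry.get() or "world"))

root = tk.Tk()
root.title("Hello world")

entry = tk.Entry(root, width=30)
entry.pack(padx=10, pady=5)

button = tk.Button(root, text="Greet", command=on_click)
button.pack(padx=10, pady=5)

label = tk.Label(root, text="")
label.pack(padx=10, pady=5)

root.mainloop()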
A possible alternative would be to use Python and GTK to create a native Linux desktop GUI app. The problem with GTK under Python is that the amount of documentation is low; I tried to create a simple hello world GUI and failed. The problem is not completely new, because the combination of C++ and GTK is also hard to master. The reason is hard to explain, but what is certain is that Python plus Tkinter is much easier to learn.
Let us take a look at what Windows programmers are doing. The normal way to create a GUI app is the C# language, which comes with a strong GUI library. Creating a C# GUI app on Windows is as easy as with Tkinter in Python. The difference is that a C# program can be used for production code, while Python is more of a prototyping language.
The hypothesis is that right now no good GUI frameworks are available for Linux. The standard C++ language has no built-in GUI library, and using GTK is too complicated. It seems that from the OS development side it's hard to create and document a GUI framework that normal programmers will use. A naive approach would be to use the C# language on Linux as well, but unfortunately the Windows Forms library only works in Windows environments.
The reason why Linux desktops are not equipped with a more elaborate GTK framework is simple: it's part of the anti-RHEL conspiracy. Microsoft, Adobe and all the major computer journalists are boycotting the Linux ecosystem, and the small Red Hat company is not strong enough to build its own GUI framework. The problem is that GUI toolkits are patent protected, so developing an easy-to-use framework from scratch is not allowed. In theory, Red Hat could develop a more elaborate GTK toolkit which is easy to use from Python and C++, but as a consequence Microsoft would start a legal conflict against Linux, and the result would be that no Linux operating system would be available at all anymore.
The simple advice for programmers is to stay within the Microsoft environment. It's the easier operating system to use for GUI development, and C# under Windows can be seen as the best-practice method for programming a GUI. As a consequence, many native Windows applications have been developed by programmers, while under Linux only textual programs are available.
November 27, 2019
The secrets of ticket escalation
In most documentation for OTRS, Jira Helpdesk and Zendesk, so-called ticket escalation is explained as a step-by-step procedure. The first-level agent should press certain buttons, and then the ticket gets escalated. It seems that the overall procedure is not easy to explain, so it makes sense to address the workflow in a step-by-step tutorial.
First, it's important to describe a naive understanding of ticket escalation.
In the example, the external customer sends a request to the organization, and the first-level agent hands the ticket to somebody else who is higher in the hierarchy. This kind of description is given in some of the weaker tutorials. It's a naive approach because it doesn't assume that conflicts can arise in the escalation process. According to this chart, the term escalation only means that the first-level agent sends a ping to the second-level agent, and if the second-level agent doesn't want to answer the ticket, it goes back to the first-level agent.
In the second chart, a more elaborate form of ticket escalation is presented. The position of the customer is not below the organization but above it. As in the first case, escalation means assigning a ticket to somebody who is higher in the hierarchy, but this time the definition of the hierarchy is different.
Let us describe the workflow step by step. The process is initiated by the customer, who sends a request to the organization. The customer doesn't ask for help; he gives the organization a chance to help him. It's a very demanding customer who doesn't tolerate an excuse. Now the first-level agent has two choices. He can escalate the conflict, which means sending the request back to the customer; the result is a conflict between the first-level agent and the customer. Or the first-level agent can decide to de-escalate the problem towards the second-level hierarchy.
The definition of the term “ticket escalation” is so complicated because two opposite descriptions are in circulation. Picture 1 shows a pseudo-escalation process in which a conflict is not possible, while picture 2 presents a real ticket escalation which assumes and produces stress within the organization.
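As a purely illustrative sketch, the two readings can be contrasted in a few lines of Python. The class and function names are made up for this example; real ticket systems such as OTRS, Jira or Zendesk do not expose such an API.

class Ticket:
    def __init__(self, text):
        self.text = text
        self.owner = "first_level_agent"

def escalate_picture1(ticket):
    # Naive reading: escalation = hand the ticket up the internal hierarchy.
    ticket.owner = "second_level_agent"
    return ticket

def escalate_picture2(ticket):
    # Second reading: escalation = push the request back to the customer,
    # which creates a conflict; handing it to the second level would be
    # de-escalation in this model.
    ticket.owner = "customer"
    return ticket

print(escalate_picture1(Ticket("printer is broken")).owner)  # second_level_agent
print(escalate_picture2(Ticket("printer is broken")).owner)  # customer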
Escalation button
Most ordinary users of ticket systems and issue trackers want to know where the escalation button is that they have to press. The simple answer is that these programs have no button labeled “escalate a ticket”. Instead, the process is activated by natural language: a certain style of sentences the first-level agent sends to the customer and to the second-level agent.
The overall pipeline works as follows. First, the first-level agent has to choose which concept he prefers. If the agent assumes that the first picture is valid, he will send the second-level agent a message like “Hi, there is a ticket and I want to escalate it to you.” If the agent prefers the second picture, the terminology is different, and so is the resulting speech.
Management in Technopoly
Neil Postman coined the term Technopoly for an overall system which has elected technology as its main objective. A machine can be described by its mechanical and electrical properties, but to make these machines work, some kind of management system is needed which explains to the workforce what to do next.
There is a larger online forum in which work-related problems are discussed from a psychological point of view: https://workplace.stackexchange.com/, where most questions are about dos and don'ts in a business environment. It's interesting to observe how the filter of the website works. The filter has the obligation to let trivial questions pass, but it blocks more fundamental problems.
Let us give some examples. One option for describing group interaction at the workplace is the theory of “total customer orientation”; another approach for gaining a deeper understanding of what a social role is would be so-called business theatre. The idea is to perceive an office first and foremost as a stage on which persons play roles. Somebody may argue that both ideas are great for getting deeper knowledge about modern business, and exactly for this reason both topics are blocked on the workplace website. It is an example of knowledge of domination: the value of the knowledge is high, and as a result it is not transported into the public domain.
I have tried posting some questions in that direction. Most of them were deleted very soon, for different reasons. The main one is perhaps that if everybody knew how to manipulate a group of workers, the managers would lose their power. And the assumption is right: the knowledge becomes useless if the magic trick is explained so that everybody can understand the background.
Total customer orientation alone cannot describe modern business scenarios; human workers are more complex than the theory assumes. But it would be an important step towards a realistic picture. It is a kind of irony that precisely the subjects which lead to greater knowledge are flagged as off-topic. This shows how knowledge production in Technopoly works, or to make the point more obvious: knowledge is part of the game, and the task is to hide the important knowledge from the public.
It's the same strategy early software companies used: they published the working program, but they didn't talk about the source code. Exactly the same principle is applied by today's managers. They are able to coordinate larger groups of workers, but they don't talk about how they do so.
But I don't want to be too critical. The mentioned workplace forum has some lighter sides as well. For example, there is a “scrum” tag under which agile management is discussed. Some of the postings and comments try to explain, at least a bit, how group interaction can be realized.
November 26, 2019
SE.AI users don't know how to publish a paper
In the ongoing conflict between me and the SE.AI website https://ai.stackexchange.com/, I have given up posting into the chat, but I'm still reading what other users are writing. Yesterday, the admins discovered my blog and posted the link into the chat. Other users are arguing about the value of the content and whether my knowledge is high enough. It seems that SE.AI has problems interpreting the information, so I'd like to give some advice.
If somebody wants to become an AI scientist, he has to write some papers about Artificial Intelligence first. Without writing a paper, it's not possible to get referenced by others. Once the paper is written, it should be uploaded to a document server, for example Academia.edu. Then the potential fake scientist has to wait a bit until Google Scholar has discovered the information. In my case it took two and a half years until the paper was available in Google Scholar. The inner working of a crawler is nothing new; it's known from ordinary web search engines.
If any questions remain open about how to create and publish a paper, I'm happy to help. Greetings to SE.AI.
UPDATE
In the meantime I have formed a hypothesis which can explain the current conflict. The self-understanding of the moderators of SE.AI is that they moderate a Q&A website. The assumption is that the website consists of a number of postings, and the moderator has to defend this content against attacks from the outside. That means SE.AI has a core, which is identical with the moderators, and the closer somebody is to the core, the more rights and wisdom he has and the more pressure he comes under.
The problem is that this understanding is not the only one available; it's a very conservative understanding of what a Q&A website is. The more elaborate view is that wisdom is located outside of SE.AI, and the admins have to learn from the newbies what is going on in the domain of Artificial Intelligence. That, in short, is my point of view. The wisdom and the power are provided by the newbies who ask questions and select the answers they think are valid. As a consequence, the SE.AI website itself is nothing, and the social role of a moderator can be reduced to formal housekeeping.
The hypothesis is that the reason for the current conflict is not the quality of my answers on SE.AI but my position on how a Q&A website should be moderated. Simply put, I do not trust any moderators, but I trust the OP, i.e. the request which is sent from the outside. This position is the complete opposite of the self-understanding of the current moderator team. Simply put, I'm not devoted to the moderators, but I have to obey whoever posts a question to SE.AI.
The assumption is that because of this simple rule the current conflict has broken out and will develop further until new information is available.
November 24, 2019
Is there an author-pays AI forum available?
Setting up an internet server has become so cheap that most online forums are free for the user. That means the user account can be created for free, and then the users are allowed to post as much information as they want.
The alternative is a profit-oriented commercial online forum. That means the user has to pay 1 US$ for posting a question and also 1 US$ for answering a question. The result is that the number of users who can afford such a resource is limited.
Artificial Intelligence is perceived as a topic of the future with a high impact on all parts of society. If society is able to build robots and write the appropriate software, the productivity of the economy will benefit from it. It makes sense to introduce the author-pays model with the hope that such a forum would become attractive for professional users.
Are author-pays AI forums available today?
November 23, 2019
Increased probability of getting banned in SE.AI
https://ai.stackexchange.com/
In the conflict in the SE.AI forum the tension has increased a bit. In the meantime, not just a single user is openly thinking about banning my account; three of them are arguing in that direction. I don't think they are joking, because it's known from other Stackexchange projects that the admins really do ban users. And they are authorized to do so, because their admin panel has a simple button for that purpose.
I must admit that the increased tension has surprised me. From my point of view the former probability of getting banned was <10%; now it has increased to 30%, because a discussion was started with exactly this purpose. In most cases, banning means that the user is no longer allowed to log in under his account, and sometimes all his past postings get deleted. In my case, I have contributed many postings; the volume is around 3% of the overall forum. The reason is that the SE.AI forum is a small one, and it's enough for a single user to write a few articles to become a serious part of the project.
As I mentioned in the previous blog post, I stopped trying to argue with the admins in the chat because the previous attempt was not successful.
Massive downvotes in SE.AI
The website https://ai.stackexchange.com/ is the largest online forum about Artificial Intelligence on the internet. It's part of the Stackexchange network and has around 150 users who post questions and answers. Recently, my user account on this website was criticized with downvotes and textual comments. According to the comments, I spread wrong information about Artificial Intelligence. My first idea was to discuss the problems in the chat section of the website itself, but this only increased the conflict. The next strategy is to ignore SE.AI and post the answer here in my own blog.
Here is my response to the received downvotes:
Instead of analyzing whether a certain action makes sense or not, the more elaborate question is how to predict the outcome of a system. For me, SE.AI is a kind of underactuated system. The task is to predict what the other users, and especially the admins, will do next. According to the known information, it's unlikely that the current admins or future admins will ban my user account. The reason is that I have been active for nearly 2 years and have received over 300 upvotes from other users.
On the other hand, it's possible that a new front line will open in the future. For example, if the group of elected admins decides to ban the user with the highest number of received downvotes, then my user account will come under fire, because according to the latest stats this constraint is fulfilled. The probability that this will happen in reality is estimated at <10%. If my user account gets banned, I'm no longer allowed to increase my reputation on this website. A possible attempt to build an alternative Artificial Intelligence forum from scratch would fail, because SE.AI is the only one available on the Internet. What I can do instead is create a remote comment. This strategy works by answering a question not on the website itself but in my own blog, and putting the URL of the original post into the blog post as well.
Why Python is the ideal programming language
The reason why alternatives to Python are widely used by today's programmers has to do with a certain role a programmer plays. The assumption is that a programmer is an expert who has deep knowledge of compiler technology and is able to write fast and efficient source code. It is surprising to see that the Python language doesn't provide help for this task; it's simply not possible to write efficient source code in Python.
The reason why Python is accepted by a newer generation of programmers as their preferred choice has to do with separating the programming workflow into two subparts: creating a GUI prototype and programming a piece of software. This two-step pipeline is the result of modern software engineering, which tries to treat software design as a dedicated step. The basic reason why Python has become so popular in such a short amount of time is that it's the best prototyping language available. In contrast to Matlab, Visual Basic macros and purely graphical GUI prototyping tools, Python is more professional and available on non-MS-Windows operating systems.
The funny thing is that written Python source code fulfills certain needs very well and others not at all. From the perspective of classical programming, Python source code is something that can be ignored, because it's slower than C++ code and less efficient than Assembly language. The main advantage of a Python program is that it can easily be translated into any other programming language. Not because there are so many software tools available which can convert a “.py” program text into a “.cpp” program text, but because doing so manually is an easy task.
Suppose it were allowed to post the following question on Stackoverflow: “Hi guys, I have written a game in Python which is 10000 lines of code long. Can anybody help to convert the code into a Java program?” The interesting fact is that a large number of Stackoverflow users would be able to do so. Writing a Java program when the code has already been tested and bugfixed in a Python prototype is an easy software project. It will take longer than a single day, but much less time than creating the Java code from scratch.
The most important part of programming is the prototyping step. Once the Python code has been developed, 80% of the overall project is done. Using Python as a prototyping language makes sense because it's much easier to write and bugfix code in Python. The interpreter is very friendly to newbie programmers, and it's even possible to use multithreading. It's less efficient than in C++, but it works in a prototype.
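A short hedged sketch of what multithreading looks like in such a prototype is given below. It assumes an I/O-bound toy workload; for CPU-bound work the global interpreter lock limits the speedup, which is one reason a later C++ port can be more efficient.

import threading
import time

def worker(name, delay):
    # Stand-in for an I/O call or a slow computation in the prototype.
    time.sleep(delay)
    print("%s finished after %.1fs" % (name, delay))

threads = [threading.Thread(target=worker, args=("task-%d" % i, 0.5)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all tasks done")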
The only technique which is more efficient than writing a prototype in Python is to search for code which has already been written on GitHub. Installing a program which the author has provided as a repository is the fastest way of getting access to software. For example, if somebody wants to play a game of Pong, he will need around a week to program the software in Python, but he can install and run the game from an existing repository within minutes.
Sometimes Python is described as a throwaway prototyping language. The idea is that the programmer develops a game in Python, and once it's ready he deletes the project folder, because the game doesn't make much sense. In reality, most written code is not new and doesn't solve important problems. Many game development projects are started as a learning or teaching experience. That means somebody wants to learn programming and creates a badly programmed side-scrolling game, which isn't desired by anybody apart from the programmer himself.
Writing source code and then throwing it away is not a mistake but the best-practice method for testing out new ideas. The trick is to reduce the cost of doing so. Writing throwaway code in Assembly language is possible, but it takes very long. It's important for a programmer to anticipate that even after investing lots of weeks into writing the code, the result may not fit the needs.
Gimmicks for an advanced emergency operation center
A so-called EOC is the central hub for coordinating large groups of people. The EOC itself consists of interacting users, and these users manage more people outside the EOC, for example firefighters, ambulance crews and other personnel.
The importance of an EOC can't be overestimated. In case of an emergency, money is not the problem; the bottleneck is saving lives. So the question is which kind of equipment should be bought for an emergency operation center if the aim is to maximize quality. Unfortunately, this isn't easy to answer, because the technical options are limited. It's only possible to buy computer monitors, desktop PCs and keyboards which are available as normal products. In most cases a normal computer room with workstations is more than what the average EOC will need. If the workstations are connected with high-speed internet cabling and the LED lighting on the ceiling is bright enough, it's hard to tell how to improve the situation further.
In some EOCs a screen wall is installed on the assumption that this will improve information awareness. It's correct that such a wall of monitors can cost a huge amount of money, but the effect is very small. If a person is far away from the wall, he can't see the details, so it's more a prop used in Star Trek movies than a tool for a real EOC.
Perhaps the most obvious bottleneck in an EOC is the training of the users. Which skills are needed to maximize the group's productivity? Even if the answer is not known, it makes sense to assume that the skills of the individuals limit the capabilities of the EOC. The resulting question is which kind of training is the best that money can buy.
The best training material for EOC users was created by me. Unfortunately, it's the opposite of a costly training course; it's a simple GitHub repository. The URL is https://github.com/ManuelRodriguez331/Helpdesk-game
With this material it's possible to explain to newbies what an EOC operator has to do in an emergency. The program is based on the educational paradigm of self-training. That means a new user downloads the Python script, starts the slideshow, and can lean back while watching the dialogues on the screen. Hopefully, the result is that the newbie becomes familiar with group interaction.
The most interesting fact for understanding an EOC is that the problem is never about directing vehicles to the right place or following instructions somebody wrote up last year. An EOC is 99% about interaction across different hierarchies. That means team building is everything.
Cameras
In movies about catastrophic events, an emergency operation center is often equipped with a video wall. The more elaborate equipment is the opposite: the problem is not to display something but to record the actions of the users in the EOC. It makes sense to buy lots of cameras and transform the EOC into a Big Brother show.
Suppose a training session is held in the EOC. Without any doubt, the newbies will make lots of mistakes; their lack of teamwork will result in wrong decisions. The problem is not making a mistake but failing to learn from it. After a training session is over, the camera recordings can be analyzed. This makes it possible to blame persons for the right reasons, helps to improve the group communication, and can be used for educational lectures.
Before a camera will produce high-quality results, the lighting must be very bright. It makes no sense to install a camera in a low-light environment. Instead, high-power LEDs are needed to support the cameras.
November 18, 2019
Lean production with the Helpmate service robot
In the mid-1990s a new service robot was presented to the public: the Helpmate robot. The device was designed to support employees in a hospital. From a technical perspective the Helpmate robot was gorgeous; it was equipped with modern sensors and the onboard software worked error-free. Unfortunately, from a commercial perspective the Helpmate robot was a failure. Only a small number of customers were interested in the product, and those who bought a robot were disappointed.
The problem isn't located in the Helpmate robot itself; the low productivity of the robot has to do with a certain kind of human-machine interaction. To use the robot as a tool to improve the logistics in a hospital, a certain management philosophy is needed, which is explained in the following blog post.
The concept is based on the Andon signal in lean production, which is a robust bottom-up technique for organizing workgroups on an assembly line. Each robot has a stack light which can be either red, meaning “error, human intervention is needed”, or green, meaning “everything is ok, no human is needed”.
The picture shows a fictional hospital which contains 5 Helpmate robots plus two human operators. Two of the robots are indicating a problem: they are trapped in the corridor, something with the tray isn't correct, or the software has a malfunction. The red indicator signals the human operator that the robot needs him. The other three robots are operating within specification, meaning they are executing a delivery job and don't need the attention of a human expert.
It's important to understand that the robot control software installed on the onboard computer is only part of the overall workflow. To use a robot or a machine in a broader context, some kind of execution monitoring is needed in which potential failures are anticipated. In the example picture, two human operators are needed to solve smaller and larger problems with the fleet of Helpmate robots. Their task is to approach a red-blinking robot and fix the issue. It's relatively certain that during 24/7 operation one or more of the Helpmate robots will switch from green status to red status. It's not possible for the software alone to handle all situations. The advantage for the employees is that at least the Helpmate robots which are in the normal mode can work without human intervention: they do their job and transport trays from point A to point B.
The stack light improves the human-machine communication. If all the lights are green, the robot fleet is working at maximum productivity. If the robots or the environment are in a bad state, some of the robots, or sometimes all of them, will switch into the error mode, which means that the productivity is very low. If a robot doesn't move and a human operator is needed to fix the issue, the overall work can't be done. That means the fleet is producing costs but doesn't provide a service in exchange.
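A minimal sketch of the stack-light idea in Python is given below. It assumes that each robot only reports a binary status; the fleet dictionary and the status names are invented for this example and are not part of any real Helpmate software.

# Hypothetical fleet status: "green" = running, "red" = needs a human operator.
fleet = {
    "robot-1": "green",
    "robot-2": "red",    # trapped in the corridor
    "robot-3": "green",
    "robot-4": "red",    # problem with the tray
    "robot-5": "green",
}

def robots_needing_help(fleet):
    # Return the robots a human operator should approach next.
    return [name for name, status in fleet.items() if status == "red"]

print(robots_needing_help(fleet))  # ['robot-2', 'robot-4']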
Tool based robotics
A robot can be run in two different modes. In the first mode, the robot has the full attention of a human operator: he has switched on the device and wants to observe what the robot is doing. Such a work mode is very common for artificial robotics challenges, in which the behavior of the robot is evaluated and the audience is interested in every movement the robot makes. Unfortunately, such a work mode results in low productivity. If one or two human operators are observing the robot all the time, the factory has to pay them, and in the end the overall costs are higher than without a robot in the loop.
For this reason a second work mode is needed, in which the robot runs in the background and can be managed as a tool. That means no human operator is observing the robot; the device is a black box and nobody knows exactly what the machine is doing right now. For a realistic workflow on assembly lines, and for real-world applications in general, it's important that the robot is able to switch between the modes. The reason is that each mode has advantages and disadvantages, and for long-term operation both modes are needed. What happens in reality is that a robot stays for a certain timespan in mode 1 (red) and for a certain timespan in mode 2 (green). This allows the financial department, which has to calculate whether the robots are a good investment, to estimate the overall productivity. For example, if a robot stays 90% of the time in the red mode, the costs are too high, and the robot should be replaced by a different model.
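The productivity estimate mentioned above can be sketched with a few lines of Python, assuming a hypothetical status log of (mode, hours) entries recorded for one robot.

# Hypothetical status log for one robot over a working week.
log = [("green", 20.0), ("red", 4.0), ("green", 12.0), ("red", 4.0)]

total_hours = sum(hours for _, hours in log)
red_hours = sum(hours for mode, hours in log if mode == "red")
red_share = red_hours / total_hours

print("time in red mode: %.0f%%" % (red_share * 100))  # 20%, far below the 90% threshold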
Human intervention
The picture shows the robots and the human operators at the same time. The robots have a task, which is to deliver something in a hospital, and the humans have something to do as well, which is to observe the robots during operation. A single robot might work autonomously, but the fleet as a whole can't. During runtime, it's very common that at least one robot will show the red light. This is the trigger for human intervention. Without the two humans in the loop, the fleet of five robots isn't able to work. The reason is that the onboard software isn't developed far enough to fix every possible interruption on its own. Even if the programmer has done a great job and the path planner works robustly, it is very likely that in a real-world application the planner will sometimes fail to plan a path.
Instead of arguing that this situation must never happen, or demanding that technicians improve the software, the better idea is to look at the situation from a management perspective. The combination of 5 robots plus 2 human operators allows 100% uptime; the workflow never stops. From the customer's perspective, this means he can use a robot at any time to deliver something to its destination.
The problem isn't located in the Helpmate robot itself, but the low productivity of the robot has to do with a certain human machine interaction. To use the robot as a tool to improve the logistics in a hospital a certain management philosophy is needed which is explained in the following blogpost.
The concept is oriented on the Andor switch in lean production which is a robust bottom up technique to organize workgroups in an assembly line. Each robot has a stack light which can be either red which means “error, human intervention is needed” or green which is equal to “everything is ok, no human is needed”.
The picture shows a fictional hospital which contains of 5 helpmate robots plus two human operators. Two of the robots are indicating a problem. They are trapped in the corridor, something with the tray isn't correct or the software has a malfunction. The red indicator signals the human operator that he is needed by the robot. The other three robots are operating within the specification, that means they are executing a delivery job and they doesn't need the attention of a human expert.
It's important to understand that the robot control software installed on the onboard computer is only a part of the overall workflow. To use a robot or a machine in a broader context, some kind of execution monitoring is needed in which potential failures are anticipated. In the example picture, two human operators are needed to solve the smaller and larger problems of the Helpmate fleet. Their task is to approach a red blinking robot and fix the issue. It's fairly certain that during 24/7 operation one or more of the Helpmate robots will switch from the green status into the red status; software alone can't handle all situations. The advantage for the employees is that at least the Helpmate robots in the normal mode can work without human intervention. They do their job and transport trays from point A to point B.
The stack light improves the human machine communication. If all the lights are green, the robot fleet is working at maximum productivity. If the robots or the environment are in a bad situation, some of the robots, or sometimes all of them, will switch into the error mode, which means the productivity is very low. If a robot doesn't move and a human operator is needed to fix the issue, the overall work can't be done. That means the fleet produces costs but doesn't provide a service in exchange.
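As a rough illustration of such a fleet status board, the logic is nothing more than counting lights. The robot names and fault descriptions below are invented for the example.

# Sketch: a minimal Andon board for a small robot fleet.
# Robot names and states are made up for illustration.

fleet = {
    "helpmate-1": "green",
    "helpmate-2": "red",    # trapped in the corridor
    "helpmate-3": "green",
    "helpmate-4": "red",    # tray sensor reports a fault
    "helpmate-5": "green",
}

red_robots = [name for name, light in fleet.items() if light == "red"]

if red_robots:
    print("operator intervention needed for:", ", ".join(red_robots))
else:
    print("all lights green, fleet runs at maximum productivity")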
Tool based robotics
Running a robot can be done in two different modes. In the first mode, the robot has the full attention of a human operator. He has switched on the device and wants to observe what the robot is doing. This work mode is very common in synthetic robotics challenges, in which the behavior of the robot is evaluated and the audience is interested in every movement the robot makes. Unfortunately, this work mode means low productivity. If one or two human operators observe the robot all the time, the factory has to pay them, and in the end the overall costs are higher than without a robot in the loop. This is exactly why the second, unattended work mode described at the beginning of this post is needed.
Execution monitoring with Andon lights
In lean management theory a so called Andon light indicates the status of a process. In the simplest case a machine has either the normal status or the error status. This principle allows a human operator to be assigned to a machine or taken away from the process flow. The same principle can be used to increase the productivity of a robot control system. The image on the left shows a robot performing a task under the control of a human operator; the robot isn't autonomous, a human operator is in the loop.
In the image on the right, the human is not needed and the robot does the task autonomously. A robot can fluctuate between both states. From the perspective of maximizing productivity, the green status is the goal. If no human operator is needed, the costs are much lower. The decision is made not by the human operator but by the robot control system: with a self-monitoring system the robot is able to detect failures on its own. Detecting whether a pick&place operation of a robot arm was successful is much easier than performing the action itself.
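That asymmetry, verifying being easier than acting, can be shown with a small, purely hypothetical check in Python. The sensor values and the tolerance are assumptions, not part of any real robot API.

# Sketch: verifying a pick&place result is a simple comparison,
# while performing the action needs a full planner and controller.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_and_place_succeeded(object_pose, target_pose, gripper_closed,
                             tolerance=0.02):
    """Success check after the motion finished.
    object_pose/target_pose are (x, y, z) in meters; the values are assumed
    to come from a vision system or a table sensor."""
    if gripper_closed:
        return False          # object is still in the gripper
    return distance(object_pose, target_pose) <= tolerance

print(pick_and_place_succeeded((0.40, 0.21, 0.05), (0.40, 0.20, 0.05), False))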
Execution monitoring as human robot interaction
The main reason why the productivity of robots in real world applications is perceived as low is that the human robot interaction isn't working well. The robot needs the human operator, and this prevents the human from doing a different task while the robot is online. Because a human operator produces costs, the robot doesn't improve the situation.
The issue can be solved with a clear definition of when a human operator is needed and when he is not. The decision is made by an execution monitoring unit, a device which recognizes whether the robot is working correctly. The monitor knows two different states: either the robot needs a human operator, or the robot works autonomously. The clear distinction between the two states reduces the costs drastically, because the human operator only produces costs while the robot needs him. How often a human intervention is needed is up to the robot control software; if the software works well, the human operator is needed only seldom.
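A minimal sketch of such a two-state execution monitor could look like the following. The interface is hypothetical; the post doesn't specify how failures are detected, so the inputs are placeholders.

# Sketch: a two-state execution monitor (hypothetical interface).
# OPERATOR_NEEDED corresponds to the red light, AUTONOMOUS to the green one.
from enum import Enum

class MonitorState(Enum):
    AUTONOMOUS = "green"
    OPERATOR_NEEDED = "red"

class ExecutionMonitor:
    def __init__(self):
        self.state = MonitorState.AUTONOMOUS

    def update(self, task_failed: bool, operator_cleared: bool) -> MonitorState:
        """task_failed comes from a self-check (e.g. a pick&place test),
        operator_cleared is set when the human has fixed the issue."""
        if task_failed:
            self.state = MonitorState.OPERATOR_NEEDED
        elif operator_cleared:
            self.state = MonitorState.AUTONOMOUS
        return self.state

monitor = ExecutionMonitor()
print(monitor.update(task_failed=True, operator_cleared=False))   # red
print(monitor.update(task_failed=False, operator_cleared=True))   # back to green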
The distinction between the two situations makes sense because it allows us to formalize what the real productivity of a machine is. If a human operator is needed all the time, the productivity of the robot is low, because the human could do the task by himself much faster. The robot only becomes productive if no human intervention is needed, and the exact time span has to be measured.
It's an illusion to imagine that a robot can do a task fully autonomously. Robotics tasks have a tendency to become complex and current software contains bugs. As a result, every robot the engineers build will produce errors during runtime; it's not possible to invent a robot which never needs human intervention. What is possible instead is a robot which knows whether it needs help. Every task can be monitored: it is possible to detect whether a pick&place task succeeded or not, and if not, something is wrong with the robot, the object or the software. It's important to understand that a robot can be highly productive or barely productive. Low productivity means the robot isn't working autonomously and a human operator has to adjust some knobs. Seen from the other direction, there is no need to build fully autonomous robots. It's fine if the robot sometimes needs a human operator for support. The only thing that is needed is a clear distinction between both situations. Only when the human operator is allowed to stay away from the robot is the machine highly productive.
Literature
The literature uses the term “execution monitoring” for evaluating whether planned robot actions can be executed in reality. The idea is to create a higher instance above the normal robot control system to increase robustness. This might be an interesting objective, but the more important reason for execution monitoring is to connect a robot system with a human operator. A robot system is never autonomous; it is embedded in a real world scenario. The robot is doing something and at the same time a human operator observes what the robot is doing, with the opportunity to stop the robot at any time.
Formal execution monitoring improves the human robot interaction. It results in a robot that works in the background, which is important if a robot should be perceived as a tool that doesn't need constant human interaction. Let us describe the situation from the perspective of a human user. He starts the robot and after five minutes a warning signal appears, which means something went wrong with the robot. The warning signal of the execution monitor is part of the human machine interaction: it triggers a time slice in which a higher amount of human attention is needed. Once the warning signal is cleared, the robot operates in the background mode again, which means no human operator is needed.
From the standpoint of maximizing productivity, the human operator is interested in not being alarmed by the robot. The desired situation is that the robot works alone and the human operator can do something else. In that case the human operator and the robot are disconnected.
Stack light
Wikipedia has an entry for the term “stack light” https://en.wikipedia.org/wiki/Stack_light which describes the status lights of an industrial machine. The interesting point is that the red, green and yellow lights don't improve the operation of the CNC machine itself; they signal the state to humans. The human operator knows in which case he needs to investigate a failure. The term “andon light” comes not from a technical but from a management perspective on how to organize the workflow at the assembly line.
A tutorial about stack lights explains why these additional signals are needed. A normal CNC machine is equipped with a console which allows the human operator to start, stop and adjust the machine. The assumption behind a console is that the human operator stands in front of the machine; it's an interactive device. The problem is that from a productivity point of view a human operator can't sit next to a console 24/7; after a while he wants to walk away.
Real world robotics without human operator
The image on the left shows a well working robot in action, the kind of result that can be achieved within a robot challenge. The rules of the challenge are known, and a team of programmers can build a robot which works inside that specification. Notable examples are soccer playing robots, self-driving cars, delivery drones and even biped robots. At first glance the programmers in the challenge have solved the problem, because the demonstrated behavior shows that the robot can do a task autonomously, that is, without human intervention.
The problems become visible when the task is not encapsulated in a synthetic challenge but has to be solved in a real world application. A typical example is a self-driving car on a public road or a pick&place robot used at the assembly line. Usually a human operator is needed who observes the robot during operation. The interesting point is that even though the robot succeeded in the synthetic challenge, it fails in the real world application. That means the human operator of a pick&place robot is under heavy stress while the robot performs a pick&place task.
The picture on the right shows the desired situation, in which a robot masters a task without a human operator. Not a single case is known in which a robot is capable of doing so. It's not possible to remove the human operator from the loop, because then the task is no longer fulfilled. Surprisingly this is true for all robotics domains, including self-driving transport vehicles, pick&place robots and household robots. The technology only works if nothing depends on it. In a real world situation without a human operator in the loop, the robot isn't able to handle the task on its own. The problem is not located in the robot's programming; it has to do with the tasks being highly complicated. A robot is only needed for tasks which were done in the past by humans, because the idea is to transfer work from a human operator to a robot. A robot won't be confronted with repetitive, easy to solve tasks, because those problems can already be automated with classical techniques like CNC machines. Only the tasks which are harder to automate are transferred to robots.
Unfortunately, a robot is not able to handle complicated problems on its own. These are human level problems which can only be solved with humans in the loop, and the robot doesn't really support the human in the task. What robotics engineers do is assume that a task can be easily automated. The best example is a transport vehicle which drives along a straight line. In a challenge the robot does a great job, and exactly such a device is needed in a factory for solving a transport problem. The assumption is that the technology would work in a real world application the same way it did in the demonstration at the robotics challenge. The problem is that in the real world use case the human operator has to walk next to the vehicle all the time to ensure that the robot is doing the task the right way. As a result, the human operator doesn't save time and energy; monitoring the robot is more work than doing the job himself. That is the reason why factories are not using robots.
Or let me explain the situation from a different perspective. In a synthetic robot challenge, for example Robocup, many human operators are available: a team of specialists monitors the robot all the time. In a real world use case these human operators are not available, and exactly this produces a new kind of situation. The problem is that a human operator who monitors a robot can't be removed from the setup, because then nobody can guarantee what the robot is doing. A robot can't decide on its own whether it is working inside the specification.
The only way to ensure high productivity is to take the human operator out of the loop. This can be realized by separating humans and robots and by execution monitoring. The idea is that no human operator is needed to monitor what the robot is doing; the machine works autonomously. A monitoring device is a machine which operates independently of the robot. Its main objective is to detect when the robot goes wrong; the device then stops the robot and a human operator is called. From an abstract perspective, a monitoring device defines two different situations: either the human operator can relax and leave the robot alone, or the human operator is needed because the robot has made a mistake. Real world application of robotics means that for a certain amount of time no human operator is needed, which is what increases the productivity.
November 17, 2019
The gap between automation and robotics is larger than expected
According to robotics experts, robotics has replaced the former automation technology: automation is low tech automation while robotics is high tech automation. The assumption is that the term automation was used in the 1970s, and that with advanced computers it was replaced in the 2000s by the term robotics.
Unfortunately, this definition has nothing to do with reality; it is wishful thinking in which technological progress never stopped. What we see in reality is that automation is everywhere while robotics was never introduced in the factories. Let us take a closer look at how a modern factory works. Nearly every factory is the proud owner of a CNC machine, and it probably bought the latest one a year ago. Usually the arrival of a new CNC machine is celebrated as an important event for all the employees, because it helps them to become more productive. A modern CNC machine has the size of a container and costs a lot of money. From a technical perspective, it's a movable tool for drilling, milling and welding. A CNC machine is the logical extension of an assembly line; it increases the automation level of a factory.
It's important to know that a CNC machine works differently from a robot. A robot runs algorithms developed by Artificial Intelligence experts, while a CNC machine needs neither software nor algorithms. A CNC machine belongs in the same category as a refrigerator, a sewing machine and a forklift truck. It's primarily a mechanical device which was invented before the advent of computer technology. From the perspective of microcomputers and software engineering a CNC machine is outdated. The factories don't see it that way, because CNC machines are used for producing goods and there is no plan to change anything in the workflow.
What is outdated is not the CNC machine but the assumption that robotics has replaced CNC automation technology. Robotics is something which is not available today; robotics means academic robotics, in which synthetic challenges are created as an end in themselves. A famous example of modern robotics is the Nao robot playing soccer in the Robocup challenge. This kind of academic competition has no practical purpose, but it's used in the curriculum of computer science education.
It's important to make clear that all the knowledge computer science students have acquired about path planning, deep learning and the Robocup challenge can't be transferred into real life. A factory, and capitalism in general, has no demand for Artificial Intelligence. What it prefers are CNC machines.
The only place where robots were introduced into a factory is in marketing videos from robotics companies. Rethink Robotics has made a lot of clips in which robots are introduced into a factory: they pick&place objects and the Baxter robot helps the human employees. These clips are produced not because they show a realistic vision for the next 10 years, but because a certain plot should be told to the audience. In summary, the plot is about successful engineers who have developed robots which increase productivity in the real world.
Robots have not only failed to enter the production line in the factory, they have also failed in other domains like self-driving cars, delivery drones and household robots. The problem is very similar to replacing CNC machines with robots. An objective calculation comes to the conclusion that the low tech automation of the 1970s is superior to modern robotics. That means a normal human controlled car is more cost effective than a self-driving car run by a computer. The most obvious evidence for this hypothesis is that the commercial robotics companies trying to sell household robots and delivery drones have gone bankrupt or will do so within 2 years. The problem is that today's robots work differently from the expectation. If a customer buys a delivery drone, he expects that the drone will increase productivity, and exactly this feature isn't available. A robotic drone is first and foremost an academic project which has no purpose except that PhD students can test new AI algorithms on the device. It's not possible to transfer the technology into real world applications.
The main problem is not that robotics won't increase productivity; the problem is that the awareness of this fact is missing. The first attempt at using robotics for a commercial application was made in the 1960s with the Unimate robot. It was obvious after a short period that the Unimate project had failed: no customer was motivated to buy such a robot. They tested it and quickly recognized that the costs were too high. Instead of learning from the failed experiment, the introduction of robotics at the workplace has been repeated many times since the 1960s. Today's engineers are optimistic, much like George Devol, that they can handle the technical problems and use robotics to bring automation to the next level. Robotics in the year 2020 works much like robotics in the 1960s: the engineers develop and program a new robot device, they print some marketing material, but they don't understand why nobody wants to buy these robots. They are not able to see that robots have a social role which can't be overcome. This social role prevents the robot from being used as a tool; it is always a challenge.
The term challenge means that a robot helps PhD students at the university to develop new AI algorithms and write a dissertation about Artificial Intelligence. A robot also allows robotics competitions to be created, in which different teams compete for the best robot control software. The problem is that a real factory has no demand for a challenge; it is interested in the answer, and exactly this isn't provided by robots. They have much in common with an exercise machine in the gym: the machine doesn't provide added value by itself, it forces the athlete to invest his own energy into it, and this makes the athlete stronger.
The gap between automation and robotics
Using robotics for factory automation is a bad idea. The reason is that robotics is researched at universities while factory automation is a practical application. The two domains don't fit together, and the reason why is explained in the following post.
Factory automation is about increasing productivity. The idea is that the company uses tools which help to reduce the cost of doing a task. Typical examples of such tools are work gloves for the employees, barcode readers and CNC drilling machines. These tools are used in real applications because they provide added value. The company buys a new CNC machine for 30000 US$ and after a few months the machine has paid for itself.
In contrast, academic robotics works the other way around. Within AI research the question is not how to increase the productivity of an existing workflow; the problem is how to realize artificial life. In an artificial life project the robot and the technology itself are what matters, which means the AI experts build a complicated structure which doesn't fulfill external needs; the aim is to explore something not known before.
It doesn't make sense to transfer robotics projects from the academic domain into real world applications; it simply won't work. Let us take a typical example. The Nao robot costs around 15000 US$ and it was used in many successful academic robotics projects. Successful means that the project resulted in an academic paper which other robotics experts perceived as a valuable contribution to robot research. Additionally, the Nao robot was used in synthetic challenges, for example the Robocup competition. So we can say that the price of the device is fair.
But what happens if a company which produces cars buys 10 of these robots with the aim of increasing productivity at the assembly line? Sure, technically they can start such a project, but the prediction is that the money will be wasted; they won't increase productivity by a single percent. The reason is that the Nao is without any doubt a robot, but it is not a CNC machine. It can only be used in an academic context to explore AI problems; it won't help workers in a factory.
Somebody may argue that the true reason is the size of the device, and indeed the Nao has the height of a toy robot. So the factory comes to the conclusion that it should buy something capable of doing industrial tasks. A typical robot from that domain is the Rethink Robotics Baxter, which is sold for around 30000 US$. Similar to the Nao, Baxter is a very successful robot. It was used in many synthetic challenges and universities used it to research new vision algorithms and graph planning systems. The surprising fact is that even the Baxter robot won't help a car factory to increase productivity. The reason is that Baxter is a real robot, designed with AI applications in mind. It's not a CNC machine and it's not a tool.
An interesting attempt to automate the service industry was made by Joseph Engelberger in the 1990s with the Helpmate robot, a transport vehicle for supporting employees in a hospital. The idea was to take the latest robotics technology and use it to increase the productivity of a hospital. Similar to other attempts at robotics automation, the project failed: the customers didn't like the device and after a short period the company went bankrupt. What was wrong with the Helpmate robot? The robot itself was great; in an academic project it would have been a high end device. The problem was using the machine for real world applications. Only in synthetic challenges would the robot perform successfully. A synthetic challenge is a game not available in reality, for example a game about pick&place of objects in an area. For such a challenge the Helpmate is a great choice. The problem is that between the synthetic challenge and a real hospital there is a large gap, and the gap becomes larger the more advanced the robot is. The reason is that the synthetic challenges are designed for the robots. For example, the latest iteration of the Robocup competition was designed so that the existing robots can fulfill the challenge. Even though it looks like soccer, it's different.
Perhaps it makes sense to investigate the Helpmate hospital robot in detail to explain what the problem is. The first step is to imagine a synthetic challenge in which a robot has to deliver objects in a hospital. So we need a stage with a floor, rooms and a concrete task, and it's up to the robot to solve the challenge. The interesting point is that it's possible to program the Helpmate robot in such a way: in the end it will avoid the obstacles and deliver all the objects to the goal position.
Great, so the problem is solved, right? No, it's not. What the robot has demonstrated is that it can solve a synthetic challenge, but a real hospital works very differently. In a hospital the employees have no time to participate in a robot challenge; they have different tasks to solve. The inner workings of a robot challenge are defined by the challenge itself and described in the rule book; a synthetic delivery challenge consists of rooms, objects and obstacles.
The problem is that real world applications and synthetic challenges don't fit together. The main difference is that a real hospital asks for tools which increase productivity, but a robot is not a tool, it's an Artificial Intelligence. This social role conflict can't be fixed. What happens in reality is that the gap between practical applications and academic robotics research becomes larger. A hospital will not use robots for the delivery task; it prefers human workers with a handcart. On the other hand, a university researching robotics will develop new synthetic challenges which fit its need to research path planning algorithms in detail.
The simple conclusion from the past is that it's not possible to turn a robot into a commercial product. Robots are only a subject of synthetic challenges; they are not useful for real world applications.
Investigating the PR2 robot
The PR2 was a household robot developed by Willow Garage and sold for 400000 US$ each. The fact to know is that the robot was a complete failure: the number of devices sold was low and customers didn't see an advantage in using the robot. This failure is surprising, because the idea was to use the device as a household robot, for factory automation and for office automation. How can it be that a robot is technically state of the art but can't be used for practical applications?
The short explanation is that using a robot in real life means using the robot as a tool. A tool is a device which can fulfill a task with high productivity. This social role can't be played by robots, especially not by very advanced ones. There is, however, a different purpose for which the PR2 works great: in synthetic robot challenges and for testing new AI algorithms, the PR2 was the most successful robot ever built. Many academic papers were published around PR2 projects and it helped researchers a great deal to become familiar with motion planning problems.
To answer the initial question of why the PR2 was a commercial failure, we have to define the difference between a tool and a robot. A tool has a low automation level; in most cases it's a mechanical machine, for example a bicycle or a hammer. And if the tool is equipped with electronics, for example a forklift truck, the amount of electronics is low.
In contrast, robots, and especially human-like robots, are first of all computers in which the hardware plays only a minor role. The PR2 was a kind of supercomputer built into a moving cart: it ran the Linux operating system and was equipped with many sensors. This makes it a great platform for software engineers, but it's no longer a mechanical tool which can be used for practical applications.
A robot is an example of an overengineered tool. The idea is to upgrade plain mechanical tools with lots of computing power and sensors in the hope that the overall performance will improve, but the opposite is the case. A useful tool doesn't contain an onboard CPU and can't be programmed in C++. The most advanced kind of tool which can be used for practical applications is the CNC machine. These machines have some features of a computer, but mostly they are not computers; they are numerically controlled machines. Sometimes a CNC machine is rejected as too complicated for this reason, while other CNC machines work fine in practice. It's not possible to upgrade a CNC machine with more computing power in the direction of a robot; this would make the machine useless for practical applications.
Broader context
To understand why the PR2 project failed, we have to describe the overall idea behind building the robot. To do so, we have to go back to the mid 1970s. In that decade a certain level of technology was available, for example electric refrigerators, automatic washing machines, cars and desktop calculators. From the perspective of the 1970s, the hope was to invent a new sort of machine which could raise the technology level further, and the perfect candidate was a household robot which can open the existing refrigerator and clean the plates.
The story about future robotics was told under the name “fifth generation computer”: technology not yet available in the 1970s, which included robots and artificial intelligence. The PR2 was a typical example of a fifth generation computer project; the idea was to realize the wishes of the 1970s. The question is not why the concrete PR2 model failed, but, more generally, why fifth generation computers haven't been built yet. What we can see is that up to the 1970s technology made linear progress, and then the development stopped. It's not possible to automate a household or a factory above the level of the mid 1970s. A normal kitchen in the year 2019 looks the same as a kitchen in the 1970s, which means that apart from the usual household machines no further technology is available. The logical next step after the vacuum cleaner and the refrigerator would be a household robot, and from a technical perspective the PR2 fulfills all the requirements. The unsolved issue is how to use the machine in a meaningful way. It seems that the software programmed for the PR2 doesn't fulfill the external requirements; the program the robot operates under works on a different principle than expected.
A possible explanation for why automation has limits is given by Wikipedia https://en.wikipedia.org/wiki/Automation#Limitations_to_automation : once a process has been automated, there is less labor left that can be automated next. Colloquially speaking, the remaining manual work in a modern kitchen is small, so it makes no sense to use robots for it. The remaining non-automated processes were already reduced by technology that is available today.
A closer look into current automation technology
The main problem with current robotics is that it's unclear which kind of technology is available today and which still has to be developed. The naive assumption is that car companies have installed a lot of robots in their factories which help the employees become more productive. This story is wrong. What the companies actually use is old school automation technology developed in the 1970s before the advent of computing: servo driven assembly lines, CNC machines and barcode readers. The most interesting device is the CNC machine, which can be used for drilling and welding.
Modern factories have increased their productivity and reduced their costs mainly because of these CNC machines, which allow the human work to be reduced to a minimum. The interesting point is that apart from CNC no further technology is used in the production line, and in the service industry no other technology is available either. All the work which can't be done with CNC machines and assembly lines is done manually by human workers during the day shift and the night shift.
The interesting question is: what is the role of modern robotics in the economy? The surprising answer is that not a single robot is used today. A robot is a device equipped with lots of computational power and programmed in a high level programming language; it has to do with artificial intelligence and the fifth generation computer revolution. Some robots for industrial purposes were developed in the past, for example the Helpmate in the 1990s and the Baxter robot in the 2010s, but these robots are not used in reality. They are research projects without practical applications.
But if robots are not used in the service industry and in the factories, how have the companies automated the production line? They haven't. The companies have frozen the technology at the level of the 1970s and computers aren't used. It's not that the factories are against modern technology; the problem is that all attempts at using robots for automation have failed. The fallback was to use the well known CNC machines from the 1970s, which are the most advanced machines that can be used today for industrial purposes.
By definition, a CNC machine is a low tech machine. It works with predefined motions, very much like a sewing machine. Such a device is not a robot and it isn't programmed with a control flow; it has no artificial intelligence at all and planning algorithms are not needed. The result is that CNC machines can be used only for limited purposes. If the task is a bit more complicated and involves sensor readings, the CNC machine can't be used and the task is handled by human workers. This work mode is the best practice method: it's used by all industrial companies and it's not possible to improve on it. The combination of human workers for complex problems plus CNC machines for repetitive tasks ensures the lowest possible costs.
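To illustrate the difference the post is pointing at, here is a small sketch in Python. The coordinates and the sensor stub are invented for the example: a CNC-style job is just a fixed list of motions, while a robot-style program branches on sensor readings.

# Sketch: predefined motion (CNC style) vs. sensor-driven control (robot style).
# Coordinates and the read_sensor() stub are invented for illustration.

# CNC style: a fixed sequence of moves, no branching, no sensors.
toolpath = [(0, 0), (50, 0), (50, 30), (0, 30)]
def run_cnc(path):
    for x, y in path:
        print(f"move tool to ({x}, {y})")

# Robot style: the next action depends on what the sensor reports.
def read_sensor():
    return "object_detected"   # stand-in for a camera or force sensor

def run_robot():
    if read_sensor() == "object_detected":
        print("plan a grasp and pick the object")
    else:
        print("search for the object")

run_cnc(toolpath)
run_robot()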
This description sounds a bit sobering, because the robot isn't introduced and there is no plan to do so. But this is the reality. Robotics and Artificial Intelligence are subjects for universities, not for practical applications, and this won't change in the future. The problem is that the advantages of modern computing technology, AI algorithms and vision systems can't be used for practical applications. It's wishful thinking to imagine that robots can reduce the costs in a factory.
Somebody may ask how to introduce robots into the factory. The answer is that it's not possible. If somebody is interested in Artificial Intelligence, he has to do the experiments without practical applications. It's impossible for a robot to provide added value for a real company with real customers.
It's interesting to walk through the factories with the highest degree of automation. They use many CNC machines at the same time, connected by automated assembly lines. No human worker is needed during normal operation; everything runs on its own. The funny fact is that even such companies don't use robotics. All the technology comes from the 1970s and neither computers nor algorithms are needed; it's ordinary engineering in which the factory has a clear layout. Another interesting point is that the absence of robots in a factory isn't a mistake or a decision that will be corrected in the future; it's a best practice method if the aim is to reduce costs. The disappointing conclusion is that everything developed in the last 50 years under the name of robotics was a waste of time: it can't be used for practical applications and the researchers have developed the wrong technology. So do modern factories have a demand for robots? No, they don't. They are completely satisfied with the well known CNC technology. What they plan to buy in the next 10 years is more of this 1970s technology, but they certainly won't use robots in the production line.
November 16, 2019
A realistic description of Artificial Intelligence
Every new technology has pros and cons. Instead of only describing the technical aspects, it's important to give the context. A hammer, for example, is a tool which makes life easier; the same is true for the CNC machines invented in the 1970s. Like cars and washing machines, these devices are classical machines. In contrast, the computer revolution since the 1980s has produced a new kind of technology, namely robotics and Artificial Intelligence. These AI-based systems don't work as tools; they have a different social role.
A tool is by definition an invention which helps people do a task faster and at lower cost. Using a car instead of walking the distance on foot is an improvement; this makes the car a tool for solving a logistics problem. A naive assumption is to describe robotics as a tool as well. The idea is that a household robot helps in the kitchen. This kind of myth was transported in popular culture, but the visionary thinkers of the fifth computer generation also imagined a world in which robots would become the tools of humans.
The main difference between normal tools like a CNC machine and a robot is that robots have a much greater complexity. The result is that they can't be understood as tools; they must be interpreted as games. The difference is that the environment of a robot has to support the robot, not the other way around. The best example of what robotics is, is the Robocup challenge. The idea is that a team of 10 highly trained programmers has to write the software for a robot which can do a task. The task itself is useless; all the time spent programming the machine is wasted. This is especially true if more advanced robots are used for solving more demanding challenges. Robocup and similar challenges have a negative energy balance, which means the projects cost a lot of resources but nothing is delivered in exchange.
The problem is that in a modern business context each company has to prove that it produces added value. If a company produces more costs than it is able to provide in value, the company will go bankrupt. That is a standard rule in capitalism. The problem is that robotics projects produce costs and no value, which makes it impossible to build a company around a robotics project. We can imagine a robot as a machine which wastes resources, similar to a fire which needs a lot of wood but has no purpose. It's possible to throw more wood into the fire, which makes the flame bigger, but it's not possible to utilize the flame for a meaningful task. The wood is burned without anything being provided in return.
Under these constraints all robotics projects of the last 50 years are operating. It depends on the people and the technology how large the flame is, but at the end all the resources are gone. Why do new robotics projects get started if it's obvious that they will fail? Because it's a lot of fun. A recent example is the fully autonomous cooking robot called the “Moley kitchen robot”. The project was demonstrated at a fair two years ago, and the idea is to mount dexterous robot hands onto a UR5 robot arm. The project itself is great: it uses the latest vision technology combined with advanced motion capture and modern robot hands. The problem is that from an economical point of view it can be guaranteed that the robot won't find customers. The device costs a lot of money but doesn't provide any kind of value. It's a flame which burns through all the resources, and nothing that makes sense will remain.
What all these robot projects have in common is that they tell a story about the future. The idea is to explain to the audience how society will improve within the next 10 years, and the presented show robot demonstrates the capabilities of modern technology. Roughly spoken, companies like Rethink Robotics and Willow Garage aren't selling robots; they are distributing stories about future robotics. The number of robots sold is small. The reason is that a customer has no advantage whether he owns a household robot or not. Robots only make sense if the customer doesn't expect anything from the device but tries to improve the technology. For example, biped robots are sold mostly to university teams, which program the device so that the robot can attend the next Robocup challenge.
That means a university spends 10000 US$ on a toy robot because they want to spend endless hours programming the device. The idea is to burn time and money for nothing. And at the end, all the resources are gone.
Fully automated manufacturing
The interesting observation is that full automation is within reach of today's technology and at the same time it's not. Suppose a new robotics competition is started with the aim of building a factory which uses robots to automate all tasks. From a technical point of view, it's possible to build such hardware and program it with the help of the latest AI algorithms. A recent Starcraft AI challenge can be interpreted as a fully automated factory, and it's possible to build the same thing with Lego Mindstorms robots as well.
The problem is that the technology from the synthetic challenge can't be transferred to the needs of real factories. That means, if the aim is to use robots to make the assembly line in a factory more efficient, the team will struggle with the task. Robots work great in synthetic challenges but fail in real projects. This gap can't be closed in the future. That means we will see two domains at the same time: on the one side a fully automated demo factory in which robots don't need humans to build something, and at the same time factory productivity doesn't improve in reality, because the robot technology is missing.
For the beginner programmer in the field it's hard to understand why the technology developed in the laboratory and in synthetic robot challenges can't be utilized for real-world applications. The reason is that a robot in a lab works by its own rules, while a real factory needs a robot as a tool for improving an existing workflow. A robot can't become a tool, and for this reason robot automation fails in reality.
Perhaps it makes sense to explain this point in detail. If a robot is used in a challenge, for example Robocup, the robot is the superhero of the event. It gets programmed by engineers, and the audience asks which algorithm was used to control the servo motors. In a Robocup challenge, the robot has a certain social role. The idea is that around 10 human programmers have to improve the robot, and the robot has to demonstrate that it can play soccer very well. Such a role model can be fulfilled by the robots and the programmers alike. That means the overall Robocup challenge is perceived as a success and the participants have learned a lot.
If a robot is utilized in a practical application, the social roles are the opposite. In factory automation the robot shouldn't become a superstar, but a handy tool which supports the overall workflow. It's not possible to put a robot into such a social position. Only simple devices like a hammer or a CNC machine can become tools.
Assume the idea is to create a video about a robot superstar. The idea is that the robot has the role of a popstar and demonstrates what it can do. It's possible to build such a robot and equip the device with the ability to walk like a human, and even to react to natural language. That means such a video can be created, and such a robot can be programmed. The problem is that the social role is fixed. The robot will only ever be a superstar and nothing else. Real factories don't need robotic superstars; they are focused on productivity. That is the reason why the technology from synthetic robot challenges can't be transferred into reality.
Robocup at home
Let us take a look at the Robocup@Home challenge. What is seen in the videos are household robots which are able to serve a glass of water. All the actions are generated by software; that means the robot is not teleoperated but autonomous. At first glance this is a great success which supports the story that household robots will reach the mainstream in the near future. A closer look at the situation shows that household robots are not realistic. Instead, the Robocup@Home challenge has a different purpose.
The challenge does two things: first it tells a story about what future robots will do and how society will profit from them, and secondly it teaches humans how to do the task. Basically spoken, the social role of the robots is to educate and manipulate humans. Apart from this educational purpose, Robocup@Home has no further purpose. The presented robots can be bought in stores, but even if a customer buys one of the devices, he won't profit from it. The most interesting issue is that this kind of limit can't be overcome with better software. The social role which a robot is able to play is fixed. That means all robots in all robot challenges are made for educational purposes, not as tools to increase the productivity of humans.
For a potential customer the situation is very simple. If a tool doesn't increase productivity, it won't be used. Even somebody who owns a household robot will serve the drink manually, because this is the fastest choice. The main reason why robotics challenges were invented is to provide a stage for robots. Without a stage a robot has no purpose. It only makes sense in a certain social role.
Identify the productivity paradox
In the economic literature there is a mystery called the productivity paradox. It's about the mismatch between computer technology, which is available everywhere, and low productivity growth in the office and in the service industry. It makes sense to describe the paradox in detail.
The first important fact is that the productivity paradox has to do with the transition from the fourth to the fifth computer generation. The fourth generation lasted roughly from 1970 to 1980, while the fifth computer generation started after the year 1980. During the fourth generation no productivity paradox was visible. In that time lots of innovations were made which are used for practical applications, for example the barcode reader which revolutionized the supermarket, the CNC machine which increased productivity in the factory, and the electronic pocket calculator which made office number crunching easier.
In the early 1980s the first books were published about a potential future in which robots and Artificial Intelligence are used to increase productivity further. The idea was that robots could improve factory automation and help the service industry reduce its costs. This vision was never realized. In contrast to the technology from the 1970s, the next technology step envisioned since the 1980s was never introduced into reality. The mismatch has resulted in the productivity paradox: a situation in which high-speed computers, advanced robots and modern expert systems are available in theory, but the technology can't be used for practical applications.
To explain this situation better we have to go back to the golden 1970s. The technology invented at that time was useful for practical tasks. The CNC machine is a good example. In the 1970s the technology was new and it helped to improve the factory. A CNC machine is superior to the technology used before. Superior means that the company can reduce its costs and the employees are motivated to use it. The surprising fact is that the 1970s were the last decade in which this kind of innovation took place. Modern factories in the year 2019 are using the same technology that was available in the 1970s: a combination of barcode readers, mainframe computers, CNC machines and telephone communication.
At first glance, the companies have a need to introduce more modern technology, namely robotics and Artificial Intelligence. It's important to know that the companies have tried to do so in the past: since the 1980s many projects were started with the attempt to introduce advanced robots into factory automation. All of these projects have failed. In contrast to CNC machines, a robot has no advantage.
The productivity paradox and the missing fifth computer generation are the same problem. Both can be dated back to the early 1980s and have to do with the absence of innovation after the 1970s were over. Or, to explain it from the other perspective, automation technology has been frozen for 40 years. State-of-the-art factory automation is the same as in the mid 1970s. To understand the issue in detail we have to describe what the term fifth computer generation is about.
In the beginning it was a vision of future computer technology. The idea was to develop robots which could help to increase productivity. This plan was never realized, not because of the technology itself, but because the robot prototypes can't be used in real applications. What is possible with today's robots is to use them in synthetic benchmarks, for example the Robocup challenge. In such a task the robot is able to play soccer in a team. The problem is that the robot in the challenge can't be used for a task in the real world. The robot technology is locked into the synthetic challenge. From an academic perspective such robot competitions have become very successful. The early micromouse challenge evolved into more modern challenges in which the teams build robots that walk on two legs like humans. Today's robots are more advanced than their counterparts 30 years ago and they are able to master more complicated challenges. Unfortunately, the gap between a synthetic challenge and a real project is larger than ever. All the robots shown in Youtube videos are nothing but show robots. They work as prototypes in a fictional challenge, and the technology can't be used for increasing productivity in a real application.
That is the major difference to the technology of the fourth computer generation. Innovations like the barcode reader and the CNC machine can be utilized for real tasks. It seems that fifth generation computers in general struggle with reality. What can be seen is that fifth generation robot projects have a tendency to flip the social roles. The machine isn't a tool which helps the human; it's the other way around. The team of human programmers has to invest lots of hours until the robot is able to participate in the Robocup challenge. That means the robot doesn't provide work; it consumes human energy. The question is: what is the purpose of Robocup-like challenges? The main idea is to tell a story about a future society. According to the Robocup challenge, robots will become successful in under 10 years and will help human employees. That is, in short, the plot told by the participants of robot challenges. They program the robots and create the videos to support their story about the fifth computer generation. It's an optimistic outlook into the future which should inspire more users to participate in the movement.
It's important to know that this kind of vision isn't realistic. Such robots can't be built, and they won't help human workers in the factories. Past projects which tried to realize this vision in factories have shown the opposite. It seems that every failed robot project has increased the need to tell an optimistic story about future robotics.
Let us go a step back and describe the situation from an economic perspective. Suppose a company is interested in increasing the productivity of a factory. Which kind of technology supports this attempt? The only technology which works was invented in the 1970s. Everything invented later won't increase productivity; it will reduce it. That means, if the company buys some of the machines invented in the 1970s, the factory will run at the maximum productivity level. It's not possible to increase it further.
Perhaps it makes sense to briefly define the difference between CNC machines and robots. At first glance there is no difference, because on the timeline the CNC technology was invented in the 1970s and the robots were built in the 1980s. But CNC machines can be used for practical applications, while robots can't. CNC machines are part of the fourth computer generation while robots are part of the fifth. Between the two there is a large gap. It's important to become aware of this gap, because it helps to explain the productivity paradox.
The major limitation of CNC machines is that they can automate some tasks in a factory, but not everything. A CNC machine only works together with humans in the loop. Most factories use CNC machines at the assembly line and for automated welding, and from the cost perspective they are here to stay. The problem is that no technology is available which can automate the factory further. The human workers can't be replaced by robots; they work together with CNC machines. This problem can't be solved by explaining to the factory what a robot is; they know it from failed projects in the past.
Understanding the needs of a factory
Instead of asking how robots can help to automate a factory, the more elaborate question is what kind of technology a factory needs. Modern companies have a demand for barcode readers, CNC machines and other technology invented in the 1970s. They use these tools to reduce their costs and increase their output. In contrast, the technology developed in the fifth generation computer revolution, namely robots, Artificial Intelligence and neural networks, doesn't fulfill the needs of a modern factory. It's not possible to use it to increase productivity; it is developed for its own sake. The main reason why robotics has become popular since the 1980s is that the AI programmers have a need for it. They use robot problems as a vehicle to talk about Artificial Intelligence. The fifth computer generation is mostly an academic discipline which isn't solving problems but creating a new sandbox.
None of the newly developed robots will be introduced to the mainstream market as a product. A robot can't be sold to customers, because the customer won't profit from it. What a customer likes to buy in exchange for money is a modern CNC welding machine, because such a machine provides added value. In contrast, a robot of the latest generation doesn't provide anything in return. It's a lose-lose situation.
From an economic perspective there are two sorts of robotics companies on the market. The first sell CNC machines under the label of robots. Notable examples are Fanuc, ABB and Kuka. These companies are successful in the business not because their robots are great, but because their CNC machines work reliably and are essentially the same as in the 1970s. The second sort of robot companies are real innovators. For example, Rethink Robotics, Jibo and Willow Garage have produced robots which fit great into the fifth generation computer revolution. What these companies have in common is that they are bankrupt or will become so within 2 years. The reason is that the products they sell have no added value for the customer. The only place in which a Baxter robot from Rethink Robotics makes sense is an academic robot challenge. The funny thing is that especially the Baxter model was a success and a failure at the same time. It was successful because many papers with an academic background were written about the model, and it was a failure because the robot can't be used for real applications.
A possible explanation of the productivity paradox
From a descriptive perspective it can be shown that most robotics projects in reality fail. That means the new pick&place robot arm isn't able to increase productivity at the assembly line. Using human workers instead of a robot is, from an economic standpoint, the better choice. What this description doesn't provide is the reason why.
Apart from anecdotes about failed robotics automation projects in the car industry, in hospitals or in restaurants, there is a need to give a reason why all these projects have failed. A possible explanation has to do with the social role of a robot in a project. There are two possible roles available:
1. robot as a superstar, which is provided in dedicated robotics challenges like micromouse and robocup
2. robot as a tool, which is requested in automation projects in factories and hospitals
The reason why car companies start robot projects in their factories is that they are interested in a robot as a tool for improving the workflow. The hope is that a robot is able to increase productivity and reduce costs. Robots are seen as an advanced sort of hammer or CNC machine which helps the human workers. This social requirement for a robot can't be realized. All attempts at utilizing a robot as a tool have failed.
Only the first role (robot as a superstar) results in a successful project. Building a two-wheeled robot which is able to travel through a micromouse maze is an engineering problem which can be solved if enough skill is available in the team. Many successful demonstrations of the task have been recorded in the past, and the experiment can be repeated with new hardware and new engineering teams. Sure, it's possible that the robot gets lost in the maze, but this is only a detail problem which can be fixed with better programming. In general, most robotics challenges can be solved within the given time frame. It's important to know that the social role in all of these competitions is that the robot isn't seen as a tool, but as the most important subject. It's a superstar, and the engineers have to improve the machine.
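To illustrate why the maze task is tractable, here is a small Python sketch of the kind of search a micromouse program relies on, under the simplifying assumption that the maze layout is already known; a real micromouse discovers the walls with its sensors while driving. The maze, start and goal below are invented for illustration.
# Breadth-first flood fill on an invented grid maze: 'S' is the start,
# 'G' the goal, '#' a wall. Returns the shortest cell sequence to the goal.
from collections import deque

MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#G#",
    "#########",
]

def solve(maze):
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "S")
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if rows[r][c] == "G":
            return path                     # shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if rows[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                             # no route to the goal

print(solve(MAZE))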
The difference between the social roles isn't only an academic one. It determines whether a project becomes economically productive or not. Dedicated robotics challenges in which a robot is the superstar cost lots of money but provide nothing in return. The micromouse which traverses the maze doesn't fulfill the needs of an external customer; the robot was created for its own sake. In contrast, real automation projects in a company are focused on customer needs. The car factory wants to sell a car to a customer, and the robot should do a subtask in the production facility. In such a social role the robot fails.
It makes sense to describe the situation from an abstract point of view:
1. CNC machines = fourth generation computers = social role as a tool = increases productivity
2. robots = fifth generation computers = social role as a superstar = lowers productivity
With such a template it's easy to predict the outcome of a given project. Using a CNC machine in a robotics challenge won't work, because the CNC machine can't be programmed freely. The same mistake is obvious if a robot is used in an automation project in a company with the aim of increasing productivity. Each technology has a certain sweet spot in which the device can be used in a meaningful way. It's interesting that outdated CNC machines are able to increase productivity, while advanced robots aren't able to do so. Sure, every factory can test the hypothesis for itself; it's possible to start a new robot project to prove that the thesis is wrong. But from the known projects of the past it can be estimated what will happen.
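The template above can even be written down as a toy lookup table. The following Python sketch uses invented labels and is only meant to show that the prediction is a mechanical mapping from technology class to social role and expected project outcome, not a real decision model.
# Toy encoding of the template: technology class -> social role -> verdict.
TEMPLATE = {
    "cnc_machine": {"generation": 4, "role": "tool"},
    "robot":       {"generation": 5, "role": "superstar"},
}

def predict(technology, project_goal):
    """Return a rough verdict for a project, based only on the template."""
    role = TEMPLATE[technology]["role"]
    fits = (project_goal == "increase productivity" and role == "tool") or \
           (project_goal == "win a challenge" and role == "superstar")
    return "likely to succeed" if fits else "likely to fail"

print(predict("robot", "increase productivity"))        # likely to fail
print(predict("cnc_machine", "increase productivity"))  # likely to succeed
print(predict("robot", "win a challenge"))              # likely to succeed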
The productivity paradox has to do with using a certain technology for the wrong purpose. In most cases the idea is to utilize an advanced robot to increase the productivity of a factory. Such projects will fail, and this is what is called the productivity paradox. The bottleneck can be avoided by defining the requirements first. Which means: it's possible to increase productivity or to play with robots, but not both at the same time.
It makes sense to observe successful robot projects in synthetic challenges closely. In competitions like Robocup, the robot is the superstar. The task it should do is given by the rulebook; for example, one requirement is that a team of robots should play a game of soccer. This includes object recognition, path planning and teamplay. The most interesting feature is that most of the participants are successful in the competition. Which means the robots work great, they are driven by software, and each year the skill level becomes a bit higher. Without any doubt it's possible to program even biped robots in a way that makes them successful in the Robocup challenge.
The only thing that is a bit surprising is that this technology can't be transferred into other domains. The origin of the Robocup challenge was to create a testbed for experimenting with new robotics technology, with the long-term goal of using the newly acquired knowledge in practical applications, for example in factory automation. The Robocup challenge itself runs great. Each year the teams become better and lots of new AI-related knowledge is written down in the papers around the competition. What is missing is the knowledge transfer into real applications.
The prediction is that this knowledge transfer is not possible. That means the advanced robots succeed in the synthetic challenge but fail in real applications outside the competition. To understand the reason why, it makes sense to observe the robot projects in an academic context.
The typical university-driven academic project is not motivated by increasing productivity; the main purpose is to explore new knowledge. In the standard case, a team of researchers is unsure how to build a biped robot, and they start the project to develop a new biped walking algorithm. If they are trained well, they get the first results after a while and write a paper about the walking machine. This paper motivates other researchers to experiment with more advanced robots. From an academic standpoint such projects make sense, because at the end many new papers are written and new technology is developed which wasn't available before. It's important to know that the needs of university robotics projects are different from automation projects in reality. That means the robot in the lab is capable of biped walking and has a built-in vision system, but a factory automation project can't utilize this technology in a meaningful way.
Or let me explain it from a different point of view. Suppose a university team has built and programmed an advanced humanoid robot which was successful in a Robocup challenge. Lots of money was invested in the project, and hundreds of researchers have supported it. The problem is that from the perspective of factory automation all the written software and all the advanced hardware is useless. It won't increase productivity at the assembly line.
To understand the situation better we have to go back to the 1980s. In that era there was a lot of interaction between universities and factory automation projects. The idea was to utilize the latest knowledge from the academic domain to increase productivity in the car industry and to build advanced service robots which help to reduce costs for the customers. The problem was that most or even all of these university-factory projects failed. The needs of the factory can't be fulfilled by robotics experts, and the latest robots developed at the university are useless for factory automation. As a consequence the collaboration stopped.
The rise of synthetic robotics challenges is a sign that the university-driven robotics community has built its own challenges. The aim is no longer to automate existing factories; instead new challenges are created which are needed by the robotics community itself. Basically spoken, robotics development works for its own needs. There is no plan to transfer the knowledge from the university into practical applications. Both domains are separated.
Today, both parties work with opposite technologies. In practical automation projects, the well-known CNC machines developed in the 1970s are used. These machines provide the maximum productivity and help the companies to reduce their costs. On the other hand, the robot projects in the universities work towards different goals. CNC machines are not researched in the university domain; the preferred technology is deep learning, biped robots and modern robot control systems. The prediction is that in the future the gap will increase. That means university-driven robotics projects and automation projects in the factory have nothing in common, and they are done with different ideologies in mind. Simply spoken, both parties have unlearned how to communicate with each other. University researchers who are interested in robotics have no reason to start a project in which a CNC machine is utilized, because this technology is 40 years old and not interesting enough from an academic standpoint. On the other hand, automation experts in a car factory have no obligation to introduce modern robotics into the workplace, because these devices cost too much and don't provide value.
This unwillingness to communicate is something that wasn't there in the mid 1970s. At that time, university research and the needs of industrial automation were the same. The latest CNC machine technology was developed first by researchers in the lab and then transferred into the practical domain. With the advent of the fifth computer generation the situation changed drastically. Basically spoken, academic research and industrial needs have developed in opposite directions.
To understand the reason why, we have to describe the situation in reality. What car companies and the service sector try to do is earn money by providing products. At first the company produces a car, and then the car is sold to a customer. The money is reinvested into the factory and more cars are produced. Research projects driven by companies have the obligation to make this process more efficient. The aim is to reduce the cost of producing a car, and if an engineer has an idea how to do so, the factory will use it as soon as possible. The consequence of this principle is that a company is profit-oriented: it is only interested in technology which helps to reduce costs. The problem is that the entire domain of Artificial Intelligence and robotics doesn't help to reduce costs; it does the opposite. From the perspective of a car factory it makes no sense to research Artificial Intelligence in detail, because every time a detail problem is solved, new problems become obvious. As a consequence, Artificial Intelligence isn't researched by profit-oriented companies; the research is delegated to universities or outsourced to research teams which are not profit-oriented.
This hypothesis can be tested by investigating robotics companies of the past. There are some examples in which companies tried to earn money by developing robot hardware and software. The most advanced example is the Willow Garage company, which created the ROS operating system. What all these companies have in common is that they struggle from an economic perspective. The reason is that the created hardware and software finds no customers. Basically spoken, nobody likes to pay 100000 US$ for a household robot which can do nothing.
It's not the fault of a certain company, but it has to do that Artificial Intelligence in general has problems to find customers. The market principle is, that a customer pays money and then he gets something in return. A robot works a bit different. If a customer pays 100000 US$ for the PR2 Robot from willow garage he gets nothing in return. What he gets instead is the need to invest more time and more money into the robot.
In a previous blogpost, I have compared robotics projects with a flame which has no purpose. It's possible to throw more fuel into the flame but the flame wouldn't provide something back. The problem is, that market oriented products have to provide an added value for the customer. He pays for example 100 US$ and he expects something in return for the money. And exactly this is missing for AI projects. In the 1980s the naive assumption was, that robotics projects have a long duration. That means, that before the robot can improve the productivity in the company, the engineers will need 10 years in which they can explore the new technology. In the meantime it's known, that the duration is not 10 years, but 100 years and longer. That means, the AI community will research a topic for decades and at the end they won't have something to offer which helps the customer. The problem is not, that the research is hidden behind closed doors. The problem is, that even all the papers are published they are useless for automation tasks.
Perhaps some numbers make the situation more obvious. Each year around 1 million new papers about Artificial Intelligence appear in the Google Scholar index. Most of them can be downloaded in full text. The papers themselves are great, the authors are experts in the field, and each year they describe more sophisticated robots built in the laboratory. But a closer look into the papers shows that nothing new was discovered. The academic community has researched a topic in detail, but hasn't found anything which can be converted into a practical product. The result is that car factories, hospitals and restaurants have been working unchanged for 40 years. The technological development has stopped. No usable technology is available, and all the work is done by human workers.
What we can observe is that the world is stuck on a low technological level which was frozen in the 1970s, while at the same time the AI revolution has started and the development speed has increased over the years. The latest robotics research is more advanced than ever and nearly every week a breakthrough is announced, yet the companies in the real world are working with outdated CNC machines, barcode scanners and repetitive human work. The hypothesis is that this gap can't be overcome; it is described in the literature as the productivity paradox.
Apart from anecdotes about failed robotics automation projects in the car industry, in hospitals or in restaurants, there is a need to explain why all these projects have failed. A possible explanation has to do with the social role of the robot in a project. There are two possible roles:
1. robot as a superstar, the role provided in dedicated robotics challenges like Micromouse and RoboCup
2. robot as a tool, the role requested in automation projects in factories and hospitals
The reason why car companies start robot projects in the factory is that they are interested in the robot as a tool for improving the workflow. The hope is that a robot is able to increase productivity and reduce costs. Robots are seen as an advanced sort of hammer or CNC machine which helps the human workers. This social requirement for a robot can't be realized. All attempts at utilizing a robot as a tool have failed.
Only the first role (robot as a superstar) results in a successful project. Building a two-wheeled robot which is able to travel through a Micromouse maze is an engineering problem which can be solved if enough skill is available in the team. Many successful demonstrations of the task have been recorded in the past, and the experiment can be repeated with new hardware and new engineering teams. Sure, it's possible that the robot gets lost in the maze, but this is only a detail problem which can be fixed with better programming. In general most robotics challenges can be solved within the given time frame. It's important to know that the social role in all of these competitions is that the robot isn't seen as a tool but as the most important subject. It is a superstar, and the engineers have to improve the machine.
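To make it concrete why maze traversal is a bounded engineering problem: a Micromouse typically plans its route with a flood-fill or breadth-first search over the grid of maze cells. The following is only a minimal sketch of that idea in Python; the wall encoding, the function name and the example call are illustrative assumptions, not code from any real competition entry.

from collections import deque

def shortest_path(walls, start, goal):
    """Breadth-first search over a grid maze.

    walls: set of blocked (row, col) cells; every other cell is free.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = 16, 16          # classic Micromouse mazes are 16x16 cells
    queue = deque([start])
    came_from = {start: None}    # remembers the predecessor of each visited cell
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []            # walk the predecessor chain back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in walls and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

# Example: an empty maze, drive from a corner towards the centre.
print(shortest_path(set(), (0, 0), (7, 7)))

The point of the sketch is not the algorithm itself but its scope: the task is fully specified by the rulebook, so it can be solved, debugged and repeated, which is exactly what the superstar role demands.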
The difference between the social roles isn't only an academic one. It determines whether a project becomes economically productive or not. Dedicated robotics challenges in which the robot is the superstar cost a lot of money but provide nothing in return. The Micromouse traversing the maze doesn't fulfill the needs of an external customer; the robot was created for its own sake. In contrast, real automation projects in a company are focused on customer needs. The car factory likes to sell a car to a customer, and the robot should do a subtask in the production facility. In such a social role the robot fails.
It makes sense to describe the situation from an abstract point of view:
1. CNC machines = fourth generation computer = social role as a tool = increases productivity
2. robots = fifth generation computer = social role as a superstar = lowers productivity
With such a template it's easy to predict the outcome of a certain project. Using a CNC machine in a robotics challenge won't work, because the CNC machine can't be programmed freely. The same mistake is obvious if a robot is used in an automation project in a company with the aim of increasing productivity. Each technology has a certain sweet spot in which the device can be used in a meaningful way. It's interesting to note that outdated CNC machines are able to increase productivity, while advanced robots aren't able to do so. Sure, every factory can test the hypothesis for themselves. It's possible to start a new robot project to prove that the thesis is wrong. But judging from the known projects of the past, it can be estimated what will happen.
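For illustration, the template can be written down as a small decision rule. This is only a sketch of the argument in this post, not an empirical model; the function name predict_outcome, the dictionary and the outcome strings are my own labels.

# Toy encoding of the template above: a device used in its intended
# social role works, a device used in the opposite role does not.
SWEET_SPOT = {
    "cnc machine": "tool",        # fourth generation computer
    "robot": "superstar",         # fifth generation computer
}

def predict_outcome(device: str, requested_role: str) -> str:
    """Predict a project outcome from the device and its requested social role."""
    if SWEET_SPOT.get(device) == requested_role:
        return "project is likely to succeed"
    return "project is likely to fail (productivity paradox)"

print(predict_outcome("cnc machine", "tool"))   # factory automation
print(predict_outcome("robot", "tool"))         # robot as a factory tool
print(predict_outcome("robot", "superstar"))    # RoboCup / Micromouse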
The productivity paradox has to do with using a certain technology for the wrong purpose. In most cases the idea is to utilize an advanced robot for increasing the productivity of a factory. Such projects will fail; this is what is called the productivity paradox. It's possible to avoid this trap by defining the requirements first. Which means: it's possible to increase productivity or to play with robots, but not both at the same time.
It makes sense to observe successful robot projects in synthetic challenges closely. In competitions like RoboCup, the robot is the superstar. The task it should do is given by the rulebook; for example, one requirement is that a team of robots should play a game of soccer. This includes object recognition, path planning and team play. The most interesting feature is that most of the participants are successful in the competition. Which means that the robots work well, that they are driven by software and that each year the skill level rises a bit. Without any doubt it's possible to program even biped robots in a way that they are successful in the RoboCup challenge.
The only thing which is a bit surprising is that this technology can't be transferred into other domains. The origin of the RoboCup challenge was to create a testbed for experimenting with new robotics technology, with the long-term goal of using the newly acquired knowledge in practical applications, for example in factory automation. The RoboCup challenge itself runs great. Each year the teams become better and lots of new AI related knowledge is written down in the papers around the competition. What is missing is the knowledge transfer into real applications.
The prediction is that this knowledge transfer is not possible. That means the advanced robots succeed in the synthetic challenge, but they fail in real applications outside the competition. To understand the reason why, it makes sense to observe robot projects in an academic context.
The typical university driven academic project is not motivated by increasing productivity; its main purpose is to explore new knowledge. In the standard case, a team of researchers is unsure how to build a biped robot and starts a project to develop new biped walking algorithms. If they are trained well, they get the first results after a while and write a paper about the walking machine. This paper motivates other researchers to experiment with more advanced robots. From an academic standpoint such projects make sense, because at the end many new papers are written and new technology is developed which was not available before. It's important to know that the needs of university robotics projects are different from those of automation projects in the real world. That means the robot in the lab is capable of biped walking and has a built-in vision system, but the factory automation project can't utilize this technology in a meaningful way.
Or let me explain it from a different point of view. Suppose a university team has built and programmed an advanced humanoid robot which was successful in a RoboCup challenge. Lots of money was invested in the project, and hundreds of researchers have supported it. The problem is that from the perspective of factory automation all the written software and all the advanced hardware is useless. It won't increase the productivity at the assembly line.
To understand the situation better we have to go back to the 1980s. In that era there was a lot of interaction between universities and factory automation projects. The idea was to utilize the latest knowledge from the academic domain to increase the productivity of the car industry and to build advanced service robots which help to reduce costs for the customers. The problem was that most or even all of these university-factory projects failed. The needs of the factory can't be fulfilled by robotics experts, and the latest robots developed at the university are useless for factory automation. As a consequence the collaboration stopped.
The rise of synthetic robotics challenges is a sign that the university driven robotics community has built its own challenges. The aim is no longer to automate existing factories; instead new challenges are created which are needed by the robotics community itself. Simply put, robotics development is working for its own needs. There is no plan to transfer the knowledge from the university into practical applications. Both domains are separated.
Today, both parties are working with opposite technologies. In practical automation projects, the well known CNC machines are used which were developed in the 1970s. These machines provide the maximum productivity and help the companies to reduce costs. The robot projects in the universities, on the other hand, work towards different goals. CNC machines are not researched in the university domain; instead the preferred technologies are deep learning, biped robots and modern robot control systems. The prediction is that in the future the gap will increase. That means university driven robotics projects and automation projects in the factory have nothing in common, and each is pursued with a different ideology in mind. Simply put, both parties have unlearned how to communicate with each other. University researchers who are interested in robotics have no reason to start a project in which a CNC machine is utilized, because this technology is 40 years old and not interesting enough from an academic standpoint. On the other hand, automation experts in a car factory have no obligation to introduce modern robotics into the workplace, because these devices cost too much and don't provide value.