December 30, 2019

From printed academic journals towards self-publishing authors

The term author is a relatively new one. It refers to an individual who publishes a book. The interesting fact is that in today's academic publishing industry there are no authors in this sense; papers are created by institutions. The so-called affiliation is the most important sign of membership in a larger institution. That means a new paper about a computer science topic isn't created by one or two individuals, but is initiated by a university department in cooperation with a journal. This is especially true for the hard sciences.

In the humanities, the situation is more relaxed. That means individual authors are creating papers, and they are doing so without asking the university first. The reason is that in fictional writing there is a long tradition of becoming an author and self-publishing the work, either in print or on the internet.

Let us take a look at some recent developments in academic publishing. The rise of academic social networks like ResearchGate and Academia.edu can be called a small revolution, because they put the attention on the individual author. A paper is no longer the result of group work at an institution; it is created by individuals for their own purposes. The argument against such a development is very similar to the argument against authorship in general: the idea that a book is created by somebody, rather than for somebody, is something new.

To understand the situation in detail, we have to ask about the relationship between reader and author. In today's academic publishing system, the reader is the center of attention. If somebody wants to learn something about science, he can visit a library, a university or a book shop. In all these cases, the reader gets a lot of help. The average library provides thousands of books, and at a university the user can choose between many hundreds of lectures. Now we have to focus on the other social role. Suppose somebody wants to write and publish a book. Which options are available? Very few; to be more specific, there is no way for an individual to publish anything, especially not if the topic is in science or technology. The reason is that publishing in the ivory tower is not done by individuals but by larger groups; for example, it's driven by a company or by a journal.

It's not very hard to predict that in the future the situation will flip. The individual author will become the top priority and the reader gets ignored. This has to do with the increasing number of authors. If all people are authors, then all people want to publish something on an individual basis. The boom of the self-publishing market is only the first step in that direction. Self-publishing today usually means publishing fictional books, not knowledge about science and technology. The reason is that in the fiction segment it's very common for the author to stay in focus.

The open question is how authorship will become dominant in the non-fiction segment. One way of doing so is to define rules under which the publication of non-fiction / scientific books makes sense and under which it doesn't. A possible rule is that only the publication of detailed, specialized knowledge makes sense. A non-fiction writer is asked to write a book which fits into a department library of a university, but not into a universal library. The idea is that a document about the Dewey classification 500 is useless, because 500 isn't specific enough; it's reserved for general information about natural science and mathematics. The better idea is to write a book about DDC 519.233, which covers Markov processes as a subsection of probability theory. The value of non-fiction information depends on how deep in the DDC22 tree the book is located.

December 28, 2019

Analyzing the bottleneck in academic publishing

To predict the future of academic publishing, it makes sense to focus on conflicts in the existing library system. In general, the situation can be divided into the time before the advent of the internet and the time after the 1990s, which works with electronic communication.

Academic publishing before the internet was centered around special libraries. The reason was that only special libraries are able to provide detailed knowledge. In a single building, all the books and journals about a subject can be collected. With a general library this was not possible, because the number of shelves and the costs would explode. If no internet is available and printed books are the only medium, a special library fulfills the needs of professional engineers well. The user comes with a concrete specialized problem, and the library provides the answer to the question.

During the transition from printed journals to online journals, a bottleneck became visible. Most specialized libraries have failed to transfer their concept to the internet. Some library portals are available on the internet which provide access to a small amount of resources from a special domain. These websites have failed. The reason is that internet users are not interested in using 100 different websites; they are interested in a meta-search engine which provides as much information as possible. As a result, not specialized libraries but information aggregation search engines have become famous in the internet age. The most famous ones are Google Scholar and Elsevier-based universal search engines.

It seems that in the internet age, the concept of a specialized library has become obsolete. This has put a lot of stress on the system, because there is a need for something new, but nobody knows what the future library will look like. What is certain is only the old, outdated concept of specialized libraries. The main idea was to work with constraints which reduce complexity. In a specialized library about computer science, only books and journals from this subject are welcome. Everything else gets ignored and is called not relevant. That means if a user in a specialized computer science library wants to read an article about music theory, the request is denied because it doesn't fit the core subject. Additionally, all the users of a specialized library have an expert background by default. They are experts in a certain subject but not informed about all the other knowledge in the world.

The main benefit of complexity reduction is lower costs. If books outside the specialized domain are ignored, they do not have to be stored on the shelf. This saves money, human labor and physical space. The combination of low costs plus in-depth information access was the success factor for special libraries.

The working hypothesis is that with the advent of the internet, the restrictions of the past no longer make sense. The problem is that the complexity has exploded. The world of a special library was easy to understand. But if all specialized libraries merge into a single universal library, it's unclear what the system is about.

Discipline-oriented digital libraries

Since the 1990s, some attempts have been made to build so-called “discipline-oriented digital libraries”. The idea is to transfer the concept of a special library into the internet age. This idea makes sense but fails at the same time. From the perspective of a printed special library it makes sense: a special library is the core of the academic infrastructure, and its major bottleneck is that no fulltext search is possible. Providing the same information in an online format makes a lot of sense for the users.

At the same time, these websites have failed, because the result is that lots of different search engines exist side by side. What the user wants is a meta-search engine which can search through all the content. One option for doing so are the information aggregation websites from Elsevier and Springer, which combine the content from different special libraries. The disadvantage is that a publishing house is different from a library. So the question remains open how an internet version of a special library will look.

The reason why it makes sense to analyze classical special libraries in detail is that they can answer the question of what academic publishing is. Academic excellence and a special library are the same thing. So what exactly is the difference between a normal library and a special library? It has to do with a certain sort of information. In a normal / general library, the books are about fictional topics; the novel Anna Karenina by Leo Tolstoy is an example. A specialized library won't collect such a book, even if it's world literature. A second property of a special library is that it provides in-depth knowledge about a concrete domain.

So we can summarize that specialized libraries are focused on non-fictional information about detailed topics. If a document contains lots of the special vocabulary spoken by experts in the field, and provides non-fictional content, then the document is scientific.

Building a digital discipline-oriented library

Reproducing the specialized focus of an academic library in the internet age can be realized with tags. The difference between a fictional book from a general library and a scientific book is that the second one is tagged differently. A non-scientific book can be tagged with “novel, adventure”, while an academic book is tagged with “non-fiction, computer science, neural networks”. What academic users are demanding is content which is tagged in a certain way, as the sketch below illustrates.
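As a minimal sketch of how such tagging could be used in software, assuming a hypothetical list of document records where each entry carries a set of tags; the records and the filter function are invented for illustration only:

# hypothetical document records with tags
documents = [
    {"title": "Anna Karenina", "tags": {"fiction", "novel"}},
    {"title": "Intro to neural networks", "tags": {"non-fiction", "computer science", "neural networks"}},
]

def academic_filter(docs, required_tags):
    # keep only the documents that carry all of the required tags
    return [d for d in docs if required_tags <= d["tags"]]

print(academic_filter(documents, {"non-fiction", "computer science"}))
# -> only the neural networks book is returned

A discipline-oriented digital library would then be little more than a search engine which applies such a filter by default.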

Specialized libraries and special journals create their reputation by focusing on a certain topic. Academic work means ignoring most of the information and orienting oneself only to a sub-part of the problem. Roughly speaking, a journal which publishes papers from different disciplines is not an academic journal. Only a journal which is dedicated to a concrete issue is able to create a reputation.

On YouTube there is a video available which introduces a science library from the late 1970s: Life Sciences Library 1979 - Restored Version, https://www.youtube.com/watch?v=xe0lFUlz-io Even though the library is not very large and works with printed index cards, it's a state-of-the-art library even by today's standards. That means, if the internet is offline, a specialized library would provide the same or even better information than what is available online. The small number of books and the missing fulltext search are not a problem in a special library, because the content can be explored manually.

Science libraries are the core of academic excellence

In the public perception, special libraries are often ignored. They don't have prestigious buildings, and in contrast to universal libraries the number of books they have to offer is small. Most special libraries are small in size: the number of employees is below 100, and they are located in a single building. On the other hand, special libraries are more important for researchers than large universal libraries. What researchers do first and foremost is focus on a concrete topic. This is the same as becoming an expert. And special libraries are needed by these experts for reading new information.

A special library is the same as academic work. It's not possible to become an expert in everything. But there are domains like physics, mathematics, biology and so on. It makes sense to collect all the information in a single library and avoid creating larger universal spaces. This is not rooted in a certain mentality; the simple reason has to do with costs. A special library produces lower costs than a universal library. At the same time, a special library creates a high entry barrier. Users from a different department are not allowed to enter a special library. Only mathematicians can visit a mathematical library.

In the electronic age, the borders have blurred. The idea is to get access to information worldwide and from all subjects under a single search engine. This new understanding of a knowledge hub works differently from the former science library. And the question is how to maintain the usual specialized quality standard in the electronic age.

Let us give a concrete example. The hot topic in the age of Open Access is so-called predatory publishers. These are electronic journals which have lowered the entry barrier. Predatory journals are perceived by academics as low-quality journals. But what if someone creates a specialized predatory journal? That is, a low-entry-barrier journal which is dedicated to a special topic and accepts only manuscripts from this domain. Such a predatory journal has to be called a normal academic journal, because it provides the sort of information which is needed by experts. Or let me explain it the other way around: if a researcher gets specialized, up-to-date information about his domain, the researcher is happy, no matter whether the journal is called predatory or not.

In contrast, a fake journal amounts to an attempt to describe a subject in a general way, that is, by avoiding the specialized vocabulary of the expert. A journal with mainstream information targeted at a larger audience can be called a fake science journal, because it does not fulfill the needs of academics.

Making Wikipedia more accessible to the public

Introducing Wikipedia to a wider audience in the year 2019 is no longer necessary. The website has reached the top 10 of the Alexa statistics and can be called a success. Similar to encyclopedic projects in the 18th century, Wikipedia has adopted the latest technology and tries to enlighten society.

Around the Wikipedia project there is a major concern, which lies not in the content itself, but in the documentation of what Wikipedia is about and how to contribute to the project. The problem is that many help pages have been created, and additionally a large number of papers about Wikipedia have been written in the academic literature. It seems that even for bibliographic experts it's hard to explain what Wikipedia is and in which direction the project will evolve in the future.

The literature about Wikipedia can be divided into two groups. The beginning was dominated by papers about the project itself and its comparison to existing encyclopedia projects like Britannica. Since 2010, a new sort of paper has been written which focuses on the conflicts in the project. Introducing this sort of text needs a detailed explanation.

In general, there are two options available to describe a project. The first idea is to assume that Wikipedia is a black box which can be communicated to the outside world. The second idea is to describe Wikipedia as a living system which is powered by internal conflicts. Typical examples of a conflict are an edit war, banning a user from the project, or deleting an article. The interesting fact is that smaller wikis created by a single admin don't have these sorts of conflicts. In a single-user wiki, the only possible conflict is between the human user and the MediaWiki installation, for example when the user tries to format a heading in bold but the syntax parser produces an error.

In the Wikipedia project, a more complicated sort of conflict is visible, which has to do with interacting users who are trying to achieve different goals. The most obvious conflict is between a user who wants to enter new information into the wiki and the admin who wants to prevent this, because he classifies the edit as vandalism. The extent of conflicts in Wikipedia isn't researched very well. In the early literature, the assumption was that conflicts can be ignored. From a game-theoretic perspective, it makes sense to monitor the conflicts in detail, because they explain what the rules of the system are.

It's important to know that Wikipedia works differently than described in the help section. The rules of the Wikipedia game are not stated explicitly; they are communicated in the conflicts which break out and which are resolved in a certain fashion. Describing these conflicts makes it possible to identify the shared goals of the users and the cases in which stress comes from the outside.

How exactly are conflicts resolved in Wikipedia? The answer is that the users in the system anticipate the reaction of the community, and this allows them to adapt their individual behavior. That means a conflict is located on a time scale and produces reactions in the future. In most cases, the prediction of future behavior is based on looking back into the past. Long-term admin users have extensive knowledge of what the workflow was for certain article conflicts. This pattern is used to generate a certain behavior in the present. A behavior consists of entering text and pressing delete buttons.

Creating a Wikipedia article

... from scratch doesn't make much sense, because the user has no idea which topic is important nor how to format the paragraph so that Wikipedia is happy with it. The more elaborate way of creating content for Wikipedia is to search for text already written in one's own home directory. In the best case, it's a draft for an upcoming Wikipedia article, written 2 months ago but never uploaded to the encyclopedia. Such a draft version can be extended with two literature references, and after proofreading it's ready for the sandbox. This tool makes it possible to check whether the paragraph is well formatted, and then it can be copied into the article space.

The newbie would assume that after uploading new content to Wikipedia, an edit war will start in which a powerful admin collective will go through every referenced source and ask whether the user is already familiar with the topic. Such a scenario does occur for mainstream articles with high pageview statistics, but an edit in a normal scientific article in the encyclopedia is mostly ignored. That means the content is uploaded and nothing will happen. If the edit doesn't look like outright spam, but seems halfway informed, it will be accepted as valid. The reason is that most articles in Wikipedia don't have any contributors at all. That means the last substantial edit was made 2 years ago, and if someone wants to add a small paragraph, Wikipedia won't reject it.

Sure, it's important to write accurate sentences and provide quality resources in the footnote section, but in general the required quality standard is only average.

Academic publishing isn't invented yet

If someone tries to identify the best-practice method in academic publishing, he will recognize that the subject is highly controversial. The reason is not that each journal and each university has a different opinion; the major problem is that so-called academic publishing is invisible. Invisible means that there is no history of publishing available which can be described and reproduced. The missing data from the past can be observed if the papers published each year are counted: https://www.scimagojr.com/countryrank.php

In the year 1996, all the countries in the world together created fewer than 1 million new papers. The United States holds the record with 350k, and the other countries each had under 100k. Nearly all the papers in 1996 were published in printed form by larger publishing houses. This statistic doesn't describe a culture of publishing something, but a culture of doing the opposite. Let us make a thought experiment. Suppose we look back into the year 1996 and try to read one of the published academic papers. How can we do so? One option would be to visit a university library. If we go to a library in Europe, in China or in South America, the chance is high that even the largest library in the country has only subscribed to local journals written in the local language, which is not English. That means in an Italian university of the year 1996 there was no internet access, and all the reader can expect are some journals in the archive written in Italian. Even if the user asks for fulltext access to advanced research, he won't be able to read such papers. It's also not possible to create new content.

According to the URL, the total academic output of Italy in the year 1996 across all disciplines was only 40k papers. That means, if the user is interested in reading the latest research on a topic, he will get from the librarian a list of 2 journals, and with a bit of luck these journals are available in the library. They will fit on a single desk and can be read in one afternoon. The problem is not to describe the workflow of reading and writing academic information in the year 1996; the problem is that at this time no such thing as academic content was available. That means there was an absence of excellence.

By the year 2018 the situation had improved a lot. Today there is the internet. If we repeat the thought experiment and travel to a library, the user gets access to online repositories from publishers worldwide. He is able to read information from all over the world, provided as paywalled and open content alike. In the year 2018 the worldwide paper production was around 3 million documents, which is much higher than in 1996, but it can't be called academic publishing, because very similar to the past, most information has not been written yet. Publishing a paper has a low priority. It is something done by publishing houses, and no standard workflow is available. What most professors are doing is not to create and upload new information; they are doing nothing. Most academics didn't write a paper in the last year, and if they did, it wasn't uploaded to the internet for various reasons.

The reason why it's hard to define academic publishing is that it hasn't been invented yet. What we saw in the year 1996 was publishing with printed journals and a small number of documents. And what is available in the year 2018 is a completely different publishing mode which can't be extrapolated into the future. The only thing that is certain is that ten years from now, academic publishing will work differently from today. Inventions like fulltext search, reputation management or peer review haven't really been invented yet. That means the subject can't be described by looking backward; it has to be invented for future needs. Some ideas of how future academic publishing will look are available today. A combination of a preprint server, a fulltext search engine, reputation management and grounding in physical locations like universities makes sense. The open problem is how to combine all these stakeholders and make the system as open as possible.

One option for approaching the topic from a conservative standpoint is to ignore future needs and claim that the publication system of the year 1996 was working great. Under this assumption, it makes sense to focus on a few printed journals which publish less than 1 million papers per year with well-working quality control. The future academic publishing system would then look the same as in the 1990s before the internet was invented, which means that no content at all is created and the world has no access to the information. The question is whether such an idea makes sense for the internet generation as well. The main advantage of the 1990s publishing model is that it helps to reduce complexity. Instead of trying to improve something and inventing lots of new workflows, everything remains stable.

The assumption is that the domain of science walks slowly forward and there is no need to increase the paper count. Important papers are written in printed format and are physically located in a library. The total number of researchers in a country is small, and if they want to explain something to the public, they can invite the local newspaper into the research lab to write about something discovered recently. The newspaper acts as a buffer between the scientists and the public, and what most scientists do is prove that everything is in the right place.

The unanswered question is: what is the role of science and technology in the world? Does a modern society need progress at all? Is there a need to increase the number of scientists, and does it make sense to publish 3 million papers a year?

Over the decades, the role of a library has changed. Its importance has grown, and today the world has a higher demand for academic information than in the past. 50 years ago, the term library referred to a bookshelf consisting of 40-100 books, mostly with fictional content, novels and poems. Today, a library is equivalent to a scientific library, and sometimes to an online library which provides fulltext access to the latest research papers. The definition of what a library is has become more quality-oriented. Today, a bookshelf with 40 fictional books can't be called a library; it's a joke. The reason is that the value of these books is low and doesn't fit basic needs. 50 years ago, such a bookshelf counted as a library used by educated people. The reason was that in the past it was rare to have access to books at all. And reading fictional books is better than reading no information at all.

Libraries

Knowledge production works as an asymmetric system: one side accumulates all the wisdom, while the other side doesn't have access. Before the internet age, libraries were the hub of academic knowledge. They provided books to a small number of people. Reading the books makes people educated.

There are two sorts of libraries: general libraries, which accumulate as much information as possible. They collect different languages, fictional and non-fictional books from all sorts of topics. Building such libraries in the 1990s was very expensive, because lots of storage space was needed. The more interesting sort of library is the specialized library, which is dedicated to a single topic and collects in-depth information about a subject. Special libraries are the perfect choice for academic purposes. They are able to overcome the limitations of the printed book format.

A typical example is a music library. Even if all the information is based on printed material, such a library in the 1990s would provide in-depth knowledge about the topic. Or let me explain the situation from the other point of view: general libraries were available in the 1990s, but they were ignored by academics. Even if a general library has a large number of resources, it fails to provide in-depth information about a subject. That means the needs of researchers don't fit what a general library has to offer.

Special libraries

The most surprising thing about special libraries is that even before the internet age they were perceived as powerful institutions. They work similarly to normal libraries with printed material: books and journals are stored on bookshelves, and if somebody wants to read them, he has to visit the library. What makes special libraries useful is that all the information is stored in a single building. The researcher has a concrete question about a specialized subject, and the library will help him.

That means it was possible to do advanced research and find out something new without using the internet. Even today, special libraries are very important for academics, because they have all the information in a single place and they provide fulltext information. A nice thought experiment is to digitize a special library. Such a project would not attempt to convert millions of books into digital information, but only a few from a limited domain.

December 26, 2019

What is a book unhaul?

Everybody who is familiar with YouTube will have noticed a certain individualized sort of clip in which people show the world the contents of their bookshelf. Usually the idea is to introduce newbies to the wonderful joy of reading books, collecting them and organizing hundreds of books into a self-made library. For a while now, the opposite trend has been emerging, and the YouTube community has chosen the term “book unhaul” for this kind of very interesting video. The idea is to clean up the bookshelf and remove all the unread, unwanted and boring books. The idea is not to grow the library, but to clean up the mess.

I could link to some of these funny videos, but after entering the search term, lots of interesting examples become visible in the result list. As far as I can see, most unhaul videos are about fictional books. But perhaps some advanced YouTubers have come to the conclusion that they also want to unhaul the Encyclopedia Britannica, the never-read book about fuzzy logic, or a friend's dissertation ...

How to create an academic paper

The number of published papers per year has increased. In the last year, the United States alone published around 600k new papers across all academic disciplines. But for newbies, the principles of writing a paper on their own are not very clear, so it makes sense to give some advice.

The basic idea is that a high-quality paper can only be created after many low-quality ones have been written before. So the question is how to create lots of low-quality papers to gain the experience which is needed to write the next Nobel-prize-winning paper. To begin with, a paper is a collection of blog posts. The author has to write some new blog posts, for example 20 of them about the same topic. The content is structured hierarchically and extended with literature references. For doing so, the Lyx document software is a great choice, because it is able to generate high-quality PDF documents easily.

Before the document is uploaded to a repository, proofreading is recommended. For the first paper, proofreading plus the creation of quality figures can be skipped, because this will save a lot of time. But once the author is more experienced, it will get a higher priority. Creating an academic paper and creating some blog posts is the same thing. The language style and the subject are very similar. The difference is that blog posts are written about lighter topics which are read by the mainstream, while an academic paper consists of specialized knowledge which needs a larger amount of text. The main reason why so many papers are written each year is that a paper is only a short text. In contrast to a book which contains 500 pages, the average paper has only 8 pages. A longer paper contains 20 pages. Even a single author is able to write such content.

The understanding of most newbie authors is that an academic paper should provide new information documenting the latest research. This is true for top authors who write excellent papers. A newbie without any experience will for sure not write groundbreaking information in his first paper. It makes sense to see the first writing attempts as a hello-world example, similar to creating the first Python program. The idea is to become familiar with the software tools and test whether the resulting PDF document can be downloaded with the browser. It makes sense to lower one's own expectations. Writing a bad paper is better than writing no paper at all.

Peer review

The topic of peer review was ignored in the explanation above. Peer review amounts to a conflict-laden pipeline between different social roles with an academic background. Peer review is very similar to the inner workings of a helpdesk: what exactly the stakeholders are discussing is a bit complicated to explain, but in most cases they are not able to agree on the same subject and they start to argue about it. Arguing means that the result is disappointing for all of them. This is equal to a lose-lose situation.

Bibliographic references

At first glance, an academic paper can be written with any word-processing tool. In the easiest case, a normal text editor which is able to wrap words and save a file to the hard drive is a reasonable choice. The reason why a more elaborate software tool like Lyx version 2.3.3 from Matthias Ettrich et al. makes sense is that an academic paper consists of bibliographic references, and maintaining them can become complicated. With the Lyx tool, it's pretty easy to include such a reference. First the BibTeX information is needed, which is put at the end of the bibtex file; then the label can be inserted into Lyx, and the formatting is done by the software without further intervention. The overall procedure takes around 30 seconds. In contrast to manual formatting of the information, this is a great improvement. A sketch of such a bibtex entry is shown below.
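As an illustration only, an entry in the bibtex file might look like the following sketch; the citation key, author name and journal are hypothetical placeholders, not a real reference.

@article{miller2019example,
  author  = {Miller, Anna},
  title   = {An example paper about neural networks},
  journal = {Hypothetical Journal of Examples},
  year    = {2019},
}

The key miller2019example can then be inserted as a citation label in Lyx, and the formatted bibliography entry is generated automatically when the PDF is produced.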

December 23, 2019

Asking the wrong questions at SE.AI

A new question about a machine learning problem in Python was posted today to the SE.AI forum:

https://ai.stackexchange.com/questions/17226/split-data-for-training-and-validation-keeping-grouped-data-together

At first glance, the question looks great. Similar to the guidelines on Stack Overflow, it's a minimal example, and a code snippet is also included. This allows the SE.AI community to reproduce the problem and provide a high-quality answer. The problem is that a closer look shows the question has low quality. Right now, no comment has been written, but the probability is high that the forum will criticize the question as well.

The problem is that AI-related questions are different from programming questions. In the case of Artificial Intelligence, the problem is not how to create a Python script, but how to create an academic paper. That means, instead of including source code in the question, the better idea is to include academic references to existing papers. Artificial Intelligence is grounded in science, and science communication works by citing each other.

Even if the wrong papers are cited, it makes a good first impression if somebody shows that he is already familiar with the BibTeX / EndNote tools. Instead of revealing all the tricks for formulating a good question, let us wait a bit and observe what the SE.AI community does in this case.

December 22, 2019

Transition from teleoperation towards Object Action Complexes

The most powerful robot can be realized with teleoperation. Teleoperation means that the robot is equipped with human-level skills and can adapt to any situation. A pick&place task with a teleoperated robot arm works perfectly. The most interesting feature of remote control is that no program is needed. The only piece of software transmits the joystick signals to the robot, but the robot movements themselves are not determined by a program.

Suppose the idea is to program a robot, which means that the steering signal is not generated in realtime by a human but comes from a macro, a script or any other robot program. The resulting question is: which kind of software is needed for controlling the robot? In the easiest case, a robot program is a list of points which form a trajectory. In the Python language, a typical robot program looks like the following example:


import time

def moveto(x, y):
    print("moveto", x, y)  # placeholder: sends an absolute target position to the robot arm

moveto(250, 200)
time.sleep(1)
moveto(60, 210)
time.sleep(1)
moveto(50, 325)
time.sleep(1)


The robot program looks different from normal Python source code; it has more in common with a list of absolute values which are executed by the robot. Will this program work? Oh yes, it works great: the movements are executed precisely. The more complicated question is whether the robot movements are useful for the environment. In a real-life application, an industrial robot is asked to do a task, for example to pick&place an object. The robot either fulfills the task or not.

The same robot program can result in a failed robot project or a successful robot project; it depends on the task. If the task is easy, the given robot trajectory will solve the problem. But if the environment changes too much, the trajectory of the robot doesn't make sense anymore and it won't be able to pick&place any objects.

The overall success rate of the robot project depends on two factors: the robot program and the task description. The combination of a simple repetitive task and a simple robot program is a great choice. The problem arises if the task description is complicated but the robot program is a simple one. The result is a failed robot project.

Let us take a look at real applications. A welding robot is a typical example of an easy task description plus an easy robot program. The task for the robot is to move the end effector precisely along a list of points. The trajectory is always the same, and no kind of planning is needed. Such a task can be realized with the mentioned robot program, which consists of two simple actions: moveto and time.sleep. The problems arise if the task description is more complicated, for example if the robot should pick&place objects, but the objects can have different locations. In such a case, the simple fixed trajectory of the robot won't be successful anymore. There are two options available to overcome the issue: first, reduce the task description to something easier, or second, increase the complexity of the robot program.

A slightly more advanced form of creating a robot program is working with Object Action Complexes. This technique is derived from the STRIPS notation. The idea is to provide not only a list of points, but a list of actions which can have preconditions and postconditions. Such motion primitives can be reordered, so that the robot isn't executing a fixed trajectory but is able to create different plans. The good news is that the STRIPS notation can be used to generate a fixed trajectory as well. A small sketch of the idea is given below.
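As a minimal sketch of the idea, and only as an illustration: each action below is a name plus a precondition and a postcondition on a single symbolic state, and a trivial forward-chaining planner picks whichever action currently applies. The action names, states and the plan function are invented for this example and do not refer to an existing OAC or STRIPS library.

# illustrative actions with pre- and postconditions (symbolic states)
actions = [
    {"name": "open_gripper", "pre": "gripper_closed", "post": "gripper_open"},
    {"name": "move_to_box",  "pre": "gripper_open",   "post": "at_box"},
    {"name": "grasp_object", "pre": "at_box",         "post": "holding_object"},
]

def plan(start, goal, actions):
    # greedy forward chaining: apply the first action whose precondition
    # matches the current state, until the goal state is reached
    state, sequence = start, []
    while state != goal:
        for a in actions:
            if a["pre"] == state:
                sequence.append(a["name"])
                state = a["post"]
                break
        else:
            return None  # no applicable action, planning failed
    return sequence

print(plan("gripper_closed", "holding_object", actions))
# -> ['open_gripper', 'move_to_box', 'grasp_object']

Because the same action list can also be executed front to back, the fixed trajectory from the earlier example is just the special case in which exactly one action is applicable at each step.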

From teleoperation to fully autonomous robot

Controlling a robot with a remote control is not very complicated. The human operator has to press a button and the robot will move forward. The more advanced question is how to remove the human operator from the loop, so that the robot runs autonomously.

At first glance, the task can be solved by creating a script, very similar to automating a task on the computer with a Visual Basic script. A potential program consists of building blocks like if-then, for loops and action statements. This will allow the human operator to take his hands off the remote control, and the robot will work on its own. Really? No, it was a rhetorical question, because one important thing was ignored in that tutorial. The interesting fact about scripts for robots is that they don't work in reality, but only in a synthetic challenge.

Let us describe the pattern used by self-proclaimed robotics experts who like to prove that autonomous robotics is available. First they create a macro for the robot. The script does one concrete task: for example, the robot searches for a line on the ground, then follows the line, and if an obstacle is in the way it activates a submodule to move around the box. In the next step, a game is constructed which consists of a line on the ground, a robot and an obstacle. Then the start button is pressed and the robot works autonomously. A sketch of such a macro is given below.
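As an illustration only, such a macro could look like the following Python sketch. The sensor and motor functions are purely invented stubs standing in for whatever the demo robot platform provides; the point is merely how small the script is.

# purely illustrative stubs for the sensors and motors of a demo robot
def line_visible():    return True
def obstacle_ahead():  return False
def drive(**kwargs):   print("drive", kwargs)
def avoid_obstacle():  print("avoid obstacle")

def follow_line_step():
    # the whole "autonomy" of the demo fits into one if-else cascade
    if obstacle_ahead():
        avoid_obstacle()        # submodule: drive around the box
    elif line_visible():
        drive(forward=True)     # stay on the line
    else:
        drive(turn=True)        # search for the line again

for _ in range(3):              # in the demo this would be: while True
    follow_line_step()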

The problem with this demonstration is that the task was created in response to the script. That means in the first step the macro was written, and in the second step the problem for the macro was imagined. Unfortunately, the macro can't solve real tasks. This kind of bottleneck is often ignored. The assumption is that the script can be extended to more demanding applications. It's funny to realize that not a single autonomous robot is available which solves a practical application. So-called autonomous robots are only available for trivial examples.

A possible explanation for this mismatch has to do with sorting tasks by their complexity. The hypothesis is that there are two sorts of problems: easy-to-automate problems and hard-to-automate problems. The problem “follow a line” is an easy-to-automate problem. Replacing a human worker with a robot is a hard-to-automate task. What robotics engineers are able to automate are only trivial tasks. These tasks are constructed so that a robot is able to fulfill them autonomously. What robotics engineers aren't able to automate are real tasks which matter in real life.

The funny thing is that at first glance both task categories look the same. Suppose in a factory there is a line on the ground and a transport vehicle has to move on that line from start to finish. It's exactly the same task which was automated by the script, so is the hypothesis wrong, and is it possible to automate real-life tasks after all? No, it's not. A simple look at reality shows that not a single automated robot is in use for a line-following task. If a company uses a robot for this task, a human operator controls the robot the whole time.

Or let me explain the situation from a different perspective. Suppose in a factory there is a transport vehicle which moves along a line. The robot is remotely controlled by a human operator. The prediction is that it's not possible to replace the human operator with a software program, because what the human is doing is a little different from executing a simple line-following algorithm.

To understand the paradox better, we have to take a look at a task which is already remote controlled. The best example is a crane. A human operator sits behind a joystick and has to press the buttons. The operator doesn't invest physical energy into the system; only his ability to control the crane is requested. Such a crane is available on most construction sites in reality. Now it makes sense to think about increasing productivity. The idea is that the human operator costs too much and can be replaced with software. Technically, a simple USB cable can be plugged in where the crane operator's joystick was, and then a computer is in charge of the whole operation. The only thing missing is a piece of software. And at this point the problem starts: there is no such thing as off-the-shelf crane control software. What the engineers have to do is create their own sort of software. A first step would be to create program submodules for the basic features of the crane, like opening the gripper, unloading the box and so on. Then an overall high-level planner has to decide which operation comes next. Such an architecture could be sketched as follows.
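The following Python sketch shows how such an attempt might be structured, purely as an illustration: a few low-level submodules plus a high-level planner that picks the next operation. All function names and the task list are invented for this example and are not part of any real crane software.

# hypothetical low-level submodules of the crane
def open_gripper():   print("opening gripper")
def close_gripper():  print("closing gripper")
def move_to(pos):     print("moving to", pos)
def unload_box():     print("unloading box")

# hypothetical high-level planner: decides which operations come next
def planner(task):
    if task == "pick":
        return [lambda: move_to("pallet"), close_gripper]
    if task == "place":
        return [lambda: move_to("truck"), open_gripper, unload_box]
    return []

for task in ["pick", "place"]:
    for operation in planner(task):
        operation()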

The problem with such an automation attempt is that it will fail in reality. The computer-controlled crane will behave differently from a human-controlled crane. That means the system can't be used in practice. What the human operator will do is deactivate the program and control the crane with the normal joystick. And he is right, because it's the only option available.

The problem is not located in the crane itself, because in the lab the software auto mode will work great. The crane is able to execute a longer program for pick&place of objects. What is wrong is the reality, which poses a different kind of problem. That means the task which is solved by the software and the task on a real construction site are different. A human operator is needed, because the real crane has to deal with unstructured situations. Each day the problems on the construction site are a bit different.

A robot program

The idea of a teleoperated robot is that all actions are initiated by the human. Apart from the human-machine interface, no additional software exists. The opposite of teleoperation is a system which has to be programmed. A robot program is a script which runs without human intervention. For most real-life applications no such script is available. The answer to the problem is to modify the application in the direction of an easy-to-automate task.

The reason why a program-controlled robot is preferred over a teleoperation system is that it has a higher productivity. Instead of training a human operator, the idea is that the robot can execute the task on its own. Apart from the written script, no additional input data is needed. The only problem is how to write a robust script which is able to solve important tasks.

Industrial robots are usually programmed with a trajectory table. That is a list of points in space which are reached by the robot arm in a sequence. A typical robot program looks like:


p1, p2, p3, p4, p5, stop


The disadvantage of such a script is that it's not very robust. But in some cases it works. The question is: why exactly is a list of points powerful enough to control an industrial robot? The answer is that the task was modified in such a way that the robot does something useful if it repeats the same sequence over and over. In general there are two strategies available for robot programming: the first one is to make the original task easier for the robot, and the second is to improve the software so that it can handle more complicated tasks. Let us try to improve the robustness of a robot program. A more elaborate example is formulated in the STRIPS notation. The robot executes steps, and each step has a precondition and a postcondition. For example:


step1, pre=gripper(100,100), box(50,50), post=gripper(200,100), box(50,50), action

step2, pre=gripper(200,100), box(50,50), post=gripper(250,100), box(150,50), action

step3, pre=gripper(250,100), box(150,50), post=gripper(300,100), box(150,50), action


Similar to the first example, the script executes a predefined robot trajectory. In this case, some constraint checks are made to investigate whether the robot and the box are within the expected range. If not, the script stops with an error message. The advantage is that smaller problems during the execution are recognized autonomously. A minimal executor for such a script is sketched below.
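To make the execution model concrete, here is a small Python sketch, again with invented helper names: the executor compares the observed world state with the precondition of each step and aborts with an error message if they don't match.

# hypothetical script in the style of the example above
script = [
    {"pre": {"gripper": (100, 100), "box": (50, 50)}, "post": {"gripper": (200, 100), "box": (50, 50)}},
    {"pre": {"gripper": (200, 100), "box": (50, 50)}, "post": {"gripper": (250, 100), "box": (150, 50)}},
]

def observe_world():
    # placeholder for the real sensor readings
    return {"gripper": (100, 100), "box": (50, 50)}

def execute(script):
    state = observe_world()
    for i, step in enumerate(script, start=1):
        if state != step["pre"]:
            raise RuntimeError(f"step {i}: precondition violated, got {state}")
        # here the real robot action would be triggered
        state = step["post"]
    print("script finished, final state:", state)

execute(script)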

Both examples have the disadvantage that the script is static. That means the script is executed from top to bottom and no planning takes place.

Instead of explaining how to create more advanced robotics scripts, the better idea is to focus on the simplest robot program, which was given in the first example. In the simple case, the robot program consists of a point list which is traversed by the robot. It's not like a classical computer program; it looks more like a pattern. The program is equal to a trajectory which gets executed with the start button. The only open problem is to find a task for such a program. That means the robot by itself works great, but what is missing is a situation in which the program makes sense. A very simple pick&place robot can be programmed with that principle in mind. The robot isn't able to detect objects, nor is the system able to avoid obstacles. Instead, the environment has to be static, so that the same trajectory results in success. A possible use case is if all the products on the assembly line are at the same position and there is only one size of product. The robot picks up the object and transports it to the box, where the object gets released.

December 21, 2019

Limits of remote controlled robots

The good news is that a remote-controlled robot can provide amazing skills. The robot can drive a car, pick&place objects, work in real-life applications and so on. It's even possible to combine a biped robot with remote control. The resulting machine looks similar to what is known from the movie I, Robot, which means it's a humanoid biped robot which can walk on the street. Providing such functionality is technically not very complicated, because the joystick controls the servo motors, and that is the whole secret.

The sad news is that all of these remote-controlled robots provide the same productivity as a normal human. That means if 10 humanoid robots are supposed to walk on the street, 10 human operators are needed. The same is true if the robot should do a pick&place task. What is not possible is that a single operator controls a whole robot fleet, which would amount to better productivity.

At first glance, the problem doesn't seem very hard to solve. If it's technically possible to build a teleoperated robot, it should also be possible to make the man-machine interface more efficient. Unfortunately, this is not possible. The only option available is to reduce the complexity of the task. That means if the robot should only follow a line and is not asked to do something useful, then the remote control system can provide a higher productivity.

The problem is that reduced-complexity tasks are different from what a robot should do. In most cases, the idea is that the robot does something useful, for example delivers a box to its destination. This kind of task has a certain complexity, which is fixed. If the complexity is reduced, the task becomes something else.

I know the explanation is a bit complicated. Perhaps it makes sense to go a step back. Increasing the productivity of a robot has to do with programming a macro or an algorithm. The algorithm calculates the next movement and the human operator can relax. So the question is which kind of algorithm is needed to control a certain robot. And exactly this is the bottleneck. An algorithm, which means an autonomous robot, can only be created if the task is very easy to handle. This is the case for computer games. If the rules are known in advance, it's possible to create some kind of solver which transforms the remote-controlled robot into an autonomous one.

Unfortunately, real robotics applications do not provide fixed rules. It's not possible to formalize the actions of a human worker in an algorithm. Surprisingly, this is also the case for simple tasks like a pick&place operation. Even if the robot arm has nothing to do but pick and place an object, the task can only be handled with remote control, not with an algorithm.

Perhaps it makes sense to investigate the topic from the opposite perspective. Suppose it's possible to program an algorithm for a pick&place task. A working algorithm can be executed autonomously without a human in the loop. In theory this equals maximum productivity. Are such robots available? No, they are not, because that would mean an autonomous robot is able to fulfill a task which actually matters.

Let us summarize the situation a bit: teleoperated robots work great for practical applications. The disadvantage is that the productivity is low. It's not possible to increase the productivity, because this would require human-level AI, which is not available. The open question is whether such a telerobot makes sense for today's companies. In theory, it's possible to build some sort of cloud service in which human robot controllers provide the service of controlling all sorts of robots. These robots are able to replace normal human workers. At the same time, the workers in the cloud will produce labor costs as well. The advantage is that the labor is located in a single place which can be requested by lots of robots.

The explanation of why the productivity of a remote-controlled robot is limited isn't found in the domain of Artificial Intelligence itself; it has to do with how the normal economy is organized. The normal workplace for a human is organized to maximize productivity. That means the truck driver who transports a load isn't able to do a second task while he is driving, and the worker at the assembly line is also fully occupied with what he is doing right now. That means the average workplace generates a certain amount of stress for the human worker.

If the worker is replaced by a robot, the robot has to provide the same amount of work, which means it must withstand the workload as well. The sensor signals are transmitted to a remote location, and the human operator behind the joystick will get the same amount of stress as before. If the normal human worker is not able to reduce the workload, how should the remote operator be able to do so? And exactly for this reason it's not possible to increase the productivity. No matter whether the crane operator sits physically in the crane or is located 100 miles away, the workload for the human is the same.

The only way of reducing the stress would be to replace the human operator with an Artificial Intelligence, that is, software which doesn't need a human operator anymore. This kind of robot is the opposite of a remote-controlled robot: it's an autonomous device. The problem with autonomous robots is that they fail in reality. They don't work for practical applications. Instead of asking how to improve the robot, the more elaborate question is why a certain workplace produces a certain workload.

The answer lies in the industrial revolution. The workplace of a crane operator is the result of the invention of the crane. What a crane operator does physically is press some buttons. At the same time, this job is very hard; that means if no crane operator is available, the construction site gets into trouble. The same is true for other jobs, for example in the service industry. An existing job is a sign that the economy has a certain workload which is important. This workload is different from playing a game.

In contrast, the tasks which are available in robotics challenges, for example the line-following task, provide a small or even zero workload. That means the robot which drives along the line in a circle isn't providing real work which is needed by the economy; it's doing it for its own pleasure. Solving a zero-workload task with an algorithm is easy going, but solving a high-workload task with an algorithm is not possible.

That means if somebody wants to replace a real worker with a robot, he will need a teleoperated robot. And if somebody has built an autonomous robot which doesn't need a human in the loop, the task has only zero workload, which means it's a synthetic challenge which is not needed in reality.



In the graphic, the desired goal is located in the bottom right, which is the combination of an autonomous robot plus a high workload that exists in reality. What most robotics engineers are trying to realize is to build a software-controlled robot which can do real tasks. The reason why this combination is colored in red is that it's not possible to do so. The reason is that if a certain task is highly complex, it's not possible to create an algorithm for it. And if a robot is controlled only by software, it will only be able to solve low-workload tasks. Let us make a small thought experiment. Suppose there is a human worker who does nothing else but walk back and forth on the street. He moves 100 meters from left to right, and then the same 100 meters in the opposite direction. In the thought experiment, the human worker gets 20 US$ for each hour he does so. Automating such a task with a robot and replacing the human worker by an algorithm would be pretty easy. A simple Python script in under 100 lines of code would do the job very well. The problem is that such a task does not exist in reality. It's a synthetic challenge like those given in robotics competitions, but it is nothing which is requested by the real economy. Real jobs in which the human worker earns 20 US$ per hour are much more complicated. They can't be automated with a simple Python script in under 100 lines of code. A sketch of how short such a script would be is given below.
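Just to illustrate how trivial the synthetic task is, here is a sketch of such a script, far below 100 lines. The functions walk_forward and turn_around are hypothetical stand-ins for the robot's motion commands.

import time

# hypothetical motion commands of the walking robot
def walk_forward(meters):  print(f"walking {meters} m")
def turn_around():         print("turning around")

# pace 100 m back and forth, like the imagined 20 US$/hour worker
for _ in range(3):          # in the thought experiment: while True
    walk_forward(100)
    turn_around()
    time.sleep(1)           # small pause before the next leg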

Perhaps the most interesting feature of teleoperated robots is their ability to solve high-workload tasks from reality. A well-designed humanoid robot is able to replace a human worker. The only disadvantage teleoperation has is that it can't do much more. If a company wants to replace all of its 1000 employees with robots, it will need exactly 1000 humanoid robots plus 1000 human operators in the cloud. It is not able to handle the same workload with only 500 human operators, because it's the same job with the same workload. It's up to the company to decide whether cloud-based teleoperation makes sense or not.

Workload reduction in teleoperation is a myth

Robotics is about industrial automation. The hope is to increase productivity with modern technology. A first attempt at building a robot involves teleoperation. A teleoperated robot has the same workload; there is no advantage for the human operator. What the engineers are trying to achieve is to reduce the workload. They want to design a human-robot interface in which a single operator is able to control a swarm of robots. The interesting fact is that such an interface can't be realized in reality. The reason why is a bit complicated, but for the moment it makes sense to locate the increase of productivity outside of the control problem.

Let me give an example of a transportation problem. A load can be transported either by truck or by railroad. Logistics by railroad is more efficient: a single operator is able to transport lots of containers at once. In contrast, a fleet of trucks is needed to do the same task. The example is interesting because no Artificial Intelligence at all is needed to increase the productivity. It seems that the gain is connected to the mechanical vehicle, not to the control problem.

Somebody may argue that from a technical perspective it's possible to invent a swarm-based teleoperation device. Similar to what is known from real time strategy games, the human operator selects 10 vehicles at once and commands them to move to a new location, and so he has reduced his own workload. The problem with this example is that such swarm control is a synthetic example. That means, in a newly created game the swarm can be controlled in such a way. In reality, no such control problem is available.
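For illustration, a minimal sketch of such an RTS-style group command. The Vehicle class and its move_to() method are made-up stand-ins, not a real fleet API; the point is only that one click fans out into ten identical commands.

    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        name: str
        x: float = 0.0
        y: float = 0.0

        def move_to(self, x, y):
            # A real system would dispatch a motion command over the network;
            # here the simulated vehicle simply jumps to the target.
            self.x, self.y = x, y

    fleet = [Vehicle(f"truck-{i}") for i in range(10)]
    selected = fleet                    # the operator drags a selection box around all ten
    for vehicle in selected:
        vehicle.move_to(50.0, 20.0)     # one click, ten identical commands

    print([(v.name, v.x, v.y) for v in selected[:3]])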

This raises the question: which kinds of domains are available in reality? This question goes in the right direction. Jobs which are done by human workers are organized by a certain principle. In most cases, the task was optimized already, which means that no potential for further improvement is available. The best example is an airplane pilot. What he is doing is acting inside an existing system. The combination of the airplane together with the pilot produces a certain productivity. That means, the overall system has costs and provides a service. The amount of costs is not determined by the pilot but by the system in general.

If the human pilot is replaced by a teleoperated robot, the result is the same productivity. Perhaps this is the most dominant reason why teleoperation is not discussed very often in the literature. In contrast to real Artificial Intelligence it doesn't provide extra productivity.

Let us analyze the untold assumption of robotics engineers. They are programming software which can control an airplane on its own. The idea is to install such software in all the airplanes in the world, and then the human pilots are replaced by the software. Without the software, around 1000 human pilots are needed, and with working software 0 humans are needed because the system can fly on its own. This would be equal to a great productivity increase. The only problem with this outlook is that the engineers have struggled to achieve it. They are not able to write such software. In the laboratory it works great, but in a real airplane the software is not able to replace a human pilot.

The interesting point is, that this is not a technical problem but has to do with a bias of the engineers. They are focused on technical problems, for example how to calculate the trajectory or how to set up a neural network. What the engineers are ignoring is the history of failed automation projects. They are ignoring automatic airplane software from the past, and they believe they can reinvent everything from scratch.

What modern computers and robots can do is distribute work between humans more efficiently. A teleoperated airplane can be controlled from the ground. What modern technology can't provide is a reduction of the total workload. That means, if the pilot in the airplane should be removed, a human operator on the ground is needed to do the same job.

The most efficient workflow

Suppose the idea is to combine telerobotics with high productivity. The first step is to identify highly efficient systems in the economy. The best example is an electric train plus cranes that can load and unload the containers. Such a system is highly efficient because it minimizes the demand for human work.

Now a robotic system can be used to remote control the setup. The teleoperation won't increase the productivity further, but it will allow the human workers to do the job from any location. The train gets remote controlled and the cranes for unloading too. The resulting system will need some humans in the loop, but not very many. And the most important feature is that it can be realized. It's not a fictional scenario about what robots can do in 30 years, but something that can be realized with today's technology.

What is needed is cloud based teleoperation to control an electric train which is fulfilling a logistics task. The human operator behind the screen is replacing the physical operator in the train.

December 20, 2019

Non-teleoperated robots

Classical robotics competitions like Micromouse are working with non-teleoperated robots. That means, the robot is making its decisions only by Artificial Intelligence. The reason is that the AI engineers are trying to increase the productivity level. The assumption is that a human operator doesn't make sense, because it would produce a high workload for that operator.

Non-teleoperation has a major disadvantage. If all the decisions of the robot are controlled by software, the software needs to become very elaborate to solve a task. The problem is that nobody knows how to program an advanced AI, so what happens in reality is that the difficulty of the robot competition gets reduced until current AI software is able to solve it. That means, the micromouse in the challenge doesn't need to do complex tasks; it's enough if the mouse finds a way through the maze. This reduced complexity allows the participants of the competition to program AI software which can do the task autonomously.

The problem is that in reality the tasks have a higher complexity. A problem in which a robot has to travel through a maze isn't available in a real factory. Therefore the programmed software is useless. It will only be successful in a synthetic robotics competition but not in practical applications. The gap between a robotics challenge and replacing real humans at the workplace is too large.

The reason why the gap is there has to do with the autonomous paradigm. All robotics competitions have in common that the robot is controlled by software, not by a teleoperator. That means, the decisions of the robot have to be calculated by the AI software. It's not possible to cheat in the challenge, because it is double-checked whether the onboard software is really able to control the robot. The requirement that only software-controlled robots are allowed to participate in the challenge asks for a certain sort of robot. In case of the micromouse challenge, the typical software consists of a hierarchical path planner in which a high level trajectory is calculated together with low level motion commands. Such a path planner is optimized for the micromouse challenge. It's the opposite of a human level AI.
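The following is a minimal sketch of that hierarchy, assuming a hard-coded toy maze: breadth-first search produces the high level cell path, which is then translated into low level motion commands. It is meant to illustrate how narrow such a planner is, not as competition-grade code.

    from collections import deque

    MAZE = ["#####",
            "#S..#",
            "#.#.#",
            "#..G#",
            "#####"]

    def bfs(maze, start, goal):
        # High level: shortest cell-to-cell path through the maze grid.
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                break
            r, c = cell
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if maze[nr][nc] != "#" and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        path, cell = [], goal
        while cell is not None:
            path.append(cell)
            cell = came_from[cell]
        return path[::-1]

    def to_motion_commands(path):
        # Low level: turn each grid step into a "drive one cell" command.
        moves = {(1, 0): "SOUTH", (-1, 0): "NORTH", (0, 1): "EAST", (0, -1): "WEST"}
        return [moves[(b[0]-a[0], b[1]-a[1])] for a, b in zip(path, path[1:])]

    start = next((r, c) for r, row in enumerate(MAZE) for c, ch in enumerate(row) if ch == "S")
    goal  = next((r, c) for r, row in enumerate(MAZE) for c, ch in enumerate(row) if ch == "G")
    print(to_motion_commands(bfs(MAZE, start, goal)))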

Somebody may argue that it's not possible to program a human level AI which can solve more than a single game. And he is right, because such software is out of the scope of current AI. Nobody knows how to program software which can play all the games and is able to communicate in natural language. The consequence is that no alternative to a normal robotics competition is available.

Robotics competitions

The rules of a challenge result in certain styles of solving the problem. A typical example is a line following challenge. The task is to program a robot which can follow the line. The robot which does so in the shortest amount of time has won. The problem with a line following challenge is that it's not a robot task in itself, but a programming problem. It can be solved by implementing an algorithm in the Python language. At first, the sensor information is read, then the robot has to calculate the next movement. A typical program for solving this task works with the PID control method. That means, the sensor reading of the line is taken as input for the steering controller.
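A minimal sketch of such a controller follows. The one-line "plant" stands in for the real sensor and motors; the gains and the simulated dynamics are assumptions, only the PID structure is the point.

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=1.2, ki=0.05, kd=0.3)
    offset = 0.4                      # robot starts 0.4 units right of the line
    for step in range(20):
        steering = controller.update(error=-offset, dt=0.1)
        offset += steering * 0.1      # toy plant: steering shifts the robot back toward the line
        print(f"step {step:2d}  offset from line: {offset:+.3f}")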

The problem is that even if a robot has won the challenge, the robot project has failed. The created software can't be used outside the competition. It's a synthetic competition which provides a sandbox in which the programmers should prove that they have understood what a PID control algorithm is. The more elaborate form of a robotics challenge would be an open problem which is equal to practical applications of robots in reality.

It's important to know that synthetic robotics challenges like a line following task and the requirements of reality have nothing to do with each other. A line following problem is a game which works by its own rules. These rules are created with the purpose of simplifying the problem. In contrast, using a robot in reality, for example in a factory or as a self-driving car, works by different rules.

The requirement in reality can be summarized as “human level AI”. If a robot should replace a human worker, the robot needs the same skills as a human. That means, the minimum requirement is that the AI of the robot will pass the Turing test. For sure, this requirement is a bit too high; no robot today will fulfill it. That means, a robot is not able to do useful tasks in reality, because it fails the Turing test.

The problem is not described very well in the literature, so it makes sense to focus on the details. A naive assumption is that practical applications of robots can be realized without providing human level AI. The hope is that a normal AI algorithm which includes sensor perception and a bit of path planning is enough to solve tasks from reality. The assumption is that an industrial robot has to solve only simple tasks and there is no need to equip the robot with a massive human level AI which will pass the Turing test.

This kind of assumption was never tested in reality. It's the hope of the engineers, because they can only program narrow AI systems, and the hope is that these narrow AI systems can solve tasks from reality. The alternative description is to assume that these narrow AI systems will fail at the task. That means, the robot which was programmed with some algorithm is not able to replace human workers and can't be used in reality. There are many facts available which show that this pessimistic thesis is correct.

All the robotics projects from the past have two things in common. First, they are working with narrow AI, not with human level AI, and second, they failed. That means, the robot was not able to replace human workers. And between these two facts there is a strong connection. That means, the robot projects failed because the robot was not able to pass the Turing test.

Artificial Intelligence has a lot to do with solving games. Creating an AI controller is equal to solving a game. The open question is which kinds of games are available. An easy to solve game is TicTacToe. Such a game can be solved with a narrow AI; a basic strategy would be a game tree search. The problem is that games played in reality are much harder to solve. From a certain standpoint, human workers are solving games too. If a cook prepares a meal, he has to solve a game. This game consists of manipulation tasks with the hand and of deciding which ingredients are needed. All the games in reality are too complex for current AI. That means, it's not possible to create an AI which can play games from reality. The problem is that games in reality are fixed; it's not possible to simplify them. So the answer is to increase the ability of the robot to a human level.
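To illustrate how small such a narrow AI is, here is a minimal sketch of a full game tree search for TicTacToe. It solves the toy game completely and is useless for anything else.

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        # X maximizes, O minimizes; the board is a 9-character string.
        win = winner(board)
        if win == "X":
            return 1, None
        if win == "O":
            return -1, None
        if " " not in board:
            return 0, None
        best_score, best_move = (-2, None) if player == "X" else (2, None)
        for i, cell in enumerate(board):
            if cell == " ":
                score, _ = minimax(board[:i] + player + board[i+1:],
                                   "O" if player == "X" else "X")
                if (player == "X" and score > best_score) or \
                   (player == "O" and score < best_score):
                    best_score, best_move = score, i
        return best_score, best_move

    score, move = minimax(" " * 9, "X")
    print(f"best opening move for X: cell {move}, game value {score}")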

Comparison between Teleoperation and normal AI projects

- AI controlled robot: evaluated in robotics challenges like Micromouse, narrow AI, sub-human level AI, a program is controlling the robot

- Teleoperated robot: no robotics challenge available yet, human level skills because the human operator understands the situation by default, not controlled by software but with a human in the loop

A software controlled robot has the maximum productivity. The machine works similar to an electric motor: after activating the program, no human intervention is needed. This reduces the labor costs to zero. The disadvantage is that the robot's skills are restricted. Most software controlled robots are only able to do simple line following tasks and play games like chess.

In contrast, a teleoperated robot has the ability to solve all sorts of problems, especially tasks from reality which ask for human level skills. A human controlled robot arm is able to pick&place objects by default, without additional software. The disadvantage of teleoperation is that a human operator is needed all the time. The productivity is not higher than without any robot. The reason why teleoperation is used in reality is that the human operator can be connected over the internet. A typical example is telemedicine, in which the expert doctor is located far away from the operating room.

A contest for teleoperated robots would look the same as an RC car challenge. That means, each operator stands with a remote control in front of the robot and has to do some tasks with it.

Can teleoperation increase the productivity?

A major concern in the economics literature is addressed by the term “productivity paradox”. It means that productivity hasn't increased with the advent of robotics at the workplace, and in the worst case it even becomes lower with the introduction of robotics. From an economic perspective, productivity is a very important measurement; it has to do with how much the company has to spend to produce a product.

A possible technology which will result in powerful robots is teleoperation. Teleoperation is the opposite of classical Artificial Intelligence, because the idea is that a human operator is needed. The only new thing about teleoperation is that the human operator can be located anywhere. From a productivity standpoint it's possible to guess what will happen with the productivity: it will remain the same.

That means, if 10 human workers are replaced by 10 robots and for each robot one human operator is needed in the loop, the overall costs are the same, or a bit higher, because the robot hardware produces additional costs. So it's a zero sum game, isn't it? It's true that the productivity itself remains unchanged. A human controlled robot will have the same or a slower speed than a normal human. And the promise of robots to replace humans wasn't fulfilled.

So why exactly should a company give the technology a chance? I don't know. Perhaps the idea is that teleoperation has non-measurable effects, or another reason is that the companies want robotics at the workplace but don't want to wait until human level AI is available, so they take a step in between and use human operators in the loop. In the case of autonomous cars, the advantages of teleoperation are easier to grasp. Most of today's cars are operating less than 5% of the hours per day. The average car is parked most of the time, and no sharing takes place. With teleoperated cars, the situation can change drastically. This would allow, in theory, a single car to drive 24/7, and fewer cars are needed overall. The pairing between cars, human operators and customers can be managed more flexibly than with existing cars.
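A back-of-the-envelope sketch of that argument follows. Only the 5% utilization figure comes from the paragraph above; the demand numbers are made up for illustration.

    trips_per_day = 1000                  # hypothetical demand in one city district
    minutes_per_trip = 30
    private_utilization = 0.05            # roughly 1.2 hours of driving per 24 hours

    demand_hours = trips_per_day * minutes_per_trip / 60
    private_cars_needed = demand_hours / (24 * private_utilization)
    shared_cars_needed = demand_hours / 24          # idealized round-the-clock operation

    print(f"daily demand: {demand_hours:.0f} vehicle hours")
    print(f"private cars needed: {private_cars_needed:.0f}")
    print(f"teleoperated shared cars needed (ideal): {shared_cars_needed:.0f}")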

Household robot

A different application for a teleoperated robot is a household robot for the elderly. The idea is that the robot is controlled by family members. That means, the task isn't solved by an AI; a human is needed.

The underlying assumption is the same as for self-driving cars. The idea is that only human level AI can replace human workers. It's not possible to remove a human from the loop or increase the productivity. The only thing technology can provide is to increase the distance between the request for work and the human operator who provides the work. That means, a robot for the elderly is different from a normal machine. It has more in common with an advanced telephone, which also needs somebody on the other side.

December 19, 2019

Teleoperation is here to stay

Sometimes, teleoperation is introduced as a bridge technology towards creating Artificial Intelligence. The idea is that teleoperation is much easier to realize than a fully autonomous system. Most engineers who have implemented teleoperation are trying to increase the automation level further, because they want to remove the human operator from the loop. But is this vision realistic? After removing the human operator from the loop, the robot will lose its human level AI capabilities. It's no longer possible to talk to the robot in natural language. To fill the gap left by the human, an advanced sort of Artificial Intelligence has to be realized.

It's a well known fact that human level AI is not available, and it won't be available in the next 50 years. After removing the human operator, the robot will have much smaller cognitive capabilities. The naive assumption is that a state-of-the-art AI can control the robot on its own, and that the task doesn't require the full human level skills. But this kind of assumption is wrong.

The reason why software controlled robots are not used in reality is that all the tasks ask for human level skills. It's not possible to control airplanes, cars, ships, drones or grasping robots with sub-human-level AI. Let us ask the engineers in which timespan human level AI in software becomes realistic. The answer is that they don't know. What was demonstrated in robotics challenges is only narrow AI, that is software which drives a robot along a line, or which can do simple image recognition tasks. The engineers have no idea how to program a human level AI. That means, they are not able to provide such software. And for this reason, it's not possible to remove the human operator from the loop. Teleoperation is the only working kind of robot which is available.

From a fantasy point of view, it's possible that one day human level AI will be available. That is a piece of software which can do everything a human has to offer. It includes understanding natural language, detecting all sorts of objects, learning new things and providing empathy. In some science fiction movies, human level AI systems were shown, for example Data from Star Trek TNG. But right now no such thing is available in reality. Therefore such an AI can't be realized.

What makes teleoperation so amazing is that without complicated software, human level AI is provided by default. All the human operator needs is a joystick and a monitor, and the robot is able to do everything the human can do. It's not a real AI, because there is a human in the loop. But the robot is on the same level as a human. It can be used for practical applications.

Practical example

I'm not the first one arguing in favor of teleoperated robots. The company “Phantom Auto” has developed a teleoperated self-driving car. The basic idea is that a car needs at minimum human level AI, which means that without a human driver a car is not allowed to drive in real traffic.

The interesting fact is that self-driving car engineers in the past have argued a different way. The assumption was that the robot car can be controlled by software, and they have demonstrated it in synthetic driving challenges, either with RC cars controlled by a Python script or with real cars. The problem is that a software controlled car provides only sub-human level skills. It is able to do some tasks like path planning and automatic steering, but it won't understand a simple sentence like “hello robot, what's up?”.

The main reason why robots are not available is that human level AI is the minimum for practical applications. And what the company Phantom Auto is doing is providing exactly this feature. A teleoperated car works on a human level. It will understand the mentioned sentence and will respond to it. This makes it a good choice for practical applications.

The interesting point is that a teleoperated car provides a lot of technology except Artificial Intelligence. The onboard computer provides only a connection to a remote location; the computer is not able to control the car on its own.

Robotics and the invention of neoluddism

At first glance, humanoid robotics is at the forefront of technological progress. Building a biped robot which can do a task is equal to introducing futuristic technology to the world. The interesting point is that especially advanced robotics provides an anti-technology standpoint. The reason is that a robotics project consists of two elements: first the robot itself, and second the explanation about the robot. The second part of the system can be called neoluddism, because it doesn't bring the world forward but spreads misinformation.

Let us go into the details of how humanoid robotics is explained in the literature and in videos. In most cases, it's described as a successful project. The humanoid robot walks through the house and does useful tasks, for example cleaning the kitchen. The audience gets the impression that the robot is a product which will increase the productivity in reality. What makes the story problematic is that no alternative is presented. The audience has no opportunity to validate whether the robot is useful in reality or not.

To make the bottleneck clear it's important to tell a different kind of story. Suppose there is a practical joke: a machine which can't provide anything, and the story is that this nonsense machine will become a useful product. If the story is told the right way, the audience will laugh about it, because it makes no sense at all. Do people laugh when they read stories about household robots? No, they don't, because the plot prevents the audience from getting the full picture. If the audience is not allowed to laugh about the product, it gets indoctrinated.

Laughing is equal to freedom. It allows somebody to stand above a subject. Telling a joke is equal to spreading the truth. In the case of humanoid robotics, jokes are rare. That means that there is no intention to explain what a robot is really doing, and the user is fooled with misinformation.

Productivity

There is a reason why the productivity of robotics is incredibly low. By definition, a robot is trying to replace the control part of a system with an automated algorithm. The robot isn't working like a classical industrial machine; the robot is using sensors and actuators to decide something. The car is driven by the motor, and the robot controls the steering wheel of the car. The crane is driven by electric current, but the robotic crane operator controls the buttons.

Unfortunately this part is hard or even impossible to automate. Most robots work great from the technical side, but they fail at doing the sensor-actuator task in a meaningful way. The working hypothesis is that only human level Artificial Intelligence is able to replace human workers. Right now, no human level AI is available, and as a consequence robots have to fail at increasing the productivity.

The problem is located in the missing research about failed industrial robots in the past. Many attempts were made over the decades, but the productivity gain was never measured. If a company that has sold industrial robots goes bankrupt, it's ignored by the robotics community. They pretend the case was never there. Instead, they are talking about future robots which are more powerful.

Human Level AI for industrial robots

Industrial robots were never successful, because all the tasks in reality ask for human level AI. A human level AI is a robot which is on the same level as a human worker. That means, it understands normal English, is able to learn new tasks and is able to fulfill complex tasks on its own. What autonomous robots can provide is a reduced form of Artificial Intelligence. The typical AI control program is able to steer a robot along a line or can do simple pick&place tasks which are preprogrammed by the algorithm. In robotics challenges like Micromouse and Robocup such minimal AI is enough to solve the task. The problem is that in reality the robot needs more skills to become highly productive.

From a technical point of view, it's not possible to program a human level AI in software. Even advanced research projects at the universities are not providing such features. All existing robots have only sub-human level AI implemented. For this reason, they failed in real life applications. The better alternative is to use a teleoperated robot. Teleoperation means that a human operator controls the robot over an internet connection. Teleoperation itself is not able to increase the productivity. The human operator will need the same time until the task is finished, and he has to be paid like the normal worker. The advantage of teleoperation is that the distance between the robot and the human operator can be increased.

A normal Internet connection is remarkably fast. It makes it possible to control a robot in realtime, similar to what a multi-player online game does. That means the latency that is good enough for games is also good enough for a robot control problem. In contrast to autonomous robots, a teleoperated robot is on the same cognitive level as a human. That means, it's possible to talk to the machine with “hello robot”, and the robot will answer in normal English, because on the other side there is a normal human.
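A minimal sketch of that claim: a UDP echo on the local machine stands in for the robot, and the operator side measures the round trip time of each command packet. On a real internet link the numbers are higher, but the structure of the control loop is the same.

    import socket, threading, time, json

    def robot_echo(sock):
        # Robot side: receive a command packet, acknowledge it immediately.
        while True:
            data, addr = sock.recvfrom(1024)
            sock.sendto(data, addr)

    robot_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    robot_sock.bind(("127.0.0.1", 0))          # ephemeral port, loopback only
    robot_addr = robot_sock.getsockname()
    threading.Thread(target=robot_echo, args=(robot_sock,), daemon=True).start()

    operator = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for step in range(5):
        command = json.dumps({"steering": 0.1 * step, "throttle": 0.5}).encode()
        t0 = time.perf_counter()
        operator.sendto(command, robot_addr)
        operator.recvfrom(1024)                # wait for the acknowledgement
        rtt_ms = (time.perf_counter() - t0) * 1000
        print(f"command {step}: round trip {rtt_ms:.2f} ms")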

This kind of human level skill is required for solving real tasks. For example, the crane on a construction site is doing a complex task and there is a need to talk to the crane operator. If the crane operator is software which was programmed for pick&place actions, it's not possible to talk to the crane. As a result, autonomous cranes are not used in reality, but a teleoperated crane is a useful tool. The same is true for delivery robots which transport a box from A to B. A normal robot which is working with a computer program doesn't provide human level capabilities. A simple request like “put the box down” won't be understood by the robot, because the software has no speech recognition module. But if the same delivery drone is controlled by a human operator, it will understand every single word. And even better, the human operator will understand sign language without extra commands, so that the interaction makes sense.

The working hypothesis is that teleoperated robots are useful for commercial applications while autonomous robots are not. The only tasks which can be solved by software controlled robots are synthetic challenges like Micromouse, but these challenges are different from practical applications.

Is there a need for human level AI?

Perhaps it makes sense to go a step backward and describe the precondition for normal robotics. The common idea of robot programming is that at first the robot is equipped with a piece of software, and then the software is able to solve the task. A typical example is a pick&place robot which moves an object from A to B. The assumption is that the pick&place software is enough for solving problems in reality.
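To show how thin such software often is, here is a minimal sketch of a scripted box-from-A-to-B program. The move and gripper functions are hypothetical stand-ins, not a vendor API; the point is that a fixed, pre-scripted sequence is already enough for the laboratory version of the task.

    PICK_POSE  = {"x": 0.30, "y": 0.10, "z": 0.05}
    PLACE_POSE = {"x": 0.30, "y": -0.20, "z": 0.05}

    def move_to(pose):
        print(f"moving gripper to {pose}")        # stand-in for the real motion stack

    def set_gripper(closed):
        print("closing gripper" if closed else "opening gripper")

    def pick_and_place():
        move_to(PICK_POSE)
        set_gripper(closed=True)      # grasp the box, assumed to always succeed
        move_to(PLACE_POSE)
        set_gripper(closed=False)     # release the box at the goal position

    pick_and_place()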

The problem is that the engineers are not able to increase the skills of the software; what they are doing instead is modifying the requirements of the task. In case of the pick&place robot, the engineers invent a robot challenge in which a box needs to be moved from A to B. If the robot is able to do so, it has won the challenge. This kind of task is very different from real applications. In reality, a pick&place task is more complicated. This sort of real task can't be solved by the initial software. That is the reason why a pick&place robot works great in the laboratory but fails in reality. Let us imagine a real pick&place task which is required in a factory. Solving this task with a robot is not possible. What the companies are doing is to utilize human workers for this task. So the question is: which kind of software is needed to replace a human worker with an AI?

The answer is a bit complicated. It has to do with the task. Or let me reformulate the question: how much Artificial Intelligence is needed to solve pick&place tasks from reality? The answer is that only human level AI is capable of doing so. Even if the task looks easy to solve, a normal algorithm isn't able to do it. That is the true reason why robots were never used in the factory: the gap between what robots have to offer and what the factory requires is too large.

The problem is not located in the domain of Artificial Intelligence, but has to do with human work in reality. All the jobs in the service industry, on the construction site, in the supermarket and in driving trucks to a destination are highly complex. They look easy only to humans, but they are too complicated for robots. The reason why these tasks are so demanding is that most of the work was automated already. For example, the engine in the truck moves the vehicle forward and the engine is driven by fuel. The only thing which is not automated is the steering task, which means operating the truck and deciding at which moment the brake is needed. The same is true for the crane on a construction site. The crane itself is driven by an electric motor. What the human operator is doing is controlling the crane. That means, he is doing a high level task which needs a lot of domain specific knowledge.

This kind of human level knowledge isn't provided by a simple path planning algorithm. The minimum requirement for a human worker is that he understands normal English. Nearly all existing robots are not able to do so; only humans can understand a sentence like “please stop the engine”. If a robot doesn't even understand a simple sentence, how is it able to replace the human worker? Right, there is no way, and as a result the automation project will fail.

December 18, 2019

Switch off the robot and have fun -- Analyzing the bottleneck in modern automation technology

From a technical perspective, Artificial Intelligence research has developed algorithms for controlling robots. The most advanced ones are motion planning with model predictive control. The idea is to create a forward model of the system and use the model for trajectory planning. This makes it possible to build biped robots and robot hands for manipulation.
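A minimal sketch of that idea for a 1D point mass: a double-integrator forward model plus random-shooting trajectory optimization in a receding horizon loop. Real MPC for biped robots uses proper models and solvers; this toy only shows the structure, and all numbers are assumptions.

    import random

    def forward_model(pos, vel, accel, dt=0.1):
        # Double-integrator forward model: predict the next state.
        return pos + vel * dt, vel + accel * dt

    def plan(pos, vel, goal, horizon=15, samples=200):
        # Sample candidate action sequences, roll them through the forward model,
        # keep the one whose predicted end state is closest to the goal.
        best_cost, best_actions = float("inf"), None
        for _ in range(samples):
            actions = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
            p, v = pos, vel
            for a in actions:
                p, v = forward_model(p, v, a)
            cost = (p - goal) ** 2 + 0.1 * v ** 2
            if cost < best_cost:
                best_cost, best_actions = cost, actions
        return best_actions[0]            # receding horizon: apply only the first action

    pos, vel, goal = 0.0, 0.0, 2.0
    for step in range(40):
        accel = plan(pos, vel, goal)
        pos, vel = forward_model(pos, vel, accel)
    print(f"final position {pos:.2f} (goal {goal})")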

Somebody may argue that the practical applications are obvious, because it's possible to utilize the technique in self-driving cars and in pick&place robots which can be used for industrial applications. There is only a small problem. It seems that solving a robotics task from a technical perspective is not enough. That means, on the one hand it's possible to build a pick&place robot, and on the other hand it's not possible.

What can be solved easily with modern AI techniques are so-called robotics challenges. These are synthetic challenges like Micromouse, Robocup, Mario AI or robot pick&place tasks. The mentioned combination of a forward model, motion planning and model predictive control results in a working system. The problem is that a synthetic robot challenge is very different from a practical application of a robot. A practical application is equal to converting the technology into a product and selling it to customers. Exactly this is not possible, and attempts from the past in that direction have failed.

What does that mean? Creating a robot with the help of model predictive control and planning is the best practice method for building a pick&place robot which works in a challenge. Using the same technique to build a commercial robot which is sold on the market will fail. That means, the same technique is a powerful one and a useless one at the same time. This paradox is hard to grasp. Mostly it's assumed that Artificial Intelligence is a technical challenge; that means, once the algorithm to control a robot has been identified, the overall problem is solved.

To measure the bottleneck in reality, it's not enough to describe robotics from a technical perspective. The more elaborate approach is to investigate the history of robotics companies and their failed attempts at selling a product to customers. The good news is that in the last decade many examples are visible. The interesting question is why a certain robotics company has failed to sell its product. There are some arguments available:

- price of the product is too high; this was the case for the PR2 robot, which cost around 400k US$

- robot technology is not advanced enough; this was the case for the HelpMate robot in the 1990s. At that time, the onboard computer was slow and the sensors were not accurate.

- price is low and robot technology is advanced, but the customer doesn't buy the product anyway; this was the case for the Baxter robot from Rethink Robotics

The pessimistic prediction is that even if the robot is sold for little money and works with the latest hardware and software, the customer won't buy the product. This outlook is equal to a general failure of robotics, which means that it's not possible at all to sell a robot to customers. Let us construct a hypothetical example. Suppose a company builds a pick&place robot which is very cheap and works with model predictive control. Will this product become successful on the market or not?

The prediction is that the robot won't find its customers. The reason is that even a low-cost, MPC-based robot is not able to increase the productivity under real life conditions. The task which is solved by the robot and the requirements in reality are not the same. Or let me give another example which is closer to most people.

Suppose a car company develops a self-driving car for the same price as a normal car. It's equipped with the latest sensor technology and advanced AI software. Technically speaking, the car is able to drive autonomously. Will this car get customers or not? The naive prediction is that such a car will be sold many million times worldwide, because it reduces the workload of all the human drivers. It is useful for private households and for commercial applications as well.

The problem is that such an optimistic assumption may be wrong. In reality, a self-driving car is not fulfilling the real requirements. It doesn't reduce the workload of the driver; it increases the workload. That means, the human driver will deactivate the autopilot if he likes to relax a bit. This kind of counter-intuitive strategy doesn't show a lack of knowledge on the part of the human driver; it shows that something with robotics technology is wrong.

Autopilots in ships

An interesting example of automatically controlled vehicles is the ship. Autopilots for ships have been available for decades, or at least it is written in the literature that such autopilots are available. A more realistic investigation comes to the conclusion that 0% of all ships today are using an autopilot. Instead, all the miles are driven manually, without computerized decision support. How can this mismatch be explained?

It's important to separate autopilots in ships from the literature about autopilots. What is explained in the literature is the technical working of an autopilot. In the average book it's mentioned that steering a ship is a mathematical challenge. In some newer publications it's described as a control theory problem which can be solved with modern algorithms. It's possible to make this problem more obvious by developing an autopilot from scratch and comparing different RC controlled ships in a challenge.
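A minimal sketch of that textbook version: a PID heading-hold controller on a toy first-order yaw model. The gains and the yaw dynamics are assumptions; it shows the laboratory problem, not anything that runs on a real bridge.

    class HeadingHold:
        def __init__(self, kp=2.0, ki=0.02, kd=4.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def rudder(self, heading, setpoint, dt):
            error = setpoint - heading
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    heading, yaw_rate = 0.0, 0.0          # degrees, degrees per second
    autopilot = HeadingHold()
    for second in range(120):
        rudder = max(-35.0, min(35.0, autopilot.rudder(heading, setpoint=30.0, dt=1.0)))
        yaw_rate += 0.05 * rudder - 0.1 * yaw_rate   # toy first-order yaw dynamics
        heading += yaw_rate
    print(f"heading after two minutes: {heading:.1f} degrees")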

On the other hand, there is the problem of ship steering in reality. In reality, the autopilot is something which isn't available. A ship works with a human operator in the loop. The operator has some buttons he is able to press, and before he can do so, he has to ask the captain what he wants to do next. An autonomously controlled ship would be equal to replacing human work with software. This futuristic vision is not available in reality, and it won't happen in the next 50 years. In reality, the number of human operators on the bridge is constant. That means today's ships are controlled the same way as in the 1950s.

Sure, the technology has evolved a little bit. Today's ships are using computers and modern sensors. But the productivity is the same. Productivity is the measurement of how many human workers are needed to control a ship of a certain size. The productivity has never increased. That means, computer technology was not able to replace human work with algorithms.

The surprising fact is that even remote controlled ships are not available right now. It's a vision for the future to increase the efficiency of freight transport. The idea is that if the human operator can stay outside the ship, it's easier to control the device. In case of a remote controlled ship, the number of humans needed in the loop remains the same. The only advantage is that they don't need to be physically on the ship. This kind of low end automation was never realized, because the disadvantage is that many new sensors and costly equipment have to be installed on the ship. That means, even in the year 2019, the share of remote controlled freight ships is 0%. The prediction is that in the next 50 years this share will stay the same.

Conclusion: Autopilots for ships are not available. Remote controlled ships are not available. The productivity over the last 50 years hasn't increased, and it's not possible to use autopilots in reality. What is written in the books about Artificial Intelligence is wishful thinking.

A pessimistic prediction for the future of robotics

There are two advanced robotics systems available: the Baxter robot from the Rethink Robotics company, and self-driving cars from different companies. Both systems are equipped with cutting edge software and are able to fulfill certain tasks. The interesting fact is that Baxter and self-driving cars have two sides which should be mentioned. At first glance, the projects are amazing. In the case of the Baxter robot, it's the first time that an industrial robot can be programmed by everyone. The system is robust against errors and can be reprogrammed for different tasks. In contrast to other robots, Baxter is way more advanced.

Unfortunately, there is a less known part of the project. The Rethink Robotics company went bankrupt last year because they were not able to sell the product on the market. And after watching some of the videos, the reason why is obvious: the Baxter robot isn't solving practical tasks, but increases the complexity. That means, with the Baxter robot in the loop, the costs become higher, not lower. The same problem exists for self-driving cars. From a technical perspective, current autonomous cars are advanced, but they can't be used in reality.

The open question is why there is such a gap between the promise of the inventor and the reality. The problem has nothing to do with Artificial Intelligence itself; it's located in the tasks which should be done by robots. In most cases, the promise is that a robot can replace a human worker. To do so, the robot needs the same capabilities as a human worker, and this is not the case. What self-driving cars and the Baxter robot have to offer is a computer program which is some sort of narrow AI. It solves a certain task which was programmed beforehand. This kind of capability is not enough to replace a human worker.

Some engineers will argue that this is not a real problem, because current robots are sold as co-robots, which means that the robot and the human are working together on the same problem. Exactly this is not available. A comparison between a) a single human and b) a human plus a robot will show that the single human is more efficient. He can do the same task in less time.

Let us focus on the latest generation of autopilots which are available in some luxury cars. The surprising fact is that if the human driver activates the autopilot, his workload becomes higher, not lower. That means, the autopilot isn't supporting the human; it puts the human under stress. The same is true for the Baxter robot. So the conclusion is that Artificial Intelligence isn't a helpful tool but the opposite. Exactly for this reason, Rethink Robotics went bankrupt.

Let us make a simple reality check. The number of Youtube videos about the Baxter robot is amazing. Nearly all features of how to use the machine are explained. In contrast, not a single company is using the robot for practical applications. So the conclusion is that Baxter is some kind of educational project which can't be utilized in reality. It seems that the world has a huge interest in explaining the Baxter robot to others, but there is nobody who watches all these tutorials and uses this knowledge. So the prediction is that the knowledge is useless. It means, it's not possible to learn from them how to install a robot in a factory.

Without doubt, there is a need for increased productivity in the economy. But robots in their current form are a dead end. If the idea is to increase the automation level, other options apart from Artificial Intelligence should be investigated first.