September 21, 2021

Short history of the assembly line

 

Even though assembly lines are used everywhere in capitalism, the amount of literature about the subject is small. To understand the situation better we have to go back to the time before the first conveyor belt was invented. What human societies are trying to achieve is to produce goods, for example food and household products. Before somebody can eat bread, it has to be produced first.
The main bottleneck in producing goods is low productivity. It takes money and time until a product is ready. For example, if somebody bakes a bread in the home kitchen, he will need some hours until the bread is ready to consume. The amount of invested time and money influences the price at which the product is sold.
The assembly line is a way to improve productivity. The idea is that at least the transportation of products from one station to another is fully automated by the conveyor belt, while the task at each station is performed by human labor. In a more advanced setup, mechanical machines are used as workstations so that the amount of human labor is reduced further.
What all studies of assembly lines have shown is that the principle works remarkably well. Comparing an assembly line factory with its manual counterpart feels like some sort of miracle. The advantage of the assembly line is huge.
Especially for producing mass products, the principle is the best known technique today. It reduces the cost of a single unit drastically. And perhaps this advantage is the main reason why the assembly line is described so seldom in the literature: it works too well to be true. Companies that use the technique can increase their productivity drastically, and the amount of production is practically endless. The assembly line has created the paradoxical situation that valuable products are no longer scarce but can be produced in almost unlimited quantities. The same factory can produce 100 units per day or 10,000 units per day, with hardly any additional cost.
Basically speaking, a well maintained assembly line can create products out of nothing, similar to what a magician is doing. Assembly lines were improved further with the introduction of robots. The transportation of goods between the workstations has been automated since the beginning, but in the past, human workers were needed at each workstation. A robot can automate the remaining workforce, so that in the end no humans at all are needed. This is not wishful thinking but reality.
Let us try to understand what the consequence is. The consequence is that the production costs for mass products can become zero or nearly zero. This sounds great for the customer, but it can produce negative results for the economy in general. Capitalism is great at managing limited resources, but if goods are provided for free, the idea of capitalism no longer works. Basically speaking, the fully automated assembly line is a positive and a negative thing at the same time. It revolutionized capitalism in the 19th century, but it will make capitalism obsolete in the future.
From a technical perspective a factory does two things. The first is to transport goods between the workstations. That means, at workstation 1 the bread is created and at workstation 2 it gets packaged. The distance in between has to be traveled by all the breads. Somebody may argue that it is not very complicated to take a bread with a weight of 500 g and transport it 500 meters to the next workstation. Sure, for a single unit this is true, but what happens if the same action has to be done 1,000 times? Then it becomes a large scale logistics problem. And exactly this problem is solved by a conveyor. A conveyor is an automated logistics machine which works 24/7 without any human labor. The only thing needed is electric current, and the conveyor will transport thousands of raw products.
The second task done in a factory is to handle an item at the workstation, for example to put a bread into a box. Automating this task with machines is more complicated. Some mechanical machines are available, but most of the work is done by humans. So we can say that the workstation is the weak point of an assembly line: if the humans work slowly, the conveyor speed is limited. From a technical perspective a single workstation can be automated with a robot. A typical example is a pick-and-place robot which packages the bread into a box by itself. No humans are needed; the operation works autonomously. Such a production line is the most advanced technology available. The output per minute is high and the overall costs are low. A simple throughput model of this bottleneck is sketched below.
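To make the bottleneck argument concrete, here is a minimal Python sketch. The stations and their processing rates are invented for illustration; the point is only that the line throughput equals the rate of the slowest station, which is why automating the human-operated station raises the output of the whole line.

    # Minimal throughput model of an assembly line.
    # Assumption: each station processes items at a fixed rate (units per hour)
    # and the conveyor merely moves items between the stations.
    stations = {
        "baking": 1200,    # automated oven (invented rate)
        "packaging": 300,  # human workers at the workstation (invented rate)
        "labeling": 900,   # mechanical machine (invented rate)
    }

    # The whole line can only run as fast as its slowest station.
    bottleneck = min(stations, key=stations.get)
    print(f"throughput: {stations[bottleneck]} units/hour, limited by {bottleneck}")

    # Replace the human packaging station with a pick-and-place robot:
    stations["packaging"] = 1000
    bottleneck = min(stations, key=stations.get)
    print(f"after automation: {stations[bottleneck]} units/hour, limited by {bottleneck}")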
The main advantage is that such a system is made for mass production. It can easily create many breads per hour, and this is what today's and future companies are doing.
Production lines and robots have an inner logic which is realized in any case. The main principle is to increase productivity. All conveyors and all robots are created with this goal. That means the throughput is high, the costs are low and the benefit for the owner of such a machine is high. The reason is that technology is used to solve problems. The problem in capitalism is how to produce goods. There is a need to create a certain amount of bread, and the bread factory does so. It makes no sense to invent a conveyor that runs slowly or build a robot that makes mistakes. Such potential errors are fixed easily, and in the end the machine runs at maximum productivity.
 
Conveyor
First and foremost a conveyor is a technical innovation. It consists of an electric motor plus a belt. After activating the machine it will transport the items. Even though the principle doesn't look very advanced, it can be used everywhere. Especially in cases where a high number of units has to be transported, the conveyor is a here-to-stay principle. It is superior to human powered transport vehicles, and over short distances it outperforms even a car or a truck.

How to write Wikipedia articles?

 

Wikipedia is without any doubt the most successful online encyclopedia in the world. Many millions of people read its information every day, but only a small fraction is motivated to contribute to the project. This mismatch is surprising, because after adding new information to an article it is certain that somebody else will read it. So Wikipedia provides additional value over a normal weblog, where in the worst case newly created content is perceived by nobody.
The main reason why somebody isn't uploading content to Wikipedia is that the pipeline is hard to understand. It is simply not predictable whether a certain sort of content is wanted by Wikipedia or not. Some tutorials on how to participate in Wikipedia were written in the past, but the number of newbies who like to read these tutorials is low. The chance is high that only the existing Wikipedia authors are interested in editing the project, and apart from this closed circle nobody else is motivated to learn how Wikipedia works internally.
Somebody may argue that this situation has to be changed and that everybody can become a Wikipedia author. According to the raw numbers, this wishful thinking stands in contrast to reality. Even the English Wikipedia, which is the largest project, wasn't able to increase the number of authors; instead, the number has become smaller over the years. Basically speaking, 99% of the world population isn't interested in the project at all, but is happy with reading the existing content without becoming part of the project.
Even if the situation looks bad, there is a need to improve the Wikipedia project and add new articles. The reason is that for many important topics no information is provided yet, and somebody has to write all the missing information. So the question is, how to do so?
Today's Wikipedia works entirely with references at the end. These references are more important than the article itself. So the first question is which sort of sources should be selected. In the information age there are different sources available: online forums, books, YouTube clips, weblogs, private websites and academic journals. From a technical point of view all these sources can be added to an article, and there are real life examples in which exactly this was done. But let us describe the situation from a more conservative perspective. The most valuable sources for a Wikipedia article are printed books and printed academic journals.
This excludes YouTube clips, amateur websites and online forums, and accepts only content which was written for university students and, very important, which was peer reviewed. Peer review is a step before a book or journal gets printed.
Websites like PLOS (which is an electronic journal) and Academia.edu (which is an academic social network) are never printed; they are electronic-only academic websites. From a conservative standpoint, electronic-only publications do not count as peer reviewed, so they are not allowed as a Wikipedia reference. This makes it easy to define what a good source for Wikipedia is:
It is either a printed book or a printed academic journal.
Now it is possible to describe the workflow of creating a new paragraph for a Wikipedia article. First, some references are identified. Then notes are taken from these references. Then the notes are converted into full text. In the next step the wiki syntax is added and, last but not least, the paragraph is uploaded to Wikipedia. A small sketch of the syntax step is shown below.
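To give an impression of the syntax step, here is a minimal sketch of what an uploaded paragraph could look like in MediaWiki markup. The sentence and the cited book are invented placeholders; only the <ref> tag, the cite book template and the Reflist template are real Wikipedia conventions.

    The assembly line reduced the production costs per unit.<ref>{{cite book |last=Doe |first=John |title=A History of Mass Production |publisher=Example Press |year=1995 |page=42}}</ref>

    == References ==
    {{Reflist}}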
This is the overall workflow for updating Wikipedia as an author. Sure, the pipeline is very complicated, and it is followed only by people who are already familiar with the project. But this is how Wikipedia works. Any information not available in a printed book or printed journal can't be added to Wikipedia.

Predicting failure pattern for Linux

 

Most Windows users are shy about giving Linux a chance. The reason is that they expect that something in the installation will fail. Failure means that the graphics card isn't detected, that the hardware isn't able to connect to the internet, or that the newly installed system won't boot anymore.
In spite of so many positive articles about how wonderful Linux is, this fear of Windows users is justified. Especially if they have bought recent hardware in a store, the chance is high that none of the Linux distributions will work.
Or let me explain it the other way around. The first assumption is that the electronics stores are selling brand-new laptops. The second assumption is that computer experts in particular buy the latest hardware, because they want maximum performance and a high quality LED screen. So the resulting situation is that the user sits in front of an advanced computer, but Linux won't work with this machine. The strength of Linux is that it runs on older computers which were sold two years before.
This kind of mismatch can't be fixed by a single user, no matter whether he is using Debian, Linux Mint or Arch Linux. The chance is high that the kernel won't recognize the chipset, and the only one who can fix the problem is Linus Torvalds in the upstream.
From a communication standpoint it is interesting to describe who feels in charge of fixing the problems. Nobody feels responsible. The Linux distribution will argue that it depends on the kernel to detect recent hardware. The kernel maintainer will argue that the hardware has been available for only 6 months, which is too short a time to write a driver. And the hardware vendor will argue that the system was tested with Windows but not with Linux.
Instead of judging who is wrong in this game, let us make a simple thought experiment. Suppose 100 randomly selected new laptops are chosen. How many of them will work with Linux? 10% or less. The rest will have major problems with WiFi, GPU and SSDs. None of these problems can be fixed by changing a config file; they require writing new source code in the upstream. This makes Linux a useless operating system for new computers. And this means that if Windows users are shy about giving Linux a chance, they are right. They estimate that Linux won't boot, and this is what will happen in reality.

September 16, 2021

Will robots steal your jobs?

 

There are different sorts of robots available. The first category are educational robots like Lego Mindstorms and research robots like line following robots. These devices are used for teaching Artificial Intelligence at the university, and they are used by researchers as a testbed for implementing new algorithms. It is unlikely that these robots will replace human workers, because the tasks they are solving are not done by humans. Even if a line following robot is able to master the course, this won't have any effect on the global economy.
The second category of robots works quite differently. Industrial robots, and especially robots at the assembly line, are created with the purpose of increasing productivity. The idea of a packaging robot which puts food into a box is to replace the human workers who had this task in the past. These robots can replace human workers, and they will influence the global economy.
The reason is not located in the robots themselves or the software they are using; it has to do with how the economy works. What most companies are doing is mass producing goods like clothes, food and other products. For example, the idea behind a pizza making factory is to create 100k pizzas each day and freeze them at -20 degrees. Then the packages are transported with trucks to the end customer. That means the economy is about mass production and mass shipment. If robots are utilized to improve these tasks, then the technology will replace human workers.
Or let me explain the situation from a different perspective. A pizza making factory has no demand for a robot which can navigate in a maze, because there is no maze in the factory. What is available during the process is a production line, so the factory has a demand for robots for this purpose. With this idea in mind it is possible to predict at which moment human workers will be replaced by machines. Suppose two robot models are created by the engineers. The first one is a robot created for the assembly line to handle large amounts of items. The second robot is an automated truck which can drive on its own on the highway.
Both robots combined will affect the global economy strongly. The result is that the human workers are replaced by these two robot models, creating a new situation in which mass automation is available.

September 15, 2021

Dividing computer history into two groups

 

Most computer museums try to explain the development of computer technology on a time axis. The idea is that all the computers in the 1960s have something in common and were replaced by much faster computers in the 1970s. The problem is that this description emphasizes revolutionary technology too much, so each year is labeled as a major milestone. But if every year is important, no year is important, and the technology can't be described at all.
The more elegant way to structure computer history is to introduce only two periods. The first one runs from the 1940s to 1992, and the second from 1992 until today. From the self-understanding of computing history, the year 1992 can be seen as the transition from fourth generation microprocessors to fifth generation artificial intelligence. That means the generations 1 to 4 are put into the same basket, and the 5th generation since 1992 is the second basket.
Let us describe the situation in detail. The most advanced computers in the year 1992 were systems like the Atari ST, the 286 Intel PC and the early Sun workstations. These systems are able to emulate all the computers before; that means an MS-DOS 286 PC can be seen as a modern calculator which has replaced transistorized computers. The idea was to combine electronic components like a hard drive, a CPU and a monitor into a single unit, and this was called a personal computer. This was exactly the objective of the era from 1940 to 1992. That means all the computers were created with this objective in mind, and it took many decades until the goal was realized.
The computers invented after the year 1992 were created with a different objective. The main goal of this era, which is still valid today, is to connect the computers to a worldwide internet and to realize Artificial Intelligence.
Let us describe the situation until 1992. The interesting situation was that none of the computers of that era were used to control robots, and none of the computers were connected to the internet. Sure, some exceptions from this rule exist, because the internet has its roots in the 1970s. But this application was a minor one and not discussed widely. In contrast, the computing debate until 1992 was about more technical things like how to build 8-bit CPUs, how to implement programming languages and how to use a PC as a word processing machine.

September 08, 2021

Analyzing the advertisement for Stackexchange

The Stack Exchange network has an interesting AI section which is available under https://ai.stackexchange.com/ Instead of reading the postings themselves, I'd like to give an introduction to the ads on the website. At the top right there is a large box which contains commercial advertisement for the KeyDB database. The same advertisement is available in a banner layout on top of each posting. If the user clicks on the ad, he gets the information that the ad was created by Google.
If the user keeps browsing on the SE.AI website, the ad switches to another one, which is rev.ai. According to its self-description, rev.ai is an accurate speech-to-text API, which is sometimes called a voice recognition engine.

September 07, 2021

Industrial robots at the assembly line

 

Assembly lines are the most efficient production facilities out there. They are running 24/7, they are mounted in a fixed place and they can be automated. In contrast to research oriented robotics projects like RoboCup or Micromouse, an industrial robot mounted at the assembly line is doing something meaningful.
From the historical point of view, assembly lines are a typical example of mass production at the lowest cost. The question is not how to produce and package a single pizza but millions of them each day. The goal is to increase the quantity and reduce the cost of a single unit down to zero. Then the product is sold to the mass consumer. Somebody may argue that mass produced goods created by assembly lines are outdated. From a research perspective this might be correct, because programming such a robot can become a boring task. On the other hand, such machines are highly important for the economy. They are the backbone of capitalism.
So let us think about not creating toy robots which can play games or help in the kitchen; the idea is to create real robots for the assembly line. Such a robot is never autonomous, and it has no onboard battery pack. Instead, the amount of electrical current is endless, and the system can be built with heavy metallic parts.
The first thing to do is not to focus on the robot itself but on the concrete task. An assembly line delivers a raw product from the left, then an action is performed at the workstation, and the improved product is delivered to the right side. This principle is repeated many times. A typical example is a workstation which drills a hole in each piece of wood delivered by the assembly line. The technical question is how to program a robot which can do such a task error free.
To answer the question we have to investigate how research oriented robots are programmed and adapt this technology for industrial robot programming. A state of the art technique for implementing Artificial Intelligence is a reward function which maps the current state to a numerical value. This is equivalent to evaluating the actions of the robot. If the robot is working well, the reward function will show values of 0.95, 0.94, 0.96 and so on. But if the robot makes a mistake, the score is reduced to a value near 0.0.
This realtime reward function is the backbone of modern robotics. It can be used to realize advanced control systems. At the same time it is utilized to debug existing robot software. Debugging means that the robot is controlled by a joystick, and while the human operator is doing so, he sees the reward value on the console. If this interaction works well enough, it is possible to program another software which removes the joystick from the loop and controls the robot autonomously. A minimal sketch of such a debug loop is shown below.
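As a minimal sketch of this debugging idea, the following Python loop reads a joystick command, applies it to the system and prints the reward on every tick. All names and the plant model are invented placeholders, not a real robot API:

    import random
    import time

    def read_joystick():
        # Placeholder for a real joystick driver; returns a command in [-1, 1].
        return random.uniform(-1.0, 1.0)

    def apply_action(position, command):
        # Placeholder plant model: the command nudges the tool position.
        return position + 0.1 * command

    def reward(position, target=0.0):
        # Task oriented score: near 1.0 when the tool is close to the target,
        # dropping toward 0.0 as the error grows.
        return max(0.0, 1.0 - abs(position - target))

    position = 0.5
    for tick in range(20):
        command = read_joystick()  # human operator in the loop
        position = apply_action(position, command)
        print(f"tick {tick:2d}  reward {reward(position):.2f}")
        time.sleep(0.05)

If the reward stays high while the human drives the robot, the same reward function can later judge an autonomous controller which replaces the read_joystick() call.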
Let us imagine that a robot control software for the drilling task is already available. The robot at the assembly line repeats the same task over and over again. While it is doing so, a reward value of 0.95 or higher is shown on the display. This reward value can be displayed as a chart and shows what the system is doing. That means that in the robot software there is a module available which converts the current action into a single reward value. No matter what the task is or which programming language was used, the reward value is always visible. Its importance is similar to the amount of energy a robot consumes. It is not a physical measurement but a task oriented piece of information which has to do with the Artificial Intelligence in the robot.
Let us imagine under which conditions a reward value of 0.95 is shown for a drilling robot. The first important thing is that the drilling starts at the correct position, not too early and not too late. The second important thing is that the hole has a certain depth. If everything is perfect, the reward is equal to 1.0, which is the maximum value. A sketch of such a function is shown below.
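Here is a minimal sketch of such a reward function in Python. The tolerances and the weighting are invented for illustration and would have to be tuned for a real drilling station:

    def drilling_reward(start_pos_mm, target_pos_mm, depth_mm, target_depth_mm):
        """Map the outcome of one drilling action to a score between 0.0 and 1.0."""
        position_error = abs(start_pos_mm - target_pos_mm)  # started too early or too late
        depth_error = abs(depth_mm - target_depth_mm)       # hole too shallow or too deep
        # Invented weighting: every millimeter of error costs 0.1 reward.
        score = 1.0 - 0.1 * (position_error + depth_error)
        return max(0.0, min(1.0, score))

    print(round(drilling_reward(10.2, 10.0, 24.8, 25.0), 2))  # almost perfect -> 0.96
    print(round(drilling_reward(18.0, 10.0, 25.0, 25.0), 2))  # wrong position -> 0.2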

September 06, 2021

How future ready are q-tips?

 

Q-tips are everywhere. Despite the explicit warning label on each box not to use them for cleaning the ear, the majority of customers around the world are doing exactly that. They are buying billions of q-tips each year, and the only reason why the consumption is so high is that they are used for removing earwax. At first look the procedure seems harmless. The user can remove earwax and won't feel pain. Everything looks fine.
What most newbies don't know is that 700 million people each year have to visit ENT doctors around the globe only because they have a clogged ear and sometimes an ear infection or more serious diseases. The problem with q-tips is that the catastrophic effect occurs over a longer timespan. It takes some months until the cerumen is compressed near the eardrum. Then the person will put his head under water and bacteria will flood into the ear. But the water can't leave the ear canal, and an infection will grow exponentially.
So if q-tips are wrong, what is the recommended alternative? It is called an aspirator bulb. This rubber device is filled with water, and the water is pressed into the ear canal. The same technique is used by most ENT doctors to remove earwax. It is a bit unusual to do so, but it works well enough.
A while ago an interesting paper was published in which the relationship between using q-tips and ear infections was analyzed. The conclusion was that nearly all ear infections are produced by q-tips and nothing else. This was determined with two trial groups: the first group of volunteers used q-tips, the other did not. Then it was compared which persons had to deal with otitis media and which did not. Another hint why q-tips might be a bad idea for removing cerumen becomes apparent if someone analyzes the situation from a technical perspective. Suppose there is some dust in a small opening and somebody moves a q-tip back and forth; it is very likely that a remaining fraction of the dust stays in the opening forever and gets compressed at the dead end. And exactly at this location a lot of bacteria can grow.
Suppose somebody feels that his ears are clogged, so it might be caused by compressed earwax. The interesting situation is that a look into the ear canal with a microscope will show that the cerumen is always located very deep in the canal, so that it is impossible to remove it. Deep means that the cerumen is next to the eardrum, and this organ is very sensitive. What will happen if somebody tries to clean earwax which is very deep in the ear canal? Right, it will hurt the eardrum for sure. Such a situation is not a trivial one but a serious health condition. The interesting point is that many people are affected by this issue. 100% of the world population is equipped with ears, and all these ears produce a lot of earwax every day. That means that making a mistake in this procedure amounts to a global health problem.

September 04, 2021

Funny computers in 1992

 

There are two interesting computers available in the year 1992: the Commodore 64 and the NeXTstation Color Turbo. The first thing to mention is that from today's perspective both systems are outdated. The C64 was equipped with only 64 KB of RAM, while the NeXTstation had around 16 MB of RAM. Both is too little for serious applications.
In 1992 both computers were used with passion by their owners. The Commodore 64 was the most sold computer ever, and a lot of software was available for it. The NeXT computer, meanwhile, was state of the art technology which had better hardware specifications than any other PC at this time. But the problem is that in both cases the system was not ready for internet access, it wasn't possible to run bigger applications, and a simple video couldn't be played back. The chance is high that even a simple mp3 file couldn't be played on the NeXT machine, but this is only a cynical assumption.
The problem is not located in the manufacturing companies, which were Commodore and NeXT; the main cause why the hardware was slow was the year 1992 itself. At this time, all the computers on the market were slow and too expensive. The funny thing is that at the same time, the year 1992 represented the most advanced technology ever available in computer history up to that point. Compared to the computers of 10 years earlier, the NeXTstation was a great piece of hardware.
It was explained in a previous blog post that the year 1992 was some kind of milestone year. The classical period of computers had ended, and the way was opened to a new sort of technology which would revolutionize everything. We can say the computers until 1992 were designed with classical principles in mind. There was a monitor, a CPU, a bit of RAM, and sometimes a hard drive was available. The idea was that the computer could execute small programs and was able to run some games.
So we have to ask what the difference to today's technology is. The difference is that computers in 1992 were never equipped with 256 MB or more RAM, they never had a hard drive larger than 1 GB, they were not able to play back video clips and, very important, they weren't able to show the World Wide Web. Let us figure out how these applications were handled until the year 1992. Larger amounts of data were not stored on computers but on printed sheets of paper. Books and journals were also very important at this time. The only technology option for playing back video clips was the television together with a VHS recorder, and a technology like a search engine over the internet wasn't invented yet. Instead the people visited a library or consulted a printed encyclopedia.
The reason why the year 1992 is the milestone year, and not the year 1987 or 5 years later, is that around the year 1992 the World Wide Web was invented. This marks the transition from classical computing into a new era. If we go only 10 years into the future, to the year 2002, everything has changed. At this time, fast and very powerful computers are available which are able to play back video and can browse the internet. All the problems of former 1992 computer technology were solved. The average desktop PC in 2002 was equipped with endless RAM, an unlimited amount of hard drive space and a fast CPU which could run multiple applications.

Are qtips the number one cause of otitis media?

 

According to a famous academic paper, they are. But let us investigate the situation in detail. There is some sort of action sequence which looks natural at first but will result in a maximum credible accident, which is a technical term for a nuclear disaster.
1. The ear is producing earwax all the time.
2. To remove the earwax, a q-tip is used, which doesn't produce pain.
3. If q-tips are used frequently, the cerumen gets compressed near the eardrum and can't be removed anymore.
4. Then the patient swims in a lake; bacteria from the water get into the ear.
5. Until now no measurable pain is there. Everything looks normal. At the same time there is a great danger for the patient.
6. It will take around 3 days until the bacteria behind the cerumen have grown exponentially. They have infected the ear canal, and the patient will feel a small irritation.
7. The patient does nothing but waits for another day. The cerumen together with the bacteria has become an infectious disease, and the patient definitely needs an ENT doctor.
8. After another month without investigating the situation in detail, the lymph nodes are swollen, the ear infection has become an acute otitis media, and the patient is in a life threatening situation.
What we can see is that the situation starts not very harmfully but escalates quickly into a medical emergency. The assumption is that such an action sequence is normal for the 700 million patients worldwide who visit the ENT doctor for acute ear infections.
The recommendation is very simple: monitor your own ear canal and investigate the ear canals of close friends. This will help a lot to prevent infectious diseases.

September 03, 2021

A cultural history of the cotton swab

 

The q-tip is a widely used product. At the same time it is the number one reason for many infectious diseases in the world. How can it be that a simple product has such a paradoxical effect?
Q-tips are sold by the industry with the purpose of cleaning the ear. Sometimes they are used for cleaning mechanical machines, but their main purpose is the application in the human ear. The interesting situation around the q-tip is that everybody knows that ENT doctors don't recommend it, and at the same time q-tips are used frequently. The underlying problem has to do with missing education. There is no certain instance in charge; it is a cultural problem which affects all countries.
Let us analyze the situation from an ENT doctor's perspective. The q-tip is inserted into the ear and removes some of the cerumen, while a small part gets compressed into the ear canal. If the q-tip is used frequently, the cerumen near the eardrum becomes harder to remove. And now something interesting happens. If the patient takes a bath and puts his head under water, or if the person is diving in a lake, water will run into the ear. The cerumen soaks up the water and swells up. A compressed, swollen cerumen makes it easy for bacteria to grow in the ear canal. This condition is called otitis media.
At first look this sounds like a minor problem which affects nobody. But the raw numbers make clear that the problem is urgent. Around 700 million people are affected by otitis media each year worldwide. Earwax removal and clogged ears are the most common reasons why patients visit an ENT doctor. A lot of antibiotics are also prescribed each year because of this single problem.
Let us try to understand the opposite party. Why are people using q-tips to clean the ear? Because otherwise the amount of cerumen in the ear keeps growing. The ear produces earwax in large quantities every day. The cerumen never runs out on its own; it has to be removed. If no q-tip is used, the ear gets clogged and the person can't hear.
The good news is that there is a treatment available which negotiates between both sides (the ENT doctor and normal users as well). It is the water irrigation method for cleaning the ears. The idea is to replace the cotton swab with a small water pump. Warm water is shot into the ear, and this removes the cerumen very effectively. It also prevents the cerumen from getting compressed near the eardrum.
There is only one simple problem: the method is unknown to the public. Only ENT doctors are using water irrigation systems and microsuction systems to remove earwax. Normal users at home prefer q-tips. And because of this mismatch, many millions of people worldwide are suffering from ear infections.

Can Artificial Intelligence be realized at all?

 

In the past, Artificial Intelligence was at first a technical problem. It was obvious that chess playing robots were not powerful enough and that biped robots had failed to walk up stairs. With the advent of more modern algorithms it has come within reach to build much stronger AI systems which can do most of the desired tasks. It was successfully demonstrated that AI can play Tetris, drive a car on its own and cook a meal in a kitchen. From a technical perspective there is no longer a limitation visible. The engineers have figured out how to solve most of the problems, and the remaining tasks, for example more general Artificial Intelligence, will be solved by future scientists.
At first look this sounds like a pathway to a wonderful future in which robot trucks are driving on the street and household robots are cleaning the dishes. Let us assume a world in which robots are demonstrated successfully for all domains; is this equal to building robots?
The difference between a robot demonstration and a real robot is that the latter is used for practical applications. Only if an autonomous car is sold to the public is the technology ready. The question is when this will happen and whether it makes sense at all. At first let us understand the common narrative. The idea is that engineers are able to program autonomous cars, and the consequence is that within the next 10 years these cars are marketed and sold to the general public. So we will see AI technology in reality.
The first part of the statement is correct. Engineers have developed the technology, and it was demonstrated many times. It is possible to control toy cars with a neural network, and larger cars as well, in the sense that the car takes care of the traffic lights and obstacles on the road and maintains the correct speed. But the second part of the narrative, which is about introducing this working technology into the real world, is in question.
How many examples of commercial robots have been available in the last 50 years? Right, not a single one. And the assumption is that nothing will change in the next 50 years. Announcing a commercial robot is some sort of running gag but can't be transformed into a real product. At least for the last 10 years the common explanation was that the hardware and software weren't developed well enough. That means it is too costly to build robots, and if they are created from servo motors, they are not able to solve tasks by themselves. This prevents a lab project from being transformed into a valuable product.
With the advent of cheap single-board computers like the Raspberry Pi and advanced software, this bottleneck was solved. In theory there is no reason to delay the introduction of robots. Let us take a closer look at the situation today, which is the year 2021. Not a single commercial robot can be bought worldwide. The only thing available are announcements from companies that they are planning to build robots in the future. The open question is: will this happen or not?
Perhaps it makes sense to simplify the overall situation a bit and imagine some sort of robot in a sandbox. Suppose the idea is to build and sell a line following robot from scratch. The first thing to do is to build the hardware and then install software on the device. The first stage for the prototype would be a working robot which is shown in a YouTube video. Such a robot is able to navigate along a line, and it could be used for many applications, for example cargo transport. How many customers will buy such a robot? It is a rhetorical question, because the amount is zero. The reason is that no concrete customer has a purpose for such a robot. And perhaps this is the most surprising insight.
How can it be that a working line following robot, a chess playing software or an autonomous car doesn't fulfill the needs of the customer? The estimation is that factories have a demand for a line following cargo robot and that private households need a self-driving car. And because of this assumption they will buy a robot if it is available on the market. But is the assumption correct that there is a need for such technology?
What we can say for sure is that in the past there was no need. The number of kitchen robots and hospital robots sold to customers is known precisely: it is 0.0. That means no one in the world has a demand for such a robot. The open question is why this should change in the future. Or let me explain it the other way around. The implicit assumption is that in the next 10 years private households and large companies as well will buy lots of autonomous cars, biped robots and kitchen robots. Will they?
Let us investigate what will happen if they don't. Not buying a robot means that a robot is not delivered to the customer. And this means that the number of sold units is low or even zero. The prediction is that this kind of outlook is the more realistic one. That means that even if automotive companies announce self-driving cars, they are not able to sell them because of missing customer demand.
Let us investigate which sort of product is highly demanded by customers. These are classical computers like desktop PCs and smartphones as well. Customers are also buying lots of gadgets like remote controlled drones and fitness trackers. At the same time, the customers don't buy robots. How can it be that the customer has no need for automating an existing process? The reason is that a robot was never invented for such a task. Automating a process is done by mechanical machines like cars and with infrastructure like electric current. A robot was invented with the objective of increasing the automation level further. And exactly for this additional step in technology there is no demand.
Instead of focusing on existing companies which could develop and sell robots, the more interesting platform is the Kickstarter website. Kickstarter has made the introduction of new technology simpler. Some robots are shown on Kickstarter. Most of them are companion robots which talk to the owner and can do tricks like a dog. In contrast, the number of industrial robots or household robots is low. Why? Because of missing demand. The only example of a Kickstarter robot from the domain of industrial applications was a mini robot arm. But even this arm isn't sold for practical applications; according to the description it is a teaching tool to show engineering students how to program robots. The chance is high that not a single real industrial or household robot was sold on Kickstarter in the last 5 years.
And it is possible to make the argument even clearer. The chance is high that within the next 10 years nothing will change. That means no industrial robots at all will be put on the Kickstarter website, and if a single example is available, it won't find customers.
Let us stay for a while on the Kickstarter platform, because it allows us to browse the examples of robots. In the robot category most of the items are marketed with a certain plot. The idea is that somebody would like to know what robots and artificial intelligence are about, and so he can buy a spider robot or a line following robot. The robot comes with an instruction manual and preprogrammed routines, and then the human will have a lot of fun with the device. This is, in short, the idea behind 99% of the robots on Kickstarter. But this use case is the opposite of a real robot. A real robot is marketed as a serious tool which helps to save time. Not the human but the robot should do a task. Unfortunately, Kickstarter doesn't provide these robots, and the plot isn't used to describe existing ones. So it seems that the customers have no need for such a plot. What the customer wants is to learn about robotics, but the customer doesn't need a robot for practical applications.
Home computers
To grasp the difference between robots and home computers, let us go back to the 1970s before the advent of home computers. At this time, the only way to become familiar with a home computer was to build such a machine on one's own. Sometimes a kit was sold in electronics stores for amateurs. These kits were a great success, because the result was that the customer got access to a home computer.
Since the early 1980s, home computers have been sold as normal consumer products, and the number of people who have bought the technology has grown over the years. Nearly everybody has recognized how useful a computer is. The typical applications of a home computer are playing games, writing texts and programming short programs. For this reason the customers spend a lot of money on the technology, and home computers have become a great success.
The assumption is that for robots the situation is the opposite. The first thing to mention is that from a technical perspective it has been possible since the 1980s to build a robot oneself. That means electronics experts do not need to buy commercial robots; they can tinker with the hardware in their garage. But nobody is doing so. Building a robot has never become a mass phenomenon. For a while now, robot hardware has also been sold in dedicated shops, but the number of sold units remains low or even zero. It seems that the customers simply have no reason to get their own robot. And if they do buy a robot, they never use the device for practical applications but as a learning tool to understand what robotics is about.
Let us ask the question in a different way. If robots are so powerful, why are no pictures made by amateurs available of self-created kitchen robots doing something useful? Technically it is possible to build and program a kitchen robot, but it seems that all the programmers in the world are not doing so. This stands in contrast to the home computer revolution. In the 1980s it was a common situation that somebody built a computer from scratch and used the machine for different applications.
In search of a kitchen robot
The most valuable source for finding working robot prototypes is YouTube. There are two sorts of kitchen robots available. The first are announcements by large companies that in the near future such robots will become available. In the meantime a prototype is shown which can in theory prepare a meal. The second kind of kitchen robots are created by amateurs. Their DIY kitchen robots are always failed projects. That means the robot is not able to grasp a bottle, and this looks funny. So the amateur has captured the scene in a video, and other people like it. What is not available on YouTube are working kitchen robots made by amateurs and used in a meaningful way. The assumption is that such a situation is not needed, and therefore nobody has created such videos.
The official description for a non working robot is a Rube Goldberg machine. The interesting fact is that even if a Rube Goldberg machine works technically great, it fails to fulfill the expectations. There is an example available of a Rube Goldberg machine which can make a pizza. At first look this sounds like a practical application of modern robotics. But similar to all Rube Goldberg machines, something is wrong. The result is that this machine remains the only one in the world, and no other customer will buy such a machine.
In contrast, a working robot will produce a demand. A demand means that the robot is manufactured in higher quantities, similar to the Commodore 64, which was sold 17 million times. Let us go into the details. Somebody has created a single prototype of a pizza making robot. It is not a real robot but only a Rube Goldberg machine. The inventor has invested lots of hours to build and program the machine, and then he makes a video about it. Other users on the internet will watch the video and upvote it. Now something interesting happens. After the robot was built, the inventor decides to disassemble the machine, because he needs the space in his house. He will also reuse the parts for the next project. So the project is lost to the world. And it is important to know that even people who have seen the video won't rebuild the project, because it makes no sense to build the same nonsense machine again. That means the total number of pizza making Rube Goldberg machines was only 1, and it is not possible that the project will become a success in the future.
And this kind of lifecycle repeats over and over again. No matter whether someone has built a line following robot, an autonomous car or any other AI powered device, the number of copies is 1 but not more.

September 01, 2021

High end workstations in 1992

 

Sometimes it helps to understand the current situation with a look into the past. The importance of the year 1992 for computer history was explained already. It marks the revolution from the old decades of classical computing into the new internet based global network of interconnected computers. In the 1980s, and in the 1990s as well, workstation computers were the most expensive personal computers available. They were sold for 40k US$ and more, and they were produced in low quantities. Workstation computers were used by engineers, technical writers and of course by computer programmers to design the new generation of future technology not invented yet. All the layout software in publishing houses, all the video game development for home computers, all the movie production was realized with workstation computers.
The interesting point is that from today's perspective the hardware capabilities of workstations in the 1990s were low. A typical system in the year 1992 was equipped with a 32-bit CPU and not more than 16 MB of RAM. For this time, such a configuration was state of the art. It was the most advanced technology available at this time.
Similar to computer technology in general, the development never stood still but improved all the time. Workstations in the year 1993 were much better, and in 1994 the hardware was improved further. Around the year 2000, workstation and consumer PC technology became the same, and the price dropped drastically. But let us stay for a second in the year 1992. What sort of tasks can be realized with a 16 MB RAM workstation? Right, not very much. So the question is: what exactly was the difference between a high end workstation and a home computer like the Commodore 64?
If we take a look at the publications around the C64 in the 1980s and 1990s, it is a bit surprising how often the 8-bit computer was compared with PCs and even with workstations. The interesting point is that this comparison makes sense. The difference between a 3D animation on the C64 and a 3D animation on a 16 MB RAM workstation is not very large. The problem was that even on a 40k US$ workstation from 1992 it wasn't possible to render something in realtime at 30 fps, because for such tasks the hardware was too slow. And creating a simple newspaper with some pictures was also beyond the capabilities of high end workstations at this time.
There are some videos available on YouTube which show some so-called workstations from the 1990s in action. What the user sees is that the GUI renders the windows very slowly and the graphics are not shown in color; there is only a frame in black and white. What Commodore 64 users did in the late 1980s was to accept the limitations of their hardware and create software which worked fine with a low amount of RAM. So we can say that in the late 1980s a Commodore 64 had a much better price to performance ratio than any other machine out there. Sure, 64 KB of RAM is not very much, but the computer industry in general struggled in the past to provide larger amounts of memory.