February 13, 2023

There is no AI singularity yet

The main criterion for an uprising of robotics is mass production of intelligent machines. If an advanced biped robot is built as a single unit, it is a boring research-only robot. Such a one-off robot has no relevance to the public; it is a demonstration project whose main purpose is to produce a paper.
True robotics means increasing the number of units drastically. A product is competitive only if millions of customers around the world are motivated to buy it. In robotics the situation is far more modest: only around 45k cobots are shipped worldwide per year for automation purposes, and the number of miles driven by self-driving cars is also very low.
Other household robots like vacuum cleaners and kitchen robots are not produced on a mass scale either. Despite the attention they receive at computer fairs, the public demand for such technology is not there.
Let us imagine how an AI takeover would look without any real robots. The situation is paradoxical, because it is the exact opposite of the literature: an AI takeover as usually described means that millions of humanoid robots are produced on a mass scale, so that every household owns at least one. Such a scenario is unrealistic. What is available instead are early prototypes, similar to the mechanical automatons of the 18th century. These robots demonstrate the technical skills of universities and research institutions, but they are built as single units. This is similar to the famous fictional robot in Star Trek TNG, which was also produced in a very small number of units.
What is available today are mass-produced classical electronic devices like smartphones, laptops and watches. But this technology can't be described as intelligent; it is ordinary pre-singularity innovation.
The open question is why robots are not mass-produced yet. To answer it, let us take a look at the previously mentioned cobots. Cobots are industrial robots which are more capable than ordinary automation technology. In most cases cobots are used for intermediate pick&place tasks at the conveyor belt. Contrary to a popular myth, such tasks have a low priority in industry. Even where cobots work well, they are not used frequently. The roughly 45k cobots shipped annually worldwide include all brands and all kinds of cobots, which is next to nothing compared to the size of the industry.
If cobots are not used in reality, how exactly are the conveyor belts automated? Right, it is a rhetorical question. In most cases conventional automation is used, which needs no Artificial Intelligence and works purely mechanically. The remaining tasks are highly complex and can't be automated soon. So we can say that the industry struggles even with easy-to-build cobots, and it is unlikely that more advanced humanoid robots will be produced on a mass scale soon. Let us make a prediction in numbers of how the year 2030 will look.
Suppose the number of cobots grows by 5% annually. That means around 65k cobots will be produced per year by then. Not a single human worker will be replaced by this small amount of technology, and the chance is high that many of these cobots will never even be installed in a production facility. The same prediction can be made for self-driving cars: the chance is high that, because of regulatory problems, existing autopilots will remain offline and new cars will be produced without any kind of AI. The only automation technology produced on a mass scale is the automatic door opener; that is, the owner presses a button and the car unlocks the door even if the owner is 1 meter away.
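As a rough sanity check, a few lines of Python reproduce this projection. The 45k baseline and the 5% growth rate are the figures from the text; the compound-growth model itself is an assumption:

    # Sanity check for the cobot prediction: 45k units/year today,
    # growing 5% annually until 2030 (compound growth is an assumption).
    base_units = 45_000   # annual shipments, figure from the text
    growth = 1.05         # assumed 5% annual growth rate

    for year in range(2023, 2031):
        print(year, round(base_units * growth ** (year - 2023)))
    # 2030 -> about 63k units/year, close to the ~65k quoted above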
Such a sober outlook sounds a bit boring to an audience which fears that the robot revolution has already started. Most of the so-called Singularity is only available in science fiction literature; reality is much more conservative in introducing AI technology.

February 12, 2023

Typing speed for analog index cards

The main reason why modern computer users are shy to start an analog Zettelkasten is that they believe in the myth of highly efficient electronic data processing. The assumption is that typing something on a keyboard is very fast and that URLs and bibliographic references can be added to the knowledge database by simple copy&paste.
The underlying assumption of the average Obsidian PKM user is that there are no costs at all in creating all the nodes in the second brain, while analog note taking is perceived as very complicated. The belief is that for creating an analog note somebody has to search endlessly for a pen and then needs many hours until all the index cards are written out in longhand.
It is a widely known fact that Niklas Luhmann created all of his 90k index cards by hand. But Luhmann worked in a time before the advent of the IBM PC. Since the 1990s the preferred way of creating a Luhmann-like Zettelkasten has of course been a modern Windows or Linux computer, which promises many advantages over a Luhmann-style note-taking system.
[Figure: spreadsheet comparing the yearly effort of an analog vs. a digital Zettelkasten]
To get a better understanding of why analog note taking is, even in the year 2023, the most efficient way to create a knowledge base, a short look at the figure may help. The spreadsheet compares the amount of data and the needed effort in minutes for an analog versus a digital Zettelkasten.
The obvious difference is the typing speed in characters. It is a fact that a computer keyboard allows much faster input than an analog pen. On the other hand, the difference is smaller than expected. If the user reduces the number of bytes written per card and also reduces the number of daily cards, he will need less overall time for creating new notes.
Let us slow down the situation a bit and describe why digital note taking takes so long. According to the table, lots of cards are created each year. Even if the typing speed is high, these cards require manual effort: the user has to invest around 64.4 hours per year to write down 1095 digital note cards, i.e. 3 per day. It is not possible to reduce the needed time, because there is a hard limit to the typing speed on a computer keyboard.
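The 64.4-hour figure can be reconstructed with a short calculation. The 3 cards per day and the keyboard speed of 170 characters per minute appear in the text; the average card length of 600 characters is an assumption chosen so that the numbers match:

    # Reconstructing the 64.4 h/year estimate for digital note taking.
    cards_per_day = 3       # from the example in the text
    chars_per_card = 600    # assumed average card length
    typing_speed = 170      # characters per minute on a keyboard

    minutes_per_year = cards_per_day * 365 * chars_per_card / typing_speed
    print(round(minutes_per_year / 60, 1))  # -> 64.4 hours per year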
Contrary to a popular myth, the effort of creating a digital card index is not zero; it produces an endless number of hours in which the user has to type something in. And the example with only 3 newly created cards per day is a simple one: if the user puts more effort into the Obsidian PKM software, he will need even more hours.
The main advantage of analog note taking is that the user knows in advance that writing in longhand is laborious. So he will think twice before creating a new card and will reduce the information to a minimum. This allows a higher productivity than digital note taking: in the figure, the overall time needed per year is smaller for analog note taking. That means analog note taking can save time compared to digital note taking.
The main cause of this paradoxical situation is the slow typing speed in general. No matter whether a human prefers an analog pen or a digital computer keyboard, the possible speed is very low. A typing speed of 170 characters per minute equals roughly 23 bit/s, while outdated analog modems already transferred 14400 bit/s on average. Even somebody who can type very fast writes at a ridiculously low data rate: the world record on a computer keyboard is around 800 characters per minute. That means even the best typist in the world will need an enormous amount of time until 1000 or more nodes have been created in the Obsidian PKM software.
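The conversion from typing speed to a data rate is easy to verify, assuming 8 bits per character:

    # Typing speed expressed as a data rate (assumes 8 bits per character).
    def chars_per_min_to_bps(cpm, bits_per_char=8):
        return cpm * bits_per_char / 60

    print(chars_per_min_to_bps(170))          # ~22.7 bit/s, the "23 bit/s" above
    print(chars_per_min_to_bps(800))          # ~106.7 bit/s at world-record speed
    print(14400 / chars_per_min_to_bps(170))  # a 14400 bit/s modem is ~635x faster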
The assumption is that the slow typing speed in general, plus the small difference to analog writing speed, explains why a paper-based Zettelkasten makes sense. It allows the available resources to be used in an optimal way and avoids wishful thinking. Humans, no matter whether they prefer analog or digital note taking, are not capable of creating hundreds of index cards every day; in the optimal case only 2 or 3 newly created cards per day are possible.

February 11, 2023

The bottleneck of AI is the application in reality

There are endless examples of sophisticated AI projects which have no practical value. The paradoxical situation is that the engineers who developed the software and hardware components are either not aware of these limitations or are ignoring the missing use case. Here is a short list:
- hospital robots
- self-driving cars
- chess AI engines
- generative language models based on neural networks
- Q&A systems
- in-game AI for the 15-puzzle, Tetris and Pong
- pick&place robots
Perhaps the oldest example of a technically fully working AI which at the same time has no practical value is chess AI. With the advent of the first chess-playing software came the promise that these programs could replace human players and help humans learn to play better. Many chess AIs were created over the decades, and an endless number of books explain in detail how to use them and how to create a chess engine from scratch. At the same time, none of these projects has a concrete practical application: in a chess tournament played by humans, the chess AI installed on a player's laptop isn't used.
The same situation exists for more advanced AI systems like Q&A programs and hospital robots. In their self-image these tools have an endless number of applications. But in reality there is no concrete example of a hospital using such a robot, or of a Q&A system that has helped to find a diagnosis for a problem.
In most cases the lack of practical applications is ignored. What is done instead is to improve the original project. That means version 1.0 of the costly Q&A software is useless, but instead of asking why, version 2.0 is created.
The general explanation for why AI is researched is that it helps to improve efficiency. The idea is that humans are not powerful enough to solve a task and should be replaced by machines which, according to this bias, are more powerful and less costly. The interesting situation is that such a claim was never proven. If chess engines are so much better than humans, why does nobody use them? If a hospital robot doesn't need sleep and never complains about an order, why is the robot not available in a hospital?
Are chess players, the existing hospital employees, or language experts not capable of seeing the value of artificial intelligence? The more sensible explanation is that something about the AI is wrong which prevents it from having practical value. The main ideology behind AI and robotics is to speculate about the future: the researchers program a biped robot, and the story is that such a robot will be able to do useful tasks in the future. Artificial intelligence is not about technology but about the myth of improved efficiency in a possible future. People like to listen to a narrative in which they are replaced by more intelligent robots, and they want to know how this can be realized in detail.
A closer look into the narrative “Robots are replacing humans” shows that it is unrealistic. Before it makes sense to describe how future robots will replace humans, it should first be described how existing technology can replace humans. It is important to ask how exactly a Q&A software or a robot built in the last year will replace humans: how many of them, where exactly, and what is the amount of reduced costs.
If the question is asked in this detail, nobody can answer it. What is given as a possible answer is that the robot available today was not powerful enough and that the researchers are building a new model which won't be available for the next 4 years. And in the meantime the normal human workers are doing the same job as in the past, and the costs are of course the same or have even increased.

Proof that analog note taking is faster than Obsidian

The Zettelkasten community is proud of its self-developed tools like Obsidian, Emacs org-mode and Cherrytree. The promise is that creating a digital-only knowledge base is highly efficient. To investigate the claim in detail, the following spreadsheet might help:
[Figure: spreadsheet estimating the daily and yearly time needed for analog vs. digital note taking]
The user has to enter some information like writing speed and the number of characters, and then the spreadsheet will determine the estimated number of minutes per day. In the concrete example the analog Zettelkasten occupies 2 minutes less per day than the digital version. It is more productive to create analog notes than to type the information into Obsidian.
But let us slow down the situation a bit and explain what the user of a Zettelkasten usually does. He has to write new cards, of course, to add content to the system. It is up to the user how many cards he creates each day. This will affect how the note box looks in 6 months or in 12 months: more added cards per day result in a larger and more interesting note box, and vice versa.
The promise of digital note taking is that it is pretty easy to write new cards. The assumption is that the computer keyboard, in combination with advanced GUI software, is able to reduce the required time to zero. But this is not the case. Typing something in and clicking through all the drop-down menus requires a certain number of minutes. In the concrete example, the user has to invest 12.5 minutes each day to update the digital Zettelkasten. After only a year the total is 12.5 × 365 minutes, which is about 76 hours. That means the user has to sit at the computer keyboard for an endless number of hours only to add some small pieces of information to the digital knowledge base.
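The yearly totals are easy to verify; the 12.5 minutes per day and the 2-minute advantage of the analog workflow are the figures from the spreadsheet example:

    # Yearly time budget for both workflows in the spreadsheet example.
    digital_min_per_day = 12.5                       # from the example
    analog_min_per_day = digital_min_per_day - 2.0   # analog is 2 min/day faster

    for name, minutes in [("digital", digital_min_per_day),
                          ("analog", analog_min_per_day)]:
        print(name, round(minutes * 365 / 60, 1), "hours/year")
    # digital 76.0 hours/year, analog 63.9 hours/year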


The limits of Artificial Intelligence

There are two different definitions of what AI is. The first one was created by the AI community itself. In most cases it is about robots and other kinds of advanced algorithms which can solve problems. Typical examples would be a self-driving car, a chess AI, or machine translation software.
The other definition of AI is what the public thinks about the topic. In most cases the public likes to know whether AI can replace human workers. It is important to know that there is a gap between both definitions: it is possible to create AI according to the first definition and fail at the same time according to the second. Let me give an example.
It is possible to program a chess AI which will defeat any human player; there are some well-documented open-source projects available. The assumption would be that with the existence of such an algorithm, normal matches between human players are no longer needed, because all the games can be played much better by software. The surprising observation is that since the 2000s more matches between humans are played than before. In spite of the existence of powerful chess AI, there is a demand for human opponents.
The same situation exists for hospital robots. Today's hospital robots are highly advanced: they are equipped with cameras and natural language processing algorithms. At the same time it is not possible to replace human service employees with these robots; the demand for human labor in this sector has increased. The prediction is that after the invention of improved future AI systems, nothing will change for the public. That means AI will never replace human chess players and won't replace human hospital employees.
All promises which go in this direction are either illusions or simply science fiction fairy tales. From a technical perspective it is possible to build intelligent robots, but these robots are different from humans, and they are not what society needs.
Let us describe the situation for chess-playing AI in detail, because the subject is easier to understand. What is the purpose of the “GNU Chess” engine if it can't replace human players? In terms of Elo score the GNU Chess program is great: it is a very powerful chess AI which never makes a mistake and consists of advanced algorithms. Most chess players are not interested in playing a game against the engine itself; what human chess experts do instead is research the subject of computer chess. They want to know how GNU Chess was programmed, which bugs are in the code, and how to improve it. So it is a self-referential task: the engine is improved so that the new version of GNU Chess is interesting for a new generation of AI researchers.
The subject of AI is a large and very interesting topic with many subdomains which have a scientific background. The only thing AI can't do is provide value apart from AI itself. No matter how advanced a piece of software or a robot is, it won't affect normal life. This rule is not a fixed law, and most AI researchers won't confirm it; it is only an observation made about AI projects from the past. The paradoxical situation is that from a technical standpoint AI has made big improvements over the decades, while at the same time remaining unchanged in terms of its ability to start a revolution.
In every decade people were afraid of the upcoming robot revolution, fearing that their jobs would be replaced by automation. But only 10 years later the former utopian promise is forgotten, and not a single robot was installed at the assembly line. The problem is not located in a certain kind of computer or algorithm; it has to do with AI in general.

February 10, 2023

The self-understanding of AI engineers

What is missing in the public debate around ChatGPT and robotics is a description of the gap between what engineers believe about the world and how new technology actually affects reality. First we have to summarize the self-understanding of robotics engineers and neural network designers. Their assumption is that a newly created robot will make life easier. For example, the engineers of a self-driving car assume that the AI will drive instead of a human and will reduce costs. The self-understanding of the GPT-3 and IBM Watson programmers is that their language models allow ordinary people to translate between two languages and simplify the writing of a book.

The interesting situation is that this underlying assumption isn't questioned at all, and if someone does so, he is told that he hasn't understood the technology. That means all the programmers who are able to program a neural network are convinced that the neural network allows translation between two languages.
A short look into the past of failed automation and AI projects shows that in reality the opposite is true. Technically, self-driving cars and machine translation software have been available since the 1980s, but none of these projects are helpful for solving practical tasks. Even very advanced projects like IBM Watson are completely useless for practical applications. It is a bit surprising to see, but patients in a hospital prefer a human service employee.
The open question is: if modern AI technology is so amazing, why does nobody use it to reduce costs? Instead of answering the question directly, it is important to recognize that this is the reality. Robots from the 1990s, from the 2000s, and even future robots from 2030 and later can't be utilized for practical applications. That means in the year 2030 there will of course be self-driving trucks available which can harvest a potato field, but these trucks won't be utilized in reality; the harvesting will be done the same way as in the 1980s, or even the 1960s, before modern computer technology was introduced.
What we can observe since the rise of large language models like GPT-3 isn't an AI takeover or an AI revolution, but a kind of self-indoctrination that AI is great and will reduce costs for mankind. The opposite is the case: AI is not able to replace human writers or truck drivers, because humans find it interesting to do such work themselves.
Perhaps it makes sense to narrow the situation down to a concrete example. Suppose an AI programmer has created a chess computer which is able to win any game. What will happen is that a human plays with this machine for a while and then prefers human opponents again. That means the human will reject the technology and play the same game of chess as before, not because the technology is complicated to understand, but because the human is interested in chess itself.
The same situation is visible for other video games like Tetris, Lemmings and so on. All these games can be solved by advanced AI. At the same time humans will play these games themselves and ignore the AI feature. So what is the purpose of developing a video game AI if nobody likes to use this software in the real world? Right, there is a contradiction. This kind of problem has existed for decades in Artificial Intelligence projects: no matter which kind of robot or neural network the engineers have created, it isn't used for practical applications.
Artificial Intelligence is a highly self-referential discipline which creates problems and tries to solve them, and none of these tasks has any further meaning. Each time a user starts a GPT-3-based machine translation it is a complete failure, and each time the owner of a car activates the self-driving feature it won't work.
From a science fiction perspective the situation with robotics and AI is pretty simple: advanced robots will replace human workers, and in the future all work, including programming, writing and car driving, will be done by human-level AI.
The sad situation is that such a promise isn't new but was formulated decades ago. Until now it hasn't been realized, in spite of the fact that artificial intelligence was created many times. In most cases the failure of AI was explained by too few resources: the robot is in theory able to automate a task, but to do so in reality the software has to be improved drastically. The paradoxical situation is that the same explanation is then applied to the improved robot, and so on.
It is likely that the explanation for why GPT-3 is not able to translate a book into a foreign language will be that the language model is not powerful enough, but that a future version, available 10 years from now, will be able to do so. What the engineers aren't saying is that this future scenario isn't realistic. That means no matter how advanced the projects are, they can't replace humans.
The funny situation is that this simple rule is also visible for less important tasks like game playing. Somebody may argue that it is pretty easy to program an AI which plays against other players in a video game: a game is an easy-to-automate task, and in addition it won't affect human health if the AI isn't perfect. The surprising situation is that human players don't like to play against an AI, especially not one that is more powerful than they are. Even in the well-formalized game of chess, all the expert players play against other humans, of course, but never against AI software.
It is unlikely that an AI which can't even replace human chess players is powerful enough to replace human truck drivers or human artists. This is simply an illusion.

What is wrong with GPT-4?

The first thing to do is to analyze what the promise of large language models like GPT-3 and GPT-4 is. There is a professional video available which explains the advantages ([1], at 5:00):
1. Question answering
2. Text analysis
3. Language translation
4. Creation of videos
These applications are mentioned because there are papers available in which language models were used in this context. The surprising situation is that even GPT-4 is not capable of doing so. Let us investigate this point in detail.
The ability of AI software to solve Q&A problems and translate text automatically is nothing completely new. IBM Watson was presented to the public in 2011, and it was assumed that the underlying technology would help doctors to make better diagnoses. The ability to translate natural language in real time was also shown 10 years ago. The interesting situation is that none of these capabilities were demanded by the end users. There are no doctors who need IBM Watson or GPT-4 to do their job, and if someone likes to translate between two languages, he will have to learn both of them fluently.
So what is the deal with Artificial Intelligence? The problem is that modern technology like robots and neural networks looks at first glance capable of doing a certain task, and yet it is not. In the 1990s there was a company, HelpMate, founded by Engelberger. The promise of these early hospital robots was to increase efficiency in health care services. Technically the robots were highly advanced: they were able to move autonomously along the corridor and avoided obstacles. At the same time the robots didn't show an advantage over human staff, so the project was canceled after a while. The same situation is visible for IBM Watson and, more recently, for GPT-4.
All these technologies work, by their own understanding, with superior performance. The latest GPT-4 model is able to translate a text between two languages much faster and more accurately than any human. At the same time it is trivial to predict what the outcome will be: there is no effect, and there is no demand for such technology. GPT-4 and all the other neural networks are nothing but a waste of time. The engineers find it exciting to show their capabilities in benchmarks, and lots of YouTube videos are made to promote this technology, but in the end there are no practical applications.
The average customer won't notice the insignificance, because once the hype around one technology is over, the next improved language model gets released. Instead of asking for which concrete purpose the current GPT-3 transformer networks can be utilized, the developers invent a successor which works completely differently.
Instead of asking what the future has to offer, it makes sense to take a look back and ask why the technology of the past wasn't helpful at all. Contrary to popular belief, the HelpMate hospital robot used advanced technology. There is no need to improve the technology or reinvent the system with more cameras. The heretical question is: why wasn't the robot used in a real hospital? Was it a lack of training, or does it have to do with the AI technology itself?
Let us take a step back and understand what the ideology of the AI engineers is. They recognized very soon that the IBM Watson software wasn't helpful for practical applications. Instead of admitting that Q&A technology in general doesn't make sense for doctors, lawyers, or even ordinary users of video games, they improved the software into more advanced neural networks. The current ChatGPT software is much more powerful than the previous IBM Watson project. At the same time it will become a complete failure in terms of practical applications: nobody is able to write an essay or answer a trivial question with the software. And similar to the IBM Watson project, the explanation will be that the even more powerful GPT-4 model can be used for practical applications.
The uninformed public assumes that the AI community is researching something which has an impact on daily life. The hope is that self-driving cars will help human drivers, kitchen robots will make cooking easier, and language models will allow people to write books. The sad situation is that none of these tasks can be done with Artificial Intelligence. What humans want to do instead is acquire these skills themselves: they want to learn how to drive a truck, they want to cook with their own hands, and they want to learn a foreign language to understand their counterpart.
References
[1] Chat GPT 4 Was Just ANNOUNCED (Open AI GPT 4) https://www.youtube.com/watch?v=CW5xgCxXwdY

February 01, 2023

On-the-fly citation

There is an endless number of tutorials about how to cite literature in different software. The LaTeX community has the \cite{} command, Lyx users can select the reference from a GUI menu, and in Word there are plugins for EndNote or Citavi. The surprising situation is that even if the citation can be created in this way, some important information is missing: why is a certain citation inserted into the text, and why not another one?
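For readers who have never seen the workflow, here is a minimal LaTeX sketch; the key mynote2020 and the file name references.bib are placeholders, not real entries:

    % Minimal sketch of the \cite{} workflow; the key "mynote2020" and
    % the file references.bib are placeholder names.
    \documentclass{article}
    \begin{document}
    As recorded in the personal card index \cite{mynote2020}.
    \bibliographystyle{plain}
    \bibliography{references} % expects a references.bib containing mynote2020
    \end{document}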
What most tutorials about academic writing ignore, or explain only as a side note, is that the source of each reference is not the database in the library, and it is not Google Scholar; the source is always the personal knowledge management system. That means an author has created his individual bibliographic database, which looks different from a public database like Google Scholar.
Some programs for creating such databases are JabRef, BibTeX, Obsidian, Zotero and Citavi. The important question is why such a database needs to be created if it contains the same information as Google Scholar. Wouldn't it make sense to cite directly from Google Scholar?
No, because the in-between layer is the personal note file of the user. While reading books and papers, the user makes personal notes, and in these notes the references are written down. While writing the prose text, the user has his personal notes in a separate window and cites the books from these notes. That means a user doesn't cite a random book which may fit the subject; the user has to cite a book which is already in his own note file.
Suppose the Citavi software was used to create notes. Then the newly written document gets enriched with bibliographic references from this Citavi file. And suppose the user prefers an analog Zettelkasten in the style of Niklas Luhmann; then the references are taken from this note-taking system.
This might explain why most authors create an individual reference database even though they could use the normal database in a library or Google Scholar: it is about making notes, and the references are a part of this endeavor. Let us imagine a counterexample.
Suppose somebody hasn't created notes and has no personal bibliographic database. Then he is not allowed to cite anything. The unwritten rule is that a citation can only be made if it is already stored in the personal notes. Otherwise the user hasn't read the book and hasn't made notes about it, so it makes no sense to cite it in his own paper.
If an academic paper contains an endless number of references, then the author of this paper has made an endless number of notes. Note taking is the pre-step towards citing literature in one's own paper.