What is missing in the public debate around ChatGPT and robotics is a description of the gap between what engineers believe about the world and how new technology actually affects reality. First we have to summarize the self-understanding of robotics engineers and neural network designers. Their assumption is that a newly created robot will make life easier. For example, the engineers of a self-driving car assume that the AI will drive instead of a human and will reduce costs. The self-understanding of the GPT-3 and IBM Watson programmers is that the language model allows ordinary people to translate between two languages and simplifies the writing of a book.
The interesting situation is that this underlying technology isn't questioned at all, and if someone does question it, they are told that they haven't understood the technology. That means all the programmers who are able to program a neural network are convinced that the neural network makes it possible to translate between two languages.
A short look into the past of failed automation and AI projects shows that in reality the opposite is the case. Technically, self-driving cars and machine translation software have been available since the 1980s. But none of these projects are helpful for solving practical tasks. Even very advanced projects like IBM Watson are completely useless for practical applications. It may be surprising, but patients in a hospital prefer a human service employee.
The open question is: if modern AI technology is so amazing, why is nobody using it to reduce costs? Instead of answering the question directly, it is important to recognize that this is the reality. Robots from the 1990s, from the 2000s, and even future robots from 2030 and later can't be utilized for practical applications. That means in the year 2030 there will of course be self-driving trucks available which can harvest a potato field, but these trucks will not be used in reality; the harvesting will be done the same way as in the 1980s, and even the 1960s, before computer technology arrived.
What we can observe since the rise of large language models like GPT-3 isn't an AI takeover or an AI revolution, but a sort of self-indoctrination that AI is great and will reduce costs for mankind. The opposite is the case: AI is not able to replace human writers or truck drivers, because humans find it interesting to do such work.
Perhaps it makes sense to narrow the situation down to a concrete example. Suppose an AI programmer has created a chess computer which is able to win any game. What will happen is that a human will play with this machine and after a while prefer human opponents. That means the human will reject the technology and play the same game of chess as before. Not because the technology is so complicated to understand, but because the human is interested in chess itself.
The same situation is visible for other video games like Tetris, Lemmings and so on. All these games can be solved by advanced AI. At the same time, humans play these games themselves and ignore the AI feature. So what is the purpose of developing a video game AI if nobody likes to use this software in the real world? Right, there is a contradiction. This kind of problem has existed for decades in Artificial Intelligence projects. No matter which sort of robot or neural network the engineers have created, it isn't used for practical applications.
Artificial Intelligence is a highly self-referential discipline which creates problems and tries to solve these problems, and none of these tasks has any further meaning. Each time a user starts a GPT-3-based machine translation it is a complete failure, and each time the owner of a car activates the self-driving feature it won't work.
From a science fiction perspective the situation with robotics and AI is pretty simple. Advanced robots will replace human workers, and in the future all work, including programming, writing and car driving, will be done by human-level AI.
The sad situation is that such a promise isn't new but was formulated decades ago. Until now it hasn't been realized, in spite of the fact that Artificial Intelligence was created many times. In most cases the failure of AI was explained by too few resources. That means the robot is in theory able to automate a task, but to do so in reality the software has to be improved drastically. The paradoxical situation is that the same explanation was then applied to the improved robot, and so on.
It is likely that the explanation why GPT-3 is not able to translate a book into a foreign language will be that the language model is not powerful enough, but that a future version, available ten years from now, will be able to do so. What the engineers aren't saying is that this future scenario isn't realistic. That means no matter how advanced the projects are, they can't replace humans.
The funny situation is that this simple rule is also visible for less important tasks like game playing. Somebody may argue that it is pretty easy to program an AI which plays against other players in a video game. A game is an easy-to-automate task, and in addition it won't affect human health if the AI isn't perfect. The surprising situation is that human players don't like to play against an AI, especially not if the AI is more powerful than they are. Even for the well-formalized game of chess, all the expert players play against other humans, of course, but never against AI software.
It is unlikely that an AI which can't replace human chess players will nevertheless be powerful enough to replace human truck drivers or human artists. That is simply an illusion.