In contrast to the PC revolution of the early 1990s, the AI revolution is not a mainstream movement. What is called the AI revolution is mostly invisible to the public: there are no CeBIT-like expos, no dedicated computer journals in the bookstore, and even online forums about Artificial Intelligence are missing. For newcomers this situation is disappointing, and it fuels the rumor that perhaps the AI revolution isn't happening at all.
To make the development around robotics and AI visible, we need to look at the Gutenberg galaxy. The term itself has to be introduced first: the Gutenberg galaxy is a concept coined by Marshall McLuhan that describes the ecosystem of printed information. Inside the Gutenberg galaxy there is a sub-section consisting of books and journals with a small readership from the academic domain. This sub-section is the epicentre of the AI revolution.
Judging by the published papers, there has been a lot of progress. Especially from 1990 until 2020, many new disciplines and algorithms around Artificial Intelligence were invented. As mentioned before, this development was mostly invisible to, or ignored by, the public, but that doesn't mean it isn't there. It only means that the entry barrier is high.
The dominant question asked by published AI papers is how to enable a computer to think like a human. This question has been investigated from multiple perspectives by different authors. There are not merely a handful or a few hundred papers; millions of papers have been written about robotics and Artificial Intelligence. It would take a single reader hundreds of years to become familiar with all this knowledge.
The reason it makes sense to become familiar with the AI literature is that it is the only place in which the AI revolution is visible. If the proceedings are printed out, one can trace back, like assembling a puzzle, at which moment in time which innovation appeared in the literature. All the important technology around neural networks, SLAM localization for robots, and grounded language is already available in the literature. What is missing are the readers.
Perhaps it makes sense to explain why the entry barrier to the Gutenberg galaxy is high. From a formal perspective, roughly 90% of the written information about Artificial Intelligence is formulated in English, and much of the content has been made publicly available under a Creative Commons license. That means the PDF documents can technically be downloaded and read by everyone at no cost. The problem is the language in these documents, which is full of complex abstract vocabulary and assumes that the reader is already familiar with the subject. The papers do not try to explain the inner workings of a neural network to beginners; the assumption is always that the reader is already an expert in the field and only needs detailed information. Even the most accessible book, "Russell/Norvig: AIMA", can't be called beginner friendly, because it is a scientific book written for computer scientists.
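To give an idea of what the "inner workings of a neural network" look like at the most basic level, here is a minimal sketch of a single artificial neuron. The weights and inputs are made-up values for illustration only, not taken from any paper:

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of the inputs, then a nonlinearity."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

# Made-up weights and inputs, purely for illustration.
out = neuron([0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
print(round(out, 3))
```

A real network stacks many such neurons in layers and learns the weights from data; the papers in question start from this point and go much deeper.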
A good starting point for newcomers to the subject of AI might be so-called NP-hard problems. NP-hard problems are a category of computer science puzzles, for example the traveling salesman problem or motion planning problems. These problems cannot be solved efficiently in the general case, but the AI community tries to find algorithms for solving them anyway. So we can say that AI papers are trying to tackle NP-hard problems. The task can be compared with "squaring the circle": some researchers are convinced that it is possible, while others have given up.
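To make this concrete, here is a minimal brute-force sketch of the traveling salesman problem. The city names and distances are made-up assumptions for illustration; the point is that the number of tours to check grows factorially with the number of cities, which is exactly why the problem becomes intractable at scale:

```python
from itertools import permutations

# Hypothetical symmetric distance table for four cities (made-up values).
cities = ["A", "B", "C", "D"]
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25,
    ("C", "D"): 30,
}

def d(a, b):
    """Look up the distance between two cities in either order."""
    return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

def tour_length(tour):
    """Total length of a round trip that visits every city once."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def shortest_tour(cities):
    """Brute force: fix the start city and check all (n-1)! remaining orders."""
    first, rest = cities[0], cities[1:]
    return min(((first,) + p for p in permutations(rest)), key=tour_length)

best = shortest_tour(cities)
print(best, tour_length(best))  # one optimal tour of length 80
```

With 4 cities this checks only 6 tours; with 20 cities it would already be about 1.2 * 10^17 tours, which is why researchers look for heuristics and approximations instead of exhaustive search.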
Until the 1990s there was a widespread opinion that AI in general couldn't be realized. It had been argued on mathematical grounds that NP-hard problems cannot be solved efficiently, and that even neural networks are overwhelmed by motion planning problems. This situation has changed since the 1990s. Today, a large percentage of AI researchers is convinced that robots can be built in reality.