April 03, 2026

The long history of artificial intelligence

Historians often introduce artificial intelligence by noting that the subject has been researched for more than 50 years without much to show for it. This makes AI one of the least successful sciences. In contrast to disciplines like mathematics, physics or psychology, it remains unclear what AI is actually about and how to make computers smart. A closer look at the facts shows that the situation is even more dramatic: AI has been researched for roughly 70 years, since the Dartmouth conference on AI was held in 1956.

Despite many robotics projects and a huge number of published papers, there has been little visible progress. Before the advent of ChatGPT in November 2022, the subject was nearly unknown to the general public. What was available instead was classical computer science in the form of PCs, the internet and modern gadgets like the iPhone. The goal of building intelligent machinery, in contrast, was perceived as impossible to realize.

One possible explanation for why AI research over the last 70 years is perceived as a dead end is that there are no building blocks which stack on top of each other. Many dedicated robotics software frameworks and AI programming languages were created, but none of them is perceived as important. There is nothing comparable to the x86 hardware standard or the Unix specification which would allow other programmers to build new applications on top of a common foundation. Instead, the typical AI project in the past was started by a few researchers who wrote some code and discussed the results in a conference paper; other researchers read the paper, decided that the outcome made no sense, and started their own project from scratch.

There is one exception to this rule, and it is responsible for the re-emergence of artificial intelligence since around 2010. So-called deep learning datasets have become the standard in AI research. A dataset is a table of annotated information, such as motion capture data, image databases and, more recently, multimodal datasets with question-answer pairs. These datasets have evolved from simple tables with 2000 entries to large tables created by different researchers in multi-year efforts. A dataset formulates a puzzle which has to be solved by a neural network. The network is expected to interpolate missing data and reproduce existing question-answer pairs. For example, the neural network is fed with a walking sequence of a human and has to recognize that the left foot is in front of the right foot, as the sketch below illustrates.
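To make the idea concrete, here is a minimal sketch of a dataset as an annotated table, following the walking example above. The frame numbers, the foot positions and the "front_foot" label column are invented for illustration; a real motion capture dataset would contain joint positions for thousands of frames, and the hand-written rule stands in for a trained neural network.

```python
# A toy annotated table: raw data plus a label column (the annotation).
walking_dataset = [
    {"frame": 12, "left_foot_x": 0.45, "right_foot_x": 0.20, "front_foot": "left"},
    {"frame": 13, "left_foot_x": 0.50, "right_foot_x": 0.25, "front_foot": "left"},
    {"frame": 20, "left_foot_x": 0.30, "right_foot_x": 0.55, "front_foot": "right"},
]

# The puzzle posed by the dataset: predict the annotation from the raw data.
def predict_front_foot(row) -> str:
    # A hand-written rule stands in for a trained neural network here.
    return "left" if row["left_foot_x"] > row["right_foot_x"] else "right"

for row in walking_dataset:
    print(row["frame"], predict_front_foot(row), "expected:", row["front_foot"])
```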

In contrast to the endless number of AI libraries and algorithms mentioned above, the dataset principle is surprisingly stable. Datasets were used in the year 2000, in 2010, in 2020 and remain relevant for the future. The assumption is that datasets are a fundamental building block for creating intelligent machines.

In the modern understanding, an AI dataset acts as a benchmark to score a machine learning algorithm. The ability to score an algorithm transforms AI research from alchemy into a scientific discipline. Instead of arguing from a philosophical standpoint about whether machines can think, the question becomes what numerical score an algorithm achieves on a benchmark. Such scores can be compared with each other and allow a direction for future research to be determined.
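The following sketch shows the benchmark idea under toy assumptions: two hypothetical algorithms are scored on the same question-answer pairs, and the numerical accuracy replaces the philosophical debate about which one is "smarter". The benchmark entries and both predictors are invented for illustration.

```python
# A tiny benchmark: question-answer pairs that every candidate algorithm must answer.
benchmark = [
    {"question": "2 + 2", "answer": "4"},
    {"question": "capital of France", "answer": "Paris"},
    {"question": "color of the sky", "answer": "blue"},
]

def algorithm_a(question: str) -> str:
    # A lookup-based toy predictor; knows two of the three answers.
    return {"2 + 2": "4", "capital of France": "Paris"}.get(question, "unknown")

def algorithm_b(question: str) -> str:
    # A baseline that never answers correctly.
    return "unknown"

def accuracy(predict, data) -> float:
    # Fraction of benchmark answers the predictor reproduces correctly.
    return sum(predict(row["question"]) == row["answer"] for row in data) / len(data)

print("algorithm A:", accuracy(algorithm_a, benchmark))  # roughly 0.67
print("algorithm B:", accuracy(algorithm_b, benchmark))  # 0.0
```

The point of the sketch is that the comparison is reduced to two numbers, which can be tracked over time and across research groups.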


