Before the advent of large language models, grounded language, and interactive Turing machines, there was a different understanding of how to program intelligent computers. This naive paradigm is worth explaining in detail, because it helps to identify the potential bottlenecks of outdated Artificial Intelligence research.
Until the year 2010 the shared assumption was that artificial intelligence is a subdiscipline of existing computer science and could be categorized with the established terminology of software, hardware, and algorithm design. A practical example was a chess-playing artificial intelligence. The typical powerful chess program up to 2010 was written in C for performance reasons, implemented a powerful alpha-beta pruning algorithm, ran on a multicore CPU, and was able to figure out the best move by itself.
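To make this paradigm concrete, here is a minimal sketch of alpha-beta pruning in C. It is not taken from any real engine: the hard-coded leaf scores stand in for a board evaluation function, and the fixed branching factor stands in for move generation, both of which an actual chess program would have to provide.

/* A minimal sketch of alpha-beta pruning in C, in the spirit of the
 * pre-2010 chess-engine paradigm described above. To stay self-contained
 * it searches a tiny hard-coded game tree instead of real chess
 * positions; the leaf values and the branching factor are made up for
 * this illustration. */
#include <stdio.h>
#include <limits.h>

#define BRANCH 3   /* children per internal node  */
#define DEPTH  3   /* plies until the leaf values */

/* 27 leaf scores of the toy tree, seen from the maximizing player. */
static const int leaf[27] = {
     3,  5,  6,   9,  1,  2,   0, -1,  7,
     4,  8,  2,  -3,  5,  1,   6,  0,  2,
     7,  9, -2,   4,  3,  8,   1,  5,  6
};

/* Classic alpha-beta: branches that cannot change the result are cut. */
static int alphabeta(int node, int depth, int alpha, int beta, int maximizing)
{
    if (depth == 0)
        return leaf[node];

    for (int i = 0; i < BRANCH; i++) {
        int child = node * BRANCH + i;
        int score = alphabeta(child, depth - 1, alpha, beta, !maximizing);
        if (maximizing && score > alpha) alpha = score;
        if (!maximizing && score < beta) beta = score;
        if (alpha >= beta)          /* cut-off: remaining siblings pruned */
            break;
    }
    return maximizing ? alpha : beta;
}

int main(void)
{
    int best = alphabeta(0, DEPTH, INT_MIN, INT_MAX, 1);
    printf("best achievable score: %d\n", best);
    return 0;
}

Everything a pre-2010 engine author argued about, the programming language, the cut-off condition, and the search depth, is visible in these few lines.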
The only difference between chess programs was which options were chosen within this category space. For example, some older chess programs were written in Pascal instead of C, which was perceived as less efficient, and sometimes a new algorithm was implemented that could search a larger time horizon into the future. 99% of the chess programs were realized within such a paradigm, because it was the only available discourse space for talking about artificial intelligence and about computer chess in detail.
The reason why a certain chess engine was more powerful than another was located in improved technology, which might be a faster programming language, an improved algorithm, or better hardware. All these criteria lie within computer science. It is the same vocabulary used for describing other computer science topics like databases, operating systems, or video games. So we can say that artificial intelligence until 2010 was seen as a detail problem within computer science, concerned with hardware, software, and algorithm design.
Surprisingly, the discourse space changed drastically after the year 2010. A look into recent papers published from 2010 to 2024 shows that a different terminology was introduced for artificial intelligence research. Performance criteria from the past, like a fast programming language such as C or a fast multicore CPU, seem to be no longer relevant. Many AI-related projects have been realized in Python, which is known as a slow programming language not tailored for number-crunching problems. Also, many AI papers introduce multimodal datasets, which have nothing to do with computer science at all but have their origin in statistical computing. So we can say that AI projects after the year 2010 are trying to overcome the limitations of computer science in favor of external disciplines like computational linguistics, biomechanics, and even the humanities. Such a shift in focus has introduced much new vocabulary. For example, if a computer is used to create a motion capture dataset of a walking human, a certain biomechanical vocabulary is used which has nothing to do with computers.
A possible explanation why the classical, computer-science-oriented description of AI topics has fallen out of fashion is that it has failed to solve any notable problem. A simple game like tic-tac-toe can still be solved by a highly optimized C program that traverses the complete state space, but already for chess such an exhaustive traversal is impossible. And for more advanced problems like kinodynamic planning in robotics, a classical focus on programming languages and faster hardware won't solve anything.
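To illustrate the scale involved, the following self-contained C program, written for this post rather than taken from any existing engine, traverses the complete tic-tac-toe game tree with a plain negamax search and counts how many nodes it expands.

/* Brute-force traversal of the full tic-tac-toe game tree in the
 * classical style: a board, a search routine, and nothing else. */
#include <stdio.h>

static long visited = 0;   /* number of game-tree nodes expanded */

/* Returns 1 if player 'p' has three in a row on the board, else 0. */
static int wins(const char b[9], char p)
{
    static const int line[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
    };
    for (int i = 0; i < 8; i++)
        if (b[line[i][0]] == p && b[line[i][1]] == p && b[line[i][2]] == p)
            return 1;
    return 0;
}

/* Negamax without any pruning: score from the view of the side to move. */
static int solve(char b[9], char me, char other)
{
    visited++;
    if (wins(b, other))
        return -1;                 /* previous move already won */

    int best = -2, moves = 0;
    for (int i = 0; i < 9; i++) {
        if (b[i] != '.')
            continue;
        moves++;
        b[i] = me;
        int score = -solve(b, other, me);
        b[i] = '.';
        if (score > best)
            best = score;
    }
    return moves ? best : 0;       /* no empty square left: a draw */
}

int main(void)
{
    char board[9] = {'.','.','.','.','.','.','.','.','.'};
    int value = solve(board, 'X', 'O');
    printf("game value for X: %d (0 means draw)\n", value);
    printf("game-tree nodes expanded: %ld\n", visited);
    return 0;
}

The program finishes instantly because the complete tree contains well under a million nodes. For chess the comparable number is astronomically larger, and for continuous robotics problems there is no finite tree to enumerate at all, which is exactly where the classical recipe of a faster language and a faster CPU stops working.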
Classical computer science, which works within a discourse space of hardware, software, and algorithms, handles only trivial problems, but not the more demanding robotics problems, which are typically NP-hard. It seems that the AI problems themselves were the reason why the classical, computer-oriented discourse space fell out of fashion and was replaced by a different paradigm oriented towards non-computational requirements. Since the advent of large language models, this new discourse space is equivalent to chatbot-driven AI.