March 08, 2023

How Artificial Intelligence was discussed in the 1990s

With the advent of deep learning and humanoid robotics, the AI community has demonstrated that far more is possible than was once believed. Software can parse natural language, play video games and control robots. This success was not expected by most researchers in the past. There is a clear difference between how AI related problems were discussed back then and how they are discussed from today's perspective.
Before explaining more recent algorithms, it makes sense to take a look back at how AI problems were analyzed 30 years ago, in the 1990s. The obvious difference is that in the past the amount of optimism was lower. In the late 1980s there was an AI winter, which means that even computer experts were disappointed by the capabilities of neural networks and expert systems. One reason is that during this time a certain sort of question was asked.
A typical problem discussed in the early 1990s was the n-queen problem, which can be solved with a backtracking algorithm. At the time, such a problem was treated as a state of the art Artificial Intelligence problem. Many obstacles had to be overcome before such a problem could be solved. The first challenge was getting access to a reasonably fast computer. A normal Commodore 64 home computer was too slow for computer science problems and it was difficult to write the source code on it. The more effective way of implementing computer science problems was an MS-DOS PC in combination with the Turbo Pascal programming language. So most of the effort in the 1990s was directed towards getting such a PC setup running. Simple tasks like installing the Pascal compiler or writing a hello world program in the IDE were already advanced programming tasks.
Suppose the AI programmer in the early 1990s had mastered all the requirements and was able to write simple programs in Pascal. The next challenge was to explore the direction called Artificial Intelligence. The n-queen problem served as a benchmark to test search algorithms. The task is to find positions for 8 queens on a chess board by simulating all the possible board states. The only method considered at the time was complete enumeration, which means calculating thousands upon thousands of different possibilities and testing whether each candidate satisfies the constraints.
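To illustrate what complete enumeration means in practice, here is a minimal sketch in Python rather than Turbo Pascal (the function names are chosen only for this example). It generates every assignment of one queen per column, 8^8 candidate boards in total, and tests each candidate against the no-attack constraint:

    import itertools

    def attacks(c1, r1, c2, r2):
        # two queens attack each other if they share a row or a diagonal
        return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

    def solve_by_enumeration(n=8):
        # generate and test: n**n candidate boards, one queen per column
        for rows in itertools.product(range(n), repeat=n):
            ok = all(not attacks(c1, rows[c1], c2, rows[c2])
                     for c1 in range(n) for c2 in range(c1 + 1, n))
            if ok:
                return rows
        return None

    print(solve_by_enumeration())   # first valid board, e.g. (0, 4, 7, 5, 2, 6, 1, 3)

Even in a modern interpreter this brute force variant needs noticeable time, which gives a feeling for why a 1990s PC ran for hours.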
A typical runtime behavior of such an algorithm is that it occupies all the CPU resources for hours. The Turbo Pascal program is first compiled into a binary .exe file and then has to run for 3 hours without interruption. During that time the progress is shown on the screen as a percentage that counts upwards very slowly. At least in the 1990s, this kind of problem solving strategy was state of the art Artificial Intelligence.
It should be mentioned that in most cases the n-queen program did not deliver an answer to the problem. Either something in the Pascal program was wrong, or the state space was too large. The logical consequence was that the problem remained unsolved. In other words, the technology of the 1990s was not able to solve the problem. If simple n-queen instances remained unsolved, more advanced tasks like controlling a robot were also out of reach for 1990s software developers.
Complete enumeration and backtracking algorithms are typical examples of non-heuristic problem solving strategies. In the 1990s these algorithms were used as the only strategy to address AI related problems. They are simple to explain and simple to implement. They need a huge amount of CPU time, and in practice they fail on any problem of serious size. This outcome was often taken as evidence that a problem is intractable, loosely labeled as NP-complete, meaning that the runtime grows so quickly that even a computer which is 10x faster cannot traverse the state space in a reasonable amount of time.
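For comparison, the backtracking strategy mentioned above abandons a partial placement as soon as two queens conflict, instead of testing complete boards. A minimal sketch, again in Python for readability (solve_backtracking is a name chosen here):

    def solve_backtracking(n=8, rows=()):
        # rows[i] is the row of the queen in column i; extend column by column
        col = len(rows)
        if col == n:
            return rows
        for r in range(n):
            if all(r != pr and abs(r - pr) != col - pc
                   for pc, pr in enumerate(rows)):
                result = solve_backtracking(n, rows + (r,))
                if result is not None:
                    return result
        return None   # no valid placement extends this partial board

    print(solve_backtracking())   # (0, 4, 7, 5, 2, 6, 1, 3)

The pruning makes it far faster than complete enumeration, but it is still an exhaustive, non-heuristic search of the remaining state space.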
Most AI programmers in the 1990s were educated with such a bias. The reason was that more advanced problem solving algorithms were unknown at that time.
 

AI in the 1990s vs 2020s

There are several reasons why AI has evolved over the years. The following table shows a direct comparison of the tools.

                       early 1990s      2020s
Programming language   Turbo Pascal     Python
Operating system       MS DOS           Linux
Hardware               Intel 286 PC     multi core CPU
Algorithm              backtracking     heuristics
Problem                n-queen          sensor grounding
Computer books         printed          Internet
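The heuristics entry in the table refers to strategies that explore only a small, promising part of the state space instead of all of it. One well known example for the n-queen problem is the min-conflicts local search; the Python sketch below is only an illustration of the idea, not a reconstruction of any particular system:

    import random

    def min_conflicts(n=8, max_steps=10000):
        # start with one queen per column, each in a random row
        rows = [random.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            # number of other queens attacking square (col, row)
            return sum(1 for c in range(n) if c != col and
                       (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
            if not conflicted:
                return rows                  # no queen is attacked: done
            col = random.choice(conflicted)  # pick a conflicted queen at random
            best = min(conflicts(col, r) for r in range(n))
            # move it to one of the rows with the fewest conflicts
            rows[col] = random.choice([r for r in range(n) if conflicts(col, r) == best])
        return None                          # give up after max_steps

    print(min_conflicts())

Instead of hours of enumeration, such a local search typically repairs a random board into a valid one after a modest number of steps.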
There is no single reason why modern AI research works so much better; it is a combination of factors. Most software in the 1990s was programmed on slow PCs with programming languages that were complicated to use. In contrast, most programming today is done on a stable operating system like Linux in combination with the easy to handle Python language. This reduces development effort. Another major difference is the access to computer science books. In the early 1990s it was nearly impossible to read a state of the art book or paper about Artificial Intelligence. In the 2020s it is pretty easy to find such information online.
The logical consequence is that over the years researchers have created more advanced AI related projects. Former challenges have been solved and many discoveries have been made. The chance is high that this development will continue. That means that 20 years from now, many further improvements will be visible. Perhaps future programmers will laugh about current technology like the Internet or the Python programming language.
A seldom described difference between AI research in the 1990s and today is which problems are addressed. Suppose the idea is to solve the n-queen problem. Then a certain sort of algorithm and programming language is selected. But before a problem can be solved, somebody has to explain why it is important. From a technical perspective it would have been possible to work on more recent problems like "sensor grounding" with 1990s hardware and software. That means a simple MS-DOS PC in combination with the Pascal programming language could have been utilized to realize an advanced robot. But in the 1990s hardly anybody was aware of something like the symbol grounding problem. Even an idea as simple as a line following robot was not on the AI agenda at that time.
On the other hand, it is amusing to see that even with modern hardware and software, the naive approach to the well known n-queen problem still does not scale. Even on a quad-core CPU with a 64-bit operating system it is not practical to traverse the entire unrestricted state space; the classic eight queens instance can be enumerated, but slightly larger boards already put complete enumeration out of reach.
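A rough back-of-the-envelope calculation shows why. The short Python snippet below (board sizes chosen only for illustration) prints how fast the raw state space grows, both for placing n queens on any of the n^2 squares and for the restricted one-queen-per-column case:

    import math

    # rough size of the search space for n queens on an n x n board
    for n in (8, 12, 16, 20):
        unrestricted = math.comb(n * n, n)   # any n distinct squares
        per_column = n ** n                  # one queen per column
        print(f"n={n:2d}  C(n^2,n)={unrestricted:.2e}  n^n={per_column:.2e}")

Already for n=16 the unrestricted space contains more than 10^22 candidate boards, so a faster CPU alone does not help; only better algorithms do.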