In the period from 1990 to 2010 a large amount of research on artificial intelligence and robotics was published. This research did not result in practical applications because of an unspoken bias: the missing ability to use natural language as an intermediate layer. To explain the situation, let us look in detail at how AI research was done until 2010.
The assumption was that all AI algorithms have their roots in mathematics. An optimization algorithm tries to solve a numerical problem. For example, the model predictive control paradigm is about finding a robot trajectory, similar to a path planner, and a neural network can adjust its weights to recognize images. Both approaches were known from 1990 to 2010 and are described frequently in the literature, but they were useless in practice. Optimal control, for example, sounds great from a theoretical perspective: the idea is that the robot enumerates possible alternatives in the state space, plans a few steps ahead, and uses this information to generate the optimal action. The problem is that in reality there is no clearly defined mathematical state space in which the theory can be applied.
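To make this concrete, the following Python sketch shows what such a lookahead planner looks like in the textbook setting. The grid world, the goal position and the brute-force search are invented for illustration; the sketch only works because the state space, the transition function and the cost are fully specified in advance, which is exactly the precondition that real robot problems rarely satisfy.

```python
# Minimal sketch of lookahead planning over an explicit state space.
# The grid, the actions and the goal are assumptions for illustration only.
from itertools import product

GRID = 5                                   # 5x5 toy world
GOAL = (4, 4)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    """Deterministic transition function of the toy state space."""
    x, y = state
    dx, dy = ACTIONS[action]
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def plan(start, horizon=3):
    """Enumerate every action sequence up to `horizon` steps and pick the
    one whose final state is closest to the goal (brute-force lookahead)."""
    best_cost, best_seq = float("inf"), None
    for seq in product(ACTIONS, repeat=horizon):
        state = start
        for action in seq:
            state = step(state, action)
        cost = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

print(plan((0, 0)))   # prints a 3-step action sequence heading toward GOAL
```

Everything the planner needs is written down as clean mathematics here, which is precisely what a real grasping or navigation task does not provide.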
A common situation until 2010 was that a newcomer implemented a certain optimal control algorithm or programmed a certain neural network architecture, but the robot was not able to solve the problem. Even very basic challenges, like finding the exit of a maze, were out of reach for the AI algorithms of the time.
Such a disappointing outcome was not the exception but the norm. The entire corpus of AI-related mathematical algorithms was not able to prove its value, and it was unclear what a possible improvement could look like.
The situation changed dramatically with the advent of language-based human-machine interaction around 2010. From that year on, the AI research community began to explore a paradigm that had mostly been ignored before: utilizing natural language to provide external knowledge. The principle was not completely new, because the famous SHRDLU project (1970) was mentioned in most AI books. But until 2010 the concept was not explored in detail, because the unspoken assumption was that AI needs a mathematical rather than a linguistic representation. The surprising insight was that robotics problems can be described more elegantly from a linguistic perspective than with a mathematical formalization, which resulted in rapid progress in robotics research.
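As a contrast to the state-space planner above, here is a toy sketch of what a linguistic task representation could look like, loosely inspired by the SHRDLU-style blocks world. The command pattern and the symbolic world model are assumptions made up for this example, not SHRDLU's actual implementation; the point is only that the task is stated and solved at the level of words and objects, without any coordinates or physics.

```python
# Toy sketch of a linguistic task representation (blocks-world flavored).
# The vocabulary and the world model are invented for illustration.
import re

world = {"red block": "table", "green block": "table", "blue block": "table"}

def execute(command):
    """Map a natural-language command onto a symbolic world update."""
    match = re.match(r"put the (\w+ block) on the (\w+ block)", command.lower())
    if not match:
        return "command not understood"
    obj, target = match.groups()
    if obj not in world or target not in world:
        return "unknown object"
    world[obj] = target            # symbolic effect, no 3D coordinates needed
    return f"ok, {obj} is now on {target}"

print(execute("Put the red block on the green block"))
print(world)
```

The external knowledge enters the system as a sentence, and the intermediate layer between human and machine is language rather than an equation.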
So we can say that the absence of natural language interaction with machines was the major reason why AI research until 2010 progressed so slowly.
Perhaps it makes sense to give an example of how the understanding of robotics has influenced AI research. Before 2010, a common description of motion planning was that there is a robot arm which should grasp an object from a table. The world is described in 3D coordinates. More advanced models assumed that the robot's environment contains gravity and friction between the robot's hand and the object, so the world model was a realistic physics engine. But this understanding does not help to control the robot arm with an AI; instead it prevented the application of optimal control and similar approaches. In particular, the attempt to plan multiple steps into the future requires too many CPU resources in a realistic physics simulation, so controlling the robot arm with any known algorithm was out of reach.
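A rough back-of-the-envelope calculation illustrates the resource problem. The branching factor, the planning horizon and the per-step simulation time below are assumed values chosen only to show the exponential growth, not measurements of any particular system.

```python
# Back-of-the-envelope estimate of why multi-step lookahead with a physics
# engine in the loop was out of reach. All numbers are assumptions.
branching_factor = 10        # discretized actions per time step (assumed)
sim_time_per_step = 0.001    # seconds per simulated physics step (assumed)

for horizon in range(1, 7):
    rollouts = branching_factor ** horizon           # exhaustive lookahead
    seconds = rollouts * horizon * sim_time_per_step
    print(f"horizon {horizon}: {rollouts:>9,} rollouts = {seconds:,.1f} s of simulation")
```

Even with these modest assumptions, a six-step horizon already needs on the order of an hour of pure simulation time for a single decision, and a more detailed physics model only makes each step slower.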
This bottleneck was present in many, perhaps even all, robot projects until 2010, so it makes sense to assume that it was a general bias in AI research of that era. The paradoxical situation was that the more effort was invested in modeling robot problems in mathematical notation, the harder the resulting optimization problems became to solve. Even faster computer hardware, for example multiprocessing arrays, was not able to overcome these obstacles.
January 31, 2025
Limitation in AI from 1990 to 2010
Labels: AI history