The algorithm bias in computer science
January 13, 2025

Until around 1990, computer science worked with a single fixed principle: the algorithm. An algorithm tells the computer how to compute something. Different hardware generations, such as 8-bit and 16-bit machines, simply execute the same algorithm at higher speed. The algorithm paradigm was never criticized; instead it was applied to every existing problem. Programming an operating system, building faster hardware, or writing word-processing software always came down to inventing and implementing algorithms.
It should be mentioned that the algorithm paradigm has failed to solve AI-related problems. Even though path planning and game tree search algorithms exist, they cannot be used for real-world problems because of the large state space. Algorithms are only powerful for solving non-AI problems.
There is a simple reason why AI research entered an AI winter in the 1990s: the old paradigm of the algorithm no longer worked, but a new paradigm had not been invented yet. It took until around the year 2000 before deep learning, with its focus on the dataset, partially replaced the former algorithm perspective. By definition, a dataset does not consist of executable program code; a dataset is a file on the hard drive that stores data. The absence of executable programs provides additional freedom to capture domain-specific knowledge. Datasets exist for all sorts of practical problems such as image recognition, question answering, game playing and motion capture. Creating and interpreting these datasets does not belong to classical computer science; it is an intermediate discipline between computer science and the concrete problem.
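To make the contrast concrete, here is a minimal Python sketch of what a dataset looks like from the programmer's point of view: plain rows of stored facts with no executable logic of their own. The tiny activity-recognition table is a hypothetical example invented for illustration, not taken from any real benchmark.

import csv
import io

# A hypothetical toy dataset: plain rows of data, no executable program code.
# Each row pairs a recorded observation with a human-provided label.
RAW_DATASET = """\
sensor_x,sensor_y,label
0.12,0.80,walking
0.05,0.02,standing
0.30,0.75,walking
0.01,0.04,standing
"""

def load_dataset(text):
    """Parse the stored rows into a list of dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_dataset(RAW_DATASET)
# The dataset only defines the problem (map sensor values to a label);
# which algorithm later solves it is a separate decision.
for row in rows:
    print(row["sensor_x"], row["sensor_y"], "->", row["label"])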
Perhaps it makes sense to take a step back and explain what the purpose of an algorithm is. An algorithm is the answer to a problem. For example, if the task is to sort an array, the bubblesort algorithm will solve it. Or if a line has to be drawn on a pixel map, Bresenham's algorithm will do the job. The precondition is always that the problem has already been defined. In the case of AI this precondition is not met. Problem definition cannot be realized with algorithms; it is done with a dataset. An OCR dataset defines an OCR challenge, a motion capture dataset defines an activity recognition problem, and a VQA dataset defines a VQA challenge. So we can say that dataset creation is the pre-step before a concrete algorithm can be invented. Once the problem is fixed, e.g. an OCR challenge, there are multiple algorithms available to solve it, e.g. neural networks, rule-based expert systems or decision tree learning.
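Since bubblesort is the classic case of an algorithm answering an already-defined problem, a short Python version may illustrate the point: the problem statement ("return the array in ascending order") is fully specified before a single line of code is written.

def bubblesort(values):
    """Sort a list in ascending order by repeatedly swapping adjacent items."""
    items = list(values)  # work on a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # already sorted, stop early
            break
    return items

print(bubblesort([5, 1, 4, 2, 8]))  # prints [1, 2, 4, 5, 8]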
Until the year 1990 there was no awareness of problem definition for AI and robotics tasks. A colloquial goal like "Build a biped robot" is not mathematical enough to start software development with. What is needed instead is a very accurate and measurable problem definition. In the best case the problem is given as a video game in combination with a dataset in a table, including a scoring function that determines whether a certain robot is walking or not. Such accurate problem definitions were missing before the year 1990, and this is the reason why AI was not available.
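What such a measurable problem definition could look like is sketched below in Python. The table of recorded positions and the distance threshold are hypothetical values chosen only for illustration; in a real setup the trajectory would come from the video game or simulator.

# Hypothetical log of a simulated biped robot: (time step, forward position).
TRAJECTORY = [
    (0, 0.00),
    (1, 0.15),
    (2, 0.32),
    (3, 0.51),
    (4, 0.70),
]

def walking_score(trajectory, min_distance=0.5):
    """Return a score in [0, 1]: forward distance covered, relative to an
    assumed minimum distance that counts as walking."""
    start = trajectory[0][1]
    end = trajectory[-1][1]
    distance = end - start
    return min(max(distance / min_distance, 0.0), 1.0)

score = walking_score(TRAJECTORY)
print("walking" if score >= 1.0 else "not walking", score)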
Labels: AI history