In the period under discussion, a large amount of published research on artificial intelligence and robotics was available. This research did not result in practical applications because of an unspoken bias: the missing ability to use natural language as an intermediate layer. To explain the situation, let us look in detail at how AI research was done until 2010.
The assumption was that all AI algorithms have their roots in mathematics. An optimization algorithm tries to solve a numerical problem. For example, the model predictive control paradigm is about finding a robot trajectory, similar to a path planner, and a neural network can adjust its weights to recognize images. Both approaches were known from 1990 to 2010 and are described frequently in the literature, but in practice they proved largely useless. Optimal control, for example, sounds great from a theoretical perspective: the idea is that the robot works out possible alternatives in the state space, plans some steps ahead, and uses this information to generate the optimal action. The problem is that in reality there is no clearly defined mathematical state space to which the theory can be applied.
A common situation until 2010 was that a newcomer implemented a certain optimal control algorithm or programmed a certain neural network architecture, but the robot was still unable to solve the problem. Even very basic challenges, like finding the exit of a maze, were out of reach for AI algorithms until 2010.
Such a disappointing result was not the exception but the norm. That means the entire AI-related corpus of mathematical algorithms could not prove its worth, and it was unclear what a possible improvement would look like.
The situation changed dramatically with the advent of language-based human-machine interaction around 2010. Since that year, the AI research community has explored a paradigm that was mostly ignored before: utilizing natural language to provide external knowledge. The principle was not completely new, because the famous SHRDLU project (1970) was mentioned in most AI textbooks. But until 2010 the concept was not worked out in detail, because the unspoken assumption was that AI needs a mathematical, not a linguistic, representation. The surprise was that robotics problems can be described more elegantly from a linguistic perspective than from a mathematical one, which resulted in rapid progress in robotics research.
So we can say that the absence of natural-language interaction with machines was the major reason why AI research until 2010 progressed so slowly.
Perhaps it makes sense to give an example of how this understanding of robotics has influenced AI research. Before the year 2010, a common description of motion planning was that a robot arm should grasp an object from a table. The world is described in 3D coordinates. More advanced models assumed that the robot's reality includes gravity and friction between the robot's hand and the object, so the world model was a realistic physics engine. But this understanding did not help to control the robot arm with an AI; on the contrary, it prevented the application of optimal control and similar approaches. In particular, the attempt to plan multiple steps into the future needs too many CPU resources in a realistic physics simulation, so controlling the robot arm with any known algorithm was out of reach.
This bottleneck appeared in many, perhaps all, robot projects until 2010, so it makes sense to assume that it was a general bias in AI research of that era. The paradoxical situation was that the greater the effort to model robot problems in mathematical notation, the harder the resulting optimization problems were to solve. Even faster computer hardware, for example multiprocessor arrays, could not overcome these obstacles.
January 31, 2025
Limitations in AI from 1990 to 2010
January 30, 2025
Programming heuristic algorithms
Around the year 1990 the understanding of heuristic algorithms was poor. The problems are visible even in the definition of what a heuristic is. Normal algorithms are step-by-step instructions formulated in source code, e.g. bubble sort or a path-planning algorithm. In contrast, a heuristic algorithm is based on domain-specific knowledge, but it remains unclear how this knowledge is encoded in software.
It makes sense to assume that a heuristic algorithm is not a technology but only the wish for such a technology. It is more a placeholder for future, more efficient algorithms not yet invented. Nevertheless, there are some attempts to describe the situation in detail. One famous example is the cost function. Cost functions are used to encode the knowledge of experts into a simple mathematical equation. For example, there is a distance in a maze to the goal, or there is a cost for colliding with an obstacle. The idea behind a cost function is to encode a high-level description, "move to the goal, avoid the obstacles", into a mathematical model which can be translated into actions.
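As a minimal sketch of this idea (the grid world, the Manhattan-distance metric, and the penalty weight of 100 are illustrative assumptions, not a fixed recipe), such a cost function could look like this:

```python
def cost(pos, goal, obstacles):
    """Cost of a grid cell, encoding 'move to the goal, avoid the obstacles'."""
    # "Move to the goal": Manhattan distance from the cell to the goal
    distance = abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    # "Avoid the obstacles": a large penalty for every obstacle next to the cell
    penalty = sum(
        100
        for obs in obstacles
        if abs(pos[0] - obs[0]) + abs(pos[1] - obs[1]) <= 1
    )
    return distance + penalty

def best_step(pos, goal, obstacles):
    """Greedy planner: pick the neighbor cell with the lowest cost."""
    neighbors = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min(neighbors, key=lambda n: cost(n, goal, obstacles))
```

A controller that repeatedly calls `best_step` already shows goal-seeking, obstacle-avoiding behavior, although it can get stuck in local minima, which is exactly the weakness of a purely static cost function.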
If cost functions are entry-level heuristic algorithms, there is a more advanced strategy available: text-based robot control. The idea is to provide domain knowledge on the fly, during the runtime of the program. The human operator speaks to the robot, e.g. gives it the current subgoal, and this information is converted into a mathematical equation. In contrast to the static cost function mentioned above, text-based teleoperation is harder to implement but provides greater flexibility.
Let us analyze the workflow in detail. The human operator provides the next subgoal in natural language, this information is translated into a cost function by a parser, and the cost function is used to plan the trajectory of the robot. Even if the pipeline sounds a bit complicated, it can be realized in software. The bottleneck is how to translate natural-language instructions into a mathematical equation.
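The workflow can be sketched in a few lines; the keyword matching and the landmark table below are deliberately naive stand-ins for a real natural-language parser:

```python
def parse_subgoal(text, landmarks):
    """Translate an instruction like 'move to the door' into a cost function."""
    for name, target in landmarks.items():
        if name in text.lower():
            # the returned cost function pulls the robot toward the landmark
            return lambda pos: abs(pos[0] - target[0]) + abs(pos[1] - target[1])
    raise ValueError("no known landmark mentioned in: " + text)

def plan_step(pos, cost_fn):
    """Greedy one-step planner: pick the neighbor with the lowest cost."""
    neighbors = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min(neighbors, key=cost_fn)

# hypothetical landmark positions provided by the environment
landmarks = {"door": (5, 0), "table": (0, 5)}
cost_fn = parse_subgoal("Move to the door", landmarks)
print(plan_step((0, 0), cost_fn))  # first step toward the door
```

When the operator utters a new subgoal, a new cost function replaces the old one, which is exactly the on-the-fly knowledge transfer described above.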
What the cost function and text-based control have in common is that the domain-specific knowledge is not stored in the robot itself but is provided by the environment. This redefines the robot's role from a formerly autonomous robot into an interactive device. The principle is similar to the bottom-up robotics formulated by Rodney Brooks in the late 1980s. In contrast to Brooks' subsumption architecture, the robot is more dependent on its environment: instead of using a front sensor to avoid an obstacle, the robot receives textual input from a human operator, so it is entirely remote-controlled.
January 28, 2025
The emergence of AI from 1990 to 2010
The starting point of this history lesson in technology is the year 1990, the end point of the fourth computer generation. Around 1990, most modern computing technology, such as 32-bit CPUs, the C++ programming language, and Internet protocols like TCP/IP, had been invented. Video compression and many video games were also available.
Artificial intelligence and robotics belong to the fifth computer generation, which emerged after the year 1990. The starting point is heuristic algorithms. In contrast to a normal algorithm, such as a search in a sorted database, a heuristic algorithm uses external knowledge to speed up the computation. Entry-level heuristic algorithms are based on an evaluation function. The most famous one is A*. In contrast to a simple traversal of the state space, A* uses a distance function to follow only those nodes in the graph which are near the goal. This allows much faster search.
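A compact sketch may illustrate the point; the small maze and the Manhattan-distance heuristic below are illustrative choices, not the only possible ones:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = wall); returns the path as a list."""
    def h(cell):
        # the heuristic: Manhattan distance to the goal, the 'external knowledge'
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # entries are (f = g + h, g, cell, path)
    visited = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))
```

Without the heuristic term `h`, the same code degenerates into a plain uniform-cost search that expands far more nodes, which is precisely the speedup the evaluation function provides.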
Modern AI technology, including robotics, is built on heuristic algorithms. It is not possible to solve AI any other way, because problems like motion planning and grasp planning are so complex that some sort of heuristic is always needed to speed up processing. The only debate is about which sort of heuristic fits a certain problem well.
Early and easy-to-explain heuristic algorithms are based on the previously mentioned evaluation function, which can be used to implement path-planning algorithms and chess-playing programs. In computer chess, the evaluation function determines a score for the current board; this score is used to prioritize the search in the state space. Such an algorithm could be executed on the normal desktop computers available in the 1990s, like a 286 PC. An evaluation function alone cannot solve more complex robot problems: for high-level tasks like "grasping an object", the evaluation function is unknown. These obstacles prevented robotics from becoming available in the 1990s, so a longer development process toward better heuristic algorithms was needed.
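A chess evaluation function of the kind that ran on 1990s hardware can be as small as a material count; the flat-string board encoding below is a simplification for illustration, and real engines add many positional terms:

```python
# Standard material values; the king gets no score because it cannot be captured.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Score a position from White's view: uppercase = White, lowercase = Black."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper())
        if value is not None:
            score += value if piece.isupper() else -value
    return score

# White has an extra rook, so the score is +5 from White's perspective.
print(evaluate("RNBQKBNR" + "nbqkbnr"))
```

A search algorithm then expands the moves with the best evaluation first, exactly as A* expands the nodes with the best distance estimate first.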
A promising approach developed during the 2000s, in parallel to the deep learning boom, was called feature engineering. Feature engineering tries to create evaluation functions for any domain. The question is which statistical variables are important for a domain. For example, a robot hand might have a touch sensor, while a car might have the features speed and direction. The principle of features allows the creation of more advanced evaluation functions which can solve many robot problems.
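For the car domain just mentioned, the idea can be sketched as a feature vector fed into a linear evaluation function; the feature names and the weights below are invented for illustration:

```python
def car_features(state):
    """Hand-designed features for a toy driving domain (illustrative names)."""
    return [state["speed"],
            state["distance_to_goal"],
            float(state["obstacle_ahead"])]

def evaluate_state(state, weights):
    # the evaluation function is simply a weighted sum of the engineered features
    return sum(w * f for w, f in zip(weights, car_features(state)))

state = {"speed": 10.0, "distance_to_goal": 50.0, "obstacle_ahead": True}
weights = [1.0, -0.5, -20.0]   # reward speed, punish remaining distance and obstacles
print(evaluate_state(state, weights))
```

The engineering work lies in choosing the features; the weights can then be tuned by hand or learned from data.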
A third and most advanced tool for creating modern heuristic algorithms is natural language. Instead of treating robot control problems as purely mathematical subjects in a multidimensional numerical space, the idea is to communicate with a robot over a language interface similar to a text-based adventure game. The ability to introduce grounded language into existing heuristic algorithms has provided a powerful framework for creating advanced robot systems since the year 2010.
January 27, 2025
Second visit to the Computer Reset store, Dallas
1. Arrival and discovery
The morning sun was already beating down mercilessly on the asphalt as Mark and Steve steered their decrepit Ford onto the dusty parking lot of the Computer Reset store in Dallas. The two friends, both in their mid-forties and passionate technology nostalgics, had heard about this shop - a true mecca for lovers of old computers.
"Look at that!", Mark called out enthusiastically, pointing at the weathered sign above the entrance. "Like a time capsule!"
Steve nodded with a grin. "Let's hope even more treasures are waiting for us inside."
With pounding hearts they entered the store. The smell of old plastic and electronics hit them, mixed with a hint of dust and the past. Boxes of discarded keyboards and monitors were already piled up in the entrance area.
"Wow", Steve breathed in awe. "This is incredible!"
They pushed deeper into the store, past endless shelves full of hardware. Here a stack of shrink-wrapped 5.25-inch floppy disks, there a box brimming with old processors. Mark stopped dead in front of a shelf.
"Steve! Look at this!" He triumphantly held a box up high. "An original sealed copy of Lotus 1-2-3! My father worked with this back in the day."
Steve whistled appreciatively through his teeth. "Unbelievable. And over there - is that a Xerox Alto?"
The two friends lost all sense of time as they roamed the aisles. Every corner held new surprises. A complete Apple Lisa computer, still in its original packaging. A whole shelf full of Commodore 64 accessories. They even discovered a few rare NeXT Cubes.
"I feel like I'm in a time machine", Mark murmured as he reverently ran his hand over the case of an IBM PC/XT. "All these machines... they changed the world."
Steve nodded thoughtfully. "Yes, and now they stand here, forgotten and dusty. It's almost a little sad."
They reached an area apparently dedicated to software. Shelves upon shelves of software boxes from the 80s and 90s. WordPerfect, dBase, Norton Utilities - names that took them back to their youth.
"Do you remember how we used to play Civilization all night long?", Mark asked with a laugh, holding up a yellowed game box.
Steve grinned. "How could I forget? My parents already thought I was addicted to computers."
Suddenly Mark paused. "Have you noticed that nobody else is here? No staff, no other customers..."
Steve frowned. "True. And look - some shelves have already been cleared out."
An uneasy feeling crept over the two friends. Something was not right here. They decided to push on into the back of the store to look for answers.
Little did they know that there they would come upon a scene that would turn their enthusiasm into sheer horror.
2. Confrontation with the liquidation
As Mark and Steve pushed deeper into the dusty aisles of the Computer Reset store, they suddenly heard an ear-splitting screech. Alarmed, they hurried toward the noise and came out into a large backyard. There a horrifying sight awaited them: a gigantic industrial shredder towered like a colossus over a mountain of old computers.
Stunned, the two men watched as workers threw one Commodore 64 after another into the greedy maw of the machine. The crunch of shattering plastic and splintering circuit boards made Mark shudder. "They can't do that!", he cried in horror.
Steve, usually the more level-headed of the two, stormed toward the nearest worker. "Stop! Stop right now!", he shouted over the noise. The worker just shrugged indifferently and pointed to a man in a suit who stood off to the side, supervising the operation.
Determined, Mark and Steve marched up to the man in the suit. "Sir, you have to stop this!", Mark demanded urgently. "These computers are cultural heritage! You can't just destroy them!"
The man, who introduced himself as Mr. Johnson, sighed in annoyance. "Gentlemen, I understand your agitation. But we are merely carrying out an order. The store is closing, everything must go."
"But why destroy it all?", Steve pressed. "Surely there are collectors or museums that would be interested!"
Johnson shook his head. "Too much effort. The storage costs are eating us alive. Besides - who wants this old junk anyway?"
Mark clenched his fists. "Lots of people! These machines are part of our history! They ushered in the digital revolution!"
"I'm sorry, my decision is final", Johnson replied coolly.
The next hours passed as if in a trance. Helplessly, Mark and Steve had to watch as one piece of computer history after another went into the shredder. Apple II, Atari ST, Amiga 500 - nothing was spared.
They tried to save at least a few machines and even offered to buy them. But Johnson stood firm. "All or nothing" was his motto.
When the sun went down, the yard had been swept clean. Where thousands of computers had once stood, a mountain of scrap now towered. Mark and Steve stood there as if numb, unable to grasp what had just happened.
"All those memories... simply gone", Steve murmured.
Mark nodded in silence. Then he clenched his fist with determination. "This must never happen again. We have to do something!"
Exhausted, but with new fire in their hearts, the two friends left the scene of destruction. They did not yet know how - but they would find a way to preserve the remaining treasures of computer history for posterity.
January 25, 2025
A short introduction to the end of Moore's law
Before it is possible to explain why the law is dead, let us give a short overview of the original idea. Until the year 2010, Moore's law was an unwritten rule for how much faster a new CPU would be. The observation of Gordon Moore (a co-founder of Intel) was that the transistor count doubles every 18 months, which amounts to exponential growth. If a CPU has twice as many transistors, it also delivers twice the performance.
To verify whether this law holds, we need to determine the compound annual growth rate (CAGR) of the Gflops of newly produced CPUs over multiple years. In simpler terms, a spreadsheet is created with different processors over the years, which are compared with each other. To simplify the task, I selected the ThinkPad laptop line, because precise information about the CPUs in this hardware is available, which allows building a precise table.
A CAGR value of 0.59 means that a new laptop generation delivers 59% more performance than the laptop from one year earlier, which is equal to the Moore's law rate (doubling performance every 18 months). According to the table, until the year 2010 a CAGR of 0.4 up to 0.6 was reached, which is close to Moore's law. Unfortunately, the performance improvement dropped drastically after 2010, to around 0.20.
A possible explanation is that it is much harder to increase the speed further once the level is already high. Improving a 1-Gflops CPU by 50% is much easier than improving a 100-Gflops CPU by 50%. This dilemma is called the "end of Moore's law".
A realistic growth estimate for future ThinkPads and other laptop models is a CAGR of 0.15, which is equal to a 15% speed improvement every year. It will take about 5 years until the performance of a CPU doubles, which is much longer than the 18 months assumed by Moore.
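These figures can be checked with a few lines of arithmetic; the start and end values below are taken from the table that follows:

```python
import math

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two Gflops measurements."""
    return (end_value / start_value) ** (1 / years) - 1

def doubling_time(rate):
    """Years until performance doubles at a given annual growth rate."""
    return math.log(2) / math.log(1 + rate)

# Moore's law (doubling every 1.5 years) expressed as an annual growth rate:
print(round(2 ** (1 / 1.5) - 1, 2))    # 0.59

# Overall CAGR of the table: 0.4 Gflops (2000) to 307.2 Gflops (2024):
print(round(cagr(0.4, 307.2, 24), 2))  # 0.32

# At 15% annual growth, doubling takes about five years:
print(round(doubling_time(0.15), 1))   # 5.0
```

The overall CAGR of 0.32 matches the total row of the table, and the five-year doubling time follows directly from the assumed 15% growth rate.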
year | Thinkpad | CPU | Gflops | CAGR 5y |
2000 | T21 | Intel Mobile Pentium III Coppermine 800 MHz* | 0.4 |
2001 | T23 | Intel Pentium III-M Tualatin 1.13 GHz* | 0.6 |
2002 | T30 | Intel Pentium 4-M 2 GHz* | 1.5 |
2003 | T41 | Intel Pentium M Banias 1.6 GHz* | 2.5 |
2004 | T42 | Intel Pentium M Dothan 1.8 GHz* | 3.0 |
2005 | T43 | Intel Pentium M750* | 3.5 | 0.54 |
2006 | T60 | Intel core 2 duo T5500* | 8.0 | 0.68 |
2007 | T61 | Intel core 2 duo T7300* | 11.0 | 0.49 |
2008 | T61 same | Intel core 2 duo T7300* | 11.0 | 0.34 |
2009 | T400 | Intel core 2 duo P8600* | 19.2 | 0.45 |
2010 | T410 | Intel i7-620M | 21.3 | 0.44 |
2011 | T420 | Intel i7-2640M | 44.8 | 0.41 |
2012 | T430 | Intel i7-3520M | 46.4 | 0.33 |
2013 | T440 | Intel i7-4600U | 67.2 | 0.44 |
2014 | T440 same | Intel i7-4600U | 67.2 | 0.28 |
2015 | T450 | Intel i7-5600U | 83.2 | 0.31 |
2016 | T460 | Intel i7-6600U | 83.2 | 0.13 |
2017 | T470 | Intel i7-7600U | 89.6 | 0.14 |
2018 | T480 | Intel i7-8650U | 121.6 | 0.13 |
2019 | T490 | Intel i7-8565U | 115.2 | 0.07 |
2020 | T14 Gen1 | Intel Core i7-10610U | 115.2 | 0.07 |
2021 | T14 Gen2 | Intel i7-1185G7 | 192.0 | 0.16 |
2022 | T14 Gen3 | Intel i7-1255U | 272.0 | 0.17 |
2023 | T14 Gen4 | Intel i7-1365U | 288.0 | 0.20 |
2024 | T14 Gen5 | Intel Core Ultra 7 155U* | 307.2 | 0.22 |
total | 0.32 |
* Gflops value is an estimate
sources:
thinkwiki.de
thinkwiki.org
wikipedia.org
intel.com Intel APP metrics for Intel Microprocessors
notebookcheck.net
January 24, 2025
Visit to the Computer Reset store, Dallas
1. The discovery
Sarah stared in disbelief at the screen of her smartphone. An old forum post had put her on the trail of the Computer Reset store in Dallas - a true El Dorado for technology nostalgics and collectors. As a passionate retrocomputing enthusiast, she could hardly believe her luck. The store, a former used-computer dealer, was said to house an incredible collection of hardware, software, and books from the 1980s and 1990s.
Without hesitating, Sarah booked a flight to Dallas. During the trip she imagined all the things she might find there. Old Apple computers? Forgotten DOS programs? Maybe even a working Commodore 64?
When she finally stood in front of the inconspicuous building in an industrial area on the outskirts of Dallas, her heart pounded with excitement. The faded "Computer Reset" lettering above the entrance promised an adventure into the past of computer technology.
With trembling hands, Sarah opened the heavy metal door. The smell of old plastic, dust, and slightly oxidized metal greeted her. As her eyes adjusted to the dim light, her breath caught. Before her stretched a huge 18,000-square-foot warehouse, crammed to the ceiling with computer parts, peripherals, and boxes.
In disbelief, her gaze wandered over endless rows of shelves. Here mainframe components were stacked next to discarded CRT monitors. There, crates of floppy disks and CD-ROMs piled up. In one corner she even discovered still-shrink-wrapped copies of Windows 95 and OS/2 Warp.
The sheer quantity of technology from past decades was overwhelming. Sarah felt as if she had landed in a time machine. Every shelf, every box promised new discoveries. She saw old IBM keyboards, Apple II cases, even a few Atari consoles.
She found the book section especially fascinating. Here yellowed manuals for long-forgotten operating systems stood in rows next to thick tomes on programming languages hardly anyone still mastered. Sarah carefully pulled out a copy of "Inside Macintosh" and leafed reverently through its pages.
As she pushed deeper into the store, she realized that what lay before her was not just old technology but a piece of computer history. Every device, every book told of a time when computers were still new and exciting, when every technical innovation seemed to change the world.
Sarah knew she would need days, if not weeks, to explore everything. But she had time. With a smile on her lips, she set about combing through the shelves systematically. Who knew what treasures were still waiting to be discovered?
With every step through the dusty aisles her enthusiasm grew. The Computer Reset store was more than just a shop - it was a museum, an archive, and a playground for technology enthusiasts all at once. Sarah was determined to explore every corner and perhaps rescue a piece or two of computer history.
2. The treasure hunt
With growing enthusiasm, Sarah began her systematic exploration of the Computer Reset store. She decided to proceed methodically and work through one area at a time so as not to miss a single highlight.
First she turned to the main parts room. Thousands of components were stored here: motherboards from various eras of PC history, graphics cards from long-forgotten manufacturers, and processors that had once been the heart of powerful workstations. Sarah marveled at the variety and at the good condition of many of the parts.
She was especially fascinated by a shelf full of old sound cards. She recognized an AdLib, the first sound card to bring music synthesis to PCs, and next to it several models of the legendary Sound Blaster series. She carefully picked up a Sound Blaster 16 and remembered the hours she had spent as a child getting games to run with it.
In the next area Sarah found herself amid software boxes. Huge cartons with operating systems like OS/2 and NeXTSTEP stood next to slim jewel cases with classic games. She even discovered still-sealed copies of programs like WordPerfect and Lotus 1-2-3 - names hardly anyone knows today.
As she strolled along the aisles, her gaze fell on a dedicated laptop section. Devices of the most varied brands and eras were lined up here: clunky Toshiba models from the early 90s, slim Sony VAIOs, and rugged Dell Latitudes.
Suddenly Sarah stopped dead. In front of her lay two black ThinkPads, their characteristic red TrackPoints standing out from the chaos like little lighthouses. With trembling hands she lifted the devices. They were a 760XD and a 600E - true legends of the late 90s.
Carefully she opened the displays and pressed the power buttons. To her surprise, both laptops came to life with the unmistakable IBM startup sound. Sarah could hardly believe her luck. She would definitely be taking these treasures home.
Encouraged by this find, Sarah continued her exploration. She discovered an area with old handheld computers and PDAs. Among Palm Pilots and Newton MessagePads she even found a well-preserved Psion Series 5 - a device she had always dreamed of but never owned.
In a dusty corner Sarah came across several crates full of old computer magazines. She pulled out a 1985 issue of "BYTE" and leafed reverently through the yellowed pages. The advertisements for long-forgotten hardware and the articles about then-revolutionary technologies transported her to another time.
As she pushed further into the depths of the store, Sarah kept discovering new treasures: an Amiga 500 in its original packaging, several Apple II models, and even a rare NeXT Cube. Each of these devices told a story of innovation, competition, and technological progress.
The hours flew by as Sarah hurried from discovery to discovery. She knew she had seen only a fraction of what was on offer, but the two ThinkPads in her hands reminded her that she had already found real treasures.
With a feeling of satisfaction, but also of melancholy - for she knew she could not possibly take everything with her - Sarah made her way to the checkout. Her visit to the Computer Reset store had been more than just a shopping trip. It was a journey through the history of personal computing, an encounter with forgotten technologies, and a reminder of the breakneck development of the digital world.
January 23, 2025
A closer look at the end of Moore's law
In addition to the previous post, I have improved the table with the Gflops performance of past Lenovo ThinkPad models. The moving average of the increase over 5 years was determined, which shows that the processing speed is growing by 15% annually. This is much lower than Moore's law would predict, which assumes that the performance grows by 59% annually.
The slow growth of 15% differs from the much higher improvement during the 1980s and 1990s, which followed the expected 59% annual increase in processing speed. It seems that some sort of plateau was reached after the year 2010. Since then, the processing speed measured in Gflops has stayed roughly constant at a high level. It is unlikely that the former Moore's law with its exponential improvement will come back in the near future.
Moore's law is dead: the Lenovo ThinkPad as an example
For some years there have been rumors that Moore's law is no longer visible. Sometimes it is argued that the improvement rate in chip performance has slowed down. To investigate the situation with valid numbers, let us take a deeper look at the performance of different ThinkPad models. The main reason this computer was selected is that it has been manufactured under the same name over many years, which makes it easy to compare the performance.
The table shows different 14-inch ThinkPad models from 2011 until 2024, each with a common Intel i7 CPU from that year together with its estimated Gflops performance. It is visible that the speed has improved: a ThinkPad from 2023 is 6 times faster than its counterpart from the year 2011. But the improvement per year is smaller than the expected improvement:
In reality, the Gflops figure improved by about 19% each year, while Moore's law expects an improvement of 59% per year. What we see in reality is no longer an exponential curve but a slow increase over a longer horizon. In the table, it does not take 18 months until the performance has doubled; it takes roughly seven years.
January 15, 2025
Artificial intelligence from 1990 to 2020
Until the year 1990 it was unclear how to realize AI and robotics, but 30 years later, in 2020, AI was available. The period in between deserves a description, because it marks the difference between the absence and the existence of artificial intelligence.
The most advanced technology until 1990 was the classical computer workstation, including its connection to the Internet. The typical workstation was equipped with a graphics card, a sound card, a moderate amount of RAM, and a hard drive. On the software side, a Unix-compatible operating system including a multitasking GUI environment was typical. Such a Unix workstation was not able to control robots and was not equipped with AI, but it was an ordinary example of well-engineered computer technology.
AI in the year 1990 was only available in movies and in science fiction books, which describe intelligent robots doing useful tasks like driving cars, walking on two legs, and performing complex assembly tasks. In reality, these tasks were out of reach for the engineers. Even the most advanced books and papers written at universities in 1990 contained neither algorithms nor ideas for how to realize such robots in reality.
The single cause of why engineers until 1990 struggled with AI is the missing problem definition. It was unclear what AI is about from a mathematical standpoint. Without such a definition it was not possible to program it. A user-oriented definition like "a robot has to walk on two legs" is not enough to create a mathematical equation or to program an algorithm. Programming even very simple robots resulted every time in failed projects. That means simple wheeled robots in 1990 were not able to find the exit of a simple maze.
From 1990 until 2020, a lot of effort was put into the AI and robotics issue. The most promising direction was to create a precise problem space first; within this problem space, different algorithms can be tried and compared. There are two major approaches available:
1. Create a physical problem space, e.g. invent a robot competition like Micromouse or RoboCup
2. Create a dataset problem space, e.g. a dataset with motion-capture recordings, or a dataset with OCR problems
After a problem space has been created, for example a Micromouse maze, it is possible to benchmark how well a certain robot and a certain algorithm perform in the puzzle. For example, a certain robot will need 30 seconds until it finds the exit, or a neural network will recognize 50% of the pictures in an OCR dataset correctly.
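The dataset score mentioned above is just the fraction of correctly solved samples; the tiny OCR run below is a made-up example of such a benchmark:

```python
def benchmark_score(predictions, labels):
    """Fraction of correctly recognized samples, e.g. for an OCR dataset."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# A hypothetical OCR run: 2 of 4 pictures recognized correctly gives a score of 0.5.
print(benchmark_score(["a", "b", "c", "d"], ["a", "b", "x", "y"]))
```

Because the score is a single number, two algorithms on the same dataset become directly comparable, which is exactly what the problem-space approach enables.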
During the period 1990-2020, an endless number of datasets and robotics competitions were introduced, described, and compared with each other in the academic literature. Especially the second approach, "create a dataset problem space", has become an important enabling technology in AI, because it allows AI to be discussed from a mathematical standpoint. Instead of asking what AI is from a philosophical standpoint, the new questions were whether a certain dataset makes sense, or what score a certain neural network achieves on a dataset. These hard scientific questions can be addressed with existing tools like the backpropagation algorithm, statistics, and diagrams. A well-defined problem space realized as a machine learning dataset was the major step from former alchemy-driven AI philosophy towards a scientifically defined AI.
In the published literature from 1990 until 2020 it can be shown that, over the years, more advanced datasets were created and solved with more advanced neural network algorithms. Early datasets in the 1990s were short tables described only in the appendix of a paper, while in later years the entire paper described the dataset in detail because it was the main subject.
Modern AI published after the year 2020 is mostly the result of improved problem formulation. After this date, endless numbers of physical problem descriptions plus thousands of datasets with additional problem descriptions are available. These datasets are very realistic; they deal with real challenges like grasping objects, biped walking, path planning in a maze, and even natural language understanding. So we conclude that the search for artificial intelligence is equal to the search for a problem formulation. Only if a problem is formulated in a mathematical format can a computer be used to solve it.
Outline for documentation about a GUI operating system for the Commodore 64 written in Forth
## Documentation Outline: C64 Forth GUI Operating System
This outline details the structure for documenting a GUI operating system written in Forth for the Commodore 64.
**I. Introduction**
* A. Background on the Commodore 64 and its limitations.
* B. The choice of Forth as the development language and its advantages (speed, extensibility, etc.).
* C. Overview of the GUI operating system's goals and features (windowing, icons, mouse support, etc.).
* D. Target audience for the documentation (users, developers).
**II. System Overview**
* A. Architecture of the operating system.
* 1. Memory map and usage.
* 2. Interaction with the C64's hardware (VIC-II, SID, CIA).
* 3. Forth wordset extensions for GUI functionality.
* B. Boot process and system initialization.
* C. Core components:
* 1. Kernel (task scheduling, memory management).
* 2. Window manager (window creation, movement, resizing, z-ordering).
* 3. Input handling (keyboard, mouse/joystick).
* 4. Graphics primitives (drawing lines, rectangles, bitmaps).
* 5. Event system.
**III. User Guide**
* A. Getting started:
* 1. Loading and running the OS.
* 2. Basic navigation and interaction.
* B. Desktop environment:
* 1. Icons and their functions.
* 2. Menus and dialog boxes.
* 3. Window management techniques.
* C. Built-in applications (if any):
* 1. File manager.
* 2. Text editor.
* 3. Other utilities.
* D. Customization options.
**IV. Developer Guide**
* A. Forth wordset reference for GUI programming:
* 1. Window creation and manipulation words.
* 2. Drawing and graphics words.
* 3. Event handling words.
* 4. Input handling words.
* B. Programming examples:
* 1. Creating a simple window.
* 2. Handling user input.
* 3. Drawing graphics.
* 4. Implementing a basic application.
* C. Memory management considerations for developers.
* D. Guidelines for creating applications for the OS.
* E. Debugging tips and techniques.
**V. Technical Details**
* A. Memory map in detail.
* B. Interrupt handling and timing.
* C. Low-level graphics routines.
* D. Source code organization and structure.
* E. Forth wordset implementation details.
**VI. Appendices**
* A. Glossary of terms.
* B. Troubleshooting common problems.
* C. List of known bugs and limitations.
* D. Forth wordset quick reference.
* E. Hardware requirements and compatibility.
**VII. Index**
This detailed outline provides a comprehensive structure for documenting a complex project like a GUI OS for the C64. It caters to both end-users and developers, ensuring that the system is both usable and extensible. Remember to include plenty of screenshots and code examples to illustrate the concepts and make the documentation more engaging.
## I. Introduction - B. The choice of Forth as the development language and its advantages
This section explains why Forth was chosen as the development language for this GUI operating system on the Commodore 64. Given the C64's limited resources and the specific requirements of a graphical user interface, Forth offers several distinct advantages over other languages like BASIC or assembly language.
**Reasons for Choosing Forth:**
* **Speed and Efficiency:** Forth is known for its speed and efficiency, especially when compared to interpreted languages like BASIC. Its threaded code architecture and direct access to machine code instructions allow for fast execution, crucial for real-time graphics rendering and user interaction in a GUI environment. This is especially important on the C64's relatively slow 6510 processor.
* **Small Footprint:** Forth systems are typically very compact, requiring minimal memory overhead. This is a significant advantage on the C64 with its limited 64KB of RAM. The small size of the Forth interpreter and the ability to define custom words allows for efficient use of memory, leaving more space for application code and data.
* **Extensibility and Flexibility:** Forth is highly extensible. Developers can easily define new words (subroutines) that perform specific tasks. This allows for the creation of a custom wordset tailored to the needs of the GUI operating system, including words for window management, graphics primitives, and event handling. This flexibility is essential for building a complex system like a GUI.
* **Interactive Development:** Forth encourages interactive development through its read-eval-print loop (REPL). Developers can test code snippets directly in the interpreter, making it easier to debug and experiment with different approaches. This interactive nature can significantly speed up development time.
* **Direct Hardware Access:** While direct hardware access is not unique to Forth, Forth makes it relatively straightforward. This is essential for a GUI operating system, which needs to interact closely with the C64's hardware, such as the VIC-II graphics chip, the SID sound chip, and the CIA input/output controllers. Forth allows developers to write code that directly manipulates hardware registers, providing fine-grained control over the system.
* **Structured Programming:** Although Forth is sometimes perceived as unconventional, it supports structured programming principles. By defining small, well-defined words, developers can create modular and maintainable code. This is important for a larger project like an operating system.
**Comparison to Other Languages:**
* **BASIC:** While BASIC is the C64's native language, it is interpreted, which makes it considerably slower than Forth. BASIC also has limitations in terms of extensibility and direct hardware access.
* **Assembly Language:** Assembly language offers the highest performance and control over the hardware, but it is much more complex and time-consuming to write and maintain than Forth. Forth can be seen as a good compromise, providing a balance between performance and development efficiency.
**In summary:**
Forth's speed, small footprint, extensibility, interactive nature, and direct hardware access make it a well-suited choice for developing a GUI operating system on the Commodore 64. It provides a powerful and efficient environment for creating graphical applications within the constraints of the C64's hardware. While other languages could be used, Forth offers a unique combination of features that make it particularly well-suited for this task.
## VI. Appendices - D. Forth Wordset Quick Reference
This appendix provides a quick reference to the custom Forth words implemented for the GUI operating system. It is intended as a handy guide for developers and does not replace the detailed descriptions in the Developer Guide.
**I. Window Management:**
* `WINDOW-CREATE` ( x y width height title -- window-id ) - Creates a new window.
* `WINDOW-DELETE` ( window-id -- ) - Destroys a window.
* `WINDOW-MOVE` ( window-id x y -- ) - Moves a window.
* `WINDOW-RESIZE` ( window-id width height -- ) - Resizes a window.
* `WINDOW-SHOW` ( window-id -- ) - Makes a window visible.
* `WINDOW-HIDE` ( window-id -- ) - Hides a window.
* `WINDOW-SET-TITLE` ( window-id title -- ) - Changes a window's title.
* `WINDOW-BRING-TO-FRONT` ( window-id -- ) - Brings a window to the front.
* `WINDOW-SEND-TO-BACK` ( window-id -- ) - Sends a window to the back.
**II. Graphics Primitives:**
* `DRAW-PIXEL` ( x y color -- ) - Draws a pixel at the specified coordinates.
* `DRAW-LINE` ( x1 y1 x2 y2 color -- ) - Draws a line between two points.
* `DRAW-RECT` ( x y width height color -- ) - Draws a rectangle.
* `FILL-RECT` ( x y width height color -- ) - Fills a rectangle.
* `DRAW-BITMAP` ( x y bitmap-address width height -- ) - Draws a bitmap.
* `SET-PEN-COLOR` ( color -- ) - Sets the current drawing color.
* `GET-PEN-COLOR` ( -- color ) - Gets the current drawing color.
**III. Input Handling:**
* `MOUSE-X` ( -- x ) - Returns the current X coordinate of the mouse.
* `MOUSE-Y` ( -- y ) - Returns the current Y coordinate of the mouse.
* `MOUSE-BUTTON` ( -- state ) - Returns the state of the mouse button (0=released, 1=pressed).
* `KEY-PRESSED?` ( key-code -- flag ) - Checks if a specific key is pressed.
* `GET-KEY` ( -- key-code ) - Gets the last key pressed.
**IV. String Handling (if extended):**
* `S"` ( "string" -- addr len ) - Parses a string and leaves its address and length on the stack.
* `STRING-COPY` ( source-addr dest-addr len -- ) - Copies a string.
* `STRING-COMPARE` ( addr1 addr2 len -- flag ) - Compares two strings.
**V. Memory Management (if implemented):**
* `ALLOCATE` ( size -- addr ) - Allocates a block of memory.
* `FREE` ( addr -- ) - Frees a block of memory.
**VI. System Words:**
* `DELAY` ( milliseconds -- ) - Pauses execution for a specified duration.
* `SYSTEM-TICKS` ( -- ticks ) - Returns the number of system ticks.
**VII. Constants:**
* `SCREEN-WIDTH` ( -- width ) - Returns the screen width.
* `SCREEN-HEIGHT` ( -- height ) - Returns the screen height.
* `BLACK` ( -- color-code ) - Color code for black.
* `WHITE` ( -- color-code ) - Color code for white.
* `RED` ( -- color-code ) - Color code for red.
* `GREEN` ( -- color-code ) - Color code for green.
* `BLUE` ( -- color-code ) - Color code for blue.
*(And other standard C64 colors)*
**Example Usage (Illustrative):**
```forth
\ Create a window and save its id (S" leaves addr len, passed here as the title)
10 10 50 30 S" Test Window" WINDOW-CREATE VALUE my-win
my-win 20 20 WINDOW-MOVE        \ move the window to (20,20)
my-win 100 50 WINDOW-RESIZE     \ resize per ( window-id width height -- )
100 50 RED DRAW-PIXEL
10 10 100 10 RED DRAW-LINE
BLACK SET-PEN-COLOR
50 50 20 20 BLACK DRAW-RECT     \ DRAW-RECT expects an explicit color argument
100 100 30 30 BLUE FILL-RECT
my-win WINDOW-DELETE
```
**Note:** This is a quick reference. Refer to the corresponding sections in the Developer Guide for detailed explanations, parameter descriptions, and error handling information. The actual word names and functionality may vary slightly depending on the specific implementation. This reference is intended to provide a convenient overview of the available GUI-related Forth words.
January 13, 2025
The algorithm bias in computer science
Until the year 1990, computer science worked with one fixed principle: the algorithm. An algorithm allows the computer to compute something. Different hardware systems, like 8-bit computers, 16-bit computers, and so on, can execute the same algorithm at higher speed. The algorithm paradigm wasn't criticized but was applied to all existing problems. Programming an operating system, implementing faster hardware, or writing word processing software always has to do with inventing and implementing algorithms.
It should be mentioned that the algorithm paradigm has failed to solve AI-related problems. Even if some path planning and game tree search algorithms are available, they can't be used for solving real-world problems because of the large state space. Algorithms are only powerful for solving non-AI-related problems.
There is a simple reason why AI research entered an AI winter in the 1990s: the old paradigm of the algorithm didn't work anymore, but a new paradigm hadn't been invented yet. It took until the year 2000 before deep learning, including its focus on the dataset, partially replaced the former algorithm perspective. By definition, a dataset doesn't consist of executable program code; a dataset is a file on the hard drive which stores data. The absence of executable programs provides additional freedom to capture domain-specific knowledge. There are datasets available for all sorts of practical problems like image recognition, question answering, game playing, motion capture, and so on. Creating and interpreting these datasets doesn't belong to classical computer science; it is an intermediate discipline between computer science and the concrete problem.
Perhaps it makes sense to go a step backward and explain what the purpose of an algorithm is. An algorithm is the answer to a problem. For example, the task is to sort an array, and then the bubblesort algorithm will solve it. Or a line should be drawn on a pixel map, and Bresenham's algorithm will do the job. The precondition is always that the problem was defined already. In the case of AI this is not so. Problem definition can't be realized with algorithms; it is done with a dataset. An OCR dataset defines an OCR challenge, a motion capture dataset defines an activity recognition problem, and a VQA dataset defines a VQA challenge. So we can say that dataset creation is the preliminary step before a concrete algorithm can be invented. Once the problem is fixed, e.g. an OCR challenge, there are multiple algorithms available for solving it, e.g. neural networks, rule-based expert systems, or decision tree learning.
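The bubblesort case is small enough to write out in full. The Python sketch below illustrates the point of the paragraph: the algorithm only exists because its problem, an array plus an ordering, is already fully defined:

```python
def bubble_sort(items):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)                      # work on a copy
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                      # already sorted, stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

No comparable ten-line problem statement exists for "build a biped robot", which is exactly the gap a dataset fills.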
Until the year 1990 there was a missing awareness of problem definition for AI and robotics tasks. A colloquial goal like "build a biped robot" is not mathematical enough to start software development with. What is needed instead is a very accurate and measurable problem definition. In the best case, the problem is given as a video game in combination with a dataset in a table, including a scoring function to determine whether a certain robot is walking or not. Such accurate problem definitions were missing before the year 1990, and this was the reason why AI wasn't available.
Timeline of AI history
- 1950-1990 computer generation #1 to #4
- 1990-2000 AI winter
- 2000-2010 Deep learning hype
- since 2010 AI hype
- since 2023 ChatGPT available
Classical computing consists of computer generations #1 to #4, which ran from 1950-1990. The last and most important fourth computer generation, from 1980 until 1990, included the home computer, internet protocols, and the PC with graphical operating systems. The period from 1990 until 2000 can be called the last AI winter. During this decade the predictions for AI and robotics were pessimistic; it was unclear how to build and program such devices. Since 2000 more optimism was available, which can be summarized with the buzzword deep learning. Deep learning means creating a dataset and searching the data for patterns. Since around 2010, artificial intelligence has become a mainstream topic. New innovations like the Jeopardy-playing Watson system were created and new sorts of robots became available. Since around 2023 the AI hype has entered the mass market and large language models became available.
Roughly speaking, the history of computing can be divided into two periods: 1950-1990, which was classical computing, and 1990-today, which consists of artificial intelligence after a slow start in the first decade.
Transition from 4th to 5th computer generation
The 4th generation is classical computing, which consists of software and hardware, while the 5th generation is about future AI-based computing, which isn't available today. So there is a transition which should be explored in detail.
Classical computing is well understood because it has evolved over decades. The first computers were created in the 1940s with vacuum tubes, while later and more powerful computers were based on transistors and microprocessors. Even if the advancement was huge, the underlying perspective remained the same. The goal was always to improve the hardware and write more efficient software. In the 1980s this produced advanced workstations which were connected to a worldwide internet.
The more interesting and seldom discussed problem is how to make these powerful workstations from the 4th generation intelligent, so that the computer can control a robot. The answer can be derived from the history of technology from 1990 until 2020. After a slow start in AI during the 1990s, which is called an AI winter, a major breakthrough arrived during the 2000s, which was called the deep learning hype. Deep learning basically means using the neural networks which were already available in the 1990s but training them on larger datasets with a deeper layer structure. Such training was possible because Moore's law has provided faster GPU and CPU technology since 2000.
Deep learning alone doesn't explain how to make computers smart, but it provides an important puzzle piece. The preliminary step before a deep learning model can be trained is to prepare a dataset. During dataset preparation, a problem has to be encoded in a tabular format. This allows a loosely specified problem to be converted into a precisely specified one. Examples created in the 2000s were motion capture datasets, OCR recognition datasets, and even question answering datasets. The existence of these datasets was a major milestone in AI development, because it allows computers to be used to solve real-world problems.
Datasets are important for deep learning because they are used to determine the score of a neural network. A certain trained network is able to reproduce the dataset with a score from 0 to 100%, for example by counting how many digits the neural network has recognized correctly with OCR. Such a score allows different architectures and different training algorithms to be compared side by side, which amounts to transforming the former AI alchemy into a mathematical science discipline.
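This scoring step can be sketched in a few lines of Python. The tiny labeled dataset and the two "models" below are hypothetical stand-ins for a real benchmark and real trained networks; the point is only that a labeled table turns model comparison into a measurable percentage:

```python
# Toy labeled dataset: (input, correct label) pairs, standing in for OCR data.
dataset = [("one", 1), ("two", 2), ("three", 3), ("four", 4)]

def accuracy(model, data):
    """Fraction of items the model labels correctly (0.0 .. 1.0)."""
    correct = sum(1 for x, label in data if model(x) == label)
    return correct / len(data)

# Two hypothetical 'trained models' compared on the same benchmark.
lookup = {"one": 1, "two": 2, "three": 3, "four": 4}
model_a = lambda x: lookup.get(x, 0)   # perfect on this data
model_b = lambda x: len(x) - 2         # crude length heuristic

print(accuracy(model_a, dataset))  # → 1.0
print(accuracy(model_b, dataset))  # → 0.5
```

Whichever model scores higher on the shared dataset wins, independent of any philosophical debate about intelligence.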
The logical next step after deep learning was natural language processing. The idea was to introduce natural language to annotate datasets and to use neural networks for chatbot purposes. This development took place from 2010 until 2020 and resulted in large language models, which have been available since 2023.
Let us summarize the development in decades:
- 1990-2000 AI winter with no progress at all
- 2000-2010 Deep learning hype with a focus on datasets
- 2010-2020 dataset based natural language processing
- 2020-today Large language models
This short overview explains that artificial intelligence wasn't created by randomly discovering a new algorithm; AI was the result of a long-term research effort which includes the 4th computer generation, deep learning, natural language processing, and large language models.
If we want to describe the development as a single concept, it would be the shifting focus from algorithms towards datasets. Classical computing until 1990 was influenced by the algorithm ideology. The goal was always to build new hardware and run a piece of software on this hardware as fast as possible. This was realized with high-efficiency languages like C in combination with CISC architectures realized in microprocessors. Unfortunately, even the most advanced 4th generation computers can't provide artificial intelligence. Even supercomputers built in 1990 were not able to beat human chess players, and they were not powerful enough to control biped robots.
To close the gap, something more powerful than algorithms alone was needed, which is the dataset. A dataset is a problem formulation; it is a sort of riddle stored in a table. Datasets allow a system to be benchmarked. It is a kind of quiz which has to be solved by a computer. Such problem-oriented computing allows artificial intelligence to be created by selecting certain datasets. For example, to control a robot the following datasets might be useful: an object detection dataset, a visual question answering dataset, and instruction following datasets. If a neural network is able to solve all these benchmarks, the neural network is able to control a robot.
One interesting insight of modern AI research is that the discovery has nothing to do with computing anymore. A modern robot built in 2025 contains neither advanced hardware nor advanced software; all the components might have been developed in the 1980s. That means a microcontroller from the 1980s combined with the C language from the 1980s is more than powerful enough to create an advanced robot. What has changed is the problem addressed by the robot. The mentioned instruction following dataset wasn't available in the 1980s. A machine-readable problem formulation is the key improvement.
January 08, 2025
From pseudo-robots to real robots
January 05, 2025
Computer science before the advent of Artificial Intelligence
The history of computing is divided into periods which are numbered from 1 to 5. The third generation ran from 1965-1975 and included minicomputers and integrated circuits, while the 4th generation ran from 1975-1990 and included microcomputers and CPUs. The fifth and last computer generation was never established, because it is the AI revolution in which computers are able to think on their own.
To explain this last, 5th generation of computing, there is a need to give an overview of its precursor, which is classical computing. Most developments available today have their origin in the 4th computer generation. This can be shown best for the mid 1990s, in which most of today's technology was already available, for example powerful 32-bit CPUs, GUI operating systems, and of course the World Wide Web.
Even advanced subjects in computing like the MPEG-1 standard for audio and video compression were available in the mid 1990s. Modern innovations like large DRAM chips, hard drives, color monitors, and the Ethernet standard were available too. On the software side, all the powerful tools like C++ compilers, UNIX-compatible operating systems, and multitasking were available by the mid 1990s. So we can say that the 4th computer generation from 1975-1990 is equal to modern computing. It is hard or even impossible to find a major technological innovation not already available in the mid 1990s.
The 4th computer generation is important from a historical perspective because it contains all the elements of computing. The idea is to build a machine, called the computer, which has input devices like keyboard and mouse, a display attached to the device, and electronic circuits on the motherboard to process information. Such a machine can be programmed for any task because it is a universal computer. The concept was available before the year 1975 as a theory, but it took until around 1995 until all the hardware and software was widely available. It is not possible to develop this principle much further; the mid 1990s mark the end of that development. What was developed from 1995 until 2025 was only a small improvement on the general principle. For example, the hardware was made more energy efficient, and the C++ language standard was improved in minor ways.
A high-end workstation from 1995, equipped with MPEG capabilities and plugged into the internet of the past, looks pretty similar to today's PCs available in most households. Such a computer can play back video, can be used to type documents, and has access to the World Wide Web.
From a bird's eye perspective, the computer generations describe certain sorts of computers. The 4th generation is equal to the personal computer, the 3rd generation to the minicomputer, the 2nd generation to mainframes programmed in Fortran, and the 1st generation to vacuum-tube computers.
Before it is possible to invent the 5th computer generation, which consists of robotics and artificial intelligence, there is a need to describe what a robot is. At least in the year 1995 nobody was able to define the term precisely. The problem has to do with the self-understanding of computer science, which is about hardware, software, and computer networks. The hardware is connected to a computer network with Ethernet cables, and on a single computer runs the software, but it remains unclear where the artificial intelligence is located.
Sometimes it was assumed that AI is located in the software as a certain sort of algorithm. On the other hand, there are no such algorithms available to realize artificial intelligence, so the definition isn't valid. A possible starting point for describing the transition from the 4th to the 5th computer generation is to investigate what AI is not.
A common assumption in the 1990s was that a teleoperated robot is an antipattern, because such a system isn't controlled by a computer and can't be called intelligent. Such an understanding gives a first hint of what AI is about. It has to do with the difference between man and machine. A man is intelligent because a man can solve problems and control a car, while a computer isn't intelligent.
A possible working thesis is that AI can't be reduced to hardware plus software; AI has to do with man-to-machine communication. Or, to be more precise, it is man-to-machine communication based on natural language. A high-level interface results in a working robot. Such a robot is able to solve tasks, and it is even possible to automate a task so that no human operator is needed anymore.
Getting started with Linux Mint
When the Ubuntu distribution became available in 2004, the concept was frequently criticized as too superficial. Unlike with Slackware or Arch Linux, Ubuntu users need only little Unix knowledge. A self-compiled kernel is also unnecessary.
Linux Mint can be seen as an intensified form of Ubuntu. Ubuntu users at least reportedly had a certain willingness to engage with the open source movement and were familiar with simple command-line commands. With Linux Mint this entry barrier was removed: the typical Mint user has never opened the command line and was, until two weeks ago, a die-hard Windows user. Accordingly, the discourse in the Linux Mint forums runs differently from the threads in the Fedora, Arch Linux, or Debian forums.
Guides on how to solve problems yourself via the command line are nowhere to be found, because the assumption is that the average Mint user doesn't even know the simplest commands like top, systemctl, sudo, or man. Instead, system settings are made exclusively through GUI tools, just as one was used to on Windows. And Mint users love to experiment with the look of the desktop, from choosing the wallpaper and the color scheme all the way to modding the entire appearance, so that Mint suddenly looks like good old Windows XP.
A willingness to engage with the C programming language in order to actively contribute to open source projects does not exist. Mint users have neither years of experience with Linux as a server operating system nor with writing Bash scripts. Instead, Mint users are mentally still in the Microsoft world, where you click on colorful menus, which stands in the way of a deep understanding of the operating system. It is quite common for users to misconfigure their system so badly that it no longer boots properly after just a few days, and then to claim that this was caused by a virus, even though, as is well known, no such thing exists on Linux.
Unfortunately, this does not stop the Mint community from sharing their questionable user experience with others, and so social networks are full of reports of Mint being installed on older laptops.