January 15, 2025
Artificial intelligence from 1990 to 2020
Until 1990 it was unclear how to realize AI and robotics, but 30 years later, in 2020, AI was available. The period in between deserves a closer look, because it marks the transition from the absence of artificial intelligence to its existence.
The most advanced technology up to 1990 was the classical computer workstation, including its connection to the internet. A typical workstation was equipped with a graphics card, a sound card, a moderate amount of RAM, and a hard drive. On the software side, a Unix-compatible operating system with a multitasking GUI environment was typical. Such a Unix workstation could neither control robots nor run AI, but it was an ordinary example of well-engineered computer technology.
AI in 1990 existed only in movies and science fiction books, which described intelligent robots doing useful tasks such as driving cars, walking on two legs, and performing complex assembly work. In reality, these tasks were out of reach for engineers. Even the most advanced books and papers written at universities in 1990 contained neither algorithms nor ideas for building such robots in practice.
The main reason engineers struggled with AI until 1990 was the missing problem definition. It was unclear what AI was about from a mathematical standpoint, and without such a definition it was impossible to program it. A user-oriented definition like "a robot has to walk on two legs" is not enough to derive a mathematical equation or to program an algorithm. Programming even very simple robots resulted in failed projects every time; simple wheeled robots in 1990 were not able to find the exit of a simple maze.
From 1990 until 2020, a great deal of effort was put into AI and robotics. The most promising direction was to define a precise problem space first; against this problem space, different algorithms can then be tried and compared. Two major approaches are available:
1. Create a physical problem space, e.g. invent a robot competition like Micromouse or RoboCup
2. Create a dataset problem space, e.g. a dataset with motion capture recordings, or a dataset with OCR problems
Once a problem space has been created, for example a Micromouse maze, it is possible to benchmark how well a certain robot or a certain algorithm performs in the puzzle. For example, a particular robot may need 30 seconds to find the exit, or a neural network may recognize 50% of the images in an OCR dataset correctly.
During the period 1990-2020, countless datasets and robotics competitions were introduced, described, and compared in the academic literature. Especially the second approach, creating a dataset problem space, has become an important enabling technology in AI, because it allows AI to be discussed from a mathematical standpoint. Instead of asking what AI is philosophically, the new questions were whether a certain dataset makes sense and what score a certain neural network achieves on it. These hard scientific questions can be addressed with existing tools like the backpropagation algorithm, statistics, and diagrams. A well-defined problem space, realized as a machine learning dataset, was the major step from the former alchemy-driven AI philosophy towards a scientifically defined AI.
The published literature from 1990 until 2020 shows that, over the years, more advanced datasets were created and solved with more advanced neural network algorithms. Early datasets in the 1990s were short tables described only in the appendix of a paper, while in later years entire papers described a dataset in detail because it was the main subject.
Modern AI published after 2020 is mostly the result of improved problem formulation. Since then, countless physical problem descriptions and thousands of datasets with accompanying problem descriptions have become available. These datasets are very realistic; they deal with real challenges like grasping objects, biped walking, path planning in a maze, and even natural language understanding. We can conclude that the search for artificial intelligence is equivalent to the search for a problem formulation: only if a problem is formulated in a mathematical format can a computer be used to solve it.
Outline for documentation about a GUI operating system for the Commodore 64 written in Forth
## Documentation Outline: C64 Forth GUI Operating System
This outline details the structure for documenting a GUI operating system written in Forth for the Commodore 64.
**I. Introduction**
* A. Background on the Commodore 64 and its limitations.
* B. The choice of Forth as the development language and its advantages (speed, extensibility, etc.).
* C. Overview of the GUI operating system's goals and features (windowing, icons, mouse support, etc.).
* D. Target audience for the documentation (users, developers).
**II. System Overview**
* A. Architecture of the operating system.
* 1. Memory map and usage.
* 2. Interaction with the C64's hardware (VIC-II, SID, CIA).
* 3. Forth wordset extensions for GUI functionality.
* B. Boot process and system initialization.
* C. Core components:
* 1. Kernel (task scheduling, memory management).
* 2. Window manager (window creation, movement, resizing, z-ordering).
* 3. Input handling (keyboard, mouse/joystick).
* 4. Graphics primitives (drawing lines, rectangles, bitmaps).
* 5. Event system.
**III. User Guide**
* A. Getting started:
* 1. Loading and running the OS.
* 2. Basic navigation and interaction.
* B. Desktop environment:
* 1. Icons and their functions.
* 2. Menus and dialog boxes.
* 3. Window management techniques.
* C. Built-in applications (if any):
* 1. File manager.
* 2. Text editor.
* 3. Other utilities.
* D. Customization options.
**IV. Developer Guide**
* A. Forth wordset reference for GUI programming:
* 1. Window creation and manipulation words.
* 2. Drawing and graphics words.
* 3. Event handling words.
* 4. Input handling words.
* B. Programming examples:
* 1. Creating a simple window.
* 2. Handling user input.
* 3. Drawing graphics.
* 4. Implementing a basic application.
* C. Memory management considerations for developers.
* D. Guidelines for creating applications for the OS.
* E. Debugging tips and techniques.
**V. Technical Details**
* A. Memory map in detail.
* B. Interrupt handling and timing.
* C. Low-level graphics routines.
* D. Source code organization and structure.
* E. Forth wordset implementation details.
**VI. Appendices**
* A. Glossary of terms.
* B. Troubleshooting common problems.
* C. List of known bugs and limitations.
* D. Forth wordset quick reference.
* E. Hardware requirements and compatibility.
**VII. Index**
This detailed outline provides a comprehensive structure for documenting a complex project like a GUI OS for the C64. It caters to both end-users and developers, ensuring that the system is both usable and extensible. Remember to include plenty of screenshots and code examples to illustrate the concepts and make the documentation more engaging.
## I. Introduction - B. The choice of Forth as the development language and its advantages
This section explains why Forth was chosen as the development language for this GUI operating system on the Commodore 64. Given the C64's limited resources and the specific requirements of a graphical user interface, Forth offers several distinct advantages over other languages like BASIC or assembly language.
**Reasons for Choosing Forth:**
* **Speed and Efficiency:** Forth is known for its speed and efficiency, especially when compared to interpreted languages like BASIC. Its threaded code architecture and direct access to machine code instructions allow for fast execution, crucial for real-time graphics rendering and user interaction in a GUI environment. This is especially important on the C64's relatively slow 6510 processor.
* **Small Footprint:** Forth systems are typically very compact, requiring minimal memory overhead. This is a significant advantage on the C64 with its limited 64KB of RAM. The small size of the Forth interpreter and the ability to define custom words allows for efficient use of memory, leaving more space for application code and data.
* **Extensibility and Flexibility:** Forth is highly extensible. Developers can easily define new words (subroutines) that perform specific tasks. This allows for the creation of a custom wordset tailored to the needs of the GUI operating system, including words for window management, graphics primitives, and event handling. This flexibility is essential for building a complex system like a GUI.
* **Interactive Development:** Forth encourages interactive development through its read-eval-print loop (REPL). Developers can test code snippets directly in the interpreter, making it easier to debug and experiment with different approaches. This interactive nature can significantly speed up development time.
* **Direct Hardware Access:** While not unique to Forth, the language makes direct access to hardware relatively straightforward. This is essential for a GUI operating system, which needs to interact closely with the C64's hardware, such as the VIC-II graphics chip, the SID sound chip, and the CIA input/output controllers. Forth allows developers to write code that directly manipulates hardware registers, providing fine-grained control over the system (a short sketch follows this list).
* **Structured Programming:** Although Forth is sometimes perceived as unconventional, it supports structured programming principles. By defining small, well-defined words, developers can create modular and maintainable code. This is important for a larger project like an operating system.
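As a taste of what extensibility and direct hardware access look like in practice, here is a minimal sketch that defines a new word writing straight to a VIC-II register. The register address $D020 is the standard C64 border color register, but the word name `BORDER!` is purely illustrative and not part of the documented wordset.

```forth
\ Illustrative sketch: extend Forth with a word for direct hardware access.
\ $D020 is the VIC-II border color register; BORDER! is a hypothetical name.
HEX
: BORDER! ( color -- )  D020 C! ;   \ store one byte into the VIC-II register
DECIMAL
2 BORDER!                           \ set the border to C64 color 2 (red)
```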
**Comparison to Other Languages:**
* **BASIC:** While BASIC is the C64's native language, it is interpreted, which makes it considerably slower than Forth. BASIC also has limitations in terms of extensibility and direct hardware access.
* **Assembly Language:** Assembly language offers the highest performance and control over the hardware, but it is much more complex and time-consuming to write and maintain than Forth. Forth can be seen as a good compromise, providing a balance between performance and development efficiency.
**In summary:**
Forth's speed, small footprint, extensibility, interactive nature, and direct hardware access make it a well-suited choice for developing a GUI operating system on the Commodore 64. It provides a powerful and efficient environment for creating graphical applications within the constraints of the C64's hardware. While other languages could be used, Forth offers a unique combination of features that make it particularly well-suited for this task.
## VI. Appendices - D. Forth Wordset Quick Reference
This appendix provides a quick reference to the custom Forth words implemented for the GUI operating system. It is intended as a handy guide for developers and does not replace the detailed descriptions in the Developer Guide.
**I. Window Management:**
* `WINDOW-CREATE` ( x y width height title -- window-id ) - Creates a new window.
* `WINDOW-DELETE` ( window-id -- ) - Destroys a window.
* `WINDOW-MOVE` ( window-id x y -- ) - Moves a window.
* `WINDOW-RESIZE` ( window-id width height -- ) - Resizes a window.
* `WINDOW-SHOW` ( window-id -- ) - Makes a window visible.
* `WINDOW-HIDE` ( window-id -- ) - Hides a window.
* `WINDOW-SET-TITLE` ( window-id title -- ) - Changes a window's title.
* `WINDOW-BRING-TO-FRONT` ( window-id -- ) - Brings a window to the front.
* `WINDOW-SEND-TO-BACK` ( window-id -- ) - Sends a window to the back.
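A minimal usage sketch for these words, assuming the stack signatures listed above; the window titles and `VALUE` names are arbitrary:

```forth
\ Hypothetical sketch: open two windows and change their stacking order,
\ using only the window management words from this section.
10 10 120 60 S" Editor" WINDOW-CREATE VALUE editor-win
40 40 120 60 S" Files"  WINDOW-CREATE VALUE files-win
editor-win WINDOW-SHOW
files-win  WINDOW-SHOW
files-win  WINDOW-SEND-TO-BACK      \ editor-win is now the frontmost window
```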
**II. Graphics Primitives:**
* `DRAW-PIXEL` ( x y color -- ) - Draws a pixel at the specified coordinates.
* `DRAW-LINE` ( x1 y1 x2 y2 color -- ) - Draws a line between two points.
* `DRAW-RECT` ( x y width height color -- ) - Draws a rectangle.
* `FILL-RECT` ( x y width height color -- ) - Fills a rectangle.
* `DRAW-BITMAP` ( x y bitmap-address width height -- ) - Draws a bitmap.
* `SET-PEN-COLOR` ( color -- ) - Sets the current drawing color.
* `GET-PEN-COLOR` ( -- color ) - Gets the current drawing color.
**III. Input Handling:**
* `MOUSE-X` ( -- x ) - Returns the current X coordinate of the mouse.
* `MOUSE-Y` ( -- y ) - Returns the current Y coordinate of the mouse.
* `MOUSE-BUTTON` ( -- state ) - Returns the state of the mouse button (0=released, 1=pressed).
* `KEY-PRESSED?` ( key-code -- flag ) - Checks if a specific key is pressed.
* `GET-KEY` ( -- key-code ) - Gets the last key pressed.
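The following sketch combines the input words above with a graphics word and `DELAY` from later sections: it plots a pixel at the mouse position while the button is held and leaves the loop when a key is pressed. The assumption that `GET-KEY` returns 0 when no key is waiting is mine and may differ in the actual implementation.

```forth
\ Hypothetical sketch: a simple doodle loop built from the input words above.
\ Assumption: GET-KEY returns 0 when no key has been pressed.
: DOODLE ( -- )
  BEGIN
    MOUSE-BUTTON 1 = IF
      MOUSE-X MOUSE-Y WHITE DRAW-PIXEL   \ plot at the current mouse position
    THEN
    10 DELAY                             \ brief pause between polls
    GET-KEY 0<>                          \ exit as soon as a key arrives
  UNTIL ;
```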
**IV. String Handling (if extended):**
* `S"` ( "string" -- addr len ) - Parses a string and returns its address and length.
* `STRING-COPY` ( source-addr dest-addr len -- ) - Copies a string.
* `STRING-COMPARE` ( addr1 addr2 len -- flag ) - Compares two strings.
**V. Memory Management (if implemented):**
* `ALLOCATE` ( size -- addr ) - Allocates a block of memory.
* `FREE` ( addr -- ) - Frees a block of memory.
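If these words are implemented, a typical usage pattern might look like the following sketch. Note that the signature above differs from ANS Forth's `ALLOCATE`, which also returns an error code; the buffer size and the use of the standard word `ERASE` are illustrative.

```forth
\ Hypothetical sketch: allocate a scratch buffer, clear it, and release it.
256 ALLOCATE VALUE scratch    \ reserve 256 bytes, keep the address in a VALUE
scratch 256 ERASE             \ zero the buffer (ERASE is standard Forth)
scratch FREE                  \ give the memory back to the system
```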
**VI. System Words:**
* `DELAY` ( milliseconds -- ) - Pauses execution for a specified duration.
* `SYSTEM-TICKS` ( -- ticks ) - Returns the number of system ticks.
**VII. Constants:**
* `SCREEN-WIDTH` ( -- width ) - Returns the screen width.
* `SCREEN-HEIGHT` ( -- height ) - Returns the screen height.
* `BLACK` ( -- color-code ) - Color code for black.
* `WHITE` ( -- color-code ) - Color code for white.
* `RED` ( -- color-code ) - Color code for red.
* `GREEN` ( -- color-code ) - Color code for green.
* `BLUE` ( -- color-code ) - Color code for blue.
*(And other standard C64 colors)*
**Example Usage (Illustrative):**
```forth
10 10 50 30 S" Test Window" WINDOW-CREATE VALUE my-win
my-win 20 20 WINDOW-MOVE
my-win 100 50 WINDOW-RESIZE
100 50 RED DRAW-PIXEL
10 10 100 10 RED DRAW-LINE
BLACK SET-PEN-COLOR
50 50 20 20 BLACK DRAW-RECT
100 100 30 30 BLUE FILL-RECT
my-win WINDOW-DELETE
```
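A second, equally hypothetical sketch shows how the constants from section VII can be combined with the window words to center a window on screen; the window size and title are arbitrary.

```forth
\ Hypothetical sketch: center a 100x60 window using the screen constants.
: CENTERED-WINDOW ( -- window-id )
  SCREEN-WIDTH  100 - 2/          \ x = (screen width  - window width)  / 2
  SCREEN-HEIGHT  60 - 2/          \ y = (screen height - window height) / 2
  100 60 S" Centered" WINDOW-CREATE ;
CENTERED-WINDOW VALUE centered-win
centered-win WINDOW-SHOW
```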
**Note:** This is a quick reference. Refer to the corresponding sections in the Developer Guide for detailed explanations, parameter descriptions, and error handling information. The actual word names and functionality may vary slightly depending on the specific implementation. This reference is intended to provide a convenient overview of the available GUI-related Forth words.
January 13, 2025
The algorithm bias in computer science
Until the year 1990, computer science worked with one fixed principle: the algorithm. An algorithm allows the computer to compute something, and different hardware systems, such as 8-bit and 16-bit computers, can execute the same algorithm, only at different speeds. The algorithm paradigm was not criticized; it was applied to all existing problems. Programming an operating system, implementing faster hardware, or writing word processing software always came down to inventing and implementing algorithms.
It should be mentioned that the algorithm paradigm failed to solve AI-related problems. Even though some path planning and game tree search algorithms are available, they cannot be used to solve real-world problems because of the large state space. Algorithms are only powerful for solving non-AI-related problems.
There is a simple reason why AI research entered an AI winter in the 1990s: the old paradigm of the algorithm no longer worked, but a new paradigm had not been invented yet. It took until around the year 2000 before deep learning, with its focus on the dataset, partially replaced the former algorithm perspective. By definition, a dataset does not consist of executable program code; it is a file on the hard drive which stores data. The absence of executable programs provides additional freedom to capture domain-specific knowledge. There are datasets available for all sorts of practical problems like image recognition, question answering, game playing, motion capture, and so on. Creating and interpreting these datasets does not belong to classical computer science; it is an intermediate discipline between computer science and the concrete problem.
Perhaps it makes sense to take a step back and explain what the purpose of an algorithm is. An algorithm is the answer to a problem. For example, if the task is to sort an array, the bubble sort algorithm will solve it; if a line should be drawn on a pixel map, Bresenham's algorithm will do the job. The precondition is always that the problem has already been defined. In the case of AI this is not the case. Problem definition cannot be realized with algorithms; it is done with a dataset. An OCR dataset defines an OCR challenge, a motion capture dataset defines an activity recognition problem, and a VQA dataset defines a VQA challenge. So we can say that dataset creation is the preliminary step before a concrete algorithm can be invented. Once the problem is fixed, e.g. an OCR challenge, there are multiple algorithms available to solve it, e.g. neural networks, rule-based expert systems, or decision tree learning.
Until the year 1990 there was little awareness of problem definition for AI and robotics tasks. A colloquial goal like "build a biped robot" is not mathematical enough to start software development. What is needed instead is a very accurate and measurable problem definition. In the best case the problem is given as a video game combined with a dataset in tabular form, including a scoring function to determine whether a certain robot is walking or not. This kind of accurate problem definition was missing before 1990, and this is the reason why AI was not available.
Timeline of AI history
- 1950-1990: computer generations #1 to #4
- 1990-2000: AI winter
- 2000-2010: deep learning hype
- since 2010: AI hype
- since 2023: ChatGPT available
Classical computing consists of computer generations #1 to #4, which ran from 1950 to 1990. The last and most important fourth computer generation lasted from 1980 until 1990 and included the home computer, internet protocols, and the PC with graphical operating systems. The period from 1990 until 2000 can be called the last AI winter. During this decade the outlook for AI and robotics was pessimistic; it was unclear how to build and program such devices. Since 2000 there has been more optimism, which can be summarized with the buzzword deep learning. Deep learning means creating a dataset and searching the data for patterns. Since around 2010, artificial intelligence has become a mainstream topic; new innovations like the Jeopardy-playing AI bot were created and new kinds of robots became available. Around 2023 the AI hype reached the mass market and large language models became available.
Roughly speaking, the history of computing can be divided into two periods: 1950-1990, which was classical computing, and 1990 until today, which is the era of artificial intelligence, after a slow start in the first decade.
Transition from 4th to 5th computer generation
The 4th generation is classical computing, which consists of software and hardware, while the 5th generation is about future AI-based computing, which is not available today. So there is a transition that should be explored in detail.
Classical computing is well understood because it has evolved over decades. The first computers were created in the 1940s with vacuum tubes, while later and more powerful computers were based on transistors and microprocessors. Even though the advancement was huge, the underlying perspective remained the same: the goal was always to improve the hardware and write more efficient software. By the 1980s this had produced advanced workstations connected to a worldwide internet.
The more interesting and seldom discussed problem is how to make these powerful workstations from the 4th generation intelligent, so that the computer can control a robot. The answer can be derived from the history of technology between 1990 and 2020. After a slow start in AI during the 1990s, known as an AI winter, a major breakthrough arrived during the 2000s, which became known as the deep learning hype. Deep learning basically means taking the neural networks that already existed in the 1990s and training them on larger datasets with a deeper layer structure. Such training became possible because of Moore's law, which has provided ever faster GPU and CPU technology since 2000.
Deep learning alone does not explain how to make computers smart, but it provides an important puzzle piece. The preliminary step before a deep learning model can be trained is to prepare a dataset. During dataset preparation, a problem has to be encoded in a tabular format, which converts a loosely specified problem into a precisely specified one. Examples created in the 2000s were motion capture datasets, OCR datasets, and even question answering datasets. The existence of these datasets was a major milestone in AI development, because it allowed computers to be used to solve real-world problems.
Datasets are important for deep learning because they are used to determine the score of a neural network. A trained network reproduces the dataset with a score between 0 and 100%, for example by counting how many digits the network has recognized correctly in an OCR task; if it recognizes 4,500 of 5,000 digits, its accuracy is 90%. Such a score makes it possible to compare different architectures and different training algorithms side by side, which transforms the former AI alchemy into a mathematical science discipline.
The logical next step after deep learning was natural language processing. The idea was to use natural language to annotate datasets and to use neural networks for chatbot purposes. This development took place from 2010 until 2020 and resulted in the large language models that have been available since 2023.
Let us summarize the development in decades:
- 1990-2000 AI winter with no progress at all
- 2000-2010 Deep learning hype with a focus on datasets
- 2010-2020 dataset based natural language processing
- 2020-today Large language models
This short overview shows that artificial intelligence was not created by randomly discovering a new algorithm; AI was the result of a long-term research effort that spans the 4th computer generation, deep learning, natural language processing, and large language models.
If we want to describe the development as a single concept, it would be the shifting focus from algorithms towards datasets. Classical computing until 1990 was shaped by the algorithm ideology: the goal was always to build new hardware and run software on it as fast as possible. This was realized with efficient languages like C in combination with CISC microprocessor architectures. Unfortunately, even the most advanced 4th generation computers could not provide artificial intelligence. Even supercomputers built in 1990 were not able to beat the best human chess players, and they were not powerful enough to control biped robots.
To close this gap, something more powerful than algorithms alone was needed: the dataset. A dataset is a problem formulation, a kind of riddle stored in a table, and it allows a system to be benchmarked. It is a quiz which has to be solved by a computer. This kind of problem-oriented computing makes it possible to create artificial intelligence by selecting certain datasets. For example, to control a robot the following datasets might be useful: an object detection dataset, a visual question answering dataset, and an instruction following dataset. If a neural network is able to solve all of these benchmarks, it is able to control a robot.
One interesting insight of modern AI research is that the breakthrough has little to do with computing hardware anymore. A modern robot built in 2025 does not require advanced hardware, nor does it run fundamentally new software; all of its components could have been developed in the 1980s. A microcontroller from the 1980s combined with the C language of the 1980s is powerful enough to create an advanced robot. What has changed is the problem addressed by the robot: the instruction following dataset mentioned above was not available in the 1980s. A machine-readable problem formulation is the key improvement.
January 08, 2025
From pseudo robots to real robots
January 05, 2025
Computer science before the advent of Artificial Intelligence
The history of computing is divided into generations numbered from 1 to 5. The third generation ran from 1965 to 1975 and included minicomputers and integrated circuits, while the 4th generation ran from 1975 to 1990 and included microcomputers and microprocessors. The fifth and last computer generation was never established, because it is the AI revolution in which computers are able to think on their own.
To explain this last, 5th generation of computing, it is necessary to give an overview of its precursor, which is a classic example of computing. Most developments available today have their origin in the 4th computer generation. This can be seen best in the mid 1990s, when most of today's technology was already available, for example powerful 32-bit CPUs, GUI operating systems, and of course the World Wide Web.
Even advanced topics in computing like the MPEG-1 standard for audio and video compression were available in the mid 1990s. Innovations like large DRAM chips, hard drives, color monitors, and the Ethernet standard were available too. On the software side, powerful tools like C++ compilers, UNIX-compatible operating systems, and multitasking were established by the mid 1990s. So we can say that the 4th computer generation from 1975 to 1990 is essentially equal to modern computing. It is hard or even impossible to find a major technological innovation that was not already available in the mid 1990s.
The 4th computer generation is important from a historical perspective because it contains all the elements of computing. The idea is to build a machine, called the computer, which has input devices like a keyboard and a mouse, a display, and electronic circuits on a motherboard that process information. Such a machine can be programmed for any task because it is a universal computer. The concept existed before 1975 as a theory, but it took until around 1995 before all the hardware and software was widely available. It is not possible to develop this principle much further; the mid 1990s mark the end of that development. What was developed from 1995 until 2025 consisted only of small improvements to the general principle; for example, the hardware was made more energy efficient and the C++ language standard was revised in minor ways.
A high-end workstation from 1995, equipped with MPEG capabilities and plugged into the internet of that time, looks pretty similar to a present-day PC found in most households. Such a computer can play back video, can be used to type documents, and has access to the World Wide Web.
From a bird's-eye perspective, the computer generations describe certain kinds of computers: the 4th generation corresponds to the personal computer, the 3rd generation to the minicomputer, the 2nd generation to the mainframe programmed in Fortran, and the 1st generation to the early vacuum tube machines.
Before it is possible to invent the 5th computer generation, which consists of robotics and artificial intelligence, there is a need to describe what a robot is. At least in 1995, nobody was able to define the term precisely. The problem has to do with the self-understanding of computer science, which is about hardware, software, and computer networks. The hardware is connected to a network with Ethernet cables and the software runs on a single computer, but it remains unclear where the artificial intelligence is located.
Sometimes it was assumed that AI is located in the software as a certain sort of algorithm. On the other hand, no such algorithms are available to realize artificial intelligence, so this definition is not valid. A possible starting point for describing the transition from the 4th to the 5th computer generation is to investigate what AI is not about.
A common assumption in the 1990s was that a teleoperated robot is an anti-pattern, because such a system is not controlled by a computer and cannot be called intelligent. This understanding gives a first hint at what AI is about: it has to do with the difference between man and machine. A human is intelligent because a human can solve problems and drive a car, while a computer is not.
A possible working thesis is that AI cannot be reduced to hardware plus software; it has to do with man-to-machine communication, or more precisely, man-to-machine communication based on natural language. A high-level interface results in a working robot. Such a robot is able to solve tasks, and it is even possible to automate the task so that no human operator is needed anymore.
Getting started with Linux Mint
When the Ubuntu distribution became available in 2004, the concept was frequently criticized as too superficial. Unlike Slackware or Arch Linux, Ubuntu users need only little Unix knowledge, and a self-compiled kernel is unnecessary as well.
Linux Mint can be seen as taking the Ubuntu idea one step further. Ubuntu users at least had, by reputation, a certain willingness to engage with the open source movement and were familiar with simple command line commands. With Linux Mint, this entry barrier has been removed: the typical Mint user has never opened the command line and, until two weeks ago, was still a die-hard Windows user. Accordingly, the discussions in the Linux Mint forums proceed differently from the threads in the Fedora, Arch Linux, or Debian forums.
Instructions on how to solve problems yourself via the command line are not found at all, because the assumption is that the average Mint user does not even know the simplest commands like top, systemctl, sudo, or man. Instead, system settings are made exclusively through GUI tools, exactly as users were accustomed to from Windows. Linux Mint users also love to experiment with the look of the desktop, from the choice of wallpaper and color scheme all the way to modding the complete appearance, so that Mint suddenly looks like good old Windows XP.
There is no willingness to engage with the C programming language in order to contribute actively to open source projects. Mint users have neither years of experience with Linux as a server operating system nor with writing Bash scripts. Instead, Mint users are mentally still in the Microsoft world, where one clicks on colorful menus, which stands in the way of a deep understanding of the operating system. It is quite common for users to misconfigure their system to the point that it no longer boots properly after only a few days, and then to claim that a virus caused it, even though such a thing famously does not exist on Linux.
Unfortunately, this does not stop the Mint community from sharing its questionable user experience with others, and so social networks are full of reports about Mint being installed on older laptops.
December 03, 2024
Should software be updated?
Before this controversial question gets answered, let us take a step back and describe the situation from a general perspective. Software development usually happens upstream, which is technically realized as a git repository. Multiple software developers commit changes to the repository frequently; they fix bugs, clean up the code, and introduce new features. It is not practical to reinvent this workflow, because programmers submitting updates to a shared git repository is the industry standard.