October 27, 2023

The Linux kernel as a device driver repository

Most of the lines of code within the Linux project belong to device drivers. It is not a matter of 1,000 lines of code, nor 1 million lines of code: device drivers account for around 20 million lines of code within the kernel. Instead of analyzing what the drivers are doing from a technical perspective, there is a need to describe the philosophy behind them.

In classical closed source operating systems, the device driver is provided by the hardware manufacturer. A certain company produces a flatbed scanner and has to deliver the hardware itself plus a 3.5" floppy disc which contains the drivers to run the hardware. The same applies to a mouse, a USB stick, a camera and so on. In the 1990s it was common that device drivers were delivered on physical discs inside the box of the hardware. The end user was asked to insert the disc into the PC and run a program, mostly a setup.exe, to install the drivers. Then and only then was the hardware working.

More recent versions of Windows install the needed drivers in the background without human intervention. The Windows operating system detects with plug and play which hardware is in use and downloads the drivers from the internet. These drivers are mostly written in the C language and compiled into executable binary programs.

In contrast, the Linux kernel works with open source hardware drivers. The kernel is basically a collection of drivers for getting access to all the devices like CD-ROM, SSD, Ethernet card and so on. The shared similarity between Windows and Linux is that somebody has to write all the drivers. Within the Windows ecosystem this task is handled in a decentralized way. Each company has to write its own drivers and doesn't explain to the public what the code is about. In contrast, Linux works with a centralized model. There is only a single kernel and all the drivers are in the kernel.

The focus on the device drivers might explain why, apart from the three major operating systems (Windows, Linux and macOS), there are no alternative projects available for the desktop PC. Everybody who likes to establish a new operating system has to make sure that all the hardware is working with this operating system. The only way to do so is to write all the needed drivers from scratch. This will take a lot of man-years. For this single reason there is no Forth operating system, and smaller projects like Haiku are not working well enough for production machines. The cause is that most devices won't work with these systems. That means, the proud user of the Haiku OS plugs a USB stick into the PC but nothing happens. The OS isn't detecting the hardware and has no executable driver for it.

The major cause why device drivers are released as closed source for Windows is that writing the software is a time consuming task. A single expert programmer is able to create around 10 lines of code per day. Even if the programmer has access to all the hardware specifications and has lots of experience, he will need months up to years until the driver for a certain device is written. It doesn't make sense for a hardware company to release the software as open source, because the source code, including the ability to write code for new devices, is an asset for a company which can't be shared with other companies.

The Linux ecosystem works the opposite way. It's a mandatory rule that all the code has to be released as open source. If a certain driver is not available, then the device won't work with the kernel. The result is that the quality of the hardware drivers in Linux is lower and that the number of drivers is smaller. There is a lot of hardware available which is supported in Windows but not in Linux. This is not because of technical requirements but because of the ecosystem, and especially the time consuming effort of writing a driver in C.

Suppose it were possible to create a universal device driver in 10k lines of code which could interact with any possible hardware. Then it would be pretty easy to create new operating systems from scratch. All that is needed is this single device driver, and some additional programs can be added. Unfortunately there are technical limitations which prevent such a universal driver from being realized. Existing computing hardware is so complex and so diverse that every single device will need a dedicated driver.

Let us estimate how many different hardware devices are available. Suppose a single device like an Ethernet card is controlled by a driver with 10k lines of code. There are 20 million lines of code in the Linux project for hardware drivers, so the total number of different devices is 2k. That is comparable to a large museum fully equipped with computer hardware from floor to ceiling. In addition, modern computer hardware is more complex than previous models. For example, the average mouse is equipped with infrared sensors, while the typical mouse in the 1990s used a simple rubber ball to detect the movements. So we can estimate that in the future the complexity will grow further, which results in more different devices which need more lines of code.

October 26, 2023

Writing device drivers in C

The core element of any operating system is a collection of device drivers. These drivers ensure that hardware components like a mouse, the graphics card, Ethernet cards and the keyboard are available for the end user. The only programming language for implementing a device driver is C. C gets compiled into assembly instructions and this ensures the maximum efficiency.

Suppose the idea is to realize an operating system in Forth with the aim to run it on a stack machine. In such a case, the C language isn't available and the user is forced to rewrite the device drivers in Forth. This will produce a situation in which all the code isn't written yet but has to be rewritten. It will take lots of man-years to rewrite the C device drivers in a Forth dialect. Even if the programmers are highly motivated, they won't be able to fulfill the task within the next 30 years.

One possible attempt to overcome the Forth bottleneck is a virtual machine and a high level language like BASIC. The BASIC program gets converted into byte code which is executed on a virtual machine. The virtual machine runs on top of a Forth chip, so the programmer doesn't need to program in Forth anymore. The only problem is that BASIC is a high level programming language, while device drivers are written in a low level language.

Unfortunately, it is not possible to execute C in a virtual machine, because C code needs direct hardware access. The only way to execute C code is by compiling it into assembly language. But compiling C code into assembly is only feasible for register machines, not for stack machines. There is no such thing available as a C to Forth converter, and even if it were possible to implement such a thing, it couldn't be applied to device drivers.

Device drivers are an important element of any operating system. They make sure that all the hardware like printers, USB ports, webcams and so on is working. Programming an operating system while ignoring the device drivers won't make sense. It seems that the low level language C is the only option for writing device drivers. This situation makes it unlikely that operating systems will work on stack machines.

The problem is not located in technical terms. Forth is a great language for getting direct access to hardware. There are some microcontrollers available which run Forth on bare metal. The more serious problem is that a desktop operating system deals with thousands of different hardware devices. Writing the code for all the devices will take an endless amount of man-years.[1] This effort is very costly. Rewriting existing C device drivers in Forth is too expensive. This prevents such a project from getting started. It seems that the x86 architecture is the only valid computer system which is able to run desktop operating systems.

Perhaps it makes sense to go a step backward and understand why exactly C was chosen in mainstream computing. The goal was to write device drivers which comprise millions of lines of code. Instead of writing this code in assembly language, which is different for each processor, the idea was to write the device drivers in C. C is more portable than assembly and is easier to learn. What is needed in addition is a compiler for generating the assembly instructions automatically. This paradigm has been valid in computing for decades.

The only bottleneck for a C compiler is that it needs a certain target architecture, and the x86 architecture is compiler friendly. Possible alternatives like a RISC CPU and especially a Forth CPU make it hard to convert C code into assembly instructions.

The Linux kernel contains at least 5 million lines of code reserved for device drivers.[1] A potential Linux alternative written in Forth has to provide the same functionality. From a technical point of view it is possible to rewrite the device drivers in Forth, but from an economical perspective it doesn't make sense. It's a well known rule of thumb that a single programmer can write down only 10 lines of code per day, no matter which programming language he prefers. And the open question is: who exactly should write all the code in Forth?

There is a certain reason why all the desktop operating systems were written in C: existing code was written mostly in C, and it is much easier to add something to an existing codebase than to rewrite it from scratch. The untold assumption is that all the 5 million lines of code are needed, otherwise the computer isn't able to detect or manage certain hardware, for example a graphics card or a network card. The second assumption is that even the Forth language will need device drivers. It is not possible to write a Forth OS in 10k lines of code which provides the same functionality as the existing device drivers which are written in 5 million lines of code. This might explain why Forth is not very popular in mainstream computing. Even if the concept is interesting from a theoretical point of view, it can't answer the question how to write all the source code which is needed in an operating system. Existing Forth tutorials explain to the newbie what a stack machine is about and how to combine Forth words into programs. But this ability is not enough to realize full blown desktop operating systems in the style of Linux, macOS or Windows.

In contrast, the C language explains very well how to handle complexity. According to the C language paradigm, the programmer has to write C code for new hardware and commit this code into the existing codebase, and this will improve the functionality of the Linux kernel. That means there are 5 million LoC already there, a new device driver will add around 200 lines of code, and it's only a detail question how to program the code exactly.

So called Forth systems and stack oriented programming languages like Factor ignore the problem of device drivers, especially the question how to create millions of lines of code to get access to the endless amount of existing hardware.

[1] Kadav, Asim, and Michael M. Swift. "Understanding modern device drivers." ACM SIGPLAN Notices 47.4 (2012): 87-98.

Programming an operating system in Forth

Existing operating systems like Linux and Windows are very huge and have a lot of redundancy. For example, in Linux there are many different GUI frameworks available. Also, the user has the choice between hundreds of programming languages like Fortran, C, C++, Pascal and so on.

A possible minimalist alternative to Linux would be realized in the Forth language, which can be executed on a GA144 Forth CPU. The main advantage of bare metal stack machines is that they have a low register count, which results in an energy efficient design. The major cause why Forth is not very popular in the computing mainstream is that it is much harder to program than C. Even with a good tutorial, it is complicated to write down instructions in Forth. Even a register based assembly language is easier to explain than Forth.

But this single bottleneck can be solved with a high level language interpreter, for example a BASIC interpreter which is encoded in Forth. There are some examples available from the past. The idea is to write a program in Forth which is able to execute a BASIC program. The main advantage is that newbie programmers are not forced to type in the routine in Forth, but can do so in a normal BASIC dialect.

Let us summarize the idea of a Forth operating system. The kernel is written of course in Forth, because this code gets executed with the maximum speed on a stack machine. The kernel has access to the hardware including the graphics display. High level programs like a text editor, a hello world program or a prime number generator are written in BASIC and executed by the BASIC interpreter. This ability makes it likely that a larger audience is interested in writing new software or porting existing software to the Forth operating system.

Estimate the costs for Forth programming

 
The current computing mainstream works with a combination of C programs which run on x86 CPUs. From the perspective of Forth advocates, the main criticism of this principle is that x86 hardware has a huge transistor count and software written in C is not very efficient. Let us investigate how a potential alternative would look.

The goal is to create a workstation desktop with stack machines which are programmed in the Forth language. Current desktop ready operating systems like Linux and Windows have a space requirement on the hard drive of around 50 GB in total for all the libraries and programs, which is equal to 1250 million lines of code. Because the existing C code base can't be executed on the GA144 and similar Forth processors, there is a need to rewrite the entire operating system in Forth. The assumption is that, similar to C, a single Forth programmer is able to write 10 lines of code per day. So it will take 342k man-years until this project is done. The bottleneck is that such a project will cost a huge amount of money.

Let us go a step backward and describe what the underlying problem is. Existing operating systems are mostly written in C code. In addition, the amount of code lines is very huge. There have been many attempts in the past to modify the development process. Some kernels for operating systems were written in Pascal and even assembly language, while other projects were created as lightweight single floppy disc operating systems. None of these alternatives was successful. It seems that only the combination of the C programming language plus a huge amount of code lines results in a successful operating system.

This makes it hard to decide for the Forth ecosystem, which operates with a different understanding of excellence. The typical Forth project is of course realized in the Forth language, and it has a low amount of code lines, which is described as high efficiency coding. In contrast, existing operating systems are criticized as bloatware.

Nevertheless there is a need to discuss possible future computer architectures because of the end of Moore's law. Current x86 CPUs have an overheating problem as a result of too many transistors in a small space. So there is a need to build hardware with a lower footprint. In the past such a CPU design was available, for example the Pentium I was equipped with only 3 million transistors. Unfortunately, the Pentium I is too slow for modern requirements, and putting more voltage into the device won't work.

What is needed is a low transistor count CPU architecture which can run C programs. A C program is never executed on the bare metal but is translated first into assembly language. The bottleneck is located in compiler design. Only if a compiler is available for a new CPU is it possible to port existing C code to this platform. So we can say it is not about ARM CPUs, Forth CPUs or RISC CPUs, but what matters are the compilers for this hardware.

A compiler is an intermediate between high level C code which is already there and a certain computer hardware which has to be invented from scratch. The existing x86 hardware itself is not very powerful; what makes this hardware relevant is the existence of powerful compiler toolchains which convert existing software into assembly instructions.

Programming a compiler is perhaps the most advanced topic within software engineering. In contrast to a normal program such as a video game or a spreadsheet application, a compiler transforms one program into another program. It has much in common with an interpreter, which is easier to realize but much slower. A good example of an interpreter written in assembly is the BASIC interpreter in the Commodore 64, written by Microsoft. This BASIC interpreter is able to fetch the next statement in a program and execute it on the 6502 CPU. For doing so, the high level BASIC command is converted into low level assembly instructions.

The main task of a compiler / interpreter is to mediate between human programmers, who are not familiar with assembly language, and the CPU, which accepts only assembly instructions. Without such an interpreter the computer can't execute BASIC programs.

It doesn't make sense to explain to the BASIC newbie that he should learn assembly if he likes to paint graphics on the screen. The reason why the programmer prefers BASIC over assembly is that it is easier to use and allows him to code the same program in a shorter amount of time. What is needed instead is a fast interpreter, or even better a fast compiler, which provides an additional layer between man and machine.

October 25, 2023

Writing a compiler for Forth

There is a reason why non x86 CPU architectures are ignored by mainstream computing: this hardware isn't compiler friendly. For explaining this term we have to sort possible CPU architectures by their complexity.

The easiest hardware to build is a stack based Forth CPU. Such a machine can be realized with a low transistor count. A Forth CPU supports a limited amount of assembly instructions. It has no registers but only a single data structure, which is the stack. The next logical step in processor design is the RISC architecture. RISC stands in the middle between CISC and stack machines. A typical example of a RISC machine is the MIPS CPU, which has only a few registers. On the other end of the scale there are full blown AMD64 compatible x86 processors like the famous Intel Core i series, which is used in mainstream computing and is powering more than 1 billion desktop PCs worldwide.

The acceptance of RISC CPUs is low, while the market share of Forth CPUs is nearly zero. Both processors are difficult to program, or to explain it more technically, it is difficult to write a C compiler for these systems. There are some C compilers available for MIPS processors, but they are complicated because a lot of optimization is needed. In contrast, writing a C compiler for AMD64 is much easier, because the underlying hardware provides more high level assembly instructions.

The best way to program Forth CPUs and also MIPS processors is by typing in the assembly instructions manually. This means avoiding any compiler in between, and the user has to think like the mentioned CPU. It's obvious that most programmers are not interested in such an interaction, because it takes an endless amount of time to program complex software directly in assembly. This situation prevents a rise of RISC and stack based Forth CPUs.

What mainstream programmers are doing all the time is formulating software in C. C is the only important language in modern software development. Nearly all the operating systems like Linux, Windows, macOS and even Haiku are written in C/C++. The main advantage over assembly instructions is that C code can be written much faster; this allows creating full blown GUI systems including libraries. The result is that a low efficiency CPU design like the x64 processor is preferred over advanced chip designs like RISC and stack machines.

A possible attempt to make non x86 processors more popular would be the existence of advanced compilers. From a technical perspective every CPU is Turing capable, which means the same algorithm written for a CISC CPU can also be executed on a stack machine. The only bottleneck is that somebody has to create the code first. In modern computing the automatic compilation process will generate the code. So there is a need to create modern C compilers which are able to generate code for targets like Forth CPUs and MIPS CPUs.

From a technical perspective, a compiler is a translator. It takes a C program as input and generates assembly instructions as output. In the case of a stack machine the needed assembly instructions have a certain format which is known as Forth, or as Forth code. A Forth like stack machine is a minimalist computer which is of course controlled by a program. This program needs to be written before useful behavior can be generated.

To understand the pros and cons of stack based machines better, it makes sense to take a closer look into MIPS assembly. RISC based MIPS CPUs have the role of an in-between. They are not as minimalist as Forth, but they are less complex than x86. MIPS CPUs have an integrated stack which allows pushing and popping values, so there is a similarity to Forth. In contrast to Forth, MIPS provides further storage capacity and more complex commands. MIPS can be programmed in assembly language and in high level C as well. Of course the assembly language is more efficient, and especially for embedded systems it is the preferred choice of programmers. On the other hand, C has a faster development cycle, so there is a need to use this high level language as well.

What we can say for sure is that MIPS assembly and Forth assembly are both examples of low level languages. Even though Forth advocates claim that Forth is also a high level language, the claim can be rejected, because Forth is different from C. C is a high level language because it allows formulating algorithms from a non CPU perspective. A C programmer doesn't need to know how many registers the underlying hardware has or what a push to the stack is about. A C programmer writes the code only in C and then the compiler generates the machine instructions.

October 19, 2023

LaTeX at its worst

Usually, the LaTeX community tries to explain how wonderful the output of TeX is. The self understanding is that LaTeX generated academic papers are easier to read because they are formatted with certain accessible rules. The opposite sounds more realistic. LaTeX generated output has a lack of pictures, has no sections, and doesn't provide any helpful bullet points, free spaces or boxes. Instead, LaTeX generated output looks like a monoculture. Every word has the same size, every paragraph looks the same, and it's boring to read the manuscript.

October 11, 2023

The paradox success of the Linux operating system

There are rumors out there that Linux on the desktop is dead. Even Linux advocates have to admit that there are too many Linux distributions available and that the market share of the Linux desktop is low and will become smaller in the future. It is hard or even impossible to convince a Windows user to give the open source project a chance. This sad situation is new; some decades ago, in the late 1990s, Linux was discussed differently. It was seen as a valid alternative to Microsoft Windows, and serious attempts were started to use it on production ready PCs.

These projects were stopped a long time ago and nearly all of them were a failure. The end users were not convinced that Linux is better. Even in a university setup it is nearly impossible to install Linux on one of the workstation computers. Beginners and expert users alike have decided that Windows is much better.

The paradoxical situation is that at the same time, Linux has become a great success. The GNOME desktop environment looks better than ever, and the Linux kernel is able to communicate with more hardware devices than in the 1990s. How can it be that at the same time, Linux is dismissed by the world, and the same world supports Linux that much? It is indeed hard to explain this paradox, but let us try.

What we can say for sure is that the idea of embedding Linux into an existing Windows installation has become a great success. There are lots of books available about the Windows Subsystem for Linux, running Linux in a virtual machine and using winget for installing open source software in Windows. These attempts can be called a Linux light. Instead of using Linux as an alternative to Windows, a modern Linux distribution is started as an app inside Windows or inside the browser. It seems that such an interaction fulfills the needs of the user better. From a technical perspective the difference is the absence of the Linux kernel.

A classical Linux distribution like Ubuntu is started on bare metal hardware. The hardware is controlled by the kernel, and on top the X Window System gets started. Such a Linux-only installation has to be called a dead end. What happens in reality is to use Windows as the bare metal operating system for a computer and then run Linux programs and Linux tools inside the Windows computer. There are many options available for doing so.

To understand the situation we have to describe the components of a Linux system. The low level layer is the kernel; the visible layer on top is the GUI. Both elements are not wanted by the mainstream PC user. As a kernel he is using the Windows operating system, and the GUI frontend is also rendered by Windows. What is interesting and new for the end user are the Linux middleware programs.

These programs are seldom described. Examples are the bash scripting language, the awk tool, the ghostscript program, the grep command, the ffmpeg tool, C++ compilers, webservers and the pandoc text converter. These programs run in between the Linux kernel and the GNOME GUI. They do not belong to the kernel and they do not belong to the GNOME GUI, but they are hidden inside a Linux distribution.

The Windows Subsystem for Linux is doing the same. It provides access to Linux middleware tools. It seems that the average Windows user has a need for these tools. They are powerful, highly mature and come free of charge. Also, there are no Windows programs available which provide the same functionality.

Let us describe the situation from the outside perspective. The classical Linux distribution is rejected by Windows users. The users have no need to install an alternative operating system, and they don't need buggy hardware drivers written by Linus Torvalds. On the other hand, the average Windows user has a huge demand for the Linux middleware tools which are preinstalled in any Linux system. The user likes to run these programs within Windows.

This kind of description assumes that Linux consists of two parts. Some elements of a Linux distribution are dismissed by Windows users, namely the kernel and the GUI, while other parts (the middleware) are highly wanted.

In the past, there was a clear border between Linux and Windows. Programs like LaTeX, the Apache webserver, the pandoc tool or awk were only available in Linux. For getting access, the user was forced to boot his computer with a Linux distribution, which was highly complicated. The idea was that the user has to decide between two operating systems. Either he is using Windows and can use only Windows software, or he installs Linux and then gets access to bash including powerful programs.

In more recent years, a new paradigm has become available which can be called Linux without Linux. This paradoxical description refers to a situation in which Linux has become a great success and a failure at the same time. What we can say for sure is that the Linux kernel and the GNOME desktop both are a failure. Booting a PC with the Linux kernel and running the GNOME desktop environment is a rare situation which is rejected by 99.9% of the PC users. The only thing they are booting on their PC is the normal Windows installation including the Windows kernel and the Windows GUI. The attempt to replace these operating system elements with open source alternatives has failed. But this doesn't mean that the users are not interested in Linux. They want to run a different sort of software, which is the mentioned middleware. This middleware is missing in a normal Windows system for different reasons. Running textual commands was a sign of a working Unix system, while Windows was able to run GUI programs. These GUI programs can't compete with command line tools. Unix commands like grep, find, latex and other programs are very powerful commands which can't be replaced by Windows GUI software.

Over decades, Windows users were not allowed to run these textual commands. They were forced to decide: either they are using Windows or they are using Linux. But from a technical perspective it is possible to combine both systems into a single one. An existing Windows installation can be enhanced with Unix tools. This allows running bash scripts, starting a webserver, formatting LaTeX documents and searching in text files similar to a Unix system. Such an enhanced Windows can't be called a classical Windows OS, and it is not a Linux distribution, but it's something in between. It combines the Windows kernel and the Windows GUI with Unix tools like awk and bash scripting.

The chance is high that such an in-between operating system will become the future. Many Windows-only users are interested in these tools. It allows them to use Linux without installing it on bare metal. In contrast, there is an alternative approach available which should be explained briefly. From a technical perspective it is possible to boot a Linux distribution on a physical machine and then run Windows inside a virtual machine. Some Linux advocates are recommending this approach. The idea is that Linux becomes a host operating system for Windows machines which are running in an emulator. This approach sees Linux as the only operating system on a computer and would like to put Windows into a sandbox. The interesting situation is that such a Linux-only approach isn't used in reality very much, because it basically means uninstalling Windows and booting Linux on a bare metal PC, which isn't wanted by the average user for different reasons.

In contrast, the idea behind WSL is the other way around. Linux, or parts of Linux, is put into a sandbox and then the user can run Linux as an app inside a working Windows computer. This idea is preferred, and it is well documented in the literature. The concept has not a single name, but it can be called Linux without Linux. The Microsoft term is "Windows Subsystem for Linux", which means running a Linux distribution like Ubuntu in a virtual machine.

It's difficult to say how frequently WSL is used in reality. The only valid information is available for the opposite. According to the market share for operating systems we can say that roughly 0% of the PCs are running Linux, while at the same time 90% of the computers are running Windows. The result is that if someone likes to run Unix commands on a PC, he will do so with WSL, or he can install Cygwin, which also runs in Windows. This development has pros and cons. The negative perspective is that Linux is indeed dead. Linux was a project to replace the Windows operating system with open source hardware drivers. This project has failed. Many Windows users have tried out a Linux distribution and they have decided against the software. The positive argument is that Unix tools which are delivered inside a Linux distribution can become a success in the future. Tools like grep and awk are not started to replace Windows as an operating system, but they will run on any operating system, similar to a four in a row game or a text editor.

The Unix middleware is described in detail in [1][2]. It consists of basic commands like grep, ls and awk, but there are also high-level commands like ssh, zip, g++, git, and latex. The working thesis is that Windows users have a great interest in these middleware tools. At the same time, these Windows users have no need for the other components of a Linux distribution, namely the kernel and the GNOME system. The reason is that Windows has a built-in kernel and a built-in GUI layer which are more powerful than the Linux kernel and GNOME. It makes no sense to replace the Windows kernel with the Linux kernel, because that would mean most of the hardware no longer works. The only Linux component which is superior to its Windows counterpart is the middle layer. Windows has no tools, or only poor ones, for compressing zip files, for searching full text in files, or for creating a git repository. These tasks are handled by Unix tools with ease, so it makes sense to combine the best of both worlds.
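The kind of middleware one-liner meant here can be sketched as follows. This is only an illustration: the file name and its contents are made up, and only POSIX tools (grep, awk) that the text itself mentions are used.

```shell
# Create a small sample file (illustrative data).
printf 'alice 3\nbob 5\nalice 2\n' > scores.txt

# Full-text search: print all lines mentioning "alice".
grep 'alice' scores.txt

# Quick aggregation with awk: sum the values in the second column.
awk '{sum += $2} END {print sum}' scores.txt
```

Under WSL or Cygwin the same commands run unchanged on a Windows machine, which is the point of the middleware argument.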

[1] https://en.wikipedia.org/wiki/List_of_Unix_commands
[2] https://en.wikipedia.org/wiki/Cygwin

Gaming on Linux

What the newbie gamer has to know is that the Linux operating system provides a different experience from the well-known Windows ecosystem. It is not about kernel drivers or open source licences; Linux gaming is about reducing one's own expectations. The user has to like what he gets, no matter how low the quality is. So Linux gaming is equal to potato gaming. The typical resolution is 320x200 pixels, upscaled to the monitor. Of course, such a resolution looks terrible and does not fulfill the minimum standard, but on the other hand it allows one to see gaming from a new perspective.

In a direct comparison, Linux gaming has a lower quality than Windows gaming. More advanced graphics hardware from Nvidia isn't supported in Linux, and if a graphics chip is detected it will certainly produce a lower frame rate. In any case it is impossible to run the same games as on Windows, so the overall situation is one of missed opportunity. There is no way to fix it, because the Linux ecosystem does not have the resources to program drivers. Most users play older games like Tetris or Pingus, which was developed 25 years ago.[1] Anyone asking for more recent games with 3D graphics has barely understood what Linux is about.

It is unlikely that the situation will get better in the near future. Linux gaming is a mess, and perhaps this was the objective from the beginning. For this reason it makes no sense to compare Windows with Linux, because they have different objectives. The only thing that is certain is that Windows is here to stay, while Linux is hated by everybody. Nobody likes to play low-resolution games from 20 years ago which have lots of bugs.

[1] Wikipedia: Pingus

October 05, 2023

10+ reasons why Windows is better than Linux

  • 1 It needs around 3 watts less on the average laptop for running most software
  • 1a The hardware support is excellent. Every wifi chipset and every graphics card runs with Windows
  • 1b It will run out of the box because it's preinstalled on every PC
  • 2 All the programs known from Linux, like LibreOffice, Python and video editing software, are available for Windows too
  • 2a There is more software available, especially commercial high-end software
  • 3 Linux has proven for over 30 years that it's a bad choice for the desktop. Most projects, like Mandriva, Slackware and Antergos, have been discontinued
  • 3a The promise of the open source community to make development transparent has failed. Even the Debian distribution is shipped with proprietary drivers, and there is no such thing as Open Hardware
  • 3b Linux, and not Windows, has to prove that it is superior, because Windows is the de facto standard
  • 4 Windows has a larger user base, which ensures that innovation takes place
  • 4a Microsoft standards like the FAT32 filesystem, the docx document format and MS Access databases are industry-wide standards used by everybody
  • 5 The system configuration is centralized, which makes it likely that problems can be fixed.