November 17, 2023

Numbering photos with the Luhmann id

Facebook is the largest photo sharing website in the world; at least 300 million pictures are uploaded to it every day. The best way to manage this amount of information is a filename scheme that works with an alphanumerical key. Example:

1_holiday.jpg
2_christmas.jpg
2a_christmastree.jpg
2b_Deliciouscake.jpg
2b1_recipeforcake.png
3_sport.jpg
3a_photoofshoes.jpg


The alphanumerical ID of each photo never changes; it is assigned once when the file is created. New photos are added to the collection by choosing an ID that fits into the existing numbering, so the files end up sorted by their similarity. In the literature this concept is described as hierarchical clustering of images.
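
To make the sorting behaviour concrete, here is a small C++ sketch of a comparator that orders such filenames by their Luhmann prefix (everything before the first underscore). The filenames are the ones from the example above; the splitting rule and the function names are my own illustration, not part of any existing tool.

// Sketch: sort photo filenames by their alphanumerical Luhmann prefix.
// Assumes the id alternates numeric and alphabetic segments (1, 2, 2a, 2b1, ...).
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Split an id like "2b1" into the segments "2", "b", "1".
std::vector<std::string> segments(const std::string& id) {
    std::vector<std::string> out;
    for (size_t i = 0; i < id.size();) {
        bool digit = std::isdigit(static_cast<unsigned char>(id[i])) != 0;
        size_t j = i;
        while (j < id.size() &&
               (std::isdigit(static_cast<unsigned char>(id[j])) != 0) == digit)
            ++j;
        out.push_back(id.substr(i, j - i));
        i = j;
    }
    return out;
}

// Order two filenames by their Luhmann id, comparing numbers numerically.
bool luhmannLess(const std::string& a, const std::string& b) {
    auto sa = segments(a.substr(0, a.find('_')));
    auto sb = segments(b.substr(0, b.find('_')));
    for (size_t i = 0; i < sa.size() && i < sb.size(); ++i) {
        if (sa[i] == sb[i]) continue;
        bool na = std::isdigit(static_cast<unsigned char>(sa[i][0])) != 0;
        bool nb = std::isdigit(static_cast<unsigned char>(sb[i][0])) != 0;
        if (na && nb) return std::stol(sa[i]) < std::stol(sb[i]);
        return sa[i] < sb[i];
    }
    return sa.size() < sb.size();   // "2" sorts before "2a"
}

int main() {
    std::vector<std::string> files = {
        "3_sport.jpg", "2b1_recipeforcake.png", "1_holiday.jpg",
        "2a_christmastree.jpg", "2_christmas.jpg",
        "3a_photoofshoes.jpg", "2b_Deliciouscake.jpg"};
    std::sort(files.begin(), files.end(), luhmannLess);
    for (const auto& f : files) std::cout << f << "\n";   // prints 1, 2, 2a, 2b, 2b1, 3, 3a
}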

November 15, 2023

Homecomputers until the 1990s

Before the advent of today's PC technology, which is dominated by the Windows and Linux operating systems, the community of computer enthusiasts was much smaller. Between 1980 and 1990 most of today's computer hardware and software was invented for the first time, and the magazines that introduced the subject to their readership were sometimes very well informed. In that period two dominant computer systems were available: the Commodore 64 and the MS-DOS PC.

First it should be mentioned that on IBM PC hardware before 1990 the well known Windows operating system would not run fast enough. The only practical operating system for early PCs was DOS itself, a single-user, single-tasking system. The advantage over the Commodore 64 was that it was much easier to write software for MS-DOS than for the C64. 8-bit home computers with 64 KB of main memory or less and without any hard drive are not capable of running compiled C programs; the only sensible programming technique there is assembly language. In contrast, early MS-DOS PCs up to the 1990s work fine with C compilers. These large programs, including their built-in libraries, can be installed on a small hard drive (less than 100 MB), and it is possible to write and debug software directly on an MS-DOS PC.

The reason why this workflow is described in detail is that it works the same as the modern programming workflow of the 2020s. In other words, over the last decades programming itself hasn't changed that much. Typing C code on a 286 PC and compiling it into machine code, or typing Java code on a more recent 4-core PC, is based on the same abstraction mechanism: the human programmer has a set of libraries and combines existing functions into new software.

Somebody may argue that the difference between assembly language and Turbo C is small because both languages were invented decades ago. This assumption is wrong. Learning assembly from scratch and writing larger software with it is very complicated, while the same task can be handled easily in C. The difference is that C is a problem-oriented language while assembly is hardware-oriented. The typical assembly program is written for a certain CPU and a certain address space in main memory, while C programs are written around a certain domain such as a game or a word processing application.

The only negative point of the C language is its hardware requirements. C assumes that at least an entry-level 286 MS-DOS PC is available, with around 600 KB of RAM and a hard disk of 10 MB or more. It is not possible to run a compiler with less RAM and without a hard drive. Even though some C compilers exist for the C64, they cannot realistically be used for writing programs, because a compiled C program is much slower and needs more RAM than hand-coded assembly.

The main difference between the C64 and the MS-DOS PC is that C64 programmers claim assembly language is here to stay. This assumption is a result of the weaker hardware of the C64, which prevents the use of any programming language other than assembly. Even though it is possible to write assembly programs on the MS-DOS PC, most programmers prefer a C compiler because the language increases productivity. Especially if a graphics library is available and the programmer is familiar with the machine, it is possible to write simple games in a short amount of time, very similar to what today's programmers can achieve. In other words, the existence of a C compiler is the single cause why MS-DOS PCs have replaced 8-bit home computers.

November 06, 2023

Benchmarking operating systems

Before different operating systems can be compared against each other, there is a need to define a scale for an objective judgment. Measurements used in the past are the number of users, the size in megabytes, or the ease of use.

One important measurement is missing from that list: hardware support. Device drivers are a seldom investigated subject in operating systems, but they have a great impact on the success or failure of an OS. The main difference between Linux and KolibriOS isn't the programming language (C vs. assembly), it is the hardware support. Linux supports nearly all graphics cards out of the box, while KolibriOS is restricted to VESA modes. Linux supports wifi cards, while KolibriOS only supports ethernet cards.

Even if someone likes the idea of using KolibriOS for daily work, he will notice that most of his hardware isn't working and will decide against the system. The main reason why Linux is rejected in favor of Windows can also be explained with device drivers: the support in Windows is better, and additional features like power saving are supported there, while Linux provides only basic support, which arrives with a delay of about three years after new graphics cards become available.

Let us assume that no operating system exists yet and the goal is to write a perfect one from scratch. The core feature of a desktop OS is to support all the hardware out of the box, including advanced features like particular resolutions and energy saving modes. Each operating system is judged by this ability.

The reason why device drivers are usually ignored as a benchmark criterion is that this important subject is difficult to realize. Programming device drivers for all the hardware is a large project; even the Linux project (the largest open source project today) does not have enough manpower for the task. The assumption is that around 250k different hardware devices exist, and programming drivers for all of them would produce a binary blob of 1 gigabyte and more: 250,000 drivers at, say, 4 KB of compiled code each already add up to roughly a gigabyte.

A collection of device drivers is the core element of any operating system. Every other part, like a C++ compiler, a GUI environment or certain application software, can be added later. In case of doubt, existing source code can be recompiled for a new operating system, but the device drivers can't; they have to be written from scratch.

Different operating systems follow different philosophies for creating device drivers. In the Windows ecosystem the assumption is that hardware companies produce the code in a closed source fashion. Linux assumes that a group of volunteers writes the hardware drivers for the kernel, while KolibriOS says that the hardware drivers are written in assembly language, and only for basic devices like an ethernet card and a USB mouse.

The main reason why desktop operating systems are hard to program is the endless amount of hardware available for desktop PCs. There are hundreds of different graphics cards, network cards and sound cards, and each card has multiple parameters that can be changed. Apart from the VESA standard, which hasn't been updated for decades, there is no hardware standard, and every new piece of hardware needs a new driver. Without a driver the device won't interact with the computer, or it will consume too much energy. So we can say that device drivers are the single point of failure in an operating system.

An additional problem is that device drivers are usually written in assembly or in hardware-level C, both of which are complicated to master, and the number of experts in this subject is low. This makes it unlikely that an entire desktop operating system can be created from scratch. In contrast to application software, device drivers are needed by every user group: no matter whether the PC is used for writing a letter or for programming, in both cases a graphics driver is needed, otherwise the PC won't work at all.

In summary, a device driver collection is difficult to program, has a huge code size, is needed by every user and has to support 250k different devices, which results in a very complicated project. Only large organizations, not amateurs, have the manpower to create operating systems.

November 02, 2023

Introduction to the Linux operating system

Books about the Windows operating system assume that the user has never seen a computer before and needs guidance for most tasks. The typical Windows book explains that a computer consists of a mouse, a USB interface, a printer and a monitor, and the user is asked to start a program or modify settings in a menu.

It doesn't make sense to transfer this style to Linux, because Linux wasn't created for computer newbies but for experts. The simple difference between the two user groups is that computer newbies can't program in Python while computer experts can. As a consequence, the typical book about Linux should assume that the user is familiar with the Python language.

The positive effect is that the explanation of what Linux is about can be shortened drastically: Linux executes self-written Python programs, comes with a powerful package manager and is distributed for free. If the user has no need for such functionality, he probably doesn't need open source software at all and should stick with existing Windows software.

The reason why the market share of Linux is much smaller is that the number of programmers is small. What we can say for sure is that non-programmers won't feel comfortable with Linux. It is not about a certain window manager like GNOME vs. Xfce, and it is not about the position of the start menu; without programming skills in at least one language the Linux OS doesn't make sense for the user.

From a programmer's perspective it is pretty easy to understand the Linux operating system. Compared to writing a medium-sized software project in Python, interacting with Linux is much easier. There is no need to call a method or implement a recursive function; the installation of Linux is GUI-driven, and apart from clicking some buttons no further skills are needed. 99% of existing software engineers are able to install and use Linux with ease.

Let us take a closer look at the numbers. There are roughly 1000 million PCs worldwide and around 27 million programmers, a ratio of 2.7%, which marks the maximum market share of Linux on the desktop. It is not possible to grow the market share of Linux above this level, because that would imply that non-programmers had installed Linux on their desktop PCs instead of Windows.

October 27, 2023

The Linux kernel as a device driver repository

Most of the lines of code within the Linux project are about device drivers. It is not a matter of 1000 lines of code, nor 1 million; device drivers account for around 20 million lines of code within the kernel. Instead of analyzing what the drivers do from a technical perspective, there is a need to describe the philosophy behind them.

In classical closed source operating systems, the device driver is provided by the hardware manufacturer. A company producing a flatbed scanner has to deliver the hardware itself plus a 3.5" floppy disc containing the drivers to run it. The same applies to a mouse, a USB stick, a camera and so on. In the 1990s it was common for device drivers to be delivered on physical discs inside the box of the hardware. The end user was asked to insert the disc into the PC and run a program, usually setup.exe, to install the drivers. Then and only then was the hardware working.

More recent versions of Windows install the needed drivers in the background without human intervention. The operating system detects via plug and play which hardware is in use and downloads the drivers from the internet. These drivers are mostly written in C and compiled into executable binaries.

In contrast, the Linux kernel works with open source hardware drivers. The kernel is basically a collection of drivers for accessing all the devices like CD-ROM, SSD, ethernet card and so on. What Windows and Linux share is that somebody has to write all those drivers. Within the Windows ecosystem this task is handled in a decentralized way: each company writes its own drivers and doesn't explain to the public what the code does. Linux, by contrast, works with a centralized model: there is only a single kernel and all the drivers live inside it.

The focus on device drivers might explain why, apart from the three major operating systems (Windows, Linux and macOS), there are no alternative projects for the desktop PC. Everybody who wants to establish a new operating system has to make sure that all the hardware works with it, and the only way to do so is to write all the needed drivers from scratch, which takes a lot of man-years. For this single reason there is no Forth operating system, and smaller projects like Haiku do not work well enough for production machines, because most devices are simply not supported. The proud user of Haiku plugs a USB stick into the PC and nothing happens: the OS doesn't detect the hardware and has no driver for it.

The major cause why device drivers for Windows are released as closed source is that writing the software is time consuming. A single expert programmer can produce around 10 lines of code per day. Even if the programmer has access to the full hardware specification and lots of experience, he will need months or years until the driver for a certain device is written. It doesn't make sense for a hardware company to release the software as open source, because the source code, including the ability to write code for new devices, is an asset that can't be shared with other companies.

The Linux ecosystem works the opposite way. It is a mandatory rule that all the code has to be released as open source. If a certain driver is not available, the device won't work with the kernel. The result is that the quality of hardware drivers in Linux is lower and the number of drivers is lower; there is a lot of hardware which is supported in Windows but not in Linux. This is not due to technical requirements but to the ecosystem, and especially the time consuming effort of writing a driver in C.

Suppose it were possible to create a universal device driver in 10k lines of code which could interact with any possible hardware. Then it would be pretty easy to create new operating systems from scratch: all that is needed is this single device driver, and additional programs can be added on top. Unfortunately there are technical limitations which prevent such a universal driver from being realized. Existing computing hardware is so complex and so diverse that every single device needs a dedicated driver.

Let us estimate how many different hardware devices are supported. Suppose a single device like an ethernet card is controlled by a driver of 10k lines of code. There are 20 million lines of driver code in the Linux project, so the total number of different devices is around 2k, comparable to a large museum filled with computer hardware from floor to ceiling. In addition, modern computer hardware is more complex than previous models; for example, the average mouse today uses optical sensors, while the typical mouse of the 1990s used a simple rubber ball to detect movement. So we can expect that in the future the complexity will grow further, resulting in more different devices which need even more lines of code.

October 26, 2023

Writing device drivers in C

The core element of any operating system is a collection of device drivers. These drivers ensure that hardware components like the mouse, the graphics card, ethernet cards and the keyboard are available to the end user. The only practical programming language for implementing a device driver is C: C gets compiled into assembly instructions, which ensures maximum efficiency.
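
To give a rough impression of what "hardware access in C" means, the following fragment sketches the memory-mapped I/O pattern that most drivers boil down to (shown here in C++ to match the other examples on this blog). The device, its register layout, the bit meanings and the base address 0x40000000 are invented for illustration; a real driver takes them from the datasheet and runs inside the kernel.

// Sketch of the memory-mapped I/O pattern used by device drivers.
// The device, its register layout and the base address are invented.
#include <cstdint>

struct UartRegisters {                 // hypothetical serial controller
    volatile std::uint32_t data;       // write a byte here to transmit it
    volatile std::uint32_t status;     // bit 0: transmitter ready (assumed)
    volatile std::uint32_t control;    // bit 0: enable the device (assumed)
};

// In a real driver the base address comes from bus enumeration (PCI,
// device tree, ...); here it is simply assumed for the sketch.
UartRegisters* const uart = reinterpret_cast<UartRegisters*>(0x40000000);

void uart_init() {
    uart->control = 1;                 // switch the imaginary device on
}

void uart_putc(char c) {
    while ((uart->status & 1) == 0) {  // busy-wait until the device is ready
    }
    uart->data = static_cast<unsigned char>(c);
}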

Suppose the idea is to realize an operating system in Forth, with the aim of running it on a stack machine. In such a case the C language isn't available and the developers are forced to rewrite the device drivers in Forth. This produces a situation in which none of the code exists yet; everything has to be rewritten. It would take many man-years to rewrite the C device drivers in a Forth dialect. Even if the programmers are highly motivated, they won't be able to finish the task within the next 30 years.

One possible attempt to overcome the Forth bottleneck is a virtual machine together with a high level language like BASIC. The BASIC program gets converted into bytecode which is executed on a virtual machine; the virtual machine runs on top of a Forth chip, so the programmer doesn't need to program in Forth anymore. The only problem is that BASIC is a high level language while device drivers have to be written in a low level language.
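
As a sketch of that layering, the following C++ program implements a tiny stack-based virtual machine that executes hand-assembled bytecode. In the scenario above, the interpreter itself would be written in Forth and the bytecode would be produced from BASIC source; the instruction set and the sample program here are invented purely for illustration.

// Sketch: a minimal stack-based virtual machine executing bytecode.
// The instruction set is invented for illustration.
#include <cstdint>
#include <iostream>
#include <vector>

enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<std::uint8_t>& code) {
    std::vector<std::int32_t> stack;
    for (std::size_t pc = 0; pc < code.size();) {
        switch (code[pc++]) {
        case PUSH:                         // the next byte is an immediate value
            stack.push_back(code[pc++]);
            break;
        case ADD: {
            std::int32_t b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case MUL: {
            std::int32_t b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break;
        }
        case PRINT:
            std::cout << stack.back() << "\n";
            break;
        case HALT:
            return;
        }
    }
}

int main() {
    // bytecode for: PRINT 2 + 3 * 4, i.e. in stack order 3 4 * 2 +
    std::vector<std::uint8_t> program = {PUSH, 3, PUSH, 4, MUL, PUSH, 2, ADD, PRINT, HALT};
    run(program);                          // prints 14
}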

Unfortunately, it is not possible to execute C in a virtual machine, because C code needs direct hardware access. The only way to execute C code is to compile it into assembly language. But compiling C into assembly is, in practice, only done for register machines, not for stack machines. There is no such thing as a C-to-Forth converter, and even if such a thing could be implemented, it couldn't be applied to device drivers.

Device drivers are an important element of any operating system. They make sure that all the hardware, like printers, USB ports and webcams, is working. Programming an operating system while ignoring the devices makes no sense. It seems that the low level C language is the only option for writing device drivers, and this situation makes it unlikely that operating systems will ever run on stack machines.

The problem is not a technical one. Forth is a great language for getting direct access to hardware, and there are microcontrollers which run Forth on bare metal. The more serious problem is that a desktop operating system has to deal with thousands of different hardware devices, and writing the code for all of them takes an endless amount of man-years.[1] This effort is very costly; rewriting the existing C device drivers in Forth is too expensive, which prevents such a project from ever getting started. It seems that the x86 architecture is the only viable computer system for running desktop operating systems.

Perhaps it makes sense to take a step back and understand why exactly C was chosen in mainstream computing. The goal was to write device drivers consisting of millions of lines of code. Instead of writing this code in assembly language, which is different for each processor, the idea was to write the device drivers in C: C is more portable than assembly and easier to learn. What is needed in addition is a compiler that generates the assembly instructions automatically. This paradigm has been valid in computing for decades.

The only bottleneck for a C compiler is that it needs a compiler-friendly target architecture such as x86. Possible alternatives like a RISC CPU, and especially a Forth CPU, make it much harder to convert C code into assembly instructions.

The Linux kernel contains at least 5 million lines of code reserved for device drivers.[1] A potential Linux alternative written in Forth has to provide the same functionality. From a technical point of view it is possible to rewrite the device drivers in Forth, but from an economic perspective it doesn't make sense. It is a well known rule of thumb that a single programmer writes only about 10 lines of finished code per day, no matter which programming language he prefers. So the open question is: who exactly should write all that code in Forth?

There is a simple reason why all the desktop operating systems were written in C: the existing code was written mostly in C, and it is much easier to add something to an existing codebase than to rewrite it from scratch. The untold assumption is that all of the 5 million lines of code are needed, otherwise the computer is not able to detect or manage certain hardware, for example a graphics card or a network card. The second assumption is that even a Forth OS would need device drivers: it is not possible to write a Forth OS in 10k lines of code which provides the same functionality as the existing device drivers with their 5 million lines. This might explain why Forth is not very popular in mainstream computing. Even if the concept is interesting from a theoretical point of view, it can't answer the question of how to write all the source code an operating system needs. Existing Forth tutorials explain to the newbie what a stack machine is and how to combine Forth words into programs, but this ability is not enough to realize a full blown desktop operating system in the style of Linux, macOS or Windows.

In contrast, the C ecosystem explains very well how to handle this complexity. According to the C paradigm the programmer writes C code for new hardware and commits it to the existing codebase, which improves the functionality of the Linux kernel. That means the 5 million LoC are already there, a new device driver adds around 200 lines of code, and it is only a detail question how to program that code exactly.

So-called Forth systems and stack oriented programming languages like Factor ignore the problem of device drivers, especially the question of how to create the millions of lines of code needed to access the endless amount of existing hardware.

[1] Kadav, Asim, and Michael M. Swift. "Understanding modern device drivers." ACM SIGPLAN Notices 47.4 (2012): 87-98.

Programming an operating system in Forth

Existing operating systems like Linux and Windows are huge and contain a lot of redundancy. For example, in Linux there are many different GUI frameworks available, and the user has the choice between hundreds of programming languages like Fortran, C, C++, Pascal and so on.

A possible minimalist alternative to Linux would be realized in the Forth language and executed on a GA144 Forth CPU. The main advantage of bare metal stack machines is their low register count, which results in an energy efficient design. The major cause why Forth is not very popular in the computing mainstream is that it is much harder to program than C. Even with a good tutorial it is complicated to write instructions in Forth; even a register based assembly language is easier to explain.

But this single bottleneck can be solved with a high level language interpreter, for example a BASIC interpreter written in Forth; there are some examples of this from the past. The idea is to write a program in Forth which is able to execute a BASIC program. The main advantage is that newbie programmers are not forced to type their routines in Forth but can write them in a normal BASIC dialect.

Let us summarize the idea of a Forth operating system. The kernel is written in Forth, of course, because this code executes at maximum speed on a stack machine, and the kernel has access to the hardware including the graphics display. High level programs like a text editor, a hello world program or a prime number generator are written in BASIC and executed by the BASIC interpreter. This makes it likely that a larger audience will be interested in writing new software or porting existing software to the Forth operating system.

Estimating the costs of Forth programming

 
The current computing mainstream works with a combination of C programs running on x86 CPUs. From the perspective of Forth advocates, the main criticism of this principle is that x86 hardware has a huge transistor count and software written in C is not very efficient. Let us investigate what a potential alternative would look like.

The goal is to create a desktop workstation built on stack machines which are programmed in Forth. Current desktop-ready operating systems like Linux and Windows need around 50 GB of hard drive space in total for all their libraries and programs, which corresponds to roughly 1250 million lines of code (assuming around 40 bytes per line). Because the existing C code base can't be executed on the GA144 and similar Forth processors, the entire operating system has to be rewritten in Forth. The assumption is that, as with C, a single Forth programmer can write 10 lines of code per day, so the project will take about 342k man-years (1250 million lines at 10 lines per day, 365 days a year). The only bottleneck is that such a project will cost a huge amount of money.

Let us take a step back and describe the underlying problem. Existing operating systems are mostly written in C, and the number of code lines is enormous. There have been many attempts in the past to modify the development process: some operating system kernels were written in Pascal or even assembly language, while other projects were created as lightweight single-floppy-disc operating systems. None of these alternatives were successful. It seems that only the combination of the C programming language plus a huge amount of code results in a successful operating system.

This makes it hard to opt for the Forth ecosystem, which operates with a different understanding of excellence. The typical Forth project is of course realized in the Forth language and has a low number of code lines, which is described as high efficiency coding; existing operating systems, in contrast, are criticized as bloatware.

Nevertheless there is a need to discuss possible future computer architectures because of the end of Moore's law. Current x86 CPUs have an overheating problem as a result of too many transistors in a small space, so there is a need to build hardware with a lower footprint. In the past such CPU designs existed; for example, the Pentium I was equipped with only about 3 million transistors. Unfortunately, the Pentium I is too slow for modern requirements, and simply applying more voltage to the device won't work.

What is needed is a low transistor count CPU architecture which can run C programs. A C program is never executed directly on the bare metal; it is first translated into assembly language. The bottleneck is therefore located in compiler design: only if a compiler is available for a new CPU is it possible to port existing C code to this platform. So we can say it is not about ARM CPUs, Forth CPUs or RISC CPUs; what matters are the compilers for this hardware.

A compiler is an intermediary between high level C code which already exists and a certain computer hardware which has to be invented from scratch. The existing x86 hardware itself is not very remarkable; what makes this hardware relevant is the existence of powerful compiler toolchains which convert existing software into assembly instructions.

Programming a compiler is perhaps the most advanced topic within software engineering. In contrast to a normal program such as a video game or a spreadsheet application, a compiler transforms one program into another program. It has much in common with an interpreter, which is easier to realize but much slower. A good example of an interpreter written in assembly is the BASIC interpreter of the Commodore 64, written by Microsoft. This BASIC interpreter fetches the next statement of a program and executes it on the 6502 CPU; to do so, each high level BASIC command is dispatched to low level machine code routines.

The main task of a compiler or interpreter is to mediate between human programmers, who are not familiar with assembly language, and the CPU, which accepts only machine instructions. Without such an interpreter the computer can't execute BASIC programs.

It doesn't make sense to tell the BASIC newbie that he should learn assembly if he wants to paint graphics on the screen. The reason the programmer prefers BASIC over assembly is that it is easier to use and allows him to code the same program in a shorter amount of time. What is needed instead is a fast interpreter, or even better a fast compiler, which provides an additional layer between man and machine.

October 25, 2023

Writing a compiler for Forth

There is a reason why non-x86 CPU architectures are ignored by mainstream computing: this hardware isn't compiler friendly. To explain the term we have to sort possible CPU architectures by their complexity.

The easiest hardware to build is a stack-based Forth CPU. Such a machine can be realized with a low transistor count; it supports a limited set of assembly instructions and has no general-purpose registers, only a single data structure, the stack. The next logical step in processor design is the RISC architecture, which stands in the middle between CISC and stack machines. A typical example of a RISC machine is the MIPS CPU, which has registers but only a reduced, simple instruction set. At the other end of the scale there are full blown AMD64-compatible x86 processors like the famous Intel Core i series, which is used in mainstream computing and powers more than 1 billion desktop PCs worldwide.

The acceptance of RISC CPUs on the desktop is low, while the market share of Forth CPUs is nearly zero. Both processors are difficult to program, or to put it more technically, it is difficult to write a C compiler for these systems. There are C compilers for MIPS processors, but they are complicated because a lot of optimization is needed. In contrast, writing a C compiler for AMD64 is easier, because the underlying hardware provides more high level assembly instructions.

The most direct way to program Forth CPUs, and also MIPS processors, is to type in the assembly instructions manually. This means avoiding any compiler in between; the user has to think like the CPU. It is obvious that most programmers are not interested in such an interaction, because it takes an endless amount of time to program complex software directly in assembly. This situation prevents the rise of RISC and stack-based Forth CPUs.

What mainstream programmers do all the time is formulate software in C. C is the most important language in modern systems development; nearly all operating systems, like Linux, Windows, macOS and even Haiku, are written in C/C++. The main advantage over assembly is that C code can be written much faster, which makes it possible to create full blown GUI systems including libraries. The result is that a low efficiency CPU design like the x64 processor is preferred over more advanced chip designs like RISC and stack machines.

A possible way to make non-x86 processors more popular would be the existence of advanced compilers. From a technical perspective every CPU is Turing complete, which means the same algorithm written for a CISC CPU can also be executed on a stack machine. The only bottleneck is that somebody has to create the code first; in modern computing the automated compilation process generates it. So there is a need to program modern C compilers which are able to create code for targets like Forth CPUs and MIPS CPUs.

From a technical perspective, a compiler is a translator: it takes a C program as input and generates assembly instructions as output. In the case of a stack machine, the needed instructions have a certain format which is known as Forth, or as Forth code. A Forth-like stack machine is a minimalist computer which is of course controlled by a program, and this program needs to be written before any useful behavior can be produced.
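
To make the translator idea a little more concrete, here is a toy code generator, written in C++ simply to match the other examples on this blog. It walks the expression tree for a*b + c and prints either stack-machine style instructions, as a Forth-like target would need them, or three-address register instructions, as a RISC-like target would need them. The mnemonics are invented; a real compiler does the same job with many more passes and optimizations.

// Toy code generator: emit one expression either as stack-machine
// instructions or as register instructions. Mnemonics are invented.
#include <iostream>
#include <memory>
#include <string>

struct Node {
    std::string value;                  // "a", "b", "c" or "+", "*"
    std::unique_ptr<Node> left, right;  // empty for leaves
};

std::unique_ptr<Node> leaf(const std::string& v) {
    auto n = std::make_unique<Node>();
    n->value = v;
    return n;
}

std::unique_ptr<Node> op(const std::string& v, std::unique_ptr<Node> l,
                         std::unique_ptr<Node> r) {
    auto n = std::make_unique<Node>();
    n->value = v;
    n->left = std::move(l);
    n->right = std::move(r);
    return n;
}

// Stack machine: push the operands, then let the operator pop two values.
void emitStack(const Node& n) {
    if (!n.left) { std::cout << "PUSH " << n.value << "\n"; return; }
    emitStack(*n.left);
    emitStack(*n.right);
    std::cout << (n.value == "+" ? "ADD" : "MUL") << "\n";
}

// Register machine: every intermediate result gets its own register.
int emitReg(const Node& n, int& next) {
    if (!n.left) {
        int r = next++;
        std::cout << "LOAD r" << r << ", " << n.value << "\n";
        return r;
    }
    int a = emitReg(*n.left, next);
    int b = emitReg(*n.right, next);
    std::cout << (n.value == "+" ? "ADD" : "MUL")
              << " r" << a << ", r" << a << ", r" << b << "\n";
    return a;
}

int main() {
    auto expr = op("+", op("*", leaf("a"), leaf("b")), leaf("c"));  // a*b + c
    std::cout << "-- stack machine --\n";
    emitStack(*expr);
    std::cout << "-- register machine --\n";
    int next = 0;
    emitReg(*expr, next);
}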

To understand the pros and cons of stack based machines better, it makes sense to take a closer look at MIPS assembly. RISC-based MIPS CPUs sit in between: they are not as minimalist as Forth, but they are less complex than x86. A MIPS CPU supports a stack in memory onto which values can be pushed and popped, so there is a similarity to Forth. In contrast to Forth, MIPS provides additional storage in its registers and more complex commands. MIPS can be programmed in assembly language as well as in high level C. Of course, the assembly language is more efficient, and especially for embedded systems it is the preferred choice of programmers; on the other hand, C has a faster development cycle, so there is a need to use this high level language as well.

What we can say for sure is that MIPS assembly and Forth are both examples of low level languages. Even though Forth advocates claim that Forth is also a high level language, the claim can be rejected, because Forth is different from C. C is a high level language because it allows algorithms to be formulated without a CPU perspective: a C programmer doesn't need to know how many registers the underlying hardware has or what a push to the stack means. A C programmer writes the code only in C, and then the compiler generates the machine instructions.

October 19, 2023

LaTeX at its worst

Usually, the LaTeX community tries to explain how wonderful the output of TeX is. The self-understanding is that LaTeX-generated academic papers are easier to read because they are formatted with certain accessible rules. The opposite sounds more realistic: LaTeX-generated output has a lack of pictures, has no sections, and doesn't provide any helpful bullet points, free space or boxes. Instead, LaTeX-generated output looks like a monoculture. Every word has the same size, every paragraph looks the same, and it is boring to read the manuscript.

October 11, 2023

The paradox success of the Linux operating system

There are rumors out there that Linux on the desktop is dead. Even Linux advocates have to admit that there are too many Linux distributions and that the market share of the Linux desktop is low and will become smaller in the future. It is hard or even impossible to convince a Windows user to give the open source project a chance. This sad situation is new: some decades ago, in the late 1990s, Linux was discussed differently. It was seen as a valid alternative to Microsoft Windows, and serious attempts were started to use it on production-ready PCs.

These projects were stopped a long time ago and nearly all of them were a failure. The end users were not convinced that Linux is better. Even in a university setting it is nearly impossible to install Linux on one of the workstation computers. Beginners and expert users alike have decided that Windows is much better.

The paradoxical situation is that at the same time Linux has become a great success. The GNOME desktop environment looks better than ever, and the Linux kernel is able to communicate with more hardware devices than in the 1990s. How can it be that Linux is dismissed by the world while the same world supports Linux that much? It is indeed hard to explain this paradox, but let us try.

What we can say for sure is that the idea of embedding Linux into an existing Windows installation has become a great success. There are lots of books about the Windows Subsystem for Linux, about running Linux in a virtual machine and about using winget to install open source software in Windows. These attempts can be called a Linux light. Instead of using Linux as an alternative to Windows, a modern Linux distribution is started as an app inside Windows or inside the browser. It seems that this kind of interaction fulfills the needs of the user better. From a technical perspective the difference is the absence of the Linux kernel as the host.

A classical Linux distribution like Ubuntu is started on bare metal hardware: the hardware is controlled by the kernel, and on top of it the X Window System is started. This kind of Linux-only installation has to be called a dead end. What is used in reality is Windows as the bare metal operating system, with Linux programs and Linux tools running inside the Windows computer. There are many options for doing so.

To understand the situation we have to describe the components of a Linux system. The low level layer is the kernel; the visible layer on top is the GUI. Neither element is wanted by the mainstream PC user: as a kernel he uses the Windows operating system, and the GUI frontend is also rendered by Windows. What is interesting and new for the end user are the Linux middleware programs.

These programs are seldom described. Examples are the bash scripting language, the awk tool, the ghostscript program, the grep command, the ffmpeg tool, C++ compilers, webservers and the pandoc text converter. These programs run in between the Linux kernel and the GNOME GUI; they belong neither to the kernel nor to the GNOME GUI but are hidden inside a Linux distribution.

The Windows Subsystem for Linux does the same: it provides access to the Linux middleware tools. It seems that the average Windows user has a need for these tools. They are powerful, highly mature and come free of charge, and there are no Windows programs which provide the same functionality.

Let us describe the situation from an outside perspective. The classical Linux distribution is rejected by Windows users: they have no need to install an alternative operating system and they don't want buggy hardware drivers written by Linus Torvalds and his community. On the other hand, the average Windows user has a huge demand for the Linux middleware tools which are preinstalled in any Linux system, and he likes to run these programs within Windows.

This description assumes that Linux consists of two parts. Some elements of a Linux distribution, namely the kernel and the GUI, are dismissed by Windows users, while other parts (the middleware) are highly wanted.

In the past there was a clear border between Linux and Windows. Programs like LaTeX, the Apache webserver, the pandoc tool or awk were only available on Linux. To get access, the user was forced to boot his computer with a Linux distribution, which was highly complicated. The idea was that the user had to decide between two operating systems: either he used Windows and could only run Windows software, or he installed Linux and got access to bash and its powerful programs.

In more recent years a new paradigm has become available which can be called Linux without Linux. This paradoxical description refers to a situation in which Linux has become a great success and a failure at the same time. What we can say for sure is that the Linux kernel and the GNOME desktop are both a failure: booting a PC with the Linux kernel and running the GNOME desktop environment is a rare situation, rejected by 99.9% of PC users. The only thing they boot on their PC is the normal Windows installation, including the Windows kernel and the Windows GUI. The attempt to replace these operating system elements with open source alternatives has failed. But this doesn't mean that the users are not interested in Linux; they want to run a different sort of software, the mentioned middleware. This middleware is missing in a normal Windows system for various reasons. Running textual commands was the hallmark of a working Unix system, while Windows concentrated on GUI programs, and those GUI programs can't compete with command line tools. Unix commands like grep, find, latex and other programs are very powerful and can't be replaced by Windows GUI software.

For decades Windows users were not able to run these textual commands; they were forced to decide: either they used Windows or they used Linux. But from a technical perspective it is possible to combine both systems into a single one. An existing Windows installation can be enhanced with Unix tools, which makes it possible to run bash scripts, start a webserver, format LaTeX documents and search in text files just like on a Unix system. Such an enhanced Windows can't be called a classical Windows OS, and it is not a Linux distribution; it is something in between, combining the Windows kernel and the Windows GUI with Unix tools like awk and bash scripting.

The chance is high that this kind of in-between operating system will become the future. Many Windows-only users are interested in these tools, which allow them to use Linux without installing it on bare metal. There is an alternative approach which should be explained briefly: from a technical perspective it is possible to boot a Linux distribution on the physical machine and then run Windows inside a virtual machine. Some Linux advocates recommend this approach; the idea is that Linux becomes the host operating system for Windows machines running in an emulator. This approach sees Linux as the only operating system on the computer and would like to put Windows into a sandbox. The interesting situation is that this Linux-only approach isn't used much in reality, because it basically means uninstalling Windows and booting Linux on the bare metal PC, which isn't wanted by the average user, for various reasons.

In contrast, the idea behind WSL is the other way around: Linux, or parts of Linux, is put into a sandbox, and the user runs Linux as an app inside a working Windows computer. This idea is preferred and it is well documented in the literature. The concept has no single name, but it can be called Linux without Linux. The Microsoft term is "Windows Subsystem for Linux", which means running a Linux distribution like Ubuntu in a virtual machine.

It is difficult to say how frequently WSL is used in reality; solid numbers are only available for the opposite case. According to the market share statistics for operating systems we can say that roughly 0% of PCs are running Linux natively, while at the same time about 90% of computers are running Windows. The result is that if someone wants to run Unix commands on a PC, he will do so with WSL, or he can install Cygwin, which also runs in Windows. This development has pros and cons. The negative perspective is that Linux is indeed dead: Linux was a project to replace the Windows operating system with open source hardware drivers, and this project has failed. Many Windows users have tried out a Linux distribution and decided against the software. The positive argument is that the Unix tools delivered inside a Linux distribution can become a success in the future. Tools like grep and awk are not started to replace Windows as an operating system; they will run on any operating system, similar to a four-in-a-row game or a text editor.

The Unix middleware is described in detail in [1][2]. It contains basic commands like grep, ls and awk, but there are also higher level commands like ssh, zip, g++, git and latex. The working thesis is that Windows users have a great interest in these middleware tools. At the same time, these Windows users have no need for the other components of a Linux distribution, the kernel and the GNOME system, because Windows has a built-in kernel and a built-in GUI layer which are more powerful than the Linux kernel and GNOME. It doesn't make sense to replace the Windows kernel with the Linux kernel, because that would mean most of the hardware would no longer work. The only Linux component which is superior to its Windows counterpart is the middle layer: Windows has no or only poor tools for compressing zip files, for full-text search in files or for creating a git repository, while Unix tools fulfill these objectives with ease. So it makes sense to combine the best of both worlds.

[1] https://en.wikipedia.org/wiki/List_of_Unix_commands
[2] https://en.wikipedia.org/wiki/Cygwin

Gaming on Linux

What the newbie gamer has to know is that the Linux operating system provides a different experience than the well known Windows ecosystem. It is not about kernel drivers or open source licences; Linux gaming is about lowering one's own expectations. The user has to like what he gets, no matter how low the quality is. Linux gaming is, in short, potato gaming. The typical resolution is 320x200 pixels, upscaled to the monitor. Of course such a resolution looks terrible and doesn't fulfill the minimum standard, but on the other hand it allows one to see gaming from a new perspective.

In a direct comparison, Linux gaming has a lower quality than Windows gaming. More advanced graphics hardware from Nvidia isn't well supported in Linux, and even if a graphics chip is detected it will certainly produce a lower frame rate. In any case it is impossible to run the same games as on Windows, so the overall situation is one of missed opportunity. There is no way to fix it, because the Linux ecosystem doesn't have the resources to program drivers. Most users play older games like Tetris or Pingus, which was developed 25 years ago.[1] If they ask for more recent games with 3D graphics, they have barely understood what Linux is about.

It is unlikely that the situation will get better in the near future. Linux gaming is a mess, and perhaps this was the objective from the beginning. For this reason it doesn't make sense to compare Windows with Linux, because they have different objectives. The only thing that is certain is that Windows is here to stay, while Linux is hated by everybody: nobody likes to play low resolution games from 20 years ago which have lots of bugs.

[1] Wikipedia: Pingus

October 05, 2023

10+ reasons why Windows is better than Linux

  • 1 It needs around 3 watts less on the average laptop for running most software
  • 1a The hardware support is excellent. Every wifi chipset and every graphics card runs with Windows
  • 1b It will run out of the box because it is preinstalled on every PC
  • 2 All the programs known from Linux like LibreOffice, Python and video editing software are available for Windows too
  • 2a There is more software available, especially commercial high end software
  • 3 Linux has proven for over 30 years that it is a bad choice for the desktop. Many projects like Mandriva and Antergos have been discontinued
  • 3a The promise of the open source community to make development transparent has failed. Even the Debian distribution is shipped with proprietary drivers and there is no such thing as open hardware
  • 3b Linux, and not Windows, has to prove that it is superior, because Windows is the de facto standard
  • 4 Windows has a larger user base which ensures that innovation takes place
  • 4a Microsoft standards like the FAT32 filesystem, the docx document format and MS Access databases are industry wide standards used by everybody
  • 5 The system configuration is centralized, which makes it likely that problems can be fixed.

August 23, 2023

Writing blog posts in ASCII only

title: Hello world ascii file
date: Aug 23, 2023
version: 1.0

This is a short test document. It doesn't contain real content but is a proof of concept of how to use the ASCII file format to write a text. One two three, test.

New paragraph.

New paragraph. Images are not allowed; only text can be stored in an ASCII file.

August 05, 2023

How to beautify old index cards

One of the main reasons why the paper based analog Zettelkasten is rejected by the note taking community is its low efficiency. It is not a myth but a fact that writing something on a sheet of paper takes much longer than typing the notes on a keyboard. In addition, an electronic database makes sure the content stays accessible forever, while a sheet of paper turns yellow after some months.

If a frequently used index card has dog-ears and the content can't be read anymore, there is a need to copy the content to a new card. That means the owner of the Zettelkasten has to copy the content manually, word by word, with a pen onto a fresh card that has no visible signs of wear, and then put the card back into the tray. Of course the procedure takes some time, and if the content is boring the time is wasted.

July 21, 2023

Is C/C++ really a bad decision?

There are lots of programming languages available. The most famous one is Python, but Java, C# and Go also look powerful. For writing a software prototype, the Python language is perhaps the ideal choice: it is available on both Linux and Windows and can be used to create all sorts of apps and scripts.

The main problem of Python is its performance; especially for programming games the system is too slow. Even if precompiled third party libraries such as pygame are used, the frame rate in Python is too low. It is nearly impossible to write a fast scrolling racing game.

The next step up from Python is C/C++. C++ is known for its complex syntax, and another problem with C++ is that every programmer invents his own programming style. The main advantage of C++ is that it is much faster and the programmer has more control over the situation. Sure, C++ is harder to learn than Python, but compared to vanilla C, and even more so compared to assembly language, C++ has to be called a beginner friendly language.

It is beginner friendly because, with a small amount of code and the help of existing tutorials, a newbie can create a graphical demo like the following one:

// compile with g++ hello.cpp -lsfml-graphics -lsfml-window -lsfml-system
#include <SFML/Graphics.hpp>
#include <iostream>

int main() {
    // init window
    sf::RenderWindow window(sf::VideoMode(640, 480), "Hello world!");
    window.setVerticalSyncEnabled(true);
    window.setFramerateLimit(25);
    sf::CircleShape shape(40);
    shape.setFillColor(sf::Color(0, 0, 250));
    sf::Vector2f position;
    // game loop
    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();
        window.clear(sf::Color(255, 255, 255));
        window.draw(shape);
        window.display();
        // move the circle one pixel to the right per frame
        shape.move(1.f, 0.f);
        position = shape.getPosition();
        std::cout << "pos " << position.x << " " << position.y << "\n";
        if (position.x > 300) {
            shape.move(-300.f, 30.f); // jump back to the left and one row down
        }
    }
    return 0;
}


In around 30 lines of code, a ball is shown on the screen moving from left to right. The source code compiles into a binary of roughly 24 KB which can be built for Linux, macOS and Windows. The program needs only a small amount of CPU resources and runs more efficiently than the Python version.

Sure, compared to the Python version the source code looks a bit messy: the programmer has to take care of many things manually and it is harder to understand what the code is doing. But for writing a production ready app, C/C++ is the preferred choice. There is no programming language available which can replace this well known and powerful C dialect.

July 16, 2023

The ISA VGA card has made the PC a success

In the 1980s, many 8-bit and 16-bit computer systems were available. The most successful one was the Commodore 64; in addition there were the Amiga 500 and the Atari ST. These systems were sold for a moderate price and were equipped with mid-range graphics and sound capabilities. The period of home computers ended in a specific year, 1991, the year in which the VGA card for the IBM PC reached the mass market.

At first glance the ISA VGA card doesn't look very impressive, but it was the major cause why the IBM PC superseded the former home computers. The VGA mode 13h provides a resolution of 320x200 pixels with 256 colors. This color depth makes games look much like on an arcade machine; in addition, the VGA mode provides a better gaming experience than even the Amiga 500.

Before the introduction of VGA, an IBM PC provided only a small number of colors in the CGA modes, and CGA games look poor compared to the Commodore 64. In other words, the decision for or against a certain computer system is made with respect to the graphics adapter. The VGA card was a revolutionary technology which made it possible to use an IBM PC as a gaming machine and to create good looking games. Here is a comparison table from the early 1990s:

Commodore 64 (1982), 160x200 with 16 colors
Atari ST (1985), 320x200 with 16 colors
Amiga 500 (1987), 320x200 with 32 colors
IBM PC, VGA resolution (1991), 320x200 with 256 colors

May 31, 2023

Creating beautiful papers with LaTeX

The LaTeX community judges the layout of a paper in a rather unusual way. Everything which looks like a wall of text is qualified as excellent typographic style, while documents containing a large amount of white space, images and key points are treated as low quality papers. The following Lorem Ipsum comparison was created in both cases with the LaTeX software, and the layouts look completely different.
The page on the right side wasn't created with MS Word either; everything was laid out with the LaTeX engine. The difference is that the line spacing was increased, the ragged-right mode was activated, more paragraphs were created and two images were added. The judgment about good vs. wrong typography has to do with the preference for a wall of text vs. accessible typography.
The page on the left side is a typical example of a LaTeX formatted wall of text. It is the reason why MS Word users argue that every LaTeX document looks the same: there is an endless amount of text but no visible anchors for the reader to rest on. The content is hard to grasp, visual structure is missing, and additional sections or pictures are not there. The surprising situation is that from the LaTeX perspective the example on the left side is here to stay; this is how an academic text is supposed to look.
The question which remains open is why exactly a wall of text is perceived as high quality typography. Is typography not the same as an easy to read text? This is perhaps the most obvious misconception: creating a document with LaTeX doesn't mean that it contains images or has a lot of white space to rest on, but it means following the best practice formatting style used in academia for decades. The assumption in academia is to write about the world in a highly abstract language style. Abstraction is the opposite of using images and examples presented in tables; abstraction means formulating endlessly long sentences with lots of hard to explain specialized vocabulary. In other words, academic texts are hard to read.
The advice to improve the readability of an academic text by using special formatting and including images ignores the principle of a book. By definition there is a difference between a book and a PowerPoint presentation: a book always consists of long sentences and only rarely contains images, whereas a PowerPoint presentation works on the opposite principle. Every presentation divides the subject into easy to grasp sections, and every page contains at least one image; creating a presentation without images and with full sentences is not recommended.
The main reason why the academic community prefers LaTeX for typesetting books and papers is that LaTeX is the king of formatting a wall of text. It compresses the text onto the page, every line looks the same, and the bottom border is aligned between both columns.
This visual appearance is realized with advanced typographic algorithms for adjusting the fully justified paragraphs and the vertical space between the sections, and with the recent microtype extension of the pdftex program the homogeneous effect is increased even further. MS Word and even InDesign can't compete with this visual appearance, and the result is that for decades LaTeX has been the document formatting software of choice at universities worldwide.



May 21, 2023

Advantages of a wall of text

 

Most tutorials about writing are explaining to the reader, that a wall of text is an antipattern which should be avoided for any costs. in contrast, a well structured and image enhanced text is prefered because it makes reading much easier for the audience.
In this blog post the opposite perspective is explained. The starting point is the assumption that avoiding a wall of text is the trivial case. A well-structured text which is easy to read can be realized with a PowerPoint presentation. By its own definition a PowerPoint presentation consists only of key terms plus visual gimmicks like tables and images, but it doesn't provide prose text. In addition, each slide has a title and can be grasped in a short amount of time.
Or let me explain it the other way around. All existing PowerPoint presentations, including LaTeX-generated beamer presentations, avoid a wall of text. There is no shortage of easy-to-read documents, but there is a difference between a presentation and a prose text. The working thesis is that a longer text, i.e. a paper, has to be a wall of text; otherwise it is not a paper but something else. Likewise, a novel, which is a fictional text, is also a wall of text, because otherwise it would be a comic book or a movie, which is a different category.
It is simply unfair to argue that a 500-page fictional book is a wall of text, because such a layout pattern is the self-understanding of a book. It is not possible to write a text of many thousands of words and not format it as a wall of text.
Rejecting a wall of text is equal to rejecting the idea of a book in general. With this judgment in mind, all newspapers, academic papers and books would have to be dismissed because they contain an endless amount of text, and only non-books would be allowed: PowerPoint presentations, comic strips, dialogues between people and TV shows. In those cases there is no longer text at all.
The term typography, in its core sense, usually means formatting a longer amount of text.

May 19, 2023

Wall of text, or: the beauty of LaTeX

In the existing debate around MS Word vs. LaTeX one important aspect is missing. Everybody knows that LaTeX documents all look the same, but the open question is why somebody should prefer such a style. To answer this question we have to state the requirement which LaTeX fulfills with ease. The challenge is to create a text-intensive document which is unreadable and basically a wall of text. Such a requirement sounds a bit paradoxical because it is seldom formulated this explicitly, but let us assume that this is the task given to a typographer.
The typographer isn't asked to create a PowerPoint presentation with pictures and key points on every page; the task is to create a homogeneous, endless document without any images and without any subsections. Now it is possible to discuss in detail how to do so.
The first Lorem Ipsum document was created with LibreOffice Writer and can be read easily: there is a picture, there are different subsections, there is enough space between the paragraphs and there are key points which make it easy to grasp the content. In one word, the LibreOffice example fulfills the criteria of an accessible document.
The example on the right side was of course created with the LaTeX software. All reader-friendly elements like the picture and the bullet points were removed, and only a long sequence of paragraphs is visible. This kind of text-only layout can be realized very well with LaTeX. The document fulfills the requirement easily and its shape is certainly unreadable. In theory, text walls can be created with LibreOffice too, but the LaTeX-internal algorithms are much better designed for this purpose. The spaces between the words are denser, which makes the page look like a single block.
The document on the right side looks very scientific. The only way to improve the style further would be to reduce the font size from 10pt to 9pt and add some esoteric mathematical equations. In other words, creating a wall of text is the meaning of LaTeX.


May 18, 2023

The meaning of LaTeX

The LaTeX community uses a certain typographic style for creating academic journals and books. This style is codified in the TeX engine itself, and online forums explain how to use the software. The open question so far has been why this style is applied at all.
The assumption is that even the LaTeX community itself doesn't know why it formats PDF documents in this particular way. The only thing that is certain is that a LaTeX-generated paper looks very different from a Word-generated document. There is a single working thesis for why LaTeX documents all look the same: in one sentence, it is about creating long-winded text that forms a typographic desert.
Let us describe the situation from a bird's eye perspective and ignore the LaTeX ecosystem, including the underlying algorithms. Suppose the idea is to format text in the most boring fashion possible. The text should look like a monoculture in agriculture. There is no structure; all text lines, characters and paragraphs look the same. That means the size is the same, the line width is the same, and, very important, there are no images and no subsections. It is simply an endless amount of characters without any orientation. The reader won't see a starting point and will hardly find any lighthouse. Of course such a text is unreadable; it has much in common with how newspapers looked until the 1970s. In that period it was technically not possible to use pictures, and even tables were not available. Instead, a newspaper article was a long, homogeneous text block.
With this requirement in mind, the next logical step is to investigate different options for creating such a document with a computer. One option is the MS Word program; the much better alternative is the LaTeX engine. Both programs are able to create image-free, justified text without any subsections. The reader will lose orientation for sure, because there is only an endless amount of text in a tiny font. There is no visible structure because a single paragraph occupies multiple pages. In other words, such a text is the opposite of an easy-to-read document.
The surprising thing is that this typographic antipattern is exactly what LaTeX is trying to achieve. The goal is that the text is hard to read. The internal algorithms in TeX, which amount to typography, emphasize a situation in which the reader is lost in the text. The goal is not to provide waypoints and structure; the opposite is the case. Everything looks the same. The meaning of a text isn't conveyed by its visual appearance but only by the words themselves. Without reading the text, a fictional book and a non-fictional academic text look the same; if the text were printed mirrored it would be impossible to guess what is written in the book.
LaTeX works with sophisticated algorithms for paragraph justification and for adjusting white space. These algorithms have a simple purpose: they remove any visible structure. The goal is that every line looks the same and that the resulting text is hard to read. The eye has no visible anchor points. There are no subsections, there are no images, and at the end of a line there is no white space. Instead, the text looks like an endless ocean of characters forming words never seen before.
There is a possible explanation for why this unusual formatting makes sense. The reason is that a book is different from a comic book and from a television show. A book is by definition text. Typography is not about increasing the number of pictures in a book; it is about making the text look like a desert. A book has to confuse the reader by presenting an endless amount of pages that all look the same.
 
There is no need to use LaTeX for the sole purpose of creating long-winded documents. MS Word can fulfill the same purpose with some modifications. The only thing the author has to do is reduce the font size, set the text fully justified, remove all images and remove all white space and subsections from the text. The result will be a bit different from the very homogeneous LaTeX rendering, but it comes close to the expectation. Typography isn't a subjective decision; it is the art of creating hard-to-read text.

Why LaTeX is great

 

Many attempts have been made in the past to compare the LaTeX engine with possible alternatives like MS Word. In most cases, LaTeX advocates promise that their typesetting software has a higher typographic quality. This kind of judgment is subjective because it hides the criteria by which a certain layout is better.
The underlying reason why LaTeX is the superior rendering engine is its ability to create long-winded text. Hard-to-read text without any images and sections is generated easily with the TeX engine. Here is an example:

The only thing that is certain is that such a text is the opposite of accessible typography. It doesn't contain any pictures and all the text lines look the same. LaTeX makes it easy to create such typography. To emphasize the effect, I have reduced the line spacing to 0.95, which means the vertical space between the lines is lower than normal and the text is even harder to grasp.
To understand the content, the reader has to go through the text line by line. It is much harder than watching television or reading a comic; acquiring the knowledge from the text is hard work. This makes the layout perfect for a scientific paper. The only things missing in the document are some footnotes and non-English vocabulary, which would increase the reading difficulty further. In other words, academic publishing is mostly about creating inaccessible content.

May 04, 2023

LaTeX revisited

LaTeX is known as the standard tool for academic publication, and lots of online forums and external software are available in the ecosystem. The main problem is that the promise of LaTeX doesn't match reality, and the following blog post explains in detail what the problem with LaTeX is.

Let us start with the main claim of the TeX ecosystem. The self-understanding is that the output quality of LaTeX exceeds that of possible alternative programs, especially MS Word. The interesting thing is that the yardstick for judging MS Word vs. LaTeX is not given. To make the situation more concrete, let us take a closer look at a PDF file generated with LaTeX.

The surprising thing is that such a LaTeX PDF file doesn't contain PDF tags, and the file doesn't use the default PostScript fonts such as Times and Helvetica. Last but not least, it is nearly impossible to convert a pdflatex file back into HTML or read it aloud with the JAWS screen reader. In short, the LaTeX-created PDF file has no accessibility at all. And there is a reason for this unusual behavior.

First it should be mentioned that this problem can't be fixed by simply adding a certain parameter or a new LaTeX package. It has to do with the self-understanding of LaTeX that its PDF documents are not accessible. The reason is that LaTeX is a sort of advanced printer driver. Its main purpose is to generate something like a bitmap picture, a TIFF image with a well-defined size and a well-defined position for each element. It is not meant to be zoomed, scaled or converted into another format; the image is static.

This behavior can be explained by the origin of the system. In the late 1970s, TeX was created as a pre-processor for offset printing devices. These machines need an image as input, and the objective is to print this image in a large number of copies. This makes LaTeX a great tool for creating newspapers and printed journals, but at the same time it is a poor choice for creating office documents or HTML pages.

Office documents and HTML files operate with different assumptions about reality. They do not assume a fixed A4 paper size as the target output; the assumption is that each user prefers a different size. The same HTML file is rendered on a smartphone display, can be printed on a US Letter page, or is rendered on a desktop screen. This kind of flexibility is not available with LaTeX.

The LaTeX community ignores the problem. The users assume that there is no need to read a LaTeX file aloud in JAWS, and they assume that every PDF file gets printed. This assumption worked fine in the 1980s, but it produces a reality gap in the 2020s. Most internet traffic isn't generated by desktop users; smartphones are the preferred display devices. In addition, it is very important that a PDF document can be converted into other formats like HTML, because users like to render the information on their own terms.

The only thing LaTeX can do really well is provide a static image containing justified text. It looks as if it were scanned from a book created in the 1960s, and the LaTeX community assumes that this format is the only valid layout.

LaTeX-free text editing

 

The LaTeX typesetting system has been used in the past for creating all sorts of academic texts. The advantage is that it separates the content from the layout and produces a PDF document of high output quality. A lot of external software has been created around the TeX ecosystem, such as LyX, TeXstudio and open-source fonts, which can simplify the text creation workflow, especially for larger documents.

Apart from LaTeX there are some alternatives available. Especially the Markdown format has the potential to replace an existing LaTeX workflow with a thinner alternative. The screenshot shows how a text file is edited. To make the sections visible, the gedit program was extended with an outline plugin which allows jumping to each section in the text. Combined with the internal spell-checking feature, this emulates the standard LyX editor very well. Even if the document format is not LaTeX, the workflow shares many similarities, and it is very easy to create longer documents.

April 11, 2023

LaTeX and the Knuth-Plass algorithm

In the past many attempts were made to compare LaTeX with MS Word. At first glance the comparison is a subjective one: person 1 likes MS Word while person 2 does not. To judge on a more rational basis, it first has to be defined what the goal of typography is. The hidden goal is to create justified text, which is the opposite of left-aligned text.

If the goal is only to produce left-aligned text, the output of LaTeX and MS Word is the same. At the end of each line there is a white gap of fluctuating size. Such a text can be read easily, and it is also easy to program software that formats it; nearly all word processors are able to do so. In contrast, if the objective is to produce fully justified text, most word processors reach their limit.

The goal in typography in general, and in LaTeX in particular, is to realize a certain sort of formatting which is known to be difficult. For the same task, "produce a justified paragraph", it is possible to judge different programs like MS Word, LaTeX, InDesign and so on. LaTeX is known for its strength in this single use case.

In contrast, possible alternatives to LaTeX like MS Word, LibreOffice, the fpdf2 library or an ASCII text editor are not able to produce high-quality justified text. These programs were designed to typeset left-aligned text. In other words, if the idea is to write a LaTeX replacement from scratch, then the software needs the ability to produce justified text with ease.

Let us take a step back. The most advanced challenge in typography is to produce justified text. This goal was hard to realize for metal-based typography before the advent of the computer, and it is also hard for modern software programs. In contrast, left-aligned text is much easier to realize: all the software has to do is put the characters next to each other with the same white space between the words and break the line when it is full. Writing a computer program or a library which is able to do so is an easy task, as the sketch below shows.
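As a minimal sketch in Python (not taken from any existing word processor), a greedy left-aligned line breaker needs only a few lines: it fills each line until the next word no longer fits, then starts a new line.

def greedy_wrap(text, line_width):
    # Left-aligned line breaking: append words until the line is full.
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= line_width:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print("\n".join(greedy_wrap("the quick brown fox jumps over the lazy dog", 15)))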

In other words, the self-understanding of LaTeX is to master the hardest topic within typography, and comparing LaTeX with other programs only makes sense for this single problem. So it is not about putting glyphs on a sheet of paper in general, but about a very particular arrangement of them.

The interesting thing is that nine out of ten people will agree that LaTeX's ability to create justified paragraphs is better than Word's, because this task can be measured on an objective basis. Such a benchmark doesn't explain why it is important to format text in this way. It is mostly an impractical challenge, an attempt to investigate whether a certain book printer or typesetting software masters the complicated problems.

It should be mentioned that programs like MS Word, web browsers and text editors never claim that they master this problem. For example, MS Word uses the left-aligned paragraph as the default setting for every new document, while LaTeX uses justified text by default. This is a hint at the self-understanding of the program. In other words, LaTeX assumes that a book or a journal should be typeset only in justified mode and no alternative is allowed.

April 10, 2023

The Knuth-Plass algorithm

Over the decades the LaTeX community has turned into an ideology. The idea is that LaTeX is superior to MS Word, and the user is only advised how to format documents within LaTeX. There are endless definitions of what LaTeX is about, but a more technical explanation is missing.

The main difference between LaTeX and possible alternatives like LibreOffice Writer and MS Word is that LaTeX has the built-in Knuth-Plass algorithm.[1][2] Implementing the algorithm from scratch is very difficult.[3] Much of the TeX-related source code is devoted to this single problem.

Suppose MS Word added a simple button in the settings menu which enabled the algorithm for a Word document. The resulting justification and the distribution of content over multiple pages would then work on the same principle used within LaTeX, the visual layout would improve, and the difference between MS Word and LaTeX would shrink. All the other features of LaTeX, like separation between content and layout and a robust file format, are available in MS Word too. For example, Word has a built-in draft view which allows entering the text without any formatting, and Word stores documents in an open XML format which is arguably superior to the .tex format used by LaTeX.

The difference between MS Word and TeX can be reduced to the mentioned line breaking / page layout algorithm. The idea behind the algorithm is that the boxes on the page are positioned more elegantly. Elegant means that the word spacing is even across different lines and the pictures are located at the correct positions. A rough sketch of the underlying optimization idea follows below.
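The real Knuth-Plass algorithm works with boxes, glue and penalties; the following Python sketch only illustrates its core idea of global optimization. Instead of filling lines greedily, it chooses the break points that minimize the total squared slack of all lines except the last one (character counts stand in for real glyph widths, which is an assumption made purely for illustration).

def optimal_breaks(words, line_width):
    # best[i]: minimal total badness for typesetting words[i:] onto lines
    # nxt[i]:  index of the first word of the following line in the optimum
    n = len(words)
    INF = float("inf")
    best = [INF] * (n + 1)
    nxt = [n] * (n + 1)
    best[n] = 0.0
    for i in range(n - 1, -1, -1):
        length = 0
        for j in range(i, n):
            length += len(words[j]) + (1 if j > i else 0)  # one space between words
            if length > line_width:
                break
            slack = 0 if j == n - 1 else line_width - length  # last line costs nothing
            cost = slack ** 2 + best[j + 1]
            if cost < best[i]:
                best[i] = cost
                nxt[i] = j + 1
    # reconstruct the chosen lines
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:nxt[i]]))
        i = nxt[i]
    return lines

text = "In olden times when wishing still helped one there lived a king"
for line in optimal_breaks(text.split(), 20):
    print(line)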

The surprising thing is that even within the LaTeX community the Knuth-Plass algorithm is ignored or discussed only seldom. The number of papers about the subject is small; since the early 1980s fewer than 100 papers have been published about this algorithm. So it is a sort of expert knowledge not available to the masses. The interesting thing is that, apart from the algorithm, the TeX ecosystem has little else to offer, or at least nothing more advanced than what is available in MS Word today. Word can export documents to a PDF file with a simple mouse click, it can format a document with the "Latin Modern Roman" font, and its ability to insert mathematical equations is excellent. The only real weakness of Word is the calculation of vertical and horizontal white space, which results in lower-quality typesetting. Even untrained users will see at first glance whether a two-column text was formatted with MS Word or with LaTeX. That means the Knuth-Plass algorithm produces a visible difference.

The question is not how to indoctrinate happy MS Word users into switching to LaTeX; it is the other way around. The idea is to explain in simple words how the TeX-internal line-breaking algorithm works so that it can be integrated into mainstream applications like MS Word, fpdf2, LibreOffice and so on.

References
[1] Knuth, Donald E., and Michael F. Plass. "Breaking paragraphs into lines." Software: Practice and Experience 11.11 (1981): 1119-1184.
[2] Plass, Michael Frederick. Optimal pagination techniques for automatic typesetting systems. Stanford University, 1981.
[3] Verna, Didier E. "ETAP: Experimental Typesetting Algorithms Platform." ELS 2022: 15th European Lisp Symposium. 2022.

April 08, 2023

Creating a LaTeX clone from scratch

The core element of a text rendering engine is a data structure which holds the boxes on the page.

id  name        x    y    w    h
0   pageborder  0    0    210  297
1   textborder  35   30   140  227
2   line        60   30   115  14
3   line        35   49   140  14
4   line        35   68   140  14

The entry with id 2 shows the first line of a paragraph with a small indentation. All the elements on a page are stored in this single boxtable. The units are not pixel positions on the screen; they are millimetres on the physical sheet of paper. For rendering the boxtable to the screen, a second table is created which holds the pixel information according to the scaling factor.
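A small Python sketch of how such a second table could be derived from the boxtable (the dictionary layout and the 96 dpi value are assumptions for illustration, not part of any existing engine):

def to_pixel_table(boxtable, dpi=96):
    # one millimetre corresponds to dpi / 25.4 pixels
    scale = dpi / 25.4
    return [
        {"id": box["id"], "name": box["name"],
         "x": round(box["x"] * scale), "y": round(box["y"] * scale),
         "w": round(box["w"] * scale), "h": round(box["h"] * scale)}
        for box in boxtable
    ]

boxtable = [
    {"id": 0, "name": "pageborder", "x": 0,  "y": 0,  "w": 210, "h": 297},
    {"id": 1, "name": "textborder", "x": 35, "y": 30, "w": 140, "h": 227},
]
print(to_pixel_table(boxtable))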

What an algorithm like the Knuth-Plass line-breaking algorithm does is convert a piece of text like "the quick brown fox jumps over the lazy dog" into a boxtable. So the program does not operate on the 2D rendered page on the screen; it works on the internal tabular representation of the boxes.
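A hedged sketch of that conversion step (the box dimensions mirror the table above; the function and its parameters are invented for illustration): each already-broken line becomes one row in the boxtable, and the first line receives the paragraph indentation.

def lines_to_boxtable(lines, x=35, y=30, width=140, line_height=19, indent=25):
    # Turn broken lines into boxtable rows placed inside the text border.
    table = []
    for i, line in enumerate(lines):
        dx = indent if i == 0 else 0
        table.append({"id": i + 2, "name": "line", "text": line,
                      "x": x + dx, "y": y + i * line_height,
                      "w": width - dx, "h": 14})
    return table

for row in lines_to_boxtable(["the quick brown fox", "jumps over the lazy dog"]):
    print(row)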

April 07, 2023

Word vs. LaTeX, what is the difference?

There are at least two major document typesetting systems in real-world use: MS Word and LaTeX. Both programs have a large fan base, and which one is preferred depends on personal judgment. What is missing in the debate is a general description of the differences. The only thing that is certain is that Word and LaTeX operate with different design principles. So let us summarize the ideas behind the LaTeX typesetting system:
1. open source
2. strict separation between edit and rendering mode
3. high quality in both modes

Now it is possible to explain these features in detail. The first criterion is that LaTeX is provided under a free license while MS Word is distributed commercially. An open-source counterpart to Word is LibreOffice.

The second and third features on the list are important for understanding the interaction with the system. The main feature of the LaTeX ecosystem is that the user has to switch back and forth between two modes: editing and viewing. The editing mode has much in common with the draft view known from Word. The difference is that in the case of LaTeX the contrast is much stronger. Editing in a LaTeX editor usually means using a monospace font, hiding the images completely and avoiding justification and hyphenation of the paragraphs. What LaTeX users prefer, ironically, is that during editing there is no typography at all: the hyphenation is absent, the spaces between the words are always the same and the vertical space between the paragraphs is always the same.

All the typographic enhancements are only visible in the rendering mode. The user has to press the preview button and then sees the DVI / PDF output on the screen, which contains hyphenation, justification and floating images. This two-mode philosophy is the core element of LaTeX typesetting.

So the underlying question is whether there is a need for two modes in word processing, or whether a single mode (as is usual in WYSIWYG DTP software) is enough.

The main reason why this two-mode interaction was introduced is that it simplifies the man-machine communication and makes it easier to program the software. In LaTeX there are different frontend / backend combinations available. The user can run a LyX instance combined with LuaLaTeX, or use Texmaker in combination with the pdflatex backend. This allows a text editor and a text renderer to be programmed as different projects. This is perhaps one of the strengths of LaTeX, because each project can be made more feature-complete. In contrast, the GUI-based Word software combines the editing and the rendering capability in a single program. From a cynical perspective this results in a medium-quality draft mode plus medium-quality typesetting. What LaTeX users prefer is a high-quality draft mode plus a high-quality layout.

April 05, 2023

Creating PDF documents without LaTeX, Part 2

The only software which can be mentioned as a true alternative to LaTeX is MS Word and LibreOffice. Both are powerful word processors which allow the creation of single-column and multi-column documents. In contrast to LaTeX, they have an elaborate document file format which allows inserting images and annotating text.

Until today, Word and LibreOffice have not been able to replace LaTeX, for many reasons. The problems are well known and described within the LaTeX community: poor typesetting quality in combination with the missing separation between layout and content. Both are strengths of LaTeX, which has the best typesetting quality and allows the user to focus on the content of the text.

Instead of arguing which of the programs should be used in the future, the better idea is to describe the current situation first. The current situation is that LaTeX has the largest market share for creating academic content. It is followed by a large empty gap, and then the minor software programs like LibreOffice and Distiller follow. The prediction is that within the next 10 years nothing will change. That means the Knuth software dominated the 1990s and it will do so in the 2020s too.

WYSIWYM Editors
Between LaTeX and LibreOffice there is a big difference. LibreOffice works with a rendered layout editor and has no draft mode. In contrast, what LaTeX users prefer is the separation between entering the text and previewing it on the screen. Let us describe the principle of a WYSIWYM editor in detail:

Edit mode:
- fixed monospace font
- left justified text
- no hyphenation
- no page border
- only a frame for images

Preview mode:
- high quality typesetting
- fully justified text, global line-breaking algorithm
- precise position of captions and pictures




In other words, LaTeX combines two very different principles: a high-performance draft editor plus a visually advanced rendering capability. In contrast, the LibreOffice software combines both modes into a single GUI window. It has no draft mode and no advanced rendering mode.

In the history of software development, the Vim editor comes close to this concept: Vim also works with two modes, and the user has to switch between them.

The reason why a separation between edit mode and preview mode makes sense is the complex layout of two-column typeset documents. Editing a two-column paper in LibreOffice is very complicated for the user, who has to deal with the content and the visual appearance at the same time: the columns, the pictures, possible footnotes and fully justified paragraphs with varying spacing between the words. Such a rendering isn't bad in itself, but it has nothing to do with editing a text. At least, this is the opinion of the LaTeX community.

Another, more traditional reason why LaTeX prefers a clear distinction between drafting and rendering is program complexity. Implementing all the typesetting algorithms is a demanding project, and writing a full text editor is also a larger project. It makes sense to develop both components as separate projects; otherwise the resulting single project would have millions of lines of code.

Writing a LaTeX clone from scratch

Of course the idea sounds like a doomed project, because everybody knows that LaTeX consists of millions of lines of code. On the other hand it would be interesting to write a prototype which reduces a typesetting system to its minimum.

The first thing to know is that an elaborate markup language already exists: Markdown. Markdown is an enhanced plain text format which allows the user to define sections, bullet points and tables. The language is more than capable as an input format for a typesetting system.

The open question is how exactly a Markdown file gets rendered into a picture. The creation of the .PNG file itself is a trivial task, as many Python libraries are available for this purpose, and converting a picture into a PDF is also easy. The more serious problem is how to position characters, lines and paragraphs on the picture.
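As a small sketch of the trivial part (assuming the Pillow library; the page size and the positions are arbitrary choices): create a white A4-sized image, draw one line of text at a millimetre position, and save the result both as PNG and as a single-page PDF.

from PIL import Image, ImageDraw, ImageFont

DPI = 96
MM = DPI / 25.4  # pixels per millimetre

# white A4 page (210 x 297 mm)
page = Image.new("RGB", (round(210 * MM), round(297 * MM)), "white")
draw = ImageDraw.Draw(page)
font = ImageFont.load_default()

# draw one text line at the top-left corner of the text border (35 mm, 30 mm)
draw.text((round(35 * MM), round(30 * MM)),
          "the quick brown fox jumps over the lazy dog",
          fill="black", font=font)

page.save("page.png")
page.save("page.pdf")  # Pillow can also write single-page PDF files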

A rough estimate leads to the conclusion that typesetting is mostly about a list of features which are stored in a long table. Features can be: left margin, bottom margin, font size for text, font size for sections, line spacing, distance between pictures and so on. In addition the table needs to store dynamic data like "word space in line 1", "word space in line 2" and so on.
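A possible representation of such a feature table in Python (all names and values are invented for illustration):

# static layout features
layout = {
    "margin_left_mm": 35,
    "margin_bottom_mm": 40,
    "fontsize_text_pt": 10,
    "fontsize_section_pt": 14,
    "linespacing": 1.0,
    "picture_distance_mm": 5,
}

# dynamic data computed while breaking a paragraph into lines
layout_dynamic = {
    "word_space_line_1_mm": 2.4,
    "word_space_line_2_mm": 2.1,
}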

The working thesis is that the creation of the PNG image is realized by sending queries to the data table and storing information back into the table.

Let me give an example. Suppose the idea is to draw only the first page of a book, and the page contains a filled black rectangle. To do so, the drawing routine needs some information from the layout engine (see the sketch after the list):
- margin of the page
- position of the rectangle
- color of the rectangle
and so on.
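A minimal sketch of this query-and-draw loop, again assuming Pillow and using a simple dictionary as the layout table (all keys and values are hypothetical):

from PIL import Image, ImageDraw

MM = 96 / 25.4  # pixels per millimetre at 96 dpi

layout_table = {
    "page_margin_mm": 20,
    "rect_position_mm": (20, 40),   # x, y on the page
    "rect_size_mm": (60, 25),       # width, height
    "rect_color": "black",
}

def query(table, key):
    # stand-in for "sending a query to the data table"
    return table[key]

page = Image.new("RGB", (round(210 * MM), round(297 * MM)), "white")
draw = ImageDraw.Draw(page)

x, y = query(layout_table, "rect_position_mm")
w, h = query(layout_table, "rect_size_mm")
color = query(layout_table, "rect_color")
draw.rectangle([round(x * MM), round(y * MM),
                round((x + w) * MM), round((y + h) * MM)], fill=color)
page.save("first_page.png")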

The idea is that every drawing operation works on the same principle. That means the data table is the core element of a layout engine. Technically such a table can be realized as a hierarchical Python structure, but it remains unclear how to do so in detail.
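One conceivable shape for such a hierarchical structure is a nested dictionary per page, with blocks that in turn contain lines; this is only an assumption about how it could look, not a worked-out design:

document = {
    "page_size_mm": (210, 297),
    "pages": [
        {
            "number": 1,
            "blocks": [
                {"type": "rectangle", "x": 20, "y": 40, "w": 60, "h": 25, "color": "black"},
                {"type": "paragraph", "x": 35, "y": 80, "w": 140,
                 "lines": [
                     {"text": "the quick brown fox", "word_space_mm": 2.4},
                     {"text": "jumps over the lazy dog", "word_space_mm": 2.1},
                 ]},
            ],
        }
    ],
}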