August 23, 2021

OpenAI gym with a reward function

The OpenAI Gym library is perhaps the most important reinforcement learning project available. It provides out-of-the-box environments for simulating control problems and is commonly used together with algorithms like Q-learning and neural networks. The only problem is that there is hardly any documentation on how to control the example domains, such as the inverted pendulum problem.
Let us start with the basics, because many newbies don't know how to run the simulation at all. After installing the gym library on a Linux or Windows operating system, the programmer can use it from Python. A simple example is given in the following source code.

 

import gym
import time, random

class Plaincartpole:
  def __init__(self):
    self.env = gym.make('CartPole-v0')
    observation = self.env.reset()
    for framestep in range(100):
      self.env.render()
      # choose a random action: 0 = push left, 1 = push right
      action = random.randint(0, 1)
      observation, reward, done, info = self.env.step(action)
      print("observation", observation, "reward", reward)
      time.sleep(0.5)  # slow down the simulation for watching

if __name__ == '__main__':
  p = Plaincartpole()

 

Apart from the “gym” library itself, two extra Python libraries are imported, one for generating random actions and one for slowing down the simulation. After executing the Python script, the user should see an inverted pendulum on the screen which is doing something.
On the terminal, the status is shown, which consists of the measured features and the reward. This first Python script is nothing new or special; most OpenAI Gym tutorials start with this example. In the for loop of the script, the frame counter is increased and each time step is sent to the graphical screen.
After this trivial example is running, the more complicated question is how to control the pendulum. From a technical perspective, the user sends actions (0=left, 1=right) to the pendulum. This affects the system and the pendulum swings in a certain direction. It is important to know that the episode ends and the done flag becomes true as soon as the pole tilts more than about 12 degrees from vertical. That means the game has stopped and the control problem wasn't solved.
For generating a sequence of actions which can stabilize the pendulum, the first thing to know is that the reward provided by OpenAI Gym is the bottleneck. The built-in reward function doesn't provide useful feedback: it simply returns 1.0 for every step in which the pole is still inside the allowed angle range of roughly -12 to +12 degrees. A second problem with the reward function is that it is a binary alive signal, never a value in between. This problem can be fixed easily with a self-created reward function.

 

import gym
import time, random

class Plaincartpole:
  def __init__(self):
    self.env = gym.make('CartPole-v0')
    observation = self.env.reset()
    for framestep in range(100):
      self.env.render()
      action = random.randint(0, 1)
      observation, reward, done, info = self.env.step(action)
      # handcrafted reward function: 1.0 when the pole is upright,
      # decreasing with the pole angle (observation[2], in radians)
      reward = 1 - abs(observation[2])
      if reward < 0: reward = 0
      print("observation", observation, "reward", reward)
      time.sleep(0.5)

if __name__ == '__main__':
  p = Plaincartpole()

 

The new reward function also measures the angle of the pole, but it provides more graduated information. If the pendulum is in the upright position the reward is 1.0, if it is rotated a bit the reward is 0.8, and so on. The idea is that the original reward function from the OpenAI Gym environment is overwritten by a self-created function.
This handcrafted reward function can be modified according to the needs of the programmer. The example shows only a very basic version. It is possible to improve it, for example by checking if the cart is outside of the visible playfield, as shown in the snippet below.
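For instance, a variant which additionally sets the reward to zero once the cart leaves the track might look like this sketch (observation[0] is the cart position; ±2.4 is the track boundary used by CartPole):
      reward = 1 - abs(observation[2])
      if reward < 0 or abs(observation[0]) > 2.4: reward = 0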
The idea is that random actions are sent to the system and then a reward is determined. Before the optimal control action can be determined, it should be defined what the goal is. The goal is formalized in the reward function.
Let me give an example. Suppose the goal is to bring the cart into the middle of the playfield. The reward function would be:
      reward = 1 - abs(observation[0])
      if reward < 0: reward = 0

 

That means the feature in the observation variable is converted into a numerical value. This value is 1 if the cart is exactly in the middle, it decreases towards 0.5 as the cart drifts away from the middle, and it is clamped to 0 once the cart is more than one unit off center. Values in between are also produced, so it is a continuous reward function.
Or let me give a more advanced example. If the pole should be upright and the cart should be in the middle at the same time, the combined reward function is:

 

# partial reward for the pole angle
rewarda = 1 - abs(observation[2])
if rewarda < 0: rewarda = 0
# partial reward for the cart position
rewardb = 1 - abs(observation[0])
if rewardb < 0: rewardb = 0
# combined reward is the average of both
reward = (rewarda + rewardb) / 2

Example

Suppose a reward function was created which measures the position of the cart and ignores the angle of the pole. If the cart is outside of the playfield, the reward becomes zero. The idea is that the cart is moving left or right and, while the cart is doing so, the reward is shown on the screen. It is some sort of score like in a videogame.
That means all the other features which are stored in the observation variable are no longer interesting; only the reward value is monitored. Winning the game means maximizing the reward. And different reward functions will result in different games. A controller which maximizes the cart position reward will produce actions in which the cart is always in the middle and never leaves the playfield. So the reward function is some sort of constraint which defines what the problem is about.
The interesting point is that after changing the reward function the game engine remains the same, that means the pendulum will fall with the same speed as before. The only new thing is that the reward score is determined differently.
Suppose there is a universal policy available which maximizes the reward function. The actions generated by this policy will depend on the reward function. That means, after adjusting the reward function, a new behavior is shown on the screen. Or let me explain it the other way around: the forward model of the gym environment, aka the simulation, remains the same, and the policy which converts a reward signal into actions is also the same. The only variable is the reward function, which is handcrafted by a human programmer.
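As a minimal sketch of such a policy, a one-step lookahead can pick whichever action scores higher under the handcrafted reward. This assumes the classic-control environment can be copied with Python's deepcopy, which works for CartPole as long as rendering is not active; the names reward_fn and greedy_action are made up for this illustration.

import gym
import copy

def reward_fn(observation):
  # handcrafted reward: pole upright and cart in the middle
  rewarda = max(0, 1 - abs(observation[2]))
  rewardb = max(0, 1 - abs(observation[0]))
  return (rewarda + rewardb) / 2

def greedy_action(env):
  # one-step lookahead: simulate both actions on a copy of the
  # environment and keep the action with the higher reward
  best_action, best_reward = 0, -1.0
  for action in (0, 1):
    sim = copy.deepcopy(env)
    observation, _, _, _ = sim.step(action)
    if reward_fn(observation) > best_reward:
      best_action, best_reward = action, reward_fn(observation)
  return best_action

env = gym.make('CartPole-v0')
env.reset()
for framestep in range(200):
  observation, reward, done, info = env.step(greedy_action(env))
  print("framestep", framestep, "reward", reward_fn(observation))
  if done: break

Swapping in a different reward_fn changes the behavior on the screen while the simulation and the policy code stay untouched, which is exactly the decoupling described above.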





August 21, 2021

Reward function

 

The topic seems to be relevant for robot control. In short, the idea is to map measured feature values to a numerical reward value, and then use this value for controlling the robot towards the highest reward. So it is some sort of layer between the robot and the game it is playing.
What the paper doesn't answer is how to create reward functions. There are two opposite approaches available: the first idea is to create reward functions with algorithms, mainly neural networks, Q-tables and reward automata. The second approach is to handle reward design as a collaborative social activity. This is realized with examples from previous projects and a code repository which holds concrete reward functions. The DeepRacer project from Amazon goes in this direction.
The interesting point about reward functions is that they answer the question of how to control a robot. Robot control is nothing else than navigating the robot on the reward map. The reward map is an artificially created mathematical model. The advantage is that the details of games like Tetris, car driving or biped robot simulators can be ignored. That means the robot doesn't know which game it is playing, because it sees only the reward map.
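To make the reward map idea concrete, here is a minimal sketch with a hypothetical one-dimensional map: the highest reward sits at position 5, and a greedy hill-climbing controller navigates towards it without knowing anything else about the task.

def reward_map(position):
  # handcrafted map: reward 1.0 at position 5, falling off linearly
  return max(0, 1 - abs(position - 5) / 5.0)

position = 0
for step in range(10):
  # greedy hill climbing: move to the neighbor with the best reward
  candidates = [position - 1, position, position + 1]
  position = max(candidates, key=reward_map)
  print("position", position, "reward", round(reward_map(position), 2))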

August 17, 2021

What is the Unix philosophy?

 

Contrary to a common myth, Unix is not about pipes or the kernel; it is a programming style created around the C language. The idea is to create software with the high level C language instead of using assembly language. This allows writing larger programs which will need lots of RAM and disk space.
Some examples of typical Unix programs are gnuplot, which has 150k lines of code in C, and lyx, which consists of 350k lines of code in C++. Both programs have a long tradition, are typical Unix programs and they fulfill lots of requirements.
There is a difference between writing a small piece of software in assembly language which runs on a home computer and writing a larger program which runs on a mainframe computer. The obvious difference is that a program like gnuplot can't be created by a single person in a weekend; it is a long term effort done by a team of programmers.
Perhaps it sounds a bit unusual to ask, but why exactly are larger programs needed? Wouldn't it be ok to paint a chart with a software whose source code needs only 100 kb? This has to do with the Unix philosophy. The idea behind Unix is to write large scale programs which fulfill the requirements of the end user. The typical program written in C has lots of parameters. It is not because the program itself is so complex, but because the task, which is text processing, graphics drawing or whatever, is so complicated. The assumption is that it is not possible to rewrite the gnuplot software in assembly language and compress it into 10k lines of code. Such a program would run much faster, but it wouldn't provide all the features of the original gnuplot software.
It is not a coincidence that 90% of Unix software is written in C/C++, because these languages were designed for creating large programs. In contrast to assembly language, C/C++ wasn't invented for hardware needs; it is a programmer friendly language. It supports functions, variables, and structs. The result is that programmers can write software which has lots of lines of code. Let us take a look at what the typical Linux user will see after the first installation of a distribution. He will see that the operating system has occupied many gigabytes of his hard drive. In case of Debian, a default installation will need 15 GB. And if the user installs additional software, the requirement is much higher.
Somebody has written all this code in the past. This was done by Unix programmers who are familiar with the C language. They have created software packages for database management, text processing, image manipulation, mathematics and of course networking. Programming itself is easy, but these applications are hard. A basic tutorial for the C language will fit on a single sheet of paper. C consists only of if-then statements, function declarations and some case switches. But programming a concrete application is very different from programming itself.

Is assembly language more efficient than C?

 

Some Linux newbies are often surprised why, after an installation, the system needs so much disk space on the hard drive and occupies 2 GB of RAM. Even a minimalist window manager like LXDE will need 700 MB of RAM without starting a single application. The answer to this problem is not located in Linux itself but has to do with the C programming language.
In contrast to a common myth, the C language is not the fastest and most efficient programming language in the world; it is only a language used everywhere. C compilers are optimized for compatibility reasons and will lose in a direct comparison with handcrafted assembly language. To understand how big the gap is, we have to look at some operating systems which were written entirely in assembly. These systems, for example MenuetOS, need only 70 MB of RAM and provide network functionality and a GUI. In contrast, a Debian system needs 10x more RAM.
With improved programming techniques, for example storing assembly instructions as bytecode, it is possible to reduce the RAM consumption further. So it is possible to compress an entire operating system into 20 MB of RAM. It will use the latest graphics modes and is ultrafast.
The only problem is that the source code is difficult to maintain, at least for C programmers. Only Forth programmers and assembly specialists find it easy to maintain such a minimalist operating system. Basically spoken, it is technically possible to program a Linux-like OS which needs only 10% of the RAM of a normal Linux system; the problem is that somebody has to write the source code first, and this is the bottleneck.
The reason why the C language is used so much in reality is that it has simplified programming. Even larger programs which consist of many modules can be created in C easily. The additional advantage is that the same code can be compiled for different hardware architectures. The disadvantage of this flexibility is that the RAM consumption is high and the runtime speed low. But in reality this is not a big problem, because modern computers have large amounts of RAM. So we can say that programming now works differently from programming in the 1980s, in which every byte of RAM was expensive. In the 1980s most programmers tried to reduce the amount of occupied RAM to a minimum. It was common to write programs which ran with 200 kb of RAM and less. For today's ears this sounds like overengineering, because a single PNG image rendered in Firefox will occupy this amount of RAM easily. It simply makes no sense to reduce the amount of RAM needed for executable binary files.
Perhaps it makes sense to explain what programming in general is. Let us investigate the similarity of some larger apps, namely gnuplot and lyx. The shared feature is that both applications contain a large number of code lines. In these code lines all the features are implemented. Each submenu in lyx was realized with complicated C functions. So we can say that the reason why these programs are big is not the programming style itself; it has to do with the requirements for how the application works. That means, if somebody writes a gnuplot program from scratch which has the same features and all the fancy graphics and buttons, he will need the same number of code lines as the original gnuplot needs today. The only way to create smaller programs is to reduce the number of features. For example, if the idea is to program a simple command line based text editor which has no features at all, it is possible to compress the binary file into less than 50 kb. The problem is that modern users have high expectations of software. They wouldn't be satisfied by a simple 50 kb text editor; they want a lyx-like all-in-one system.

August 15, 2021

Comparing Forth with C

 

Both languages use opposite paradigms. In short, Forth is a minimalistic programming language while C has a tendency to produce larger programs. At first glance, the Forth language fits the needs of programmers very well, but a closer look at the problem shows that not Forth but C has become the dominant programming language.
In a C program there are variables available and the functions are larger. The typical length of a function is 100 lines of code, which means an entire file is used to store a function. In contrast, in a Forth program a function, which is called a word, is very small. That means the code is heavily factored and no variables at all are used.
Let us take a look at reality. A typical example of software written in C is gnuplot. The main feature of gnuplot is that the source code is very long. The program takes lots of megabytes and the end user can adjust an endless number of parameters. The question is: if the program is so large and complicated, why is it used so often? From a computer's perspective, gnuplot wastes a lot of RAM and most of the code wasn't programmed very efficiently. That means the code wasn't factored very much.
The answer is that gnuplot wasn't designed from a hardware perspective; the idea was to create a plotting software which can draw bar charts, 2d plots and lots of other formats. Basically spoken, gnuplot and most other Unix programs don't care about main memory and CPU needs; the goal is to write a full blown software which can be used by the end user.
In contrast, the paradigm of Forth is the opposite. Here the idea is that the needs of the CPU are important, and that the program needs to run efficiently in terms of a low memory footprint and a low number of CPU cycles. It simply doesn't make sense to write a program like gnuplot in Forth.
The main idea behind the C language is to abstract from low level machine operations. C provides the programmer a virtual machine which has an endless number of variables, long functions, files and libraries. This virtual machine makes it easy to write long complicated programs which are documented in an endless number of books. In contrast, the idea of Forth is that the user writes only small programs which have 20 words and don't need documentation.
To understand the difference it makes sense to look back to the late 1980s. At this time the transition was made from 8bit home computers to modern PC-like computers which were equipped with hard drives and 32bit CPUs. The Unix operating system and the C language were never a great success on 8bit home computers. The reason was that the hardware of 8bit home computers is too small to run larger programs. Even 16bit home computers like the Amiga 500 didn't have enough main memory and disk capacity to run a Unix operating system.
The situation changed dramatically with the advent of the PC. If the computer hardware is more powerful, it is possible to run modern, CPU-demanding C programs. For today's ears it sounds a bit uncommon, but in the 1980s it was very complicated to provide 2 MB and more of RAM to run a program. That means a software which needs so much RAM couldn't be executed in the past.
If a 32bit PC is available which has 16 MB and more RAM in combination with a hard drive, it is possible to run all sorts of C programs, which includes Unix, Microsoft Windows or any other operating system. What these programs have in common is that they provide lots of programming libraries and the software needs very much RAM to operate. On the other hand, the value for the end user is higher. He can use the computer to draw pictures, write a text and play games. Today's situation is that C has replaced nearly every other programming language, and especially the Forth paradigm has fallen into obscurity.
What Forth programmers do today is assume a different ecosystem. They imagine that there is only an 8bit CPU available, no hard drive and a low amount of RAM, and then they write a program for this computer. It is easy to show that in such a restricted environment Forth performs much better than C, simply for the reason that a C program can't be executed on such a minimal computer.

August 14, 2021

Some reasons why Forth fell out of fashion

 

In the late 1980s the Forth programming language was discussed in lots of magazines as a valuable alternative to C and assembly language. With the advent of the PC the situation changed drastically and Forth became the most esoteric programming language ever. It is nowadays located on one of the last positions in the TIOBE index, which means its user share is low.
There are many reasons why Forth turned into a minority language. The strength of Forth is that the source code is very small. This was important for home computers in the 1980s, in which the amount of RAM was 64 kb and less. Today's RAM has become much larger and nobody cares if a program needs 20 kb or 200 kb of main memory. A second advantage of Forth is that it works without batch compilation; a Forth system provides a real time compiler. That means the user doesn't have to wait until the compiler has created the binary file; after entering the code into the editor it can be executed immediately.
Unfortunately, this advantage is also no big thing anymore. Modern compilers are fast enough and in case of doubt the user can take an interpreted language, so he won't be confronted with long compile times. A third advantage which speaks for Forth is the ability to be executed on minimalistic CPUs which have only a small number of registers. The reason is that Forth works fine with only two simple stacks, and no further memory cells on the CPU are needed. This advantage has also become unnecessary, because CPUs were improved with additional registers so that they can run high level code generated by C compilers.
Apart from the mentioned advantages, Forth has some built-in disadvantages. For example, the programmer has to think like a push down automaton, he has to factor code into small subroutines, he is not supposed to use variables, and the runtime speed of Forth is lower than the output of a modern C compiler.
As a result, all the advantages of Forth are no longer relevant but the disadvantages haven't disappeared. It is not very complicated to estimate that the average programmer has lost his enthusiasm for the Forth language and prefers more mainstream alternatives like C, Java, Perl or GNU bash scripting. The situation today is that, apart from a small number of demo programs for printing prime numbers to the screen, no larger software was written in Forth and the language is ignored by the world.
In general, Forth is the natural opponent of the C language. The C language has become very popular. This wasn't expected in the 1980s. During the 1980s, C was only one language among many. It stood in contrast to Pascal, Simula and Basic. Compiling a C program on an 8bit home computer was an advanced operation. With the advent of the modern PC, which includes graphical operating systems, hard drives and fast processors, the situation changed drastically. C has become the de facto standard for software writing, and all the important programs are implemented in this language first. Some years ago it was estimated that 75% of a Linux distribution was written in C. And the chance is high that for the MacOS and Windows operating systems the same ratio is valid.
Forth can be seen as an alternative world in which C never became popular. From a Forth perspective, the C language uses too much energy on the CPU and the code is bloated, which results in lots of bugs. A typical Forth program will run on a CPU which needs 5 milliwatts and is stored in 100 kb of ROM, while a typical C program gets executed on a CPU which consumes 200 watts and needs 8 GB of RAM.

August 12, 2021

Building a computer is easy

 

At first glance, the task of building a computer from electronic components is demanding. The problem is that relays have to be used as the main memory and other components are used to create the logic unit. It will take many hours until the system is running. But in comparison with the task which is required after the computer was built, the hardware task is the easier one.
To make the situation easier to understand, let us assume not a physical computer is created but only a virtual machine which can execute a program. The simplest possible machine is a brainfuck interpreter. It is basically a computer, but realized as a software program. Similar to building a computer, at first glance the task of creating such a machine seems to be very complicated. The interpreter has to fetch the next instruction, it has to analyze if the word is known, and then the processing step follows.
But a short look into existing brainfuck compilers and interpreters will show that the task can be realized by a single person within a small amount of time. It is possible to create such a machine and fix all the bugs.
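To make this concrete, here is a minimal sketch of such an interpreter in Python. It implements the fetch, decode and execute loop described above; the input command is omitted for brevity, and the bracket table is just one possible way to handle the two loop commands.

def brainfuck(code):
  # the machine: a tape of memory cells plus a data pointer
  tape, ptr, pc, output = [0] * 30000, 0, 0, []
  # precompute matching brackets for the two loop commands
  stack, jump = [], {}
  for i, c in enumerate(code):
    if c == '[': stack.append(i)
    elif c == ']':
      j = stack.pop()
      jump[i], jump[j] = j, i
  # fetch, decode and execute until the program ends
  while pc < len(code):
    c = code[pc]
    if c == '>': ptr += 1
    elif c == '<': ptr -= 1
    elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
    elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
    elif c == '.': output.append(chr(tape[ptr]))
    elif c == '[' and tape[ptr] == 0: pc = jump[pc]
    elif c == ']' and tape[ptr] != 0: pc = jump[pc]
    pc += 1
  return ''.join(output)

# the loop adds 9 to the second cell 8 times: 72 is ASCII for 'H'
print(brainfuck('++++++++[>+++++++++<-]>.'))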
The paradoxical situation is that after the interpreter is ready, the more complicated task hasn't been addressed yet. An interpreter which is able to execute a program is useless if no software is available. That means somebody has to write the code which is executed on the machine. This task is the more challenging one. The amount of possible software is endless; in contrast, there are only a few possible ideas for how to create a virtual machine. The chance is high that the principle is the same for a physical machine. That means there are only 8bit computers, 16bit computers and some other types. In general the layout is the same, and if someone creates a computer from scratch he will arrive at the same design as in previous attempts. But suppose someone has built a homebrew CPU and all the parts are working great. What can this machine do? In case of doubt, the only thing the machine will do is run a calculator program or figure out the prime numbers from 0 to 100. Everything else needs to be created first in software.
Sometimes it is mentioned that assembly programming is an easy task. From a hardware perspective, this is correct. In contrast to plugging in cables manually, assembly instructions are an elegant way to program a computer. But the same language will become hard and even unusable if the idea is to create a certain application, for example a computer game or a text processing software. That means the computer is no longer used as a simple machine which runs a program; the idea is to program a useful application for the machine. In such a case a low level hardware oriented language becomes a bottleneck.

August 05, 2021

How to become an A-Blogger

 

The traffic to weblogs is categorized into A, B and C. C-blogs have a low amount of traffic, B-blogs are read by more people, and A-blogs are internationally relevant blogs which compete with existing large online portals and forums.
Technically, the traffic is measured by the blogging software, which records the number of visits on each day. In the screenshot the traffic of this blog is shown. It has a very low traffic of 3 visits per day. That means literally nobody is reading the blog, and no one has subscribed to the RSS feed or is motivated to comment on something. Somebody may ask what is wrong with the blog. Does it have content? Yes, a lot of content was posted over the years. The blog consists of roughly 500 posts and many of them are equipped with self-created images. The language is English, so in theory the world can read the content.
It is very common for a blog to receive only a low amount of traffic. The reason is that the internet has millions of different blogs and they are listed in the search engines on the last positions, so nobody will find them. For example, one of the postings is about the C language. But if a random visitor types “c language” into Google, he won't see my blog because it is located at position 2 million and 321.

Why promoting Linux is a bad idea ...

 

The existing computer industry works great. The hardware vendors are producing fast, quiet laptops and the software industry provides secure, affordable applications. If the customer trusts the companies, he will receive in return modern technology which allows him to write documents, play games and watch videos. The only disturbing element are so-called Linux users who think they have more knowledge about computing than even the hardware vendor.
A typical Linux user will claim that a certain notebook has a loud fan. This is simply not possible, and the only reason why the fan is on is because something is wrong with the user. He is not allowed to install alternative operating systems. Linux users are doing the wrong actions and then they complain about so-called hardware malfunctions.
Normal users, who run only the preinstalled Windows operating system, have no such problems. The games on their PC run smoothly and they don't hear any noise. The reason is that these customers have more knowledge about computers. They trust the experts in the field and don't try to use their acquired amateur knowledge for ideological arguments.
The best thing hardware vendors and software companies can do is to ignore Linux users completely. A discussion about which operating system is better, or whether a certain hardware works together with Linux, won't produce added value; it is a dead end. If Linux users are interested in understanding how computers work, they should solder together their own computer, but they are not allowed to spread a bad mood in the online support forums.

August 03, 2021

How to make Windows 10 compatible with Linux

Most users have the understanding that they have to decide between Linux, which is open source, and Windows, which is not. The users think they have to remove the existing Windows partition from their hard drive, otherwise they can't become part of the Linux movement. A softer transition is described in the following blog post.

Instead of arguing for or against certain operating systems, the better idea is to investigate the software which is installed. A typical Linux distribution is shipped with the following software preinstalled: gcc, python3, Firefox, LibreOffice. What Linux users do is not interact with Linux itself; they want to run these programs. The reason is that the mentioned software is powerful and comes free of charge.

What speaks against the attempt to install these programs in Windows instead of running Linux? Right, it works great. All of this software, plus extra programs like gimp, is available in a Windows 64bit version. It can be installed with a simple mouse click. After the software was installed, it will run the same way as the Linux version.

What most users are interested in is not switching to open source; they simply want to become Python programmers. Let us describe the idea behind Linux in detail. Linux is a layer between the GUI and the hardware. Usually not the end user decides about this layer but the hardware vendor. A certain company decides which BIOS a computer gets and which hardware drivers work fine. Does it make sense for an ordinary user to question the decision of the hardware vendor? It is a rhetorical question, because in case of doubt, the hardware vendor has more experts.

Let us go a step backward. It makes sense that user1 recommends user2 to install the python3 interpreter or the LibreOffice suite, because they are normal Windows programs and after they have been installed the computer will look the same. The only difference is that the user gets another icon on the desktop. In contrast, the recommendation to remove Windows in general in favor of Linux is way too much innovation. Such a thing can't be undone and most users won't be happy with Linux. Linux and open source are two different things. Open source means using software tools like gcc under Windows, while Linux means that the user replaces the entire Windows partition with something else.

The interesting point is that the advantage of Linux is low. If python3 and LibreOffice were installed in a Windows environment, the user won't get any additional value if he switches to Ubuntu or Debian. But he will lose many things, for example the ability to run Windows software. Let us try to describe the situation from a different angle. Suppose Microsoft would prohibit a user from installing programming tools like notepad++, a text-only browser, or a C compiler. The explanation would be that the user is not allowed to run these advanced programs. Under such constraints it would make sense to deinstall the Windows OS in general and search for something which is more open. But in reality the user doesn't have these restrictions. He can install as much open source software under Windows as he likes, because it is his own computer and he is the administrator.

The main problem with Linux is that the system isn't backward compatible. That means, the software industry isn't producing Linux apps and the hardware industry has no PC for Linux users. Instead of trying to change the situation, the more realistic approach is to recognize the situation and ask whether Linux might be wrong.

Does Windows have disadvantages?

In a blog post somewhere on the internet, some cons of Windows were mentioned. Most of the cons are wrong and it makes sense to go through the list.

1. High resource requirements: Windows isn't using more CPU resources but less. The software is adapted to the hardware and was tested. In contrast, Linux is known for lower battery performance.

2. Closed source: The argument was that it is not possible to troubleshoot problems because the software is closed source. This argument is wrong as well. Most Linux issues remain unsolved, while Windows issues are solved much faster. That is the reason why Microsoft has over one billion customers in the world.

3. Poor security: Windows is by far the most secure system in the world. Windows Server is certified for many security applications.

Then there are some minor arguments given, like poor technical support and vulnerability to viruses. Point #8 on the list is interesting. It was mentioned in the blog that the price for Windows is too high, while Linux is provided for free. But how long does it take until somebody is familiar with Linux? Right, it will take years. In contrast, all the users are familiar with Windows already, so the system is much cheaper.

9. Additional expenses: This argument is wrong too. The amount of freeware and open source software for Windows is much higher than for Linux itself. There is a lot of freeware available which isn't available in Debian and other Linux distros. It is up to the end user to decide to install these programs.

Point #11 on the list is funny: vendor lock-in. The argument was that Windows forces the user into a certain cage. The reality is that Windows isn't supported by a single company but by all the hardware and software companies in the world. The mix of different vendors is one of the key elements of the Windows ecosystem. Windows is nothing else than a bazaar in which different companies offer their software.

So we can say that Windows has no disadvantages. It is a modern operating system which is here to stay, and it will easily prevail over Linux.

August 02, 2021

Windows for Linux users

Sorry Linux, but you're no longer interesting. The better alternative is Windows. For a while now, Microsoft Windows has been the perfect replacement for a former Linux installation. It allows running the same software, and in addition all the existing Windows applications can be installed. It detects recent hardware, and the performance is a bit better than with Linux. Not by much, but at least this is my subjective impression.

Perhaps it makes sense to give some tips for other Linux users who have decided to remove the Linux kernel and become real programmers. The first thing to do is to reactivate the BitLocker encryption. If Windows is the only operating system on a PC, there is no need for gparted to be invited to resize the partition. This will increase the security drastically.

The next step is to change the boot order in the BIOS, so that external unsigned USB sticks are not allowed to boot the computer, but only the newly installed Windows 10. After these basic settings are working, the next step is to install some software in Windows, for example python3, lyx, geany and LibreOffice. As I mentioned before, there is no need to use different software than under Linux. The newly installed software in combination with the PowerShell will transform the Win10 computer into a Linux-like workstation.

Sure, in the first weeks the user has to unlearn some things. The program starter is not at the top like in Gnome but at the bottom. But the rest works the same. What the user will see is that the Windows machine runs much smoother and more securely than any existing Linux operating system.

Let us be honest. Linux is nothing else than an ideology and has failed to convince the power user with source code. Most of the software was written first for Windows, and it will run on this platform more efficiently than on Linux. It is some sort of joke if Linux users try to convince normal users that they should switch to open source, because Linux has nothing to offer at all. There is no such thing as a Linux system.

Perhaps we should mention some important facts from computer history. It was not Linux that invented the PC industry; all the games and C++ compilers were created in Windows and for Windows. That means the most advanced OS in the world is not the penguin; it is preinstalled on all the PCs in the world. For example, the latest edition of Windows contains the speech engine Cortana, a powerful full text search across all files, and a built-in firewall. All these features are missing in Linux and there is no plan to introduce them. Basically spoken, Linux is some sort of fail pattern and it makes sense to remove the OS from the hard drive.

August 01, 2021

Describing the Linux failure landscape

 

The Linux online forums have to handle a lot of requests from disappointed users. The typical situation is that a certain Linux distribution isn't working with a concrete piece of hardware, and nobody in the forum knows the exact reason or can fix the problem upstream. Instead of explaining each single case, let us give a bird's eye perspective on Linux compatibility in general.
The overall situation has nothing to do with init scripts or kernel modules; the problem is located on a timeline. A hardware vendor releases a product to the market and sells the product for 2-3 years. During that time, the product won't work with Linux for sure. That means the touchpad of the notebook isn't recognized and the graphics card can't display even standard VGA signals. The only way to run the hardware is the Windows operating system.
Once the product is discontinued after a while and replaced by a new one, it becomes possible to write open source hardware drivers for the product. This results in kernel patches. Without these hardware drivers, the Linux kernel can't recognize the device. Until the newly written hardware drivers are merged into the kernel and shipped to normal Linux users, it takes another 2 years. So we can say that after a certain period, Linux works great with the hardware.
The critical time span is the first three years. While a product is being sold, no Linux support is available. And once the product is outdated, it runs fine on Linux. Which customer likes to run Linux on brand-new devices? Right, nobody, because the customers are aware of the problem.
Let us summarize the situation a bit: most (90%) of all the Linux bug reports in online forums are about recent hardware. It is very unlikely that somebody finds a problem with outdated hardware, because if the product is 5 years old or older, the chance is very high that it will run out of the box with Linux. So basically Linux provides something nobody was asking for.