September 25, 2019

Pyglet hello world program

According to various websites, a potential alternative to the pygame library is the pyglet framework. It's a bit harder to use and fewer tutorials exist, but the performance is much better. The general problem with pygame is that even a slow framerate results in an application which consumes a lot of CPU resources. Pyglet, similar to a C program, allows direct access to the OpenGL interface, which improves performance. Additionally, there is a feature called a batch, which is useful for particle simulations in which hundreds of vertices are drawn at the same time.

It's unclear whether pyglet can replace the famous pygame library, because it's fairly new and the information is spread over different Stack Overflow entries. For testing out its capabilities, a small hello world program makes sense: it draws a line and a sprite and asks for keyboard input. On the performance side, it's much better than pygame. The window consumes nearly zero CPU resources; it's on the same level as a game written in fast C++ with the SFML library. A short look into the source code shows that pyglet is a bit tricky to use. The first important thing is to inherit from a window class, and secondly, the manipulation of colors is complicated too. The hope is that in larger programs it will become easier to use, because the drawing primitives can be encapsulated in subfunctions. What we can say for sure is that the program written in Python is much easier to write than the same app written in C++, Java or C.



#!/usr/bin/env python3
import pyglet
from pyglet.window import key

class GameWindow(pyglet.window.Window):
  def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.pos = (200, 200)   # sprite position in pixels
    self.angle = 0          # sprite rotation in degrees
    self.movement = 20      # step size per key press
    imagetrain = pyglet.resource.image('locomotive.png')
    self.spritetrain = pyglet.sprite.Sprite(imagetrain)
    self.label = pyglet.text.Label('Hello, world', x=100, y=100, color=(100, 100, 255, 255))
  def on_mouse_press(self, x, y, button, modifiers):
    pass
  def on_mouse_release(self, x, y, button, modifiers):
    pass
  def on_mouse_motion(self, x, y, dx, dy):
    pass
    #print("move mouse")
  def on_key_press(self, symbol, modifiers):
    #print("key was pressed",symbol)
    if symbol == key.A: self.angle-=10
    elif symbol == key.B: self.angle+=10
    elif symbol == key.ENTER: pass
    elif symbol == key.LEFT: 
      self.pos=(self.pos[0]-self.movement,self.pos[1])
    elif symbol == key.RIGHT: 
      self.pos=(self.pos[0]+self.movement,self.pos[1])
    elif symbol == key.UP: 
      self.pos=(self.pos[0],self.pos[1]+self.movement)
    elif symbol == key.DOWN: 
      self.pos=(self.pos[0],self.pos[1]-self.movement)
    self.spritetrain.update(x=self.pos[0],y=self.pos[1],rotation=self.angle)
  def update(self, dt):
    pass
    
  def on_draw(self):
    pyglet.gl.glClearColor(1, 1, 1, 1)  # set the white background before clearing
    self.clear()
    pyglet.graphics.draw(2,pyglet.gl.GL_LINES,
      ('v2i', (10, 15, 130, 135)),
      ('c3B', (100,100,255)*2), # one color per vertex
    )
    self.spritetrain.draw()
    self.label.draw() 

if __name__ == "__main__":
  window = GameWindow(600, 400, "pyglet",resizable=False)
  pyglet.clock.schedule_interval(window.update, 1/30.0)
  pyglet.app.run() 
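
The batch feature mentioned in the introduction needs only a few extra lines. The following sketch is written against the pyglet 1.x API; the point count and colors are arbitrary choices, not taken from the program above. It submits 500 points to a batch once and then renders them with a single draw call:

#!/usr/bin/env python3
import random
import pyglet

window = pyglet.window.Window(600, 400, "batch demo")
batch = pyglet.graphics.Batch()

# add 500 random points to the batch once; rendering them later
# is a single call instead of 500 separate draw commands
num_points = 500
positions = []
for _ in range(num_points):
  positions.extend([random.randint(0, 600), random.randint(0, 400)])
batch.add(num_points, pyglet.gl.GL_POINTS, None,
          ('v2i', positions),
          ('c3B', (100, 100, 255) * num_points))

@window.event
def on_draw():
  window.clear()
  batch.draw()

pyglet.app.run()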


Update

In a second version of the program, a dedicated class for the physics state was created. This simplified the programming a bit. The remaining commands for drawing the lines and the sprites become easier, because once that code is written, it draws all the elements to the screen.
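
A minimal sketch of such a state class could look as follows; the class name and attributes are assumptions, not a copy of the actual second version:

class GameState:
  def __init__(self):
    self.pos = (200, 200)  # sprite position in pixels
    self.angle = 0         # rotation in degrees
    self.movement = 20     # step size per key press
  def move(self, dx, dy):
    # shift the position by one step in the given direction
    self.pos = (self.pos[0] + dx * self.movement,
                self.pos[1] + dy * self.movement)

The window class then only holds a GameState instance, and on_key_press shrinks to calls like self.state.move(-1, 0).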

Secondly, the exact CPU consumption was measured. At 100 frames per second, the GUI app needs 15% of the total CPU.

ps -p 31339 -o %cpu,%mem,cmd
%CPU %MEM CMD
15.1  1.3 python3 1.py

If the framerate is increased to 200 fps, the CPU consumption is 18%. It's important to know that the only objects on the screen are one sprite, one line and a small "hello world" text. Does that mean Python programs in general run slowly? No, that is not the case, because a program written in high-speed C++ with the SFML library has the same performance problems. It seems that the requirement of updating the screen 200 times per second is too much for computers in general, no matter in which programming language the code was written.

September 24, 2019

What robotics enthusiasts can learn from model railroads



At first glance, both domains have nothing in common. Robotics is focused on artificial intelligence, and writing software for a real robot is only one application among many. In contrast, a model railroad is primarily a mechanical toy not equipped with artificial intelligence. Both domains have weaknesses and strengths which fit together very well. For example, the robotics community has the problem that it has developed advanced software for creating neural networks, expert systems and AI programming languages, but has no idea what to do with the technology in a real project. In contrast, model railroad experts are well equipped with practical applications, but what is missing is a theoretical concept.

It's obvious to combine both domains into a single one. In the literature this is called a rail-guided robot, or a digital model railroad, depending on the point of view from which the subject is discussed. One option is to put existing AI technology into a model railroad; that means controlling the trains with neural networks. The other option is to put existing train tracks into a robotics project, which results in railway-guided vehicles.

The reason why a combination makes sense is that the foreign domain can solve the issues of the home domain. Let us give some examples. In robotics, the main problem has to do with the large state space. Suppose a walking robot was constructed which has 20 degrees of freedom. The problem is that nobody knows how to control such a complicated device. Sure, the AI community has developed many frameworks, but using them to control a concrete robot is complicated. The situation becomes much easier if the idea is not to control a walking robot, but an electric train. A train is equal to half a degree of freedom: the speed value can be zero or slowly moving ahead. More is not needed.

Now, let us observe the situation from the perspective of model railroad experts. A typical model railroad consists of 10 different trains plus 20 switches on the track. The problem is that the average train expert has no idea how to control this system. He can press the buttons manually, but this becomes a stressful job after a while. The problem is that within the railroad community there are no tools available like programming languages, cognitive architectures or neural networks. In most cases the model railroad can't run autonomously. Exactly these missing features are provided by the AI community. They have developed lots of tools which can control 10 trains plus a handful of switches easily. That means that for AI experts, a train track is a very easy problem, less sophisticated than what they are trained for.

Example

A basic setup which can be realized with any model railway system is a locomotive which drives in a circle. From a technical point of view, the motor in the locomotive accepts electric current and this drives the vehicle on the track. The interesting point is that the locomotive drives in the circle without a steering wheel; all that is needed is the forward movement. This hardware setup can be utilized by AI engineers for realizing a control system. The neural network has to keep the speed parameter of the locomotive constant. The result is a neural-network-controlled electric vehicle. It's a practical application which is useful in reality, and it can be realized with a minimum of know-how.
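
As a sketch, such a control loop can be written in a few lines. A plain proportional rule stands in here for the trained neural network, and read_speed() and set_voltage() are assumed hardware interface functions, not part of any real product:

def control_step(target, current, voltage, gain=0.1):
  # raise or lower the motor voltage proportionally to the speed error
  return voltage + gain * (target - current)

voltage = 0.0
while True:
  voltage = control_step(target=5.0, current=read_speed(), voltage=voltage)
  set_voltage(voltage)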

The most interesting feature is that each community, AI experts as well as model railroad enthusiasts, is not aware that its knowledge is relevant for solving the task. From the perspective of model railroading, a simple circle without any switches and with a single locomotive is very easy to realize; they do not assume that this kind of problem is relevant. And from the other perspective, that of the AI experts, a neural network which works with the backpropagation algorithm is also a bread-and-butter technology which seems irrelevant at first glance. Combining both technologies results in something new which is very advanced.

Take a look at the weaknesses

For understanding why both domains can be combined easily, it's important to take a look at the weaknesses of each domain. In the model railroad domain there are some tutorials available about how to use virtual model trains and program the vehicles. A short look into these tutorials leads to the conclusion that the quality is low. That means railroad experts are bad programmers; they have no idea how to create software, nor how to build artificial intelligence.

The model railroad community is not the only one with a deficit. A short look into the AI community and among neural network experts shows that some documentation is available about how to build physical robots and even cars. The problem here is that the AI experts have no idea how to realize the mechanical parts or the electric components. The reason is that artificial intelligence is mainly a software-driven technology and is not located in physical reality.

So in general, both domains have a large weakness, and the foreign domain can add the missing feature easily. It's important to communicate between both areas back and forth to get access to the missing knowledge.

Entry projects in classical robotics

Before it makes sense to explain why a model railroad is a good starting point, there is a need to describe low-entry projects in robotics. A beginner-friendly project which is not too complicated is the Micromouse competition; the aim is to steer a wheeled robot through a maze to the goal. Another easy-to-realize robotics project is a line-following robot, which can be built with Lego Mindstorms. The interesting point is that with model railroads, the entry barrier can be lowered much more.

The reason is that a micromouse robot is, compared to a train-controlled robot, an advanced machine, because the micromouse can drive backward and forward and rotate to the left and to the right; additionally, there are obstacles. Creating a control algorithm which can handle these requirements is, compared to a biped robot, an easy problem, but compared to a model train controller an advanced task. Model railroad trains are interesting because they lower the control requirements to the minimum. They are more beginner-friendly than the micromouse challenge.

Railway as a graph



The reason why model trains can be controlled much more easily than a micromouse is that a railway network forms a graph. If we abstract from different sizes and switches, every railroad can be converted into nodes which are connected by arrows. Modelling this in a computer program is a beginner task, and routing a packet through the graph is much easier than driving a normal robot to the goal.

The shown graph consists of a small number of train blocks which are connected with switches. What the operator can do is drive a train on the network and adjust the position of the switches. In contrast to a normal robotics problem, the remote-controlled vehicle has a reduced number of decisions. The train is not allowed to decide anything; the path is given by the environment. From a computer science perspective this is a very easy task to solve, much easier than normal AI problems which have a very large state space. Most normal AI problems are open-ended tasks; that means the robot can decide anything and the problem is not specified in detail. In contrast, a routing problem on a graph on which a vehicle is cruising is the perfect task for AI newbies. Even if they have no idea about reinforcement learning, neural networks or PDDL, they can solve the routing problem easily.
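
A minimal sketch in Python shows how small the routing problem really is. The layout below is invented for illustration; the search itself is a plain breadth-first search:

from collections import deque

track = {"station": ["switch1"],
         "switch1": ["block_a", "block_b"],
         "block_a": ["goal"],
         "block_b": [],
         "goal": []}

def route(start, goal):
  # breadth-first search returns the shortest path of blocks
  queue = deque([[start]])
  visited = {start}
  while queue:
    path = queue.popleft()
    if path[-1] == goal:
      return path
    for nxt in track[path[-1]]:
      if nxt not in visited:
        visited.add(nxt)
        queue.append(path + [nxt])

print(route("station", "goal"))  # ['station', 'switch1', 'block_a', 'goal']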

A second advantage is that the hardware for realizing the given graph in reality is available out of the box. All the model train manufacturing companies are able to deliver the needed parts in great quality. And even more, the consumer can decide between an older locomotive in green or a newer engine in black, both of which exist in reality as well.

What I want to explain is that the model railway is the perfect testbed for teaching artificial intelligence to newbies. It's much better than Lego Mindstorms or the micromouse challenge.

Comparison of Python with Java

Java and Python are both alternatives to the C/C++ ecosystem. The idea is to create software with the object-oriented paradigm but avoid complicated pointer arithmetic and outdated package managers. What Java has in common with the C/C++ standard is that the programmer has to declare variable types like integer, boolean and ArrayList. In the Python language this is no longer needed; instead, the variable types are determined on the fly.

This makes Python a modern language. It simplifies the programming process compared to Java; the same algorithm can be implemented in less time and with fewer lines of code. On the other hand, it's harder to program a Python interpreter or a Python just-in-time compiler, because the interpreter has to figure out by itself whether a variable is a character, an integer or a float.

Python is the most high-level language available today. That means the programmer can focus on the problem and is not forced to think about the machine. The next higher step after Python is not programming any line of code but using a package manager to install an existing application. That means the "dnf install pacman-game" command works better than creating the pacman game in Python from scratch. A look into the usage of Python shows that only a few people are aware of the language. Most programmers prefer classical languages like C, C++, C#, Java and Fortran. In the case of C++ they do so for performance reasons, and in the case of Java because they want a just-in-time compiler, which is perceived as more mature than a Python interpreter which produces memory leaks.

It makes sense to divide the existing programming languages into groups. The first distinction is compiled vs. interpreted, and the second one is statically typed vs. dynamic scripting languages. For example, the Java language is a semi-compiled language which works with static datatypes. That means, before the user can start a hello world program he needs to run the Java compiler, and in the hello world example, the variable i in the for loop has to be declared as an integer type. On the other hand, Python is the most modern language, because no compiler and no datatype declarations are needed anymore. That means Python code can be written much faster than C, Java or C# code.
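
A two-line example makes the difference visible. In Python, no declaration or compilation step is needed, and the interpreter tracks the type at runtime:

i = 42           # an integer
i = "forty-two"  # the same name now holds a string
for i in range(3):  # no "int i" declaration as in Java or C
  print(i)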

Python has much in common with other scripting languages like Perl and bash scripts. It was invented as an alternative to classical programming but became popular very fast, because many non-programmers are attracted by the low entry barrier. The difference between using an existing application and writing a Python script is small. On the other hand, Python is more complicated than using an Excel spreadsheet for calculating numbers, because the Python interpreter needs an error-free script before the first run; it won't fix syntax problems on its own.

Which alternatives to Python are available?

The quick and easy answer is that plain C is well suited for creating production applications. But let us slow down a bit and describe first what is wrong with Python.

Suppose a GUI prototype was created in Python. Everything works fine, because Python is a great choice for creating runnable scripts. The only problem is that the resulting code is slow, won't run on different architectures and is not recommended for a production environment. The first attempt is to redefine the problem and claim that Python is a great language for creating efficient applications. But even Python advocates are realistic and aware that Python has some limits. The best example is a game written in Python: it looks nice as a prototype, but for daily usage the application will consume too many CPU resources.

So, what is the next better alternative? The good news is that existing software written in Python can be converted easily into any other language. The conversion has to be done manually, but it's not very complicated. So it's up to the programmer to choose one of the existing 400 programming languages and rewrite the app. Rewriting the software in a different language is not a mistake; it's the fastest way in software development because it postpones the less important steps.

Which target language is the right one? According to the TIOBE index and the Stack Overflow website, the large universal languages are C++, Java and C#. All of them support object-oriented programming. The problem with these languages is that they are complex to master, precisely because they support OOP features. The more elegant way of programming an application is plain C. Plain C is some kind of standard language which is supported with top priority by the GNU Compiler Collection. But if compiled C is a good choice, what speaks against the C++ language?

To understand the issue we have to make clear what the shared principle of all compiled languages is. Java, C++ and even C are statically typed languages. That means the user has to define the datatypes in advance; for example, he defines an array, an integer variable and so on. The reason why programming in Python is so much easier is that in Python there is no need to declare datatypes. Only compiled languages need static types.

If we make the decision to rewrite software in a professional language, this is equal to rewriting the code in a statically typed language. This means the programmer is forced to negotiate with the compiler on a lower level; he has to think about pointers, the heap, main memory and so on. There is a simple decision: either the programmer works on a high level, which is equal to Python, in which case he can figure out the best algorithm and ignore the underlying operating system, or he rewrites the software in a fast, compiled language and then thinks about system-level implementations.

It's obvious why languages like Java, C# and C++ are a bad choice for creating production code. In the case of Java the programmer has no access to pointers, and in the case of C++ he is forced into using classes. If the programmer is trying to write low-level code, the only choice is plain C. And yes, writing an application in C is more complicated than doing it in Python. But C is easier to master than assembly language, and for most tasks there is a library available. In many cases, a C program will use a dynamic data structure such as a linked list. The advantage is that such a list is superfast and allows the creation of efficient programs.

The main reason why C is recommended for production code is that the number of features is reduced. In contrast to Java or C#, C supports only a limited number of programming techniques. Apart from .c files which are put together by the compiler, no other features are available. It's not possible to use object-oriented programming or a predefined dictionary; instead, the programmer has to manipulate main memory directly and needs to know what the external libraries are doing. This results in a programming style which is more efficient than Java code or C++ code, not because a Java compiler is slow but because the programmer limits himself to writing C code.

Object-oriented programming is a great idea for prototyping software; in a short time, larger applications can be created. Using the same principle for creating production code is a bad idea. The reason is that the main memory of a computer doesn't understand what objects are, and compilers or virtual machines which translate object-oriented code into binary code are too complex for practical usage. The better idea is to divide the programming task into two subproblems: creating the application prototype with Python and building the production code in C.

If the C code is written, the application is built for the future: the same code will compile 10 years from now. The reason is that plain C is the common standard in programming. If the C# language is forgotten, if Python scripts won't run and if the operating system is different, one thing remains stable: C code can always be compiled into binary code. The funny aspect of C code is that apart from a bit of pointer juggling there are no other complicated things which can go wrong. The C language consists only of for loops, if statements, pointers and external libraries. It's easy to debug existing C code, and it makes sense to write new code in it.

September 21, 2019

A short introduction to railroad modelling

The amount of information about model railroads is large, because the subject has a long history and strong support from commercial companies. In general we can say that the former analog control works with blocks: a part of the train track can be activated by current, and this forces the train to drive forward.

The more modern technology is called digital control. This works with a data transmission protocol: the current is combined with an information packet, and both are received by the decoder in the train. It works very similarly to a local area network, and each train has a network interface card. The technology is a bit different from normal computer networks, because the electric current and the networking information are combined into the same signal. In computer terms this is called powerline communication, https://en.wikipedia.org/wiki/Power-line_communication

What makes model railroads a bit difficult to grasp is that each manufacturer has developed its own data transmission protocol. Some recent developments work with the Arduino microcontroller, but there is still a need for a decoder in each train to receive the information. A possible standard is called Digital Command Control (DCC).

From the outside perspective, DCC works similarly to a WLAN device. Each train has an imaginary WLAN address and it's possible to send a signal to it. The difference is that in the case of model railroads the signals are transmitted not over the air but inside the train track. The reason is that all the trains are driven by an electric motor which needs electric current. This is perhaps the most obvious difference from Lego Mindstorms robotics, which works with onboard batteries. The advantage is that an electric train can work 24/7.
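
As a simplified sketch, a DCC baseline packet consists of an address byte, an instruction byte and an XOR error byte. Real decoders additionally expect a preamble and precise bit timing, which is omitted here; the address and instruction values are arbitrary examples:

def dcc_packet(address, instruction):
  # the error-detection byte is the XOR of address and instruction
  return bytes([address, instruction, address ^ instruction])

# example: a speed/direction instruction for the decoder at address 3
print(dcc_packet(3, 0b01100110).hex())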

Model trains for teaching artificial intelligence

From the classical perspective, artificial intelligence is taught at universities with robotics applications. The famous Lego Mindstorms kit is a well-known programming environment for creating practical robots. The problem with robot machinery is that nobody can define what a robot is. In the history of film, a robot is some kind of futuristic device which comes from outer space. A famous humanoid robot is Lt. Commander Data from Star Trek, but the T-800 in Terminator is also a robot.

At first glance, the idea of using a robot as an educational tool to teach artificial intelligence makes sense, because intelligent machinery is something not available today which should be invented in the future. The disadvantage is that it remains unclear what a robot is and how to program it.

The better idea is to search for a different platform for teaching artificial intelligence which is not a robot but a model railroad. A short look into train models shows that this kind of toy has a long history. The first electric models were built around 1900, and in the time before, purely mechanical railroads were used. The community is also growing. The latest technological advancement is called DCC control, which was invented in the late 1980s. It's a network packet protocol, similar to the Ethernet standard, for addressing individual motors on the track.

The normal pattern of utilizing model trains is manual control. But with a computer in the loop it's possible to automate the process. That is what the Tech Model Railroad Club did in the 1950s. So we can say that model railroads are an ideal testbed for creating complicated artificial intelligence control systems which work with symbolic reasoning, neural networks, image recognition and so forth. The dominant advantage over the sad Lego Mindstorms product is that model railroads are supported very well by commercial manufacturers. There are endless numbers of starter kits, add-ons and realistic model trains available. That means the community is already there and doesn't need to be invented from scratch. In contrast, there is no robotics playing community, because it's harder to do something useful with a robotics kit.

At first glance, it sounds like too much complexity to figure out the details of model railroads if the aim is to introduce model predictive control. The advantage is that even if the AI part of the project fails, the model train will still work quite well. It's possible to run a model railroad without artificial intelligence in the loop, and this makes it easier to approach the subject slowly. Let us compare this with Lego Mindstorms. A robot has the tendency that without programming the device, the toy will do nothing. If the self-written Java code has an error, the Mindstorms robot won't move a single millimeter. This is frustrating and results in failed projects. In the case of model railroads, the chance of failure is lower. The low-end educational goal is to teach the basics of electricity and physics, and the higher goal is to explain what artificial intelligence is. This makes a model railroad the natural choice for increasing difficulty.

Symbolic Planning is equal to GOFAI

The terms classical AI and symbolic AI are often used to describe a certain period in AI research, mostly before the advent of neural networks and behavior-based robotics. This kind of understanding summarizes all efforts in the early years under the same description. But at the same time it leaves out what the word "symbolic" means.

I've investigated the problem in the literature and found that symbolic AI is equal to symbolic planning, and this is equal to high-level planning in which the STRIPS language is the most important tool. As a result, GOFAI is not over; it's state-of-the-art AI if the focus is on the higher layer of a robot control system.

After this theoretical introduction it makes sense to give some details about symbolic planning. Suppose there is a game of RoboCup which is played by a robot swarm. On the bottom layer there is the need for low-level control and low-level sensor interpretation. This kind of task is ignored in the following description. Instead, the more interesting aspect of the problem is to control the robots on a higher level, which means planning the general strategy and long-term tasks. Before symbolic AI can be formalized, some kind of abstract simulation is needed. This is a simplified RoboCup simulation in which all the details are missing. It doesn't work with a physics engine, but with a symbolic engine. In most cases the game will look like an early Commodore 64 game, which consists of low-resolution graphics, or in the extreme case no graphics at all, but a textual menu in which the player can play soccer.
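
A minimal sketch shows the idea of a STRIPS-style action on such a symbolic engine. The state is a set of facts, and each action has a precondition set, an add list and a delete list; the predicate names here are invented for illustration:

state = {"ball_at_midfield", "robot1_free"}

pass_forward = {"pre": {"ball_at_midfield", "robot1_free"},
                "add": {"ball_at_goal_area"},
                "del": {"ball_at_midfield"}}

def apply_action(state, action):
  # an action fires only if all its preconditions are in the state
  if action["pre"] <= state:
    return (state - action["del"]) | action["add"]
  return state

print(apply_action(state, pass_forward))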

At first glance this kind of simulation is a step backward, because it doesn't contain a colorful, highly realistic simulation. Instead the game is played on a 10x8 pixel map and the movements of the ball are abstract. The advantage is that these kinds of simulations are easy for an AI to play autonomously. The state space is reduced and the allowed actions work on a higher level, so the computational requirements are low.

Atari 2600 games

A less powerful computer than the Commodore 64 was the Atari 2600. It has a maximum resolution of 160x192 pixels, but many games use a smaller resolution. The number of colors was small and the RAM was 128 bytes. To today's eyes the games on the Atari 2600 look like a joke: they don't have interesting sound, they don't use 3D graphics, and the movements are unrealistic and slow. What all the games from this era have in common is a low state space. If the map is limited, the number of possible paths is reduced too. This is equal to a symbolic game.

If the goal is to simplify the game further, the next lower step would be to avoid any kind of graphics and create games in text mode which use at maximum only 80 bytes. According to a fan forum there were a few text adventures released, for example "Dark Mage". Another game is "Stellar Track", which was rated by gamers as poor. The user has to control a spaceship: in detail, he sets the maximum warp speed and the feedback is shown on the screen as text.

For explaining what symbolic AI is, the "Stellar Track" game is ideal. It consists of a map which is 8x8 cells large, everything is in text mode and the overall game is very limited. That means the player can choose between a small number of actions, and after a short period the game gets boring. The advantage is that this sort of domain can easily be simulated and solved by an AI player. It can use a fixed strategy or test out alternatives with Monte Carlo tree search. We can summarize that symbolic AI has to do with a certain sort of game engine which is very reduced: the game engine uses no more than 100 bytes of RAM and the output is given in textual screens.
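
A flat Monte Carlo player for such a tiny game fits into a few lines: for every legal action, play some random games to the end and keep the action with the best average score. The simulate() function passed in here is an assumed game engine, not a model of the real Stellar Track rules:

def best_action(state, actions, simulate, rollouts=100):
  # average the score of random playouts for each candidate action
  scores = {a: sum(simulate(state, a) for _ in range(rollouts)) / rollouts
            for a in actions}
  return max(scores, key=scores.get)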

Turn based strategy games

Before the advent of real-time strategy games, there was a period in computer gaming history devoted to turn-based strategy games. A notable example is Oil Imperium. Its major feature is that the gameplay is very slow. The decisions to take work on a high level, and the player can relax while the other player takes a move.

This sort of game has fallen out of fashion, but they are important from the perspective of artificial intelligence. Even modern real-time strategy games have a high-level component which needs only a small number of decisions, for example the question of whether the user would like to build 10 houses or 30. Most games can be split into a low-level layer and a high-level layer.

Word processing on the computer

Around the year 1980 a technological revolution took place, namely the invention of the first word processing machines. In the beginning, they were standalone apparatuses: a normal typewriter equipped with computer memory. But after a short time, the normal PC was used as a word processing machine and the capability to edit text was realized by a software program.

The transition from mechanical typewriters to word processing on the computer was a large step in productivity. The remarkable thing is that a computer can be reduced in its functionality to a word-processing-only machine. That means it's possible to use the IBM PC for nothing else than typing in letters. The adoption of computers by business and private users was motivated by the word processing capabilities. The advantage of a computerized word processor over a mechanical typewriter is huge. The most important feature is that the text can be stored in memory and reproduced many times. For today's computer users this kind of feature is normal, but in the early 1980s it was a revolution. It forced all the companies to throw away their old mechanical typewriters and buy millions of PCs from computer manufacturers. The interesting point is that even untrained users use the PC as a word processor. It's the first application somebody discovers on the PC, and understanding how modern text software works is very easy.

To understand the impact of the revolution we have to take a look at how texts were written before the 1980s. In all the decades before, it was done with mechanical typewriters, and before the year 1900 it was done with a manual pen. That means all the books and publications created before the 1980s were not typed in at the PC but were produced mostly on mechanical typewriters. This affects the content of the text: if somebody uses a computer to type in a book, the sentences are different than if somebody uses a mechanical typewriting machine.

From a social perspective, word processing is not only a computer-driven technology; it's an educational device. If somebody learns to use the computer for writing texts, he learns to write texts in general. Or let me explain it from a different point of view: it's not possible for somebody to learn a language without using tools like pens, typewriters and computers. As a result, the invention of the first word processing computers was equal to a global education program to teach the art of writing.

September 19, 2019

Identify the bottleneck in robotics

A short look into the modern world shows that robots are seldom used. Most cities and companies look the same as 50 years ago. That means 100% of cars are operated manually, and pick-and-place operations in a warehouse are also done by humans. The reason is not located in the technology itself. With robotics hardware and software everything is alright; the problem is to push the technology into the market.

Let us give a motivational example. From a technical point of view, a cheap and highly productive pick-and-place robot consists of a delta robot, which provides the robust basis, plus software. The software works with a convolutional neural network to identify the objects and controls the robot gripper. The gripper doesn't work with mechanical parts but with air, together with a soft gripper made of plastic. The combination of modern hardware plus state-of-the-art software results in a working robot which can do the given task with maximum performance. The robot won't make any mistakes, it will operate at low cost and it helps to increase productivity drastically.

How many of these robots are used today in reality? It's a rhetorical question, because even in the United States no such robots are installed and there are no plans for doing so. Not because something is wrong with the technology, but because most people don't understand the technology.

Before a modern robot can be activated for the first time, the surrounding humans have to be prepared. The employees have to be trained, the consumers have to be asked and the universities have to educate the students. This kind of preparedness is missing on the global scale. That means that at the university, robotics isn't in the curriculum, and the employees in the company are not trained in programming a robot. On the one hand powerful technology is available, and at the same time society collectively resists using it. If no intervention takes place, the robotics revolution will become obvious in some decades. In 40 years from now, somebody will discover today's technology and think it would be a good idea to install one or two of the devices. The people in 40 years will have less fear of robots. That means they will use the same hardware and software invented today, but the only difference is that they will feel more comfortable with it.

Most technical issues in artificial intelligence are already solved. Self-driving cars were demonstrated in the past, self-driving drones fly robustly and even household robots are ready for the market. What is missing is some kind of techno-optimism which asks for these inventions and is bold enough to install the robot in a production task. The question for the coming 40 years in artificial intelligence research is not how to improve neural networks or create better expert systems; it has to do with training humans to use existing technology which is available but not understood very well.

From a technical point of view, the combination of a delta robot plus a convolutional neural network is some kind of low-tech automation project. It doesn't involve quantum computing, it doesn't try out new programming languages, and it was tested in the laboratory many times before. What is new is introducing this kind of machine into the workplace for daily usage. Most humans are afraid of computer-related innovation, and especially if the idea is that the machine is thinking on its own, they will reject it. Even if the robot is able to increase productivity by 300%, the employees ignore the device because they imagine a dystopian future in which their work is monitored. And the fear makes sense, because the hidden agenda of introducing artificial intelligence at the workplace is to create a surveillance system of an omnipotent machinery while the humans get indoctrinated and manipulated.

Python as a replacement for Perl

The scripting language Perl doesn't need an introduction, because since the 1990s it has been widely used by the UNIX community for automating daily tasks. Perl has the same status as the AutoIt language under the Windows operating system: it's everywhere and nobody admits that he has written such a script.

The problem with Perl is that the language has many disadvantages, not in comparison to a compiled language like C but in comparison to Java and Python. Perl programs look like an improvised bash script, which means that they have subfunctions and variables but it's hard to create larger projects. The recommended alternative to Perl is the mentioned Python programming language. The only important thing is that the Python code starts with a different header:

#!/usr/bin/env python3
print("hello world")

Similar to a Perl script, the Python program has to be made executable with chmod +x helloworld.py. This allows the script to be run from the command line. The shebang header searches for the Python interpreter automatically, and then the CPython program executes the script.

The performance of Python is about the same as in the case of Perl; that means it will run fast enough for most tasks, and if the user needs higher performance he has to rewrite critical code segments in the C language. One major improvement of Python over Perl is that it's possible to create graphical applications. This is possible with the object-oriented features, which allow sending messages to classes. Around the pygame library an entire community has formed which programs simple games with Python.

Creating advanced Artificial Intelligence with mindmaps

Most beginners in robotics start with the line-following example. The idea is that the robot has to move along a black line. It has to steer so that the robot never loses contact with the line. Realizing such an algorithm is possible in under 10 lines of code, and after a bit of programming the robot shows the desired behavior.
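
The classic loop looks roughly like the following sketch; read_sensor() and set_motors() are assumed hardware functions of the robot kit, and the threshold of 50 is an arbitrary example value:

while True:
  if read_sensor() < 50:          # dark value: we are on the black line
    set_motors(left=5, right=4)   # drift slightly to the right
  else:                           # bright value: the line was lost
    set_motors(left=4, right=5)   # steer back to the left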

After the first robotics project has come to a success, the often-asked question is how to scale up the project to more complicated domains. The most demanding task available is an AI engine for a real-time strategy game. This sort of game is hard for human players and for AI controllers as well. Sometimes the newbies expect that this kind of domain asks for a completely new sort of technology, but it's enough to combine mainstream AI techniques.

The project starts with a mindmap in which submodules are given. An RTS AI player consists of a strategy manager, a production module, an exploration manager and a house-building submodule. It's not a monolithic AI which handles all the tasks; the AI is split into sub-AIs which work together on a blackboard. This allows the attention to be focused only on a specific problem, for example which building should be created next. All the other problems in the RTS game can be ignored by the building manager.

The most surprising effect appears when many of these modules get implemented and the AI can use all of them together. The result is not random behavior; it results in a human-like AI which knows exactly how to play the game efficiently. The problem-solving technique of using a mindmap to define submodules can be utilized for other robotics projects as well. For example, if the aim is to build a drone which can fly but also land and grasp objects, the overall system can be realized by different managers which communicate on a blackboard. From a programming perspective, a blackboard is a normal class which contains other classes. It's the same principle used in normal computer programming when the program consists of lots of subclasses which are ordered hierarchically.

Input and output of a module

Let's take a look at one of the mentioned modules. The production module has to decide which kind of unit is produced next. It's only allowed to use the existing production facilities, not to build new ones. The question which has to be answered is: what should the base produce next? To make the decision process more reliable, some additional information is given, for example which units were produced in the past, how many credits are available and which of the units has a better performance-to-cost ratio. As a result, the module decides what the next produced item will be.

The most interesting feature of using submodules is that the amount of information is reduced. Not the complete game state is fed into the module, but only a limited amount of information. This allows the module to be developed and tested on its own. In classical software engineering this is called encapsulation, and it is a powerful tool for creating complex applications.
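
A sketch of such a module makes the reduced interface visible. It sees only the credits, the production history and a small statistics table; all names and numbers are invented for illustration:

def production_module(credits, produced_so_far, unit_stats):
  # consider only the units the base can afford right now
  affordable = {u: s for u, s in unit_stats.items() if s["cost"] <= credits}
  if not affordable:
    return None
  def score(u):
    ratio = affordable[u]["power"] / affordable[u]["cost"]
    return ratio - 0.001 * produced_so_far.get(u, 0)  # favor a bit of variety
  return max(affordable, key=score)

units = {"tank": {"cost": 300, "power": 9}, "infantry": {"cost": 100, "power": 2}}
print(production_module(400, {"tank": 1}, units))  # -> tank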

The disadvantage of creating all the modules is that it takes a lot of programming effort. In contrast to the example with the line-following robot from the introduction, the resulting AI application will need hundreds of lines of code, distributed over many files.

The transition from mechanical typewriters to modern computers

Historians have the tendency to describe decades on an objective basis. It's possible to tell the story of the 17th century. At first glance it makes sense to divide time into pieces of similar duration; 100 years are always 100 years. The problem is that in recent history the development of mankind has accelerated, which means that in the same 10 years many more events take place than before. A well-known example is the 1980s, which are known as a very influential decade.

To understand why, we first have to take a snapshot of the situation before the advent of the 1980s in terms of media history. Until the 1970s the most common tool for producing information was the mechanical typewriter. This machine was used in private households, in companies, by journalists and by scientists. Compared to a manually driven pen, a typewriter provides a higher typing speed and allows the creation of up to 3 copies of the original work. In general the speed of information distribution was slow, and the number of newly produced letters, journals and books was low.

With the advent of the first home computers the situation changed drastically. The killer application for the first IBM PC was not its ability to emulate a Turing machine; the first home computers were used as modern typewriters. The early word processing tools ran only in text mode, but they were powerful enough that in under 10 years the computer replaced typewriters. Even people who were familiar with mechanical typewriters recognized the advantages of computers very fast: it allowed them to create information more efficiently. Sometimes the early 1980s are described from a literature perspective as the beginning of the cyberspace novels. What happened in that time is that the authors threw away their typewriters, used a computer and wrote a new kind of story. It's not possible to write a story like Neuromancer (1984) on a mechanical machine.

The replacement of typewriters was not the only benefit of the computer revolution. At the end of the 1980s a second revolution was visible on the horizon, which was the mailbox and Internet community. This kind of innovation was more drastic than using a computer as a typewriter; it was equal to replacing the former postal service. All of these developments took place in a single decade. In the 1980s there was more technical revolution than in the 3000 years before. That means, if somebody wants to describe it in the correct depth, he has to write about the 1980s the same number of books as about all the centuries before.

The assumption is that this kind of revolution game repeated in the 1990s. That means, in recent history the amount of innovation has become bigger, and a single decade contains more facts than decades in the past.

September 17, 2019

Is English needed in the computer industry?

There is a common misconception which claims that English is a world language and is spoken by millions of people. Especially in the tech industry, the situation has been changing for a while now. Many international programmers no longer work in English but prefer their local language, for example Japanese. The famous Stack Overflow website https://ja.stackoverflow.com/ for asking questions about the latest Apple programming languages like Swift isn't written in English; it's recommended to write in normal Japanese characters.

The same is true for Spanish-speaking programmers. They have accumulated on their website https://es.stackoverflow.com/ around 100k questions with all sorts of programming-related topics about Android, PHP and JavaScript issues. The localized versions of Stack Overflow have become successful because many computer engineers are not interested in learning English and find it more comfortable to communicate in their mother tongue. This helps to lower the entry barrier. Sometimes the function names in newly written Swift programs are also written in Spanish, which makes the source code more readable and is recommended for internationally oriented projects. This helps foreign students not only to learn the programming language itself but also to understand the Spanish alphabet much better.

In contrast, the former trend of using English as a lingua franca in computer science has come to an end. The English-speaking Stack Overflow website struggles to find new users, and the quality of the questions has declined for a while now. The reason is that the English language is used as a colloquial language in which nonsense questions are formulated.

The recommended way of using the Spanish language inside existing C# code is shown in the following example:

public string Jugadas()
{
    string variacion = "";
    if (turno == 1)
    {
        variacion = "X";
        turno = 2;
    }
    return variacion;
}


The function names and the variables are written in the Spanish language. Jugadas means "moves" in English, and variacion can be translated as "variation". Unfortunately, the C# keywords are written in plain English. For example, "public" could be translated into "público", which has a wonderful acute accent over the u character that looks nice on a 4k resolution screen.

September 16, 2019

Perl and the invention of scripting languages

Under the UNIX operating system, the Perl language was a milestone. At first glance this is surprising, because Perl is less powerful than the C language; programs written in a scripting language run much slower compared to a compiled program. Today, Perl is nearly obsolete, similar to the Pascal language. Instead, the Python VM has replaced former Perl environments. The reason is that the syntax of Python is more elegant and it supports object-oriented programming very well. But let us focus on the Perl language. It was the first important language which offered an alternative to compiled programming. The reason why so many Perl scripts were developed is that it's much easier to write a script than to compile a program.

Programming in the C language is equal to classical programming. That means a system specialist has a concrete project, has a lot of experience with the C language and creates an efficient program. In Perl the situation is different. Perl scripts are developed as throwaway programs. They were written by non-experts, and for tasks which are not important enough to create a standalone C application. In contrast to a C program, a Perl script doesn't know the difference between source code and executable binary file; both are the same, similar to a plain text file. If somebody received a Perl script over a modem, he had the executable program, the documentation and the source code all in the same file.

Many modern programming languages like PHP, Java and Python took up ideas from Perl. They have copied the idea of interpreting rather than compiling the program. That means the code is not executed by the operating system directly like a system routine; the code is transmitted to the Perl interpreter first, and it executes the code. That means a Perl script isn't a standalone program but always needs the Perl interpreter as a runtime environment.

In a direct comparison, Python can be treated as the unofficial Perl successor. It works on the same principle but has improved the syntax. Also, more libraries are available for Python than for the Perl language. The open question is: what will come after Python? One option is that the Python interpreter evolves in the direction of Java, which means it will precompile the code first, which results in higher performance. Another option is that Python is nearly perfect and will be used for the next 30 years. Last but not least, one very unlikely option would be to evolve the Python interpreter into a complete GUI operating system, similar to the MicroPython framework https://en.wikipedia.org/wiki/MicroPython which stands in contrast to the existing Linux OS. In this case, the Python language would replace C drivers and the Linux kernel and take control over the entire machine.

Typewriters help us understand what innovation is

The most interesting feature of typewriters is that this technology is obsolete. This makes it much easier to analyze its past from an objective standpoint. According to different sources, mechanical and electric typewriters were popular until the late 1980s. This was the time in which the first users switched over to the IBM PC together with dot matrix and laser printers. Up to the early 1990s a typewriting machine was a common tool used by individuals and companies alike.

The reason why the typewriter was a success was that it was more advanced than the alternative, which is a manual pen. Writing a letter with a typewriter is more comfortable than using a pencil, and the typing speed is higher. Additionally, the typeface is easier to read, which makes the typewriter the perfect choice for journalists and writers.

Surprisingly, the typewriter was replaced by the computer for the same reason. Creating a letter in word processing software is much easier than typing it with a typewriter. So the typewriter was a transition step between a pen and a desktop PC. It was used in a certain period, and it's possible to tell this story to a wider audience.

The reason why the history of typewriters is attractive for many people is that it is a long history. The first commercial models of mechanical typewriters were built around 1890, and the last typewriters were sold on the market around 1990. This is equal to a timespan of 100 years. In that time, the typewriter was produced largely unchanged, with some modifications in the 1970s when electric typewriters became widespread. This sort of long duration is a luxury which is uncommon in the modern computing industry. The lifespan of recent technology is less than a hundred years; even the famous Commodore 64 was sold for less than 10 years.

The interesting aspect of typewriters was that in that era the inventors were not in a hurry. One typewriter was produced in 1890, then for 10 years nothing happened, and smaller improvements were made in 1900. That means the typewriting machines were easy to understand and there was no need to push forward and discover something completely new. This has changed drastically in the computer industry. The normal behavior of computer evolution is that every 4 years a new technology is introduced into the market which works differently from any product known before. A machine like the Commodore 64, the 286 IBM PC and a modern 4-core PC have nothing in common. They are used for different purposes and the internal workings have evolved a lot.

September 14, 2019

The Perl language as a guideline for understanding modern programming

The Perl language was developed in the late 1980s and has fallen out of fashion for a while now. It's a programming language which revolutionized computer programming in the past, because it was the first widely used scripting language. Perl was introduced at a time in which programming in UNIX was done entirely with the C language. From the perspective of C, Perl doesn't make much sense, because all the libraries are written again in Perl. It was yet another programming language which resulted in extra work, new scripts and a different ecosystem.

Exactly this was the reason why Perl and all the later examples of scripting languages were successful: because they focus not on classical programming but on scripted programming. The advantage of using Perl over C is the same one a modern Python programmer would give to explain why he has chosen the language of Guido van Rossum: because in a few lines of Perl it's possible to create complex applications. Secondly, there is no need to use pointers, integer variables or a compiler, and a simple hello world script fits into a single line of code.

From the computer history perspective, Perl was a successful attempt at reinventing computer programming. Instead of writing code for the machine, the code was written for a parser, called the Perl interpreter. Similar to a modern Java VM or a PHP interpreter, the Perl interpreter doesn't compile the code into machine code, and it doesn't enforce the strict static checks of a compiler. The self-understanding of Perl programmers is that they are not interested in creating machine code or extending the operating system; they want to deliver an HTML webpage or test a prototype.

Another interesting point is that Perl never replaced existing C code. After the invention of Perl, the previously created C applications were used unchanged. Perl was thought of as an additional language. It was the beginning of creating a zoo of languages. Instead of deciding between C and assembly language, the programmer had the choice between Perl, TCL, bash scripts, Modula-2, Postscript and Clipper.

From an abstract point of view, Perl resulted in many newly created lines of code. The CPAN directory was created from scratch: it started with 0 entries, and today around 180k modules written in Perl are available. All of the code was created manually and tested by the community. In most cases, the code does exactly the same as what a well-maintained C program can do, but it's written in Perl. It seems that there is some kind of general law which says that the number of different programming languages has exploded, and each language community follows its own rules.

Does it make sense to invent the Perl language next to the C language? Sure, because technically both concepts are different. For some tasks, C is the better choice, but many things can be done perfectly with Perl. With the same argument, new languages after Perl can be explained. For example, the Java language is very different from both Perl and C. Java is not a fully compiled language, but it's also different from an interpreted language. In contrast to Perl, Java was developed with object orientation in mind from the beginning.

All programming languages can be divided into two groups. The first group is optimized with the computer in mind. These are compiled low-level languages which are hard to learn but run efficiently on today's hardware; the typical example is the C++ language. The second group are prototyping languages, which are mostly interpreted, but sometimes they run on a virtual machine, like the C# language. Their idea is that it's okay if the program runs 5x slower than a C program, as long as it provides an easier syntax, which makes it useful for programming newbies. The typical example is Python, but languages like AutoIt (a macro language for Windows GUI testing) and Matlab script (the embedded language in a mathematics software) can be included in this category as well.

The most interesting fact is that there is no code reuse between the languages. Instead, each community creates all its libraries from scratch, which drives up the total number of code lines. The reason why so many languages were invented has to do with the architecture that runs the code. The classical platform for running computer code was a bare-bones CPU: the CPU consists of around 10 registers and an instruction pointer, and the language of choice was assembly. The next logical step is a compiled language which simplifies assembly programming. In the C language the programmer no longer writes for a register machine, but for a general computer consisting of memory and lots of subroutines. This evolution goes on: OOP means writing code for an even more powerful abstract architecture, and the latest version of the AutoIt language assumes that a well equipped Windows operating system with preinstalled programs is already there.

Transitions in media history

Media studies is strongly connected to the invention of the printing press in the 15th century. The past can be divided into the time before books were available and the time after they had become a mainstream good. The interesting point is that the pace picked up after Gutenberg's press. Revolutionary inventions followed which made the original press obsolete. The first was the steam driven printing press of the 19th century, which reduced costs dramatically; this sort of machine made it possible to print not only books but newspapers.

Another important invention was the typewriter. This machine democratized the writing of books: anybody who wanted to could write a book on their own, needing only an empty sheet of paper and a bit of language skill. The next revolutionary device after the mechanical typewriter was the home computer. The businessman Jack Tramiel is a good example of this transition: in his early years he worked for a company that produced typewriters, and later he founded Commodore and sold the famous home computer with 64 KB of RAM. The transition from typewriters to home computers took place during the early 1980s, and it was much faster than the adoption of the first printing press 500 years earlier: in less than 10 years, desktop computers replaced typewriters at home and in business.

After the home computer came on the market, the next revolutionary invention was even bigger: the internet, which reached the mass market within 5 years. It took only 5 years from the release of the first WWW server in 1993 until the internet was widely available in 1998, which resulted in the dot-com boom.

Over the long run, the time between two revolutionary inventions has become shorter. It took 300 years from the Gutenberg press to the steam driven press, only 100 years from the modern printing press to the typewriter, only 15 years for the home computer to replace the typewriter, and only 5 years until the internet was available worldwide.

Typewriters

In the year 1968 the Commodore 2200 was widely available. It wasn't a home computer but a home typewriter. From a technical perspective the machine wasn't very advanced; it looked like any other typewriter of the period. The fascinating detail is the brand name, because only 20 years later Commodore had evolved from a typewriter company into a computer manufacturer. In 1968 it was rare or impossible to become the proud owner of a home computer with 64 KB of RAM; neither the hardware nor the software had been invented yet. That means a personal typewriter was the most advanced technology people could buy at the time.

Even if a mechanical typewriter looks outdated to today's eyes, in the 1960s it was modern technology, because the typewriter allowed anybody to create a book. It was similar to today's word processing tools: instead of using a pen, the user pressed mechanical keys, which allowed him to write faster and more accurately. An additional advantage was that a typewriter could be used for copying by inserting a special sort of carbon paper.

The social aspect of the mechanical typewriter is remarkable. It allowed people to educate themselves. If somebody had written letters and complete books on the machine, he was in effect trained as a writer. He was using language not only passively, by consuming existing information, but actively. It was no coincidence that mechanical typewriters were widely used by students, in the office, by teachers and by playwrights: the machine was a tool for improving one's own mind. The interesting point is that without such a tool it was hard to become educated at all. Writing lots of letters with only a pen is impractical, and somebody who hasn't written many letters isn't trained in using language efficiently, which amounted to missing education. So the real power of a typewriter was located not in the hardware of the machine, but in its effect on the humans who used the tool on a regular basis.

Using scripting languages for number crunching


In the past, the task of number crunching was handled very well by the C/C++ languages. The optimizing compiler generates efficient machine code and forces the programmer to use the existing resources optimally. A C++ library which calculates prime numbers is about the most efficient artifact software engineering can produce; C/C++ seems the perfect choice for such applications.
Surprisingly, the situation has changed. Scripting languages are used not only for prototyping but increasingly for number crunching itself. At first glance it doesn't make much sense to calculate prime numbers or plan the trajectory of a robot arm with a Python script, because both are numerically intensive tasks which need a lot of CPU resources. On the other hand, a slow scripting language gives the programmer a reason to think twice about the algorithm. He knows that the Python interpreter is not very efficient, so he has to search for a fast algorithm first, before writing a single line of code. The idea is to ignore a potential 5x speedup from a C compiler and instead search for an algorithm which improves the program by a factor of 500.
The paradoxical success of Python isn't limited to GUI prototyping and interactive applications; it has become obvious in number crunching as well. The question is no longer how to convert an existing algorithm into efficient machine code; the problem is to identify the right algorithm and discuss existing ones in academic papers. The Python language has become a quasi-standard for such applications and has displaced C/C++ in many number crunching tasks.
Prime numbers
At first glance, a prime number generator comes down to implementing a certain algorithm. The following code snippet is moderately fast.

def isPrime(n):
    # trial division with the 6k +/- 1 optimization: after handling
    # 2 and 3, every remaining prime has the form 6k - 1 or 6k + 1
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

In contrast to naive primality testing, it only tests candidate divisors of the form 6k ± 1, stepping through the loop in increments of six, which reduces the computational effort. It's surprising to learn that far more efficient prime algorithms are known. Getting access to them is not a question of mathematics but of using a search engine on the Stackoverflow website. After a bit of searching we find a Stackoverflow post which gives a more efficient implementation:
def primes(n):
    """Return a list of primes < n (odd-only sieve of Eratosthenes, n > 2)."""
    sieve = [True] * n
    for i in range(3, int(n**0.5) + 1, 2):
        if sieve[i]:
            # mark every odd multiple of i, starting at i*i, as composite
            sieve[i*i::2*i] = [False] * ((n - i*i - 1) // (2*i) + 1)
    return [2] + [i for i in range(3, n, 2) if sieve[i]]

Now we can compare both versions in terms of speed. The first one finds the prime numbers from 0 to 3000000 in 15 seconds on a standard computer; the second version does the same task in only 0.3 seconds. That's a 50x speedup.
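A minimal timing sketch makes the comparison reproducible. It assumes both functions above are defined in the same file; the exact numbers depend on the machine.

import time

LIMIT = 3000000

start = time.perf_counter()
result_trial = [i for i in range(LIMIT) if isPrime(i)]
print("trial division:", time.perf_counter() - start, "seconds")

start = time.perf_counter()
result_sieve = primes(LIMIT)
print("sieve:", time.perf_counter() - start, "seconds")

assert result_trial == result_sieve  # both methods must agree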

The reason for the enormous speedup was the Stackoverflow forum. In one place, different users argued for and against certain algorithms, and others found that information with a search engine. That means the speedup was not the result of efficient machine language, nor of a specific mathematical insight; it is located in the Gutenberg galaxy. Python is a language which supports communication about algorithms, because Python programs are human readable. That makes it interesting for number crunching applications.


C/C++ was designed for number crunching


In the early 1990s, the C++ programming language was already well known. At that time it was described as a complicated but powerful language. Surprisingly, C++ was never the only programming language under the UNIX operating system. A well known, handy alternative was the Perl scripting language, which works the opposite way from C++. The controversy between C++ and Perl is very similar to today's arguments for and against Python. The interesting point in the early 1990s was that Perl didn't replace C++, nor the other way around; both languages were used in parallel. It was the beginning of a language variety in which new languages were developed quickly.
The reason why scripting languages like Perl, PHP, Python and Javascript were developed is obvious: they focus on the speed of program creation, not on the performance of the resulting code. The advantage of C/C++ is the other way around. The language is used heavily in the domain of number crunching, where the CPU runs at 100% and it's important to make the program faster. A typical example of a C/C++ application would be a video encoder.
The interesting point is that C/C++ is well suited for such applications. The programmer has to invest more time until the program is created, because he has to deal with pointers, variable types and complicated compiler workflows, but in exchange he gets software which outperforms Perl and Python easily. Encoding a video with Python might take 10 hours; the same task can be done with C++ in under 10 minutes.
Outside of number crunching, C/C++ has little to offer. If the program doesn't occupy the CPU at 100%, which is true for a normal desktop application or a simple game, C/C++ has no advantage over a scripting language. The programmer has to deal with the complex syntax and realize the program with pointers, but gets nothing in exchange. That is the reason why most programmers stay away from C++ and prefer scripting languages.
Not the programmer but the CPU should decide whether C/C++ makes sense. If the application takes 100% of the CPU, it makes sense to rewrite the existing Python code in C/C++. The result is an improvement in performance: a compiled C/C++ program uses the CPU more efficiently, because the generated assembly instructions are highly optimized. How many programs take 100% CPU time? Only a few. In a typical programming situation it is the exception that the programmer has created an algorithm or a subroutine which occupies all the resources. Dedicated number crunching problems are not very common in normal programming tasks. The typical web framework, the normal GUI application and even the average 3D game don't use the CPU intensively; switching to C/C++ is not needed there.
The problem is that some programmers treat software engineering and number crunching as the same thing. They identify with a social role in which computers are used mainly for important number crunching applications: the computer is started in batch mode, has to add millions of numbers, and then prints the result to the screen. And the programmer tries to realize such software in C/C++. The problem is that this kind of application is rare in modern software engineering. Today's software is above all complex: it consists of thousands of lines of source code and is described in hundreds of pages of documentation. At the same time, this software needs only a small amount of CPU resources; after it is started, CPU demand rises only from 2% to 3%. If the software makes trouble, it's not because the computer is too slow, but because there is a bug which throws an exception.
Perl
The invention of Perl led to a lot of newly written source code. Instead of propagating code reuse, the Perl community invented everything from scratch. Today most of that code is unused, and much of it was rewritten in modern languages like Python. At the same time, the Perl language was loved by its users, because it was easier to master than C/C++.
From a historical perspective, Perl was the first important scripting language. It was invented before the advent of Java or Python. The concept behind Perl was very similar to what Python is today: the programmer is no longer forced to declare variables before their first use, and he doesn't need to compile the program into machine code. In contrast to C/C++, Perl is a dynamic scripting language. Its main purpose is to create software prototypes. Sometimes these prototypes are rewritten in C/C++ later, but in most cases the Perl code runs on a production server. This strategy was replicated by modern programmers who run their PHP and Java applications on production servers.
The reason why Perl and the other scripting languages became successful is that at a certain point computers were fast enough. If a console application doesn't need much CPU power but waits most of the time for a server request, there is no need to optimize it for performance. The same is true for cron jobs and GUI applications: 99% of the running time the app is idle, doing nothing but waiting for user input. Does it make sense to optimize such software for low CPU consumption? It's a rhetorical question, and it explains why C/C++ has fallen out of fashion.
From the point of view of a C/C++ programmer, the CPU is fully occupied by the current application. The software produces a heavy load in terms of memory consumption, cache requests and integer multiplications, and the question is how to program the software so that it runs with less energy: how to optimize the assembly instructions and the memory layout so that the CPU works smoothly.

Again, is C++ dead?


The question was asked at https://www.quora.com/What-are-the-areas-where-C-is-dead and some examples were given. The interesting fact is that the number of projects using C++ is very small. Examples are given in which C++ is not widely used: the Linux kernel was written in C, web applications are written in PHP and Java, desktop GUI apps are programmed in C#, Android games are written in Java, and prototyping is done in Python. At the end of the post, a remarkable statement can be found:
quote: “Technically C++ can be used for all of these, it is just that people tend to prefer other tools for these applications.”
Is the statement wrong? I didn't find a mistake. It's correct that the Linux kernel was written in C, and it's also true that Microsoft promotes C# while Google promotes Java for Android applications. C++ is ignored by mathematicians as well; they prefer R, Matlab or Python for creating a model quickly and easily. It seems that there is a gap: in the literature, C++ is covered in thousands of books and papers, but in reality almost nobody reads these tutorials or is motivated to join the C++ community.
The reason why is not hard to identify. C++ became famous because it combines object oriented programming with a compiled language. This is very different from all the other languages: in most cases, object oriented languages are interpreted or run in a virtual machine, while structured languages like C are compiled. Or let me ask the question from the other perspective: if C++ is so great, why wasn't it able to replace all the other languages? This question can be answered step by step, by trying to convince each community to migrate to C++. We start with the Python programmers. C++ is more powerful than Python: it provides many object oriented features, and a modern LLVM compiler can produce the binary quickly. The problem is that the average Python programmer won't switch to C++, because he likes his language, which has fewer features and is interpreted interactively. The next candidate would be the Microsoft community. All we have to do is explain to long-term C# programmers that their language is wrong and the dot-net strategy doesn't make much sense; the better alternative is C++, which is not controlled by a single vendor but works on all operating systems. Unfortunately, the prediction is that Microsoft users are not interested in switching from C# to C++.
The same is true for Android programmers: technically they can write C++ applications, but it's harder than the normal Java workflow. And the same holds for web applications, which would run faster in C++, but the PHP community is not motivated to switch. That means C++ is a nice programming language in search of a user base.
If a language has trouble attracting users, the language can be called dead: it is simply ignored. So it makes sense to write an obituary for C++. It was a great language in the 1990s, before the advent of the internet. The promise was to combine object oriented programming with efficient compiler technology. In reality this concept was attractive only during the 1990s, because in that period there was no alternative. With the advent of new programming paradigms, C++ became obsolete. Microsoft attacked C++ with the dot-net strategy, non-programmers discovered scripting languages like PHP and Python as easy to use replacements for a compiled language, and even the core programmers in system development and hardware programming were not motivated to support C++.
From a technical point of view, C++ was designed as a one-size-fits-all language, meaning that any type of application and any type of user should profit from it. In reality, many different programming languages were developed for specialized needs. It was easier to invent yet another programming language than to write a simple C++ library. The current situation is that the world doesn't speak C++ but prefers around 100 different languages which are all incompatible with each other.
We can say that C++ was the most influential language of the 1990s but has since lost its programmers. A modern C++20 compiler works better than ever: the generated code is efficient, and the binary will run on any computer platform. Unfortunately, programmers are not interested in using this technology. In the long run C++ will become obsolete, or rather it has been in that state for a while. The prediction is that the ranking of C++ in the TIOBE index will decline in the coming years, while other languages, including comparatively new ones like Javascript or Go, become more popular.
One reason why C++ is ignored by the average programmer is that migrating away from C++ is easy: in most cases it's possible to rewrite existing code, and somebody who has written a C++ application can convert all of it manually into C#. At the same time, the advantage of creating new C++ code is low. Another problem is that most code isn't created with libraries in mind but is used once in a project. And creating system libraries can be done more efficiently in plain C than in C++, because plain C is the smallest common standard on the x86 architecture, while object orientation is perceived as an add-on which supports software development much like the UML notation does.
From the perspective of a C++ advocate, most of today's programmers have decided for the wrong language. If somebody prefers PHP over C++ for a web framework, he is not using an efficient language; if somebody prefers C over C++ for creating a library, his workflow is not optimal; and if somebody writes code in C# instead of C++, he is not vendor independent but tied to the Microsoft operating system. The problem is that this description of the world declares 95% of programmers wrong, with C++ programmers the only ones who have understood programming correctly. That means programming in C++ is not something observable in reality; it is wishful thinking.
I wouldn't argue against C++ itself, because it's one of the most interesting languages ever. In contrast to PHP or Python, C++ is a universal programming language: any type of program, from small to large projects, can be created with the compiler. And perhaps exactly this feature has made C++ obsolete. Programming means writing millions of lines of code with thousands of programmers, and allowing them only one language doesn't make much sense; the needs of different projects are too diverse. For example, somebody who prototypes a new game with pygame has a different understanding of the world than somebody who creates a library for a web framework.
The question is not how to define one language which fits all needs; the problem is how to connect existing Python code with PHP frameworks over the internet. Perhaps it makes sense to describe the descent of C++ with an example. In 2012 Google released the Go language. The alternative to Go would have been a C++ library for creating RESTful services. Google decided against C++ and implemented a language from scratch. Judging by the lines of code written in Go, Google was right: the newly developed language has become a success. Go convinced former C++ programmers to switch over when they want to create responsive web applications. The exact reason why Google ignored C++ is not clear; perhaps the language was not attractive enough, perceived as outdated, or technically incompatible.
Another reason was perhaps that creating a new language and a new compiler provides the opportunity to learn something. A programmer who is forced to use only existing C++ compilers restricts himself to already known technology, which is equal to resisting technological progress.
The problem with C++ is that the concept can't evolve toward future needs. The combination of a compiled language with object orientation is fixed in the case of C++, and so is the requirement of serving both low level system programming and high level classes, all baked into the C++ specification. This makes it hard to test out new ideas.
The main problem of C++ is perhaps that the language is compiled. This technique doesn't fit modern needs, in which programs are created interactively in test driven development. It makes no sense to recompile the complete project after every small modification. The idea of the compiled binary became popular many decades ago and has much in common with batch processing: the programmer types in a program, presses the compile button once every 3 days, and edits the source code in between until the next compile run.
The most obvious reason why C++ is dead is the Python language. Python works completely differently from C++: it is interpreted, has an easy syntax and is preferred by non-programmers.
Creating object oriented libraries
The most efficient way of implementing a library is assembly language. Notably, it's not possible to create object oriented assembly programs, because an assembly program can read and write the entire address space; it can call any subroutine, and there are no classes acting as boundaries. The alternative to assembly is the C language, which also has no object orientation abstraction.
At the same time, it's possible to implement any algorithm in assembly and C, which means that object orientation is not a must-have in modern system programming. It only simplifies the software engineering workflow for the programmer, because it helps him divide a problem into smaller chunks. Once the prototyping stage is over, the programmer can write the program itself. He doesn't need object oriented features in the software, and it's possible to create libraries in normal C code very well. The internal workings of these libraries are realized with structs and pointers: if a certain subroutine should modify a struct, the subroutine gets a pointer to the struct. This is exactly the way computers work internally, and the programming is close to the machine.
The hypothesis is that libraries written in C are more efficient and closer to the machine than libraries written in C++. This makes C the better choice for creating system libraries. The only language which beats C in terms of performance is assembly. There are examples in which a math library written in assembly outperforms C by 30% in speed and memory consumption, but for most cases normal C code is fast enough.
Is this maybe the explanation why C++ has struggled in practice: that compared to plain C it has problems in terms of efficiency? At least we can reduce the topic to a simple question: is a library written in C more efficient than a library written in C++? One possible way to answer this is by example, but I think it's also possible to describe the situation from an abstract standpoint. The interesting point is that the C++ syntax, which includes classes, can't be converted directly into machine code. A class is nothing that can be stored in main memory as such. All object oriented programming languages like Python or C# convert the class notation into a different representation which uses pointers for matching existing data and subroutines. Let me give an example.
In the C++ language it's not possible to access the private members of a different class. In terms of object orientation this feature is called encapsulation and means that a class has a border to the rest of the program. For the programmer this feature is very important because the number of variables visible inside the class stays small: a typical class of 100 lines of code has only 10 variables, and the programmer only has to maintain this small amount of information.
If the program is executed by the operating system as binary code, the concept of classes is gone. For the operating system the program forms one large address space containing hundreds of variables, each of which can be modified by any subroutine. Nothing forbids the program from modifying a variable of a different class. The reason why the C++ compiler prevents such behavior in the source code is that the compiler checks whether the program fulfills the object oriented rules. And this is why C++ compilers are so complex to realize: they add features which are not strictly needed. The ability to check whether the programmer is allowed to modify a variable is unnecessary from the standpoint of the running program; it exists only for the sake of object orientation. A C compiler doesn't prevent the programmer from changing variables of other modules, because the class feature isn't there, which makes a plain C compiler easier and more efficient to realize.
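As a hedged illustration of that point in Python (the class and attribute names are made up for the example): here too, privacy is a convention for the programmer, not a barrier in the running program.

class Account:
    def __init__(self):
        self._balance = 0   # "private" by convention (leading underscore)

account = Account()
account._balance = 1000000  # nothing at runtime prevents this access;
print(account._balance)    # the border exists only for the programmer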
According to internet forums, it's unclear whether C or C++ is the faster language. In most postings it is written that modern C++ compilers are faster than C compilers. But does this explanation make sense? Let us first form a working hypothesis in which C is superior to C++, because a C compiler needs fewer features and as a result is a smaller piece of software. With this hypothesis in mind, we can search for examples and counterexamples. It seems that awareness of the difference between C and C++ is not very widespread. C++ is mostly described as the queen of all programming languages, and it's rare to find an opinion which declares C++ obsolete. It seems that an objective judgment about C++ is missing.
History of C compilers
In the 1980s, with the rise of the IBM PC, many C compilers were available, such as Microsoft C, Watcom C and Borland C. From the early 1990s all of these compilers were enriched with C++ functionality, which resulted in the label “C/C++”: the same compiler was able to parse both C and C++ source code. In the middle and late 1990s, C++ was the dominant object oriented language; later languages like Java, C#, PHP and Borland Delphi were developed as alternatives to it.
If C++ is declared obsolete in the 2010s, it makes sense to take a closer look at the original C compilers. Without any doubt, C remains a programming language of the future; not the C/C++ ideology of the 1990s, but the stripped down version, plain C without its object oriented extensions. The advantage is that a compiler which handles only C is much smaller than a full blown C++ compiler.
Pointers
The reason why libraries written in C are fast is that pointers are the default in C. Pointers plus dynamically allocated memory allow very efficient programs. They can't be replaced by anything else, because this is how computers work at the lowest level.
It's surprising that, at the same time, pointers and dynamic memory are perceived by high level programmers as something which slows down the development process. The average Python programmer wants to store data in a dictionary; he is not interested in dynamically allocating an array and passing its address to a subroutine. That means pointers are at the same time the most hated and the most important feature in modern programming. A possible workflow for combining the advantages is to write a program prototype in a high level scripting language like Python, where classes are available and there is no need for pointers. If the Python code runs well, it makes sense to convert the program into low level C source code which uses pointers everywhere; the execution speed will become much better.

Is C faster than C++?


The question was asked recently in a blog post somewhere on the internet. To measure the performance on an objective basis, a pi-calculating routine was written twice with the same algorithm. The first routine was compiled with gcc, the second one with the C++ compiler. The result was that the execution time in seconds was exactly the same.
In my own experiment I came to the same conclusion. The runtime speed of the binary is the same, and even the program size in bytes was similar. Does that mean C++ is equivalent to plain C? No, that is not the conclusion, because we have to take a broader look at the ecosystem. A C++ compiler is more difficult to realize than a C compiler, and a book which describes C++ has more pages than a book which introduces C. That means C is the baseline, and the C++ compiler is trying to match that standard.
Instead, the answer depends on whether someone wants to program object oriented or not. Most current system libraries are written not in C++ but in normal C. The reason is that the increased complexity of C++ doesn't provide an advantage, so programmers stay within the C universe and ignore the C++ extensions. If they want to program object oriented, they switch from C to a scripting language like PHP or Python. There is a second reason why plain C is recommended over C++: if the aim is to improve the performance of a program, there is a need to use pointers and program close to the machine. This results in a programming style which is simply called C programming: the programmer allocates memory manually and hands a pointer to a data structure to a subroutine.
Using these kinds of tricks in a C++ context is possible, but it is not compatible with object oriented programming. The C++ language is then used as a kind of C, and the question is why the OOP extensions are there at all. On the other hand, the programmer cannot ignore pointers, because then he loses all the performance. It is important to decide either for low level hardware programming or for high level object oriented software engineering.
Software engineering has to do with designing an application. It is realized by drawing UML charts, creating prototypes in a scripting language and writing throwaway code. In contrast, the production-ready code is written in the low level C language, which doesn't provide object oriented features but uses the CPU as efficiently as possible. It makes sense to separate both steps.

From programming to algorithm invention


In some previous blog posts, I've explained why programming is obsolete. Not a certain language like Python or C# is outdated for computer scientists; all available languages have fallen out of fashion. Programming is too easy to teach in a computer course. It was replaced by something with a higher priority which needs an academic background: algorithm invention. An algorithm is an abstract description of how to solve a problem. A typical example of a search algorithm would be: first sort the data in ascending order, then go through each item to look for the term. This search algorithm can be implemented in any programming language, for example in Ruby, Fortran or Javascript, and the implementations will look very similar. Another point is that the task of converting an algorithm into source code is not very hard.
In the easiest case, a certain algorithm is expressed in the Python language in around 20 lines of code. The task of programming, which means entering the program lines into the computer and checking for potential syntax errors, is outside the scope of computer science. In most academic papers, it's not the source code that is shown; the algorithm is described.
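As a hedged illustration, the search algorithm described above fits into a few lines of Python (the function name and the sample data are made up for the example):

def search(data, term):
    data = sorted(data)            # step 1: sort in ascending order
    for index, item in enumerate(data):
        if item == term:
            return index           # found: position in the sorted list
        if item > term:
            break                  # sorted order: term cannot appear later
    return -1                      # term is not in the data

print(search([7, 2, 9, 4], 4))     # prints 1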
The hypothesis is that computer science at a university is about algorithms, not about programming itself. If somebody tries to implement a search algorithm in C++ rather than Python and gets a syntax error, nobody cares. The question of how to realize an algorithm in a concrete programming language is not important to computer scientists; what they are interested in is discussing different kinds of algorithms for searching a data structure.

Is the C++ language dead?


At first glance, the idea behind C++ makes sense. C++ combines object oriented programming with a compiled language. Both features are perceived as powerful techniques in modern software engineering, and C++ is the most important language with this aim. But if C++ is so amazing, why were so many alternatives to it developed?
In the Open Source / Linux domain there is a widespread concern that object oriented programming doesn't provide a useful addition to structured programming but makes things more complicated. The complete Linux kernel, for example, was written in plain C, and so were most libraries like GTK+. In the case of GTK+, a plugin called GObject was used to provide simple OOP features, but it's not a C++ library. The same mistrust of C++ is visible in the Windows world: Microsoft has developed the C# language and the dot-net framework, which stand in direct contrast to C++. Programming in C# is very similar to programming in Java, meaning the language is not compiled into machine code but executed at runtime by a virtual machine. Last but not least, the famous Python language, which also supports object oriented programming, doesn't provide a compiler either; it's an interpreted language.
To analyze the situation in detail, it's important to state some facts. It's widely accepted that the low level C language is compiled. Another fact is that most object oriented languages like Java, C#, Ruby and Python are interpreted or run in a virtual machine. What is a potential explanation for this gap?
Converting a structured C-like program into machine language is not very complicated. A normal program consists of variables and program text; a C compiler converts the high level C source code into assembly statements, and these are executed on behalf of the operating system. An object oriented programming style, on the other hand, asks for a different kind of converter. The first object oriented languages, Simula and Smalltalk, relied on runtime support; Smalltalk in particular was realized as an interpreted language. The reason is that objects are not stored in assembly syntax but in a data structure. Let me give an example:
Suppose that in a Pacman game the object “ghost” is created in the program. A ghost contains variables for storing its x/y position, and it contains methods for moving the object to another position. The code for moving the object is stored only once in physical memory, but all instances have access to it. The translation from sending a message to the object on a high level to executing low level instructions is done by the interpreter or virtual machine. A second reason why most object oriented languages are implemented with an interpreter is that it allows interactive editing and testing of the source code.
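A short Python sketch of the idea (the names are illustrative): every instance stores only its own data, while the method code exists once and is shared.

class Ghost:
    def __init__(self, x, y):
        self.x = x             # per-instance data
        self.y = y

    def move(self, dx, dy):    # method code, stored once for the class
        self.x += dx
        self.y += dy

blinky = Ghost(1, 1)
pinky = Ghost(8, 3)
blinky.move(1, 0)              # the interpreter dispatches the call
print(blinky.x, blinky.y)      # 2 1
print(blinky.move.__func__ is pinky.move.__func__)  # True: shared code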
The question which remains open is why C++ doesn't work this way, but instead combines a compiled language with object oriented features. I'm not the first to ask this question: Microsoft struggled with it, and the Linux community as well, which is why both developed their own alternatives to C++. The disadvantage is that these alternatives are slow (in the case of C#) or complicated to program (in the case of GTK+ and GObject). That means the original idea of C++ is not perfect, but the potential alternatives are also criticized as the wrong way.
To investigate the situation in detail, we have to take a step back and first describe which kind of technology is perceived as stable. The combination of a structured language like C plus a compiler is a best practice in modern software engineering. There is no need to interpret a C program, because this would reduce its execution speed. All the important libraries of an operating system are written in pure C, and this holds across operating systems. The open question is whether the same compiler technique can be utilized for object oriented languages. It seems the problem has to do with the pros and cons of object oriented programming itself.
OOP is a relatively new development. It is something which is not available in assembly language, and it isn't there in classical languages like Fortran or C. It's hard to define what object oriented programming is. At first glance it's a language feature built into the syntax, but at the same time it's also a software engineering technique, made famous by the UML notation. Especially in the Python environment, OOP is used as a prototyping technique for developing software from scratch.
From a critical perspective, the question is whether OOP is needed in classical programming. Suppose the look and feel of an application is already known and the algorithm is fixed. Then the advantage of OOP is relatively low. In most such cases, the more efficient way of creating source code is not to create classes but to utilize low level techniques like pointers and linked lists.
I think it's important to make clear which best practices are accepted widely and which are not. What all programmers do is compile a structured language like C, and use object oriented design for prototyping new applications; the latter is done with interpreted prototyping languages like Python. The open question is how both parts can be combined. One option is to use C++; another idea is to avoid any OOP feature in executable code; a third is to introduce a Java-like bytecode concept.
From a technical point of view, it's interesting to ask how an existing object oriented design in UML notation can be converted into a non-OOP language like C. What are the steps to convert the Pacman ghosts and the other objects of a game into normal structured code which gets compiled by a C compiler?
The answer can be found in a concrete game project which utilizes C but not C++: https://stackoverflow.com/questions/43127769/creating-a-game-board-using-struct In the example game, the current game state is stored in a struct, and the struct is given as a parameter to a function. It's easy to imagine how a Pacman clone would be realized in plain C: the first step is to create an array of structs which holds the ghosts, and then an item of the array is passed to a move routine which adjusts its position.
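A rough sketch of that structure, written here in Python for readability (a dict stands in for the C struct, plain functions for the routines; all names are made up):

ghosts = [               # the "array of structs": plain data, no methods
    {"x": 1, "y": 1},
    {"x": 8, "y": 3},
]

def move_ghost(ghost, dx, dy):
    # the routine receives a reference to one item of the array,
    # analogous to handing a struct pointer to a C function
    ghost["x"] += dx
    ghost["y"] += dy

move_ghost(ghosts[0], 1, 0)
print(ghosts[0])         # {'x': 2, 'y': 1}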
What is different from object oriented programming is that the source code contains no explicit classes; the machine model is used as the representation. The program gets access to addresses in memory and can call subroutines. The prediction is that this programming style results in faster program execution, because it comes closer to the inner workings of a computer. Object oriented languages like C# or C++ provide an additional layer, not available in assembly language, which makes the code slower.
Conclusion
Object oriented programming is first and foremost a software engineering technique which allows humans to develop a prototype. Implementing OOP features in a language or compiler makes the system slower. From the computer's point of view, the programmer should avoid object oriented programming and write the source code in normal C.