July 29, 2021
New stories about the revolutionary year 1992 in computer history
July 26, 2021
A small critique of Servo magazine
Servo magazine https://www.servomagazine.com/ is the only remaining printed robot magazine. Possible alternatives like Robot Magazine from Maplegate were discontinued because of a shrinking readership. In contrast, Servo has been up and running since 2003. Each issue has around 90 pages, and the articles are illustrated with high quality pictures. The content is mostly hardware-oriented tips on how to build physical toy robots, plus some articles about the history of robotics, which includes academic projects like Shakey the robot.
So what is wrong with Servo magazine? The problem is that even though the content is up to date, the magazine looks outdated. It has much in common with a home computer magazine from the late 1980s. Not because the articles are badly written or because advanced topics are missing, but because the principle of a magazine itself is outdated. The better idea would be to extract each article and publish it in a separate place. Instead of publishing one issue with 10 articles, it would be better to publish 10 individual articles of 9 pages each. There is no need to aggregate different articles from the wide field of robotics into a single magazine.
July 22, 2021
The inner workings of social networks
Social networks like Reddit and Facebook are a relatively new phenomenon on the internet. They are marketed as the start page of the internet, and the hope is that they provide an environment for advertisement and user profiling. Social networks operate with a certain linguistic pattern which is analyzed next. In short, the pattern works by focusing not on a topic but on talking to people. Here is an example.
A newbie user of a social network posts a question about a random topic into a group. For example, he would like to know which Linux distribution works best. A possible original post could read as follows.
Hello social network Linux group,
I'd like to know whether Ubuntu or Arch Linux is the better operating system. I'm asking because both systems have advantages, and this makes it hard to pick one of them. Thank you for reading my post.
—
So far, the initial posting is nothing special. A user has a concrete problem (deciding on a Linux distribution) and posts it online so that a larger audience can read it. In a normal discussion forum, which is not a social network but a classical website, a possible answer would read as follows:
Dear Newbie,
Thanks for joining our Linux group. I prefer the Arch Linux distribution because updates are delivered more frequently, so you always have the latest software version. Bye, and have fun on the PC.
—
A possible alternative answer would be the following:
Arch Linux is very complicated to use on a production system. The user has to install software updates all the time. In contrast, Ubuntu runs robustly and is recommended for desktop systems.
—
From a content perspective, the two answers are different: they recommend opposite Linux distributions. But from a language perspective they have much in common. Both answer the question asked in the online forum, and answers like these could plausibly be given in a real-world conversation as well.
Now let us compare the situation with a social network. Suppose the same original post is submitted to such a networking site; the typical answer would read as follows:
—
In general, we do not discuss Linux distributions in such a way. Perhaps you can answer the question yourself.
—
Another typical social network answer would read as follows:
Why are you asking? Are you familiar with Ubuntu already?
—
So let us try to describe both answers from a linguistic perspective. What they have in common is that the topic is ignored; instead, the person who wrote the original post is put under investigation. What the initial user was trying to achieve was a discussion of the pros and cons of Linux distributions. What the social network is trying to do is talk with the person who asked the question.
Whether this linguistic pattern makes sense or not is open to debate. What can be observed is that social networks produce such a pattern, while classical online forums stay on the topic. So what potential users can do is anticipate such a language pattern.
Analyzing the computer year 1992 in detail
In previous blog posts it was already mentioned that the year 1992 amounted to a breakthrough. At that time one period ended and a new one began. Let us take a look at a TV episode from that time about a computer trade show.
At first glance, the format looks familiar. The episode of Computer Chronicles looks similar to what was known from the 1980s episodes. But at the same time it shows something new. Around the year 1992 the internet started to become a mainstream phenomenon, and the computer became attractive to a larger number of people. In contrast to the 1980s, the average user was not only interested in the subject but bought a PC of his own and actually did something with it. We can say that the year 1992 was some sort of in-between: it was different from the decades before, and things kept improving in the years after 1992.
Let us listen to the episode itself. At first the moderator asks his guest what is new and exciting at this Comdex fair. The answer was that lots of multimedia technology is available. The second point mentioned was that mobility and notebook computers have become important. And the guest was right. In the year 1992 the computer became a central hub for entertainment. In contrast to the situation in the 1980s, the price was much lower and the hardware had more capacity.
What was shown at Comdex as an example of multimedia was the Video for Windows software in combination with CD-ROM drives. The interesting thing is that at the same Comdex show, alternatives to the Windows operating system were presented by Apple, and this technology worked the same way. So we can say that not just a single technology was available; all the products from the year 1992 were fulfilling the requirements.
It should be mentioned that a lot of technology became obsolete in 1992. For sure, the Commodore 64 was outdated, and so was the 16-bit Amiga 500. Also, the text terminals used with mainframe computers were outdated.
___Longer periods___
It is possible to split the computer history into two periods:
1. from 1962-1992 (30 years)
2. from 1992-2022 (30 years)
The year 1992 is the border between the two periods. The first period can be described as old school computing: computers were available for the first time, but they were slow, expensive and reserved for a small number of engineers and scientists. In contrast, the second period introduced the computer as a mass product which became cheap, powerful and suitable for all sorts of tasks. The interesting point is that since 1992 the products from the first period have gone out of fashion. Nobody was interested anymore in 8-bit home computers or gaming systems equipped with 1 KB of RAM. It also became unpopular to pay millions of US$ for tape drives that could store only a few megabytes.
Today it is the year 2021, and since 1992 not much has changed. The second period is still running, and improvements take place only on a minor scale. All the technology available in 1992 is available today as well. The main change is that the former 32-bit CPUs were replaced by 64-bit CPUs and the software improved in minor ways. But the general ideas were already there in 1992: major milestones like Windows operating systems, Unix as a server system, 3D graphics for video gaming and mobile computer hardware were all available in 1992.
If 30 years is the typical duration of a longer period in computing, the question is what will happen in the third period, which runs from 2022-2052. The answer is not very complicated. The AI revolution is on the horizon, and it will change future computing drastically. There is a similarity to previous periods: in general, AI and robots are already available, developed by large companies. The difference is that this technology isn't available as a consumer product and the capabilities are low. Perhaps this will change within the next 30 years.
___References___
[1] The Computer Chronicles - Comdex Fall 1992 (1992) https://www.youtube.com/watch?v=evMilwVBHAQ
July 21, 2021
Random movements in a potential field
July 20, 2021
The year 1992 was an amazing milestone in computer technology
At first glance the year 1992 looks similar to all the other years in computer history. But something was new. It seems that with this year a major milestone was reached and that 1992 was the start of the internet period.
To understand the breakthrough in detail we have to analyze the computer technology before this date. Computers were available on different levels: gaming consoles (NES), home computers (Amiga 500), office PCs (Intel AT), workstations (Sun), mainframes (IBM). The interesting situation was that none of these computers fulfilled the requirements. Basically, the PC before 1992 was not powerful enough, and more powerful workstations like the NeXT computer were too expensive. For these reasons, the computer was a niche product and was used seldom. The best sign was perhaps the low quality monitors. Monitors before 1992 were typically black and white, and it wasn't possible to use them for reading longer texts. Even advanced mainframe computers from this period weren't able to display simple graphics, and sound cards were not yet widespread.
So what sort of applications were possible before the year 1992? Basically none. The advanced media of that time were the printed book, the printed magazine, radio, television and the music CD. Computers played only a minor role.
Around the year 1992 the situation changed drastically. Overnight, a lot of progress was made. The hardware became cheap, the performance of the computer was better, and it was possible to actually do something with a computer, for example typing a text. From the perspective of Moore's law, the year 1992 wasn't very different from other years; similar to the years before, the expected improvement in computer hardware materialized. The new thing was that the provided technology was able to fulfill the expectations of the users. Even though the PC was available before 1992 and computer networks were already used in the 1980s, after 1992 everything became new and shiny.
July 18, 2021
The internet was born in 1992
It is sometimes hard to determine the date at which a new technology was invented. In the case of the printing press it took centuries until it was used widely. Surprisingly, the birth of the internet can be localized in time very precisely. It was a development which was realized overnight and all around the globe at the same time.
It is not clear why exactly the internet was developed in parallel everywhere, but according to computer history there was a point at which the new technology suddenly became available. To understand the situation we have to describe two periods: the time before the year 1992 and the time after it. Before 1992, computers were available, but they were slow, not standardized and, very importantly, they lacked networks.
A typical example from 1990 was the Amiga 500 home computer, which was used mainly for playing games. The Amiga 500 lacked everything: it had no built-in hard drive, the resolution of the monitor was low and the processor speed was only 7 MHz. The interesting fact is that compared to more professional devices like a VAX minicomputer sold for millions of US$, the Amiga 500 was advanced.
Let us take a closer look at so-called professional computing before the year 1992. Most universities were equipped with a mainframe computer. The system was used for batch processing and it was connected with other mainframes over a 9.6 kbit/s telephone connection. Such a system wasn't used for practical applications because the disk storage capacity was low, the bandwidth was too slow and the price of such technology was too high. That means, even though the university was equipped with the most advanced computer hardware, nobody was using it.
At least, this is the description of the period until 1992. The interesting thing is that the development toward more powerful computer technology happened within a small time frame. At the beginning of the 1990s, the former home computers were replaced by standardized PCs. A PC in 1992/1993 was cheaper and more powerful than the computers before. At the same time, the professional mainframes were replaced by local area networks running workstations. These workstations were connected over fast Ethernet cables and were able to display high resolution graphics. It should also be mentioned that around the same time, the price for transmitting a gigabyte of information over the telephone line became cheaper.
As mentioned before, this development didn't take place over decades; it was introduced very fast. That means that in December 1991 computer technology was still very slow, and only six months later the internet was available.
From a longer perspective the situation was more complex. It took many years until the PC industry was born. During the 1980s, PC hardware and software were developed, and the development of workstations and fast internet routers also took a long time. But all the technology came together at one point in time, which was 1992. After that, the restrictions were gone: the industry was able to deliver cheap and fast computers which could be used for practical applications. From a hardware perspective, a typical PC in 1992 was equipped with a 386SX processor. This allowed the user to run basic applications like simple games, spreadsheets and even databases. In addition, such a PC worked well in a local area network and, very importantly, the technology had become affordable.
The interesting thing is that before 1992 the computer industry was in poor condition. Sure, some hardware was available, for example home computers, and software was written in assembly and C. But it was not possible to use this technology in a meaningful way, for example to write a letter or to send an email. At that time, computer technology was something which existed only in the imagination. What was shown in science fiction movies was not available in reality.
Localizing the birth of the internet more precisely
The internet is a complex global computer system which was invented by many engineers. All the developments were united in a single year and on a single computer platform. It was exactly the year 1992 and the Intel 386SX PC which made the internet possible.
Somebody may argue that the internet was already available in the 1980s and that other hardware allows connecting to the internet as well. But a closer look will show that only the combination of the year 1992 and the 386SX CPU triggered acceptance by a larger audience. It was some sort of sweet spot in which the technology was cheap enough and powerful enough at the same time, so that the public went online.
Let us investigate some alternative hypotheses for how the internet was born. Was it possible to use a home computer like the Commodore 64 to get access to the internet? No, it wasn't, because the home computers of the 1980s, which include 16-bit machines like the Amiga 500 as well, were not equipped with a hard drive and weren't capable of running even simple text editors or telnet programs. Another interesting question is whether mainframes from the 1980s were suitable for internet access. Technically it was possible to connect a supercomputer like the CM-2 to the internet. The problem was that this machine was very expensive; it was out of reach for normal users. And the idea behind the internet is that not 10 or 100 users get access to online databases and e-mail but thousands and even millions. So the supercomputers of the 1980s were also not suitable for providing internet access.
Only the combination of a certain hardware performance, a cheap price and the availability of a network results in the well-known internet. The year 1992 was the first year in history which combined all these elements. It can be interpreted as the year of birth of a completely new technology unknown before. The interesting fact is that in the years after 1992 only a small amount of improvement followed. Modern, more advanced computers have improved the situation only in minor ways. That means the difference between a text-only web browser in 1992 and a text-only web browser in 2021 is small. A normal 386SX PC from that era, equipped with the lynx software plus an email program, comes close to a more recent user's experience.
From a more pessimistic standpoint, the technology hasn't evolved that much since 1992. A simple ISA network card from the past was able to provide 10 Mbit connectivity, and this speed serves today's internet quite well. What is known as the internet has been largely unchanged for 30 years. It was improved only in minor parts: for example, the Java language was invented, the H.264 video codec became available and wireless technology was added.
If the aim is to build some sort of prototype hardware which is able to act as a server and a client as well, a standard desktop PC from the year 1992 works very well for this purpose. Even by today's requirements such a machine is capable of providing the technology known as the internet.
Let us summarize the specifications a bit: a 386SX PC has a 32-bit processor which is faster than a 16-bit processor. Such hardware was sold for a low price, so that a large number of users could use the technology. In addition it can be equipped with an ISA network card, a standardized way of providing TCP/IP connectivity.
The interesting fact is that before 1992 such computer technology wasn't available. It wasn't invented yet, and therefore the internet was not available. What computer users did before 1992 was use computers in a more specialized way. For example, an 8-bit home computer could be utilized to learn the BASIC programming language, while a PDP-11-like minicomputer could be utilized to create a UNIX operating system. What we know as the internet is the combination of all these elements into a single computer, sold for a low price.
[1] RetroTech Chris: Getting Started: Wireless Web Browsing in MS-DOS, 04.06.2020, youtube, https://www.youtube.com/watch?v=yH57bt-lU1Q
July 15, 2021
Some tools for developing Artificial Intelligence
Classical non-AI computer projects are created with tools. Well-known software packages for creating new software are operating systems like Unix, programming languages like C++ and editors like Emacs. These tools have become famous because they provide a framework for learning, teaching and improving computer science.
The problem in AI and robotics is that the mentioned tools are of little help. Sure, it makes sense to program a robot in C++, but any other programming language would work too. If the existing software tools no longer provide guidance, what else can be used to develop a robot?
AI and robotics work quite differently from normal software programming. Instead of solving a problem, the idea is to create one. So-called robot challenges are the preferred choice for a framework. A robot competition is an educational problem description, for example a maze game, and the participants have to build and program a robot which can solve this task.
The interesting point is that most robot challenges are different and have evolved over the years. A certain challenge asks for a certain hardware and software combination; for example, in the micromouse challenge most of the robots look alike. The main reason why these competitions have become famous is that they provide an easy-to-solve task. Entry-level robot challenges ask for a robot which can follow a line; programming such a robot can be realized in under 10 lines of code, as sketched below. More complicated challenges demand a complete robot control system which consists of sensor perception, motion planning and GUI output.
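To illustrate how small such a program can be, here is a minimal line-follower sketch in Python. The robot object with its read_line_sensor() and set_motors() methods is a hypothetical API used only for illustration, not a real library.
follow_line.py
#########
# Minimal line-follower sketch. The robot API (read_line_sensor, set_motors)
# is hypothetical; a real robot kit would provide its own equivalents.
def follow_line(robot):
    base, gain = 0.5, 0.4                 # forward speed and steering gain
    while True:
        error = robot.read_line_sensor()  # -1.0 = line far left, +1.0 = far right
        robot.set_motors(base + gain * error, base - gain * error)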
What is object oriented programming?
July 12, 2021
Will Microsoft take over Debian?
July 11, 2021
Dictionaries in C++ and Python compared
The C++ version is faster, of course.
map.py
#########
d = {
    "A": {"one": 0, "two": 0, "three": 0},
    "B": {"one": 0, "two": 0, "three": 0},
}
for i in range(20000000):
    d["A"]["two"] = 5
    a = d["A"]["two"]
    # print(a)
map.cpp
###########
#include <iostream>
#include <map>
#include <string>
#include <string_view>
// g++ -O3 -std=c++2a map.cpp
int main()
{
    std::map<std::string, std::map<std::string, int>> d;
    d["A"]["one"] = 0;
    d["A"]["two"] = 0;
    d["A"]["three"] = 0;
    d["B"]["one"] = 0;
    d["B"]["two"] = 0;
    d["B"]["three"] = 0;
    for (int i = 0; i < 20000000; i++) {
        d["A"]["two"] = 5;
        int a = d["A"]["two"];
        // std::cout << a << "\n";
    }
}
July 10, 2021
Simpler programming with Groovy and Python
The programming community consists of two opposing groups. The first one is the old school programmers who learned programming in the 1980s. This group is familiar with assembly language and C, and sometimes the C++ language is used for creating modern desktop applications. This approach to writing software can be called a professional one because it guarantees maximum performance and is used to create large-scale production programs.
On the other hand, there are programmers who do not call themselves developers because they have never learned to write code in C or assembly language. The difference between the two groups can be made visible by their different understanding of a pointer. Only the "real" programmers can explain what a pointer is, and they use them all the time. Pointers are used for creating faster games and for handling lots of data in a program.
The interesting thing is that it is not possible to learn C/C++ or assembly without understanding pointers; they are a fundamental part of these languages. To write a program anyway, a different sort of programming language is used. Typical examples of such languages are Groovy, Python, Matlab, JavaScript and AutoIt. What these languages have in common is that the user has to enter only a small amount of code, and the written code reads more easily.
A typical example is to compare a Swing GUI written in Java with a GTK+ GUI written in C. The C code needs roughly four times as many lines, and of course the pointer operator * is used everywhere. In contrast, Groovy and especially Python are much easier to read; see the small sketch below.
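As a rough illustration of how little code a scripting language needs for a minimal GUI, here is a sketch with Python's built-in tkinter module. It is only meant as a size comparison, not as a Groovy or GTK+ equivalent.
hello_gui.py
#########
# Minimal GUI window in a scripting language: no pointers, no manual memory
# management, and only a handful of lines.
import tkinter as tk

root = tk.Tk()
root.title("Hello")
tk.Label(root, text="Hello world").pack(padx=20, pady=20)
root.mainloop()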
There is a reason why the scripting languages haven't replaced the "real" programming languages: a scripting language uses a lot of overhead to reach the same goal. They are not minimalist languages like assembly; before Groovy can be executed, lots of programs have to be installed first. And all of the underlying programs, like operating systems and virtual machines, are written not in Groovy but in lower-level languages like C, C++ and Java.
July 08, 2021
Describing LaTeX from the workflow perspective
Instead of explaining what the LaTeX software itself is, it should be mentioned that without third-party programs the user can't create anything. Before the latex command line tool can be started, the text has to be created in a text editor. There is not a single but a whole range of external text editors available, for example Emacs, Vim and TeXnicCenter.
How to avoid pointers in programming
July 05, 2021
How to learn C/C++ as fast as possible
It is sometimes said that C and especially C++ are complicated languages which are hard to learn. The reason is that C++ allows writing low-level code which includes pointers. In addition, the number of possible C++ constructs is high, and the chance is high that the newbie won't understand anything.
So the logical choice is to avoid the C++ language entirely and look for a programming language which works better. Potential candidates are Python (very beginner friendly) and Java (works great for writing production code).
Sure, the Python language works great for creating a prototype, but it falls short for creating a production application. So the question is which language would combine the strengths of Python with fast runtime performance.
The main problem with potential alternatives to C/C++ is that all these languages are seen as a second choice. Sure, it is possible to write code in Free Pascal, C#, Swift or Ruby. But the default compiler on all operating systems remains C/C++, and most programs in the world are realized in C/C++. It seems that from a technical perspective the language is great and the bottleneck is the programmer who doesn't understand it.
The interesting thing about C/C++ is that the perceived complexity depends greatly on the manual. This is in contrast to the Python universe, in which the language is described the same way in most manuals. There are two sorts of C/C++ manuals available: older ones which focus on the language itself and are some sort of reference book, and more recent tutorials which describe how to use C/C++ for writing games and apps. The first sort of manual has created the perception that C/C++ is hard to understand.
A prominent example of a recent handbook is “SFML Game Development By Example”. The interesting thing is that this book explains more than only the language itself, and at the same time it is easy to read. The difficulty is no higher than that of a Python book about the pygame environment. The newbie learns how to write a small game in under 500 lines of code using a graphics library and some self-defined classes.
The interesting point is that according to this (and many similar books) C++ can be used like some sort of Python-like programming language, but it runs much faster.
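For comparison, this is roughly what the entry-level skeleton looks like on the Python/pygame side. It is a generic sketch, not code from the mentioned book, and it assumes pygame is installed.
skeleton.py
#########
# Minimal pygame skeleton: open a window and run the event loop until closed.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((30, 30, 30))   # clear the frame
    pygame.display.flip()       # show the frame
    clock.tick(60)              # cap at 60 frames per second
pygame.quit()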
Let me summarize the situation a bit. The first thing to mention is that C/C++ remains technically the strongest programming language, and secondly, it depends largely on the tutorial whether the newbie is able to master the language.
Understanding the role of LaTeX
Most introductions to LaTeX try to describe what the software itself is about. But this doesn't answer the question of which role LaTeX plays in a more general workflow. The interesting point is that most authors do not create their text with LaTeX itself; they use external software like Emacs, TeXnicCenter and Vim. But if LaTeX is so great, why is it not used for writing the text?
To understand this mismatch we have to take a closer look at how the programs interact with each other. What LaTeX needs as input is a highly structured longer text, for example a 100 KB Markdown file, a 50 KB .tex file or a 300 KB ASCII text which consists of sections and subsections. In the next step, LaTeX renders this file into a PDF file, working as a batch tool similar to the ps2ascii utility.
In contrast to MS Word, the user never creates anything in LaTeX itself; he creates the text file for LaTeX. The text is written in additional software like TeXnicCenter. And this explains the idea behind LaTeX: the user prepares the text in an outliner program, and then it gets rendered into the PDF format.
Basically, creating something with LaTeX is not about the formatting itself; the more interesting point is which sort of functionality is needed outside the LaTeX software. This functionality is provided by so-called outliner programs. That is a sort of software which makes it easy to create a longer text in a structured way. Structured means that the text consists of hierarchical sections, tables and references.
To understand how the process works, let us imagine a world in which LaTeX is not available. The idea is that the user creates the text in Emacs org mode, then exports it into the Markdown format, and then renders the Markdown text file into the PDF format with an external program which is not LaTeX but a different tool. The user will spend most of the time creating the text in Emacs. That means the org mode environment is the software in which the text is created. Similar to other outliner programs like cherrytree, the user won't miss a page-oriented layout; outlining means that a draft of the text is created.
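A minimal sketch of the rendering step in such a LaTeX-free workflow, assuming pandoc and a non-LaTeX PDF engine such as weasyprint are installed; the file name draft.md is just a placeholder for whatever the outliner exports.
render.py
#########
# Render the exported Markdown draft into a PDF without LaTeX in the loop.
# Assumes pandoc and weasyprint are installed; draft.md is a placeholder name.
import subprocess

subprocess.run(
    ["pandoc", "draft.md", "--pdf-engine=weasyprint", "-o", "draft.pdf"],
    check=True,
)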
There is a reason why, apart from the famous MS Word software, many external outliner programs like Citavi, Scrivener or OmniOutliner are available under Windows and macOS. These programs call themselves authoring tools. The idea is that in the first phase of writing a document only the content is important, and the layout gets ignored. For example, the cherrytree outliner isn't able to format a simple two-column layout or hyphenate a paragraph. That means cherrytree is a poor choice for formatting text on the screen, and it was never created for this purpose.
Let us describe in general what outliner software is about. These programs have become famous for their pane on the left side of the screen which shows the document structure. The user can navigate the document tree and shuffle the sections. Many or even all features of desktop publishing software are missing in an outliner program. Cherrytree doesn't even know what kerning is or how to embed a Type 1 font. That means all the important features used for producing a book are not there. How can it be that such software is used in practice if it has so few features? Because cherrytree and other outliner programs are focused on the text itself. There is a distinction between creating the raw text and formatting the text into a PDF document.
From an author's perspective, a book or journal is not created with desktop publishing software but with a normal text editor, i.e. an outliner. What an author does is not create a PDF file; the author creates a 300 KB plain ASCII file. The interesting thing is that even in the world of MS Word and InDesign there is a clear distinction between creating the text itself and rendering the text into a PDF file. For example, the InDesign software has an often-used function to import plain ASCII files. That means somebody has to create the .txt file first, and then it can be imported.
The relationship of LaTeX to outliner software
The LaTeX software is known as a powerful typesetting program which has been used for decades for academic writing. From an outside perspective, the LaTeX community prefers its own program over potential alternatives like FrameMaker or MS Word. But the community struggles to explain the reason why. How can it be that a complex piece of software like LaTeX, which consists of thousands of macros, has found so many advocates?
The reason is not located in the software itself but in the workflow in which LaTeX makes sense. What the average student, researcher or book author does can be summarized in the following workflow:
1. make notes in an outliner
2. aggregate the notes into prose text
3. convert the prose text into a readable PDF paper
This workflow divides the task into two subtasks: creating the text and rendering the text. Both tasks can be realized with different software packages. On the one hand there are programs for creating an outline, such as OmniOutliner, a text editor or the notecase software. On the other hand there is software for converting the text into a PDF document.
Let us imagine how the workflow can be mastered without the famous LaTeX software in the loop. The first step is to create the outline with a program like notecase, which is an open source outliner for Linux. Then the text gets exported into the Markdown format and converted into a PDF paper.
This workflow comes close to what LaTeX users do with their software. They use two different programs for the workflow. A typical combination is the Emacs editor for creating the outline and the pdfTeX backend for creating the PDF file. Another frequently used combination is the TeXnicCenter editor as the editor and the LuaTeX engine for creating the PDF file.
The interesting thing is that outliner software doesn't need a preview of how the text will look in the PDF format. During the outline subtask, the user doesn't care about the size of the font or whether the output is rendered in one column or two columns. This missing capability of outliner software is described by the LaTeX community as an advantage, and they are right. The outline of a text which consists of sections, subsections, in-text references and bibliographic references has enough complexity. There is no need to render the information as it will be printed later.
Even if somebody doesn't prefer the LaTeX software, the chance is high that he will use a divided workflow which consists of creating the text and formatting the text. Or let me explain the situation from a different perspective: classical WYSIWYG software like MS Word, InDesign or Scribus is a poor choice if the idea is to use it as an outliner tool. In theory it is possible, but this is not the purpose of these programs.
Perhaps we have to go a step back and ask what desktop publishing is in general. Does it have something to do with printing out a text or rendering the text in the PDF format? No, the core task is to create the text in an outliner program. The interesting thing is that the LaTeX program wasn't designed as an outliner; LaTeX assumes that the outline is provided as input. That means the user needs the structured text first, and then he can run the latex program.
How to solve AI problems in contrast to programming problems
Computer programming works with a certain principle in mind. This general principle can be adapted to solving a new, previously unknown problem. Suppose someone would like to create a hello world GUI application from scratch. To do so, he has to write down the source code; then the code is compiled into binary code, and the binary is executed.
Creating larger projects is possible by repeating these steps and introducing object-oriented features. The result is that any sort of application can be created within this paradigm. This principle has become so successful that many programmers are convinced that any topic within computer science can be handled within this framework. But it cannot. Robotics and other AI-related problems do not follow this principle. If someone tries to create a robot controller the same way as a hello world C++ application, he will fail.
A more demanding problem is that it is not possible to repeat robotics projects from the past. Even if programmer A has successfully implemented a maze-solving robot, programmer B can't create the same robot. This is surprising because the source code is distributed under an open source license, and it should be pretty easy to reverse engineer the former project.
The main problem in AI is that the underlying principle is not known. There is no programming language, framework or workflow which reliably results in working robots. For this reason, engineers have struggled for decades to create robots in reality.
The only pattern which remains relatively stable in the robotics domain is that every project has a tendency to fail. That means the programmed robot doesn't work. There is a gap between the requirements for the robot and what the machine does. For example, the robot should avoid obstacles, but in reality it collides with them. The interesting thing is that even after the bug is fixed, the robot won't work as expected but will collide with the obstacle a second time.
Instead of trying to program autonomous robots, the more elegant approach is to postpone the problem to a later point. What a programmer can do instead is create teleoperated robots. The interesting feature of this approach is that the robot will work as expected. Creating a teleoperation control follows the common principle of computer programming: after writing the code and improving it, the robot will work as expected.
To analyze why teleoperated robots work so well, we have to define the social role of a robot. There are two sorts of robots: autonomous ones and teleoperated ones. Teleoperation means seeing the robot as a medium, similar to a mouse cursor on the screen. The underlying principle is called shared control. Shared control means that the human operator remains in the loop, and this prevents the robot from failing.
With this small modification the resulting project will look different from standard robotics. Instead of programming an autonomous robot, the idea is to program a teleoperation control with a reduced workload. That means progress is measured by how much human intervention is needed to guide the robot to the goal. In the worst case, the operator has to use a joystick and control the robot all the time. In such a case the autonomy is low and the workload for the human operator is high. The task is to reduce the workload a bit.
The interesting point is that it is much easier to solve such a challenge. It is up to the programmer to try out a new remote control scheme and then see whether it reduces the workload or not.
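Here is a minimal sketch of such a shared-control loop in Python, with the fraction of human interventions as the workload metric. The robot and joystick objects are hypothetical placeholders, not a real API.
shared_control.py
#########
# Shared control: the robot proposes an action, the human can override it.
# The fraction of overridden steps serves as a simple workload metric.
def shared_control_run(robot, joystick, steps=1000):
    interventions = 0
    for _ in range(steps):
        action = robot.suggest_action()    # autonomous proposal
        if joystick.is_active():           # operator takes over when needed
            action = joystick.read_command()
            interventions += 1
        robot.execute(action)
    return interventions / steps           # lower value = less workload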
July 02, 2021
Some applause for the LaTeX software
Even though chances are that most users are aware of the LaTeX software already, it makes sense to emphasize its advantages again and again. The first thing to mention is that most desktop publishing software from the past is gone. Programs like Ventura Publisher, Word for MS-DOS, FrameMaker and even QuarkXPress are largely forgotten; they were replaced by other tools. Only LaTeX has proven its stability over decades.
Perhaps the most common question is why somebody should learn LaTeX if he can typeset the document much more easily with MS Word. The greatest advantage is that, similar to a command line tool like txt2ps under Linux, the LaTeX program can easily handle larger files. It is not difficult to create a PDF file which has 3000 pages, contains lots of graphics and archives all the issues of a journal. This capability is missing from the possible alternatives to TeX.
Instead of describing how LaTeX was used 20 years ago, it makes sense to give a short outlook on the modern approach to creating a print-ready document. The first step is to create the document in outliner software like Emacs or LyX. This step takes time and has nothing to do with desktop publishing or typesetting. When the text is ready, the fun part starts: the text is converted into the LaTeX format and then rendered into the PDF format. With the help of a template this step takes only a little time. Even though the amount of LaTeX documentation is high and the possible options are endless, the average user will spend only a little time with the typesetting software. It is basically a rendering engine which takes a raw text and produces a PDF file. This PDF file can be printed, uploaded to the internet or copied onto a CD-ROM.
Somebody may argue that the ability to create a PDF file with 3000 pages is not a big thing and that other programs like InDesign or Word can do the same. No, they can't. Contrary to a common myth, text processing is a computationally demanding task; a single JPEG image at 300 dpi can be many megabytes in size. It makes sense to avoid WYSIWYG software and let batch-oriented typesetting software handle the document creation process.
___What comes after LaTeX?___
The main advantage of LaTeX over alternative desktop publishing software like Word and InDesign is that the author can focus on the content and ignore the layout. The problem of arranging the content in columns and placing the images in the correct position is postponed to a later step and is handled by the LaTeX software during a batch processing run. This reduces the author's obligation to creating the text.
The interesting thing is that LaTeX itself is not the most advanced software for focusing on the text. What authors use in reality instead of LaTeX are outlining tools. These GUI applications usually have a pane on the left side in which the sections of the text can be arranged. Well-known helper tools for the LaTeX software like TeXnicCenter and LyX provide outline capabilities, but many other tools outside the LaTeX ecosystem are available too.
From a technical perspective these programs emphasize the difference between text and layout even further. An outline-only tool like cherrytree doesn't provide any formatting capabilities; it is focused on the text itself. That means the dimensions of the page are missing and most images are shown in a draft mode. In most cases outline tools are used for taking notes, but they can be utilized for writing longer texts as well.
It is important to know that an outline tool is different from desktop publishing. Some outline tools support exporting the file into the PDF format, but the quality is low. Instead, outline tools try to structure the text on a semantic level with the help of sections.
The simple reason why so many authors are fascinated by the LaTeX software is that it handles the layout process automatically. After some decisions have been made about the number of columns, the font and the size of a section heading, the LaTeX software lays out the text on its own. It fulfills the promise of database publishing.
Perhaps the most impressive example of the strength of LaTeX is the LyX software. LyX is some sort of outliner with a PDF export option. LyX asks the author to provide the text, which consists of hierarchically arranged paragraphs, and then LyX can export this document into the PDF format. In contrast to alternative outliner software like cherrytree, the PDF file is typeset with the LaTeX backend, which means a high quality result.
Let us construct an example. Suppose the user provides the LyX file for a 2000-page book. The file consists of the raw text: the paragraphs, the sections, bibliographic references and tables. What LyX can do very well is convert this raw text into a nicely typeset PDF book.
Sometimes the LyX software is described as DTP software similar to InDesign or Word. But it's not. LyX has more in common with outliner software like cherrytree, TeXnicCenter or OmniOutliner on the Mac. Its purpose is to let the author enter notes and paragraphs.
___Outline editor___
Let us take a closer look at the cherrytree software. There is a similarity between cherrytree and LyX. Both programs have an outline pane on the left side, and the user is able to enter text, list items and tables in the main window. The advantage of LyX is that it can generate a good-looking PDF document on demand, while the export options of cherrytree are limited to creating an HTML file.
The similarity between LyX and cherrytree is that in both programs a document consists of a hierarchical structure made up of sections, subsections and full text. It is possible to jump between the sections with hyperlinks, and a table of contents is available by default.
Suppose somebody invents a cherrytree plugin which allows running the external LaTeX engine from the command line. Then the cherrytree software would behave the same as LyX. The interesting point is that cherrytree is widely recognized as an outliner tool, and this is the correct description for this class of programs.