July 24, 2018

Physics is different from computing


At first glance, physics and computer science seem to have much in common. A closer look shows that they deal with completely different aspects of the world, and that the communities around them work differently. Physics can without any doubt be called a scientific discipline. It is taught at universities, it has evolved over the years, and progress is made as a contest between the best ideas. If a physicist has a better idea, he has to prove that it works better than what was known before. In the end, all physicists unite behind one idea, and the opposition is dismissed as conspiracy theory or unscientific.
Computer science works a bit differently. The most remarkable difference is that computer science does not know any kind of opposition. There is no contrast between accepted and discarded theories; instead, computer science works like a puzzle in which all the pieces fit together. In the example of Linux vs. Windows there is obviously some aggressive discussion, but the opposite position is also part of computer science. That means there are no operating systems inside computer science that could be called unscientific. Every theory and every piece of source code is welcome.
This working principle is uncommon for a scientific subject; it is more the working mode of the humanities, for example art, history, and music. Music is perhaps a good example, because every piece of music counts as music. Even twelve-tone music, which sounds very unusual, is accepted as music; there are no works that count as non-music.
In a real science it is very easy to contribute a nonsense argument. For example, it is possible to invent a new kind of physics or a biology that contradicts mainstream biology. Under the label of pseudo-science this happens very often, and such theories are not backed by academic science. The consequence is endless debate about what is true science and what is not. In contrast, in the field of computing it is nearly impossible to contradict the mainstream community. Even if somebody tries to break the existing system and invents something that is complete nonsense, his work is never out of bounds. Even joke programming languages like Brainfuck or Joy are discussed seriously; they are recognized as valid contributions to computer science.
Computer science has sometimes been called an alchemy, which implies that it will perhaps become a science in the future. I would argue that computer science was never invented as a science; it was invented as an art. It will stay on the same level as music or literature, which means it is not possible to decide what is part of computing and what is nonsense.
How can it be that an obscure composition by Arnold Schönberg doesn't violate the science of music? Because music was never invented as a science. Every composer creates his work from scratch; there is no mainstream theory of music against which a piece could be right or wrong. A piece of music has only an author and a publication date. In contrast, a real science like physics can be right or wrong. It is possible to teach the right physics or the wrong one, which isn't accepted by anyone. Identifying the opposite is a fundamental element of the university discussion: researchers constantly ask whether a theory is scientific or not, and the great researchers are very conservative, which means they call nearly everything new unscientific because it breaks the rules. On a higher level of abstraction this is called convergent thinking. Scientific progress depends on it, while art works with divergent thinking, which means being open in every direction.

The 80s are a special decade ...


At first glance, the 1980s look similar to other periods in recent history, for example the 1990s or the 1960s. But something is different: the 1980s were not only a decade, they were the start of a new century. The development up to the 1980s can be called the 20th century. It started around 1900 and ended in 1980; its defining elements were car manufacturing, mass production, airplanes, and the railroad. With the 1980s this century ended and something new started, called information technology. Its core was the microchip: computers, the CD-ROM, television, the VHS video cassette, and so on. The 1980s later evolved into the internet revolution, which includes YouTube, the World Wide Web, Wikipedia, and Google, but it all started in the 1980s with inventions like the Apple II and the IBM PC. For the first time, not industrial products like cars but information products like movies, music, and software became important. Most of the inventions of the 1980s, like the Commodore 64, the Walkman, or Lisp workstations, are no longer relevant today; they have been replaced by something more powerful. But in the 1980s they were totally new. The Commodore 64 didn't replace an earlier home computer, it was the first home computer ever.
Some people believe that the year 2018 feels just like the 80s, and they are right. Everything feels the same; improvements exist only in the details. The reason is that the 1980s didn't just start a new decade, they started the 21st century. By the nominal definition the 21st century begins in the year 2000, but the 1980s look very similar to today, so it makes sense to treat the 1980s as the first decade of the 21st century. Today, in 2018, we are living in the same century: a post-industrial world full of computers, microchips, music, and software. It is unclear how long this century will last, perhaps until the year 2100 or later; nobody knows what the future will bring. What is a mistake is to compare the 1980s with the 1970s. There is a major contrast between them. The 1970s, like the 1960s and the 1950s, are a classical example of the 20th century, which was dominated by industrial production, cars, and electric current. It was a very modern century with electric light, railroads, and refrigerators, but it wasn't the 1980s. The Compact Disc wasn't invented yet, nor was the Walkman, and computers were not yet a consumer product. If young people wanted to see a movie, they went to a drive-in cinema, not to a video rental store. Most television stations broadcast their program in black and white without stereo sound. MTV didn't exist yet, and Tetris was unknown. All of this became possible only in the 1980s.
The 1990s
Somebody may argue that the 1980s weren't that great and that the 1990s produced much better cultural artifacts. Indeed, the 1990s brought Windows 95, better pop music (for example Eurodance), and the newly invented DVD player had better quality than the VHS cassettes of the 1980s. But a more objective look at the details shows that the 1990s were only an improvement on the 1980s. They didn't invent new things, they only modified existing patterns. Home computers got more memory, music became more electronic, and movies contained better special effects. The difference between the 1990s and the 1980s is much smaller than the difference between the 1980s and the 1970s. The 1990s are only the second decade of the 21st century; they can be called an extension of the 80s.
The same is true for the 2000s and the 2010s. In all these cases the technology was improved, but not in a way that any of these decades started a new century different from the information revolution.

Creating a GUI with C++, gtkmm and glade



The first impression for a beginner is that C++ is very complicated if the aim is to create a GUI application. Other languages like C# and Java seem more comfortable. The major problem is not the language itself but the lack of documentation. I searched the internet for good documentation, and it seems surprisingly hard to find. Most tutorials about gtkmm create the buttons manually in the source code, which is outdated. What today's programmers want is a GUI design tool like Glade. But how do you use Glade together with C++? I didn't find good documentation for that problem, but it seems that the C community is better informed. At https://prognotes.net/2015/06/gtk-3-c-program-using-glade-3/ there is a nice tutorial that shows a hello-world example consisting of C source code together with the Glade editor. I modified the code a bit for C++ and here is my hello-world example:

// Glade GUI loaded from a C++ program
// compile: g++ -std=c++14 main.cpp `pkg-config --cflags --libs gtkmm-3.0`
// based on: https://prognotes.net/2015/06/gtk-3-c-program-using-glade-3/

#include <gtk/gtk.h>   // this example uses the GTK C API, which also works from C++

int main(int argc, char *argv[])
{
  gtk_init(&argc, &argv);

  // load the window layout that was drawn in Glade
  GtkBuilder *builder = gtk_builder_new();
  gtk_builder_add_from_file(builder, "gui.glade", NULL);

  // fetch the top-level window by the id given in the Glade XML file
  GtkWidget *window = GTK_WIDGET(gtk_builder_get_object(builder, "mainwindow"));
  gtk_builder_connect_signals(builder, NULL);
  g_object_unref(builder);

  gtk_widget_show(window);
  gtk_main();

  return 0;
}


The basic idea is to use the Glade GUI tool to drag and drop the window layout and save the XML file as “gui.glade”. The C++ program then loads this XML and the window becomes visible on the screen. It seems like an obvious approach, but it is surprisingly hard to find tutorials for such a simple task.
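For completeness, here is a rough sketch of how the same hello-world program might look with the native gtkmm C++ classes instead of the GTK C calls. It is a sketch, not a tested program from this post; it assumes gtkmm-3.0, a Glade file named "gui.glade", a top-level window with the id "mainwindow" (as above), and a made-up application id.

// minimal gtkmm-3.0 sketch (assumption: gui.glade contains a window with id "mainwindow")
// compile: g++ -std=c++14 main.cpp `pkg-config --cflags --libs gtkmm-3.0`

#include <gtkmm.h>

int main(int argc, char *argv[])
{
  auto app = Gtk::Application::create(argc, argv, "org.example.gladedemo");

  // load the Glade XML and look up the top-level window by its id
  auto builder = Gtk::Builder::create_from_file("gui.glade");
  Gtk::Window *window = nullptr;
  builder->get_widget("mainwindow", window);

  // run the main loop until the window is closed
  return app->run(*window);
}

The structure is the same as in the C version: Glade delivers the XML, the builder turns it into widgets, and the application object runs the main loop.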
As I mentioned in the introduction, the problem is not C++, Glade, or gtkmm. These tools are fine: the resulting “a.out” application runs stable and the ecosystem is mature. The problem is a lack of documentation. Most programmers are only familiar with the Windows GUI stack, where C# has replaced C++ almost entirely, while under Linux most developers only write text-based command-line tools and have no need for a nice-looking GUI.
I strongly believe that this will not be the last tutorial about combining Glade with C++. Perhaps in the future an extension becomes possible, for example a more advanced GUI application that is more than a simple hello-world example. But for now, this tutorial is over.

July 20, 2018

Library Privatization works


The first idea in the debate about the future of libraries might be to declare public libraries obsolete, because Google can deliver much more content than outdated conventional libraries. From one perspective this is true, but the argument will not be accepted as a valid contribution by the libraries themselves. So there are major doubts that the idea “libraries are obsolete” will convince many people, especially when we consider that the number of visitors a classical library attracts isn't as bad as it seems.
A better approach to the future of libraries is a bit tricky. It is called privatization, and this time the idea is welcomed by everyone. The libraries themselves see privatization as the next logical step, the public asks for the same thing, and potential competitors of libraries like Google are also fans of the idea. The concept is simple. Instead of cutting the tax money for libraries, instead of closing them and going all digital, we leave everything untouched and change only the legal structure of today's public library. It is no longer part of the government or the church but is converted into a stock-market-listed company. What the libraries do with their new role is up to them. They can ask their customers for money, or they can ask the government for money. It is natural that the government will spend a lot of taxpayer money on the newly listed library company because of its traditionally strong relationship. The new thing is that from now on both are separated: libraries have to compete in a market like a telecom company or like Walmart.
In my opinion there is a market for lending printed books. The problem in the past was that public libraries killed this market, and no other company was able to earn money in it. This has to change. I don't think that Europe or Asia will make the first step in this direction. The mother of capitalism is the United States, and that is the right place for library privatization. If it works there, other countries will follow.
I don't think it is realistic to expect libraries to disappear. Books, DVDs, and a place to read them will be needed for at least the next 50 years; it will take that long until the internet is fully accepted. On the other hand it makes no sense to see a library as part of the government or the church. It is a business like a bakery or a restaurant, and most library employees see themselves as employees and not as something else. A library has customers, receives money, and provides services, so it is logical to formalize this behavior with the correct legal status.
Today's comparison between Google and libraries isn't fair. Google is a stock-market company dealing with electronic information, while the library is not a company and deals with printed books. To make the contest fair, one company has to compete against another company: a library company against a search-engine company. Possible metrics are return on investment, the number of employees, and so on.

Libraries are a commercial book borrowing service

Some outlooks on the future of libraries recognize that the internet is there and that libraries need to find an answer. What they are not aware of is that other market players already exist which have built businesses around the distribution of information. Sometimes the battle is shortened to libraries vs. Google, but in reality the battle is Nasdaq-100 vs. libraries. Let us take a look at some corporations listed on the tech-oriented stock exchange:
  • Activision (computer games)
  • Adobe (PDF format)
  • Alphabet (Google)
  • Amazon.com (books)
  • Apple (iPad)
  • Baidu (search engine)
  • Cisco (routers)
  • Comcast (fiber optic cable)
  • Intel (CPUs)
  • Microsoft (operating system)
  • Netflix (videos)
  • PayPal (payment service)
  • T-Mobile (telephone provider)
What do all these companies have to do with libraries? Very much, because they provide the infrastructure for getting access to printed books, electronic books, films, computer games, and so on. They provide hardware devices like PCs and smartphones, they provide telephone cables, and they write software. It is not only a battle between Google and libraries; the war is much broader. Are traditional libraries able to compete with the Nasdaq-100? Can they provide better information and give better advice to customers than all the companies on the list? The answer is simple: nope. The interesting fact is that the libraries themselves don't understand this. They think libraries have a future because their history goes back more than 500 years. They simply cannot imagine what the internet is.
The interesting point is that libraries will not simply disappear. The demand for information, books, and movies will rise in the future, and so will the demand for personal advice, structure, and a place to talk about it. But this workforce will no longer be active within the classical library; it will be active within one of the Nasdaq-100 companies. I would guess that even in 20 years there will be a market for printed books. But the infrastructure around it won't be handled by libraries but by suppliers like Amazon, PayPal, and Microsoft. These companies are not digital-only under all circumstances; they will also handle printed books, but the business workflow will bypass the library. Apart from the Nasdaq-100 companies there will be no need for extra service providers.
Sometimes the internet is described as a virtual space. In reality it is the combination of the employees and the customers of the Nasdaq-100 companies. Google, for example, has a billion customers who visit the search engine every day, and Intel has lots of engineers developing new microprocessors. But what is the role of libraries? What service do they have to offer? Right, they have nothing to offer and they have no customers. The only thing libraries provide today is a broken story about their future role, which isn't realistic and which mainly protects the libraries against all the people who know better how to provide information, media, entertainment, and education. The libraries see themselves as protection against the evil Google company. That is nonsense. Google never had a monopoly on information distribution; it is only one company on the Nasdaq-100 list and competes with all the others. The libraries are the danger for society, because they absorb too much money and provide no value.
Today's libraries are very similar to religious institutions. They are not listed on the stock exchange but take in a huge amount of money. They own lots of buildings and have a huge workforce, but they are obsolete for society. The only reason libraries exist today is that they were important in the past and spread fear that something evil will happen if they are lost.
Ending the customer relationship with one of the Nasdaq-100 companies is easy. All the customer has to do is send a letter to Apple and explain that from now on he will no longer use Apple iMac devices but HP workstations because they cost less. From then on the relationship between Apple and the customer is over. But how can we end the relationship with the library-industrial complex? Sending a letter to the library saying that we don't need their service doesn't help, because the libraries don't see themselves as a company that offers a service for money; they think they are part of society and open to everyone. From a technical point of view it is not possible to quit the relationship. The library owns the people for life, like the Catholic church.
My prediction is that the libraries will have a future because of this lifelong membership. The customers are not in a position to close their library account, nor can they reduce the amount of tax money they have to spend on the library complex. At first most people don't care, but in 10 years this will become a major problem. Going bankrupt is part of the business for a Nasdaq-100 company. Adobe knows that one day its lifespan will be over, because something better than the PDF format will be invented. But the library has no exit strategy. There is simply no exit button for the library; it was created with eternity in mind. And that is dangerous if the underlying idea, printed books, becomes outdated.
The first important step is to talk about library privatization. That means separating library and church, and libraries and government. Future libraries have to become financially independent from church and government, as large institutions that finance themselves. Then it will become much easier to talk about adjustments and shrinking.
As far as I know, no country worldwide has made this step. In all nations, libraries are from a legal perspective part of the government or the church. They are managed like a public university, which means they don't have customers. The first step would be to turn the users of the library into customers. That means people who visit the library have to pay for it, and only the users, nobody else.

The first software crisis is solved


Since the 1970s the term software crisis was used to describe a deficit in system software. Artifacts like operating systems, compilers, and graphics software were not widely available in the 1970s and 1980s. A short look into mainstream journals from that era shows that every newly developed program and every operating system was welcome, because people were happy if any software was available at all. The reason why minimalistic operating systems like MS-DOS were such a great success in the beginning is that a simple 3.5-inch floppy disk that could boot a PC was seen as the eighth wonder of the world. To understand the situation better we must go back to the 1970s. In that era no microcomputer like the Apple II was available, no system software, and especially no software that could connect computers over the internet. The problem was that in that era even computer experts were very bad at coding. If they had written a small text editor in assembly language, they saw themselves as god-like programmers.
Early databases in the 1980s were extremely minimalistic and at the same time cost a huge amount of money. The end user paid 10,000 US$ and got a product with an upper memory limit of 20 kilobytes and no way to store full-text data.
To summarize the software crisis from an economic perspective: hardware and software were simply too expensive. Programming a simple 10 kB application was run as a million-dollar project, and a simple memory extension cost more than an airplane. Perhaps the most impressive development is that at some point in history the software crisis was over. I would guess that the mid 1990s can be seen as the turning point, because since then microcomputers were available at small prices, and so was the software to run them. The software industry had increased its productivity dramatically. Many researchers were not only interested in writing a computer program, they also discussed how to write programs more easily. As a consequence new programming languages like C, new concepts like object-oriented programming, and multi-tasking operating systems were invented. At the time of its invention this advanced technology was called the 5th-generation workstation computer and can be dated to the late 1980s. Such computer systems look very familiar compared to what is used even today, and programmers could write software on these machines without much effort.
Perhaps the best invention for solving the software crisis was the GNU open-source project. The main idea was to recreate all the software that was already available, but without the price tag. For the first time not only advanced C++ compilers and network-ready operating systems were available, but thanks to Linus Torvalds such software was given away for free. This made the internet revolution possible and fueled a development that, viewed backwards, is called the dot-com bubble.
The original problem called the software crisis can be considered solved. The industry, together with academic researchers, found an answer to the challenge. If somebody today isn't able to find a compiler or an operating system to boot his computer, it is not a problem of the hardware or software industry but has other reasons. Cheap hardware and free software are available and wait for everybody who is interested in learning.
The world is not very exciting if there is no crisis. The first problem was solved, but another one is open, and this time it is unclear how to fix it. The new problem has nothing to do with system software for booting computers, but with AI-related software: packages for controlling robots, powering image recognition, or realizing search engines. Such AI-related software isn't available, and like in the 1970s, any small attempt at writing it is celebrated as the eighth wonder of the world. If somebody in the world is able to program a walking robot (for example Boston Dynamics), he gets applause from millions or even billions of users worldwide, because no other company can do it. The knowledge of how to program robots is not available in the public domain, and its cost is high.
I have no idea how to fix the second crisis. It is far more difficult to program a robot than to program a compiler for the C programming language. How to program a fast compiler is written down in many books, and many people know how to do it. They talk about the subject on Stack Overflow or on their blogs, and many examples are available on GitHub. But nobody knows how to program a robot. The total number of people who are able to install the ROS software is smaller than 30: the employees at Willow Garage who developed the tool, and they didn't document their work. The same is true for specialized deep-learning hardware. It is unclear how the chips are manufactured or what the TensorFlow software really does with the resources.
Is the AI software crisis really there, or is it only an invention of the media? Somebody who isn't interested in AI perhaps sees no problem, because normal computer software is available and works great. If computing means writing a text editor in C++ or a Linux driver for a WLAN card, then there is no crisis. But some visions go in the direction of computers controlling robots, and then it would be nice to know how to program them. Somebody who is interested in such problems will notice the second software crisis. He will recognize that today's papers are too complicated and that the amount of information is too low. And perhaps he will recognize that his own skills are too weak to program AI-related software. Perhaps he has tried to program a simple Pacman game AI that should simply avoid the ghosts, and he failed. Then he has a problem, because he is powerless in front of his computer.
The shared experience of beginners in AI is that they have too little knowledge to be successful. They try out simple AI puzzles, fail at the implementation, and give up. They know that AI is perhaps possible, but not by themselves and not in their lifetime. This feeling of disillusion is very common for today's computer generation. It isn't a failure of the individual but of larger circles, and it is unclear right now how to solve the problem. Perhaps it would be enough to push Open Access forward, so that information about AI-related topics becomes more easily available. Is this enough? Papers are available today, but they are too complicated, and sometimes the authors themselves have no idea about the subject. The problem goes deeper. The main problem is that AI has never before been researched in human history. It is something which wasn't available in the past, and there are no established methodologies for understanding the subject or finding out new things.

Video rental stores as a template for future libraries


Video rental stores are something from the past. Today most of these companies are under pressure, because Netflix and Amazon can stream far more movies and are cheaper. But let us ignore this recent development and describe the basic idea behind a classical video rental store. They existed from the 1980s until about 2010; their boom period coincided with the VHS format. The most interesting aspect of video rental stores was that the medium was a physical one. A VHS cassette is something different from the internet; it was a kind of medium from before the internet was invented.
The use case for the customer is very similar to a visit to a public library. He physically goes to the video rental store, brings old cassettes back, and searches for new films. Then he checks out at the desk and leaves the store. So far so good; the most interesting aspect is that behind such a store there is a business model. The owner of the store gets paid by the customer. And this is the difference to a public library, which has no such business model. The other aspects, the physical medium and the physical location, are the same.
Video rental stores were a great invention. They helped provide access to information for all kinds of people, and their business model can be transferred to today's libraries. As in a video store, the customers come in, bring their old books back, and search for new ones. As in a library, the customers prefer a physical store over the internet and are interested in printed books rather than digital PDF versions. They do so because they love books made of paper with high-resolution graphics, and they hate the PDF format because it can't be read at the beach.
As far as I know, most public libraries also have some DVDs and audio CDs that can be borrowed for a few days. The DVDs are the same titles as in the video rental store, for example Rambo I–III, The A-Team, or Flashdance. The difference is that the public library doesn't want to be a commercial business but a democratic institution. Why? Is the Rambo I DVD from the public library so much better than the same DVD from a small video rental shop in the city? Isn't it true that public libraries buy their CD-ROMs and books from the same commercial publishers that deliver the goods to Barnes & Noble or Amazon?

July 17, 2018

How to create cliparts from scratch


Suppose we need a fancy-looking emperor penguin for our next open-source project. The first step is to search for a realistic picture that already exists, for example at Wikimedia Commons. Even if the JPEG file looks great, it isn't a clipart and it wasn't created by us. No problem: we can open the original file in GIMP and create a second layer on top of the first one. To get a smoother line drawing for the outline it is always a good idea to scale the whole picture up, to at least 2000 pixels in width. Making the image smaller for export is always possible, but for the painting process we need a lot of space. The best way to follow the outline is by hand with the pencil tool. Thanks to the layer, we only need to paint over a line that is already there.

The reason why it is so easy to follow the contour lines has to do with two powerful features in GIMP: first the undo function and second the zoom. With both it is easy to create, in a reasonable amount of time, a good-looking copy of the original image which can then be filled with colors.

Tools for selecting an area help a lot to fill the colors exactly up to the border, which produces a good-looking penguin:


The resulting JPEG file is 43 kB in size and can be seen as our own creation. The author holds all the copyrights and can post the image anywhere. It is an artwork created from scratch in an all-digital workflow.

July 02, 2018

Digital painting 101


The number of publications about digital painting is surprisingly small. Even experts like Craig Mullins barely show up in a simple Google search. So it is time to push the field forward a bit and provide some introductory tutorials. One misconception is to equate digital painting with digital art. In both cases a computer is used, but digital painting aims to replace traditional painting techniques like oil painting. Let us start fresh.

The first thing we need is GIMP. It is available for Windows, Mac OS, and Linux. The version shown in the screenshot is outdated (version 2.8.22); newer versions of GIMP have more colors and better brushes, but as a starting point the software is well suited. It has more features than even expert painters will ever need. So what can we do with GIMP? Classically, the software is known as a photo-editing tool for improving pictures taken with a digital camera. That is only one feature and will be ignored in the following tutorial. The more interesting scenario is to use GIMP as a painting tool for creating art that does not already exist in reality. We start with some basic trials to get colors onto the screen.
What we see in the next image is a special tool called the “Smudge tool”; it helps to soften the transition between two areas. It is known from traditional art and allows the creation of special effects.
This tool can be used repeatedly and results in a quickly drawn picture. It looks similar to an abstract painting created offline, but it takes less time: the above picture was created in under 5 minutes. It has a high resolution and can be printed by an inkjet printer at any size. That means a single artist can produce not just one image per day, he can generate hundreds of them. The most important function in GIMP is perhaps the CTRL+Z key. It takes back the last step, like in a word processor. Such a simple function allows an amateur to reach the same quality as a professional. It is not necessary to already be able to paint; it is enough to want to learn it.
But something is missing. Apart from abstract painting it would sometimes be nice to have realistic paintings too, for example an SUV car. No problem: we paint one from scratch, adjust the colors, and insert it into our artwork:
Sure, such a piece of art could also be created with the well-known offline workflow; it is similar to what artists have always done. But digital painting has the advantage that it is a lot faster and can be handled by non-experts. A super-realistic painting is no longer something done only by professionals with 20 years of experience; it is something students can do after a two-week training course. A pipeline around digital painting is comparable to a mixture of speed painting and concept art. And like a scientific document created with LaTeX, it can be printed out and reach a much better quality than a normal artwork.
In the end some improvements were made to the picture and the result is:
Printing in poster format
Perhaps the most interesting question is how to print the JPEG file on real paper. So-called copy shops offer a service called poster printing or fine-art printing. The idea is to use large-format inkjet printers to print at 120 cm x 80 cm; such a print costs around 50 US$. The resolution should be at least 150 dpi, which for this format is roughly 7,100 x 4,700 pixels; a 12,000 x 8,000 pixel file corresponds to about 255 dpi (12,000 pixels / 47 inches ≈ 255). Poster printing has a long tradition; in the past it was used to print photographs and to reproduce artworks created long ago, for example by Van Gogh. But future artists can create the file themselves, copy it onto a USB stick, and print it out.
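As a quick sanity check of these numbers, here is a small, hypothetical C++ snippet (not part of the original post) that derives the pixel dimensions, the uncompressed file size, and the effective dpi for the poster format mentioned above; the 120 cm x 80 cm size and the dpi values are taken from the text, the rest is just arithmetic.

// rough print-size arithmetic, assuming 3 bytes per pixel (uncompressed RGB)
#include <cstdio>

int main()
{
    const double cm_per_inch = 2.54;
    const double width_cm = 120.0, height_cm = 80.0;    // poster size from the text
    const double dpi = 150.0;                           // minimum resolution from the text

    const double width_in  = width_cm  / cm_per_inch;   // ~47.2 inch
    const double height_in = height_cm / cm_per_inch;   // ~31.5 inch

    const long width_px  = static_cast<long>(width_in  * dpi + 0.5);  // ~7087 px
    const long height_px = static_cast<long>(height_in * dpi + 0.5);  // ~4724 px
    const double megabytes = width_px * height_px * 3 / 1e6;          // ~100 MB uncompressed

    std::printf("%ld x %ld pixels, about %.0f MB uncompressed\n",
                width_px, height_px, megabytes);

    // a 12000-pixel-wide file on the same 47.2-inch width works out to ~254 dpi
    std::printf("12000 px on %.1f inch = %.0f dpi\n", width_in, 12000.0 / width_in);
    return 0;
}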
As in the old way of painting, the quality of the paper and of the ink matters. And we are not talking about simulated colors but about real ones. The new thing is that the workflow is separated: creating an image with GIMP can be done without a real sheet of paper, while printing it out is a separate step. I would guess that in the future we will see many artists working with an all-digital pipeline. In the art exhibition, the original painting will have been printed on an inkjet printer. It is not a copy of another drawing, it is the original. If the exhibition refuses to show a printed copy of a picture, the artist's painting cannot be shown at all: he has no master copy somewhere in the basement, he only has the JPEG file stored on his laptop.

July 01, 2018

Art is the queen of science


From a 19th-century viewpoint, mathematics and physics were the queens of science. They powered the industrial revolution. Things like steam engines, healthy food, airplanes, and the computer wouldn't have become reality without mathematics and physics. Both subjects investigate the inner workings of nature and create formulas to describe the given order. Subordinated engineers transform the laws of physics into practical products.
That was the situation in the 19th century, but now the world has changed. For today's and even more for tomorrow's needs there is a demand for something more powerful than physics and mathematics. The best example of where physics fails is the task of programming a computer. Without software a computer is nothing, but physics alone can't answer the question of how to program the machine. Source code is not given in nature; it is made entirely of imagination. Programming a computer has much in common with painting an image or composing music. This is not only a detail problem, it is about everything.
Software engineering, and writing documents about how to program artificial intelligence, is not located inside mathematics and physics; it has its own category. Or to be more precise, it belongs to the same category as painting, dance, music, and literature: the arts. Art stands in contrast to mathematics and physics. It is something that isn't available in nature; humans are the only ones who can create it. And the subject that deals with creating poetry, source code, and everything else that is important in the future is called art. I would go a step further and argue that mathematics has nothing to say about software engineering. Mathematics invented the computer in the 1930s and electric current made the computer real, but everything beyond this trivial starting point is outside the scope of mathematics.
Let us take a simple example. We want to talk about the paintings of Leonardo da Vinci because he used an agile workflow; the idea is to see the past as a blueprint for writing better software. Would it make sense to discuss this topic inside mathematics or physics? No, it is off-topic there. Da Vinci, and all other artists, belong to the arts, because their main aim was to be creative, that is, to think about something which is not given by nature.
Somebody may argue that physics can also be creative. No, it can't. Physics is restricted to reality. For example, a perpetual motion machine is not possible in reality, so it is not a topic for serious scientists, and they are right. If somebody wants to discuss perpetual motion machines he should do so from the perspective of art. For example, he can collect paintings from the past that show such machines and talk about the history of this dream. Such a discussion is possible inside the arts, and many examples already exist. It makes no sense to redefine the subjects and to promote talking about non-science topics inside physics. The proper place for talking about imagination, fictional machines, computer programming, and artificial intelligence is the arts. It is on the same level as abstract painting, modern music, and poetry. It is purely imagination and has nothing to do with physics or mathematics.
In today's curriculum the distinction is not as clear as it needs to be. Programming and artificial intelligence are at most universities located in the mathematics department, in the so-called computer science faculty. This classification is wrong. There is a huge difference between physics and software engineering. The first one, physics, is a science: it is rational, based on reality, and can be right or wrong. The second one, programming, is not based on reality; it depends on the artist and his understanding of the world, and it is driven by emotions. The best example is the Linux kernel mailing list, which is poetry at its best. The people there are without any doubt involved in one of the most important software projects, and at the same time they are not discussing laws of nature but works that were invented by somebody and are now under judgement. This judgement is never true or false; it depends on other categories. The working process on the Linux mailing list is very similar to the working of an art community or an orchestra, but it is very different from what physicists or mathematicians do.
The mainstream misconception is that computing and artificial intelligence are located inside the mathematics department because they are assumed to be rational sciences. They are not. A for-loop in a C program has more in common with a perpetual motion machine and other purely non-serious inventions, like a painting by Malevich.
Mathematicians and physicists are not interested in speaking as deputies for artificial intelligence. They see the discipline as something which is not part of their business, and they are right. Physics is science; computing is not. Physics is the search for truth; computing is not. Physics is based on rationality, logical formulas, and stochastic models; artificial intelligence is not.
Nature
Let us go back to the beginnings of physics. The basic idea was that nature has to be understood. The wind is not a random force, it can be predicted; electric current is not simply there, it can be reproduced by an action. If the aim is to understand nature, the hard sciences are great. Physics combined with mathematics, biology, and chemistry has a long and successful tradition of recognizing universal laws. On the other hand, they have a weakness too. If a physicist tries to make music or paint a picture, he will fail; that is not something taught in a physics class. The reason why the art department is on the other side of the campus, and in most cases at a different university, is that art is something different from physics. Art is based on human culture. Artworks from the past are collected, interpreted in a subjective way, and the aim is to create something that wasn't there before. Art produces its own rules, and they can't be expressed mathematically.
Let us go back to the example of the perpetual motion machine. Suppose an art student who likes to make movies builds a pseudo perpetual motion machine. He uses a hidden electric wire which is not visible in the film and paints the machine in a certain look to increase the effect for the audience. From the perspective of the arts he has done great work: the movie looks great, the machine works as expected, and he was very creative. But if the same project is observed by a serious physicist, he will come to the opposite evaluation. The art student has understood nothing, especially not the laws of thermodynamics; his machine won't work, it is fake and non-scientific.
Who is wrong? Nobody, because art is different from physics. The first has to do with creativity and making nonsense, while the second has to do with reality and rationality. The only mistake that can happen is that the art student is not aware of the difference between the subjects. The language used in each subject is fixed and different: a valid contribution to the arts is different from a valid contribution to a physics debate. I think the distinction is right. It is a best practice that art doesn't understand physics and vice versa.
The picture Waterfall by M. C. Escher is a famous example of the difference. From an art perspective the image is great: it was created by a professional painter and has an interesting subject. From the perspective of physics the image can be ignored, because it contains nonsense information which contradicts reality. It makes no sense to explain the flow of water with this kind of art.

New art focuses on output


On the internet two kinds of art tutorials are available: the old ones with a classical style and the more modern, digitally oriented tutorials. What is the difference? The old style of teaching art is oriented towards understanding art, or, more precisely, towards reproducing the past. The artist takes a sheet of paper and oil colors and paints like van Gogh and others. In most cases the tutorials are grouped around teaching certain skills, for example seeing objects correctly, understanding perspective, or being familiar with imagination.
Somebody may argue that this way is the only one and is equal to THE art. But it's not. The new digitally oriented tutorials have a different focus. Here the aim is not to reproduce a certain workflow or a certain understanding of art; the focus is on the result. In most cases a painting is created with a purpose, for example the artist wants to paint a flower. How this is done is not important. One option for creating the result is simply to put a photo on a copy machine and press the button. Another option is to copy and paste a JPEG file that is already there. All of these techniques are valid patterns for creating art, because they result in an output.
What new art investigates, and what the tutorials describe, are different ways of generating art productively. Apart from simply using a photocopier it is also possible to start the GIMP software and draw the image with a tablet. The technique is not important: instead of a tablet a mouse would work too, and instead of GIMP another tool, for example a vector drawing program, would be possible. The artist is not focussed on a certain workflow or a certain understanding of art; he is guided by the results. He has certain requirements, for example an image created from scratch that shows flowers, and then he picks a technology that helps to produce this output.
Stereotypes
The classical description of art distinguished between offline and online art. Offline art is painting with oil colors on paper, while online art is digital art grouped around electronic photo editing. The GIMP software was introduced as a classical photo-editing tool: a digital camera produces the image and GIMP is used for minor adjustments.
But this clear distinction has disappeared, because GIMP is more than just photo-editing software. GIMP is a full-blown electronic art studio which allows painting directly on the screen. The new and better workflow is to use GIMP to replace classical painting. The content is created natively for the web and is printed on demand, for example to present the work in an art exhibition. The result is that there is now a difference between the physical painting and the creation of the painting: it is possible to draw a picture without printing it out. The advantage is cost. With GIMP it is possible to create many more images in a short amount of time, and without the need to buy oil colors for concept work.
In classical old-school painting both working steps happened at the same time. While using the brush, the artist decides about the image; he cannot paint without paint. Painting was physical painting. In the improved workflow the artist can create a complex oil painting without using real colors. He moves the mouse on the screen and uses virtual tools like a brush, a pencil, and so on. But the old technique of using physical colors is still available, because any JPEG image can be turned into a real painting with an inkjet printer. For creating art, a normal desktop printer is not enough; special A0 inkjet printers that provide high-resolution quality are needed. Such printers are available in copy shops.
What is the result of printed art? Mostly a higher productivity. Artists who use a digital workflow with GIMP plus inkjet printers don't draw just 2 images a year, they produce 100 images in a huge format, and these get printed out many thousands of times. At first glance it seems that not a single artist but a huge group of designers is behind the images, but it was a single person. He has optimized the workflow and is able to create from scratch any image the customer likes. What he has done is replace the ordinary offline brush with computer hardware.
Resolution
One explanation why computers have not often been used for paintings has to do with the limits of technology. The first painting programs have been available since the 1980s, and Andy Warhol used the top model of the time for creating art. But the output wasn't comparable to real art made with a brush, because the Amiga 1000 computer had a limited amount of main memory. If somebody draws an image on an early home computer, it looks like computer art but not like a real painting.
Let us define some preconditions for using the computer as a serious tool. Suppose somebody wants to create a normal-sized image which can be printed at 40 inch by 40 inch (about 1 meter x 1 meter). A sufficient quality would be 300 dpi, which gives 12,000 x 12,000 pixels. Reducing the number of pixels isn't a good idea, because the result is a raster graphic whose pixels can be seen with the naked eye from a distance. The key question is: what does a computer look like that is able to process 12,000 x 12,000 pixels? And this is the bottleneck: the early computers of the 1980s were not capable of it, and even expensive workstations from that era were not suitable. The uncompressed image takes roughly 12,000 x 12,000 x 3 bytes, around 400 MB, which means that computers before the year 2000 were simply not powerful enough to handle this data.

Is the Linux kernel a fake?


A careful observation of the development in the Linux community shows that every two months a new version of the Linux kernel is released. Every release has lots of improvements, security fixes, and a bit of tuning. The updates come regularly and frequently and have a high quality. Can this story be true? What is Linux hiding from us?
At first glance the development process is suspicious. It doesn't look like it is driven by humans; it seems that Linux is developed by an artificial intelligence. If ordinary programmers released the code, the frequency would be much lower and the quality weaker. What would the computer program look like that is able to generate the Linux source code? I'm not talking about the C compiler, I'm talking about the source code itself.
But let us take a step back. Can it be possible that high-quality software without any serious bugs was programmed by humans? Yes and no at the same time. Software engineering is indeed hard; on the other hand, the workflow which results in the Linux kernel can be reproduced under clean conditions. Suppose we want to run our own “nearly perfect” software project. The first thing we can do is create a new git repository with the magic command-line expression “git init”. Then we commit some edits. The result is twofold: first, the development process becomes transparent, and second, the result is nearly perfect source code. Even if we are not good at coding, the git workflow allows us to take back wrong actions and to evaluate edits according to their quality. Git acts as a kind of quality-evaluation system, comparable to what is done in modern automotive manufacturing.
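As a rough illustration of that workflow (the project name, file name, and commit messages below are made up, not taken from any real project), such a transparent and revertible history might be built like this:

# start a new, transparent project history
git init nearly-perfect-project
cd nearly-perfect-project

# commit small edits one by one, so every change can be reviewed later
echo 'int main(void) { return 0; }' > main.c
git add main.c
git commit -m "add first draft of main"

# a bad edit is not a disaster: it can be taken back
echo 'broken change' >> main.c
git commit -am "experiment that turned out wrong"
git revert --no-edit HEAD     # undo the last commit with a new commit
git log --oneline             # the full history stays visible to everyone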
Car production faces the same problem. On the one hand, the ordinary worker is not very accurate: he has a limited understanding of the internals of an engine, and his motivation for producing high quality is low. If 10 such workers built a new car on their own, it would fail. The workers would argue, they would leave out important steps, they would forget to install something, and so on. And here comes the magic, called management. Even if the workers are not very good, it is possible to produce many cars which are 99.9% perfect. Every car is the same and has no serious issues. In reality this is achieved with a mixture of process monitoring, quality control, and communication between the workers. Such a management system transforms a group of inexpensive workers ;-) into a high-quality production line. They act nearly perfectly, and the car looks as if it was produced by robots.
The surprising truth is that neither in car manufacturing nor in programming the Linux kernel is any kind of automation in the loop. Most of the process (90% and more) is done by hand. Sure, the workers have some mechanical tools, and Greg Kroah-Hartman has some profiling tools for analyzing the code, but in general it is a manual task done by humans.
What I want to say is that the development of the Linux kernel, or better the quality of the Linux kernel, is the result of management decisions. Such a system transforms weak and moderately capable programmers into robot-like programming machines which generate 99.9% top quality. The interesting news is that the process can be reproduced in any other software project too. All the programmers need is a git-like version control system and a shared mission, for example to build software like Google Chrome, GIMP, or whatever. Even if the individual programmer has absolutely no understanding of computing, the project will be a success.
Mastering robot-like quality
In a modern software engineering pipeline there are techniques which improve the source code to a nearly perfect quality. Perfect means that from an outsider's perspective it looks as if not humans but an automatic artificial intelligence had written the code. In short, the techniques are:
- git version control system
- mailing list
- searching Stack Overflow for similar problems
- C as a high-level language
- the repository available in full text to every team member
- a group culture which judges commits by quality, not by personal status
All of these techniques are used in modern software projects, for example the Linux kernel but also others. The result is nearly perfect source code which is indistinguishable from automatically generated source code. The surprising part is that so-called automatic code generation is not used in modern software development. Linus Torvalds has no UML model which is transformed into executable source code. Instead, the quality is based on practices like using git and searching Stack Overflow. The human programmer is always in the loop: first he reads the mailing list, then he browses the source code, then he searches Stack Overflow for a similar problem, then he fixes the bug, then he pushes out the commit. The human programmer is at the centre and all the other tools are grouped around him.
With modern internet communication this workflow is turned into a game-like system. The source code can be seen as a text adventure, and the players who contribute to the project try to maximize their score. Again, the workflow is light-years away from an automatic or autonomous one. I would guess that fewer code generators are used today than 30 years ago. That is interesting, because the academic literature about how to write high-quality software is based on models and code generators. The idea is that an abstract UML-like model is created first and then transformed top-down into source code. That is the way software engineering is taught at university, and it has nothing to do with how software is programmed in reality.
What we see in real life is that programmers have a source code editor open, for example Eclipse, Visual Studio, or whatever, and they edit every piece of code by hand. And that is not a malfunction in the system, it is the best practice. Every single byte of the Linux source code was entered on a keyboard by a programmer who pressed the keys manually. And if we observe the current trend on Stack Overflow, it is likely that source code will become even more important. A good Stack Overflow question about how to use a C pointer correctly is asked by posting some lines of code, and the perfect answer also contains some lines of code. The workflow isn't based on code generators but on teaching programming. This is sometimes called social coding, because the beginner has to learn, the advanced user gives advice, and the communication between them is stored in full text, so that an outsider can recognize a flame war on the mailing list without knowing the details ...

Digital Painting and the revolution in art


The term digital painting sounds like a revolution. It can be compared with the term Open Access, which describes a new form of book publishing. The shared identity of both is the digital-only idea: a new technology grouped around computers and desktop publishing.
Before I explain digital painting, let us first define electronic publication. Electronic publication simply means not printing out a manuscript; instead it is a file that is created, improved, and read entirely on the screen. Instead of mechanical typewriters and Linotype printing devices, more modern technologies like the PDF file format and LaTeX are used to generate the output. The content itself, for example a manuscript on a scientific topic, remains the same: an electronic PDF paper has chapters, a bibliography, and words written in English, like its offline ancestor.
Now we can introduce “digital painting”. In contrast to electronic publication, digital painting is even more advanced, so much so that even progressive artists may never have heard the term. It simply means not printing out an artwork or using a traditional studio, but taking advantage of modern computer technologies like a graphics tablet, the JPEG file format and the GIMP software. Digital painting should not be confused with digital art. Digital art is a very traditional form of art, because the computer is the subject. A possible installation would use computer monitors in an art exhibition to explain to the public what technology is. Another example of digital art is the early home computer scene on the Commodore C-64, which programmed so-called intros and demos. In both cases, the computer made a new form of expression possible.
Such digital art is not the subject here. It doesn't contradict the self-image of a painter who produces an oil painting. An installation that uses an Arduino microcontroller can coexist with a normal 1x1 meter painting. This can be seen as a state-of-the-art example and is widely accepted in the art system today. The more radical form is to use computer technology to replace the old kind of art.
An early form of this technique is called speed painting and can be realized without using the GIMP software. The idea behind speed painting is that the artist uses normal materials like a pencil and colors, but with a particular motivation. The aim is not only to produce a painting, but to do so under the constraint of using no more than 10 minutes. The artist is focused on the output of his work. He takes a stopwatch, and he must be an expert in different painting strategies; he can't figure out how to draw while he is doing so.
Digital painting is comparable to speed painting. Here the pencil is replaced by a computer program, and the idea is to produce a certain picture. As in speed painting, environmental constraints like time, money and effort are very important. One example in GIMP and other tools is that the artist has only a restricted set of colors, for example 16.7 million of them, and he can't use more. Also, the number of brushes is limited. If he needs a certain shade of blue which is not available in GIMP but only as an offline color, he has a problem: the color is not available in the software and he must find an answer to that problem.
Let us investigate what Craig Mullins is doing. He doesn't program algorithms to generate a picture, which would be fractal art. He doesn't scan a picture at 600 dpi and upload it to the internet, and he doesn't use oil colors in the real world. At the same time he is painting pictures, but he focuses on tools that are available on the computer. For example, he has a digital pencil and draws an artwork with it. Why is he doing so? Because he is not interested in the painting process itself, for example working with oil colors; what he wants is a result. The typical workflow of Craig Mullins is that a customer needs a portrait and Mullins has to deliver within a certain amount of time. How exactly he generates the image is not important. He may use Photoshop, or he may use GIMP. The aim is to generate a JPEG file of, say, 100 MB, and how exactly this image is produced is not important.

Defining art


... is surprisingly easy, because there is something available which is its opposite. This is called science and is structured into subjects like mathematics, physics, biology and chemistry. The idea behind science is to use the rational mind to understand nature. The good news is that everything in the world fits into one of these categories: it is either art or science. In general, art creates something from scratch which was not there before and results in a cultural history, while science is true by default and bans everything that is not science. The best example to make the difference clear is a perpetuum mobile. From the perspective of physics it is a failure, equal to non-science: it violates the laws of nature, and people who try to build such a machine are pseudo-scientists. The same device is, from the perspective of art, a valid contribution. It is possible to paint such a machine in oil and it is possible to make a video about it (which is fake, of course).

June 27, 2018

Why dogs are aggressive


Healthy communication between humans and dogs has to do with reading the mind of the other through language. It is a two-way communication: the human talks to his dog, and the other way around. An example would be the human reading a book aloud in front of the dog to fill his mind with the right ideas, but the opposite direction is also important. Every dog sends out signals, and it is up to the human to interpret them.
The most important question is perhaps: why are dogs so aggressive? Answering this is easy: they are defending limited resources. A dog usually lives together with other animals in the wild, and he needs certain goods: a place to sleep, something to eat, a position in the hierarchy and so on. All of these resources are finite. If dog1 eats the fish, dog2 can't eat the same fish. The result is a conflict about who owns the meal, and a conflict is solved with aggression. That is the reason why dogs have developed lots of aggressive behaviors and signals to warn, inform and fight with other dogs.
The easiest way to provoke aggressive behavior in dogs is to limit the available resources: for example, to take a fish away from the dog, to reduce the amount of space he has, or to disturb his play. From an abstract point of view this makes the limit of the resources obvious. As a consequence the dog makes a decision, and in most cases he decides to fight for his resources. If he is a wild dog this means an immediate attack; if the dog was trained by a human he can choose a non-aggressive behavior, for example begging. That means he asks politely whether he can get the fish back, whether he can get his space back, and so on.
Begging for food is a very common behavior. It is a learned behavior which transforms a formerly aggressive fight over scarce resources into a socially accepted behavior with the same aim: to get control of the limited food. Perhaps dogs are so fascinating to humans because humans use the same technique. Like dogs, they face the major problem that food, space, and social hierarchy are limited resources, so they have invented strategies to get control of them.
To make the point clear: the precondition for any dog behavior is resources. If there is no conflict about a limited amount of food, space or hierarchy, the dog will do nothing. He is bored and ignores the situation. If nobody has stolen his fish, then there is no problem. Perhaps the dog recalls previous experiences from last week to improve his behavior in the future, but in most cases he has forgotten the episode. That is the difference between dogs and humans: humans can make notes on their laptop, which is beyond the scope of animals.
What I want to explain is that dogs themselves are not aggressive. It is not an inner disposition that controls their behavior; their behavior always has to do with the games they play. The situation is located outside of the dog, for example when dog1 snatches the food of dog2. This game is about two players, a limited resource and a social hierarchy. It is up to the individual to play the game, and this results in a certain behavior.
So-called alpha dogs have developed strategies for getting the most resources. Either they are super-aggressive or they are super-cute. In both cases they get most of the high-value resources: free space, high-quality food, a position in the social hierarchy, fresh water and so on. What dogs communicate to each other is how to develop such strategies. And once they have acquired a behavior, they will use it in reality.

Painting is easier than expected


How does art work? It is mostly a mystery. The average non-artist imagines that doing art depends on being a certain kind of person: someone who discovered his drawing skills early and became better and better over the years. So it is a long and demanding journey, right? No, it is not. Painting is very easy and can be mastered by everybody.
In a previous blog post I explained how to use GIMP for tracing the contour of an image. The idea itself was the right direction, but the technique needs some improvement. In that post I suggested making the original visible on the left side of the screen while opening the GIMP drawing software on the right side, in order to trace the lines by eye. There is a feature inside GIMP which is extremely useful, called layers. Here is the tutorial for the absolute beginner.
Step 1 is to ask Google image search for a painting which already exists. It can be a certain motif, and it can be realistic. Step 2 is to open the JPEG file in GIMP. By default it will be opened as layer0. Step 3 is to create a second layer which we call “trace”. Via the layers window it is possible to switch between the layers and make each one visible or invisible. And now comes the trick: when creating the new layer, the dialog asks whether we want a transparent background. Sure we do. In step 4 we have both layers on top of each other and draw the contour of the original image with the pencil. That is surprisingly simple, even without advanced input devices; a normal mouse or trackpad is enough. In the last step we fill the trace with colors similar to the original and export our layer as a new JPEG file.
Let us reflect on the technique a bit. According to this description, painting is equal to drawing the contour lines of an image or photo which already exists. The GIMP layer tool is the perfect choice, allowing even beginners to create their own masterpiece in under 5 minutes of work. No, it is not a joke. The image at the top of this posting was created in under 5 minutes and without much effort. Is it art? I don't know, probably yes.
Somebody may object that tracing the contour of an image is not painting, but simply creating a bad copy. So what is the difference between tracing the lines and taking a photo? There is a huge difference, because our copy looks completely different from the original. If we don't tell the public what the template was, they will never recognize it. And there is another trick available. Suppose we don't want to trace a real photo or a real image, but want to create everything from scratch. There is also a way to do that. All we need is a small artist's mannequin. The mannequin gets clothes, a photo is taken, and now we trace this photo. And voilà, every part of our image was really created from scratch without copying anything. But to be honest, even this painting technique is a copy, because the workflow always involves opening the original in layer0 and making the trace on layer1.
Impress the non-artist
If we look into art schools and books about painting, we will never find a similar explanation of the workflow. That means the artists who paint realistic images are not telling us that they simply trace images with the layer feature of GIMP. Question 1 is: are they doing so? And question 2 is: if yes, why are they not telling us? Answering these questions is easy. If everybody is able to create art, nobody will need artists anymore. It might also be the case that in former times it was not widely recognized that painting always means copying something. What most art schools of the past taught was a certain type of copying. For example, the students sit in a room and all paint a flower standing on a table. But painting has nothing to do with this situation, because the example contains many elements that are not important for the creation of the image itself. First, it is not important to paint in a group; somebody alone in the room with the flower will get the same result. Second, an art school is not a precondition either. And finally, a real flower is also superfluous. What I have tried to describe in this blog post is a kind of minimal artist setup: a workflow which results in art, but uses a minimal set of resources. The workflow consists of:
- in GIMP, layer0 holds the original image, which can be a painting or a photograph of a mannequin
- in layer1, the artist draws the contour lines and fills in colors with the aim of copying the original; he can add some noise to make the difference from the template more obvious.

C++ the language of the future


For the past one or two years, some articles have introduced C++ as the best programming language ever, superior to Java, C# and Python. Are these articles right? To answer the question we must look back to the year 1995. The advantage is that the history is well understood and plenty of material is available. What was the situation for programming languages back then? Programming in C++ was possible, but there were many pitfalls. First, the Borland C++ compiler cost a lot of money and needed huge resources. If somebody wanted to write a simple Hello World program, such a compiler was not the best choice; the better idea was to use a BASIC interpreter under MS-DOS or on the Commodore 64.
But it was clear that, at least in 1995, C++ was an advanced high-end programming language, because it contained lots of features: compiling the source code results in fast applications, object-oriented programming is highly productive, and templates allow the same algorithm to be written with less code. If somebody was familiar with C++, had a lot of money and also had a fast developer workstation, then C++ was the way to go.
Since then a lot has changed. State-of-the-art C++ compilers like GCC are available for free, a cheap consumer PC can be used as a workstation, and library features like std::vector allow C++ to be used almost like Python. C++ works the same as in 1995, but without the pitfalls. That is the reason why some people argue that C++ is the language of the future. It is basically the same language as in 1995, but today the costs are much lower. Perhaps the most interesting aspect is that C++ scales to many demands. First, it is possible to write any kind of application with it: web applications, command line programs, GUI applications, games, compilers and operating systems. Second, a C++ compiler can be used in many ways. The beginner can program in C++ almost as in a Python interpreter, writing down a hello world function that contains nothing more interesting than a loop and an if-statement, while the expert programmer can define his own class library and use template metaprogramming. It is even possible to extend C++ into a stack-based programming language called UNconventional Threaded Interpretative Language (UNTIL), invented by Norman E. Smith.
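To back up the claim that std::vector lets a beginner write C++ almost like Python, here is a small sketch of my own (not taken from any particular project):

// Sketch: std::vector used in a Python-like way, without manual memory management.
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> squares;                // grows on demand, similar to a Python list
    for (int i = 0; i < 5; ++i)
        squares.push_back(i * i);            // comparable to list.append()

    for (int n : squares)                    // comparable to "for n in squares:"
        std::cout << n << ' ';
    std::cout << '\n';                       // prints: 0 1 4 9 16

    return 0;
}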
A look back at 1995 is also useful for judging the importance of GUI libraries. In 1995 there were two major libraries out there: the ObjectWindows Library (OWL) from Borland, and the Microsoft Foundation Classes (MFC) from Microsoft. Such a library, together with an object-oriented language, is a powerful tool for creating complex applications in a short amount of time. And this perhaps answers the question of which programming language is right for today. In 2018 there are likewise two important open questions: which language is right, and which GUI library is right?
Let us take a look at a bottleneck in today's C++ development. Under MS-Windows, C++ works great, or to put it better: there are C++ libraries out there. If the user wants to program a GUI application on Linux, he will run into trouble. The gtkmm library is available and can be installed for free, but it is weakly documented and few introductory tutorials are available. That means a newbie is currently not able to program a C++ GUI application under Linux. In contrast, the situation on Windows is much better. I would guess that the lack of a well-documented C++ GUI library on an open source operating system is the major bottleneck in today's C++ development. So if somebody argues that he is not using C++ but C# under MS-Windows because it has the better library, he is probably right.
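To show what a minimal introductory example would look like, here is a sketch of my own (assuming gtkmm 3 installed from the distribution repositories; the application id, window title and build command are invented for illustration):

// Minimal gtkmm 3 sketch: an empty application window.
// Typical build command (assumption): g++ hello.cc -o hello `pkg-config --cflags --libs gtkmm-3.0`
#include <gtkmm.h>

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.hellogtkmm");

    Gtk::Window window;                 // the toplevel window
    window.set_title("Hello gtkmm");
    window.set_default_size(300, 200);

    return app->run(window);            // show the window and enter the main loop
}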

June 24, 2018

How to use a scientific bookstore?


An academic bookshop specializes in educational material used at universities. The staff can search the Springer and Elsevier databases for an article, and it is possible to buy the standard books for a subject. They also offer conference proceedings from the previous year. The customer usually gets 150 pages for 40 US$, which is very cheap; a rare proceedings volume can cost 100 US$ for only 50 pages.
Recently, some customers have not been aware of what an academic bookstore is. They do not want to buy books; instead they ask how they can create a profile with their name, photo and a list of publications. The problem is that an academic bookstore was never meant to support such a demand. It is only possible to buy printed books and ebooks. The customer does not need a personalized profile page on the internet, because a bookstore is a place to buy information, not to drop off personalized marketing material.

June 21, 2018

First look at Glade


Programming a GUI application on Linux is hard. There are lots of toolkits available (GTK+, Qt, MonoDevelop, Python window toolkits) and it is unclear which of them is here to stay. The hypothesis is that the combination of C++, gtkmm and the Glade GUI designer is the best-practice method for developing your own apps easily. But I'm not sure how this works in detail. Today I only want to take an introductory look at the Glade GUI designer and evaluate what is possible with the tool.
The installation is easy. A simple “dnf install glade” is enough to fetch the roughly 3 MB package onto the local Fedora machine. After starting the program the user sees a screen like this one:

Using the menu, the user can drag & drop the elements of his GUI application, which works like a painting program. He can also run the preview function to see how the GUI will look.


But there is one part which I haven't figured out yet: how to use the GUI in a C++ program. As far as I know, the file is stored in the working directory with the “.glade” extension. It is an XML-formatted file which can be used from Python, C or C++ apps.

But the details are unclear right now. Nevertheless, the Glade tool itself seems very powerful. The example GUI window above was created in under 5 minutes with simple point & click mouse movements. The user first selects the window type, specifies a grid, and then moves the elements like a menu bar, a text field and a button into the window. Then he can assign names to activation events, for example “pressbutton”. These events should be handled later in the C++ program.
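To make this a bit more concrete, here is a minimal sketch of how such a .glade file could be loaded from C++ with gtkmm's Gtk::Builder. The file name "example.glade" and the widget ids "mainwindow" and "button1" are only assumptions for illustration; they have to match whatever ids were chosen in the Glade designer, and the signal hookup shown here is done in code rather than through the handler names entered in Glade.

// Sketch (unverified against any real project): load a Glade XML file with
// gtkmm 3 and connect one button signal. File name and widget ids are invented.
#include <gtkmm.h>
#include <iostream>

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.gladetest");

    // Parse the XML file that Glade produced.
    Glib::RefPtr<Gtk::Builder> builder;
    try {
        builder = Gtk::Builder::create_from_file("example.glade");
    } catch (const Glib::Error& ex) {
        std::cerr << "Could not load glade file: " << ex.what() << std::endl;
        return 1;
    }

    // Fetch the toplevel window and a button by their Glade ids.
    Gtk::Window* window = nullptr;
    Gtk::Button* button = nullptr;
    builder->get_widget("mainwindow", window);
    builder->get_widget("button1", button);
    if (!window || !button) {
        std::cerr << "Required widgets not found in the glade file" << std::endl;
        return 1;
    }

    // React to the click event defined in the designer.
    button->signal_clicked().connect([] {
        std::cout << "button pressed" << std::endl;
    });

    return app->run(*window);
}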