Scholarly writing consists of several subproblems. The pipeline consists of:
1. writing the article itself
2. converting the article into the PDF format
3. publishing the PDF file
The following blog post investigates only the first problem and assumes that the remaining ones are easy to solve. The writing itself is the result of taking notes. Note taking is done with outliner software. An outliner works quite differently from normal desktop publishing software.
Before a text can be written, some key points are needed. The key points are created while reading the existing literature. Once the key points are available in the outline editor, the same information can be written down as prose text, that is, normal sentences which can be read by an audience. The interesting fact is that outline editors allow writing full paragraphs, which form a sort of raw manuscript. These sentences are full of spelling mistakes and grammar errors, and in most cases they are hard to read, but they provide a first draft of the text.
In the next step the draft version is copied into a new file, usually in a text processing system like LaTeX or MS Word. The idea here is to rewrite the sentences and make them easier to read. Figures are also added to the text and the bibliographic references are inserted. From a formal perspective, the text is a reformulated version of the notes: the author has learned something while reading texts from others, has written the information down as key points, and has then reformulated the content in his own words.
Authors do this because it helps them to understand a subject. Most texts are written not by experts but by newcomers who have discovered a topic for the first time. It is something of a paradox that it is precisely authors without any background knowledge who are motivated to explain the subject to a larger audience. The reason is that under this precondition the author is self-motivated: it makes sense for him to investigate a topic and write something about it.
In theory it is possible to copy and paste the key points which were created while researching a topic, but most authors try to avoid this, because they assume that self-created key points contain contradictory information and out-of-context knowledge. The more elegant way is to rewrite the content from scratch as prose.
June 27, 2021
How to write an academic paper?
From printed journals to electronic ones
The first electronic journals were started in the 1980s.[1] In contrast to today's perspective, which is dominated by detail problems like Open Access, the debate in the 1980s was held from a more general perspective: what is the difference between electronic journals and printed journals? Perhaps the most interesting point is that peer review and the existence of an editor are typical elements of a printed journal. In that context they make sense and cannot be avoided.
Let us take a look at the workflow. According to the description, an author writes a paper, submits it to the editor, and the editor forwards it to the peer reviewers.[1] This workflow is remarkable because it was described in 1982 as the role model of printed journals; at that time electronic journals were not available, or were only starting slowly. After the peer reviewer, together with the editor and the author, has revised the document, it gets printed and delivered to the reader.
Now it is possible to explain why printed journals used this workflow. Suppose an author could submit a manuscript directly to the printer. The result would be that many authors do so: lots of draft articles get printed and the number of articles would be very high. The editor and the reviewer are useful for slowing down the process. Their main task is to stop the author, select only the best manuscripts and make sure that publication is postponed.
This kind of moderation is a reaction to the printing press. A printing press is a bottleneck: printing a document is a costly process and it takes time. A manuscript must not be printed if it contains spelling mistakes or if its quality is low.
Suppose the idea is to start a printed journal from scratch. The chance is high that this fictional journal would work on the same principle as the journals of the past: the author is slowed down by the editor and the peer reviewer so that only high-quality knowledge is sent to the printer.
In electronic journals the printing press is missing. It is possible to publish low-quality manuscripts full of spelling mistakes, because publication costs nothing: it is nothing more than creating an electronic file. Do we need an editor and a peer reviewer to slow down the process? It is a rhetorical question. The only reason why electronic journals have peer review is that they try to emulate the printed journal system of the past, that is, the journal still gets printed and the electronic version is an add-on.
Or let me explain the situation from a different perspective. It is pretty easy to justify the peer review system for printed journals: peer review provides an important filter in the process, and without it a printed journal cannot be created. The problem is that it is much harder, or even impossible, to justify peer review in the case of electronic-only journals. The basic question is: why is it not possible to publish the raw version of a manuscript?
___Peer review___
The underlying assumption of peer review is that it helps to filter information. This makes the peer review process a powerful authority: it is in the hands of the peer reviewer to decide whether a certain manuscript gets printed. What is ignored is that the important authority is not the peer reviewer but the printing press itself, which is the real reason why manuscripts get rejected. A printing press is a machine which produces costs. If a manuscript is to be published in a journal, it has to be printed first. If the journal has a circulation of 10k copies, the printing press prints the manuscript 10k times. This process takes time and produces costs.
For this reason a printing press is used sparingly. Not everybody is allowed to print out his ideas; the machine has a priority queue. This queue is not something determined by the journal, it has to do with the limits of the mechanical machine. Printing produces costs, and costs need to be managed.
Let me explain it the other way around. What all printed journals have in common is that the printed texts are free of spelling mistakes and that they were reviewed at least ten times. Only if a manuscript is absolutely perfect, and only if the journal editor is 1000% sure that the reader will need the information, will he send the manuscript to the printer.
___Technology___
From a computing perspective, electronic publication is a solved problem. The first desktop publishing programs were created in the 1980s, and since the advent of HTML and PDF it has been possible to publish a document on the internet. What is missing is the cultural shift towards electronic knowledge; this shift is possible in theory but not yet realized. The consequence of electronic publication is that the amount of information will become much higher than before. It can be compared with inventing the printing press a second time: the original printing press increased the number of books from a few hundred to millions, and electronic publishing will increase the number of books from millions to ...
___Literature___
[1] Turoff, Murray, and Starr Roxanne Hiltz. "The electronic journal: A progress report." Journal of the American Society for Information Science 33.4 (1982): 195-202.
Components for a robot control system
The main problem in robotics is that it is hard to define recipes which work well for all domains. Suppose the idea is to construct a robot which can ride a motorcycle, or a robot forklift which can load cargo. What is the basic principle for controlling these different kinds of robots?
A possible approach to these difficult domains is a combination of voice command processing, model predictive control and a learned cost function. Let us go into the details. The idea behind voice control is that the robot is controlled manually, but with natural language rather than a joystick. For example, the human operator can say “robot start” or “robot load the cargo”. This sort of interaction is important because it captures the entire picture of a robot domain, including the actions which are not automated yet. Interaction between human operator and robot is needed if certain parts of the control system are missing; in such a case the robot is controlled by teleoperation.
The second element of the control system is the mentioned model predictive control. MPC means predicting future system states and determining the optimal action. The last strategy on the list is a cost function, which helps to guide the search in the problem space. Learning a cost function amounts to inverse reinforcement learning, which is sometimes called learning from demonstration. The idea is that the demonstration provides the parameters which define what sort of behavior is wanted.
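To make the interplay of these components a bit more concrete, here is a minimal sketch in Python. It is not taken from any real robot stack; the 1-D cart dynamics, the horizon, the number of sampled action sequences and the cost weights are all invented for illustration. In the proposal above, the cost weights would come from inverse reinforcement learning on demonstrations instead of being written down by hand.

```python
# Minimal sketch: random-shooting MPC with a weighted cost function for a
# 1-D cart that should reach a goal position. All numbers (dynamics, horizon,
# weights) are invented for illustration; in the proposal above the weights
# would come from inverse reinforcement learning on demonstrations.
import random

DT = 0.1          # timestep in seconds
HORIZON = 10      # number of predicted steps
CANDIDATES = 100  # random action sequences evaluated per control step
GOAL = 5.0        # target position

def simulate(pos, vel, actions):
    """Predict the future states for a sequence of accelerations."""
    states = []
    for a in actions:
        vel += a * DT
        pos += vel * DT
        states.append((pos, vel))
    return states

def cost(states, w_goal=1.0, w_speed=0.1):
    """Weighted cost: distance to the goal plus a small penalty on speed.
    w_goal and w_speed are the parameters a learned cost function would supply."""
    return sum(w_goal * abs(p - GOAL) + w_speed * abs(v) for p, v in states)

def mpc_step(pos, vel):
    """Sample action sequences, predict their outcomes and return the first
    action of the cheapest sequence (the core MPC loop)."""
    best_action, best_cost = 0.0, float("inf")
    for _ in range(CANDIDATES):
        seq = [random.uniform(-1.0, 1.0) for _ in range(HORIZON)]
        c = cost(simulate(pos, vel, seq))
        if c < best_cost:
            best_action, best_cost = seq[0], c
    return best_action

pos, vel = 0.0, 0.0
for _ in range(100):
    a = mpc_step(pos, vel)   # in the full system a voice command would
    vel += a * DT            # enable or disable this control loop
    pos += vel * DT
print(f"final position: {pos:.2f} (goal {GOAL})")
```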
June 25, 2021
Early examples of desktop publishing
At first glance, desktop publishing seems to be something that was realized in the 1980s and whose technology is today simply there. A closer look shows that desktop publishing was at all times an advanced technology, and that creating the underlying hardware and software was very difficult.
Most technology historians date the start of DTP to the mid-1980s with the Apple Macintosh. But at that time the technology was hardly used in practice; it took some years until the first users discovered the new options. A more practical example of desktop publishing in the real world can be seen on the Atari ST. Around 1990 the Signum II software was available, a graphical text processing program which looked similar to modern Word. The disadvantage of Signum was that the software needed a lot of RAM plus an external hard drive, so the user had to spend extra money to upgrade the computer.
In theory it was possible in 1990 to write a longer text on the Atari ST with the Signum software; books from this time are available which describe the workflow. But it should be mentioned that, even though the Atari ST was perceived as a cheap computer compared to Apple machines, using a PC or a home computer to create a document around 1990 was still advanced technology. Only a few enthusiastic tech pioneers did so, not the majority of students at the university.
So we can say that around 1990 desktop publishing had not really arrived yet. The problem was that the computer hardware of the time lacked larger amounts of RAM, and most computers had no hard drive.
From a more realistic perspective, the DTP revolution started with IBM-compatible PCs and the Windows operating system around 1995. At that point the average PC was equipped with a hard drive and a large amount of memory and was able to run a graphical operating system. Since 1995 the average student has been able to type a text on a PC. On the other hand, until 1995 desktop publishing for the masses simply wasn't there: books, journals and documents were created somewhere else, not on a home computer with DTP software.
This may be a surprise, because it raises the question of how academic journals and dissertations were produced from 1900 until 1995. As mentioned before, the workflow was not realized with desktop publishing; it worked in a more distributed fashion. The interesting point is that all the elements of modern text processing software were available before 1995, but not in a single place: they were spread across different larger machines and different companies. For example, in the early 1980s printing machines were widely available, not on the desktop of a single user but in a printing house. High-quality photography was also available, as was the ability to create longer texts. Before 1995 the workflow for creating a book or an academic journal can be described as:
- phototypesetting
- printing machine
- entering text into a terminal at a larger mainframe
- graphic design in a dedicated company
Desktop publishing did not reinvent book printing from scratch; it combined all these steps into a single piece of software. The difference is that before 1995 academic publishing was group work, in which the steps of the workflow had to be coordinated, while desktop publishing since 1995 has been centered on a single person.
___Tutorials for Academic publishing___
The untrained user may wonder why universities have no courses in which students learn how to publish a paper, and why the topic of academic publishing isn't described very well in the literature. The simple reason is that in the past it was technically not possible for an individual to write, let alone publish, a paper. Let us assume a normal student living in the year 1985. At this time DTP wasn't available in practice; without a hard drive and high-resolution graphics it was simply not possible to create an academic paper. Even if a single student at this time was motivated to write a paper and publish it somewhere, it wasn't possible.
The ability to do so came much later. Tutorials can only be written and read if the underlying technology is available: only once it is possible to run MS Word on a graphical operating system does it make sense to write a tutorial on how to do so. This became possible only after 1995.
But if desktop publishing wasn't available before 1995, how was it possible to fill university libraries with content? Somebody must have known how to write and publish books, conference proceedings and papers. Yes, such meta-knowledge existed, but book publishing before the invention of desktop publishing worked a bit differently. Academic publishing before 1995 was group work, in which a larger number of people had to coordinate with each other so that in the end a printed book or a printed journal was available. It is hard or even impossible to describe the overall process in a single tutorial, because each step is handled by specialists. This makes it hard to give general advice on how to create a paper or how to start a new academic journal. The one thing common to all book publishing of that era is that it required a large amount of resources: a machine able to print high-quality journals costs millions of US dollars, and running text processing on a mainframe costs even more.
June 24, 2021
Creating computer games alone?
Most computer games in the 1980s and early 1990s were created by groups. The commercial high-quality titles in particular were never created by a single author but by a combination of graphics experts, musicians, programmers and marketing experts. What remains open is the reason why. Why was a group needed, and why wasn't it possible for a single person to create a game?
The answer isn't located outside the creation pipeline; it has to do with technical restrictions. If the computer hardware is slow and support tools are missing, the only way to handle the constraints is team work. Let me give an example. The Commodore 64 had a main memory of 64 KB of RAM. The only realistic way to create computer games was assembly language, and unfortunately writing such programs takes a lot of time. If someone spends six months on the code, he has no time left to create the graphics and the sound as well. The only way to program the code and create the graphics is for two or more people to work together.
Let us imagine a different sort of technology. Today the average PC has gigabytes of RAM and lots of tools are available; in addition, high-level programming languages like Python exist. There is no need to write the code in assembly language. Today a single person can write the source code, create the graphics, invent the game design, provide the sound effects and upload the resulting video to YouTube. If the game is a small one, a single person can do so in a weekend. There is no need for group work because better technology is available.
But what about the quality? What can we expect from a computer game programmed in Python by a single developer? The interesting point is that such a game has roughly the same quality as the games of the 1980s. The only difference is that fewer resources are needed to create such a project.
Let me explain the situation from the other point of view. Group work is an indicator that a project needs more resources than a single person can provide: so much time and so much different knowledge that a larger number of people have to cooperate to create something. Group work is needed when technology is missing. This can be shown for many different domains such as video games, book publishing, construction work or automotive assembly.
Perhaps it makes sense to ask how group work could be introduced into a modern video game project. Suppose the idea is to create a mini game in Python which has around 500 lines of code plus some low-quality sprites: a normal jump'n'run game without any extras. Does it make sense to treat this as a large-scale group project? It is a rhetorical question, because it is a typical one-man programming effort. It is nothing more than a hello-world demonstration by somebody who has discovered the pygame engine and wants to learn how to use it for creating a game; a minimal sketch of that scale is shown below. It is neither possible nor sensible to ask a group of people to join such a project. It would take more time to contact all the people than to write the small number of code lines alone.
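To give a feeling for the scale: a complete pygame program that opens a window and moves a "sprite" around fits in a couple of dozen lines. The following sketch is a generic example written for this post, not code from any particular project, and it only assumes that the pygame package is installed.

```python
# A minimal pygame sketch: open a window and let the player move a box
# with the arrow keys. Roughly the "hello world" scale discussed above;
# nothing here comes from a specific project.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("one-person mini game")
clock = pygame.time.Clock()

player = pygame.Rect(300, 220, 40, 40)
SPEED = 5

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        player.x -= SPEED
    if keys[pygame.K_RIGHT]:
        player.x += SPEED
    if keys[pygame.K_UP]:
        player.y -= SPEED
    if keys[pygame.K_DOWN]:
        player.y += SPEED

    screen.fill((30, 30, 30))                        # clear the frame
    pygame.draw.rect(screen, (200, 80, 80), player)  # draw the "sprite"
    pygame.display.flip()
    clock.tick(60)                                   # cap at 60 fps

pygame.quit()
```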
The typical situation today is that a single person creates such a game, uploads a video of it to the internet, and then reads through the comments to get feedback from end users. Such an interaction is not group work; it is something else. Group work in the classical sense means that lots of people have to interact before something is published.
Let me give a counterexample. Suppose the Python language hadn't been invented yet and the only computer available is a VIC-20, which has to be programmed in assembly language. Under such constraints the same project (a jump'n'run game) is much harder to realize, and not feasible for a single person. The alternatives are either to work in a group or not to create the game at all.
June 23, 2021
Can the success of Open Access be explained by technical reasons alone?
The discussion about Open Access has become mainstream within the science community. There are dedicated conferences, and many books and papers focus on the problem of how to open up science. Most discussions revolve around the question of how to convince somebody to publish a paper under an open access license, or, if this fails, what the concrete reasons are. Describing the situation from an ideological standpoint has become the standard, and Open Access is often seen as a movement similar to the Open Source idea.
To make the situation clearer it helps to formulate a working thesis: Open Access is not the result of individual decisions; its origins lie in technical developments, namely desktop publishing, full-text databases and bibliographic managers. The interesting point is that these tools are a recent development, and it can be shown that before they were available nobody was talking about electronic publishing.
Perhaps it makes sense to start the journey to open access with desktop publishing software. Suppose somebody has installed MS Word or LaTeX on his computer and starts to investigate the newly discovered features. What he will quickly recognize is that he can create academic papers and export them to the PDF format. LaTeX and MS Word were created with exactly this objective in mind, and it is very easy to do. If the user is unsure, he can ask a question in an online forum and will certainly receive an answer in which other users explain in detail how to export a text file to a PDF document.
The interesting point is that the self-created PDF file is the core element of open access publication. Open access assumes that the manuscript is available in a digital format, and of course in a standard format like PDF or PostScript. Open Access won't work if somebody has written the manuscript on a mechanical typewriter, because such a document can't be uploaded to the internet.
To understand the rise of the open access movement we have to look at the arXiv project in detail. The server was started in 1991 and the only content allowed was about mathematics, physics and computer science. In the 1990s, arXiv was essentially the only preprint server on the internet, and the reason was that in the humanities, such as literature or philosophy, computer technology was not yet in use. In the early 1990s the personal computer was an unusual device: it was available, but it was costly. A typical desktop PC ran MS-DOS, the software of the time generally couldn't export text documents to the PostScript format, and PDF hadn't been invented yet.
Open Access became a mainstream topic in the 2000s. During that period normal PCs were able to export documents to the PDF format with a single mouse click, and PCs were widely available. It is not surprising that since the 2000s the number of published electronic documents has been higher than before. Besides mathematical papers, documents with a humanities background were now also published on the internet.
The new development was that the former book publishing companies were no longer needed. The combination of the internet and desktop publishing software allows a single person to write and publish a paper without classical libraries or Linotype machines.
The curious thing about this revolution is that within the book printing industry everything remains the same. A classical academic publishing house works the same way as 20 years ago: somebody sends a manuscript to the publisher, it gets formatted electronically, and then the book is distributed to academic libraries. The new thing is that nobody needs this workflow anymore. Today the individual can decide whether he wants to publish a paper the classical way or simply upload the PDF file to the internet. The simple reason why so many electronic documents are available is that it is so easy to create them.
To understand the revolution in detail we have to focus on the key component of Open Access, which is desktop publishing software. Desktop publishing means that the former workflow was simplified: instead of using a book publishing house, a printing company and somebody who formats the document, the author of the paper is in charge of the entire process. Desktop publishing basically means that a single person authors the manuscript, formats the layout, creates the images, checks the bibliographic references and exports the document as a PDF file. There is no need to send the manuscript back and forth between different stakeholders; the document is created on a stand-alone PC with powerful text processing software.
Before the advent of desktop publishing a book was created by a team. The workflow can be traced by looking at the imprints in and on a book, which in most cases was marked by the different stakeholders. A typical book from the 1970s carried the stamp of a library, the physical place in which the book was kept. Another imprint came from the printing house, the company which created the physical book. Then there was a credit for the translator, if the book was translated from one language into another. Yet another imprint came from the publishing house, the company which formatted the document, and so on.
Basically, the workflow for creating a book in the 1970s was distributed over many steps. Somebody may argue that this complicated pipeline equals high-quality book publishing, but the more likely reason why the workflow was so complicated is that the technology of the 1970s was primitive. The desktop computer hadn't been invented, and it was complicated to prepare a manuscript and print it. The internet was missing, so a library was the only way to distribute the information to the reader.
The outdated book publication process of the 1970s and the more recent publication workflow since the 2000s are both the result of a certain technology. If a book is created with a mechanical typewriter, Linotype printing machines and physical libraries, a certain workflow is needed before the book is available. If the book is made with LaTeX, PDF files and web servers, a different workflow leads to publication.
The illusion of Open Science
The dominant theory about the rise of Open Science puts society or government in the role of decision maker, and the goal is to convince more scientists that Open Science is a great idea. In this view it is a kind of ideology and political movement to open up research and make publications free for the world to read. The main reason why this narrative is repeated is that it gives the stakeholders the illusion that they have everything under control: an individual scientist can decide for himself, or the political decision makers can give Open Access a higher priority or not.
A closer look at Open Access shows that it is not the result of a decision-making process; the reason lies in technological development. The first open access server in the world was the arXiv repository. The simple reasons why the project is up and running are:
- the costs for a web server are low in the internet age
- desktop computers and the LaTeX text processor are common among scientists
- once arXiv hosts many thousands of papers, it is pretty easy to write more of the same kind of content
These technical conditions have resulted in a successful preprint server. Basically, the simple reason why Open Access got started is that the manuscripts are available in digital format, so it is easy to upload them to a web server.
Like other technological innovations such as the car, the computer or the telephone, the process was not managed and there was no higher authority which decided to introduce it; once the new technology worked well, it was adopted as quickly as possible.
It is interesting to observe that Open Access started around the same time as the desktop computer. The first electronic scientific journals on CD-ROM appeared as soon as the CD-ROM was invented and desktop computers were there to create such content. This sounds a bit trivial, but there is a causal relationship between the invention of desktop publishing and electronic publishing.
To understand the Open Access movement we have to identify the technology which supports the creation of academic papers. Potential key components are full-text search engines, desktop publishing software, bibliographic databases, document formats like the PostScript and DVI standards, Unix workstations, and a larger number of people who have access to these things. The resulting open access movement is a kind of logical consequence.
The situation has much in common with the interaction between humans and other tools, for example a hammer. If somebody has bought a hammer, he will look for situations in which he can use it. And if somebody has installed the LaTeX package on his workstation, the next thing he will do is write a simple hello-world academic paper.
Is C the optimal language for programming the C64?
The good news is that the problem of identifying the ideal programming language can be reduced to a choice between two: assembly vs. C. It is pretty easy to show that C code runs slower and needs more RAM than hand-written assembly code. A typical example is a prime number generator.
The more interesting question is exactly how much slower C code runs. Somewhere on the internet it was written that without cc65 optimization techniques the C code runs about 5 times slower. At first glance this speaks for replacing C with assembly. But a closer look shows that a slowdown by a factor of 5 is not very much. Suppose there is an algorithm which needs 60 seconds in C; rewritten in assembly it would need only 12 seconds. In both cases the algorithm won't run in real time and the user has to wait in front of the machine.
The main concern with assembly language is that the source code isn't readable. Even if the programmer uses lots of comments, the code looks messy. This is perhaps one of the reasons why C replaced assembly coding.
Suppose the hardware is a bit faster than the original 6502 and some optimization techniques in the compiler are activated; then the chance is high that the C code will be only 2 times slower than its assembly language counterpart. This slowdown is acceptable because the code is much easier to read.
Let us make the situation more practical. A naive prime number generator works with two nested loops. Such an algorithm will run around 5 times faster in assembly than in C. But a more advanced sieve algorithm runs much faster no matter which programming language it is implemented in. That means the sieve prime number generator written in C will easily outperform the nested-loop algorithm written in assembly language.
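The claim that the algorithm matters more than the language can be checked with a small experiment. The sketch below compares trial division with the sieve of Eratosthenes in Python; it only illustrates the growing gap between the two algorithms and makes no attempt to reproduce C64 or cc65 timings.

```python
# Compare a naive trial-division prime finder with a sieve of Eratosthenes.
# The point is the algorithmic gap, not the absolute numbers: the sieve wins
# by a growing factor regardless of the language it is written in.
import time

def primes_naive(limit):
    """Trial division with two nested loops."""
    primes = []
    for n in range(2, limit):
        is_prime = True
        for d in range(2, n):
            if n % d == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

def primes_sieve(limit):
    """Sieve of Eratosthenes."""
    flags = [True] * limit
    flags[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            for multiple in range(n * n, limit, n):
                flags[multiple] = False
    return [n for n, is_p in enumerate(flags) if is_p]

LIMIT = 10000
start = time.time()
naive = primes_naive(LIMIT)
t_naive = time.time() - start

start = time.time()
sieve = primes_sieve(LIMIT)
t_sieve = time.time() - start

assert naive == sieve
print(f"naive: {t_naive:.3f}s  sieve: {t_sieve:.3f}s  primes below {LIMIT}: {len(naive)}")
```

On a typical PC the sieve finishes in a few milliseconds while the naive version needs on the order of a second, and the gap keeps growing as the limit increases.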
The problem with coding in assembly language is that it is difficult to write longer and more complex algorithms. Implementing a sieve algorithm in assembly is an advanced project; even an experienced programmer will need many hours to do so.
It is a known fact that during the 1980s the C language was not very popular on the Commodore 64. The demos were written in pure assembly code; this is known because the source code of many demos is available and it is plain 6502 assembly. But what if a demo competition had the constraint that the code has to be written in C?
For some time now the C64 community has been rediscovering the cc65 cross compiler and is trying to use this environment to write games from scratch. The resulting games don't look as impressive as the assembly demos, but the code is easier to read and takes less effort to write.
The quality of these games is low; they look like early C64 games from around 1984. A list of games written with cc65 is available at https://wiki.cc65.org/doku.php?id=cc65:cc65-apps
An interesting side question which remains unanswered in this blog post is whether Forth can outperform the C language on the C64. The problem with Forth is that most existing Forth systems are only interpreters; they do not convert the code into assembly instructions, so the resulting program runs slowly. What is known from the MS-DOS PC is that compiled Forth code can reach the same speed as compiled C code.[1] That means compiled Forth code will still run slower than hand-coded assembly.
[1] Almy, Thomas. "Compiling Forth for performance." Journal of Forth Application and Research 4.3 (1986): 379-388.
June 16, 2021
Comparing Google scholar with Microsoft academic
Coach based artificial intelligence
Limits of automation
What if voice commands don't work?
What's the problem with self-driving cars?
Autonomous trains
Why are robots not available?
Kitchen robot
Artificial demonstrations
What is a robot kitchen?
Automation market worldwide
A positive future
June 11, 2021
Why larger projects in Python make no sense
As far as the language standard is concerned, Python is excellent for realizing larger projects. The Python virtual machine is sufficiently robust, Python's built-in facilities for object-oriented programming are exemplary, and the module concept allows classes to be aggregated into packages. Technically, projects with 100k LoC or even more can certainly be realized in Python. There is only one problem: who wants to use these programs? End users usually steer well clear of GUI applications written in Python, and system programmers will certainly not link against libraries written in Python; the low performance alone speaks against it. Python is reminiscent of the fate that befell Turbo Pascal: it is a teaching language in programming education but is not used for real projects.
The language as such is exemplary: Python is very elegantly designed, and productive source code can be written in it, in the sense that for simple tasks like sorting an array you don't have to ask for advice in forums for days; you simply use the pythonic way of doing things. But let us imagine how this looks in reality. You write down your elegant Python program, it consists of 12000 lines of code and of course uses several classes, and then what? In theory the script is now executable everywhere, but who actually wants to run it on their machine? The problem with Python is that it is just another programming language in a very specific niche (beginner-friendly and interpreted), and the code created with it will certainly not flow into larger projects. Strictly speaking, one can only pity Python programmers, because nobody else wants their beautiful programs. Java programmers will certainly not include a Python library in their project, and neither will C programmers. With a bit of luck the library can be placed in the PyPI repository, but that's about it. It is by no means a coincidence that there are few large, well-known Python projects with more than 10k LoC. As said, technically it works excellently, but unfortunately the world outside of Python is much more critical about such things.
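As a tiny illustration of the "pythonic way" mentioned above: sorting really is a one-liner. The list used here is just made-up sample data.

```python
# Sorting the pythonic way: no hand-written loop, just the built-in sorted().
scores = [42, 7, 19, 3, 88]           # made-up sample data
print(sorted(scores))                  # [3, 7, 19, 42, 88]
print(sorted(scores, reverse=True))    # [88, 42, 19, 7, 3]
```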
I am not sure whether Guido van Rossum did the world a favor when he invented the language. At first glance Python has many advantages: it is not aimed explicitly at computer scientists but at researchers from fields like physics, linguistics and history. Furthermore, as an interpreted language it is designed for short edit-compile-run cycles, which lets you write a lot of code in a short time. Strictly speaking, Python pushed into a gap for which no language existed before. But can it really be the goal to keep adding to the estimated 500 programming languages and thereby deepen the split among developers? Isn't it enough that Java and C# programmers work against each other? Do we need yet more languages besides PHP, Go and Perl? Python has even managed the rare feat of being incompatible with itself: as is well known, Python 3 programs no longer run on a Python 2 interpreter, and the PyPy project, although a JIT compiler, cannot handle all the libraries from CPython. Somehow Python is a world of its own which thrives splendidly in the university environment and which leads people to waste their time. There is no other way to put it when resources are invested in building Python source code.
EXAMPLE
An einem kleinen Beispiel möchte ich das Thema vertiefen. Früher habe ich schön mit pygame Spiele programmiert. Das geht wunderbar einfach, und mit erstaunlich wenig Sourcecode. Man fängt einfach oben an mit “import pygame”, aktiviert das Fenster, und schon kann man seine erste Box auf den Bildschirm zaubern. Jetzt wo ich nicht pygame nutze, sondern in C++ mit SFML das Spiel realisiere ist es deutlich aufwendiger. Man muss sich durch Manuals auf English wühlen, es gibt für alles mindestens 4 Möglichkeiten und mehr Sourcecode benötigt man auch. Für den Computer macht es keinen Unterschied. In beiden Fällen sieht man eine GUI in der etwas angezeigt wird, und beidesmal mit ruckelfreien 60fps. Der Unterschied liegt in der Community die hinter der Sprache steht. Projekt-1 wendet sich an die Python Community, also an Nicht-Informatiker, während Projekt-2 sich an C++ Programmierer richtet. Die Community unterscheiden sich im Anspruch an sich selbst. C++ Programmieren tönen lautstark dass sie die besten Programmierer der Welt seien und demzufolge haben sie auch den Ehrgeiz die besten Programme des Universums zu schreiben, während es in der Python Community sehr viel entspannter zugeht, in dem Sinne dass man sich gegenseitig versichert Anfänger zu sein und überhaupt sich eher mit mit inhaltlichen Dingen und weniger mit Programmieren beschäftigt. Damals in Python war meine Produktivität immerhin bei stolzen 10 Zeilen Code am Tag, jetzt mit C++ in SFML ist sie abgesunken auf 5 Zeilen täglich. Dadurch verdoppelt sich natürlich die Zeitdauer bis das Projekt fertig ist.