December 03, 2024

Should software be updated?

Before answering this controversial question, let us take a step back and describe the situation from a general perspective. Software development usually happens upstream, which in practice means a git repository. Multiple software developers commit changes to the repository frequently: they fix bugs, clean up the code and introduce new features. It is not possible to reinvent this workflow, because programmers submitting updates to a shared git repository is the industry standard.

The only open debate is about how frequently the upstream source code gets converted into binary, production-ready executables. This question remains unsolved, and it is the main reason why so many Linux distributions exist. The first distribution, Gentoo Linux, assumes that end users need to compile the code themselves because their needs are individual. The second distribution, Arch Linux, assumes that users want to update their software several times per week because frequent updates will keep the system stable. The third distribution, Debian, assumes that the end user runs a production server which is updated once a month and wants security updates only.
The simple question is: which strategy is recommended? Nobody knows, and because of this unclear situation there are many opposing Linux distributions and Linux users. In general there are two conflicting opinions: rolling release vs. stable release. Rolling release means increasing the update frequency; in the extreme case this amounts to installing new versions of the packages every day. The opposite ideology, stable release, means in the extreme case running an oldoldstable Debian system that hasn't been updated for months, with some major security updates skipped for individual reasons.
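On the stable-release side, the security-updates-only policy mentioned above is something Debian administrators typically automate with the unattended-upgrades package. As a hedged illustration (the exact file names and defaults can vary between Debian releases), a minimal configuration restricting automatic updates to the security archive looks roughly like this:

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Only pull packages from the Debian security repository.
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};

// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh the package lists and run the upgrader once per day.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

A rolling-release user, by contrast, would simply run the distribution's full upgrade command by hand whenever desired, for example `pacman -Syu` on Arch Linux.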
Even though computer scientists assume that the world can be described with yes and no, true and false, it is pretty difficult to say which of the update strategies is more secure and more recommended. What we can say is that, in reality, the stable release principle is applied more frequently. Most production servers on the internet run a stable release Linux distribution, not a rolling release system like Arch Linux or Debian Sid.
To understand the conflict we have to describe how an upstream release gets delivered as a downstream software package. Suppose on day 1 a new version of a piece of software is installed for the first time on a computer. On this single day, upstream and downstream are perfectly in sync: both layers run the same software. A few days later, the upstream git repository is modified for various reasons. One month later, the upstream repository has improved drastically because the developers kept changing the code. Unfortunately, the code on the downstream layer wasn't affected, so the layers run out of sync. The version upstream might be v1.05 while the version downstream remains v1.00.
Even without anyone doing anything, the out-of-sync situation becomes more serious over time. After 12 months, the downstream still runs v1.0 while the upstream has improved the code towards v2.0. This generates a conflict between the parties. The upstream argues that the downstream is in charge of updating to the new version, but the downstream sees no reason to do so and argues that it can't update because it is a production server with no downtime window.
Resolving the conflict isn't easy. In most cases, the downstream makes the decision. It can update the system at a certain frequency, for example each month, each year or never. It is not clear what the optimal frequency is, and most computers are updated at different intervals. The only thing that is certain is that a longer duration between two updates increases the out-of-sync gap between upstream and downstream.
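The drift described above can be made concrete with a toy model. The sketch below is not from any real distribution; it simply assumes, for illustration, that upstream ships one release per month and asks how many releases the downstream is missing just before its next scheduled update:

```python
# Toy model of upstream/downstream drift (assumption: one upstream
# release per month, downstream updates every `update_interval` months).

def versions_behind(months_elapsed: int, update_interval: int) -> int:
    """Releases the downstream is missing just before its next update.

    An interval of 0 (or less) models the "never update" policy.
    """
    if update_interval <= 0:
        return months_elapsed
    # The downstream last synced at the most recent multiple of the
    # interval, so the lag is the remainder of the division.
    return months_elapsed % update_interval

# A monthly-updated machine is back in sync at every sync point,
# while a yearly-updated server can lag up to 11 releases.
print(versions_behind(12, 1))   # rolling-style: 0 releases behind
print(versions_behind(11, 12))  # stable-style: 11 releases behind
```

The model is deliberately crude, but it captures the core trade-off: the longer the interval, the larger the worst-case gap that must be closed in a single, riskier update.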
