
Pagespeed

After having removed all external resources from this blog, I was curious to see how well it fares in terms of performance. See for yourself:

I was pleasantly surprised, since all I did otherwise to improve performance was to add a few file types to those that are subject to gzip compression, and a few more to those that the client should cache for some time. Getting a well-performing website is actually not as terribly difficult as I previously believed (I hacked together a blog software in C, with dietlibc and libowfat, running under gatling and with a tinyldap backend.)

The number one reason to use Linux

A reader of my previous post remarked that while he found my reminiscences amusing, he believes them to be factually incorrect. Windows and MacOS, he insists, are not prone to crashes as I've described, but are just as stable as any Linux.

That's more or less true, but only for times more recent than those I've dwelled on, namely, the times prior to the release of MacOS X and Windows XP in 2001 (and yes, I knew the older versions of NT and was impressed by their stability, but I didn't like their price tag!).

But if recent versions of MacOS and Windows do not exhibit any deficiency in terms of stability, the reader asks, is there any other reason to prefer Linux?

One? Well, let's at least briefly discuss some of the aspects that led me to choose Linux over Windows or MacOS. GNU/Linux is free software, both as in speech and as in beer. Contrary to the Free Software Foundation, I'm pragmatic enough to appreciate the latter point at least as much as the former one. Had I decided to employ commercial software for my work, I would not have been able to equip desktop, notebook, and netbook with the complete set of applications my work requires. As it is, I'm fully equipped to work at the office and at home, at a pub, or while travelling, without the need to carry a high-end notebook back and forth between all of these locations. I greatly enjoy this freedom.

But free software is not a phenomenon restricted to Linux. In fact, almost everything I really need for my work on an everyday basis is available for Windows as well. So what's the difference?

The difference is the effort it takes to get everything working. On Windows, software installation hasn't changed since the 1990s. Software is installed from any installation medium deemed trustworthy by the user or, more recently, by Microsoft. In the 1990s, freeware was installed from CDs acquired at computer shops; nowadays, from any place on the internet. The user usually 'googles' for the software, opens the first hit without checking whether it is the developers' site or some obscure download portal, downloads the installer, and installs it by clicking through various 'next' and 'yes' buttons without paying attention to the browser toolbars and other potentially unwanted utilities installed at the same time. The user repeats this awkward procedure for every piece of software to be installed. And that's just the beginning: for the vast majority of both free and commercial software, the user has to check for updates himself, and if there is an update, he has to go through the same archaic, error-prone, and time-consuming routine again.

That's because the Microsoft updater only updates Windows and software from Microsoft (such as Office), but nothing else. This update is scheduled to occur only once per month (on Patch Tuesday), but is so slow and heavy on resources that every Windows user dreads this day. No wonder Windows users hate updates – but it never ceases to surprise me that they don't draw the obvious conclusion. In particular since this situation is not a mere inconvenience, but inevitably results in unpatched systems that are easily compromised. With potentially unwanted consequences. ;)

The game-changing feature of Linux compared to its commercial rival(s) is the existence of central software repositories, containing tens of thousands of individual program packages, combined with a powerful built-in package management system. Updating the system updates every single program installed on it. Security updates are not held back but are delivered in time, typically even before you read about the underlying vulnerability in the news—at least when using the big six. And installing the updates is usually a matter of seconds to (at most) minutes and does not require a reboot (with the exception of libc and kernel updates).
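
To make this concrete, here is what "updating every single program installed on the system" looks like in practice. These are the standard commands of the two package managers I use; nothing here is specific to my setup:

# Arch: refresh the repositories and upgrade every installed package in one go
sudo pacman -Syu

# Debian: the same thing with apt
sudo apt update && sudo apt full-upgrade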

But that's not the only advantage of the GNU/Linux software distribution model. For example, one can transform a vanilla Linux installation into a dedicated workstation within minutes, and I find that immensely useful. That's because I need a number of applications for my work, including texlive, libreoffice, gnuplot, numpy, scipy, matplotlib, mpmath, gmpy, sympy, pandas, seaborn, hyperspy, gimp, inkscape, scribus, gwyddion, imagej, vesta, and perhaps a few more. On Arch Linux, I'm able to install all of these packages with a single command that takes me a few seconds to enter. And from that moment on, all of these applications are included in the update process, and I get the most current version available automatically. On Windows, I would not only have to search for these programs on the internet, manually download the installation files, and install all programs individually, but I also would have to update all of this software by myself, without any kind of automatism. That means once again visiting the respective websites, downloading the new installation files, ...
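
For illustration, the single command looks roughly like this. This is a sketch only: the exact Arch package names are my best guess, and a few of the programs mentioned above (e.g. hyperspy, imagej, and vesta) live in the AUR rather than in the official repositories and need an AUR helper or makepkg instead:

sudo pacman -S --needed texlive-most libreoffice-fresh gnuplot gimp inkscape scribus gwyddion \
    python-numpy python-scipy python-matplotlib python-mpmath python-sympy python-pandas python-seaborn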

The initial installation alone is a day's work. Do you really think I want to keep doing that for the rest of my life? As a matter of fact, I don't, and neither does anybody else. And as a consequence, you'll find plenty of outdated software on your average Windows installation. This situation is arguably the major reason why Windows is the preferred target of malware developers: they can simply rely on the fact that their drones will find a huge number of vulnerable targets.

When I set up my wife's gaming rig, where Windows serves as a game launcher, I decided to install and update the few applications she needs with Ninite, and I'm not unhappy with this poor man's package manager. I also installed Secunia's Personal Software Inspector to detect outstanding updates. Alas, this very useful tool has been discontinued, and I have to find a replacement. There are several contenders, but as usual under Windows, it is not clear which of them are trustworthy, in the sense of which of them actually do the job advertised instead of merely being a vehicle to deliver ads to your desktop. It takes time to separate the wheat from the chaff, time I could have otherwise spent on much more valuable and enjoyable activities. That's Windows, the greatest productivity killer ever invented.

Perhaps I should have a look at Chocolatey, which claims to be a “real” package manager for Windows (for a comparison with Ninite, see here). Now wouldn't that be the perfect tool for Windows, which has been marketed since version 10 as a service receiving regular updates that contain new features and other changes? Sure it would, but what does Microsoft do with this chance to establish Windows as a rolling release? They stick to Patch Tuesday and roll out new features in the form of semiannual 'Creator Updates', which turn out to require a full upgrade installation. Home users can't even delay this upgrade.

That's ... well, pathetic. In the end, Windows in 2020 will be no different from Windows in 1994: a pile of duct-taped debris desperately trying to look like a modern operating system.

General Data Protection Regulation

The new European data protection law (the GDPR, or DSGVO in German) will come into effect exactly 12 days from now. Time to act! I basically had the choice between writing a privacy policy detailing over several pages why I definitely need to use all these external services, and finding alternatives. Well, since I very much prefer technical solutions over legal ones, the choice wasn't too difficult. ;)

I've used Google Analytics mainly for two reasons: first, I like to look at aggregate statistical data, and second, I don't see the evil in it. The user can block this external service in countless ways, for example, by simply rejecting the corresponding cookie, or by installing a script or ad blocker. Instead of creating a monstrous legal construct such as the GDPR, one should rather educate users to take care of themselves again. But the data protection industry only exists because these things are massively blown out of proportion, and of course it serves its own interests. They like to see users as drooling rugrats, and they like to keep them that way. Well, that's the general trend of all western societies.

../images/ct_schlagseite_2018_11.jpg

c't Schlagseite by Ritsch+Renn in c't 11/2018

Anyway, since I recently integrated a blacklist into my local DNS resolver, it's actually not that simple anymore to access the Google Analytics domain. :D Furthermore, I'm interested in how many of my readers actually block Google Analytics. And most importantly, I thought it would be a good idea to take the opportunity to get rid of this proprietary garbage. :)
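
Just to illustrate the DNS-based block, a minimal sketch. I'm assuming unbound as the resolver here; with dnsmasq or others the syntax differs, and the file location depends on where your unbound picks up its configuration:

# e.g. /etc/unbound/unbound.conf.d/analytics.conf: answer NXDOMAIN for the analytics domain and all its subdomains
server:
    local-zone: "google-analytics.com" always_nxdomain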

I thus searched for local web log analyzers and came across several, but took an immediate liking to goaccess. It's easily installed and even easier to use:

# echo "deb http://deb.goaccess.io/ $(lsb_release -cs) main" | tee -a etc/apt/sources.list.d/goaccess.list
# wget -O - https://deb.goaccess.io/gnugpg.key | apt-key add -
$ sudo apt update
$ sudo wajig install goaccess

Goaccess comes preconfigured with the log formats of the most popular web servers, but for Hiawatha, our web server of choice, manual configuration was required. Fortunately, I found all relevant information in the Hiawatha forum:

time-format %T %z
date-format %a %d %b %Y
log-format %h|%d %t|%s|%b||%r|%v|%u|%^|%^|%^|%R|%^|%^

Since the reports of goaccess are generated directly from the access logs of the web server, I anonymized the IP addresses by activating the

anonymizeIP = yes

option in /etc/hiawatha/hiawatha.conf.

Finally, I configured goaccess to automatically generate HTML reports. Logrotate takes care of periodically deleting the logs (which don't contain personal information anyway). The reports, by the way, show that at least 75% of all visitors of this site block Google Analytics. Excellent! Also, I'm happy to see that I have four times as many readers as I thought. ;)
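
The report generation itself boils down to a one-liner that can be run from a cron job or a systemd timer. A sketch only; the log path is an assumption about my setup, and the date/time/log formats shown above live in goaccess' configuration file:

# render the (already anonymized) access log into a static HTML report
goaccess /var/log/hiawatha/access.log -o report.html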

The next task was a local installation of the Google fonts I'm using. That was very quickly done by cloning the bash script 'best-served-local' by Ronald van Engelen and running it:

git clone https://github.com/ronalde/best-served-local
cd best-served-local
./best-served-local -i ../static/fonts "Kreon:300,400,700" > ~/temp/fonts/kreon.css
./best-served-local -i ../static/fonts "Fira Mono:400,700" > ~/temp/fonts/fira.css

The CSS snippets thus created have to be included in the main CSS file of the blog's theme, which in my case is a modified bootstrap. The fonts themselves go into ../output/assets/static/fonts (which has to be consistent with the argument of the command-line parameter -i above).
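
For reference, each of the generated snippets is essentially a set of @font-face rules pointing at the local font files, roughly along these lines (a sketch only; the actual file names and src entries are whatever best-served-local produces):

/* one weight of Kreon, now served locally instead of from Google's servers */
@font-face {
  font-family: 'Kreon';
  font-style: normal;
  font-weight: 400;
  src: local('Kreon'),
       url('../static/fonts/kreon-regular.woff2') format('woff2'),
       url('../static/fonts/kreon-regular.woff') format('woff');
}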

Next, I wanted to get rid of the dependency on an external resource for MathJax. I first struggled with a local installation of KaTeX, but couldn't get it to work. MathJax, in contrast, was very easy. I simply installed libjs-mathjax and fonts-mathjax on pdes-net.org, and copied the files to a location accessible to the web server

cp -r /usr/share/javascript/mathjax/ /var/www/hiawatha/cobra/output/assets/static/mathjax/

and back to my local blog installation

scp -r cobra@netcup:/var/www/hiawatha/cobra/output/assets/static/mathjax/ /home/cobra/ownCloud/MyStuff/Documents/pdes-net.org/output/assets/static/

Finally, the Nikola configuration file had to be updated correspondingly:

<script src="../assets/static/mathjax/MathJax.js?config=TeX-AMS_SVG"></script>

As for the search box on this site, it already works on the client side by virtue of tipuesearch. So, I was done!

Ah, not entirely: I still had to update my contact page. After doing that, I also moved the link to the bottom of the page, as is customary on most sites.


Well, when I look at the result, I'm actually quite pleased. I feel that I have really done something for the good of my visitors, instead of continuing to act as a data collector for Google and others, and justifying that by tons of legal mumbo-jumbo in a privacy policy nobody reads or understands. But IANAL, and it is likely that this particular species actually prefers the latter, no matter what the GDPR says about transparency and plain language. Let's see.

Server speed

An important feature of a server is the speed of its internet connection, or more precisely, its latency and bandwidth. How can we measure these quantities if all we have is command line access?

Regarding latency, look at my last post. Here's an example from pdes-net.org with the fastest mirror:

± netselect -vv https://mirror.de.leaseweb.net/debian/
Running netselect to choose 1 out of 1 address.
............
https://mirror.de.leaseweb.net/debian/ 2 ms   6 hops  100% ok (10/10) [    3]

If you want a closer look at the 6 hops, use mtr-tiny.

Concerning bandwidth, use speedtest-cli:

sudo wajig install speedtest-cli

Here's an example from pdes-net.org:

± speedtest
Retrieving speedtest.net configuration...
Testing from netcup GmbH (185.170.112.87)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by IT Ohlendorf (Salzgitter) [111.93 km]: 11.215 ms
Testing download speed.................................................
Download: 427.58 Mbit/s
Testing upload speed...................................................
Upload: 393.16 Mbit/s

Hm. A ping of 2 ms and a symmetric down- and upload of roughly 0.4 Gbit/s for a handful of € per month? Why can't I have that at home?

HTTPS mirrors

And while we're at it, let's also configure HTTPS mirrors for our package updates. That may seem superfluous at first glance, as these packages are public content and are signed with the private GPG keys of the developers, certifying their authenticity. However, the signatures are only one part of the story, and the encrypted transfer is the other. In fact, updates fetched via a plain-text HTTP connection can be subverted by replay attacks despite valid GPG signatures. A comprehensive analysis of this scenario was done a decade ago at the University of Arizona. The following brief summary is due to Joe Damato:

“Even with GPG signatures on both the packages and the repositories, repositories are still vulnerable to replay attacks; you should access your repositories over HTTPS if at all possible. The short explanation of one attack is that a malicious attacker can snapshot repository metadata and the associated GPG signature at a particular time and replay that metadata and signature to a client which requests it, preventing the client from seeing updated packages. Since the metadata is not touched, the GPG signature will be valid. The attacker can then use an exploit against a known bug in the software that was not updated to attack the machine.”

The distributions I'm currently using and am familiar with (Archlinux and Debian) do not use HTTPS mirrors by default, but can be coaxed into doing so.

That's particularly easy for Archlinux after installing reflector, a python script that allows filtering mirrors by various criteria such as their geographical location, up-to-dateness, download rate, and, last but not least, connection protocol. The following one-liner overwrites /etc/pacman.d/mirrorlist with the current top ten of all HTTPS mirrors in terms of up-to-dateness, overall score, and speed:

reflector --verbose --protocol https --latest 100 --score 50 --fastest 10 --sort score --save /etc/pacman.d/mirrorlist

This command can be run manually, or automatically, either by using a pacman hook that triggers it whenever the mirrorlist package is updated, or by a timed systemd service. Reflector is thus a tool as flexible as it is convenient for selecting the optimum mirrors.
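
As an illustration of the hook variant, a minimal sketch (file name and location are my choice): it essentially re-runs the reflector command from above whenever the pacman-mirrorlist package is upgraded.

# /etc/pacman.d/hooks/mirrorupgrade.hook
[Trigger]
Operation = Upgrade
Type = Package
Target = pacman-mirrorlist

[Action]
Description = Updating the mirrorlist with reflector...
When = PostTransaction
Depends = reflector
Exec = /usr/bin/reflector --protocol https --latest 100 --score 50 --fastest 10 --sort score --save /etc/pacman.d/mirrorlist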

I expected Debian to offer something equivalent, but to my considerable surprise and disappointment, there's nothing coming even close. The netselect derivative netselect-apt is only capable of finding the ten fastest mirrors for the relevant release of Debian (i.e., stable, testing, or sid). But how do I know whether these mirrors support HTTPS? To the best of my knowledge, the only way short of trying them one by one is the python script available here (or one of its forks).

Armed with this script (I'm using the multithreaded variant), I usually just create a list with all mirrors, which I then pipe through netselect to find the fastest one:

± ./find-https-debian-archives.py --generic --no-err | awk '{print $1}' | grep https | tr '\n' ' ' | xargs netselect -vv

Oh, and one should not forget to install 'apt-transport-https' to allow apt to use HTTPS mirrors at all. Only true Debilians will find this situation tolerable.
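
For completeness, the final steps look like this. A sketch: the mirror is the one netselect picked above, and the suite in the sources.list entry of course depends on the installation at hand:

sudo apt install apt-transport-https
# then point the entries in /etc/apt/sources.list at the HTTPS mirror, e.g.
# deb https://mirror.de.leaseweb.net/debian/ testing main
sudo apt update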

Let's encrypt

Ever since I set up the new server for this blog, I've wanted to make the switch from plain HTTP to TLS-encrypted HTTPS (if you think HTTPS is for online shops and banks only, think again).

This transition turned out to be much easier than I thought. Hiawatha, our web server of choice, comes with a script that takes care of registering the site at Let's Encrypt and requesting certificates for the associated domain(s). Chris Wadge, the maintainer of Hiawatha for Debian, has provided an excellent tutorial guiding one through the few steps necessary to configure Hiawatha for serving HTTPS content.

Since I had to configure vhosts for the certificates anyway, I took the opportunity to set up some proper subdomains. For example, this blog can now be reached at https://cobra.pdes-net.org.

After a bit of tweaking (setting HSTS to one year), the security rating of our site is flawless:

../images/qualys.png

Intel microcode updates

Intel has offered an updated microcode data file since the 8th of January. According to Heise, these updates are exclusively devoted to Spectre and to CPUs newer than 2013, Meltdown being taken care of by kernel updates, and older CPUs being (hopefully) the subject of subsequent microcode updates.

To examine and eventually apply these updates, they have to be downloaded first:

Arch:

sudo pacman -S intel-ucode

Debian:

sudo apt install intel-microcode

Debian automatically updates the initrd, but on Arch, one has to update the bootloader as described in the wiki. Prior to doing so, one should check whether the updated microcode file actually holds updates for the CPU in use at all. In agreement with Heise's report, there's no update for my Ivy Bridge Xeon:

➜  ~ bsdtar -Oxf /boot/intel-ucode.img | iucode_tool -tb -lS -
iucode_tool: system has processor(s) with signature 0x000306a9
microcode bundle 1: (stdin)
selected microcodes:
  001/138: sig 0x000306a9, pf_mask 0x12, 2015-02-26, rev 0x001c, size 12288

But the Haswell i7 at work is destined to receive one:

➤ bsdtar -Oxf /boot/intel-ucode.img | iucode_tool -tb -lS -
iucode_tool: system has processor(s) with signature 0x000306c3
microcode bundle 1: (stdin)
selected microcodes:
  001/147: sig 0x000306c3, pf_mask 0x32, 2017-11-20, rev 0x0023, size 23552

After a reboot, it is easy to check whether an update of the microcode has taken place or not:

➜  ~ dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x1c, date = 2015-02-26
[    0.652031] microcode: sig=0x306a9, pf=0x2, revision=0x1c
[    0.652284] microcode: Microcode Update Driver: v2.2.

Same as before.

➤ dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x23, date = 2017-11-20
[    0.552077] microcode: sig=0x306c3, pf=0x2, revision=0x23
[    0.552404] microcode: Microcode Update Driver: v2.2.

Indeed, a new one!

And what does the update do? Am I now immune to both Meltdown and Spectre on the Haswell system?

According to the 'Spectre & Meltdown Checker', the update actually has very little effect. Here's the result on my Xeon:

$ ./spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.29

Checking for vulnerabilities against running kernel Linux 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64
CPU is Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
- Checking count of LFENCE opcodes in kernel:  NO
> STATUS:  VULNERABLE  (only 21 opcodes found, should be >= 70, heuristic to be improved when official patches become available)

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
- Mitigation 1
- Hardware (CPU microcode) support for mitigation:  NO
- Kernel support for IBRS:  NO
- IBRS enabled for Kernel space:  NO
- IBRS enabled for User space:  NO
- Mitigation 2
- Kernel compiled with retpoline option:  NO
- Kernel compiled with a retpoline-aware compiler:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
- Kernel supports Page Table Isolation (PTI):  YES
- PTI enabled and active:  YES
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

And here's the Haswell. Note Spectre 2.

$ ./spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.29

Checking for vulnerabilities against running kernel Linux 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64
CPU is Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
- Checking count of LFENCE opcodes in kernel:  NO
> STATUS:  VULNERABLE  (only 21 opcodes found, should be >= 70, heuristic to be improved when official patches become available)

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
- Mitigation 1
- Hardware (CPU microcode) support for mitigation:  YES
- Kernel support for IBRS:  NO
- IBRS enabled for Kernel space:  NO
- IBRS enabled for User space:  NO
- Mitigation 2
- Kernel compiled with retpoline option:  NO
- Kernel compiled with a retpoline-aware compiler:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
- Kernel supports Page Table Isolation (PTI):  YES
- PTI enabled and active:  YES
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

Don't see the difference? Look again at 'Hardware (CPU microcode) support for mitigation'. That's all, yes. Kind of sobering, I agree. Please blame Intel, not me.

Update: After reading this article, I understand that the microcode update only prepares the ground for the actual patch, which will come with kernel 4.15 and later versions. I'll check again then, of course after updating the 'Spectre & Meltdown Checker' (simply by pulling the latest version via 'git pull origin master').

Meltdown patch available for Arch

If you haven't heard of Meltdown and Spectre, it's about time you did. Since yesterday, all newspapers and even TV have provided extensive coverage of recently discovered vulnerabilities of modern CPUs potentially resulting in a leak of sensitive data. While Meltdown seems to primarily affect modern Intel CPUs, Spectre also applies to AMD and ARM chips. The scale of these vulnerabilities is not only unprecedented, it's historic.

The KPTI (formerly KAISER) patch developed at TU Graz defeats Meltdown. The patch is part of the upcoming Linux kernel 4.15 and has already been backported to 4.14.11.

Which brings me to the good news for Archers like myself: kernel 4.14.11 has been available since yesterday, 8:13 CET. Spectacular work from upstream, but also from the Arch team! No new microcode, though – the currently available one is still from the 17th of November.

CentOS just provided patches as well. There's nothing from Debian yet, however. :(

Oh, and I've just received a mail from the hoster of pdes-net.org. Good to see that they reacted at once.

What a great start to 2018. Well, regardless, happy new year to all of you. ;)

Update: An in-depth analysis of the mechanisms behind Meltdown and Spectre can be found in an online article (in German) written by the legendary Andreas Stiller (who, most unfortunately, retired at the end of 2017).

Genealogy

My first Linux was Redhat 2.0, installed on a Pentium 90 from a CD attached to a magazine entitled “Linux: ein Profi-OS für den PC” (“Linux: a professional OS for the PC”), which I had purchased for 9.99 DM at Karstadt in December 1995. I was mesmerized: running a Unix system on my PC, not unlike the Solaris I had used before on a Sun workstation (which was entirely out of reach financially), was a revelation. Soon after, I acquired the “Kofler” (2nd edition), which included a CD with Redhat 3.0.3.

Why was I so interested in Linux? Of all the operating systems I knew, Solaris was the only one I found a pleasure to work with. DOS was stable and reliable, but much too limited, and MacOS and Windows appeared to me as demonstrations of the various ways a computer can crash rather than as operating systems.

I used MacOS 6 on a Macintosh II from 1992 to 1994 in Japan and learned to thoroughly despise this caricature of an operating system. I sometimes felt that I spent more time looking at the bomb than doing anything useful. When Apple launched the switch campaign a decade later, the frequent crashes of MacOS and its bizarre error messages were already legendary.

I returned to Germany in 1994 and had great hopes for Windows, which I'd heard about from a guy working for FutureWave Software, the company that developed the precursor of Shockwave Flash. Well ... the much touted Windows turned out to be DOS with an amateurishly designed GUI, which was prone to surreal crashes that occurred spontaneously, without any apparent reason.

Before one could enjoy these magic moments, one had to install the whole caboodle. And that meant, of course, installing first DOS 6.22 (which came on four 3½-inch floppy disks) and then Windows 3.11 (eight 3½-inch floppy disks). If you're too young to know what that means, listen to the sound of computing in the 1990s.

Redhat, in contrast, came on a CD, which in itself seemed to reflect the technological supremacy of this OS over its commercial cousin. This impression, however, turned out to be nothing but a delusion: the installation procedure could only be started from DOS! The installation itself required intimate knowledge of the hardware components of the computer and their IRQ numbers and IO addresses. Ironically, the easiest way to get this information was an installation of Windows on the same computer.

What made the installation even more difficult was my plan to realize a dual boot configuration—Windows for the games, Linux for LaTeX. In fact, the typesetting suite was one of the main reasons for my interest in Linux, because it was an integral part of the distribution at that time. I had just installed LaTeX on Windows on my computer at the office, and after an entire day and a seemingly endless sequence of floppy disks, I realized that I didn't want to do that again.

After struggling with a number of difficulties, I managed to set up my dual boot system. Encouraged by this success and the pleasant user experience, I installed a variety of distributions in the years to come, and found the installation to become easier and easier with every year. Installing Mandrake Leeloo in 1998 on a brand-new Pentium II 266 was way easier than installing Windows 98. In 2001, HAL was still science fiction, but we had computers every dumbo could handle.

At least that was my impression. Ubuntu, a Debian derivative, materialized in 2004 and was touted as the first Linux distribution a normal user would be able to install and use. The Ubuntu hype has been unbroken since, and in many mainstream media, Ubuntu has become synonymous with Linux. In recent years, Ubuntu has been superseded by Mint in terms of popularity. It seems that the masses always choose unwisely.

But what is a good choice? And how should a beginner choose from the 305 distributions listed on Distrowatch?

Well, let's start with the second question. The situation is actually much less confusing than it seems at first glance. As a matter of fact, we do not face 305, but just about a dozen independent Linux distributions, and the rest are offshoots. Wikipedia has a comprehensive article about this subject, and the fantastically detailed timelines visualize the historical development most beautifully. The comparison of Linux distributions is another illuminating article.

For simplicity, let's project this development onto a one-dimensional time axis. These are the originals (together with popular derivatives):

Slackware (July 1993): Porteus, SalixOS, Slax, Vector, Puppy, (SUSE)
Debian (September 1993): Ubuntu, Mint, ElementaryOS, Grml, Knoppix, SteamOS, Damn Small, Puppy, ...
Redhat (October 1994): CentOS, Mandrake/Mandriva/Mageia, Scientific, Fedora, Qubes
SUSE (May 1996): OpenSUSE
Gentoo (July 2000): Sabayon
Archlinux (March 2002): ArchBang, Antergos, Chakra, Manjaro

Is that really all there is? Well, these are the big six. There are some notable newcomers:

CRUX (December 2002), Alpine (April 2006), Void (2008), and Solus (December 2015)

The first three are technically markedly different from the mainstream distributions, and are definitely not aimed at beginners. All right, all right...which one of the big six is aimed at beginners?

None, of course. What do you think? That back then anybody in his right mind developed primarily for noobs? Hell, the word had not even been coined yet, since the whole category of people who could be labeled as noobs did not exist. The world wide web, which would give birth to a generation that watches videos to learn how to boil eggs, had only just been invented. Incredible as it sounds, there was no Google, no Youtube, no Twitter or Facebook. Watching a video at the bitrate of 1993 modems (14.4 kbit/s) would only have worked in ultraslow motion anyway (1 s stretched to 5 min). In any case, personal computers and their operating systems were perceived as a revolution in user friendliness compared to what had existed before, and people were willing to acquire the skills it took to operate them.

Developing Linux for noobs is a decidedly modern phenomenon, invented by a visionary South African billionaire in the hope of becoming the 21st century's Bill Gates. Indeed, Mark Shuttleworth was the first person who tried to market Linux. He did that in a remarkably effective way by appealing to first-world people's natural sentimentality: “Ubuntu is an ancient African word meaning ‘humanity to others’.” Hardened Linux veterans like me reacted to this campaign in a rather unfavorable way, I'm afraid:

Ubuntu is an ancient African word meaning 'I can't configure Debian'.

And now to the first question: what is a good choice? As I've stated in a previous post, I generally do not like to make recommendations – people's qualifications, needs, and preferences are just too diverse. However, I can tell you what criteria are important for me and what I have consequently chosen to work with.

  1. I'm not willing to make any compromise regarding security. The distribution I use must have a dedicated security team and a dedicated security advisory system. That excludes the majority of pet-project and showcase distributions derived from one of the big six.
  2. Many thousands of useful programs exist in the open-source world. I want as many of them as possible to be easily accessible in central repositories managed by the distribution. A clearly defined core subset should be officially supported. Situations as in Ubuntu (and derivatives), where no one knows a priori what is supported and what is not, are unacceptable.
  3. New software versions should be available days after they are provided upstream, not months or years. I do not have the patience to wait several months for a bugfix because of six-month release cycles or similar nonsense. That leaves only rolling-release distributions such as, most prominently, Arch, Gentoo, Debian Testing and Debian Sid, Fedora Rawhide, and openSUSE Tumbleweed.
  4. Last but not least: I want to invest as little time and effort in my computer installations as possible. They should run smoothly and function as expected.

I've thus almost inevitably arrived at the following constellation:

Desktop Home: Arch
Desktop Office: Arch
Notebook: Arch
Netbook: Debian Sid
Server: Debian Testing
Compute Servers: Debian Testing, CentOS [1]

In addition to all these physical systems, I also have various installations of Debian Testing, Debian Sid, Arch, and CentOS as virtual machines. Oh, and, before I forget: there's also a lonely Windows 7, which is about as troublesome as all of the above together. No, I'm not kidding. Just the regular monthly update takes an hour.

In any case, those are the distributions I'm using. What can you learn from that, if you are a noob? Just a few basic things, perhaps. First of all, it's good to know what you really want. And then, it's good to act accordingly, no matter your level of noobishness. ;)

[1] The CentOS compute server is now administered by Jonas (thx a bunch!), which is not to be understood as a statement against CentOS. On the contrary, I think that CentOS is an extremely capable and convenient server (!) OS.

Quality journalism

c't 25/2017. A test of the new iPhone X entitled “Für die nächsten 10 Jahre” (“For the next 10 years”). In the conclusion on page 55:

Only the iPhone X shows what a current smartphone should look like. [...] Face ID is a unique selling point over all others, and after a very short time you no longer want to be without it.

Same issue, on page 60: a test of the new OnePlus 5T entitled “Hohe Schlagzahl” (“High stroke rate”).

In addition, OnePlus builds in face recognition. It works in under half a second, could not be fooled by photos, and was not confused by glasses or caps.

It is well known that a significant percentage of the population, and apparently 100% of all journalists, suffer from a catastrophic failure of higher cerebral functions when confronted with products from Apple. But what's the reason for this distressing loss of self-control? Well, if you look at my previous post on Spiegel Online's review of the iPad, it is clear that this loss closely resembles the one seen in sexually overloaded situations such as mating rituals and reproductive scenarios, during which the male brain is fully occupied with sending messages to the pituitary gland to trigger the production of testosterone (which, incidentally, is also responsible for making the individual drool).

The inevitable conclusion of these observations is that Apple coats their products with certain pheromones acting as a highly effective sexual stimulus. Since the next Apple store is just a 10-minute walk away, I shall test this hypothesis myself. Should I develop the madness described above, please shoot me on sight.
