One does not simply exit vim

A few days ago, Stack Overflow hit a major milestone: the community has helped one million developers to exit Vim. If that isn't a reason for celebration, nothing is.

'Of course, these aren't real programmers, since those use something else entirely.'

Surely the mighty ed?

'Not even close.'

For those still interested in the grandfather of vi (aka the most user-hostile editor ever created): here's an excellent little tutorial. By the way, you can exit ed with the canonical quit command of Unix applications: q. Much more intuitive than vim!

I'll write about the editors I'm using (and why) in the near future.

TCAD station, part II

As a testbed for commercial TCAD software, we will use a standard desktop PC equipped with an i7-4790 CPU, 32 GB RAM, and an Nvidia GT 710 to connect the monitor via DVI. As a substitute for the Redhat Enterprise Linux (RHEL) required by all of the packages we're about to evaluate, I'm going to install CentOS.

I create a bootable USB stick by issuing

dd bs=4M if=CentOS-7-x86_64-Minimal.iso of=/dev/sdc status=progress && sync
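
Needless to say, the target device should be double-checked beforehand, since dd happily overwrites whatever it is pointed at. A minimal sanity check (device name as above):

lsblk -o NAME,SIZE,MODEL    # confirm that /dev/sdc is really the USB stick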

The first stick doesn't work, and it takes us perhaps half an hour to realize that it's the fault of the stick, not the system. A second one works right away. I manually partition the disk, as in my installation in a virtual machine, choosing btrfs as filesystem for both system and home partitions. I add a UEFI boot partition as well as a swap partition.

During the first boot from hard disk, the system throws an error message concerning nouveau (the open-source driver for the Nvidia graphics card) and subsequently hangs with a kernel soft lockup, signified by the repeated message that CPU#i is stuck for 120 s. After a hard reset, the system boots and behaves as expected, but when I boot again to activate the new kernel installed by yum, the system hangs again. Since the hangup is always preceded by the error message from the nouveau driver, we remove the Nvidia card and reboot. Now the system hangs without any message. Great! A cold boot (with the Nvidia card installed again) seems to help at first, but later the system hangs repeatedly and finally refuses to boot to the login screen at all.

What I thought would be done within an hour has already taken most of this Friday morning. Since the Nvidia card does not seem to be responsible for these problems, I start to suspect the btrfs filesystem, which Redhat still considers a technology preview. In the afternoon, I thus reinstall the system, this time choosing the default filesystem XFS. And indeed, while I still get the error message from nouveau, the system boots up without any of the previous symptoms. I go home with the conviction that I've solved the problem.

Saturday morning, curiosity gets the better of me and I decide to check if the system still runs. It does, but htop shows that the uptime is only 49 min instead of the expected 13 h. Weird! I check again Sunday morning, and again the uptime is less than an hour. At the same time, I notice that the filesystem usage seems to increase by roughly 1 GB per day. Aha! An 'll /var/crash' confirms my suspicion: the system crashes and dumps the kernel roughly every 2 or 3 hours.

Core dumps can be analyzed to determine their cause. Following the Redhat tutorial and issuing the command

crash /usr/lib/debug/lib/modules/3.10.0-514.16.1.el7.x86_64/vmlinux /var/crash/127.0.0.1-2017-05-01-07\:35\:53/vmcore

I get the following crash report:

      KERNEL: /usr/lib/debug/lib/modules/3.10.0-514.16.1.el7.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2017-05-01-07:35:53/vmcore  [PARTIAL DUMP]
        CPUS: 8
        DATE: Mon May  1 07:35:42 2017
      UPTIME: 08:27:43
LOAD AVERAGE: 0.05, 0.03, 0.05
       TASKS: 224
    NODENAME: testbed.de
     RELEASE: 3.10.0-514.16.1.el7.x86_64
     VERSION: #1 SMP Wed Apr 12 15:04:24 UTC 2017
     MACHINE: x86_64  (3600 Mhz)
      MEMORY: 31.9 GB
       PANIC: "Kernel panic - not syncing: Hard LOCKUP"
         PID: 0
     COMMAND: "swapper/6"
        TASK: ffff880174a9edd0  (1 of 8)  [THREAD_INFO: ffff880174ab8000]
         CPU: 6
       STATE: TASK_RUNNING (PANIC)
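
For completeness: neither the crash utility nor the kernel debug symbols providing the vmlinux above are installed by default. A minimal setup sketch (assuming the debuginfo repositories are enabled):

yum install crash yum-utils            # crash utility plus debuginfo-install
debuginfo-install kernel-$(uname -r)   # vmlinux with debug symbols for the running kernel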

This diagnosis lets me find the cause of the core dumps very easily: it's an unresolved bug reported in September 2016 and given high priority by the developers. The bug was found in version 7.2.1511 but has not been fixed even in the current version 7.3.1611. However, the bug reporter and others have identified the nouveau driver as the culprit. And indeed: after removing the GT 710 and rebooting, the system does not suffer from any further lockups and core dumps:

 ob@testbed:~$ uptime
19:00:08 up 17 days,  9:00,  2 users,  load average: 0.00, 0.01, 0.05
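
An alternative to physically removing the card would have been to blacklist the nouveau module. A sketch of the usual procedure (untested on this particular machine):

cat > /etc/modprobe.d/blacklist-nouveau.conf << EOF
blacklist nouveau
options nouveau modeset=0
EOF
dracut --force    # rebuild the initramfs so the blacklist takes effect at boot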

I have to admit that this experience caught me completely off guard. CentOS, as a binary-compatible clone of RHEL, has the reputation of being the very model of a conservative Linux distribution, and thus a paragon of stability and reliability. Consequently, I had expected outdated software, but not buggy implementations of core packages, nor critical bugs that remain open for 8 months without eliciting any response from a developer.

On second thought, however, I realize that I should not have been surprised at all. About ten years ago, we decided to switch our core servers from SUSE Enterprise Linux (SLES) to OpenSUSE since we were entirely frustrated with the support and bug-fixing policy of SLES, despite the fact that we paid Novell a handsome amount every single year. Personally, I'm not too fond of OpenSUSE either, but the core servers don't overly concern me. Our compute servers, for which I'm responsible, are running Debian Testing, and in view of the minimal administrative effort required over the past ten years, I congratulate myself for this decision.

TCAD station, part I

You certainly know the old tune about the lack of “professional” software for Linux, with “professional” usually being an implicit synonym for Microsoft Office and the Adobe Creative Suite. To a scientist or engineer, people joining this chorus appear to be misinformed and motivated by ideology rather than reality. In technically oriented fields, software is in fact mostly cross-platform or even developed primarily for Linux. That's true in particular for multithreaded software with non-negligible demands on computational resources and five- to six-figure price tags. Examples include Maxwell solvers, multiphysics solutions, and TCAD packages.

Commercial software for Linux is usually certified for only one of the enterprise Linux distributions, namely, Redhat or SUSE Enterprise Linux (RHEL and SLES). Some of these products turn out to be actually distribution-agnostic, meaning that they also run without any problems on, e.g., Debian. But in many other cases, the software runs, and even installs, only on systems with the distribution it was developed for. I've learned that the hard way, wasting an entire day trying to get the commercial finite-difference time-domain simulation package FDTD Solutions to work under Debian Stretch. In the end, we've used Meep instead.

I've vowed that this wouldn't happen again, and since we are in the process of evaluating some selected TCAD solutions (all of which are certified for RHEL only), it seemed wise to set up a test server running CentOS (a binary-compatible clone of RHEL). The TCAD software requires a graphical interface, but I did not intend to perform a standard installation, which results in a complete Gnome desktop suitable for a workstation rather than a compute server. For servers, I prefer the installation to be as lightweight as possible. For example, I usually install the tiling window manager wmii if a graphical interface is required or desirable on a server.

Since I'm not at all familiar with CentOS and the available software, I decided to first set up a virtual machine to look for possible pitfalls. For the base installation, I've downloaded and installed the minimal ISO, which I've then, upon first boot, updated with:

yum update
grub2-mkconfig -o /boot/grub2/grub.cfg

Why the reconfiguration of grub? Well, the update installed a new kernel, but CentOS did not automatically update the grub configuration file. Weird, but true.

CentOS offers three tiling window managers, two of which I'm familiar with: i3 and xmonad (I'd never even heard of spectrwm). On second thought, however, I realized that it may be a better idea to install a more conventional desktop to give the TCAD testers an environment they feel comfortable with. Xfce seemed to be a reasonable compromise and can be installed with these few steps:

yum install epel-release -y
yum groupinstall "X Window System" -y
yum groupinstall "Xfce" -y
systemctl get-default
systemctl set-default graphical.target
systemctl isolate graphical.target

The last step seamlessly starts the X Window System and thus catapults one into the graphical desktop. Slick! Now we only want a resolution higher than the old-fashioned 1024x768 offered by the default (VESA) driver. In other words, we need to install the VirtualBox guest additions:

yum install dkms
yum groupinstall "Development Tools"

To install the guest additions, I first selected the 'Guest Additions CD Image' entry in the 'Devices' menu and downloaded the image. After the download, I mounted the image and compiled the guest additions:

mkdir /media/vboxadditions
mount /dev/cdrom /media/vboxadditions
cd /media/vboxadditions
./VBoxLinuxAdditions.run
reboot

Still no higher display resolutions available? The script in this thread enabled me to activate the 'Auto-resize Guest Display' option in the 'View' menu, which finally allowed me to use the desired full HD resolution (on a WQHD monitor):

#!/bin/bash
# Add a 1920x1080 mode to the first connected output and switch to it.
Display_Name=$(xrandr | grep ' connected' | cut -d' ' -f1)
Display_Spec=$(cvt 1920 1080 | grep Modeline | cut -d' ' -f2 | cut -d'"' -f2)
Display_Params=$(cvt 1920 1080 | grep Modeline | cut -d' ' -f2-18 | sed 's/"//g')

xrandr --newmode $Display_Params    # deliberately unquoted: the mode line consists of several arguments
xrandr --addmode "$Display_Name" "$Display_Spec"
xrandr --output "$Display_Name" --mode "$Display_Spec"

In the second part of this post, I will write about the installation of CentOS on the physical server we have reserved for the evaluation stage. After the (largely) pleasant experience with the virtual machine, I expected this task to be entirely straightforward. I thought I'd be done in an hour, including the configuration of user accounts as well as of the sshd and vncd daemons for remote access. Well ... it took more than one day. Stay tuned. 😉

Lingua franca

Last week, a colleague of mine who's using Antergos showed me a particular error message he was getting when attempting to update the system, and asked whether I could help him resolve it. However, I had never seen this error message before, and recommended searching for it. Only later did I realize that the error message had been in German.
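
Incidentally, when all you need is the English version of a localized error message, the locale can be overridden for a single command. A sketch (pacman standing in for whatever update command you use):

LC_ALL=C pacman -Syu    # LC_ALL=C forces untranslated (English) messages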

Let me give you one piece of advice: whatever Linux distribution you are using, whatever your native tongue may be, do not choose German. Nor Spanish, French, Hindi, or Chinese. Do not use any system language other than English!

Why? Well, just compare the number of active topics on the generic, primarily English-speaking Arch forum with that on the German one. Right now, it's about 70 topics compared to 3. That ratio also applies to the number of answers you are likely to get when searching for a particular problem with Archlinux in English or German, and your questions are proportionally less likely to get a response if they are not posted in English.

In my 30 years of computer usage, I've in fact never used any system language other than English. Oh well, that's not entirely true: the Macintosh II and the 386 Mitsubishi notebook I worked with in 1992 had a Japanese user interface, but that had not been my decision. Other than those, I've always used English regardless of the OS. I believe that this decision is the major reason why, regarding computer installations, I've usually fared better than most of my contemporaries, although we nominally did the same. As a matter of fact, one often finds plenty of solutions on the internet when searching for error messages in English, but finds nothing in any other language.

Æchter Senf

As a native German, I was born with the genetic predisposition to love Bratwurst, Sauerkraut, and Kartoffelsalat. However, I insist that the Bratwurst, regardless of its provenance, is served with mustard, and not just any mustard. Unfortunately, the standard mustard in Germany (“mittelscharfer Senf”) is a feculent substance that sickens me even if I only think about it. It tastes just like one would imagine a vinegar-salt-sugar paste with a homeopathic dose of mustard powder would taste: disgusting.

What I expect from mustard is really very simple: it should taste of mustard (and not exclusively of vinegar) and have the effect of mustard, i.e., I want to experience the familiar nose-tingling sensation one also knows from horseradish or wasabi. And that's to be expected, because all these condiments contain C4H5NS (3-isothiocyanato-1-propene), better known as allyl isothiocyanate.

Wasabi has probably the highest C4H5NS content among the brassicaceae plants, and it was in fact in Japan where I discovered my love for the effect this substance has on the “mucous membranes of the sinuses”, as medically oriented people would put it. I've been invited several times to high-end sushi places in which wasabi was freshly prepared right at the dining table using a shark-skin oroshigane. I'm not too fond of sushi in general, but tekkamaki and unakyumaki are absolutely delicious when served with a proper amount of fresh wasabi.

And I soon found that the Japanese are serious mustard aficionados as well. Karashi, for example, is just plain brown mustard powder mixed with water and is served with many popular dishes, for example oden. Sausages are also highly popular and are usually consumed with a variety of excellent mustards available at any 7-11.

And then I returned to Germany just to discover that we live in a culinary desert. We have great sausages, oh yes, but where's the mustard of equivalent quality? A regular supermarket offers a dozen different varieties of “mittelscharfer Senf” (all tasting exactly the same), one sweet Bavarian variant (sugar paste with one or two mustard seeds), and two or three Dijon-type mustards such as Löwensenf and Maille Dijon Originale. The latter are the least despicable, but I'm still not satisfied with their effect on my nose.

But who cares what the local market offers, this is the age of online shopping! Right?

Well, I order a lot of my food on the internet, and consequently also tried various offers I found online. Einbecker produces mustard with excellent taste, as does my favorite chili shop. I had set my hopes on the latter, as there were rumours in the forum that Michael Dietz, the founder of Chili Food, planned to create a truly hot mustard. In the end, it turned out to be just another of the so-called hot mustards that are simply pimped with chili.

But chili and mustard are two totally different beasts. As stated above, the desirable effect of mustard relies on the presence of allyl isothiocyanate. Chili, in contrast, is powered by C18H27NO3 [(6E)-N-[(4-hydroxy-3-methoxyphenyl)methyl]-8-methylnon-6-enamide], better known as capsaicin. Their effects are not merely different: they are entirely distinct. I don't understand why it seems to be increasingly popular to simulate the former with the latter. Imagine the opposite: somebody asking for Tabasco getting a pouch of Löwensenf.

In my desperation, I've even ordered Karashi and Colman's via Amazon. But come on: besides the obscene price tag, I really do not want to depend on a US internet retailer for a satisfactory Thüringer experience.

And then I found ECHTER LOOSER SENF and began to see the light. Hell, yes, why should I not produce my own mustard? How difficult can it be?

As it turned out, it's about as difficult as making coffee. I've chosen this comparison since we have to grind the mustard seeds, which can be done with a mill or a mortar. Talking of mustard seeds: those are of course the central ingredient of any mustard. 😉 Since I aimed for a really, genuinely, absolutely hot one, brown mustard seeds were required, which can be obtained from specialized spice shops such as this one. A plain hot mustard can then be made with only a few more basic ingredients. Prior to its preparation, however, it's important to realize that a mustard has to be enjoyed fresh. Certainly, one can keep it for months and perhaps even years, and it never gets “bad” in a microbiological sense. But after a few days the bite of it is gone! So put your freshly prepared mustard in the fridge and wait overnight to let it mellow, but don't keep it longer than a week.

With that in mind, I recommend preparing only a minute amount, sufficient for just one or two meals:

Brown mustard seeds:                    25 g
Water:                                  30 ml
Wine:                                   5 (10) ml
Vinegar:                                15 (10) ml
Salt:                                   1.5 g
Sugar:                                  0.15 g

Grind the seeds in a mill or mortar (I recommend the latter). Mix water, vinegar and wine with salt and sugar. Pour the mixture over the mustard powder and stir carefully. Put it in the fridge for one night, but let it warm to room temperature before consumption. Let me add that the choice of wine and vinegar is important, as these two ingredients are largely responsible for the character of our mustard. This fact is also the reason why I allowed for some flexibility in their relative amounts.

Server security

I really didn't expect that, but my recent post about our new server attracted more questions than all posts in 2016 combined. I thought that people interested in such old-school IT issues would be essentially extinct, but apparently a few still exist.

In what follows, I try to provide some answers. I've grouped the questions such that they revolve around the same topic even if they were not asked by the same person. My answers are all short except for the last one, where I elaborate on server security.

Can every 'ordinary' citizen rent such a server? Or do we need a trade license (Gewerbeschein)? Do we need special certificates?

No, anyone can rent a server. Technically, however, it is not advisable to run a server without some basic knowledge of system and network administration. I'll come back to this point in more detail below.

What can I do with my own server? What benefit does it offer?

You can do anything you can imagine doing with a computer. And what's most important: it's yours, and no Google/Facebook/Dropbox will suddenly discontinue the service to “optimize the user experience”. And if your hoster goes bankrupt, just move to the next one—you can always rent another server (unless the government decides to ban private servers to “fight cyberterrorism” or whatever else is en vogue).

With your own server, you could, for example, host your blog, as I do. You could run a mail server, set up your own cloud, provide groupware for yourself and your family, or use it as a game server and communicate via IRC, as we do at pdes-net.org. You could also install a Jabber server supporting OMEMO to provide a Skype and WhatsApp replacement for your family and friends that is guaranteed to be safe from eavesdropping.

What the heck is this Jessie and Stretch thing? Do I need to know that?

Perhaps not. But I'm not sure. If you don't, you are in all likelihood not very familiar with GNU/Linux. And that's not the ideal basis for administering your own server. Well, you could rent a Windows server, of course. I have no idea for what reason, though.

Can we rent a server anonymously?

Yes, for example here. Note: in case you want to register a domain (and who doesn't?), you want to do that anonymously as well. Otherwise your identity is revealed by a simple whois request (check, for example, 'whois pdes-net.org').

How do you connect to the server? Can I use it also by ftp? Or by the Windows Explorer? What about smartphones?

One has to distinguish the interface used to administer the server from the services provided by it. Regarding the former, I connect exclusively via ssh (or rather via mosh, an ssh replacement). I also use this way to copy files via the ssh-based tools scp, sftp and sshfs. You can get ssh clients for Android and iOS as well, and you can thus administer your server from anywhere you like. Concerning the latter, you can use any protocol for which you have configured the corresponding service—in this context, the smb server to allow access to the user's files via the Windows Explorer. However, I would definitely not recommend that. Rather, I'd use winscp or implement user access to files by webdav over https.
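
As an illustration of the administrative side, here's how I'd mount a remote directory locally via sshfs (paths and user are, of course, just examples):

mkdir -p ~/mnt/server
sshfs cobra@pdes-net.org:/home/cobra ~/mnt/server   # mount the remote home directory over ssh
fusermount -u ~/mnt/server                          # unmount it again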

Can hackers attack the server?

Whatever you call them: there will be plenty of people trying to get access to your server. For example, in the three days in which our new server was running in its default configuration, 'lastb' revealed 6742 login attempts via ssh. Fortunately, our hoster had set a passphrase that was definitely better than the most popular one.
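
If you'd like to quantify the onslaught yourself, a quick sketch (lastb requires root):

lastb | wc -l                                                 # rough total of failed logins
lastb | awk '{print $3}' | sort | uniq -c | sort -rn | head   # top offending hosts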

What did you do to secure the server and to avoid hackers taking it over?

The measures I usually take are all very simple and do not require membership of the inner circles of server adminship. The core principle is to minimize exposure by scrutinizing the software base.

What do I mean by that statement? I can illustrate it best with an example from one of my Arch systems:

  ~ arch-audit
Package bzip2 is affected by ["CVE-2016-3189"]. Update to 1.0.6-6!
Package jasper is affected by ["CVE-2016-9591", "CVE-2016-8886"]. High risk!
Package libtiff is affected by ["CVE-2016-10095", "CVE-2015-7554"]. Critical risk!
Package openjpeg2 is affected by ["CVE-2016-9118", "CVE-2016-9117", "CVE-2016-9116", "CVE-2016-9115", "CVE-2016-9114", "CVE-2016-9113"]. High risk!
Package openssl is affected by ["CVE-2016-7055"]. Low risk!

As you see, libtiff is listed as critical, and the exploits partly date from 2015. Better to get rid of it, right? Sure, but:

  ~ whoneeds libtiff | tr -d '\n'
Packages that depend on [libtiff]  aarchup  artha  auctex  awoken-icons  blueman  chromium  clipit  conky  conky-colors  cups  darktable  djvulibre  emacs  emacs-minimap  engrampa  feh  firefox  galculator  gimp  gimp-webp  gksu  gnome-keyring  gnome-themes-standard  gnuplot  gparted  gpicview  graphviz  gsimplecal  gst-libav  gst-plugins-good  gtk-engine-murrine  gtk-engines  gtk-theme-orion-dark  gtk2-perl  gtk3-print-backends  guake  gucharmap  gvfs  gvim  hplip  hsetroot  inkscape  keepassx2  kodi  libbpg  libcaca  libreoffice-fresh  lxappearance  lxappearance-obconf  lxinput  lxrandr  lxterminal  lyx  masterpdfeditor-qt5  mirage  mpv  mupdf  mupdf-tools  netpbm  network-manager-applet  nitrogen  numix-circle-icon-theme-git  obconf  obkey  obmenu-generator  openbox  openbox-themes  orage  owncloud-client  pavucontrol  pcmanfm  portfolio  povray  pstoedit  pstotext  pychess  python-matplotlib  python-pillow  python-scikit-image  python-seaborn  qpdfview  rawtherapee  ricochet  scribes  scribus  scrot  seahorse  spacefm  spyder3  sqlitebrowser  terminator  texlive-bibtexextra  texlive-core  texlive-fontsextra  texlive-formatsextra  texlive-games  texlive-genericextra  texlive-htmlxml  texlive-humanities  texlive-latexextra  texlive-music  texlive-pictures  texlive-plainextra  texlive-pstricks  texlive-publishers  texlive-science  tint2  tumbler  vertex-themes  vesta  virtualbox  volumeicon  webkitgtk2  wxpython  xfce4-notifyd  xfce4-terminal  yelp  zenity  zim

As you see, essentially everything depends on this package. No way to get rid of it on a desktop system! But surely that's no issue on a pure command-line system like our server, right?

$ aptitude why libtiff5
i   webalizer Depends libgd3 (>= 2.1.0~alpha~)
i A libgd3    Depends libtiff5 (>= 4.0.3)

Ok, let's remove webalizer. But after that:

$ aptitude why libtiff5
i   pinentry-gtk2      Depends libgtk2.0-0 (>= 2.14.0)
i A libgtk2.0-0        Depends libgdk-pixbuf2.0-0 (>= 2.22.0)
i A libgdk-pixbuf2.0-0 Depends libtiff5 (>= 4.0.3)

Who installs pinentry-gtk2 on a system without X server? WHO?

/usr/bin/apt-get --auto-remove purge libtiff5
Requested-By: cobra (1000)
Install: pinentry-curses:amd64 (1.0.0-1, automatic)
Purge: libcroco3:amd64 (0.6.11-2), libpangoft2-1.0-0:amd64 (1.40.3-3), libcups2:amd64 (2.2.1-4), libimlib2:amd64 (1.4.8-1), w3m-img:amd64 (0.5.3-34), libgtk2.0-bin:amd64 (2.24.31-1), libgdk-pixbuf2.0-0:amd64 (2.36.3-1), libpixman-1-0:amd64 (0.34.0-1), libsecret-1-0:amd64 (0.18.5-2), librsvg2-common:amd64 (2.40.16-1), gnome-icon-theme:amd64 (3.12.0-2), libavahi-common-data:amd64 (0.6.32-1), libgail-common:amd64 (2.24.31-1), libavahi-common3:amd64 (0.6.32-1), libgtk2.0-0:amd64 (2.24.31-1), libxcursor1:amd64 (1:1.1.14-1+b1), libthai-data:amd64 (0.1.26-1), libxcb-shm0:amd64 (1.12-1), libid3tag0:amd64 (0.15.1b-12), libsecret-common:amd64 (0.18.5-2), libgail18:amd64 (2.24.31-1), libxcb-render0:amd64 (1.12-1), fontconfig:amd64 (2.11.0-6.7), libtiff5:amd64 (4.0.7-5), libatk1.0-0:amd64 (2.22.0-1), libpangocairo-1.0-0:amd64 (1.40.3-3), librsvg2-2:amd64 (2.40.16-1), pinentry-gtk2:amd64 (1.0.0-1), libgif7:amd64 (5.1.4-0.4), hicolor-icon-theme:amd64 (0.15-1), libthai0:amd64 (0.1.26-1), libgdk-pixbuf2.0-common:amd64 (2.36.3-1), libgtk2.0-common:amd64 (2.24.31-1), libgraphite2-3:amd64 (1.3.9-3), libjbig0:amd64 (2.1-3.1), gtk-update-icon-cache:amd64 (3.22.6-1), libatk1.0-data:amd64 (2.22.0-1), libharfbuzz0b:amd64 (1.2.7-1+b1), libcairo2:amd64 (1.14.8-1), libavahi-client3:amd64 (0.6.32-1), libpango-1.0-0:amd64 (1.40.3-3), libjpeg62-turbo:amd64 (1:1.5.1-2), libdatrie1:amd64 (0.2.10-4)

That was an example illustrating what I meant by “scrutinizing the software base”. But let's proceed step by step and relive the few hours in which I configured our new server.

  1. I first tighten the security of sshd:

on the client:

  ~ ssh-keygen -t ed25519
  ~ ssh-copy-id -i ~/.ssh/id_ed25519.pub pdes-net.org

on the server:

su -
# vim /etc/ssh/sshd_config
Port XYZ
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
# systemctl restart sshd.service

XYZ has to be replaced with a sensible port number, of course. 😉
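
Before that restart, by the way, it pays to validate the new configuration so you don't lock yourself out; sshd offers a built-in syntax check:

sshd -t && systemctl restart sshd.service    # restart only if the config parses cleanly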

  2. Relieved, I next check which services are running on the system to have an overview:

systemctl --type=service
  3. I then look for services that opened a port and listen on it. I prefer to use

netstat -tulpen

for this purpose,1 but I usually also install 'iftop' and 'iptraf' to have a look at the traffic.

1 Note that you have to install the 'net-tools' package on many distributions, as 'netstat' has been deprecated in favour of the 'ss' command from the iproute2 package since 2011. The 'netstat' output is much more compact and readable, though.
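
For reference, the rough 'ss' equivalent of the netstat invocation above happens to take the very same flags:

ss -tulpen    # tcp/udp listening sockets with processes, extended info, numeric ports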

Obviously, it is kind of paradoxical to rely on a local check on a system which might have been already compromised. I thus also use 'nmap' to have a look from outside:

nmap -sS -sU -T4 -A -v pdes-net.org
  4. The simple tests above reveal that the server was basically prepared to run an online shop and thus has plenty of services running: Apache, nginx, postfix, dovecot, mysqld, sshd, froxlor, etc. I just stop and disable all of them except sshd:

systemctl stop <service>
systemctl disable <service>

and then uninstall them:

apt purge <service>
  5. After that, there are plenty of orphans that I remove with

wajig autoremove
  6. Update:

wajig dailyupgrade
  7. Upgrade:

vim /etc/apt/sources.list
:%s/jessie/stretch/g
ZZ
wajig daily-upgrade
wajig sys-upgrade

This last step may appear questionable, as Debian Testing (currently called Stretch) does not receive the same security support as Stable (currently called Jessie). Well, I definitely prefer Testing for its more up-to-date packages, and I think it's more important to avoid packages from the contrib and non-free repositories.

  8. I then check the support status of my installation:

debian-security-support “...identify installed packages for which support has had to be limited or prematurely ended...”

check-support-status

Everything's supported. The more software you have installed, the less likely this result becomes.

  9. And finally, I search for vulnerabilities (similar to arch-audit above):

debsecan “...generates a list of vulnerabilities which affect a particular Debian installation...“

CVE-2016-2148 busybox (remotely exploitable, high urgency)

What is busybox doing here? Well, it's gone.

Update: Damn, I forgot – it's needed for update-initramfs. No big deal, though: what you can remove that easily can just as easily be installed again. So don't worry, you won't be able to accidentally remove the kernel or libc. 😉

After these nine steps (the nine hidden secrets for perfect server security!!!), the total size of our installation (disregarding user content in /home and in /var/www) is less than 1.2 GB.


What have I achieved so far? Well, first of all, I have stopped and removed all running services I do not need. That's certainly the most important contribution to server security, as all of these services were remotely accessible. Second, I have upgraded the entire installation to a current version of the distribution, in the belief that in this version, as a tendency, previous CVEs have already been recognized and fixed. Third, I have identified and removed the remaining programs and libraries with security breaches rated as critical.

What can I do more? Can I rate the security of the system somehow, and monitor it?

Yes. Such a rating is offered, for example, by lynis, a security audit system by rkhunter author Michael Boelen, which provides a wealth of helpful information and advice out of the box, without the need to configure anything. Great for beginners, useful for advanced users. Recommendable for its suggestions concerning the configuration of the ssh server alone. But beware: don't lock yourself out. 😉
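
lynis is packaged for most distributions, and a basic audit requires just two commands (a sketch):

apt install lynis
lynis audit system    # prints warnings, suggestions, and the hardening index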

With the current configuration, lynis gives pdes-net.org a hardening index of 78%. I'm quite satisfied with that score (you probably won't get a 100% as long as the server is still connected to a network).

How can I make sure that we keep that score? Well, lynis is really very helpful in that respect, since it suggests, depending on the distribution, the installation of several useful tools that help in future security-related decisions.

Many of these tools, however, work best when they are executed by a cronjob in the background and inform the administrator by local mail in case there's anything to report. For this reason, it is imperative for any Linux server installation to include a functional mail transfer agent (MTA) configured for local delivery. In Debian, I always choose exim because it's so wonderfully easy to configure for this case. I've become so used to this genie on the system, telling me about the good and the bad, that I install an MTA not only on servers, but on every system I administer (although I usually prefer postfix over exim). Here's an example taken today from pdes-net.org:


../images/system_mails.png
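
In Debian, by the way, configuring exim for local-only delivery boils down to a single dialog (a sketch):

dpkg-reconfigure exim4-config    # choose "local delivery only; not on a network"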

When performing system updates on Debian, I additionally like to have the following tools as little helpers in the background. apt-listbugs “retrieves bug reports from the Debian Bug Tracking System and lists them”, apt-listchanges “compares a new version of a package with the one currently installed”, apt-show-versions “shows upgrade options within the specific distribution of the selected package”, checkrestart (part of debian-goodies) “helps to find and restart processes which are using old versions of upgraded files (such as libraries)”, and needrestart “checks which daemons need to be restarted after library upgrades”. I also like logcheck which “helps spot problems and security violations in your logfiles automatically and will send the results to you in e-mail” (see above).
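
All of these helpers can be pulled in at once (checkrestart comes with debian-goodies):

apt install apt-listbugs apt-listchanges apt-show-versions debian-goodies needrestart logcheck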

These tools are helpful, but I like to go one step further and have an automated, daily security check. That's exactly what checksecurity does, which, according to Debian, performs “basic system security checks”. Well, how basic depends a lot on the additionally installed packages: recommended are, among others, tiger, which in turn refers to other packages such as chkrootkit, “searching the local system for signs that it is infected with a 'rootkit'”, as well as file monitoring systems such as tripwire and aide, which make little sense on a rolling-release system, though. This fact does not diminish the value of checksecurity, of course, which I would very much recommend installing.
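
Here, too, the installation is a one-liner (tiger optionally pulls in further checks):

apt install checksecurity chkrootkit tiger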

You certainly have noticed that I haven't even mentioned the security evergreen so far: the firewall. Well, I do recognize the value of an enterprise-class firewall for a corporate network, but here we are talking about software running on the very system we desire to protect. This scenario reminds us of the infamous 'personal firewalls' under Windows, the legendary discussions on nntp://de.comp.security.firewall, and fefe's succinct summary:

Do Personal Firewalls improve security? — No.

Why do so many people install them, then? — Because those people are all idiots.

Well, nobody would judge the built-in firewall functionality of Linux equally harshly, and there are even one or two arguments in favor of using it. My view is that this built-in firewall is secondary compared to the measures discussed above, but it certainly doesn't hurt to use it. And that's what I do:

ufw default deny
ufw limit ssh
ufw allow http
ufw allow ...

One final word. Be careful not to overdo things. The more security-related stuff you install, the more messages you will get, and the more dramatic it will all sound. For example, chkrootkit identifies the mosh server instance running on udp port 60001 as an infection when running the bindshell test. That's a trivial false positive, but in the grip of security paranoia, it will be amplified such that it can unbalance even experienced administrators. Be calm, practice Zen, and acquire enough knowledge to immunize yourself against full-blown security hysteria.

The end of infinality

If you use Archlinux and the infinality bundle from bohoomil's repository: yesterday's update of harfbuzz from 1.3.4-1 to 1.4.1-1 may break important parts of your setup. The reason is that infinality uses an outdated version of freetype2, and at present it seems unlikely that we will ever see an update:

Reason: Infinality is dead both upstream and with the downstream maintainer bohoomil, and differences with freetype upstream become small as development progresses

For details, see here and here.

Instead of downgrading harfbuzz, I thus reverted to the stock versions of freetype2, fontconfig, and cairo:

pacman -S --asdeps lib32-freetype2 lib32-cairo lib32-fontconfig
pacman -S --asdeps freetype2 cairo fontconfig

The first command only applies if you have the 32-bit multilib packages installed, as required, for example, by steam.

I have not yet replaced the fonts, nor is there any immediate need to do so. In fact, the current stock freetype2 now seems to offer a quality of font rendering equivalent to that of the previous freetype2 with the infinality patchset. Excellent!


I've just checked and found that the situation is even worse for Debian: on my mini (running Stretch/Sid), the installed infinality version of freetype2 is 2.4.9 from 2012. Compared to that, the stock version (2.6.3) can almost be called up-to-date...

apt purge fontconfig-infinality
apt purge libfreetype-infinality6
apt install libfreetype6

Better graphic formats

The most frequently used (and abused) raster image format—JPEG—recently celebrated its 25th anniversary. Its cousins are mostly even older: TIFF stems from 1986, GIF from 1987, and only PNG, the latter's intended replacement, was developed a few years later, namely, in 1995.

What kind of computer did I have in 1995? A Pentium 90 with 16 MB RAM and a 512 MB HDD. And that's what these formats were designed for. Today, more than 20 years later, we enjoy a factor of about 1000 with regard to CPU speed, memory, and storage size, but despite this enormous difference, our image file formats have so far remained the same.

Several new formats have been proposed in the past few years, such as Google's WEBP in 2010, BPG (better portable graphics), which relies on the patent-encumbered HEVC codec administered by the MPEG LA, in 2014, and FLIF (free lossless image format) in 2015. Only WEBP is supported to a degree that allows one to actually use it, while BPG and FLIF are essentially still at the level of technology demonstrations.

This page offers a most illustrative comparison between the different lossy image formats, among them JPEG and its intended successors as well as BPG and WEBP. There's absolutely no question about the winner. Just look at Tennis or Steinway, for Pete's sake. No question, that is, were it not for the sodding patents. sigh


But let's forget the patents for the moment and rather look at something interesting. In this post, I examine these new image formats from a different perspective: how well can they compress essentially black-and-white line art?

Not that one should ever even consider doing that. Line art should always be stored as vector graphics; that much is obvious to anyone with even the faintest knowledge of graphic formats. Even a few scientific publishers know that. In the author guide to Nature Communications, for example, we find the statement:

All line art, graphs, charts and schematics should be supplied in vector format [...].

The author guides of most other publishers lack such explicit statements and rather breathe the spirit of the 1990s. For example, in an Elsevier FAQ we can read:

Why don't you accept PNG files?

We will constantly review technological developments in the graphics industry including emerging file formats - new recommended formats will be introduced where appropriate. PNG files do not cause issues in processing, but our submission systems are in progress of updating to allow for this useful new format.

In practice, however, most publishers have no problem with accepting vector graphics in EPS or PDF format and, most importantly, also use them 1:1 for the final publication. With one prominent exception: the American Chemical Society (ACS). Vector graphics submitted to any of the numerous ACS journals are invariably converted to a raster image. Some of their author guides even include a corresponding note:

NOTE: While EPS files are accepted, the vector-based graphics will be rasterized for production.

Regarding the format and resolution of these raster images, we find the following exemplary recommendation in this guide:

Figures containing photographic images must be at least 300 dpi tif files in CMYK format; line art should be at least 1200 dpi eps files.

Specifying a resolution for EPS files demonstrates a complete lack of understanding of vector graphics. And in the same spirit, we read:

Cover images should be 21.5 cm in width and 28 cm in height, with a resolution of 300 dpi at this size (this should be a file of at least 8 MB).

Oh, we cannot even handle compressed TIFFs? How wonderful to work with professionals.

Perhaps as a direct consequence of the resulting size of 1200 dpi bitmaps, I have never seen any figure in an ACS journal whose resolution exceeded 300 dpi. At least these figures are compressed, contrary to the implicit recommendation in the author guide. Depending on the preference of the technical staff at the respective ACS journal, the figures are included in the manuscript either as overcompressed JPEGs, exhibiting plainly visible compression artefacts, or as insufficiently compressed PNG files.

Insufficiently compressed? Yes—in contrast to JPEG, PNG employs lossless compression, and one can and should thus always employ the maximum compression level (9). Not doing so only increases the file size. The technical staff at ACS typically invokes only the minimum compression level 1. Furthermore, the file format is invariably 8 bit/color RGB, even for black-and-white line art. As a result, the 692 kB of a 295 dpi figure (extracted as described here) in one of my recent ACS publications could easily have been reduced to 138 kB. Alternatively, one could have produced a 1200 dpi version with a file size of only 787 kB—barely larger than that included in the galley proofs.
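
For the record, a sketch of such a recompression, using the same convert options that appear later in this post (filenames hypothetical):

convert figure_acs.png -colorspace gray -define png:compression-level=9 figure_opt.png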

And for all this “professional” service, we even pay handsomely. Why, then, do we publish there at all? Because of the impact factor, of course. I'll write more about this much too powerful incentive in the near future.


But let's now come back to the actual topic of this post and consider the following grayscale line art, which was created with the help of graph and inkscape:

The original SVGZ is 21.6 kB, a PDF saved by inkscape 52 kB. Now let's see what happens if we convert this PDF into various raster image formats with a resolution of 1200 dpi.

PNG:

The obvious choice of format is PNG. We can convert the SVGZ or the PDF in various ways. We could export a PNG directly from inkscape, of course. Alternatively, we could open the PDF in gimp and export it as PNG. Both are viable ways, but the CLI is actually more flexible and powerful. So let's open a terminal and enter

pdftocairo -png -scale-to-x 4000 -scale-to-y -1 -gray -antialias gray valence_bands.pdf valence_bands_cairo.png

That would be my usual way. It results in a nice grayscale PNG of 356 kB.

Another possibility is

convert -verbose -density 483.87 valence_bands.pdf -depth 8 valence_bands_convert.png

In this particular case, '-colorspace gray' is equivalent to '-depth 8'. Either way, we get a file of 330 kB. Can we do better? Oh yes, by tuning the PNG compression parameters:

convert -density 483.87 valence_bands.pdf -colorspace gray -define png:compression-filter=1 -define png:compression-level=9 -define png:compression-strategy=1 def.png

300 kB! For the parameters, see here.

Now, that seems to be a fairly optimized PNG, but it is still almost six times larger than its predecessor, the PDF. Time for the PNG optimizers! Let's apply them to the smallest PNG we have obtained so far, the one with 300 kB.

optipng

optipng def.png -out opti.png

225 kB.

pngquant

pngquant def.png

In contrast to the other optimizers, pngquant converts to a color palette! But with unexpected success:

220 kB.

pngout

pngout def.png out.png

189 kB. Takes ages, but it's the tool of the duke.

zopflipng

zopflipng def.png zopfli.png

190 kB. Google vs Ken Silverman: 0:1!

That's about the limit for PNG.

Let's check other lossless formats.

TIFF:

convert -verbose -density 483.87 valence_bands.pdf -depth 8 -flatten -compress lzma valence_bands.tiff

188 kB. Surprise, surprise: basically equal in size to the smallest PNG.

WEBP:

convert def.png -define webp:lossless=true def.webp

159 kB! Not bad at all.

BPG:

bpgenc -lossless def.png -o def.bpg

387 kB. Not a format for lossless compression.

FLIF:

flif def.png def.flif

92 kB. Now that's a statement!

But still way larger than the PDF. Is there perhaps a lossy algorithm capable of creating a 1200 dpi image smaller in file size than the PDF? Note that the present graphics, with its hard contrasts, is a worst-case scenario for JPEG and, I presume, for essentially all lossy image formats.

JPEG (libjpeg-turbo)

convert def.png -flatten -quality 1 def_default.jpeg

165 kB. Hardly smaller than the lossless variants and with the characteristic ringing and quilting artefacts surrounding every edge and corner (see below).

JPEG (mozjpeg)

convert def.png -flatten -quality 1 def_moz.jpeg

83 kB. Better than the default above, but still larger than the PDF. The compression artefacts are different from those of the default JPEG implementation, but the image is still of terrible quality (see below).

WEBP:

convert def.png def_lossy.webp

203 kB. Worse than lossless (but I didn't explore the various parameters convert offers for WEBP).

BPG:

convert def.png -flatten def_spec.png
bpgenc -q 44 def_spec.png -o def_lossy.bpg

50 kB. I had to preprocess the image since I needed a screenshot of the final BPG for the comparison below. The result is indeed smaller than the PDF, and exhibits (compared to the JPEG) only moderate compression artefacts (see below). Very impressive.

Here's a comparison of a section of the above graphics.

BPG is certainly a major improvement over JPEG also for line art. However, nothing beats vector formats: the PDF is of similar size and is arbitrarily scalable. A version for an A0 poster would still be 54 kB in size, whereas a corresponding BPG of the same quality as shown above would be truly gigantic.

An ideal strategy for scientific artwork would look like this: line art, labels, and annotations as vector graphics (SVG or PDF), photographs as BPG, stored together in a PDF or SVGZ container. That's imagery for the 21st century. And, in case you didn't notice, I didn't find any reason to mention WEBP or FLIF: for either of them, there's always a better alternative. If we disregard the patents. 😉

New server

This blog has been hosted since 2008 on a vServer powered by a single core of an Intel Core2 Quad Q6600 commanding over 256 MB RAM and a 12 GB HDD. As OS, we've used Debian Lenny, and we've long tried to silence the voice inside our heads warning us that security support for Lenny ended almost 4 years ago. Certainly, there was not much to hack (after all, these are static pages), but I normally wouldn't tolerate such neglect, and I certainly wouldn't encourage it.

Well, we finally hauled our lazy carcasses out of their graves and managed to get a new vServer. Hardware-wise, a huge step up: two Intel Xeon E5-2680 v4 cores with 6 GB RAM and a 320 GB HDD. Software-wise, we've ordered the server with Debian Jessie, which was configured very nicely, but with plenty of services we don't need. The first step was thus to clean up and to update the system to Debian Stretch, the current version of 'Testing', which in my opinion represents one of the best choices for a rolling-release server installation that is reasonably up-to-date and yet almost care-free.

From Linux 2.6.20 on our old server to 4.8.11 on the updated new one: what an enormous jump! System administration has also changed significantly: for example, to synchronize the time, one no longer relies on a cronjob executing 'ntpdate -s', but uses systemd-timesyncd, and instead of apt-get and apt-cache, one uses apt. Oh yes, my dear dinos, that's how it is! But since the user interface has stayed the same, it is still as easy as ever to administer the system as long as one is able to read and write (type).
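
For reference, enabling and verifying the new time synchronization is a matter of two commands (a sketch):

timedatectl set-ntp true    # enable systemd-timesyncd
timedatectl status          # check that the clock is reported as synchronized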

Concerning the webserver, it was haui who suggested Hiawatha. I'd never even heard of it, but after a first look I installed it (there are Debian repositories managed by Chris Wadge) and it instantly grew on me. It's small, lightweight, easy to configure, and has unique features not found in other webservers.

However, just like all other webservers I know, Hiawatha does not correctly deliver compressed scalable vector graphics (svgz). I was tired of that and wanted to avoid the need for patches, so I replaced all svgz files by svg, together with the corresponding references in all my posts:

find . -type f -name "*.svgz" | xargs gunzip -S z    # '-S z' treats the trailing 'z' as the suffix, leaving .svg files
find . -type f -name "*.md"   | xargs sed -i 's/\.svgz/\.svg/g'

This decision turned out to be the right one. Hiawatha transparently compresses content without requiring any user interaction, and the page size of this blog actually decreased by 50% with respect to that delivered by dhttpd, our previous webserver, despite this manual decompression.

Now, concerning the IRC server, InspIRCd seemed to me the most promising candidate. Just look at that! And I wasn't disappointed: with a little help from here and there, I had it running pretty fast. What took some time was the key generation, since I wanted the TLS configuration of the server to comply with current security standards. After a lot of reading, I've finally generated the key and the certificate

certtool --generate-privkey --ecc --sec-param ultra --outfile key.pem
certtool --generate-self-signed --load-privkey key.pem --template cert.cfg --outfile cert.pem

and configured the gnutls section in InspIRCd:

<module name="m_ssl_gnutls.so">
<gnutls certfile="cert.pem" keyfile="key.pem" priority="SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-MD5:-SHA1:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC">

Note that this is a rather strict configuration that will not work for clients belonging in a museum. With reasonably up-to-date systems, no problems should be encountered.
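
To see what the server actually negotiates, gnutls-cli can be pointed at it (port 6697 assumed as the usual IRC-over-TLS port):

gnutls-cli --insecure --port 6697 pdes-net.org    # --insecure since the certificate is self-signed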

I've applied a few other tweaks to the IRC server, but I won't discuss them now as I would first like to see how they perform in practice.

LaTeX vs. Unicode

I'm using matplotlib to create figures for my publications. For axes labels, legends, and everything else requiring text and symbols in a figure, I've so far used the excellent LaTeX support of matplotlib, and the results are (obviously) highly satisfactory:


../images/plot_tex.svg

There's a disadvantage, though: there are not too many fonts to choose from. Naively, I thought that this limitation would be lifted if I used Unicode instead of LaTeX:


../images/plot_uc.svg

And wouldn't XeLaTeX even combine the advantages of both?


../images/plot_xetex.svg

As you can see, matplotlib allows you to use any of these options, but what you don't see is that the desired results can be achieved only with a very limited set of fonts. For example, there are only a few fonts that include the Unicode character for a 'superscript minus' (for an overview, see here). Sadly, most of these are part of the ClearType Font Collection, which was introduced by Microsoft with Windows Vista. Free fonts with a 'superscript minus' include DejaVu Sans, Free Sans, and Free Serif. If the 'superscript minus' is instead included as a command by employing the internal LaTeX support of matplotlib, many more fonts become accessible. Examples are shown in the table below. But even then one can't make any assumptions: while Source Sans Pro works fine, Source Serif Pro doesn't. I have no idea why.
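
Whether a given font actually contains the 'superscript minus' (U+207B) can be checked with fontconfig (a sketch):

fc-list ':charset=207b' family    # list all installed fonts containing U+207B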

You see from my last statement that this post is not in the least authoritative. I'm just toddling around, and if you find a better way, I'd appreciate corrections and additions. That's particularly true for the case of XeLaTeX, the use of which seems to require OTF-only fonts with math table support. I wasn't even able to find a single sans-serif font with this profile 😞. Others have similar problems.

Renderer    Serif                 Sans Serif
LaTeX       Palatino, Fourier     Kurier, CM Bright
Unicode     Noto, Gentium Plus    Open Sans, Source Sans Pro
XeLaTeX     Libertinus, XITS      ?

Finally, here's an archive containing the three scripts I've used to create the figures above. In each case, I let matplotlib render a PDF, convert that into an SVG by pdftocairo, and compress the SVG file with gzip:

./plot_uc.py
pdftocairo -svg plot_uc.pdf plot_uc.svg
gzip -S z plot_uc.svg

The results are compressed scalable vector graphics that remain fully compatible with inkscape, should post-processing ever be necessary. That's how I got the Unicode logo in, by the way. 😉