Keep Alive

I'm so used to mosh that I'm always surprised by how fast a plain ssh connection runs into a timeout, or worse, degenerates into a half-terminated, hanging connection with an apparently unresponsive terminal.

Which is weird, since there's already a measure in place intended to avoid exactly this situation: TCPKeepAlive, which is enabled by default. To quote the man page of sshd_config:

On the other hand, if TCP keepalives are not sent, sessions may hang indefinitely on the server, leaving "ghost" users and consuming server resources. The default is yes (to send TCP keepalive messages), and the server will notice if the network goes down or the client host crashes. This avoids infinitely hanging sessions.

What irritates users most in this situation is the unresponsive terminal, which seems to no longer accept any commands and won't close unless the ssh connection is terminated by killing the process. But there's no need to kill anything, as ssh offers several escape sequences that also take care of this case:

 ~.  - terminate connection (and any multiplexed sessions)
 ~B  - send a BREAK to the remote system
 ~C  - open a command line
 ~R  - Request rekey (SSH protocol 2 only)
 ~^Z - suspend ssh
 ~#  - list forwarded connections
 ~&  - background ssh (when waiting for connections to terminate)
 ~?  - this message
 ~~  - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)

To prevent this from ever happening again, two options can be set on either the server or the client.

On the server side, one can set the following options in /etc/ssh/sshd_config:

TCPKeepAlive no (default yes)
ClientAliveInterval 30
ClientAliveCountMax 240

To quote again the man page of sshd_config:

ClientAliveInterval Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client.

ClientAliveCountMax Sets the number of client alive messages which may be sent without sshd(8) receiving any messages back from the client. If this threshold is reached while client alive messages are being sent, sshd will disconnect the client, terminating the session. The default value is 3. If ClientAliveInterval is set to 15, and ClientAliveCountMax is left at the default, unresponsive SSH clients will be disconnected after approximately 45 seconds. Setting a zero ClientAliveCountMax disables connection termination.

On the client side, corresponding options exist in /etc/ssh/ssh_config, but it's better to change them on a per-user basis in ~/.ssh/config (instead of adding them manually to each ssh call via the -o command-line parameter):

TCPKeepAlive no (default yes)
ServerAliveInterval 30
ServerAliveCountMax 240

The meaning is the same as above, but the roles are reversed: now, the client sends an alive message to the server every 30 s, and the client drops the connection if it didn't receive an answer from the server within 2 hours.
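These client-side settings need not be global: they can also be scoped to a single troublesome host in ~/.ssh/config (the host name below is, of course, a placeholder):

Host flaky.example.com
    TCPKeepAlive no
    ServerAliveInterval 30
    ServerAliveCountMax 240

All other hosts then keep the defaults.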

Missing my notifications

It took quite some time until I realized that I no longer get notifications on any of my Arch-based installations, but when aarchup didn't chime in even after days, I finally accepted that something must be wrong.

The culprit is the new autostart file coming with xfce4-notifyd 0.6.2:

[Desktop Entry]
Name=Xfce Notification Daemon
OnlyShowIn=XFCE;

Only show in XFCE? As an Openbox user, I feel seriously excluded and discriminated against. Well, actually, we can just delete this line in the user context and thus continue as before:

cp /etc/xdg/autostart/xfce4-notifyd.desktop /home/cobra/.config/autostart/
vim /home/cobra/.config/autostart/xfce4-notifyd.desktop
G dd ZZ
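For the record, the same edit can be done non-interactively. Here's a sketch that simply drops the OnlyShowIn line from the user's copy (it fabricates the file's known content when the package isn't installed, so you can try it anywhere):

```shell
# copy the autostart file into the user context, then strip the
# OnlyShowIn restriction so the daemon also starts under Openbox
f=~/.config/autostart/xfce4-notifyd.desktop
mkdir -p ~/.config/autostart
if [ -e /etc/xdg/autostart/xfce4-notifyd.desktop ]; then
    cp /etc/xdg/autostart/xfce4-notifyd.desktop "$f"
else
    # sample content for demonstration on systems without xfce4-notifyd
    printf '[Desktop Entry]\nName=Xfce Notification Daemon\nOnlyShowIn=XFCE;\n' > "$f"
fi
sed -i '/^OnlyShowIn=/d' "$f"
```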



For our webserver, the lefh script provided by Hiawatha, which I run daily via a cron job, guarantees that the certificates for transport encryption are renewed prior to their expiration. For our IRC server, in contrast, I have to do that manually. That might seem like a nuisance, but on the other hand, it gives me the chance to review the current state of the art in transport encryption and to bring my configuration up to this level. I previously used ed25519 (which I also choose when generating SSH keys), but ed448 seems an even better choice.

certtool --generate-privkey --key-type ed448 --sec-param ultra --outfile key.pem
certtool --generate-self-signed --load-privkey key.pem --template cert.cfg --outfile cert.pem
certtool --get-dh-params --sec-param ultra --outfile dhparams.pem
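The cert.cfg template referenced above isn't shown here; a minimal certtool template (with hypothetical values for an IRC server) could look like this:

# cert.cfg -- minimal certtool template (hypothetical values)
cn = "irc.example.com"
dns_name = "irc.example.com"
expiration_days = 365
tls_www_server
signing_key
encryption_key

The real template would of course carry the server's actual host name and a lifetime matching the manual renewal schedule.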


Recently, I've had a hard time with my virtual machines (VMs). With the update to kernel 5.8, starting any of them caused my entire system to lock up so hard that even the magic SysRq didn't help. The problem persisted from August 15th to September 9th, when it was finally solved by VirtualBox 6.1.14. After the update, I immediately tended to my VMs to update them.

  • CentOS: 0.012 packages – check.

  • Debian: 123 packages – check.

  • Archlinux: 123456 packages – ch... wait a sec, login incorrect after reboot?

My physical installations of Arch didn't exhibit such an attitude, so I suspected the problem to be related to Arch's virtualbox-guest additions (since the virtual CentOS and Debian were also behaving properly). I was wrong.

PAM had been updated to version 1.4, dropping support for the long-deprecated pam_tally module. My virtual Arch, however, dates from 2009, and its /etc/pam.d/login indeed referenced this module. But there was also a login.pacnew that would have corrected this issue, if only I had bothered to handle 'pacnew' files as advised:

“These files require manual intervention from the user and it is good practice to handle them right after every package upgrade or removal. If left unhandled, improper configurations can result in improper function of the software or the software being unable to run altogether.”

I hate being in the corner with the criminally stupid, but there I am. I'll try the pacman hook in the future.
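For reference, a minimal version of such a hook (my own sketch; the Arch Wiki shows more elaborate variants) could look like this:

```ini
# /etc/pacman.d/hooks/pacnew.hook
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Listing .pacnew and .pacsave files...
When = PostTransaction
Exec = /usr/bin/find /etc -name '*.pacnew' -o -name '*.pacsave'
```

Every pacman transaction then ends with a list of configuration files still awaiting manual attention.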

Modern times

Since the beginning of time (or so it seems), I've had Emacs installed with the extension AUCTeX to handle LaTeX documents. And, mind you, I'm still using it from time to time! As a matter of fact, in 2018 I worked on a number of manuscripts exclusively with Emacs to prepare myself for the editor shootout I promised for the end of 2018 (and which may or may not be done by the end of this year). I'm still quite happy with it, that much I can already say.

Perhaps you can then understand my surprise when yay told me that AUCTeX has been orphaned on the AUR. I was even more surprised when I saw that the maintainer was Stefan Husmann, who is also the maintainer of several hundred other packages and a moderator on the German Archlinux forum. Not the guy to thoughtlessly abandon a package on a mere whim.

And then it hit me: of course! Emacs has had its own package manager (ELPA) for some time, well, perhaps two years ... actually, eight years. 😣

So what's the meaning of this post? Let's say that there are one billion computer users out there. Only one percent of these know what an editor is, and only one percent of these again are actually using one. Of these again, only one percent use Emacs. Once again, one percent of these Emacs users use AUCTeX, but more than 80% of these guys have installed AUCTeX via ELPA, the recommended and canonical way. I'm not one of them. Am I the only one? No, if we do the math, it turns out that there's one kindred spirit in the same situation as me. This post is for you, my brother in arms!

Well, as stated above, Emacs recommends installing AUCTeX via ELPA. After removing the AUR version of AUCTeX, we can install it manually with

M-x package-install RET auctex RET

or, in a properly maintained init.el (like mine), automatically:

(require 'package)

(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/"))
(package-initialize)

;; fetch the package list on first run
(when (not package-archive-contents)
  (package-refresh-contents))

;; packages that should always be present
(defvar myPackages
  '(auctex))

;; install any missing package from the list
(mapc #'(lambda (package)
          (unless (package-installed-p package)
            (package-install package)))
      myPackages)

;; no longer needed when AUCTeX comes via ELPA:
;;(load "auctex.el" nil t t)
;;(load "preview-latex.el" nil t t)

While I was pondering the question whether this post would be relevant for anybody at all, I found these news: Levee, a vi clone, has got a new major release after 30 years. Now that's the spirit! Compared to the estimated number of users interested in this update (interestingly, the only comment in the AUR is from Stefan Husmann), my post is for the masses. To celebrate this Chucknorishness of software development, I've installed levee and prepared this text in it. It was ok (just like vi), but David Parsons will certainly understand if I say that I prefer vim for everyday work.

SOHO system monitoring

I have to admit that where computers are concerned, I'm somewhat of a control freak: for more than 20 years, a system monitor has been an integral part of my desktop, and for the last 10 years, conky has filled this role. Conky can be configured exactly to one's liking and may actually be a quite stylish element of the desktop. My conkies rather aim at displaying a maximum of information while still being aesthetically pleasing (to me). Judge for yourself:


You can get the configuration file here, if you are interested.

Now, having an active element on the desktop can be distracting, and I understand that this may not be to everyone's liking (although I myself feel entirely detached from the system I'm working on without this direct view into the engine room). Besides, configuring conkies is also not something you could call simple and intuitive.

If you are a private or SOHO user, and are interested in an on-demand system monitor, it doesn't really help looking at the list of system monitors available on Wikipedia. For example, in the office we are quite happy with Nagios for monitoring the health of a few dozen servers, but it would be a bizarre overkill to employ it for the few systems in a SOHO situation.

If it's about monitoring a single system, the most obvious choice is bashtop, or, after a port to Python just two weeks ago, bpytop, a system monitor for the command line that I'd already mentioned in a previous post. An interesting alternative that is graphically more spartan but no less capable is glances. None of these programs requires configuration; they all work out of the box. Here are two screenshots showing bpytop running in mosh sessions on and blackvelvet, my desktop. The former is virtualized and thus lacks CPU temperatures.



If bashtop/bpytop isn't sufficient, but configuring Nagios is too much, I'd recommend a look at Monitorix, which requires very little configuration and can be accessed with a browser. I've deployed it on my office PC to be able to examine the computing resources of this system in great detail from my living room. And that works really well: with the help of the Monitorix protocols, I can present solid evidence when asking for more RAM or storage space. 😎 Here's the very top of the Monitorix report on my desktop, showing that I'm doing just fine with what I currently have. 😞


Using Nexie

About a year ago I described the successful reanimation (Saving Nexie) of our Nexus 7 from 2012 by installing LineageOS on it. We were very happy with the result, and the little gadget subsequently accompanied my wife on her trip to Japan last year. But when she returned, Nexie seemed to disintegrate: the display and the backside started to separate for no apparent reason, and nothing could keep them together.

We thought that Nexie had somehow been damaged during the trip, but I asked our lab MacGyver for help anyway. She carefully dissected it and discovered that the culprit was the battery, which had turned from a flat sheet into something resembling a fugu. I purchased a new one, with which the display again connected to the back with a satisfying >click<. 😌

I brought the fully intact Nexie home and proudly presented it to my wife. She was unusually timid and finally told me that she didn't know what she'd use it for. In fact, the Nexie no longer really has a place in her gadget zoo, which includes an up-to-date Android tablet as well as a Windows 10 detachable. She thus tried to motivate me to take care of the Nexie, and despite my vow never to have anything to do with Android or iOS gadgets, I finally gave in and promised to give it a try.

My first Android device! A new world to be discovered! 😋

It turned out to be rather straightforward to switch accounts on an Android device, and to throw out old apps in favor of new ones. And although I do realize, of course, that the Nexie is very slow by modern standards, I find it perfectly adequate for the simple things I use it for (mostly checking the weather, the soccer results, and the ticker, as well as reading the books that are too much for my old e-book reader and watching an occasional video). Thanks to Blokada from the F-Droid store, all apps are free of ads of any kind. And despite the humble hardware, I stand no chance when playing chess against the Nexie. 😶

I haven't posted any screenshots for ages, so here are two showing Nexie in its full glory:



If you don't see anything: these are the first images on this blog in WebP format, which results in images of essentially the same quality as the original PNG at a third of the size. I will use this format for all future images and also retroactively, so update your browser (that will probably work soon even on a Mac).

Working on the command line

I was looking for an introduction like “Due to the Corona crisis, ...”, as everybody does these days. Unfortunately, that won't work with the subject I'm going to talk about, namely, command line (CLI) applications, which I've used and liked in the pre-SARS-CoV-2 world just as much as now. I've been convinced for ages that the CLI is, to state it in the words of Luke, “the most intuitive, most natural and easiest to grasp type of user interface we have invented so far”.

But regardless of what I and others believe, lots of things are simply best done on the command line. There's a catch, of course: that statement is true only if we are equipped with the right tools. An ancient /bin/sh without tab completion or history search and no access to my toolbox is nothing but a nightmare. Correctly configured, however, nothing beats the CLI in terms of speed and economy.

In the following, I'll provide a brief overview of the command line applications I'm regularly using for everyday duties such as server administration and file management, as well as a few more enjoyable activities. Two of my earlier posts have a certain overlap with the present one, but a different focus. My list is of course by no means exhaustive: you can find many more interesting tools and gadgets, ranging from the useful to the bizarre (try ternimal). Excellent starting points are the curated lists provided by Adam and Marcel. In addition, Igor (the guy behind wttr) compiled a list of web services available via the command line.


Work on the command line starts with the shell. The default in most distributions is bash, and in a few zsh. Both require extensive configuration to offer all the features I'd like to see, and not all distributions provide such a custom setup (meaning you've got to do it yourself). In contrast, fish is very well configured out of the box regardless of the distribution. For the typical bash one-liner copied from the interwebs, I usually follow this advice.


If we are not at the console, we need a terminal emulator. I'm primarily using Tilix or, when resources are scarce, Guake and Terminator, but there are plenty of other choices.

Remote Shell

A lot of my everyday duties involve connecting to a remote server. Wherever possible, I do that with a combination of mosh and tmux, the benefits of which have been described by Filippo and Brian, among others. For instance, I'd use

mosh -- tmux new-session -s default

for connecting to and starting a new tmux session with the name 'default' on this server. I can detach with Ctrl+A-D and attach anytime again (also from an entirely different network) with

mosh -- tmux a

Note that mosh requires open UDP ports (by default, mosh-server picks one in the range 60000–61000), which may require configuring the firewall. I'm utilizing ufw, so it's as easy as 'ufw allow 60000:61000/udp'.

Editor and other utilities

There can only be one: vim. Well: neovim. But that's it. 😉 In both cases, I use vim-plug as a plugin manager to load nnn as file opener (there is also a ranger plugin) and optionally vimtex whenever it's appropriate (although vim is not my primary TeX editor – a subject to which I will return in a forthcoming post). And if you really, really can handle neither vim nor emacs: try micro, it's the better nano. 😉

I put all documents that are still being edited under local version control. For this very simple task, I prefer mercurial over git because of the former's human-readable version numbers (I just find it more natural and less demanding to address a commit with a natural number than with a hash). For LaTeX documents, I use scm-latexdiff to create diffs from previous versions managed by mercurial. And finally, I back up all of my documents and data with borg and rsync (and in the future, also with rclone) as described in detail in my previous post.

Whatever I'm working on, it surely involves quick checks on numbers, some simple, some involved. A lot of these can be done with qalc, a very versatile general-purpose calculator, which replaced calc after I discovered that Qalculate! also has a CLI...

System and network administration

Very much up on my list for these tasks are two rather unexpected tools, namely, a mail client and a scheduler. In fact, I rely on mutt (“All mail clients suck. This one just sucks less.”) for reading the mails sent by cronie. Whether I'm interested in the status of my hourly backups on the desktop or in the results of daily security checks on a remote server, this combination of tools is truly indispensable for getting important system messages. In the same category are tools for receiving security news and advisories such as newsboat with the appropriate feeds (or arch-audit on Arch), and of course the security auditing tool lynis.

On servers, I often want to know who's logged in on a system and what this user is running. For this task, htop is the Swiss army knife, offering an excellent overview of all system resources and activities, as well as the possibility to manage them. In particular, htop helps to find (and end) applications running amok and consuming too many precious CPU cycles and RAM. bashtop, a veritable system monitor for the command line, offers even more information. Applications running wild while accessing the mass storage can easily be identified with iotop, and disk resources can be checked on a partition level with pydf and on a file level with ncdu (or with broot and nnn as mentioned above). For a more in-depth analysis of systems, tools such as dstat may become helpful, but my general experience has been that they either work out of the box or are simply broken.

When I have problems with the snappiness of the interwebs (which has become very rare), I first turn to mtr, which often helps to find the culprit (although there isn't anything one can do if one of the hops is overloaded and suffers from packet loss). Several helpful tools exist if the problem seems to be rather on my side, such as dnstop, iftop, nethogs, iptraf, and more, but I think that these tools merit a separate post.

Spare time

All work and no play makes Cob a dull snake. I stream videos using mpv and youtube-dl, like everybody else. I used to listen to music with moc, but I switched to cmus a few years ago. Similarly, I've exchanged irssi for weechat to talk to friends. When I feel like getting some news from the world outside, I fire up newsboat for a list of feeds that I find amusing. And for a quick reality check, nothing works better than mop displaying a ticker of my stocks. Its very compact display lured me into creating a permanent ticker on my desktop via conky integration, but I found that it distracted me way too much.

Appendix: bulk rename

For the sake of example, let's create 101 files with touch rha{0..100}.barber, and let's rename them so that the numbers are all three digits (like 004 instead of 4, and 023 instead of 23). We can do that the easy way or the hard way.

mc gives us the choice, namely, between shell patterns (globs) and standard regular expressions (regexes). The file selection dialog (press +) has an option 'Using shell patterns'. Let's select that and perform our self-imposed task in three simple steps: first, we select all files with a single-digit number (+ rha?.*), then rename them with F6 / rha?.* / rha0?.*, and finally all files with a two-digit number with F6 / rha??.* / rha0??.*. Now, that was easy, wasn't it?

ranger doesn't understand globs, but allows the use of regexes with the :filter (zf) and :mark commands. The renaming is delegated to the system-wide editor, i.e., vim in my case, where we can use the substitute command (:s) and vim regexes (which are similar to those used in, for example, sed and perl, but not identical). So let's start with selecting all files with a single-digit number: :mark -rft rha[0-9].b, followed by :bulkrename. The selected files can easily be renamed in vim with :%s/\([0-9]\)/0\1/ (note: without the /g flag, so that only the first digit of each name gets a leading zero), followed by ZZ to save the list and apply it. For the next bunch, we mark files again with :mark -rft rha[0-9]..b, :bulkrename, and repeat the above command in vim. Not quite as easy as with mc if one is not very familiar with regexes!

nnn is very similar to ranger in this regard. We can select files by the filter function employing either strings (/) or regexes (\). So we type \rha[0-9].b followed by r to open the selected files in vim. The rest is basically the same as for ranger.
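And for completeness, the same rename works without any file manager at all. Here's a plain bash sketch using printf's zero-padding:

```shell
# recreate the example files in a scratch directory, then zero-pad
# all numbers to three digits
cd "$(mktemp -d)"
touch rha{0..100}.barber
for f in rha*.barber; do
    n=${f#rha}; n=${n%.barber}              # extract the number
    new=$(printf 'rha%03d.barber' "$n")     # e.g. rha4.barber -> rha004.barber
    [ "$f" = "$new" ] || mv -n "$f" "$new"  # skip names that are already padded
done
```

One pass suffices here, since printf pads one- and two-digit numbers alike.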

Home Office

The spread of SARS-CoV-2 has made it advisable for many people to work from home. My colleagues and I have been doing that for four weeks now, and it's working very well. For me, home office isn't new: I've used this option for a decade whenever I have a task at hand requiring particular concentration and focus. Writing papers or proposals is such a task, as is developing and implementing a quantitative model to understand experimental data (that's what lucky physicists do for a living). In fact, I was asked in January by colleagues to help with the development of such a model, which I thought would be challenging, but didn't expect to be as difficult as it actually turned out to be. For most of the time, I was rather cluelessly poking around in a forest of equations and not getting anywhere.

Over the past few days, I made an effort to refocus on this issue, and not only for a few hours, but for a couple of days: you go to bed with the problem and wake up with it, and there's nothing to distract you from it. This kind of total concentration is simply not possible in the daily office routine, but I can do it at home, basically returning to my time as a student, when every living moment was devoted to problem solving. What greatly helps with reaching this trance-like state is having no kids, an understanding wife, and softly purring cats that love to sleep in the chairs to my left and right. The breakthrough occurred after two days, all of a sudden, like a flash. I still had to solve technical problems, but the direction was clear. These are the moments that every scientist cherishes and holds most dear: the intense joy of having solved the problem, of having broken the code. 😌

I realize that not everybody has the same favorable boundary conditions as I do, or even the luxury to compare. And I understand that the situation is very different with young kids instead of cats. 😉 But still, I'm really tired of reading commentaries in the newspapers moaning about the “solitary confinement”, and how unbearable it is. Most of them stem from rather young people with a smartphone glued to their right hand, and the strong belief to have the god-given right to party. Even worse are the characters with a political agenda, bitterly complaining about violations of our constitutional rights and predicting the end of democracy. What unites these two apparently very different groups is their failure to understand even the simplest arithmetic. And yes, there's no need for calculus to understand the simple concept of the exponential spread of a virus.

More realistic models are based on systems of differential equations similar to the ones describing the zombie apocalypse. The infection rate of the human population depends on the infection probability when a human and a zombie meet. Similarly, in epidemiology, the spread of an infectious disease is characterized by R0, the basic reproduction number. This number determines how fast the infection spreads (i.e., the slope of the exponential), and if it decreases with time (which is highly desirable), the curve “flattens”. The curve remains, however, an exponential as long as R0 > 1.
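For the curious, the simplest such system of differential equations is the classic SIR model (a generic textbook sketch, not the specific model of the zombie papers): with s, i, and r the susceptible, infected, and recovered fractions of the population,

```latex
\begin{aligned}
  \frac{ds}{dt} &= -\beta\, s\, i \\
  \frac{di}{dt} &= \beta\, s\, i - \gamma\, i \\
  \frac{dr}{dt} &= \gamma\, i
\end{aligned}
```

Here β is the infection rate, γ the recovery rate, and R0 = β/γ. At the start of an outbreak (s ≈ 1), di/dt ≈ (β − γ)i, so the number of infected grows exponentially exactly when R0 > 1; reducing contacts lowers β, and thereby R0, which is what “flattening the curve” means.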

“The greatest shortcoming of the human race is our inability to understand the exponential function.” (A. A. Bartlett)


In a previous post, I've remarked:

“If the distributor commands over virtually unlimited resources, and compression speed is thus not an issue, brotli and zstd are clearly superior to all other choices. That's how we would like to have our updates: small and fast to decompress.”

And not even a year later, we get this announcement. My own measurements indicated a factor of 8 increase in decompression speed, but the Arch team even sees a factor of 14. Great! ☺

There are also a few settings in /etc/makepkg.conf that may greatly accelerate the installation of packages from the AUR. All details can be found in the Arch Wiki, but here are the modifications I'm using in the order of appearance:

# building optimized binaries
CFLAGS="-march=native -O2 -pipe -fstack-protector-strong -fno-plt"
# use all cores for compiling
MAKEFLAGS="-j$(nproc)"
# compile in RAM disk
BUILDDIR=/tmp/makepkg
# use package cache of pacman
PKGDEST=/var/cache/pacman/pkg
# enable multicore compression for the algorithms supporting it
COMPRESSGZ=(pigz -c -f -n)
COMPRESSBZ2=(lbzip2 -c -f)
COMPRESSXZ=(xz -c -z - --threads=0)
COMPRESSZST=(zstd -c -z -q - --threads=0)
# use lz4 as default package format (the command-line lz4 does not yet support multi-threading, but it's still faster than anything else)
PKGEXT='.pkg.tar.lz4'

I didn't perform any systematic measurements, but some AUR packages seem to install in seconds where they took minutes with the default configuration. YMMV, but it's worth a try.