Using Nexie

About a year ago I described the successful reanimation (Saving Nexie) of our Nexus 7 from 2012 by installing LineageOS on it. We were very happy with the result, and the little gadget subsequently accompanied my wife on her trip to Japan last year. But when she returned, Nexie seemed to disintegrate: the display and the backside started to separate for no apparent reason, and nothing could keep them together.

We thought that Nexie had somehow been damaged during the trip, but I asked our lab MacGyver for help anyway. She carefully dissected it and discovered that the culprit was the battery, which had turned from a flat sheet into something resembling a fugu. I purchased a new one, and with it installed, the display connected to the back again with a satisfying >click<. 😌

I brought the fully intact Nexie home and proudly presented it to my wife. She was unusually timid and finally told me that she didn't know what she would use it for. In fact, the Nexie no longer really has a place in her gadget zoo, which includes an up-to-date Android tablet as well as a Windows 10 detachable. She thus tried to motivate me to take care of the Nexie myself, and despite my vow never to have anything to do with Android or iOS gadgets, I finally gave in and promised to give it a try.

My first Android device! A new world to be discovered! 😋

It turned out to be rather straightforward to switch accounts on an Android device and to throw out old apps in favor of new ones. And although I realize, of course, that the Nexie is very slow by modern standards, I find it perfectly adequate for the simple things I use it for (mostly checking the weather, the soccer results, and the ticker, as well as reading the books that are too much for my old e-book reader and watching an occasional video). Thanks to Blokada from the F-Droid store, all apps are free of ads of any kind. And despite the humble hardware, I stand no chance when playing chess against the Nexie. 😶

I haven't posted any screenshots for ages, so here are two showing Nexie in its full glory:



If you don't see anything: these are the first images in this blog in WebP format, which results in images of essentially the same quality as the original PNG at a third of the size. I will use this format for all future images and also apply it retroactively, so update your browser (that will probably work soon even on a Mac).

Working on the command line

I was looking for an introduction like “Due to the Corona crisis, ...”, as everybody uses these days. Unfortunately, that won't work with the subject I'm going to talk about, namely, command-line (CLI) applications, which I used and liked in the pre-SARS-CoV-2 world just as much as now. I've been convinced for ages that the CLI is, to state it in the words of Luke, “the most intuitive, most natural and easiest to grasp type of user interface we have invented so far”.

But regardless of what I and others believe, lots of things are simply best done on the command line. There's a catch, of course: that statement holds only if we are equipped with the right tools. An ancient /bin/sh without tab completion, without history search, and without access to my toolbox is nothing but a nightmare. Correctly configured, however, nothing beats the CLI in terms of speed and economy.

In the following, I'll provide a brief overview of the command-line applications I regularly use for everyday duties such as server administration and file management, as well as for a few more enjoyable activities. Two of my earlier posts have a certain overlap with the present one, but a different focus. My list is of course by no means exhaustive: you can find many more interesting tools and gadgets, ranging from the useful to the bizarre (try ternimal). Excellent starting points are the curated lists provided by Adam and Marcel. In addition, Igor (the guy behind wttr) has compiled a list of web services available via the command line.


Work on the command line starts with the shell. The default in most distributions is bash, and in a few it's zsh. Both require extensive configuration to offer all the features I'd like to see, and not all distributions provide such a custom setup (meaning you have to do it yourself). In contrast, fish is very well configured out of the box regardless of the distribution. For the typical bash one-liner copied from the interwebs, I usually follow this advice.


If we are not at the console, we need a terminal emulator. I'm primarily using Tilix or, when resources are scarce, Guake and Terminator, but there are plenty of other choices.

Remote Shell

A lot of my everyday duties involve connecting to a remote server. Wherever possible, I do that by using a combination of mosh and tmux, the benefits of which have been described, for example, by Filippo and Brian. To connect to a server, I'd use

mosh <server> -- tmux new-session -s default

to connect to the server and start a new tmux session named 'default' there. I can detach with Ctrl+A D and attach again anytime (also from an entirely different network) with

mosh <server> -- tmux a
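As an aside, Ctrl+A is not tmux's default prefix (that would be Ctrl+B); a remapping along these lines in ~/.tmux.conf makes it work as described:

```shell
# ~/.tmux.conf fragment: use Ctrl+A instead of the default Ctrl+B as prefix
set -g prefix C-a
unbind C-b
bind C-a send-prefix
```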

Note that mosh requires open UDP ports (by default, one per session from the range 60000–61000), which may require configuring the firewall (for which I'm using ufw, so it's as easy as 'ufw allow <port>/udp').
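With ufw, opening mosh's default range is a one-liner (adjust the range if your mosh-server is configured for different ports):

```shell
# mosh picks one UDP port per session from this range by default
ufw allow 60000:61000/udp
# ssh is still needed for the initial authentication
ufw allow ssh
```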

Editor and other utilities

There can only be one: vim. Well: neovim. But that's it. 😉 In both cases, I use vim-plug as plugin manager to load nnn as a file opener (there is also a ranger plugin) and, optionally, vimtex whenever it's appropriate (although vim is not my primary TeX editor, a subject to which I will return in a forthcoming post). And if you really, really can handle neither vim nor emacs: try micro; it's the better nano. 😉

I put all documents that are still being edited under local version control. For this very simple task, I prefer mercurial over git because of the former's human-readable revision numbers (I just find it more natural and less demanding to address a commit with a natural number than with a hash). For LaTeX documents, I use scm-latexdiff to create diffs against previous versions managed by mercurial. And finally, I back up all of my documents and data with borg and rsync (and in the future, also with rclone) as described in detail in my previous post.

Whatever I'm working on, it surely involves quick checks on numbers, some simple, some involved. A lot of these can be done with qalc, a very versatile general-purpose calculator, which has replaced calc ever since I discovered that Qalculate! also has a CLI...

System and network administration

Very much up on my list for these tasks are two rather unexpected tools, namely, a mail client and a scheduler. In fact, I rely on mutt (“All mail clients suck. This one just sucks less.”) for reading the mails sent by cronie. Whether I'm interested in the status of my hourly backups on the desktop or in the results of daily security checks on a remote server, this combination of tools is truly indispensable for receiving important system messages. In the same category are tools for security news and advisories, such as newsboat with the appropriate feeds (or arch-audit on Arch), and of course the security auditing tool lynis.
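As an illustration, a crontab entry like the following (the script name is made up for this example) has cronie mail the output of a nightly check to root, where it waits in the mailbox for mutt:

```shell
# crontab fragment: mail the output of a nightly security check to root
# (/usr/local/bin/security-check is a hypothetical script)
MAILTO=root
30 3 * * *  /usr/local/bin/security-check
```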

On servers, I often want to know who's logged in on a system and what this user is running. For this task, htop is the Swiss Army knife, offering an excellent overview of all system resources and activities as well as the possibility to manage them. In particular, htop helps to find (and end) applications running amok and consuming too many precious CPU cycles and too much RAM. Even more information is offered by bashtop, a veritable system monitor for the command line. Applications running wild while accessing mass storage can be easily identified with iotop, and disk resources can be checked on the partition level with pydf and on the file level with ncdu (or with broot and the nnn mentioned above). For a more in-depth analysis of systems, tools such as dstat may become helpful, but my experience with it has been that it either works, or it's broken.

When I have problems with the snappiness of the interwebs (which has become very rare), I first turn to mtr, which often helps to find the culprit (although there isn't anything one can do if one of the hops is overloaded and suffers from packet loss). Several helpful tools exist if the problem seems to be on my side instead, such as dnstop, iftop, nethogs, iptraf, and more, but I think these tools merit a separate post.

Spare time

All work and no play makes Cob a dull snake. I stream videos using mpv and youtube-dl, like everybody else. I used to listen to music with moc, but I switched to cmus a few years ago. Similarly, I've exchanged irssi for weechat to talk to friends. When I feel like getting some news from the world outside, I fire up newsboat with a list of feeds that I find amusing. And for a quick reality check, nothing works better than mop displaying a ticker of my stocks. Its very compact display lured me into creating a permanent ticker on my desktop via conky integration, but I found that it distracted me way too much.

Appendix: bulk rename

For the sake of example, let's create 101 files with touch rha{0..100}.barber, and let's rename them so that the numbers are all three digits (004 instead of 4, 023 instead of 23, and so on). We can do that the easy way or the hard way.

mc gives us the choice between shell patterns (globs) and standard regular expressions (regexes). The file selection dialog (press +) has an option 'Using shell patterns'. Let's select that and perform our self-imposed task in three simple steps: first, we select all files with a single-digit number (+ rha?.*), then rename them with F6 / rha?.* / rha0?.*, and finally we rename all files with a two-digit number with F6 / rha??.* / rha0??.*. Now, that was easy, wasn't it?

ranger doesn't understand globs, but allows the use of regexes with the :filter (zf) and :mark commands. The renaming is handed over to the system-wide editor, i.e., vim in my case, where we can use the substitute command (:s) and vim regexes (which are similar to those used in, for example, sed and perl, but not identical). So let's start by selecting all files with a single-digit number: :mark -rft rha[0-9].b, followed by :bulkrename. The selected files can be easily renamed in vim with :%s/rha/rha0/, followed by ZZ to save the list and apply it. For the next bunch, we mark files again with :mark -rft rha[0-9]..b, :bulkrename, and repeat the above substitution in vim. Not quite as easy as with mc if one is not very familiar with regexes!

nnn is very similar to ranger in this regard. We can select files by the filter function employing either strings (/) or regexes (\). So we type \rha[0-9].b followed by r to open the selected files in vim. The rest is basically the same as for ranger.
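For comparison, and since this is, after all, a post about the command line: the same task can be solved in plain bash with a small loop that pads every number to three digits in a single pass (using /tmp/rha-demo as a scratch directory):

```shell
# create the test files in a scratch directory
mkdir -p /tmp/rha-demo && cd /tmp/rha-demo
touch rha{0..100}.barber

# pad all numbers to three digits in one pass
for f in rha*.barber; do
    n=${f#rha}; n=${n%.barber}                    # extract the number
    new=$(printf 'rha%03d.barber' "$((10#$n))")   # zero-padded name (10# forces base 10)
    [ "$f" = "$new" ] || mv -- "$f" "$new"        # skip files that already have the right name
done
```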

Home Office

The spread of SARS-CoV-2 has made it advisable for many people to work from home. My colleagues and I have been doing that for four weeks now, and it's working very well. For me, the home office isn't new: I've used this option for a decade whenever I have a task at hand requiring particular concentration and focus. Writing papers or proposals is such a task, as is developing and implementing a quantitative model to understand experimental data (that's what lucky physicists do for a living). In fact, in January I was asked by colleagues to help with the development of such a model, which I expected to be challenging, but not as difficult as it actually turned out to be. For most of the time, I was rather cluelessly poking around in a forest of equations and not getting anywhere.

During the last days, I made an effort to refocus on this problem, and not just for a few hours, but for a couple of days: you go to bed with the problem and wake up with it, and there's nothing to distract you from it. This kind of total concentration is simply not possible in the daily office routine, but I can reach it at home, basically returning to my time as a student, when every waking moment was devoted to problem solving. What greatly helps with reaching this trance-like state is having no kids, an understanding wife, and softly purring cats that love to sleep in the chairs to my left and right. The breakthrough occurred after two days, all of a sudden, like a flash. I still had technical problems to solve, but the direction was clear. These are the moments that every scientist cherishes and holds most dear: the intense joy of having solved the problem, of having broken the code. 😌

I realize that not everybody enjoys the same favorable boundary conditions as I do, or even the luxury to compare. And I understand that the situation is very different with young kids instead of cats. 😉 But still, I'm really tired of reading commentaries in the newspapers moaning about the “solitary confinement” and how unbearable it is. Most of them stem from rather young people with a smartphone glued to their right hand and a firm belief in their god-given right to party. Even worse are the characters with a political agenda, bitterly complaining about violations of our constitutional rights and predicting the end of democracy. What unites these two apparently very different groups is their failure to understand even the simplest arithmetic. And yes, there's no need for calculus to understand the simple concept of the exponential spread of a virus.
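Indeed, grade-school arithmetic suffices. Suppose the number of cases doubles every three days; after a single month, that's ten doublings, i.e., roughly a thousandfold increase:

```shell
# 30 days at one doubling every 3 days = 10 doublings
echo $((2**10))    # prints 1024, i.e., a thousandfold increase
```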

More realistic models are based on systems of differential equations similar to the ones describing the zombie apocalypse, where the infection rate of the human population depends on the infection probability when a human and a zombie meet. Similarly, in epidemiology, the spread of an infectious disease is characterized by R0, the basic reproduction number. This number determines how fast the infection spreads (i.e., the slope of the exponential), and if the effective reproduction number decreases with time (which is highly desirable), the curve “flattens”. However, the curve remains an exponential as long as this number stays above 1.
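For concreteness, here is a minimal textbook sketch (the standard SIR model, not necessarily the exact model behind any particular forecast): susceptible (S), infected (I), and recovered (R) individuals in a population of size N evolve according to

```latex
\begin{aligned}
\frac{\mathrm{d}S}{\mathrm{d}t} &= -\beta\,\frac{S I}{N},\\
\frac{\mathrm{d}I}{\mathrm{d}t} &= \beta\,\frac{S I}{N} - \gamma I,\\
\frac{\mathrm{d}R}{\mathrm{d}t} &= \gamma I,
\end{aligned}
```

with infection rate β, recovery rate γ, and R0 = β/γ. In the early phase, S ≈ N and the second equation reduces to dI/dt ≈ (β − γ)I = γ(R0 − 1)I: exponential growth exactly when R0 > 1, in line with the statement above.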

“The greatest shortcoming of the human race is our inability to understand the exponential function.” (A. A. Bartlett)


In a previous post, I've remarked:

“If the distributor commands over virtually unlimited resources, and compression speed is thus not an issue, brotli and zstd are clearly superior to all other choices. That's how we would like to have our updates: small and fast to decompress.”

And not even a year later, we get this announcement. My own measurements indicated a factor of 8 increase in decompression speed, but the Arch team even sees a factor of 14. Great! ☺

There are also a few settings in /etc/makepkg.conf that may greatly accelerate the installation of packages from the AUR. All details can be found in the Arch Wiki, but here are the modifications I'm using in the order of appearance:

# building optimized binaries
CFLAGS="-march=native -O2 -pipe -fstack-protector-strong -fno-plt"
# use all cores for compiling
MAKEFLAGS="-j$(nproc)"
# compile in RAM disk
BUILDDIR=/tmp/makepkg
# use package cache of pacman
PKGDEST=/var/cache/pacman/pkg
# enable multicore compression for the algorithms supporting it
COMPRESSGZ=(pigz -c -f -n)
COMPRESSBZ2=(lbzip2 -c -f)
COMPRESSXZ=(xz -c -z - --threads=0)
COMPRESSZST=(zstd -c -z -q - --threads=0)
# use lz4 as default package format (the command line lz4 does not yet support multi-threading, but it's still faster than anything else)
PKGEXT='.pkg.tar.lz4'

I didn't perform any systematic measurements, but some AUR packages seem to install in seconds where they used to take minutes with the default configuration. YMMV, but it's worth giving it a try.