Unbound Plus

As detailed in a previous post, I'm running Unbound as a local DNS resolver. I've configured it to use DNS over TLS, and while things were a little shaky in the beginning with the few available servers supporting this protocol, I haven't needed to switch back to plain DNS for at least a year. And I'm not using the commercial providers that have decided to jump on the bandwagon, namely, Google (8.8.8.8), Cloudflare (1.1.1.1), and Quad9 (9.9.9.9). I wouldn't touch them except for testing purposes.

However, my initial motive for running a local resolver was not security or privacy, but latency, and DNS over TLS, being based on TCP rather than UDP like plain DNS, definitely doesn't help with that. In fact, unencrypted queries over UDP are generally much faster than encrypted ones over TCP, but the actual latency varies strongly with the server queried.

Dnsperf is the standard tool for measuring the performance of an authoritative DNS server, but it doesn't support TCP, and the patched version is seriously outdated. Flamethrower is a brand-new alternative that looks very promising, but I've gotten inconsistent results from it (I'm pretty sure that was entirely my fault).

The standard DNS query tools dig (part of bind) and drill (part of ldns) don't support TLS, but kdig (part of knot) supposedly does. An alternative is pydig, which I already used two years ago to check whether an authoritative server offers DNS over TLS, and which turned out to be just as helpful in determining the latency of a list of DNS servers (one IP per line). After updating ('git pull origin master'), I've fed this list (called, let's say, dns-servers.txt) to pydig using

while read p; do ./pydig @$p +dnssec +tls=auth ix.de | grep 'TLS response' | awk '{print substr($0, index($0,$10))}'; done < dns-servers.txt

with an explicit (+) requirement for DNSSEC and TLS (or without for plain DNS).
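To rank a longer server list instead of eyeballing the output, the same loop can feed sort. In the sketch below, a stub query() stands in for the actual pydig call (the two sample latencies are merely illustrative), so the pipeline itself can be demonstrated self-contained; in practice, replace the stub with ./pydig @$p +dnssec +tls=auth ix.de.

```shell
# Stub standing in for the real pydig call; the latencies are made up
# for illustration only.
query() {
    case "$1" in
        1.1.1.1) echo "TLS response ... 60 ms" ;;
        9.9.9.9) echo "TLS response ... 103 ms" ;;
    esac
}

# Extract the latency (next-to-last field), tag it with the server, sort.
for p in 1.1.1.1 9.9.9.9; do
    printf '%s %s\n' "$(query "$p" | awk '{print $(NF-1)}')" "$p"
done | sort -n
```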

I've got a few really interesting results this way. For example, Cloudflare is invariably the fastest service available, with a latency of 9 and 60 ms for plain (UDP) and encrypted (TCP) queries here at home, respectively. From pdes-net.org, the situation is different: Cloudflare takes 4 and 20 ms, while dnswarden returns results within 1 and 9 ms, respectively. Insanely fast!

This latter service (where the hell did it come from all of a sudden?) is also very competitive with Google and Quad9 here at home: all of them require about 100 ms to answer TLS requests. That seems terribly slow, but it's not as bad as it sounds. First, I've configured Unbound as a caching resolver, so many, if not most, requests are answered with virtually zero latency. Second, I minimize external requests by making the root zone local – also known as the hyperlocal concept.

Due to this added functionality, I've found it necessary to revamp the configuration. All main and auxiliary configuration files of my current Unbound installation are attached below.

Main configuration files

/etc/unbound/...

.../unbound.conf

include: "/etc/unbound/unbound.conf.d/*.conf"

.../unbound.conf.d/01_Basic.conf

server:
verbosity: 1
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes

use-syslog: yes
do-daemonize: no
directory: "/etc/unbound"

root-hints: root.hints

#trust-anchor-file: trusted-key.key
auto-trust-anchor-file: trusted-key.key

hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
use-caps-for-id: yes

minimal-responses: yes
prefetch: yes
qname-minimisation: yes
rrset-roundrobin: yes

## reduce edns packet size to help big udp packets over dumb firewalls
#edns-buffer-size: 1232
#max-udp-size: 1232

cache-min-ttl: 3600
cache-max-ttl: 604800

include: /etc/unbound/adservers

.../unbound.conf.d/02_Forward.conf

server:
interface: ::0
interface: 0.0.0.0
access-control: ::1 allow
access-control: 2001:DB8:: allow
#access-control: fd00:aaaa:bbbb::/64 allow
access-control: 192.168.178.0/16 allow
verbosity: 1
ssl-upstream: yes

forward-zone:
# forward-addr format must be ip "@" port number "#" followed by the valid public
# hostname in order for unbound to use the tls-cert-bundle to validate the dns
# server certificate.
name: "."
# Servers support DNS over TLS, DNSSEC, and (partly) QNAME minimization
# see https://dnsprivacy.org/jenkins/job/dnsprivacy-monitoring/

### commercial servers for tests

### fully functional (ordered by performance)

### temporarily (2019/11/05) or permanently broken
#forward-addr: 89.234.186.112@853               #dns.neutopia.org

.../unbound.conf.d/03_Performance.conf

# https://www.unbound.net/documentation/howto_optimise.html
server:
# use all cores

# power of 2 close to num-threads
msg-cache-slabs: 8
rrset-cache-slabs: 8
infra-cache-slabs: 8
key-cache-slabs: 8

# more cache memory, rrset=msg*2
rrset-cache-size: 200m
msg-cache-size: 100m

# more outgoing connections
# depends on number of cores: 1024/cores - 50
outgoing-range: 100

# Larger socket buffer.  OS may need config.
so-rcvbuf: 8m
so-sndbuf: 8m

# Faster UDP with multithreading (only on Linux).
so-reuseport: yes
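The comments above encode simple arithmetic from the howto_optimise page: the slab counts should be a power of two close to the thread count, and outgoing-range depends on 1024/cores - 50. A sketch of that derivation, with the core count hardcoded to 4 so the output is reproducible (on a live system, use $(nproc)):

```shell
# Derive the tuning values from the core count, per the comments above.
cores=4    # hardcoded for reproducibility; normally: cores=$(nproc)

# Largest power of two not exceeding the core count.
slabs=1
while [ $((slabs * 2)) -le "$cores" ]; do slabs=$((slabs * 2)); done

echo "num-threads: $cores"
echo "msg/rrset/infra/key-cache-slabs: $slabs"
echo "outgoing-range: $((1024 / cores - 50))"
```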

.../unbound.conf.d/04_Rootzone.conf

# “Hyperlocal” configuration.
# see https://forum.turris.cz/t/undbound-rfc7706-hyperlocal-concept/8761
# furthermore
# https://forum.kuketz-blog.de/viewtopic.php?f=42&t=3067
# https://tools.ietf.org/html/rfc7706#appendix-A
# https://tools.ietf.org/html/rfc7706#appendix-B.1
# https://www.iana.org/domains/root/servers

auth-zone:
name: .
for-downstream: no
for-upstream: yes
fallback-enabled: yes
#master: 198.41.0.4                   # a.root-servers.net
master: 199.9.14.201                   # b.root-servers.net
master: 192.33.4.12                    # c.root-servers.net
#master: 199.7.91.13                  # d.root-servers.net
#master: 192.203.230.10               # e.root-servers.net
master: 192.5.5.241                    # f.root-servers.net
master: 192.112.36.4                   # g.root-servers.net
#master: 198.97.190.53                # h.root-servers.net
#master: 192.36.148.17                # i.root-servers.net
#master: 192.58.128.30                # j.root-servers.net
master: 193.0.14.129                   # k.root-servers.net
#master: 199.7.83.42                  # l.root-servers.net
#master: 202.12.27.33                 # m.root-servers.net
master: 192.0.47.132                   # xfr.cjr.dns.icann.org
master: 192.0.32.132                   # xfr.lax.dns.icann.org

zonefile: "root.zone"

Auxiliary configuration files

/etc/cron.weekly/...

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly

curl -sS -L --compressed -o /etc/unbound/adservers.new "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=unbound&showintro=0&mimetype=plaintext"

if [ $? -eq 0 ]; then
    mv /etc/unbound/adservers /etc/unbound/adservers.bak
    mv /etc/unbound/adservers.new /etc/unbound/adservers
    unbound-checkconf >/dev/null
    if [ $? -eq 0 ]; then
        systemctl restart unbound.service
    else
        unbound-checkconf
    fi
fi

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly

###[ root.hints ]###

curl -sS -L --compressed -o /etc/unbound/root.hints.new https://www.internic.net/domain/named.cache

if [ $? -eq 0 ]; then
    mv /etc/unbound/root.hints /etc/unbound/root.hints.bak
    mv /etc/unbound/root.hints.new /etc/unbound/root.hints
    unbound-checkconf >/dev/null
    if [ $? -eq 0 ]; then
        rm /etc/unbound/root.hints.bak
        systemctl restart unbound.service
    else
        unbound-checkconf
        mv /etc/unbound/root.hints /etc/unbound/root.hints.new
        mv /etc/unbound/root.hints.bak /etc/unbound/root.hints
    fi
fi

/etc/systemd/system/unbound.service.d

I've discarded my custom snippet for systemd to get the DNS anchor. Archlinux does provide the anchor automatically as a dependency of unbound (dnssec-anchors), so why complicate things. For other distributions, however, the snippet may still be useful, so here it is:

[Service]
ExecStartPre=/usr/bin/sudo -u unbound /usr/bin/unbound-anchor -a /etc/unbound/trusted-key.key

山葵 (Wasabia japonica)

My wife had to attend to urgent family matters and went home for a few weeks. When she asked me if there's anything I'd like her to bring back, well, of course: Wasabi! Now, most of you will have been already in Sushi shops or Japanese restaurants, and you thus may believe that you know what I'm talking about. You don't.

Personally, I've been served genuine wasabi in only two places in Japan, one in Osaka, one in Tokyo, both places I normally wouldn't even dream of visiting since I don't want to spend my monthly income in one evening. But that's where I learned what wasabi actually is – not the colored horseradish one gets almost everywhere, even in Japan (and certainly in Berlin), but one of the most delicious and stimulating spices and condiments I've ever had the pleasure to experience.

My wife bought a small root as well as a おろし金 (shark-skin oroshigane), since wasabi is enjoyable only when very finely grated. But at the airport, she was held back by the authorities, since one cannot possibly take the national treasures of Japan abroad without registering. 😱

Well, after filling out a phytosanitary certificate and getting it officially stamped, she was allowed to board the plane to Helsinki. 😌

We are now having dinner and are enjoying the fresh wasabi together with good bread, butter, and smoked salmon (and beer). 😋 美味しい (Oishii)! 乾杯 (Kampai)!

InspIRCd 3

All of a sudden, the PdeS IRC channel stopped working. As inexplicable as this disruption first appeared, the reasons are obvious in hindsight. What happened?

On August 18, apt offered an InspIRCd update, dutifully asking whether I wanted to keep the configuration files. I didn't realize at that moment that the update was in fact the upgrade from version 2 to 3 I had been waiting for since May. As a matter of fact, this upgrade is disruptive and requires one to carefully review and modify the configuration of InspIRCd. Well, I failed to do that, and I also failed to notice that the InspIRCd service didn't restart after the update.

Sometimes people jokingly remark that I should work as a system or network admin rather than as a scientist. This incident shows that I'm not qualified for such a job. I'm way too careless.

In any case, I now had to find the reason for the InspIRCd service quitting. It wasn't too difficult, but it was a multi-step procedure. The first obstacle was an outdated AppArmor profile, which allowed InspIRCd to write to /run, but not to /run/inspircd. That was easily fixed.

The second was the TLS configuration of our channel. I took the opportunity to renew our certificate and to altogether strengthen the security of the channel, but it took me a while to realize that the identifier in the bind_ssl and sslprofile_name tags has to be one and the same (it isn't in the documentation!).

<bind
port="6697"
type="clients"
ssl="pdes">

<module name="ssl_gnutls">

<sslprofile
name="pdes"
provider="gnutls"
certfile="cert/cert.pem"
keyfile="cert/key.pem"
dhfile="cert/dhparams.pem"
mindhbits="4096"
outrecsize="4096"
hash="sha512"
requestclientcert="no"
priority="PFS:+SECURE256:+SECURE128:-VERS-ALL:+VERS-TLS1.3">

Well, the channel is up again, more secure than ever. Fire away. 😅

Debian 10

Buster is stable, Bullseye is the new testing.

sed -i 's/buster/bullseye/g' /etc/apt/sources.list
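Before editing in place, the substitution can be previewed on a scratch copy; the sample sources.list entry below is made up for illustration.

```shell
# Dry run of the buster→bullseye switch on a throwaway file.
f=$(mktemp)
echo 'deb http://deb.debian.org/debian buster main' > "$f"   # sample entry
sed 's/buster/bullseye/g' "$f"   # preview; add -i only once the output looks right
rm -f "$f"
```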

Opposite extremes

I have a CentOS virtual machine because I had to install it on a compute server in my office, and I keep it since it's such an interesting antithesis to the rolling-release distribution I prefer for my daily computing environment. CentOS is by a large margin the most conservative of all Linux distributions, and it's sometimes useful for me to have access to older software in its natural habitat. Just look at this table comparing the versions of some major packages on fully updated Arch and CentOS 7 installations:

                Current                         Arch                            CentOS
linux           5.1.15                          5.1.15                          3.10
libc            2.29                            2.29                            2.17
gcc             9.1.0                           9.1.0                           4.8.5
systemd         242                             242                             219
bash            5.0                             5.0                             4.2
openssh         8.0p1                           8.0p1                           7.4p1
python          3.7.3                           3.7.3                           2.7.5
perl            5.30.2                          5.30.2                          5.16.3
texlive         2019                            2019                            2012
vim             8.1                             8.1                             7.4
xorg            1.20.5                          1.20.5                          1.20.1
firefox         67.0.1                          67.0.1                          60.2.2
chromium        75.0.3770.100                   75.0.3770.100                   73.0.3683.86

You can easily see why I prefer Arch over CentOS as a desktop system.

But CentOS has its merits, particularly for servers. There's no other distribution (except, of course, its commercial sibling RHEL) with a longer support span: CentOS 7 was released in July 2014 and is supported until the end of June 2024. And that's not just partial support as with the so-called LTS versions of Ubuntu.

Now, I've noticed that CentOS keeps old kernels after updates, befitting its highly conservative attitude. However, in view of the very limited hard disk space I typically give my virtual machines (8 GB), I got a bit nervous when I saw that kernels really seemed to pile up after a few updates. There were five of them! Turned out that's the default, giving the word “careful” an entirely new meaning.

But am I supposed to remove some of these kernels manually? No. I was glad to find that the RHEL developers had already recognized the need for a more robust solution:

yum install yum-utils
package-cleanup --oldkernels --count=3


And to make this limit permanent, I just had to edit /etc/yum.conf and set

installonly_limit=3

Well thought out. 😉
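As a footnote, the same edit can be scripted idempotently; a sketch operating on a temporary copy (on a real CentOS box, the target would be /etc/yum.conf):

```shell
# Set installonly_limit=3, whether or not the key already exists.
conf=$(mktemp)
printf '[main]\ngpgcheck=1\ninstallonly_limit=5\n' > "$conf"   # sample config

if grep -q '^installonly_limit=' "$conf"; then
    sed -i 's/^installonly_limit=.*/installonly_limit=3/' "$conf"
else
    echo 'installonly_limit=3' >> "$conf"
fi

grep '^installonly_limit' "$conf"
rm -f "$conf"
```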

What you don't want to use, revisited

A decade ago, I advised my readers to stay away from OpenOffice for the preparation of professional presentations, primarily because of its poor support for vector graphics formats at that time. In view of the difficulties we recently encountered when working with collaborators on the same document with different Office versions, I was now pinning great hopes on LibreOffice for the preparation of our next project proposal. First, I thought that with platform-independent open-source software, it should be straightforward to guarantee that all collaborators use the same version. Second, SVG support has been much improved in recent versions (>6) of LibreOffice, and I believed that we should finally be able to import vector graphics directly from Inkscape into an Office document. Third, the TexMaths extension allows one to use LaTeX for typesetting equations and to insert them as SVG, promising much better math rendering in a fraction of the time required by the native equation editor. Fourth, Mendeley offers a citation plugin for LibreOffice, which I hoped would make managing the bibliography and inserting citations as simple as with BibTeX in a LaTeX document.

Well, all of these hopes were in vain. What we (I) had chosen for preparing the proposal (the latest LibreOffice, the TexMaths extension, and the Mendeley plugin) proved to be one of the buggiest software combos of all time.

ad (i): Not the fault of the software, but still kind of sobering: our external collaborator declared that he had never heard of LibreOffice and wouldn't know how to install it. Well, we thought, now only two people have to stay compatible with each other. We installed the same version of LibreOffice (first Still, then Fresh), I on Linux, he on Windows. But the different operating systems probably had little to do with what followed.

ad (ii): I was responsible for all display items in the proposal, and I used a combination of Mathematica, Python, Gimp, and Inkscape to create the seven figures contained in it. The final SVG, however, was always generated by Inkscape. I experienced two serious problems with these figures. First, certain line-art elements such as arrows were simply not shown in LibreOffice or in PDFs created by it. Second, the figures tended to “disappear”: when trying to move one of them, another would suddenly turn invisible. The caption numbering showed that they were still part of the document, and simply inserting them again messed up the numbering. We managed to find one of these hidden figures in the nowhere between two pages (like being trapped between dimensions 😱), but others stayed mysteriously hidden. We had to go back to the previous version to resolve these issues, and in the end I converted all figures to bitmaps. D'oh!

ad (iii): I wrote a large part of my text in one session and inserted all symbols and equations using TexMaths. It worked perfectly, and after saving the document, I went home, quite satisfied with my achievements that day. When I tried to continue the next day, LibreOffice told me the document was corrupted and was subsequently unable to open it. I finally managed to open it with TextMaker, which didn't complain, but also didn't show any of the equations I had inserted the day before. Well, I saved the document anyway to at least recover the text. Opening the file saved by TextMaker with Writer worked, and even all symbols and equations showed up as SVG graphics, but without the possibility of editing them with TexMaths.

ad (iv): Since my colleague had previously used the Mendeley plugin for Word, it was he who had the task of inserting our various references (initially about 40). That seemed to work very well, although he found the plugin irritatingly slow (40 references take something like a minute to process). However, when he tried to enter additional references a few days later, Mendeley claimed that the previous ones had been edited manually and displayed a dialogue asking whether we would like to keep this manual edit or discard it. Regardless of the choice, the previous citations were now generated twice. And with any further citation, twice more, so that after adding three more citations, [1] became [1][1][1][1][1][1][1][1]. The plugin also took proportionally longer to process the file, so in the last example, it took about 10 min. Well, we went one version back. But what had worked so nicely the day before was now inexplicably broken. It turned out that a simple sync of Mendeley (which is carried out automatically when you start the software) can be sufficient to trigger this behavior. We finally inserted the last references manually, overriding and actually irreversibly damaging the links between the citations and the bibliography.

In the final stages, working on the proposal felt like skating on atomically thin ice (Icen 😎). We always expected the worst, and instead of concentrating on the content, we treated the document like a piece of prehistoric art which could be damaged by anything, including just viewing the document on the screen. That feeling was very distracting. I would have loved to correct my position, really, but LibreOffice in its present state is clearly no alternative to LaTeX for preparing the documents and presentations required in my professional environment. I will check again in another ten years. 😉

In principle, I would have no problem with being solely responsible for the document if I could use LaTeX and would get the contributions from the collaborators simply as plain text. It is they who have a problem with that, since they don't know what plain text is. In this context, I increasingly understand the trend toward collaborative software: it's not that people really work on a document simultaneously; it's the guarantee that everybody works on it with the same software that counts.

Functions with default values

Suppose you would like to have a command generating a secure password for an online service at the command line. You google for that and find 10 ways to generate a random password. At the end of his article, the author presents the supposedly ideal way to generate a secure password:

date | md5sum

The author of the article (Lowell Heddings, the founder and CEO of How-To Geek) states:

I’m sure that some people will complain that it’s not as random as some of the other options, but honestly, it’s random enough if you’re going to be using the whole thing.

Random enough? Sure, the 'whole thing' looks random enough:

9ec463af3db95e8e44de84417d9f408f

but the look is deceptive: this is in fact an extremely weak password. To understand why, let's look at the output of the 'date' command:

↪ date
Sun 12 May 2019 04:33:26 PM CEST


We see that without additional parameters (like +"%N"), 'date' gives us exactly one password per second. How many passwords do we get in this way? Well,

 ↪ date +"%s"
1557666649


i.e., 1,557,666,649 seconds have passed since 00:00:00, Jan 1, 1970 (the Unix epoch), and that's how many passwords we get.

Now, the possibility to order pizza online came much later, namely, on August 22, 1994.

↪ date -d 19940822 +"%s"
777506400
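The arithmetic that follows can be double-checked directly in the shell, with both timestamps hardcoded from the date outputs above:

```shell
# Password-space arithmetic, using the two epoch seconds quoted in the text.
start=777506400    # 1994-08-22 (CEST), from date -d 19940822 +"%s"
end=1557666649     # from date +"%s" above
n=$((end - start))

echo "passwords since 1994: $n"
awk -v n="$n" 'BEGIN{printf "complexity: %.1f bits\n", log(n)/log(2)}'
echo "62^5 = $((62 ** 5))"   # a 5-character, 62-symbol space is of the same order
```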


That leaves us with 780,160,249 passwords since this memorable day in 1994, or a complexity of 30 bits, corresponding to a 5-character password with a character space of 62. Let's get one of these and see how difficult it is to crack:

↪ pwgen -s 5 -1
p9iCN


Now, even my ancient GTX650Ti with its modest MD5 hashing performance of 1.5 GH/s cracks this password in 5 s (note that an RTX2080 delivers 36 GH/s...):

○ → hashcat -O -a 3 -m 0 myhashes.hash ?a?a?a?a?a
hashcat (v5.1.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
- Device #1: GeForce GTX 650 Ti, 243/972 MB allocatable, 4MCU

0b91091d40a8623891367459d5b2a406:p9iCN

Session..........: hashcat
Status...........: Cracked
Hash.Type........: MD5
Hash.Target......: 0b91091d40a8623891367459d5b2a406
Time.Started.....: Mon May 13 12:48:58 2019 (5 secs)
Time.Estimated...: Mon May 13 12:49:03 2019 (0 secs)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 514.0 MH/s (6.21ms) @ Accel:64 Loops:47 Thr:1024 Vec:2
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 2328363008/7737809375 (30.09%)
Rejected.........: 0/2328363008 (0.00%)
Restore.Point....: 24379392/81450625 (29.93%)
Restore.Sub.#1...: Salt:0 Amplifier:0-47 Iteration:0-47
Candidates.#1....: s3v\, -> RPuJG
Hardware.Mon.#1..: Temp: 44c Fan: 33%


But actually, it's even worse: instead of cracking the hash, one can easily precompute all possible outputs of the 'date | md5sum' command and thus create a dictionary containing these “passwords”. I could start right away:

for (( time=777506400; time<=1557666649; time++ )); do date -d@$time | md5sum | tr -d "-"; done > lowell_heddings_passwords.txt

On my desktop with its Xeon E3 v2, this command computes one million passwords in about half an hour, i.e., I'd need about 17 days to compute all passwords back to 1994. A corresponding program running on the GPU would cut this down to seconds. Note that the resulting list of “random enough” passwords is static, i.e., it is indeed a dictionary, and not even a particularly large one.

Lowell Heddings himself mentions several alternative ways to generate a password in his article before turning to the worst possible solution. But if we desire cryptographically secure solutions, even apparently innocuous commands are beset with difficulties, as pointed out by, for example, the carpetsmoker (better carpets than mattresses). In the end, it all boils down to the following three choices, which are available on virtually any Linux installation.

If we limit ourselves to a character space of 62:

cat /dev/urandom | base64 | tr -d /=+ | head -c 25; echo
openssl rand -base64 25 | tr -d /=+ | head -c 25; echo
gpg2 --armor --gen-random 1 25 | tr -d /=+ | head -c 25; echo

If we insist on having almost all printable characters (which often calls for trouble):

cat /dev/urandom | base91 | head -c 25
openssl rand 25 | base91 | head -c 25; echo
gpg2 --gen-random 1 25 | base91 | head -c 25; echo

One could, in principle, also utilize dedicated password generators, such as Theodore Ts'o's 'pwgen', Adel I. Mirzazhanov's 'apg', or haui's 'hpg':

pwgen -s 25
apg -a 1 -M ncl -m 25 -x 25
hpg --alphanum 25

All of these ways are cryptographically equivalent in the sense that the entropy of the passwords generated by any of them asymptotically approaches the theoretical value ($\log_2(62) \approx 5.954$ bits per character) when you average over many (10,000,000 or more).
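The theoretical value just quoted can be reproduced in one line (awk's log() is the natural logarithm, hence the ratio):

```shell
# Entropy per character of a 62-symbol alphabet: log2(62) = ln(62)/ln(2).
awk 'BEGIN{printf "%.3f bits per character\n", log(62)/log(2)}'
```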
In the present context (functions with default values), the generators do not offer any advantage, but only add unnecessary complexity. Now, whatever you chose as your favorite, you don't want to memorize the command or rely on the history of your favorite shell. One could define an alias with the password length as a parameter, but I prefer to use a function, with a default length of 25 characters and the option to change this value:

↪ pw62
mPcSU1c3lBTC7gChJ4MBw1sZW

↪ pw62 8
Yjs6NhYM

↪ pw62 32
cfn4KKugWHhOBF8qn6SO5Rj7uC2LksnK

Here's how to implement this functionality for the three major shells. Note the very elegant way in which a default value can be implemented within the bash. Update: haui reminded me that the zsh is a drop-in replacement for the bash and thus of course implements all bash variable substitution, particularly ${var:-default}. Hence, we can use the same syntax for the bash and the zsh, and only the fish needs the comparatively clumsy construct shown below. 😎

bash

function pw62
{
cat /dev/urandom | base64 | tr -d /=+ | head -c ${1:-25}; echo
}

fish

function pw62
if set -q $argv
set length 25
else
set length $argv
end
cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
end

zsh (alternative to the bash syntax)

function pw62()
{
if [ "$1" != "" ]; then
integer length=$1
else
integer length=25
fi

cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
}

Oid's Graffel

I generally like Debian, as documented by the fact that it's my Linux distribution of choice for pdes-net.org, for the two compute servers at the office, for the Mini (which is currently out of order due to a defunct SSD), and for the virtual machine that I've reserved for online banking (biig mistake...see below). Since the stable version of Debian delivers only outdated software, I'm using 'testing' as the base, and if needed, I also install packages from 'sid'.

On my main systems, however, I don't use Debian, but Archlinux. I have several good reasons for this decision. One of them is that packages that belong in a museum are not exclusive to Debian Stable, but are also regularly found in Testing or Sid. One example is 'look', which I recently reported to be a fast way of finding an entry in a huge file. The version of look in Debian, however, contains a bug that was fixed ten years ago. Except, of course, in Debian (and all derivatives).

But what are 10 years if you can have 20? In 2010, c't presented a Perl script for downloading and processing the transactions from an account at Deutsche Bank. The script served me well for several years, but it broke a number of times due to changes in the web interface and in Perl itself. I was able to fix the script the first four times, but the last time, about five years ago, I had to ask haui for help. And a few weeks ago, it broke completely, and I decided to let it go and extend my old bash script to process the csv files downloaded from Deutsche Bank.

Part of one of the new scripts is the following oneliner:

tail -n +4 $current_rates | iconv -f ISO8859-1 -t utf8 | awk '{split($0,a,";"); print a[14]}' | sed 's/,/./g' | bc -l | xargs printf %.2f"\n" | tr '\n' ' ' | awk '{print strftime("%Y-%m-%d")"\t"$7"\t"$6"\t"$1"\t"$5"\t"$4" \t"$2" \t"$3}' > $cleaned_rates

It worked perfectly on my notebook running Archlinux, but in the virtual machine reserved for online banking, I got the following error message:

mawk: line 2: function strftime never defined

Hmmm...

$ awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan

Are you kidding me? That's rather extreme even by Debian standards. Particularly considering that version 1.3.4 was published in 2009, and strftime was added to it in 2012. But surely, sid has a more recent version...NOT 😒

Even CentOS 6 came with mawk 1.3.4. Shame on you, Debilian!

Well, the only choice was to install gawk, and in this particular case, the performance hit doesn't matter at all. But why isn't that the default, if the Debilians have chosen to neglect mawk? And why do they do that anyway?
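An alternative that works even with mawk 1.3.3 is to avoid strftime() altogether and inject the timestamp from date(1) as an awk variable; a minimal sketch:

```shell
# strftime()-free variant: generate the date outside awk and pass it in with -v.
# Runs on any awk, including mawk 1.3.3.
today=$(date +%Y-%m-%d)
echo "42.00" | awk -v d="$today" '{print d "\t" $1}'
```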

Well, whatever. The scripts are working now. 😉

Saving Nexie

We own a first-generation Nexus 7 from 2012, which my wife affectionately calls Nexie because of its compact form factor. It impressed me with its high build quality, particularly considering its modest price of €199. Its performance was more than satisfactory with the stock Android 4.1, but when it was updated to Android 5.1 in 2015, it was reduced to an unresponsive brick, no matter what I tried (including a factory reset). We finally decided to retire it and use it as a, well, wall clock. But recently, the clock on the Nexus was often running slow by more than an hour. The reason was resource-intensive background processes updating the various Google apps installed by default.

Can we resurrect the Nexus by flashing it with an alternative ROM such as LineageOS? That's at least what I'm going to try. Note that I'm fairly ignorant with respect to Android and its devices. But anyway, let's get going.

1st step

I search the web for “Nexus 7 (2012) LineageOS”. To my relief, there's general agreement that our Nexus can run LineageOS 14.1, corresponding to Android 7.1. I thus download the image for our (GSM free) version of the Nexus 7. I also find some instructions regarding the installation. I learn from them that I have the option to install the Google Apps (particularly the play store) or not.

2nd step

I'm in favor of installing a pure LineageOS system free of Google Apps, and to use F-Droid instead of the Google play store, but my wife pleads for the latter. Since she's the primary user, I look at the options on OpenGapps and download the pico build for ARM 32 bit, Android 7.1.

3rd step

To install the custom ROM and the Google Apps, I need a custom recovery image such as the one provided by TWRP. I download the latest version for my Nexus.

4th step

Prepare your device, they say. All right, I tap the 'Build Number' under 'About Phone' in 'Settings' seven times and thus become a developer (not a joke, it really works that way 😲). I then scroll down and enable USB debugging.

5th step

The instructions mention the commands adb and fastboot, which I find to be contained in the package android-tools. I thus install these tools on my Fujitsu Lifebook:

sudo pacman -S android-tools

6th step

I use the USB cable of my Kobo reader (a conventional microUSB-to-USB cable) to connect the Nexus to my Lifebook and choose MTP in the USB dialog on the Nexus.

7th step

All right, now it comes. (In hindsight, I could certainly do better if I tried a second time. But anyway: it worked. 😎)

# adb reboot bootloader

Yup, the Nexus boots and is now in a kind of repair mode. 😊

Now as root:

$ fastboot oem unlock
$ fastboot flash recovery twrp-3.3.0-0-grouper.img

A subsequent 'adb reboot bootloader' doesn't work (I now believe that 'fastboot reboot' would have). I reboot the Nexus manually by navigating with the volume and power keys. I then switch in the same way to recovery mode, upon which TWRP starts up.

# adb push /home/cobra/Downloads/lineage-14.1-20171122_224807-UNOFFICIAL-aaopt-grouper.zip /sdcard/
# adb push /home/cobra/Downloads/open_gapps-arm-7.1-pico-20190426.zip /sdcard/

Next, as found in the instructions, I navigate in TWRP to the Wipe menu. For some reason, wiping fails, and I'm stuck in a boot loop. 😨

I search the web and find that boot loops are rather common. A recommended solution is to either erase or format the userdata:

$ fastboot erase userdata
$ fastboot format userdata

but that doesn't do anything (it just tells me it's <waiting for device>). Only after holding power/volume-down for 10 s do I see the repair menu, and when going to recovery mode, TWRP seems to finish what it had tried to do. Phew!

I have no idea what went wrong there, nor precisely how it was corrected. But I found one statement in the web that gave me courage: “as long as your device does anything when switching it on, it is NOT bricked.”

The rest is easy: I go to install, select LineageOS and Google Apps, install and reboot.

The result

Much better than I had hoped for. The interface reacts instantaneously, animations run smoothly, and apps start fast. It feels as good as new. Well, my wife says: even better. ☺

Search package providing a certain command

I've posted a short note on this topic almost exactly ten years ago, and it's time for an update. The situation: you've heard or read about a certain tool and want to install it, but you can't find it no matter how hard you try.

My first advice: don't search and install via graphical applications. “Software centers” popular in consumer distributions may not show command line applications at all, so if you've read my last post and search for dc, you won't find what you are looking for.

Second: not every command comes in a package with the same name. For example, in Archlinux, dc is bundled with bc, and it is the latter (much more popular) application which gives the package its name.

To master such situations, it's time to leave graphical software centers behind and to learn a few basics about the actual package manager underneath. As an example, I'm showing a search for dig and drill, each of which is contained in a differently named package, with the name of these packages depending (as always) on the distribution.

Archlinux

pacman -Fs dig
bind-tools
pacman -Fs drill
ldns

Debian

wajig whichpkg /usr/bin/dig
dnsutils
wajig whichpkg /usr/bin/drill
ldnsutils

CentOS/Fedora

yum/dnf provides /usr/bin/dig
bind-utils
yum/dnf provides /usr/bin/drill
ldns
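The commands above could be folded into a single helper that dispatches on whichever package manager is present. A hypothetical convenience wrapper, not part of any distribution; the per-distribution commands are the ones listed above:

```shell
# Hypothetical wrapper: look up the package that ships a command.
whichpkg() {
    if command -v pacman >/dev/null 2>&1; then
        pacman -Fs "$1"                    # Archlinux
    elif command -v wajig >/dev/null 2>&1; then
        wajig whichpkg "/usr/bin/$1"       # Debian
    elif command -v dnf >/dev/null 2>&1; then
        dnf provides "/usr/bin/$1"         # Fedora
    elif command -v yum >/dev/null 2>&1; then
        yum provides "/usr/bin/$1"         # CentOS
    else
        echo "no supported package manager found" >&2
        return 1
    fi
}
```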

OpenSUSE

Reportedly, zypper offers the same functionality.

Of the big six, that leaves Gentoo and Slackware. If you use these or a distribution whose package manager is not covered here but offers the desired functionality, send me a note.

Update: An attentive reader reminded me of the Pacman Rosetta, which includes OpenSUSE and Gentoo. 😎