# The five dimensions of heat

Most people love to travel. I don't. First of all, I dislike the modern way of transportation, which reduces travel to a mere logistics problem, namely, transporting human cargo at minimum cost. Second, I'm a creature of habit, and traveling inevitably interferes with my habits. And third, away from my natural habitat I miss the native diet of a chilivore.

It's not that I want every dish to be hot as hell, but I can't stand the styrofoam taste of the stuff one gets for food in airplanes and related places. Heat here doesn't refer to the physical quantity, but to the sensation experienced when consuming certain spices, which is commonly also called spiciness, hotness, or pungency. This quality is usually associated only with chili, but that's a fairly one-dimensional view.

## Chili

“The food of the true revolutionary is the red pepper, and he who cannot endure red peppers is also unable to fight,” said Mao Zedong (毛澤東), who was born in the Hunan province of China, home of one of the eight great traditions of Chinese cuisine and well known for its liberal use of hot chilis.

The active substance in all chili peppers is capsaicin. The Scoville scale provides a measure of the amount of capsaicin in a given plant and ranges from 0 to 16,000,000 Scoville heat units (SHU). The hottest chili on earth is currently the Carolina Reaper with a breathtaking value of 1,569,300 SHU (for reference: the spiciness of the original red Tabasco is no higher than 3,500 SHU). Now, this kind of hyper-hot chili is mainly a fetish for chili heads (like myself) and is commercially valuable only as a tourist attraction (see here). Nobody in their right mind would use such a designer chili for actual food. The hottest variety I've seen in authentic indigenous food is the bird's eye chili, which is particularly popular in Thai food and scores around 100,000 SHU. I haven't seen anything substantially hotter in Mexico, but then I've never been to Yucatán, where people reportedly eat habaneros (up to 350,000 SHU) for breakfast.

## Black pepper and ginger

Both are known for adding flavor rather than heat. But what is not commonly known is that both of these spices contain substances that are chemical relatives of capsaicin, have an analogous effect, and can be measured on the same scale. The active substance in black pepper, for example, is called piperine and scores 100,000 SHU. Not far behind is gingerol in ginger with 60,000 SHU. Even considering that the actual plants contain only a few percent of these active substances, they can be surprisingly hot when used liberally.

A completely different kind of hotness is produced by allyl isothiocyanate, a substance contained in mustard seeds as well as in horseradish and wasabi, which affects the nose rather than the throat. Personally, I don't like the popular chili mustards that attempt to combine the distinct types of heat offered by chili and mustard, but many people love them for barbecue. Furthermore, many popular curry powder recipes combine mustard seeds with chili. In any case, mustard seeds are part of human food everywhere in the world.

## Sichuan Pepper

Has nothing to do with any other pepper (particularly not with the very similar looking black pepper, as you can see above). Contains hydroxy α-sanshool, which has a unique effect unlike anything experienced with ordinary pepper, chili, or mustard. In the words of Harold McGee in his book On Food and Cooking, the sanshools “produce a strange, tingling, buzzing, numbing sensation that is something like the effect of carbonated drinks or of a mild electric current (touching the terminals of a nine-volt battery to the tongue).” Used, not surprisingly, mostly in Sichuan (四川菜) food such as mapo doufu (麻婆豆腐) – my absolute favorite and IMHO the undisputed crown of Chinese cuisine.

## Garlic/Onions

Raw onions and even more so raw garlic develop an intense heat due to the substance allicin. When consumed in large quantities, the effect can be rather overwhelming. To give you at least an idea of what I'm talking about, have a look at the main vegetable side dish you get when ordering the traditional bulgogi (불고기) in Korea:

The meat coming with this friendly offer is marinated with a paste containing, among other delicious ingredients, loads of garlic. 😆 And the other side dishes consist of a green onion salad and (rather mild) chili peppers. 😵 After enjoying this course, I wasn't surprised to hear that onions and garlic have their own pungency standard, namely, the pyruvate scale.

It requires only these few basic ingredients to create the infinite variety of spicy food all around the world, from Thailand to the Caribbean islands, from Mexico to Ethiopia, from Cajun country to Korea. Sometimes, we may find the taste disagreeable or too extreme, like the eternal wasabi and ginger in Japan, and the endless garlic and onions in Korea. Don't be shy, vocalize your needs. I got hot chilis in Japan and ginger in Korea just by asking.

A spicy new year to all of you! 😊

# Windows 10

After uninstalling the antivirus scanner (Avast), I plugged in the stick, clicked on setup.exe, and off it went. But after 15% installation progress:

Error: 0x8007025D-0x2000C
The installation failed in the SAFE_OS phase with an error during APPLY_IMAGE operation.

Just below the error message is a link that can neither be clicked nor copied. Now that's usability! In any case, the suggestions on this page aren't helpful at all, but send users experiencing this error message down the wrong track. Fortunately, third-party pages such as techjourney and the windows club do much better in this respect, in that they have the most likely reason at the top of their list: corrupted installation media. And in fact, when I simply let the Media Creation Tool download the files, the upgrade works flawlessly. It didn't even take 3 h including the download, which was much faster than I had expected.

You see how easy the upgrade is – even for someone who hasn't actively used Windows for 15 years. Don't be one of those pathetic figures who are eternally whining and bawling that they have a god-given right to use Windows XYZ until the end of time, and who very loudly express the opinion that Microsoft must be condemned by international (or at least European) law to keep the OS in question alive. Get a grip on yourself, do the update, and deal with it, for Pete's sake. Or switch to OpenBSD or any other of these geekish systems. You could also buy a Mac, if you insist. But don't act like a newborn.

For us, Windows 10 itself is not entirely new, since earlier this year we purchased a Lenovo Miix 630 to accompany my wife on her trip to Japan. We got this 'Windows on ARM' detachable for €444, complete with a back-lit type cover and a pen, 8 GB of RAM, and LTE, allowing her to access the internet from home without having to search for places offering public wifi. The Miix turned out to be very versatile and fun to use, and it has an almost unbelievable battery life in excess of 20 h thanks to its Snapdragon 835 processor (a mid-range smartphone SoC). What I also like is the rolling-release concept of Windows 10, which guarantees that the device isn't obsolete after at most three years, as is customary for Android gadgets. It's a pity that this interesting concept is so unpopular. Lenovo has already stopped production of the Miix, and there aren't any others like it (the Surface Pro X from Microsoft costs more than three times as much).

# Unbound Plus

As detailed in a previous post, I'm running Unbound as a local DNS resolver. I've configured it to use DNS over TLS, and while things were a little shaky in the beginning with the few available servers supporting this security protocol, I haven't needed to switch back to plain DNS for at least a year. And I'm not using the commercial providers that have decided to jump on the bandwagon, namely, Google (8.8.8.8), Cloudflare (1.1.1.1), and Quad9 (9.9.9.9). I wouldn't touch them except for testing purposes.

However, my initial motive for running a local resolver was not security or privacy, but latency, and DNS over TLS, being based on TCP instead of UDP like plain DNS, definitely doesn't help with that. In fact, unencrypted queries over UDP are generally way faster than encrypted ones over TCP, but the actual latency can vary strongly depending on the server queried.

Dnsperf is the standard tool for measuring the performance of an authoritative DNS server, but it doesn't support TCP, and the patched version is seriously outdated. Flamethrower is a brand-new alternative that looks very promising, but I got inconsistent results from it (I'm pretty sure that was entirely my fault).

The standard DNS query tools dig (part of bind) and drill (part of ldns) don't support TLS, but kdig (part of knot) supposedly does. An alternative is pydig, which I used two years ago to check whether an authoritative server offers DNS over TLS, and which turned out to be just as helpful in determining the latency of a list of DNS servers (one IP per line). After updating it ('git pull origin master'), I fed this list (called, let's say, dns-servers.txt) to pydig using

while read p; do ./pydig @$p +dnssec +tls=auth ix.de | grep 'TLS response' | awk '{print substr($0, index($0,$10))}'; done < dns-servers.txt

with an explicit (+) requirement for DNSSEC and TLS (or without for plain DNS).
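If you just want rough per-query numbers without parsing pydig's output, a tiny timing wrapper does the job. This is only a sketch added for illustration – `measure` is a made-up helper name, and the kdig invocation in the comment is an assumption (it requires a reasonably recent knot):

```shell
#!/bin/bash
# Minimal latency harness (a sketch): time an arbitrary query command
# and print the elapsed milliseconds. Substitute your own client, e.g.
# the pydig call above, or something like: measure kdig @9.9.9.9 +tls ix.de
measure () {
    local start end
    start=$(date +%s%N)        # GNU date: nanoseconds since the epoch
    "$@" > /dev/null 2>&1
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

# Dummy command to keep the example self-contained (no network needed):
measure sleep 0.2
```

Run it a few times per server and take the median; single-shot numbers jitter considerably.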

I got a few really interesting results this way. For example, Cloudflare is invariably the fastest service available here at home, with latencies of 9 and 60 ms for plain and encrypted queries, respectively. From pdes-net.org, the situation is different: Cloudflare takes 4 and 20 ms, while dnswarden returns results within 1 and 9 ms, respectively. Insanely fast!

This latter service (where the hell did it come from all of a sudden?) is also very competitive with Google and Quad9 here at home: all of them require about 100 ms to answer TLS requests. That seems terribly slow, but it's not as bad as it sounds. First, I've configured Unbound as a caching resolver, so many, if not most, requests are answered with virtually zero latency. Second, I minimize external requests by serving the root zone locally – which is also known as the hyperlocal concept.

Due to this added functionality, I've found it necessary to revamp the configuration. All main and auxiliary configuration files of my current Unbound installation are attached below.

## Main configuration files

### /etc/unbound/...

#### .../unbound.conf

include: "/etc/unbound/unbound.conf.d/*.conf"

#### .../unbound.conf.d/01_Basic.conf

server:
verbosity: 1
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes

use-syslog: yes
do-daemonize: no
directory: "/etc/unbound"

root-hints: root.hints

#trust-anchor-file: trusted-key.key
auto-trust-anchor-file: trusted-key.key

hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
use-caps-for-id: yes

minimal-responses: yes
prefetch: yes
qname-minimisation: yes
rrset-roundrobin: yes

## reduce edns packet size to help big udp packets over dumb firewalls
#edns-buffer-size: 1232
#max-udp-size: 1232

cache-min-ttl: 3600
cache-max-ttl: 604800

#### .../unbound.conf.d/02_Forward.conf

server:
interface: ::0
interface: 0.0.0.0
access-control: ::1 allow
access-control: 2001:DB8:: allow
#access-control: fd00:aaaa:bbbb::/64 allow
access-control: 192.168.178.0/16 allow
verbosity: 1
ssl-upstream: yes

forward-zone:
# forward-addr format must be ip "@" port number "#" followed by the valid public
# hostname in order for unbound to use the tls-cert-bundle to validate the dns
# server certificate.
name: "."
# Servers support DNS over TLS, DNSSEC, and (partly) QNAME minimization
# see https://dnsprivacy.org/jenkins/job/dnsprivacy-monitoring/

### commercial servers for tests

### fully functional (ordered by performance)

### temporarily (2019/11/05) or permanently broken

#### .../unbound.conf.d/03_Performance.conf

# https://www.unbound.net/documentation/howto_optimise.html
server:
# use all cores

# power of 2 close to num-threads
msg-cache-slabs: 8
rrset-cache-slabs: 8
infra-cache-slabs: 8
key-cache-slabs: 8

# more cache memory, rrset=msg*2
rrset-cache-size: 200m
msg-cache-size: 100m

# more outgoing connections
# depends on number of cores: 1024/cores - 50
outgoing-range: 100

# Larger socket buffer.  OS may need config.
so-rcvbuf: 8m
so-sndbuf: 8m

# Faster UDP with multithreading (only on Linux).
so-reuseport: yes
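The two commented rules of thumb above (num-threads = number of cores, outgoing-range = 1024/cores − 50) can be evaluated directly in the shell before editing the file; nothing here is unbound-specific, just arithmetic on the core count:

```shell
#!/bin/bash
# Compute suggested tuning values from the rules of thumb in the
# comments of 03_Performance.conf.
cores=$(nproc)
echo "num-threads: $cores"
echo "outgoing-range: $(( 1024 / cores - 50 ))"
```

On a quad-core machine this prints `num-threads: 4` and `outgoing-range: 206`.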

#### .../unbound.conf.d/04_Rootzone.conf

# “Hyperlocal” configuration.
# see https://forum.turris.cz/t/undbound-rfc7706-hyperlocal-concept/8761
# furthermore
# https://forum.kuketz-blog.de/viewtopic.php?f=42&t=3067
# https://tools.ietf.org/html/rfc7706#appendix-A
# https://tools.ietf.org/html/rfc7706#appendix-B.1
# https://www.iana.org/domains/root/servers

auth-zone:
name: .
for-downstream: no
for-upstream: yes
fallback-enabled: yes
#master: 198.41.0.4                   # a.root-servers.net
master: 199.9.14.201                   # b.root-servers.net
master: 192.33.4.12                    # c.root-servers.net
#master: 199.7.91.13                  # d.root-servers.net
#master: 192.203.230.10               # e.root-servers.net
master: 192.5.5.241                    # f.root-servers.net
master: 192.112.36.4                   # g.root-servers.net
#master: 198.97.190.53                # h.root-servers.net
#master: 192.36.148.17                # i.root-servers.net
#master: 192.58.128.30                # j.root-servers.net
master: 193.0.14.129                   # k.root-servers.net
#master: 199.7.83.42                  # l.root-servers.net
#master: 202.12.27.33                 # m.root-servers.net
master: 192.0.47.132                   # xfr.cjr.dns.icann.org
master: 192.0.32.132                   # xfr.lax.dns.icann.org

zonefile: "root.zone"

## Auxiliary configuration files

### /etc/cron.weekly/...

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly

curl -sS -L --compressed -o /etc/unbound/adservers.new "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=unbound&showintro=0&mimetype=plaintext"

if [ $? -eq 0 ]; then
    mv /etc/unbound/adservers /etc/unbound/adservers.bak
    mv /etc/unbound/adservers.new /etc/unbound/adservers
    unbound-checkconf >/dev/null
    if [ $? -eq 0 ]; then
        systemctl restart unbound.service
    else
        unbound-checkconf
    fi
fi

#!/bin/bash
# Updating Unbound resources.
# Place this into e.g. /etc/cron.weekly

###[ root.hints ]###

curl -sS -L --compressed -o /etc/unbound/root.hints.new https://www.internic.net/domain/named.cache

if [ $? -eq 0 ]; then
    mv /etc/unbound/root.hints /etc/unbound/root.hints.bak
    mv /etc/unbound/root.hints.new /etc/unbound/root.hints
    unbound-checkconf >/dev/null
    if [ $? -eq 0 ]; then
        rm /etc/unbound/root.hints.bak
        systemctl restart unbound.service
    else
        unbound-checkconf
        mv /etc/unbound/root.hints /etc/unbound/root.hints.new
        mv /etc/unbound/root.hints.bak /etc/unbound/root.hints
    fi
fi

### /etc/systemd/system/unbound.service.d

I've discarded my custom snippet for systemd to get the DNS anchor. Archlinux does provide the anchor automatically as a dependency of unbound (dnssec-anchors), so why complicate things. For other distributions, however, the snippet may still be useful, so here it is:

[Service]
# The service user is assumed to be "unbound"; adjust for your distribution.
ExecStartPre=/usr/bin/sudo -u unbound /usr/bin/unbound-anchor -a /etc/unbound/trusted-key.key

# 山葵 (Wasabia japonica)

My wife had to attend to urgent family matters and went home for a few weeks. When she asked me if there was anything I'd like her to bring back, well, of course: wasabi! Now, most of you will already have been to sushi shops or Japanese restaurants, and you thus may believe that you know what I'm talking about. You don't.

Personally, I've been served genuine wasabi in only two places in Japan, one in Osaka, one in Tokyo, both places I normally wouldn't even dream of visiting, since I don't want to spend my monthly income in one evening. But that's where I learned what wasabi actually is – not the colored horseradish one gets almost everywhere, even in Japan (and certainly in Berlin), but one of the most delicious and stimulating spices and condiments I've ever had the pleasure to experience.

My wife bought a small root as well as an おろし金 (a shark-skin oroshigane), since wasabi is enjoyable only when very finely grated. But at the airport she was held back by the authorities, since one cannot possibly take the national treasures of Japan abroad without registering them. 😱

Well, after filling out a phytosanitary certificate and getting it officially stamped, she was allowed to enter the plane to Helsinki. 😌

We are now having dinner and are enjoying the fresh wasabi together with good bread, butter, and smoked salmon (and beer). 😋 美味しい (Oishii)! 乾杯 (Kampai)!

# InspIRCd 3

All of a sudden, the PdeS IRC channel wasn't working anymore. As inexplicable as this sudden disruption first appeared to be, as obvious are the reasons in hindsight. What has happened?

On August 18, apt offered an InspIRCd update, dutifully asking whether I wanted to keep the configuration files. I didn't realize at that moment that the update was in fact the upgrade from version 2 to 3 I had been waiting for since May. As a matter of fact, this upgrade is disruptive and requires one to carefully review and modify the configuration of InspIRCd. Well, I failed to do that, and I also failed to notice that the InspIRCd service didn't restart after the update.

Sometimes people jokingly remark that I should work as a system or network admin rather than as a scientist. This incident shows that I'm not qualified for such a job. I'm way too careless.

In any case, I now had to find the reason for the InspIRCd service quitting. It wasn't too difficult, but it was a multi-step procedure. The first obstacle was an outdated AppArmor profile, which allowed InspIRCd to write to /run, but not to /run/inspircd. That was easily fixed.

The second was the TLS configuration of our channel. I took the opportunity to renew our certificate and to altogether strengthen the security of the channel, but it took me a while to realize that the identifier in the bind_ssl and sslprofile_name tags has to be one and the same (it isn't in the documentation!).

<bind
port="6697"
type="clients"
ssl="pdes">

<module name="ssl_gnutls">

<sslprofile
name="pdes"
provider="gnutls"
certfile="cert/cert.pem"
keyfile="cert/key.pem"
dhfile="cert/dhparams.pem"
mindhbits="4096"
outrecsize="4096"
hash="sha512"
requestclientcert="no"
priority="PFS:+SECURE256:+SECURE128:-VERS-ALL:+VERS-TLS1.3">

Well, the channel is up again, more secure than ever. Fire away. 😅

# Debian 10

Buster is stable, Bullseye is the new testing.

sed -i 's/buster/bullseye/g' /etc/apt/sources.list
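To see what this one-liner actually does before pointing it at the real /etc/apt/sources.list, here it is applied to a scratch copy with two hypothetical entries (not my actual configuration):

```shell
#!/bin/bash
# Demonstrate the release rename on a scratch copy of sources.list.
# The entries below are illustrative examples.
cat > /tmp/sources.list.demo <<'EOF'
deb http://deb.debian.org/debian buster main
deb http://security.debian.org/debian-security buster/updates main
EOF

sed -i 's/buster/bullseye/g' /tmp/sources.list.demo
grep -c bullseye /tmp/sources.list.demo   # → 2
```

After the substitution, the usual `apt update` followed by a full upgrade switches the system to the new testing.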

# Opposite extremes

I have a CentOS virtual machine because I had to install it on a compute server in my office, and I keep it because it's such an interesting antithesis to the rolling-release distribution I prefer for my daily computing environment. CentOS is by a large margin the most conservative of all Linux distributions, and it's sometimes useful for me to have access to older software in its natural habitat. Just look at this table comparing the versions of some major packages on fully updated Arch and CentOS 7 installations:

| package  | Current       | Arch          | CentOS 7     |
|----------|---------------|---------------|--------------|
| linux    | 5.1.15        | 5.1.15        | 3.10         |
| libc     | 2.29          | 2.29          | 2.17         |
| gcc      | 9.1.0         | 9.1.0         | 4.8.5        |
| systemd  | 242           | 242           | 219          |
| bash     | 5.0           | 5.0           | 4.2          |
| openssh  | 8.0p1         | 8.0p1         | 7.4p1        |
| python   | 3.7.3         | 3.7.3         | 2.7.5        |
| perl     | 5.30.2        | 5.30.2        | 5.16.3       |
| texlive  | 2019          | 2019          | 2012         |
| vim      | 8.1           | 8.1           | 7.4          |
| xorg     | 1.20.5        | 1.20.5        | 1.20.1       |
| firefox  | 67.0.1        | 67.0.1        | 60.2.2       |
| chromium | 75.0.3770.100 | 75.0.3770.100 | 73.0.3683.86 |

You can easily see why I prefer Arch over CentOS as a desktop system.

But CentOS has its merits, particularly for servers. There's no other distribution (except, of course, its commercial sibling RHEL) with a longer support span: CentOS 7 was released in July 2014 and is supported until the end of June 2024. And that's not just partial support as with the so-called LTS versions of Ubuntu.

Now, I've noticed that CentOS keeps old kernels after updates, befitting its highly conservative attitude. However, in view of the very limited hard-disk space I typically give my virtual machines (8 GB), I got a bit nervous when I saw that kernels really seemed to pile up after a few updates: there were five of them! It turned out that's the default, giving the word “careful” an entirely new meaning.

But am I supposed to remove some of these kernels manually? No. I was glad to find that the RHEL developers had already recognized the need for a more robust solution:

yum install yum-utils
package-cleanup --oldkernels --count=3

And to make this limit permanent, I just had to edit /etc/yum.conf and set

installonly_limit=3
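If you prefer to script that change rather than edit the file by hand, a sed one-liner does it. Shown here on a scratch copy with made-up contents rather than the real /etc/yum.conf:

```shell
#!/bin/bash
# Set installonly_limit in a scratch copy of yum.conf.
# The file contents below are illustrative, not my actual configuration.
cat > /tmp/yum.conf.demo <<'EOF'
[main]
gpgcheck=1
installonly_limit=5
EOF

sed -i 's/^installonly_limit=.*/installonly_limit=3/' /tmp/yum.conf.demo
grep '^installonly_limit' /tmp/yum.conf.demo   # → installonly_limit=3
```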

Well thought out. 😉

# What you don't want to use, revisited

A decade ago, I advised my readers to stay away from OpenOffice for the preparation of professional presentations, primarily because of the poor support for vector graphics formats at that time. In view of the difficulties we have recently encountered when working with collaborators on the same document with different Office versions, I had now set great hopes on LibreOffice for the preparation of our next project proposal. First, I thought that with platform-independent open-source software, it should be straightforward to guarantee that all collaborators use the same version. Second, the support for SVG has been much improved in recent versions (>6) of LibreOffice, and I believed that we should finally be able to import vector graphics directly from Inkscape into an Office document. Third, the TexMaths extension allows one to use LaTeX for typesetting equations and to insert them as SVG, promising much-improved math rendering in a fraction of the time required by the native equation editor. Fourth, Mendeley offers a citation plugin for LibreOffice, which I hoped would make managing the bibliography and inserting citations as simple as with BibTeX in a LaTeX document.

Well, all of these hopes were in vain. What we (I) had chosen for preparing the proposal (the latest LibreOffice, TexMaths extension, and Mendeley plugin) proved to be one of the buggiest software combos of all time.

ad (i): Not the fault of the software, but still kind of sobering: our external collaborator declared that he had never heard of LibreOffice, and that he wouldn't know how to install it. Well, we thought, now only two people have to stay compatible with each other. We installed the same version of LibreOffice (first Still, then Fresh), I on Linux, he on Windows. But the different operating systems probably had little to do with what followed.

ad (ii): I was responsible for all display items in the proposal, and I've used a combination of Mathematica, Python, Gimp, and Inkscape to create the seven figures contained in it. The final SVG, however, was always generated by Inkscape. I've experienced two serious problems with these figures. First, certain line art elements such as arrows were simply not shown in LibreOffice or in PDFs created by it. Second, the figures tended to “disappear”: when trying to move one of them, another would suddenly be invisible. The caption numbering showed that they were still part of the document, and simply inserting them again messed up the numbering. We've managed to find one of these hidden figures in the nowhere between two pages (like being trapped between dimensions 😱), but others stayed mysteriously hidden. We had to go back to the previous version to resolve these issues, and in the end I converted all figures to bitmaps. D'Oh!

ad (iii): I wrote a large part of my text in one session and inserted all symbols and equations using TexMaths. It worked perfectly, and after saving the document, I went home, quite satisfied with my achievements that day. When I tried to continue the next day, LibreOffice told me the document was corrupted and was subsequently unable to open it. I finally managed to open it with TextMaker, which didn't complain, but also didn't show any of the equations I had inserted the day before. Well, I saved the document anyway to at least restore the text. Opening the file saved by TextMaker with Writer worked, and even all symbols and equations showed up as SVG graphics, but without the possibility of editing them with TexMaths.

ad (iv): Since my colleague had previously used the Mendeley plugin for Word, it was he who had the task of inserting our various references (initially about 40). That seemed to work very well, although he found the plugin irritatingly slow (40 references take something like a minute to process). However, when he tried to enter additional references a few days later, Mendeley claimed that the previous ones had been edited manually and displayed a dialogue asking whether we would like to keep this manual edit or disregard it. Regardless of the choice, the previous citations were now generated twice. And with any further citation, twice more, so that after adding three more citations, [1] became [1][1][1][1][1][1][1][1]. The plugin also took proportionally longer to process the file; in the last example, it took about 10 min. Well, we went one version back. But what had worked so nicely the day before was now inexplicably broken. It turned out that a simple sync of Mendeley (which is carried out automatically when you start the software) can be sufficient to trigger this behavior. We finally inserted the last references manually, overriding and actually irreversibly damaging the links between the citations and the bibliography.

In the final stages, working on the proposal felt like skating on atomically thin ice (Icen 😎). We always expected the worst, and instead of concentrating on the content, we treated the document like a piece of prehistoric art which could be damaged by anything, including just viewing the document on the screen. That feeling was very distracting. I would have loved to correct my position, really, but LibreOffice in its present state is clearly no alternative to LaTeX for preparing the documents and presentations required in my professional environment. I will check again in another ten years. 😉

In principle, I would have no problem with being solely responsible for the document if I could use LaTeX and get the contributions from the collaborators simply as plain text. It is they who have a problem with that, since they don't know what plain text is. In this context, I increasingly understand the trend toward collaborative software: it's not that people really work simultaneously on a document; what counts is the fact that people work on it with the guaranteed same software.

# Functions with default values

Suppose you would like to have a command that generates a secure password for an online service at the command line. You would google for it and find '10 ways to generate a random password'. At the end of the article, the author presents the ideal way to generate a secure password:

date | md5sum

The author of the article (Lowell Heddings, the founder and CEO of How-To Geek) states:

I’m sure that some people will complain that it’s not as random as some of the other options, but honestly, it’s random enough if you’re going to be using the whole thing.

Random enough? Sure, the 'whole thing' looks random enough:

9ec463af3db95e8e44de84417d9f408f

but the look is deceptive: this is in fact an extremely weak password. To understand why, let's look at the output of the 'date' command:

↪ date
Sun 12 May 2019 04:33:26 PM CEST

We see that without an additional parameter (like +"%N"), 'date' gives us one password per second. How many passwords do we get in this way? Well,

↪ date +"%s"
1557666649

i.e., 1,557,666,649 seconds have passed since 00:00:00, Jan 1, 1970 (the Unix epoch), and that's how many passwords we get.
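For the skeptics, the corresponding entropy is a one-liner: just take log2 of the number of seconds.

```shell
#!/bin/bash
# Entropy of a scheme with ~1.56 billion possibilities:
# log2(1557666649) bits.
seconds=1557666649
awk -v n="$seconds" 'BEGIN { printf "%.1f bits\n", log(n)/log(2) }'   # → 30.5 bits
```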

Now, the possibility to order pizza online came much later, namely, on August 22, 1994.

↪ date -d 19940822 +"%s"
777506400

That leaves us with 780,160,249 passwords since this memorable day in 1994, or a complexity of 30 bits, corresponding to a 5-character password with a character space of 62. Let's get one of these and see how difficult it is to crack:

↪ pwgen -s 5 -1
p9iCN

Now, even my ancient GTX650Ti with its modest MD5 hashing performance of 1.5 GH/s cracks this password in 5 s (note that an RTX2080 delivers 36 GH/s...):

○ → hashcat -O -a 3 -m 0 myhashes.hash ?a?a?a?a?a
hashcat (v5.1.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
- Device #1: GeForce GTX 650 Ti, 243/972 MB allocatable, 4MCU

0b91091d40a8623891367459d5b2a406:p9iCN

Session..........: hashcat
Status...........: Cracked
Hash.Type........: MD5
Hash.Target......: 0b91091d40a8623891367459d5b2a406
Time.Started.....: Mon May 13 12:48:58 2019 (5 secs)
Time.Estimated...: Mon May 13 12:49:03 2019 (0 secs)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 514.0 MH/s (6.21ms) @ Accel:64 Loops:47 Thr:1024 Vec:2
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 2328363008/7737809375 (30.09%)
Rejected.........: 0/2328363008 (0.00%)
Restore.Point....: 24379392/81450625 (29.93%)
Restore.Sub.#1...: Salt:0 Amplifier:0-47 Iteration:0-47
Candidates.#1....: s3v\, -> RPuJG
Hardware.Mon.#1..: Temp: 44c Fan: 33%

But actually, it's even worse: instead of cracking the hash one can easily precompute all possible values of the 'date | md5sum' command, and thus create a dictionary containing these “passwords”. I could start right away:

for (( time=777506400; time<=1557666649; time++ )); do date -d@$time | md5sum | tr -d "-"; done > lowell_heddings_passwords.txt

On my desktop with its Xeon E3 v2, this command computes one million passwords in about half an hour, i.e., I'd need about 17 days to compute all passwords back to 1994. Writing a corresponding program running on the GPU would cut this down to seconds. Note that the resulting list of “random enough” passwords is static, i.e., it is indeed a dictionary, and not even a particularly large one.

Lowell Heddings himself mentions several alternative ways to generate a password in his article before turning to the worst possible solution. But if we desire cryptographically secure solutions, even apparently innocuous commands are beset with difficulties, as pointed out by, for example, carpetsmoker (better carpets than mattresses). In the end, it all boils down to the following three choices, which are available on virtually any Linux installation.

If we limit ourselves to a character space of 62:

cat /dev/urandom | base64 | tr -d /=+ | head -c 25; echo
openssl rand -base64 25 | tr -d /=+ | head -c 25; echo
gpg2 --armor --gen-random 1 25 | tr -d /=+ | head -c 25; echo

If we insist on having almost all printable characters (which often calls for trouble):

cat /dev/urandom | base91 | head -c 25
openssl rand 25 | base91 | head -c 25; echo
gpg2 --gen-random 1 25 | base91 | head -c 25; echo

One could, in principle, also utilize dedicated password generators, such as Theodore Tso's 'pwgen', Adel I. Mirzazhanov's 'apg', or haui's 'hpg':

pwgen -s 25
apg -a 1 -M ncl -m 25 -x 25
hpg --alphanum 25

All of these ways are cryptographically equivalent in the sense that the entropy of the passwords generated by any of them asymptotically approaches the theoretical value ($\log_2(62) \approx 5.954$ bits per character) when you average over many (10,000,000 or more).
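That theoretical value is easily verified in the shell:

```shell
#!/bin/bash
# Entropy per character for a 62-character alphabet: log2(62).
awk 'BEGIN { printf "%.3f\n", log(62)/log(2) }'   # → 5.954
```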
In the present context (functions with default values), the generators do not offer any advantage, but only add unnecessary complexity. Now, whatever you choose as your favorite, you don't want to memorize the command or rely on the history of your favorite shell. One could define an alias with the password length as parameter, but I prefer to use a function for this case, with a default length of 25 characters and the option to change this value:

```
↪ pw62
mPcSU1c3lBTC7gChJ4MBw1sZW
↪ pw62 8
Yjs6NhYM
↪ pw62 32
cfn4KKugWHhOBF8qn6SO5Rj7uC2LksnK
```

Here's how to implement this functionality for the three major shells. Note the very elegant way in which a default value can be implemented within the bash. Update: haui reminded me that the zsh is a drop-in replacement for the bash and thus of course implements all bash variable substitutions, particularly ${var:-default}. Hence, we can use the same syntax for the bash and the zsh, and only the fish needs the comparatively clumsy construct shown below. 😎
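The ${parameter:-default} expansion at the heart of the bash version can be illustrated in isolation (a standalone sketch, not part of pw62):

```shell
# ${var:-default} expands to $var if it is set and non-empty,
# and to the default otherwise; the same mechanism works for
# positional parameters, as in ${1:-25}.
unset var
echo "${var:-25}"    # var unset: prints 25
var=8
echo "${var:-25}"    # var set: prints 8
```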

bash

```
function pw62
{
    cat /dev/urandom | base64 | tr -d /=+ | head -c ${1:-25}; echo
}
```

fish

```
function pw62
    if set -q argv[1]
        set length $argv[1]
    else
        set length 25
    end
    cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
end
```

zsh (alternative to the bash syntax)

```
function pw62()
{
    if [ "$1" != "" ]; then
        integer length=$1
    else
        integer length=25
    fi

    cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
}
```

## Oid's Graffel

I generally like Debian, as documented by the fact that it's my Linux distribution of choice for pdes-net.org, for the two compute servers at the office, for the Mini (which is currently out of order due to a defunct SSD), and for the virtual machine that I've reserved for online banking (biig mistake...see below). Since the stable version of Debian delivers only outdated software, I'm using 'testing' as the base, and if needed, I also install packages from 'sid'.

On my main systems, however, I don't use Debian, but Archlinux. I have several good reasons for this decision. One of them is that packages that belong in a museum are not restricted to Debian Stable, but are also regularly found in Testing or Sid. One example is 'look', which I've recently reported to be a fast way of finding an entry in a huge file. The version of look in Debian, however, contains a bug that was fixed ten years ago. Except, of course, in Debian (and all derivatives).

But what are 10 years if you can have 20? In 2010, c't presented a Perl script for downloading and processing the transactions from an account at Deutsche Bank. The script served me well for several years, but it broke a number of times due to changes in the web interface and in Perl itself. I was able to fix the script the first four times, but the last time, about five years ago, I had to ask haui for help. And a few weeks ago, it simply broke completely, and I decided to let it go and extend my old bash script to process the csv files downloaded from Deutsche Bank.
Part of one of the new scripts is the following oneliner:

```
tail -n +4 $current_rates | iconv -f ISO8859-1 -t utf8 | awk '{split($0,a,";"); print a[14]}' | sed 's/,/./g' | bc -l | xargs printf %.2f"\n" | tr '\n' ' ' | awk '{print strftime("%Y-%m-%d")"\t"$7"\t"$6"\t"$1"\t"$5"\t"$4" \t"$2" \t"$3}' > $cleaned_rates
```

It worked perfectly on my notebook running Archlinux, but in the virtual machine reserved for online banking, I got the following error message:

```
mawk: line 2: function strftime never defined
```

Hmmm...

```
$ awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
```

Are you kidding me? That's rather extreme even by Debian standards, particularly when considering that version 1.3.4 was published in 2009 and strftime was added to it in 2012. But surely, sid has a more recent version...NOT 😒

Even CentOS 6 came with mawk 1.3.4. Shame on you, Debilian!

Well, the only choice was to install gawk, and in this particular case, the performance hit doesn't matter at all. But why isn't that the default, if the Debilians have chosen to neglect mawk? And why do they do that anyway?
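For the record, there is also a mawk-compatible workaround (my own sketch, not what I ended up doing): since strftime is simply missing in mawk 1.3.3, one can compute the date with date(1) in the shell and pass it into awk as a variable:

```shell
# strftime() is unavailable in mawk 1.3.3, but 'awk -v' works everywhere:
# compute the timestamp outside awk and hand it over as a variable.
today=$(date +%Y-%m-%d)
echo "42.00" | awk -v d="$today" '{ print d "\t" $1 }'
```

This keeps the pipeline portable across mawk and gawk at the cost of one extra shell variable.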

Well, whatever. The scripts are working now. 😉