InspIRCd 3

All of a sudden, the PdeS IRC channel wasn't working anymore. As inexplicable as this sudden disruption first appeared, the reasons are obvious in hindsight. What had happened?

On August 18, apt offered an InspIRCd update, dutifully asking whether I wanted to keep the configuration files. I didn't realize at that moment that this update was in fact the upgrade from version 2 to 3 I had been waiting for since May. As a matter of fact, this upgrade is disruptive and requires one to carefully review and modify the configuration of InspIRCd. Well, I failed to do that, and I also failed to notice that the InspIRCd service didn't restart after the update.

Sometimes people jokingly remark that I should work as a system or network admin rather than as a scientist. This incident shows that I'm not qualified for such a job. I'm way too careless.

In any case, I now had to find the reason for the InspIRCd service quitting. That wasn't too difficult, but it took a multi-step procedure. The first obstacle was an outdated AppArmor profile, which allowed InspIRCd to write to /run, but not to /run/inspircd. That was easily fixed.
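For reference, the fix amounts to a couple of extra rules in the profile. A minimal sketch, assuming the profile lives at /etc/apparmor.d/usr.sbin.inspircd (path and surrounding rules may differ on your system):

# allow InspIRCd to write its pidfile below /run/inspircd
/run/inspircd/ rw,
/run/inspircd/** rw,

Reloading the profile with 'sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.inspircd' puts the change into effect.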

The second was the TLS configuration of our channel. I took the opportunity to renew our certificate and to strengthen the security of the channel altogether, but it took me a while to realize that the identifier in the ssl attribute of the bind tag and the name attribute of the sslprofile tag have to be one and the same (they aren't in the documentation!).

<bind
          address=""
          port="6697"
          type="clients"
          ssl="pdes">

<module name="ssl_gnutls">

<sslprofile
          name="pdes"
          provider="gnutls"
          certfile="cert/cert.pem"
          keyfile="cert/key.pem"
          dhfile="cert/dhparams.pem"
          mindhbits="4096"
          outrecsize="4096"
          hash="sha512"
          requestclientcert="no"
          priority="PFS:+SECURE256:+SECURE128:-VERS-ALL:+VERS-TLS1.3">

Well, the channel is up again, more secure than ever. Fire away. 😅

Opposite extremes

I have a CentOS virtual machine because I had to install CentOS on a compute server in my office, and I keep it since it's such an interesting antithesis to the rolling-release distribution I prefer for my daily computing environment. CentOS is by a large margin the most conservative of all Linux distributions, and it's sometimes useful for me to have access to older software in its natural habitat. Just look at this table comparing the versions of some major packages on fully updated Arch and CentOS 7 installations:

Package         Current                         Arch                            CentOS 7
linux           5.1.15                          5.1.15                          3.10
libc            2.29                            2.29                            2.17
gcc             9.1.0                           9.1.0                           4.8.5
systemd         242                             242                             219
bash            5.0                             5.0                             4.2
openssh         8.0p1                           8.0p1                           7.4p1
python          3.7.3                           3.7.3                           2.7.5
perl            5.30.2                          5.30.2                          5.16.3
texlive         2019                            2019                            2012
vim             8.1                             8.1                             7.4
xorg            1.20.5                          1.20.5                          1.20.1
firefox         67.0.1                          67.0.1                          60.2.2
chromium        75.0.3770.100                   75.0.3770.100                   73.0.3683.86

You can easily see why I prefer Arch over CentOS as a desktop system.

But CentOS has its merits, particularly for servers. There's no other distribution (except, of course, its commercial sibling RHEL) with a longer support span: CentOS 7 was released in July 2014 and is supported until the end of June 2024. And that's not just partial support as with the so-called LTS versions of Ubuntu.

Now, I've noticed that CentOS keeps old kernels after updates, befitting its highly conservative attitude. However, in view of the very limited hard-disk space I typically give my virtual machines (8 GB), I got a bit nervous when I saw that kernels really seemed to pile up after a few updates. There were five of them! It turned out that's the default, giving the word “careful” an entirely new meaning.
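To see what has accumulated, rpm can list all installed versions of the kernel package:

rpm -q kernel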

But am I supposed to remove some of these kernels manually? No. I was glad to find that the RHEL developers had already recognized the need for a more robust solution:

yum install yum-utils
package-cleanup --oldkernels --count=3

And to make this limit permanent, I just had to edit /etc/yum.conf and set

installonly_limit=3

Well thought out. 😉

What you don't want to use, revisited

A decade ago, I advised my readers to stay away from OpenOffice for the preparation of professional presentations, primarily because of the poor support for vector graphics formats at that time. In view of the difficulties we had recently encountered when working with collaborators on the same document with different Office versions, I was now placing great hopes in LibreOffice for the preparation of our next project proposal. First, I thought that with platform-independent open-source software, it should be straightforward to guarantee that all collaborators use the same version of the software. Second, support for SVG has much improved in recent versions (>6) of LibreOffice, and I believed that we would finally be able to import vector graphics directly from Inkscape into an Office document. Third, the TexMaths extension allows one to typeset equations in LaTeX and to insert them as SVG, promising much better math rendering in a fraction of the time the native equation editor requires. Fourth, Mendeley offers a citation plugin for LibreOffice, which I hoped would make managing the bibliography and inserting citations as simple as with BibTeX in a LaTeX document.

Well, all of these hopes were in vain. What we (I) had chosen for preparing the proposal (the latest LibreOffice, the TexMaths extension, and the Mendeley plugin) proved to be one of the buggiest software combos of all time.

ad (i): Not the fault of the software, but still kind of sobering: our external collaborator declared that he had never heard of LibreOffice and wouldn't know how to install it. Well, we thought, now only two people have to stay compatible with each other. We installed the same version of LibreOffice (first Still, then Fresh), I on Linux, he on Windows. But the different operating systems probably had little to do with what followed.

ad (ii): I was responsible for all display items in the proposal, and I used a combination of Mathematica, Python, Gimp, and Inkscape to create the seven figures contained in it. The final SVG, however, was always generated by Inkscape. I experienced two serious problems with these figures. First, certain line-art elements such as arrows were simply not shown in LibreOffice or in PDFs created by it. Second, the figures tended to “disappear”: when trying to move one of them, another would suddenly turn invisible. The caption numbering showed that they were still part of the document, and simply inserting them again messed up the numbering. We managed to find one of these hidden figures in the nowhere between two pages (like being trapped between dimensions 😱), but others stayed mysteriously hidden. We had to go back to the previous version to resolve these issues, and in the end I converted all figures to bitmaps. D'Oh!

ad (iii): I wrote a large part of my text in one session and inserted all symbols and equations using TexMaths. That worked perfectly, and after saving the document, I went home, quite satisfied with my achievements that day. When I tried to continue the next day, LibreOffice told me the document was corrupted and was subsequently unable to open it. I finally managed to open it with TextMaker, which didn't complain, but also didn't show any of the equations I had inserted the day before. Well, I saved the document anyway to at least restore the text. Opening the file saved by TextMaker with Writer worked, and even all symbols and equations showed up as SVG graphics, but without the possibility of editing them with TexMaths.

ad (iv): Since my colleague had previously used the Mendeley plugin for Word, he had the task of inserting our various references (initially about 40). That seemed to work very well, although he found the plugin irritatingly slow (40 references take something like a minute to process). However, when he tried to enter additional references a few days later, Mendeley claimed that the previous ones had been edited manually and displayed a dialogue asking whether we would like to keep this manual edit or discard it. Regardless of the choice, the previous citations were now generated twice. And with every further citation, twice more, so that after adding three more citations, [1] became [1][1][1][1][1][1][1][1]. The plugin also took proportionally longer to process the file; in the last example, it took about 10 min. Well, we went one version back. But what had worked so nicely the day before was now inexplicably broken. It turned out that a simple sync of Mendeley (which is carried out automatically when you start the software) can be sufficient to trigger this behavior. We finally inserted the last references manually, overriding and actually irreversibly damaging the links between the citations and the bibliography.

In the final stages, working on the proposal felt like skating on atomically thin ice (Icen 😎). We always expected the worst, and instead of concentrating on the content, we treated the document like a piece of prehistoric art that could be damaged by anything, including just viewing it on the screen. That feeling was very distracting. I would have loved to correct my position, really, but LibreOffice in its present state is clearly no alternative to LaTeX for preparing the documents and presentations required in my professional environment. I will check again in another ten years. 😉

In principle, I would have no problem with being solely responsible for the document if I could use LaTeX and get the contributions from the collaborators simply as plain text. It is they who have a problem with that, since they don't know what plain text is. In this context, I increasingly understand the trend toward collaborative software: it's not that people really work on a document simultaneously; it's the guarantee that everybody works on it with the same software that counts.

Functions with default values

Suppose you would like to have a command that generates a secure password for an online service at the command line. You google for that and find an article presenting 10 ways to generate a random password. At the end of the article, the author presents the ideal way to generate a secure password:

date | md5sum

The author of the article (Lowell Heddings, the founder and CEO of How-To Geek) states:

I’m sure that some people will complain that it’s not as random as some of the other options, but honestly, it’s random enough if you’re going to be using the whole thing.

Random enough? Sure, the 'whole thing' looks random enough:

9ec463af3db95e8e44de84417d9f408f

but the look is deceptive: this is in fact an extremely weak password. To understand why, let's look at the output of the 'date' command:

↪ date
Sun 12 May 2019 04:33:26 PM CEST

We see that without additional parameters (like +"%N"), 'date' gives us exactly one password per second. How many passwords do we get this way? Well,

 ↪ date +"%s"
1557666649

i.e., 1,557,666,649 seconds have passed since 00:00:00, Jan 1, 1970 (the Unix epoch), and that's how many passwords we get.

Now, the possibility to order pizza online came much later, namely on August 22, 1994.

↪ date -d 19940822 +"%s"
777506400

That leaves us with 780,160,249 passwords since this memorable day in 1994, or a complexity of about 30 bits, which corresponds to a 5-character password over a character space of 62. Let's get one of these and see how difficult it is to crack:

↪ pwgen -s 5 -1
p9iCN
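By the way, bc quickly confirms the complexity estimate:

↪ echo 'l(780160249)/l(2)' | bc -l
29.539...

i.e., just under 30 bits; and with 62^5 ≈ 9.2e+8 combinations, a 5-character password is indeed of the same order.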

Now, even my ancient GTX650Ti with its modest MD5 hashing performance of 1.5 GH/s cracks this password in 5 s (note that an RTX2080 delivers 36 GH/s...):

○ → hashcat -O -a 3 -m 0 myhashes.hash ?a?a?a?a?a
hashcat (v5.1.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
- Device #1: GeForce GTX 650 Ti, 243/972 MB allocatable, 4MCU

0b91091d40a8623891367459d5b2a406:p9iCN

Session..........: hashcat
Status...........: Cracked
Hash.Type........: MD5
Hash.Target......: 0b91091d40a8623891367459d5b2a406
Time.Started.....: Mon May 13 12:48:58 2019 (5 secs)
Time.Estimated...: Mon May 13 12:49:03 2019 (0 secs)
Guess.Mask.......: ?a?a?a?a?a [5]
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 514.0 MH/s (6.21ms) @ Accel:64 Loops:47 Thr:1024 Vec:2
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 2328363008/7737809375 (30.09%)
Rejected.........: 0/2328363008 (0.00%)
Restore.Point....: 24379392/81450625 (29.93%)
Restore.Sub.#1...: Salt:0 Amplifier:0-47 Iteration:0-47
Candidates.#1....: s3v\, -> RPuJG
Hardware.Mon.#1..: Temp: 44c Fan: 33%

But actually, it's even worse: instead of cracking the hash, one can easily precompute all possible outputs of the 'date | md5sum' command and thus create a dictionary containing these “passwords”. I could start right away:

for (( time=777506400; time<=1557666649; time++ )); do date -d@$time | md5sum | tr -d "-"; done > lowell_heddings_passwords.txt

On my desktop with its Xeon E3 v2, this command computes one million passwords in about half an hour, i.e., I'd need about 17 days to compute all passwords back to 1994. Writing a corresponding program running on the GPU would cut this down to seconds. Note that the resulting list of “random enough” passwords is static, i.e., it is indeed a dictionary, and not even a particularly large one.

Lowell Heddings himself mentions several alternative ways to generate a password in his article before turning to the worst possible solution. But if we desire cryptographically secure solutions, even apparently innocuous commands are beset with difficulties, as pointed out, for example, by carpetsmoker (better carpets than mattresses). In the end, it all boils down to the following three choices, which are available on virtually any Linux installation. If we limit ourselves to a character space of 62:

cat /dev/urandom | base64 | tr -d /=+ | head -c 25; echo
openssl rand -base64 25 | tr -d /=+ | head -c 25; echo
gpg2 --armor --gen-random 1 25 | tr -d /=+ | head -c 25; echo

If we insist on having almost all printable characters (which is often asking for trouble):

cat /dev/urandom | base91 | head -c 25
openssl rand 25 | base91 | head -c 25; echo
gpg2 --gen-random 1 25 | base91 | head -c 25; echo

One could, in principle, also utilize dedicated password generators, such as Theodore Ts'o's 'pwgen', Adel I. Mirzazhanov's 'apg', or haui's 'hpg':

pwgen -s 25
apg -a 1 -M ncl -m 25 -x 25
hpg --alphanum 25

All of these approaches are cryptographically equivalent in the sense that the entropy of the passwords generated by any of them asymptotically approaches the theoretical value ($\log_2(62) \approx 5.954$ bits per character) when averaged over many passwords (10,000,000 or more). In the present context (functions with default values), the dedicated generators do not offer any advantage; they only add unnecessary complexity.
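If you want to convince yourself, here's a rough empirical check, a sketch using only standard tools: it draws a million characters with the urandom recipe from above and estimates the entropy per character from the observed frequencies. The result should land very close to 5.954.

cat /dev/urandom | base64 | tr -d /=+ | head -c 1000000 |
    fold -w1 | sort | uniq -c |
    awk '{n += $1; c[++i] = $1}
         END {for (j = 1; j <= i; j++) {p = c[j] / n; H -= p * log(p) / log(2)}
              print H, "bits per character"}'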

Now, whatever you choose as your favorite, you don't want to memorize the command or rely on the history of your favorite shell. One could define an alias with the password length as a parameter, but I prefer a function, which allows a default length of 25 characters with the option to change this value:

↪ pw62
mPcSU1c3lBTC7gChJ4MBw1sZW

↪ pw62 8
Yjs6NhYM

↪ pw62 32
cfn4KKugWHhOBF8qn6SO5Rj7uC2LksnK

Here's how to implement this functionality in the three major shells. Note the very elegant way in which a default value can be specified in the bash. Update: haui reminded me that the zsh is a drop-in replacement for the bash and thus of course implements all bash variable substitutions, particularly ${var:-default}. Hence, we can use the same syntax for the bash and the zsh, and only the fish needs the comparatively clumsy construct shown below. 😎

bash

function pw62
{
    # ${1:-25} expands to the first argument if given, and to 25 otherwise
    cat /dev/urandom | base64 | tr -d /=+ | head -c ${1:-25}; echo
}

fish

function pw62
    # use the first argument as the length if given, otherwise default to 25
    if set -q argv[1]
        set length $argv[1]
    else
        set length 25
    end

    cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
end

zsh (alternative to the bash syntax)

function pw62()
{
    # use the first argument as the length if given, otherwise default to 25
    if [ "$1" != "" ]
    then
        integer length=$1
    else
        integer length=25
    fi

    cat /dev/urandom | base64 | tr -d /=+ | head -c $length; echo
}

Oid's Graffel

I generally like Debian, as documented by the fact that it's my Linux distribution of choice for pdes-net.org, for the two compute servers at the office, for the Mini (which is currently out of order due to a defunct SSD), and for the virtual machine that I've reserved for online banking (biig mistake...see below). Since the stable version of Debian delivers only outdated software, I'm using 'testing' as the base, and if needed, I also install packages from 'sid'.

On my main systems, however, I don't use Debian but Archlinux. I have several good reasons for this decision. One of them is that packages that belong in a museum are not exclusive to Debian Stable, but are also regularly found in Testing or Sid.

One example is 'look', which I recently reported to be a fast way of finding an entry in a huge file. The version of look in Debian, however, contains a bug that was fixed ten years ago. Except, of course, in Debian (and all its derivatives).

But what are 10 years if you can have 20? In 2010, c't presented a Perl script for downloading and processing the transactions of an account at Deutsche Bank. The script served me well for several years, but it broke a number of times due to changes in the web interface and in Perl itself. I was able to fix the script the first four times, but the last time, about five years ago, I had to ask haui for help. And a few weeks ago, it simply broke completely, so I decided to let it go and extend my old bash script to process the CSV files downloaded from Deutsche Bank.

Part of one of the new scripts is the following one-liner:

tail -n +4 $current_rates | iconv -f ISO8859-1 -t utf8 | awk '{split($0,a,";"); print a[14]}' | sed 's/,/./g' | bc -l | xargs printf %.2f"\n" | tr '\n' ' ' | awk '{print strftime("%Y-%m-%d")"\t"$7"\t"$6"\t"$1"\t"$5"\t"$4" \t"$2" \t"$3}'> $cleaned_rates

Worked perfectly on my notebook running Archlinux, but in the virtual machine reserved for online banking, I got the following error message:

mawk: line 2: function strftime never defined

Hmmm...

$ awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan

Are you kidding me? That's rather extreme even by Debian standards, particularly considering that version 1.3.4 was published in 2009 and strftime was added to it in 2012. But surely sid has a more recent version...NOT 😒

Even CentOS 6 came with mawk 1.3.4. Shame on you, Debilian!

Well, the only choice was to install gawk, and in this particular case, the performance hit doesn't matter at all. But why isn't gawk the default if the Debilians have chosen to neglect mawk? And why do they neglect it anyway?
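For the record: on Debian, /usr/bin/awk is a symlink managed by the alternatives system, so installing gawk typically promotes it to the default awk right away; the second command lets you verify or override that choice:

sudo apt install gawk
sudo update-alternatives --config awk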

Well, whatever. The scripts are working now. 😉

Saving Nexie

We own a first-generation Nexus 7 from 2012, which my wife affectionately calls Nexie because of its compact form factor. It impressed me with its high build quality, particularly considering its modest price of €199. Its performance was more than satisfactory with the stock Android 4.1, but when it was updated to Android 5.1 in 2015, it was reduced to an unresponsive brick, no matter what I tried (including a factory reset). We finally decided to retire it and to use it as a, well, wall clock. But recently, the time on the Nexus was often running slow by more than an hour. The reason was resource-intensive background processes updating the various Google apps installed by default.

Can we resurrect the Nexus by flashing it with an alternative ROM such as LineageOS? That's at least what I'm going to try. Note that I'm fairly ignorant with respect to Android and its devices. But anyway, let's get going.

1st step

I search the web for “Nexus 7 (2012) LineageOS”. To my relief, there's general agreement that our Nexus can run LineageOS 14.1, corresponding to Android 7.1. I thus download the image for our (GSM-free) version of the Nexus 7. I also find some instructions regarding the installation and learn from them that I have the option to install the Google Apps (particularly the Play Store) or not.

2nd step

I'm in favor of installing a pure LineageOS system free of Google Apps and of using F-Droid instead of the Google Play Store, but my wife pleads for the latter. Since she's the primary user, I look at the options on OpenGApps and download the pico build for ARM 32 bit, Android 7.1.

3rd step

To install the custom ROM and the Google Apps, I need a custom recovery image such as the one provided by TWRP. I download the latest version for my Nexus.

4th step

Prepare your device, they say. All right: I tap 'Build Number' under 'About Phone' in 'Settings' seven times and thus become a developer (not a joke, it really works that way 😲). I then scroll down and enable USB debugging.

5th step

The instructions mention the commands adb and fastboot, which I find to be contained in the package android-tools. I thus install these tools on my Fujitsu Lifebook:

sudo pacman -S android-tools

6th step

I use the USB cable of my Kobo reader (a conventional micro-USB-to-USB cable) to connect the Nexus to my Lifebook and choose MTP in the USB dialog on the Nexus.
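Before doing anything drastic, it's worth checking that the Lifebook actually sees the tablet:

$ adb devices

With USB debugging enabled and the connection confirmed on the device, this lists the Nexus together with its serial number.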

7th step

All right, now it comes. (In hindsight, I could certainly do better if I were to try a second time. But anyway: it worked. 😎)

$ adb reboot bootloader

Yup, the Nexus boots and is now in a kind of repair mode. 😊

Now as root:

# fastboot oem unlock
# fastboot flash recovery twrp-3.3.0-0-grouper.img

A subsequent 'adb reboot bootloader' doesn't work (I now believe that 'fastboot reboot' would have). I reboot the Nexus manually by navigating with the volume and power keys. I then switch in the same way to recovery mode, upon which TWRP starts up.

$ adb push /home/cobra/Downloads/lineage-14.1-20171122_224807-UNOFFICIAL-aaopt-grouper.zip /sdcard/
$ adb push /home/cobra/Downloads/open_gapps-arm-7.1-pico-20190426.zip /sdcard/

Next, as found in the instructions, I navigate in TWRP to the Wipe menu. For some reason, wiping fails, and I'm stuck in a boot loop. 😨

I search the web and find that boot loops are rather common. A recommended solution is to either erase or format the userdata partition:

# fastboot erase userdata
# fastboot format userdata

but neither command does anything (both just sit there, <waiting for device>). Only after holding power/volume-down for 10 s do I see the repair menu, and when I switch to recovery mode, TWRP seems to finish what it had been trying to do. Phew.

I have no idea what went wrong there, nor precisely how it was corrected. But I found one statement on the web that gave me courage: “as long as your device does anything when switching it on, it is NOT bricked.”

The rest is easy: I go to Install, select LineageOS and the Google Apps, install, and reboot.

The result

Much better than I had hoped for. The interface reacts instantaneously, animations run smoothly, and apps start fast. It feels as good as new. Well, my wife says: even better. ☺

Search package providing a certain command

I posted a short note on this topic almost exactly ten years ago, and it's time for an update. The situation: you've heard or read about a certain tool and want to install it, but you can't find it no matter how hard you try.

My first advice: don't search and install via graphical applications. “Software centers” popular in consumer distributions may not show command-line applications at all, so if you've read my last post and search for dc, you won't find what you are looking for.

Second: not every command comes in a package of the same name. For example, in Archlinux, dc is bundled with bc, and it is the latter (much more popular) application that gives the package its name.
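On a system where the command is already present, pacman can tell you directly which package owns a file:

pacman -Qo /usr/bin/dc

which reports that /usr/bin/dc is owned by bc.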

To master such situations, it's time to leave graphical software centers behind and to learn a few basics about the actual package manager underneath. As an example, I show a search for dig and drill, each of which is contained in a differently named package, with the names of these packages depending (as always) on the distribution.

Archlinux

pacman -Fs dig
        bind-tools
pacman -Fs drill
        ldns

Debian

wajig whichpkg /usr/bin/dig
        dnsutils
wajig whichpkg /usr/bin/drill
        ldnsutils

CentOS/Fedora

yum/dnf provides /usr/bin/dig
        bind-utils
yum/dnf provides /usr/bin/drill
        ldns

OpenSUSE

Reportedly, zypper offers the same functionality.

Of the big six, that leaves Gentoo and Slackware. If you use one of these, or a distribution whose package manager is not covered here even though it offers the desired functionality, send me a note.

Update: An attentive reader reminded me of the Pacman Rosetta, which includes OpenSUSE and Gentoo. 😎

Calculators

For quick calculations, I prefer to use my HP handheld calculators whenever possible, simply because I'm much faster with them than with anything else, thanks to their responsive physical keypads and, of course, RPN. Alas, there are computational tasks that few, if any, handhelds are up to. Big numbers, in particular, usually result in an overflow rather than in the desired solution. Let's take factorials as an example: they grow faster than any ordinary function (including exponentials) and are thus perfectly suited for producing big numbers.

Here's how the factorial \(n!\) looks in comparison to its little sister, the exponential \(e^n\):

../images/factorial.svg

The dashed line shows the Stirling approximation \(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n\), which reveals that the factorial essentially grows as \(n^n\) and thus faster than any exponential, whatever its base.

Now, the largest factorial my HP42s can handle is 253!, which amounts to 5.173460992e+499. For a handheld, this is more than respectable: the largest factorial one can compute on a Linux desktop with, for example, xcalc as the calculator application is 170!, limited simply by the fact that numbers in xcalc are represented as double-precision floats.

All right, xcalc is ancient. But as a matter of fact, most calculator applications running on Windows, macOS, or Linux have difficulties with large numbers. The Windows calculator, for example, gives up at numbers bigger than 1e+10,000 and hence can't calculate factorials larger than 3249!. And we haven't even talked about getting exact results, which demands arbitrary-precision arithmetic already for much smaller numbers.

Let's have a look at some calculators for Linux that can do better than those above. I use Mathematica 11.2 as a reference:

  • 1,000,000! = 8.263932e+5,565,708, taking 0.185/1 s for an exact result/numerical approximation
  • 100,000,000! = 1.617204e+756,570,556, taking 46/344 s for an exact result/numerical approximation
  • Overflows at $MaxNumber = 1.605216761933662e+1,355,718,576,299,609

Note that a file storing the result of 100,000,000! has an uncompressed size of 0.757 GB. So be careful when writing even larger factorials to disk 😉 (a number close to $MaxNumber would take 1.35 PB!).

CLI Calculators

bc/dc

The Unix calculators. They have offered arbitrary precision since the 1970s, and now you know what the 'bc' stands for in this blog's title! 😉 Neither of them supports factorials out of the box, but hey, these are programming languages, not plain calculators. Examples of scripts computing factorials can be found on Rosettacode and on Stackoverflow, but be aware that these examples are horribly inefficient; for fast algorithms, see Peter Luschny's page. Here's the “script” for dc:

dc -e '?[q]sQ[d1=Qd1-lFx*]dsFxp'

After typing a number like 1000, we get an exact result. dc overflows somewhat below 67,000!; bc does not, but it's too slow to be of much use for much larger numbers.
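The number can also be piped in, which makes the “script” usable non-interactively:

echo 1000 | dc -e '?[q]sQ[d1=Qd1-lFx*]dsFxp'

This prints the exact, 2568-digit value of 1000!.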

wcalc

Approximate results with, in principle, arbitrary precision, defined by the command-line parameter -P (which accepts only integers and is thus useless for really large numbers).

wcalc -P 10 'fact(1000000)'

Takes 1.42 s, overflows somewhat below 44,500,000!

calc

My default calculator on PCs. Gives exact results.

calc 1000000!

Takes 330 s, and overflows somewhat below 2,200,000,000!

hypercalc

When firing up hypercalc, we are greeted with “Go ahead -- just TRY to make me overflow!”. And indeed, that's not an easy task at first. Hypercalc gives approximate results only, but essentially instantaneous ones, even for absolutely monstrous numbers. The factorials we have considered so far are child's play for this program. Instead of the factorial of a million, a billion, or a trillion, why not ask for the factorial of a googol? Hypercalc tells us that this number amounts to 1e+(9.9565705518098e+101), which agrees with the solution from Wolfram Alpha (see below), the only tool that can at least partly follow hypercalc into the realm of big numbers.

But not when it comes to really big ones. Let's have a look, for example, at Pickover's superfactorial n$. What about, say, 10$? That's completely out of reach for any program I know, but not for hypercalc: 8pt8e+23804068 (PT stands for power tower). But even this is still a very, very, very small number: hypercalc overflows only at 1e+308pt1e+34 or, equivalently in Donald Knuth's up-arrow notation, 10↑↑1.7976e+308.

All of this is contained in a Perl script available for download (or, for Archlinux users, installable from the AUR), and additionally in a JavaScript-powered web interface.

IPython

So far, we haven't been able to get exact results faster than with Mathematica. Let's see how Python does in this regard. There are two possibilities, the first using plain Python, the second SciPy:

In [1]: import math
In [2]: %timeit math.factorial(1000000)
6.5 s ± 19.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [3]: from scipy.special import factorial
In [4]: %timeit factorial(1000000, exact=True)
6.53 s ± 17.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

It's disappointing that SciPy doesn't outperform plain Python, but instead seems to use exactly the same code. A dedicated special function should, IMHO, perform better than Mathematica (which needs only 1 s for the same task).

You can also use Python directly from the terminal and write the result to disk instead of displaying it:

echo 1000000 | python -c 'import sys; import math; print(math.factorial(int(sys.stdin.readline())))' > fac1M.dat

julia

Python turned out to be a disappointment; what about Julia, which is advertised as being suitable for high-performance numerical analysis and computational science?

julia> @time factorial(big(1000000));
0.162650 seconds (1.57 k allocations: 53.524 MiB, 2.00% gc time)
julia> @time factorial(big(100000000));
45.550577 seconds (432.60 k allocations: 11.586 GiB, 0.90% gc time)

Now we're talking!

You can write the results to disk in this way:

julia> using DelimitedFiles
julia> fac1M=factorial(big(1000000));
julia> writedlm("fac1M.dat", fac1M)

Graphical calculators

Most desktop calculators still attempt to imitate the look of handhelds, just like media players used to resemble stereo decks. Some of these reconstructions are historically accurate and appeal to our nostalgia, but in terms of usability, these relics of the 1990s are among the most clumsy and inefficient user interfaces ever invented, particularly since people never use the keyboard to interact with these abominations, but the mouse. However, some graphical calculators give you the choice.

qalculate!

A great general-purpose calculator packed with features. It includes an RPN mode and a plotting interface to gnuplot, as well as excellent conversion utilities that can be updated daily (important for currencies). We get an approximate result for 1,000,000! in under 1 s, but 100,000,000! takes so long that I didn't wait. Overflows reportedly at 922,337,203,854,775,808!, which is impressive, but of little use given the comparatively poor performance.

speedcrunch

Easy to use and insanely fast. Even on slow hardware, the result is there as soon as you type it. Overflows at 72,306,961!

Web calculators

Young people tell me that installing local apps is so 1990ish, and indeed, you can perform essentially all calculations you'll ever need on the interwebs.

Casio

When I look at my colleagues' desks at the office, I frequently see Casio calculators from the 1980s. Even I have one, although I have no idea when and where I acquired it (and I don't remember ever using it either). In any case, true to their roots, Casio offers a quite capable online calculator. It gives 1,000,000! in about 2 s and overflows only at 1e+100,000,000, just below 14,845,000!

Wolfram Alpha

More than a calculator: a knowledge engine. You may ask what the weather is in Berlin today, and get the interesting bit of information that on April 22nd, it was -4°C in 1997 and 31°C in 1968. But foremost, Wolfram Alpha is an arbitrary-precision calculator. It gives exact results where appropriate (for reasonably sized outputs) and approximate ones when the output becomes too large. Regardless of the task, the answer takes a few seconds, whether you calculate 2+2 or 1e+(9^9^9)! [1e+1e+(1.58274e+369693108)]. Overflows at (2e+1573347107)! or 1e+(1e+9.196824545990035)!. That's much higher than Mathematica, but still nothing compared to hypercalc.

Quality journalism, the second

This post is about German media; the quotes below are translated from the German originals, but the links stay German. ;)

The decline of newspapers and magazines does not stop at computer magazines, quite the contrary. c't has so far been comparatively little affected, which is entirely due to its loyal subscribers, but even there, things have been slowly and steadily crumbling for years. I have subscribed to c't for more than 20 years, and I certainly wouldn't want to miss it as loo reading. In recent years, however, errors of a kind that spoils the pleasure for good have been piling up. If you have to question every statement, it's easier to do the research yourself. And for pure entertainment, I can just as well read Fix und Foxi.

With the latest issue, my patience finally snapped, and I let myself get carried away into sending a letter to the editor:

...

It is certainly a laudable goal to introduce people to Lua or Python programming, but I expect at least a mention that there is a considerably simpler way (if there is one).

c't 7/2019, p. 158: “Despite this flexibility, you eventually hit limits: out of the box, the tool cannot query and display the current weather from wttr.in. This and other functions can, however, easily be retrofitted with hand-crafted Lua scripts.”

Whatever “out of the box” is supposed to mean, you don't need Lua scripts for that.

Hourly query of the weather:

${execpi 3600 curl -s "wttr.in/Berlin?nT&lang=de" | head -n -2}

Hourly query of the weather, in color. ;)

${execpi 3600 curl -s "wttr.in/Berlin?n&lang=de" | ~/.config/conky/ansito | head -n -2}

Ansito: https://github.com/pawamoy/ansito

And it's much the same in

c't 5/2019, p. 42: “The grep command searches the file on a machine with a Core i5 and an SSD in a little over a minute. But it does not exploit the fact that the file is already sorted by hashes. In a sorted list, a binary search is much faster. A hand-written binary search in Python takes only a few lines of code. So we promptly developed a script that plows through the masses of data in record time.”

Also very nice, but there is not a word mentioning that on Linux it can be done much more simply and about four times faster:

$ look $(echo -n "111111" | sha1sum | awk '{print toupper($1)}') pwned-passwords-sha1-ordered-by-hash-v4.txt

For heaven's sake, it's not that hard. One sentence pointing this out isn't too much to ask. Or is it?

...

These impressions are naturally accompanied by the development of heise online (a medium that is, in principle, editorially independent of c't), which I, like so many long-time c't subscribers, regard as my online home port. Recently, it published an article by an author from the gender-feminist SJW corner that shone above all through the complete absence of any competence whatsoever. A quote:

While developers are always at pains to enter their program code as precisely as possible and to avoid typos, social scientists are trained to see the “big picture”, to survey the systemic interrelations of the world.

The more than 5,000 comments left no doubt that Heise had thereby managed to thoroughly annoy its core clientele. :)

A few days later, this e-mail from the “new online service heise+” reached me:

Dear Mr. Brandt, we are very pleased that you as 7 reader support our quality journalism.

I didn't ask what “7 reader” is supposed to mean. To flaunt such a blatant error in a mail that goes out to an estimated half a million people, in the same breath as the self-proclaimed hallmark of quality journalism (in the original “Qualitäts-Journalismus”, only genuine with its idiot's hyphen), is quite audacious. I wonder whether they will ever notice why people no longer buy their product?