Intel microcode updates

Intel has been offering an updated microcode data file since the 8th of January. According to Heise, these updates are exclusively devoted to Spectre and to CPUs from 2013 onward, Meltdown being taken care of by kernel updates, and older CPUs being (hopefully) the subject of subsequent microcode updates.

To examine and eventually apply these updates, we first have to download them:

Arch:

sudo pacman -S intel-ucode

Debian:

sudo apt install intel-microcode

Debian automatically updates the initrd, but on Arch, one has to update the bootloader as described in the wiki. Prior to doing so, one should check whether the updated microcode file actually holds updates for the CPU in use at all. In agreement with Heise's report, there's no update for my Ivy Bridge Xeon:

➜  ~ bsdtar -Oxf /boot/intel-ucode.img | iucode_tool -tb -lS -
iucode_tool: system has processor(s) with signature 0x000306a9
microcode bundle 1: (stdin)
selected microcodes:
  001/138: sig 0x000306a9, pf_mask 0x12, 2015-02-26, rev 0x001c, size 12288

But the Haswell i7 at work is destined to receive one:

➤ bsdtar -Oxf /boot/intel-ucode.img | iucode_tool -tb -lS -
iucode_tool: system has processor(s) with signature 0x000306c3
microcode bundle 1: (stdin)
selected microcodes:
  001/147: sig 0x000306c3, pf_mask 0x32, 2017-11-20, rev 0x0023, size 23552

After a reboot, it is easy to check whether an update of the microcode has taken place or not:

➜  ~ dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x1c, date = 2015-02-26
[    0.652031] microcode: sig=0x306a9, pf=0x2, revision=0x1c
[    0.652284] microcode: Microcode Update Driver: v2.2.

Same as before.

➤ dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x23, date = 2017-11-20
[    0.552077] microcode: sig=0x306c3, pf=0x2, revision=0x23
[    0.552404] microcode: Microcode Update Driver: v2.2.

Indeed, a new one!

And what does the update do? Am I now immune to both Meltdown and Spectre on the Haswell system?

According to the 'Spectre & Meltdown Checker', the update actually has very little effect. Here's the result on my Xeon:

$ ./spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.29

Checking for vulnerabilities against running kernel Linux 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64
CPU is Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
- Checking count of LFENCE opcodes in kernel:  NO
> STATUS:  VULNERABLE  (only 21 opcodes found, should be >= 70, heuristic to be improved when official patches become available)

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
- Mitigation 1
- Hardware (CPU microcode) support for mitigation:  NO
- Kernel support for IBRS:  NO
- IBRS enabled for Kernel space:  NO
- IBRS enabled for User space:  NO
- Mitigation 2
- Kernel compiled with retpoline option:  NO
- Kernel compiled with a retpoline-aware compiler:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
- Kernel supports Page Table Isolation (PTI):  YES
- PTI enabled and active:  YES
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

And here's the Haswell. Note Spectre 2.

$ ./spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.29

Checking for vulnerabilities against running kernel Linux 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64
CPU is Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
- Checking count of LFENCE opcodes in kernel:  NO
> STATUS:  VULNERABLE  (only 21 opcodes found, should be >= 70, heuristic to be improved when official patches become available)

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
- Mitigation 1
- Hardware (CPU microcode) support for mitigation:  YES
- Kernel support for IBRS:  NO
- IBRS enabled for Kernel space:  NO
- IBRS enabled for User space:  NO
- Mitigation 2
- Kernel compiled with retpoline option:  NO
- Kernel compiled with a retpoline-aware compiler:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
- Kernel supports Page Table Isolation (PTI):  YES
- PTI enabled and active:  YES
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

Don't see the difference? Look again at 'Hardware (CPU microcode) support for mitigation'. That's all, yes. Kind of sobering, I agree. Please blame Intel, not me.

Update: After reading this article, I understand that the microcode update only prepares the ground for the actual patch, which will come with kernel 4.15 and later versions. I'll check again then, of course after updating the 'Spectre & Meltdown Checker' (simply pulling the latest version via 'git pull origin master').

Meltdown patch available for Arch

If you haven't heard of Meltdown and Spectre, it's about time you do. Since yesterday, all newspapers and even TV have been providing extensive coverage of a recently discovered vulnerability of modern CPUs that potentially results in a leak of sensitive data. While Meltdown seems to primarily affect modern Intel CPUs, Spectre also applies to AMD and ARM chips. The scale of this vulnerability is not only unprecedented, it's historic.

The KPTI (formerly KAISER) patch developed at TU Graz defeats Meltdown. The patch is part of the upcoming Linux kernel 4.15 and has already been backported to 4.14.11.
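
Whether the patch is active on a running kernel can be verified in the boot log (a quick check; the exact wording of the message may vary between kernel versions):

dmesg | grep -i isolation
# on a patched kernel, one should see something like:
# [    0.000000] Kernel/User page tables isolation: enabled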

Which brings me to the good news for Archers like myself: kernel 4.14.11 has been available since yesterday, 8:13 CET. Spectacular work from upstream, but also from the Arch team! No new microcode, though – the currently available one is still from the 17th of November.

CentOS just provided patches as well. There's nothing from Debian yet, however. 😞

Oh, and I've just received a mail from the hoster of pdes-net.org. Good to see they react at once.

What a great start of 2018. Well, regardless, happy new year to all of you. 😉

Update: An in-depth analysis of the mechanisms behind Meltdown and Spectre can be found in an online article (in German) written by the legendary Andreas Stiller (who, most unfortunately, retired at the end of 2017).

Genealogy

My first Linux was Redhat 2.0, installed on a Pentium 90 from a CD attached to a magazine entitled “Linux: ein Profi-OS für den PC”, which I had purchased for 9,99 DM at Karstadt in December 1995. I was mesmerized: running a Unix system on my PC, not unlike the Solaris I had used before on a Sun workstation (which was entirely out of reach financially), was a revelation. Soon after, I acquired the “Kofler” (2nd edition), which included a CD with Redhat 3.0.3.

Why was I so interested in Linux? Of all operating systems I knew, Solaris was the only one I found to be a pleasure to work with. DOS was stable and reliable, but much too limited, and MacOS and Windows appeared to me as demonstrations of the various ways a computer can crash rather than operating systems.

I used MacOS 6 on a Macintosh II from 1992 to 1994 in Japan and learned to thoroughly despise this caricature of an operating system. I sometimes felt that I spent more time looking at the bomb than doing anything useful. When Apple launched the switch campaign a decade later, the frequent crashes of MacOS and its bizarre error messages were already legendary.

I returned to Germany in 1994 and had great hopes for Windows, about which I'd heard from a guy working for FutureWave Software, the company that developed the precursor of Shockwave Flash. Well ... the much-touted Windows turned out to be DOS with an amateurishly designed GUI, prone to surreal crashes that occurred spontaneously, without any apparent reason.

Before one could enjoy these magic moments, one had to install the whole caboodle. And that meant, of course, first installing DOS 6.22 (which came on four 3½ inch floppy disks) and then Windows 3.11 (eight 3½ inch floppy disks). If you're too young to know what that means, listen to the sound of computing in the 1990s.

Redhat, in contrast, came on a CD, which in itself seemed to reflect the technological supremacy of this OS over its commercial cousin. This impression, however, turned out to be nothing but a delusion: the installation procedure could only be started from DOS! The installation itself required intimate knowledge of the hardware components of the computer and their IRQ numbers and IO addresses. Ironically, the easiest way to get this information was an installation of Windows on the same computer.

What made the installation even more difficult was my plan to realize a dual boot configuration—Windows for the games, Linux for LaTeX. In fact, the typesetting suite was one of the main reasons for my interest in Linux, because it was an integral part of the distribution at that time. I had just installed LaTeX on Windows on my computer at the office, and after an entire day and a seemingly endless sequence of floppy disks, I realized that I didn't want to do that again.

After struggling with a number of difficulties, I managed to set up my dual-boot system. Encouraged by this success and the pleasant user experience, I installed a variety of distributions in the years to come, and found the installation becoming easier with every year. Installing Mandrake Leeloo in 1998 on a brand-new Pentium II 266 was way easier than installing Windows 98. In 2001, HAL was still science fiction, but we had computers every dumbo could handle.

At least that was my impression. Ubuntu, a Debian derivative, materialized in 2004 and was touted to be the first Linux distribution a normal user would be able to install and use. The Ubuntu hype has been unbroken ever since, and in many mainstream media, Ubuntu has become synonymous with Linux. In recent years, Ubuntu has been superseded by Mint in terms of popularity. It seems that the masses always choose unwisely.

But what is a good choice? And how should a beginner choose from the 305 distributions listed on Distrowatch?

Well, let's start with the second question. The situation is actually much less confusing than it seems at first glance. As a matter of fact, we do not face 305, but just about a dozen independent Linux distributions; the rest are offspring. Wikipedia has a comprehensive article on this subject, and the fantastically detailed timelines visualize the historical development most beautifully. The comparison of Linux distributions is another illuminating article.

For simplicity, let's project this development onto a one-dimensional time axis. These are the originals (together with popular derivatives):

Slackware (July 1993)

Porteus, SalixOS, Slax, Vector, Puppy, (SUSE)

Debian (September 1993)

Ubuntu, Mint, ElementaryOS, Grml, Knoppix, SteamOS, Damn Small, Puppy, ...

Redhat (October 1994)

CentOS, Mandrake/Mandriva/Mageia, Scientific, Fedora, Qubes

SUSE (May 1996)

OpenSUSE

Gentoo (July 2000)

Sabayon

Archlinux (March 2002)

ArchBang, Antergos, Chakra, Manjaro

Is that really all there is? Well, these are the big six. There are some notable newcomers:

CRUX (December 2002), Alpine (April 2006), Void (2008), and Solus (December 2015)

The first three are technically markedly different from the mainstream distributions, and are definitely not aimed at beginners. All right, all right...which one of the big six is aimed at beginners?

None, of course. What do you think? That back then anybody in their right mind developed primarily for noobs? Hell, the word hadn't even been coined yet, since the whole category of people who could be labeled as noobs did not exist. The world wide web, which would give birth to a generation that watches videos to learn how to boil eggs, had only just been invented. Incredible as it sounds, there was no Google, no Youtube, no Twitter or Facebook. Watching a video at the bitrate of modems in 1993 (14.4 kb/s) would only have worked in ultraslow motion anyway (1 s stretched to 5 min). In any case, personal computers and their operating systems were perceived as a revolution in user friendliness compared to what had existed before, and people were willing to acquire the skills it took to operate them.

To develop Linux for noobs is a decidedly modern phenomenon, invented by a visionary South African billionaire in the hope of becoming the 21st century's Bill Gates. Indeed, Mark Shuttleworth was the first person who tried to market Linux. He did that in a remarkably effective way by appealing to first world people's natural sentimentality: “Ubuntu is an ancient African word meaning ‘humanity to others’.” Hardened Linux veterans like me reacted to this campaign in a rather unfavorable way, I'm afraid:

Ubuntu is an ancient African word meaning 'I can't configure Debian'.

And now to the first question: what is a good choice? As I've stated in a previous post, I generally do not like to make recommendations – people's qualifications, needs, and preferences are just too diverse. However, I can tell you what criteria are important for me and what I have consequently chosen to work with.

  1. I'm not willing to make any compromise regarding security. The distribution I use must have a dedicated security team and a dedicated security advisory system. That excludes the majority of pet-project and show-case distributions derived from one of the big six.

  2. Many thousands of useful programs exist in the open-source world. I want as many of them as possible to be easily accessible in central repositories managed by the distribution. A clearly defined core subset should be officially supported. Situations as in Ubuntu (and derivatives), where no one knows a priori what's supported and what isn't, are unacceptable.

  3. New software versions should be available days after they appear upstream, not months or years. I do not have the patience to wait months for bugfixes because of six-month release cycles or similar nonsense. That leaves only rolling-release distributions such as, most prominently, Arch, Gentoo, Debian Testing and Debian Sid, Fedora Rawhide, and openSUSE Tumbleweed.

  4. Last but not least: I want to invest as little time and effort in my computer installations as possible. They should run smoothly and function as expected.

I've thus almost inevitably arrived at the following constellation:

Desktop Home: Arch
Desktop Office: Arch
Notebook: Arch
Netbook: Debian Sid
Server: Debian Testing
Compute Servers: Debian Testing, CentOS [1]

In addition to all these physical systems, I also have various installations of Debian Testing, Debian Sid, Arch, and CentOS as virtual machines. Oh, and, before I forget: there's also a lonely Windows 7, which is about as troublesome as all of the above together. No, I'm not kidding. Just the regular monthly update takes an hour.

In any case, those are the distributions I'm using. What can you learn from that, if you are a noob? Just a few basic things, perhaps. First of all, it's good to know what you really want. And then, it's good to act accordingly, no matter your level of noobishness. 😉

Quality journalism

c't 25/2017. A test of the new iPhone X entitled “Für die nächsten 10 Jahre” (For the next 10 years). In the conclusion on page 55:

Only the iPhone X shows what a current smartphone should look like. [...] Face ID is a unique selling point setting it apart from all others, and after the shortest time you wouldn't want to be without it.

Same issue, page 60: a test of the new OnePlus 5T entitled “Hohe Schlagzahl” (High stroke rate).

In addition, OnePlus includes facial recognition. It works in under half a second, could not be tricked by photos, and was not confused by glasses or caps.

It is well known that a significant percentage of the population and apparently 100% of all journalists suffer a catastrophic failure of higher cerebral functions when confronted with products from Apple. But what's the reason for this distressing loss of self-control? Well, if you look at my previous post on a review of the iPad by Spiegel Online, it is clear that this loss closely resembles the one seen in sexually overloaded situations such as mating rituals and reproductive scenarios, during which the male brain is fully occupied with sending messages to the pituitary gland to trigger the production of testosterone (which, incidentally, is also responsible for making the individual drool).

The inevitable conclusion of these observations is that Apple coats their products with certain pheromones acting as a highly effective sexual stimulus. Since the next Apple store is just a 10 min walk away, I shall test this hypothesis myself. Should I develop the madness described above, please shoot me on sight.

Xpdf (4!)

Desktop publishing (DTP) was initiated by two ground-breaking developments of Adobe. First, they established PostScript in 1984, which, after being quickly adopted by Apple in 1985 for their first laser printer, became the de-facto standard in the DTP world for a long time to come. Second, they developed the portable document format (pdf) in 1993, which now dominates not only DTP, but all electronic publishing activities.

I don't remember when pdf became relevant for me. For publishing, most journals still prefer figures in eps format, although some accept pdf as well. I also don't remember whether Xpdf or gv served as my pdf reader in the 1990s. In any case, Xpdf was (as far as I know) the first dedicated pdf reader for Linux, and came with the Motif interface popular at that time (after all, the standard Unix desktop CDE used Motif!). This archaic interface hasn't changed since 1995, and is certainly one of the main reasons why nobody uses Xpdf any more.

Well, I do, but only for a single purpose: I use Xpdf to extract vector graphics from pdf files. A few days ago, I planned to do exactly that, opened the paper with Xpdf and ... but wait a second, that's not Xpdf!


../images/xpdf4_paper.png

And yet, the window title says Xpdf. What's going on?

../images/xpdf4_about.png

A new logo, and a new toolkit. Had I been asked, I would have bet anything on this never happening. Fortunately, nobody asked...

In any case, it's nice. But does my script work? Of course not. At least nothing happens when I hit Ctrl-e. Starting Xpdf from a terminal shows that the script is started all right, but the filename is put in additional quotation marks. Ha, that's easy: in ~/.xpdfrc, the line

bind ctrl-e any "run(pdfsnap '%f' %p %x %y %X %Y)"

just has to read

bind ctrl-e any "run(pdfsnap %f %p %x %y %X %Y)"

and it works!
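
In case you're curious what pdfsnap actually does: I won't reproduce my script here, but a minimal sketch along the same lines could be built on poppler's pdftocairo (hypothetical; the arguments correspond to the placeholders in the bind command above, and in practice one may have to convert between xpdf's selection coordinates and the pixel coordinates pdftocairo expects):

#!/bin/bash
# pdfsnap (sketch): extract the selected region of a pdf page as vector EPS.
# Arguments as passed by xpdf's run(): file page x y X Y
file=$1; page=$2; x=$3; y=$4; X=$5; Y=$6

# pdftocairo crops at (-x,-y) with width -W and height -H;
# -f/-l restrict the conversion to the current page
pdftocairo -eps -f "$page" -l "$page" \
    -x "$x" -y "$y" -W $((X - x)) -H $((Y - y)) \
    "$file" "${file%.pdf}_p${page}.eps"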

By the way: this update illustrates the difference between Archlinux and Debian Sid very nicely. For both systems, the update came on almost the same day, but with different content: 4.00 for Arch and 3.04-4+b1 for Debian. Sid is not vicious, but a snail. 😉

MOTD

The file /etc/motd contains the message displayed by the server when an ssh connection is established. For a default Debian system, this message reads:

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

Bla blup, gna gna gna gna. Nothing is more dreadful and dull than legal disclaimers.

As an alibi, Debian adds a script in /etc/update-motd.d that executes the command

uname -snrvm

resulting on pdes-net.org in

Linux v22016124074441159 4.12.0-1-amd64 #1 SMP Debian 4.12.6-1 (2017-08-12) x86_64

which is almost as boring as the legal blah-blah above. I don't need to be reminded that I'm using Linux, nor that it's the 64-bit variant (and that even twice). The hostname isn't hot news either, and it would certainly suffice to show the kernel version only once.

Imagine instead a dynamic message, one that truly constitutes a message of the day (MOTD). I'd like to see, for example, the current system load at login, and information concerning the system's available resources.

Searching the interwebs for that, I first found lots of outdated and contradictory information, but finally came across this very useful guide for current Debian versions. In essence, one can put arbitrary scripts in /etc/update-motd.d. Instead of writing these scripts myself, I've used Nick Charlton's, which seem very close to what I wanted.
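
To give you an idea, a minimal snippet in this spirit could look as follows (a sketch, not Nick's actual code; note that scripts in /etc/update-motd.d must be executable):

#!/bin/sh
# /etc/update-motd.d/20-sysinfo (sketch): print load, memory, and disk usage

printf "System information as of: %s\n\n" "$(date)"
printf "System load:\t%s\n" "$(cut -d ' ' -f 1-3 /proc/loadavg)"
printf "Memory usage:\t%s\n" "$(free -m | awk '/^Mem:/ {printf "%.1f%%", $3/$2*100}')"
printf "Usage on /:\t%s\n" "$(df -h / | awk 'NR==2 {print $5}')"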

Because of the update query, these scripts delay ssh login by about 0.7 s. Let's see what my fellow PdeS will say about that. duck

Update 21.8.17: I totally forgot the obligatory “screenshot”:

[cobra:~] $ ssh pdes

           _                            _
 _ __   __| | ___  ___       _ __   ___| |_   ___  _ __ __ _
| '_ \ / _` |/ _ \/ __|_____| '_ \ / _ \ __| / _ \| '__/ _` |
| |_) | (_| |  __/\__ \_____| | | |  __/ |_ | (_) | | | (_| |
| .__/ \__,_|\___||___/     |_| |_|\___|\__(_)___/|_|  \__, |
|_|                                                    |___/

Welcome to Debian GNU/Linux testing (buster) (4.12.0-1-amd64).

System information as of: Mon Aug 21 19:35:08 CEST 2017

System load:    0.00    Memory usage:   1.6%
Usage on /:     1%      Swap usage:     0.0%
Local users:    3

0 updates to install.
0 are security updates.

You have new mail.
Last login: Sun Aug 20 11:14:11 2017 from 92.195.42.164

Update 26.8.17: Nick's elaborate python script for detecting updatable packages doesn't work here. I've replaced it with a simple bash one-liner:

#!/bin/bash

number=$(aptitude search '~U' | wc -l)
# alternative command using wajig:
# number=$(wajig toupgrade | tail -n +3 | wc -l)

echo -e "$number updates to install.\n"

DNS privacy

In my last post, I focused on the immediately obvious merits of a local DNS resolver. I didn't comment on an issue that I find at least as important: privacy, or rather the lack thereof, in the DNS system. Read Geoff Huston's excellent post for an overview.

One of the main reasons why I've chosen Unbound as my local DNS resolver is that it was designed with privacy in mind. In particular, it supports QNAME minimization and DNS over TLS. The latter is only one of several approaches currently under discussion for realizing an encrypted DNS system. However, it is among the few that already work: a number of test servers are in essentially continuous operation. I've used it for a couple of weeks and did not experience any interruption of service.

To test whether a server really offers DNS over TLS, use pydig:

pydig @185.49.141.38 +dnssec +tls=auth +tls_hostname=getdnsapi.net www.heise.de

vs. a resolver that does not offer it:

pydig @8.8.8.8 +dnssec +tls=auth +tls_hostname=getdnsapi.net www.heise.de

In order to use DNS over TLS in Unbound, we only need minimal modifications of the configuration files I've posted previously. First of all, we of course need to define upstream (forwarding) servers that support DNS over TLS. Second, encryption has to be enabled.

01_Basic.conf

forward-addr: 146.185.167.43@853         # securedns.eu over TLS
forward-addr: 185.49.141.37@853          # getdnsapi.net over TLS
forward-first: no
forward-ssl-upstream: yes

02_Advanced.conf

ssl-upstream: yes

After restarting the resolver with

systemctl restart unbound.service

all of your DNS requests are encrypted over TLS. 😊
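
If you want to convince yourself, watch the wire: with plain DNS, queries are readable on port 53, whereas now everything should leave the machine on TCP port 853 (assuming tcpdump is installed and eth0 is the outgoing interface):

# watch for outgoing DNS-over-TLS traffic
sudo tcpdump -ni eth0 'tcp port 853'

# and, in a second terminal, trigger a lookup via the local resolver
drill @127.0.0.1 example.org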

Unbound

I've been running a local DNS resolver for the last decade. I do that for two reasons: first, to bypass censorship and surveillance, and second, to profit from the essentially instantaneous answers of a local DNS cache. The latter point has become increasingly important in recent years, since modern websites tend to invoke links to dozens of other domains, all of which have to be resolved. A satisfactory web experience thus requires, first of all, a low-latency connection to the DNS server in use.

To specify “low”, let's have a look at the typical connections we have at home. With my vanilla ADSL2+, the latency to the fastest DNS servers around amounts to 8 ms for my desktop, which is connected to the router via a gigabit switch, or 12 ms for all devices connected via WiFi (802.11g). These values are not too bad, but not what I'd associate with “low latency”. At work, for example, we use a dedicated DNS server available on the intranet with a latency of 0.3 ms. Now we're talking.

I've had the same speed at home since 2009, when I started using pdnsd. Unfortunately, the development of this caching DNS proxy stopped in 2012. In addition, DNSSEC was introduced in 2010 and is now an indispensable part of the modern internet. To keep up with this development, I needed a local recursive DNS resolver that not only caches, but also validates. The article in c't 12/2017 about the validating, recursive, and caching DNS resolver Unbound thus came just in time.

The setup provided by c't applied to Ubuntu, and proved to be incomplete anyway (see the comments at the end of the article). With the help of the Archwiki and Calomel, I came up with the following configuration that works as desired on Archlinux. On Debian or Fedora/CentOS, some of the initial steps may not be necessary.

We first install unbound

pacman -S unbound

and enable the service

systemctl enable unbound.service

We need to edit the service unit, and do that by issuing

systemctl edit unbound.service

to create a drop-in snippet. The command above automatically opens your $EDITOR, in my case vim. The content of the snippet should be:

[Service]
ExecStartPre=sudo -u unbound /usr/bin/unbound-anchor -a /etc/unbound/root.key

After saving the file, give it a meaningful name:

cd /etc/systemd/system/unbound.service.d
mv override.conf update_rootkey.conf

We can now turn to the configuration of Unbound. Replace the default configuration file /etc/unbound/unbound.conf by a file with the following content:

# Unbound configuration file
#
# See the unbound.conf(5) man page.
#
# See /etc/unbound/unbound.conf.example for a commented
# reference config file.
#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.

include: "/etc/unbound/unbound.conf.d/*.conf"

Next, we create this directory:

mkdir /etc/unbound/unbound.conf.d

Let's put the following four files in this directory:

01_Basic.conf

## Basic configuration
#
server:
        interface: ::0
        interface: 0.0.0.0
        access-control: ::1 allow
        access-control: 2001:DB8:: allow
        # example for a ULA (unique local address)
        # access-control: fd00:aaaa:bbbb::/64 allow
        access-control: 192.168.178.0/16 allow
        verbosity: 1

forward-zone:
          name: "."
          # hopefully free of censoring and logging, definitely with DNSSEC Support:
          forward-addr: 194.150.168.168            # dns.as250.net (CCC)
          forward-addr: 194.95.202.198             # omni.digital.udk-berlin.de (Universität der Künste)
          forward-addr: 85.214.20.141              # h1768020.stratoserver.net (Digitalcourage e.V.)
          forward-addr: 80.237.196.2               # dnsc1.dtfh.de (CCC)
          forward-addr: 194.8.57.12                # ns.n-ix.net (Nürnberger Internet eXchange)
          forward-addr: 84.200.69.80               # resolver1.ihgip.net (DNS Watch)
          forward-addr: 84.200.70.40               # resolver2.ihgip.net (DNS Watch)
          forward-addr: 77.109.148.136             # xiala.net
          forward-addr: 77.109.148.137             # xiala.net
          forward-addr: 91.239.100.100             # anycast.censurfridns.dk (UncensoredDNS)
          forward-addr: 89.233.43.71               # unicast.censurfridns.dk (UncensoredDNS)
          forward-addr: 213.73.91.35               # dnscache.berlin.ccc.de (CCC)
          forward-addr: 62.113.203.55              # secondary.server.edv-froehlich.de (OpenNIC)
          forward-addr: 62.113.203.99              # OpenNIC

02_Advanced.conf

## Advanced configuration
#
server:
verbosity: 1
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes

root-hints: /etc/unbound/root.hints

auto-trust-anchor-file: /etc/unbound/root.key

hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
use-caps-for-id: yes

minimal-responses: yes
prefetch: yes
qname-minimisation: yes
rrset-roundrobin: yes

cache-min-ttl: 3600
cache-max-ttl: 604800

include: /etc/unbound/adservers

03_DumbFirewalls.conf

## reduce edns packet size to help big udp packets
# over dumb firewalls
#

server:
edns-buffer-size: 1232
max-udp-size: 1232

04_Optimize.conf

# Performance optimization
# https://www.unbound.net/documentation/howto_optimise.html
server:
                # use all CPUs
                num-threads: 8

                # power of 2 close to num-threads
                msg-cache-slabs: 8
                rrset-cache-slabs: 8
                infra-cache-slabs: 8
                key-cache-slabs: 8

                # more cache memory, rrset=msg*2
                rrset-cache-size: 200m
                msg-cache-size: 100m

                # more outgoing connections
                # depends on number of cores: 1024/cores - 50
                outgoing-range: 100

                # Larger socket buffer.  OS may need config.
                so-rcvbuf: 8m
                so-sndbuf: 8m

                # Faster UDP with multithreading (only on Linux).
                so-reuseport: yes
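
Before starting the service, it doesn't hurt to let Unbound validate the complete configuration:

unbound-checkconf
# expected: unbound-checkconf: no errors in /etc/unbound/unbound.conf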

We're almost done now. Two cronjobs in /etc/cron.weekly complete the configuration:

unbound_updates

#!/bin/bash
# Updating root hints.

###[ root.hints ]###

curl -sS -L --compressed -o /etc/unbound/root.hints.new https://www.internic.net/domain/named.cache

if [ $? -eq 0 ]; then
  mv /etc/unbound/root.hints /etc/unbound/root.hints.bak
  mv /etc/unbound/root.hints.new /etc/unbound/root.hints
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/root.hints.bak
        systemctl restart unbound.service
  else
        echo "Warning: Errors in newly downloaded root hints probably due to incomplete download:"
        unbound-checkconf
        mv /etc/unbound/root.hints /etc/unbound/root.hints.new
        mv /etc/unbound/root.hints.bak /etc/unbound/root.hints
  fi
else
  echo "Download of unbound root.hints failed!"
fi

adserver_updates

#!/bin/bash
# Updating adserver list.

###[ adservers ]###

curl -sS -L --compressed -o /etc/unbound/adservers.new "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=unbound&showintro=0&mimetype=plaintext"

if [ $? -eq 0 ]; then
  mv /etc/unbound/adservers /etc/unbound/adservers.bak
  mv /etc/unbound/adservers.new /etc/unbound/adservers
  unbound-checkconf >/dev/null
  if [ $? -eq 0 ]; then
        rm /etc/unbound/adservers.bak
        systemctl restart unbound.service
  else
        echo "Warning: Errors in newly downloaded adserver list probably due to incomplete download:"
        unbound-checkconf
        mv /etc/unbound/adservers /etc/unbound/adservers.new
        mv /etc/unbound/adservers.bak /etc/unbound/adservers
  fi
else
  echo "Download of unbound adservers failed!"
fi

The adserver component is of course optional, but I've found it to be a very efficient way of blocking ads. I'll compare the various possibilities to block ads in a forthcoming post.

For the moment, let's concentrate on the core competences of our new DNS resolver. To do so, we first start it by issuing

systemctl start unbound.service

We can test the resolver on the command line using either dig or its near drop-in replacement drill.

dig +dnssec +multi @localhost debian.org
drill -D @localhost debian.org

What's essential here are the first two lines and the entries in rcode and flags: 'NOERROR' and 'ad', the latter standing for 'Authenticated Data'. In other words, the DNS response is authentic because it was validated using DNSSEC. The RRSIG records contain, among other data, the DNSSEC signatures of the answer's resource record sets (the public key itself lives in the domain's DNSKEY record), as explained here.
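
For reference, the relevant part of dig's header should look roughly like this (abbreviated; id and counts will of course differ):

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23437
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1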

Let's try that with a domain which is not validated by DNSSEC:

dig +dnssec +multi @localhost archlinux.org

NOERROR, but no 'ad' flag. Quite all right.

And now a domain with a broken/bogus DNSSEC record:

dig +dnssec +multi @localhost dnssec-failed.org

Status: SERVFAIL. Works as well.

Last but not least, let's test the cache of Unbound:

for i in $(seq 1 5); do dig www.tuvaluislands.com | grep 'Query time' | awk '{print substr($0, index($0, $2))}'; done
Query time: 746 msec
Query time: 0 msec
Query time: 0 msec
Query time: 0 msec
Query time: 0 msec

Works. 😉

If the command line appears to be too cryptic, we can also test the basic DNSSEC functionality with a browser:

For addresses with broken/bogus DNSSEC records, such as this one, the browser should just display an ERR_NAME_NOT_RESOLVED page. It does? Excellent.

Still...that page is depressing. Let's boost our morale by visiting https://dnssec.vs.uni-due.de/ :

../images/dnssec.png

Thank you, Matthäus 😉 .

Representative surveys

Statistical surveys are a standard tool of sociology, and have been the subject of extensive research. In the hands of professionals, the results of these surveys can be surprisingly accurate. As a result, surveys have attained the status of the oracle of Delphi, and people fervently believe in them. Naturally, this development has made surveys an attractive tool for manipulating public opinion. The standard way to do this is to load the questions with a moral obligation. Don't you agree that the internet should be regulated to deprive extremists of their safe spaces online? No? Really, what kind of person are you? Don't you ever think of the children?

However, unexpected results of surveys do not always have sinister reasons, but may instead simply reflect the incompetence of the inquirer. In particular, the most elementary rule for designing a survey is frequently forgotten: namely, that those taking part in the survey have to understand the questions. Sounds obvious, doesn't it? But is it?

An example: the recent news on heise online that only(!) 16 percent of all Germans encrypt their emails (Umfrage: Nur 16 Prozent der Deutschen verschlüsseln ihre E-Mails). This survey was conducted by Convios Consulting on behalf of United Internet (UI), one of the largest internet and mail providers in Germany.

UI claims that about 750,000 of their users have generated PGP key pairs. That's a very impressive number, particularly since according to UI, only 4.7 million keys “exist” worldwide. The UI users would thus account for 16% of all PGP keys. Doesn't that demonstrate very nicely that UI's encryption initiative introduced in August 2016 is highly successful?

Well, the whole reason for the survey was to create exactly this impression. I have no doubts that the numbers quoted above are correct, but what do they mean?

First of all, the number reported for existing keys worldwide only accounts for keys that have been uploaded to key servers. Nobody can estimate how many keys have actually been generated or are in use. The situation is quite different for the UI encryption scheme, which is based on Mailvelope and on storing the user's public key in a database located on a UI server. The number given above is thus the total number of UI customers with a PGP key, unless they use a separate key in a stand-alone MUA (of which I know two 😉).

There are about 40 million email users in Germany. According to UI, about half of them use GMX or WEB.DE, which seems reasonable as UI is reported to have close to 20 million customers. Now, let's suppose that all UI customers who have generated a key are also actually using it to encrypt their mail. In this unlikely case, 3.75% of all UI users would encrypt their mail, much less than the “only 16%” of their survey. Obviously, that must mean that 28.25% of the other 20 million email users in Germany, who are mostly customers of the German Telekom, Google, and Microsoft, encrypt their mail (16% of 40 million is 6.4 million; subtracting the 750,000 UI keys leaves 5.65 million out of the remaining 20 million, i.e., 28.25%). Right?

Of course not. Try asking arbitrary Gmail users whether they encrypt their mail. 84% will look at you with blank eyes, but 16% will recognize the word and confirm that they do, YES! Ask them afterwards if they know the difference between transport and end-to-end encryption. I guarantee that you will soon get tired of asking, because even after several hundred you won't find a single one who can answer the second question...

How many people do encrypt their mails? I don't think there are any bona fide surveys on that topic. I can only provide anecdotal evidence with very limited statistical significance. On the other hand, I've been a serious advocate of end-to-end encryption for about 15 years. I've written tutorials and motivated many of my personal contacts to use end-to-end encryption in email and messaging. Well, some would say I forced them at gunpoint. But that would be exaggerated...

I currently have 49 personal contacts with public PGP keys, and 16 business contacts. That doesn't sound too bad, does it? However, 17 and 4 of these keys, respectively, are expired, leaving 32 and 12. Subtracting keys whose pass phrases have been forgotten by their users or that were otherwise disposed of leaves 19 and 7. Some of my contacts have passed away, are retired, or I've simply lost touch, leaving in the end 4 and 5 with whom I can, in principle, exchange end-to-end encrypted mails. In practice, however, there are only three persons with whom I regularly exchange encrypted mails: my patent attorney at work and my fellow PdeS (that's why they have that label) in private life.

Three out of 65 with an actively used key, but out of how many without any clue what that even means? I don't want to spend time on the question of how to count the number of unique addresses in my mail folders over the past few years, but this number would obviously be in the several hundreds. In other words, the percentage of people employing end-to-end encryption in my email correspondence is way lower than 1%. And if I weren't interested in this kind of thing, and if I weren't a scientist, this percentage would be exactly zero. Not 16%. Not 0.16%. Zero.

Can we find out how many people really encrypt their mails by a survey? Not really. If fewer than 100 ppm of all people encrypt (which is the number I find most plausible), we would need a mega-survey of 50,000 people to include at least 5 who actually do, and that's never going to happen. And don't let them tell you that the rules of statistics somehow don't apply here and that representative surveys can answer all of these questions as if by magic. That's bullshit. All of it.

Debian 9

Stretch is stable. Testing is now called Buster.

sed -i 's/stretch/buster/g' /etc/apt/sources.list

I could equally well just use 'testing', but for some presumably deeply rooted psychological reasons, I like the codenames better.
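
The rename alone doesn't change anything, of course; the actual switch happens with the usual full upgrade (apt-get dist-upgrade would do the same):

apt update
apt full-upgrade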

For my veteran netbook mini, Buster is now the 7th incarnation of Debian or a Debian derivative in its 9 years of operation: Etch, Lenny, Squeeze, Wheezy, Jessie, Stretch, Buster. I'm sure it will also run Bullseye. 😉