No more buntu

A few days ago, the Kubuntu 12.10 on my wife's notebook received a regular update but did not, after the obligatory reboot, return to regular operation: the display resolution had dropped to 1024x768, and neither WiFi nor Bluetooth were available.

I didn't even think twice (my wife said I was just waiting for such an opportunity, and she may be right): let's replace this sad imitation of GNU/Linux with something I can rely on!

And what would that be? My wife asked for something really established, with an easy, graphical install routine, up-to-date packages and preferably offering a rolling release scheme. And it should not be related in any way to any kind of *buntu *shudder*.

What does that leave?

OpenSUSE, a descendant of Slackware. 😉 Of course, the installation (12.3) is a breeze, and so is activating the Tumbleweed repository (and thus the rolling-release scheme). Equally expected is the fact that all the hardware of this Fujitsu Lifebook AH 530 is recognized and supported out of the box. The Lifebook falls asleep and gracefully wakes up as desired, and the frequent hiccups and lockups due to the power management under *buntu are a thing of the past.
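For reference, switching to Tumbleweed amounted to little more than adding the corresponding repository and letting zypper switch the installed packages over to it (a sketch; substitute the repository URL given in the openSUSE wiki):

zypper ar --refresh <tumbleweed-repository-url> Tumbleweed
zypper dup --from Tumbleweed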

That leaves me with Archlinux on my main system, Crunchbang on the Mini, openSUSE on the lifebook, Fedora for my office desktop, and Debian Wheezy for the workstations. Pity I can't fit a Gentoo in between. 😉

Serious stutter

Serious Sam 3 installed via Steam has serious performance issues: even when the internal fps counter displays values beyond 60, the game is unplayable. It stutters and stalls in a way that makes even seasoned first-person-shooter veterans sick.

The solution is simple: install cpupower (if you're using systemd) or cpufrequtils (initscripts) and issue:

cpupower frequency-set --governor performance

or

cpufreq-set -g performance

The latter command applies only to one core. If you want to set all cores, edit /etc/default/cpufrequtils and restart the service.
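For reference, a minimal /etc/default/cpufrequtils could look like this (a sketch; the exact variable names honored may differ between distributions):

ENABLE="true"
GOVERNOR="performance"

Alternatively, a small shell loop applies the governor to each core directly:

for c in 0 1 2 3; do cpufreq-set -c $c -g performance; done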

Happy fragging! 😉

Collaboration

Last week, I had to edit a manuscript of a fellow senior scientist. The file he gave me was named (I'm not joking) “manuscript.tex”. Since I usually work on many manuscripts in parallel, I gave it a unique name, edited it, and returned it as “xy_abc13_1.tex” using my usual naming convention with xy being the author's name and abc the journal's.

After the other authors had had their say, a second round was started, and the revised manuscript came back to me under the name "manuscript.tex". I discovered that several of my changes had been overwritten by later edits or reverted to the original. When I confronted the author with this irritating fact, he told me that he could not be held responsible, because it was obviously way too complicated to keep track of the changes by renaming the file.

I don't share this point of view, but it's true that there are simpler ways to keep track of changes than to manually rename files, namely, version control systems.

In the following, I show you how to profitably use the version control system mercurial (or, alternatively, git) when preparing a manuscript or thesis, and how to combine this version control with latexdiff, an indispensable tool for visualizing the changes made from one revision to the next. Using version control and latexdiff in conjunction is dead simple and has many benefits.

After installing mercurial (or git), we first need a minimal configuration in the form of an .hgrc or .gitconfig file containing the name and e-mail address of the local user.

mercurial (.hgrc):

[ui]
username = John Doe <john.doe@doe.de>

git (.gitconfig):

[user]
name = John Doe
email = john.doe@doe.de

We can now initialize our first project:

mkdir project
cp example.tex project/
cd project
hg init
hg add example.tex
hg commit -m "Original version."

The syntax of git is precisely the same.
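For completeness, here is what the same sequence would look like with git (a sketch; note that git, just like mercurial, needs the add before the first commit):

mkdir project
cp example.tex project/
cd project
git init
git add example.tex
git commit -m "Original version."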

Now let's edit our manuscript "example.tex", save the changes (under the same name!), and commit the first revision:

hg commit -m "First revision."

hg log should show you the existence of two revisions: 0 and 1. Git has a more complicated numbering scheme as you will discover when issuing git log.
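One caveat for git users: unlike mercurial, git only commits what has been staged, so for later revisions you either git add the file again or commit all modified, tracked files in one go (a sketch):

git commit -am "First revision."

The abbreviated hashes shown by git log --oneline then serve the same purpose as mercurial's revision numbers.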

You're now using a VCS! Now, that wasn't that difficult, was it?

Let's add two more commands to our repertoire. Suppose you edit the file but are unhappy with the changes and would like to start afresh: a simple hg revert -r 1 brings you back to the state of revision 1. And if you have already committed the changes? In that case, hg update -r 1 checks out revision 1, and further development committed from there makes it the new head.
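For the record, rough git counterparts of these two commands (a sketch; <revision> stands for the hash shown by git log):

git checkout -- example.tex
git reset --hard <revision>

The first discards uncommitted changes to the file, the second moves the current branch back to the given revision.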

All right, we can track changes, but how can we visualize them? With latexdiff, of course:

Latexdiff

There are several ways to use latexdiff in conjunction with a VCS.

For example, latexdiff itself supports git: latexdiff-vc -r f76228 example.tex generates a tex file marking all changes between the current revision and revision f76228. You have to compile this file yourself to obtain a pdf you can view and print. (See the update at the end of this post.)

Mercurial supports external diff programs, and a small fragment in your .hgrc (see below) allows you to use latexdiff within hg: hg latexdiff -r 0 example.tex > diff.tex creates a tex file containing all differences, as above. Once again, you have to compile this file yourself to obtain a pdf you can view and print.
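The fragment itself seems to have gone missing here, so here is a sketch of what it could look like, using mercurial's extdiff extension (the command name after cmd. becomes the new hg subcommand):

[extensions]
extdiff =

[extdiff]
cmd.latexdiff = latexdiff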

Far more convenient are scripts that automate this procedure. Pål Ellingsen and Paul Hiemstra wrote Python scripts supporting mercurial (and, in the case of Paul's script, also git) which automatically generate the pdf containing all changes, including the additional runs necessary for bibtex, and even support multi-file documents such as those used for writing theses and books. Create and view the diff with

diffLatex.py -r 0 example.tex

or

scm-latexdiff 0:example.tex

Paul's script also allows us to view the differences between arbitrary revisions: scm-latexdiff 0:example.tex 1:example.tex.

I still give files meaningful names, such as "ob_prb13_1.tex" for my first Physical Review B in 2013. But I will never again worry about file version numbers. What's more, the above scheme is convenient, foolproof (let's see), and entirely transparent. I plan to use it for all of my projects in 2013 and the years to come. If only I could convince people to do the same ...

Update 29.07.22:

Recent versions of latexdiff also support explicitly selecting the version control system and compiling the diff file, generating, for example, pdf output:

    latexdiff-vc --hg --pdf -r 0 example.tex

See man latexdiff-vc for further options.

Passwords

C't 3/2013 contained three articles about password security and password cracking. Nothing earth-shattering, but a number of interesting insights.

Advances in the password cracking scene are twofold. First, the hardware has advanced to a point where brute-forcing an 8-character password is a piece of cake if the hash algorithm used is sufficiently fast. For example, a single AMD 7970 is capable of processing about 16 billion NTLM hashes per second, and a cluster of 25 of these cards has a reported throughput of 350 billion NTLM hashes per second. An 8-character password composed of small and capital letters as well as numbers spans a search space of 62^8 ≈ 2.2×10^14. Our cluster would brute-force this password in 5 minutes (on average), since exhausting the full space at 3.5×10^11 hashes per second takes only about 10 minutes.

The second, and actually far more significant, advance is due to password theft. In the past, many commercial sites miserably failed to protect the login data of their users. Often, the user passwords were stored as unsalted SHA1 hashes, which can be processed at speeds in excess of 100 billion hashes per second using GPU clusters such as the one referred to above. Many millions of passwords have been leaked that way:

# wc -l rockyou.dict
14344391
# wc -l hashkiller.dict
23685601

Dictionary attacks are now not only common, but actually constitute a major component of the toolbox for breaking passwords. Here are some examples of passwords whose hashes were looted from online databases and which were cracked subsequently:

--jmle94--*
jiujitsu131@
dlnxf780508
28075s10810
198561198561
viatebatefilmul
zhengbo645917
182953c99vk416
nielasus5752754sh
polU09*@l1nk3d1n

As you see, these passwords are not of the type one commonly expects to be easily broken.

Now, every one of us has (or should have) multiple passwords for shopping sites and other services on the interweb, with "multiple" meaning in this context several dozen or even several hundred. Can we choose them such that they are immune to the attacks described above, and still memorize them?

Lots of smart people have pondered over this problem. Four schemes have emerged which have recently become popular. The names I gave them are not necessarily the names of the inventors, but those of the people or media who popularized the respective scheme.

Schneier scheme

The oldest and most well-known scheme. To quote Bruce Schneier:

"My advice is to take a sentence and turn it into a password. Something like "This little piggy went to market" might become "tlpWENT2m". That nine-character password won't be in anyone's dictionary.

Well, perhaps not. But I bet that lots of people choose the same sentence, for example "Mary had a little lamb, whose fleece was white as snow", i.e., "mhallwfwwas", which is even 11 characters long. And guess what: both of the dictionaries above actually contain this password. D'oh!
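If you have the two wordlists mentioned above lying around, you can check for yourself:

grep -xc mhallwfwwas rockyou.dict hashkiller.dict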

So you'd need an absolutely unique, private sentence. For each site you want to login to.

Let's try:

Amazon: "Die Frauen am unteren Amazonas haben durchschnittlich kleinere Brüste als gewohnt." dfauahdkbag
Google: "Die Zahl Googool ist groß, aber kleiner als 70!" dzgigaka7!

I don't know about you, but I don't think I could actually memorize the 72 sentences I'd need.

And if I need to write them down, as Bruce suggests ("If you can't remember your passwords, write them down and put the paper in your wallet. But just write the sentence - or better yet - a hint that will help you remember your sentence"), I can just as well use truly strong passwords (which I will discuss at the end of this entry).

Gibson scheme

Oh yes, this is the Steve Gibson who recommended ZoneAlarm all over the internet. But let's listen to what he has to say.

Steve suggests using a core term (such as dog) which is turned into a password by padding, like "...dog.........". This idea is based on the simple fact that the length of a password matters more than the size of its character set.

The idea has its merits. Yet, if I applied this principle to all of my passwords, I certainly couldn't hope to memorize them all if both the core term and the padding were varied. Let's try changing only the core:

Amazon: "...Amazonas......."
Google: ".......Googool..."

You could use "<<<<>>>>" instead, of course. Or any other padding. But if you think about it, the number of paddings is very limited. Steve's idea thus boils down to a simple password used in conjunction with a very predictable salt.

Bad idea.

XKCD scheme

Randall's web comic suggests that passphrases composed of a few common words are easier to remember and harder to crack than standard passwords consisting of letters, numbers, and special characters. That's once again a variation on the theme 'length over complication' which I have discussed in a previous entry. As an example, I gave 'HerrRasmussensAra'; Randall chose 'correct horse battery staple'.

Randall estimates the entropy of his password in a very conservative fashion, assuming a dictionary size of only (2^44)^(1/4) = 2^11 = 2048 words. Let's use a real dictionary instead:

# wc -l /usr/share/dict/words
119095

That's the default dictionary of the xkcd password generator for Linux. The search space for a passphrase composed of 4 words from this dictionary is 119095^4 ≈ 2^67, i.e., about 67 bits, essentially the same as for a password composed of 12 numbers and letters. Not larger?
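If you want to verify the arithmetic (assuming bc is installed), the base-2 logarithm of the search space is

echo 'l(119095^4)/l(2)' | bc -l

which indeed yields roughly 67.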

Mind you: just combining two words won't do, regardless of their length and obscurity. Using cudaHashcat-lite, it takes even my humble 650Ti (with roughly one third of the single-precision throughput of the above-mentioned 7970) less than one second to crack such a passphrase.

Does it help when trying to memorize the passwords?

Amazon: "Millionen Leser konifizieren Amazon".
Google: "Googlifizierung normalisiert Billionen Hornissen."

You see that you run into the same problem as with Schneier's scheme. 😞

C't scheme

Or the Schmidt scheme, since it is tirelessly propagated by Jürgen Schmidt, the head of Heise Security. The idea is as simple as it is appealing: take a strong, random master password you can still easily remember, such as ":xT9/qwB". Combine it with a snippet derived from the domain name, such as "ad6" for amazon.de or "gc6" for google.com. Voilà, you get a strong password which is easily reconstructed for as many sites as you wish.

Is it? Schmidt emphasizes that one should never use the same password twice, but "gc6" may represent google.com as well as gibson.com. Simply using the domain names is much too obvious, and is the reason why many passwords are broken easily. Should we use "goc6" and "gib6" instead? Several sites won't accept passwords that long anyway, nor passwords containing characters such as ":" and "/".

There goes the vision of a universal master password. Schmidt further disillusions the reader by telling them not to use the full master password on sites they don't fully trust. The criteria for this trust, however, remain unclear, particularly because Schmidt explicitly states that users can't recognize which sites are trustworthy. He proposes further exceptions for sites he trusts even less (?), for which he uses passwords that are not entirely obvious but easy to crack.

And finally, he proudly reports that really important passwords should be unique anyway and written down on paper.

This scheme has more exceptions than rules, it seems, and it doesn't keep the promise it appears to make at first glance.

My scheme

  1. Generate the strongest password the site allows, using an appropriate program (see the sketch after this list).
  2. Store these passwords in a strongly encrypted password database such as KeePass or KeePassXC.
  3. Save web-related passwords in the cloud to synchronize them across browsers, using a strongly encrypted cloud service such as LastPass or Enpass.
  4. Choose very strong passwords for your database and memorize them (and only them).
  5. Store your database redundantly. Use cloud storage services such as Wuala or Nextcloud offering client-side encryption.
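For step 1, any decent generator will do. Two possibilities from the command line (a sketch, assuming pwgen and openssl are installed; adjust the length to what the site accepts):

pwgen -sy 24 1
openssl rand -base64 18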

I know what you think. OMGOMG PASSWORD SAFES WHAT ABOUT TROJANS AND KEYLOGGERS AND EEEVEN WORSE!!! OMGWTF IN THE CLOUD IS THIS GUY NUTS!!!

Many Windows users react that way. It's a Pavlovian reflex: the deeply internalized belief that infections with malware are inevitable and must be quietly accepted. The argument itself, of course, is not a rational one.

Why not? Well, the argument is that a trojan with an associated keylogger could snatch the entire password database once you enter the master password. Well ... sure it could. And if you didn't use a password manager, what do you imagine these keyloggers would do? Delete themselves, bitterly disappointed?

Try to be strong: they'd collect your passwords, you know, one by one.

Disk speed

My previous post contained some vague statements about the low latency of the SSD with which my current system is equipped. Haui suggested a more quantitative measure and referred me to a little utility called seeker, capable of actually measuring the random access time of storage devices. Thus prompted, I went on and measured both the transfer rate TR (using hdparm -t) and the random access time AT (using seeker) of some other storage devices I had lying around.
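For the record, the raw commands behind these numbers (run as root; as far as I recall, seeker simply takes the device node as its only argument):

hdparm -t /dev/sda
seeker /dev/sda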

Name                Interface       Capacity (GB)       TR (MB/s)       AT (ms)

Plextor PX-256M5P   SATA 6G         256                 435             0.1
RunCore Pro IV      Mini PCIe       32                  75              0.35
USB sticks          USB 2.0         2--8                20--30          1
SD card (Class 10)  SD              16                  10              2
2.5" USB disk       USB 2.0         500                 35              15
WD WD10EFRX         SATA 6G         1000                150             20

When looking at these data, several facts strike me as remarkable. First of all, comparing the first and the last entry makes clear that SSDs are now the only viable drives for desktop computing, offering a capacity more than sufficient for any modern OS and its applications, a transfer rate clearly superior to that of all available magnetic HDs, and, finally, a decrease in access time by two orders of magnitude. The latter is of particular importance, since it not only creates this 'snappy' feeling we all love, but is also responsible for the absence of system stalls during extended disk operations, such as a backup or a file search (see also the technical remark on IOPS at the end of this entry).

Second, I cannot help but marvel at the abundance of affordable and high-performing mobile storage solutions available today. Readers below age 30 will perhaps not understand, so let me remind you of one historical fact:

3.5" floppy disk            0.0015      0.015       300

THIS THING was the only available portable storage solution for a decade prior to the advent of affordable CD-ROM drives in the late 90s. Consequently, the OS of your choice as well as games came on 20 or 30 of these disks. Look at the numbers above to get at least a SLIGHT idea of what installations were like. They indeed often took HOURS and required the repeated manual insertion and ejection of EACH OF THESE DISKS.

AHHHHH! How dreadful!

Eh ... sorry.

'Portable' also meant, of course, that we used these disks for transporting data back and forth. At least 20% of my data were destroyed in transit by the magnetic fields in the U-Bahn. GNAAAAA!

I'm so glad these times are over. 😊

On a technical note: SSDs are typically not assessed in terms of random access times, but in terms of I/O operations per second (IOPS). For a single thread, the two are simply related (IOPS ≈ 1000/AT with AT in ms), but the IOPS figures usually quoted are measured in a way relevant for servers, namely, with 32 threads. Ideally, the number of IOPS would scale linearly with the number of threads, but experience shows that 32 threads are sufficient to reveal the maximum, saturated IOPS of the drive.

To illustrate the different response of SSDs and HDDs to multithreaded access patterns, I've tested both the Plextor SSD and the WD HDD with seekmark (available in the AUR), a multithreaded I/O benchmark based on seeker.

Name                IOPS (1 thread)     IOPS (32 threads)       Gain

Plextor PX-256M5P   10000               100000                  10
WD WD10EFRX         50                  100                     2

As you can see, the SSD handles multithreaded access patterns much more gracefully than the HDD. Database servers with very high demands may saturate even this I/O rate, but otherwise we may state that 100000 IOPS ought to be enough for everybody. 😉

Update:

Here's a graphical overview of the drives' performance when subjected to different numbers of threads:

iops

The plot visualizes the enormous difference between SSD and HDD as well as the saturation of the drives for larger numbers of threads. My Plextor SSD, for example, scales linearly up to 8 threads, but saturates thereafter.

Local Area Wiki

I'm an ardent user of the desktop wiki Zim, which is a very handy tool for organizing one's own notes and ideas. Zim also offers a built-in webserver which allows users in the same network to glance at these notes and ideas. The exported web site cannot, however, be edited directly, so this feature is of limited use for actually sharing notes in a wiki-like fashion. There are also a number of other features which Zim lacks in this context. Tags, for example, are indispensable for filtering the displayed content in what I planned to set up, namely, a shopping list.

I've spent a few hours looking around and finally concluded that TiddlyWiki is ideally suited for the kind of local area wiki I envisioned. It consists of a single html file which can be directly edited in a modern web browser and saved locally (with the Firefox extension TiddlyFox, Java does not need to be enabled in the browser).

I basically use the default design with only a few changes of colors and font sizes defined in the shadow tiddlers 'ColorPalette' and 'StyleSheet':

A TiddlyWiki example

The wiki, being physically located on a network share accessible by all family members, is then made available in my LAN using darkhttpd as web server:

#!/bin/bash
# serve the TiddlyWiki residing on the NAS share on port 8081
wwwserv=/media/Thecus/tiddlywiki
log=/home/cobra/.darkhttpd/access.log
pidfile=/home/cobra/.darkhttpd/httpd.pid
darkhttpd $wwwserv --port 8081 --index wishlist.html --log $log --pidfile $pidfile --daemon

Heavy metal

The first thing I notice is its weight: the 'Fractal Design R4' is a hefty 12.3 kg, more than twice as heavy as my previous case, which is not much smaller but consists of rather thin and flimsy aluminum sheets. The matte black side panels of the R4 are made of remarkably thick rolled steel and are additionally fitted with dense bitumen inlays for sound suppression, which add to the overall weight.

Inside, the massive Thermalright Silver Arrow dominates the visual impression, but one can also make out the cooler of the GPU just below it, the power supply at the bottom, and the two drives at the bottom right.

inside the case

Once powered up, the system starts with an audible hum, definitely not what I had expected. I first reduce the speed of the case fans with the external switch, but that just leads to a slight change in the tone of the hum. The case fans, when fully turned up, only emit the characteristic noise of high air flow, so the hum is clearly of different origin. I thus enter the system's BIOS and set the default CPU fan speed from 7 to 1. After a restart, the hum is gone.

In fact, I now can't tell whether the system is actually powered up. I need to close the windows, strain my ears, and still come closer than about 1 m to register that it is alive. That's just as I hoped it would be. Right now, of course, the system is just idling; there's no load yet.

Before putting a load on the machine, I need to install an operating system. Needless to say, that will be Archlinux. There have been a lot of new developments since I last did so, namely, GPT instead of MBR (and thus gdisk instead of fdisk), btrfs instead of ext4 (at least for the system partition), syslinux instead of grub, systemd instead of sysvinit, and, of course, the abandonment of the Arch Installation Framework in favor of the Arch Install Scripts.
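For the curious, once the disk has been partitioned with gdisk, the core of the new scripted procedure boils down to a handful of commands (a rough sketch with an exemplary /dev/sda2 as root partition; the wiki's Installation Guide remains the reference):

mkfs.btrfs /dev/sda2
mount /dev/sda2 /mnt
pacstrap /mnt base
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt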

I just follow the instructions and am rewarded within 30 min with a running system, without encountering a single problem. Note that I had never used the Arch Install Scripts nor any of the other new paradigms mentioned above. So much for all the whining in the Arch forums. 😉

partitioning and co

After the setup, it is the very high speed of the SSD (see the numbers for /dev/sda above) which dominates the first impression. This high speed results in a boot time of 5 s measured from the syslinux menu to the KDE login, and another 7 s from this login to a fully loaded KDE desktop. That's really very fast, but I actually couldn't care less: I reboot my system perhaps once a month, if at all.
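If you don't trust a stopwatch, systemd ships its own means of breaking the boot time down (just a cross-check, not how the numbers above were obtained):

systemd-analyze
systemd-analyze blame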

What's important in the actual everyday operation of the system is not the high transfer rate of the SSD, but rather its very low access time. Thanks to the low latency and the potentially very high I/O rate, the system actually feels even faster than the above numbers suggest. The result is an overall 'snappiness' of the system which cannot be achieved by merely using faster processors or graphics cards.

But these latter factors contribute to the actual, objectively measurable speed. For example, (multicore) transcoding is 2 times faster than on my wife's i750, and thus 6 times faster than on my old E6600. The (singlecore) Mathematica benchmark is 2 times faster than for an i7 950, and 3 times faster than for an E6600. During these benchmarks, the CPU fans spin up from about 650 to 900 rpm and keep the CPU below 50°C. Even at full load, the system remains essentially inaudible.

And what about games? Well, I've installed the usual suspects (Nexuiz, Xonotic) and they run at the highest possible settings with the display maxed out at all times. I hope nobody tells my wife that the 650Ti, despite its modest appearance and miniature size, is actually faster than her GTX260. 😉

In any case, I've discovered yamagi-quake and am just fighting my way through ancient monsters made more presentable by high-resolution texture packs but nevertheless remaining what they are: clumsy creatures of the 90s without even a shred of intelligence. I adore them. 😉

And for completeness, my desktop with the inevitable conky. The CPU speed is an average over the 8 available cores (cpu0 in conky lingo).

screenie

Custom made

It was high time to get a replacement for my 6-year-old desktop, which recently started to show clear signs of its age. I thus visited the configurator of Alternate, my trusted online computer store for more than a decade. I intended to combine plenty of CPU power, a GPU capable of running the occasional 3D shooter at full HD resolution and maximum details with 40+ fps, and a top-notch SSD capable of holding the base system as well as my home. All in one case and all cooled to arctic temperatures as silently as possible, of course.

Here you see the components I've selected:

What will surprise most of you is the (for a desktop) rather exotic CPU. Why a Xeon, which is usually seen in servers, and not an i7? Well, the Xeon 1240v2 is identical to the i7 3770 except for the latter's integrated graphics and the resulting additional power consumption (77 vs. 69 W) and cost. And why no AMD, namely an FX8350 with 8 cores and 4 GHz? Simply because the Xeon runs circles around the FX, and needs only half the power while doing so. Even at full load, with all cores running, the FX can't keep up with the Xeon.

The GPU is only entry-level in terms of performance, but that's entirely satisfactory for me. Its very low power consumption and its reportedly near-silent operation even under heavy load were the decisive factors which led me to select this particular card from MSI.

Silence, or rather noise reduction, was also an important criterion for several other components. The case, in particular, is claimed to be designed for cooling even the most powerful systems while keeping noise down to a minimum. The power supply should be quiet already according to its name (note the meager 400 W, which are more than enough for this system). Finally, the CPU cooler is reported to be capable of silent operation while at the same time being perhaps one of the most powerful cooling solutions available right now. This potential is important for me, as I may want to load my system for hours even on hot summer days.

Finally, I've decided to really put both the system and my home on one SSD, as announced previously. Unlike SSDs with the common Sandforce controller, the Plextor M5 Pro models do not rely on data compression to achieve their phenomenal transfer rates. The Western Digital HD, by the way, serves as a backup disk only. The Red series is explicitly marketed for high reliability even in 24/7 operation, making it particularly suitable for backup purposes. Furthermore, the 1 TB model I've purchased is a single-platter design, and I expect the disk to be virtually silent.

All of the above were my expectations when selecting the components. Alternate actually built the machine exactly to these specifications and shipped it within a couple of days. My next post will report on my actual experience with this system and whether or not it lives up to my expectations.

Wink wink, nudge nudge

[ 2542.226389] NVRM: The NVIDIA GeForce 7300 GT GPU installed in this system is supported
through the NVIDIA 304.xx Legacy drivers. Please visit http://www.nvidia.com/object/unix.html
for more information.  The 310.19 NVIDIA driver will ignore this GPU.
[ 2542.226443] NVRM: No NVIDIA graphics adapter found!

What does that try to tell me? Well, it's really rather obvious (say no more, say no more):

This system belongs in a museum. Yeah, I know. 😏

Turbo

My Mini is now in its fourth year, but so far, I don't see any reason for its retirement. The combination of a small footprint (9") and a system free of any moving parts (aka noise) has remained unique. Contemporary netbooks are invariably equipped with magnetic hard drives and fans, and tend to be a good deal larger and heavier without offering any significant advances in terms of processing power.

There's a catch, of course: the Mini's SSD is only 8 GB in size, 5 of which I'd devoted to the operating system. Originally, the Mini came with a customized version of Ubuntu 8.04, and I've updated it step by step from Ubuntu 8.10 to the current 12.04. The updates became increasingly voluminous, and eventually the system partition was filled to 90% despite the removal of the entire TeX documentation and LibreOffice.

And what's more: I've really grown tired of Ubuntu. I never understood the hype, but I thought that using Ubuntu on at least one machine would be useful for supporting other Linux users (95% of whom seem to use Ubuntu). Man, was I right: Ubuntu developed into the buggiest distribution I've ever used. I still continued to use it because I had made myself quite comfortable with it, but also out of sheer laziness. Still, I've never trusted the good intentions of Mr. Shuttleworth, and after the latest developments it became painfully clear even to many casual users what Ubuntu really aims at. So, the support is over, and you are on your own.

It is time to act! I've decided to do the following:

  1. Buy a larger SSD.
  2. Throw Ubuntu to the dogs.
  3. Install a lightweight, community-driven distribution.

ad 1.: Bought a RunCore Pro IV SSD with 32 GB.

ad 2.: Well, I just opened the Mini, as you see below:

old ssd

The SSD is right below the 2 EUR coin. I unscrewed it (that requires a Phillips #0 screwdriver and quite some determination) and put the new one inside:

new ssd

I'll keep the old SSD in a box until a museum asks for it. 😉

What you can also see in the above photograph are the latches for the battery, which can be replaced within seconds (which I have just done). The Mini may not look as sexy as certain Air- and Zenbooks, but it is infinitely more practical and durable.

ad 3.: I've opted for Crunchbang (#!) "Waldorf", a lightweight distribution based on Debian Wheezy:

new ssd

Installation was a breeze (including the encryption of /home) and took 15 min, and all hardware (particularly the WiFi card) was detected and configured correctly at install time. Everything worked right out of the box (even my LUKS-encrypted SD card containing a btrfs filesystem). And in contrast to Archbang, to which I'm much attracted, I'll never need to compile important packages, which would be a nuisance on the Mini.

#! uses Openbox as the default window manager, tint2 as panel (visible on the top of the screenshot above), and dmenu as application launcher. A good choice, since all are slim and functional.

For the Mini, it's essential that I don't need a mouse to use the system. #! is perfect in this regard:

Ctrl-Alt-Left/Right:            switch desktop

Super+T:                Terminal        (Gnome Terminator)
Super+E:                Editor          (Geany)
Super+F:                File Manager        (Thunar)
Super+M:                Media Player        (Gnome Mplayer)
Super+W:                Browser         (Iceweasel)
etc.

Alt-F3:                 Run command     (Dmenu)

In Terminator, Alt-Left/Right switches between different windows. In Geany, F9 compiles a LaTeX document and F5 will display the pdf. In Iceweasel, Ctrl-Pageup/down will switch tabs, and all the rest is handled by Pentadactyl.

Apropos LaTeX: the persistence of TL 2009 was the main reason for me to shy away from Debian-related distros. Fortunately, TeX Live 2012 has finally arrived in the Debian (testing) repositories!

Hardware-wise, an "hdparm -t /dev/sda" returns 75 MB/s, more than twice that of the original disk and on par with the magnetic hard drives in my notebook and desktop. And indeed, the Mini now seems to fly: a cold boot brings you in 20 s to the login screen, and then in 5 s to the desktop seen in the screenshot above. It's actually real fun to use the little guy, and I currently prefer it over my two bigger alternatives. 😉