Old habits

Or: Recombination dynamics in a coupled two-level system with strong nonradiative contribution (an ipython notebook)

One of my students investigates the transient behavior of the photoluminescence emitted by (In,Ga)N quantum heterostructures after excitation by a short laser pulse. The characteristic feature of the transients he observes for these structures is a power-law decay of the photoluminescence intensity with time at low temperatures (10 K), which changes into an exponential decay at higher temperatures (150 K).

His results reminded me of ones I acquired myself ages ago, during my own time as a PhD student. I didn't have a sensible interpretation then, but I do have one now. Hence, to the surprise of my student, I nonchalantly wrote down the following two coupled differential equations as if they had just occurred to me:

${\dot n_b} = -n_b/\tau_{rel} - n_b/\tau_{nr} + n_w \exp(-\frac{E_b}{k_B T})/\tau_e$

${\dot n_w} = n_b/\tau_{rel} - t^{b-1} n_w/\tau_{w} - n_w \exp(-\frac{E_b}{k_B T})/\tau_e$

with the second term in the second equation ($t^{b-1} n_w/\tau_{w}$) being the experimental observable. The form of this term gives rise to what is known as a stretched exponential, which for $b \rightarrow 0$ approaches a power law at long times.
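To see why, neglect the coupling to the barrier for a moment. The well population then obeys $\dot n_w = -t^{b-1} n_w/\tau_w$, which is solved by

$n_w(t) = n_w(0) \exp\left(-\frac{t^b}{b\,\tau_w}\right)$,

i.e., a stretched exponential. For small $b$, we can expand $t^b = e^{b \ln t} \approx 1 + b \ln t$, and thus, up to a normalization constant,

$n_w(t) \propto \exp\left(-\frac{\ln t}{\tau_w}\right) = t^{-1/\tau_w}$,

which is exactly the power law observed at low temperatures.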

Using Mathematica, it takes 7 lines of code to solve this system and to plot it for several temperatures $T$ (or, equivalently and as done here, for different energies $k_B T$):

from IPython.display import Image
Image(filename='/home/cobra/ownCloud/MyStuff/Documents/pdes-net.org/files/images/deqs.png')

[Image: the Mathematica code]

As I had hoped, this simple model reproduces the behavior observed in the experiment fairly well. My student was also pleased, but only with the result, not with the method: he's familiar with Matlab, but not with Mathematica. Well, I suspect that he's also not too familiar with Matlab, since otherwise he could have easily solved the equations himself.

In any case, his admission reminded me that I actually wanted to migrate my computational activities to free software whenever possible. It's not easy to get rid of old habits, and since I've been using Mathematica for 23 years, the code above just came naturally, while the one below still required an explicit intellectual effort. But that's essentially the same lame excuse I'm tired of hearing from users of, for example, Microsoft Office when asked to prepare a document with LibreOffice.

So let's get moving. Here's the above differential equation system solved and plotted using numpy, scipy, and matplotlib in an ipython notebook. Note how the notebook integrates the actual code with comments, links, pictures and equations. Editing this notebook is a real treat thanks to the use of markdown and LaTeX syntax.

#Initialize
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint 

mpl.rcParams['figure.figsize'] = (6, 4)
mpl.rcParams['font.size'] = 16
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'Serif'
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['xtick.major.pad'] = 7
# Parameters
taurel = 0.1       # capture time
taue = 0.1         # emission time
taunr = 0.01       # nonradiative lifetime
tauw = 1.65        # radiative lifetime
eb = 20            # activation energy (in meV)
b = 0              # stretching parameter (approaches power law for b -> 0)
# solve the system dn/dt = f(n,t) and plot the solution
fig = plt.figure()
#for kt in np.linspace(1,13,7):          # approximate temperatures
for T in [10,30,50,70,100,150]:          # exact temperatures

    kt = 0.086173324*T                   # in meV

    def f(n, t):
        nbt = n[0]
        nwt = n[1]
        # the model equations
        f0 = - nbt/taurel - nbt/taunr + nwt*np.exp(-eb/kt)/taue
        f1 = nbt/taurel - nwt*np.exp(-eb/kt)/taue - t**(b-1)*nwt/tauw
        return [f0, f1]

    # initial conditions
    nb0 = 1.                            # initial population in barrier
    nw0 = 0                             # initial population in well
    n0 = [nb0, nw0]                     # initial condition vector
    t  = np.logspace(-2,2,1000)         # logarithmic time grid

    # solve the DES
    soln = odeint(f, n0, t)
    nb = soln[:, 0]
    nw = soln[:, 1]

    # plot results
    plt.loglog(t, t**(b-1)*nw/tauw/max(t**(b-1)*nw/tauw), label=r'%.0f K' %T)
    plt.xlabel('Time (ns)')
    plt.ylabel('Intensity (arb. units)')
    plt.axis([7e-3,40,1e-5,2]);
    plt.legend(loc='lower left', frameon=False, prop={'size':15}, labelspacing=0.15)

[Image: the computed transients, as plotted in the IPython notebook]

fig.savefig('transients.pdf')

The above command saves this plot as a publication-ready figure in pdf format. There are many other available formats, including eps (for the traditional LaTeX/dvipdf toolchain), svg (for further editing with inkscape, or publishing on the web) and png (for insertion in a Powerpoint/Impress presentation).
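Since matplotlib infers the output format from the file name's extension, each of these is again a one-liner (the file names are, of course, arbitrary):

fig.savefig('transients.eps')             # traditional LaTeX/dvipdf toolchain
fig.savefig('transients.svg')             # further editing with inkscape, or the web
fig.savefig('transients.png', dpi=300)    # presentations; dpi sets the raster resolution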

Benchmarks

I haven't posted any browser benchmarks in more than four years. For a good reason: if all contenders perform equally well, there's no need to benchmark them.

Of course, the recent excitement about Apple's Safari outpacing Chrome and Firefox still came to my attention. 😉 As it turned out, however, Safari managed to do that only in benchmarks developed by Apple, but not in those provided by Apple's competitors Google and Mozilla. This result seems to confirm qualified opinions according to which the available browser benchmarks should be disregarded altogether.

Well, let's see what we have:

Apple: Jetstream, Speedometer
Google: Octane
Mozilla: Kraken

Now let's see what we've got:

System / Browser                           Jetstream     Speedometer   Octane   Kraken (ms)

5: Office (i7 4790, Archlinux)
   Chromium 44.0.2403.155                  225.72±7.14   95.7±2.0      41571    749.8±0.6%
   Firefox 40.0.2                          215.21±8.08   61.0±2.1      35537    839.5±2.4%

4: Desktop (Xeon E3 1240 v2, Archlinux)
   Chromium 44.0.2403.155                  180.72±7.66   53.7±0.35     32280    919.6±1.7%
   Firefox 40.0.2                          164.03±1.44   n/a           29132    1030.3±4.1%

3: Notebook (Pentium P6200, Archlinux)
   Chromium 44.0.2403.155                  82.44±1.56    22.8±0.34     13167    2138.6±1.6%
   Firefox 40.0.2                          70.53±5.91    n/a           11944    2300.4±2.8%

2: Netbook (Atom N270, Debian Stretch)
   Chromium 44.0.2403.107                  17.28±1.37    5.15±0.07     2924     12985.7±3.7%

1: Tablet (ARM Cortex A9, Android 5.1.1)
   Chrome 44.0.2403.133                    15.46±0.92    7.64±0.19     2776     16248.3±5.1%

Chromium consistently performs better than Firefox across all benchmarks, even in Mozilla's own benchmark Kraken. The difference, however, is insignificant.

But that's not what I was actually interested in. What I really wanted to see was whether one can abuse these browser benchmarks for a kind of quick and dirty system benchmarking without the need to install anything. And as you see, all benchmarks scale fairly well:

[Figure: scaling of the four browser benchmarks across the five systems (browser_benchmarks.svg)]
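For the record, here is roughly how such a scaling comparison can be computed from the table above (a quick numpy/matplotlib sketch: it uses the Chromium/Chrome scores, normalizes each benchmark to the office machine, and takes the reciprocal of Kraken, which measures runtime rather than throughput):

import numpy as np
import matplotlib.pyplot as plt

systems     = ['1: Tablet', '2: Netbook', '3: Notebook', '4: Desktop', '5: Office']
jetstream   = np.array([15.46, 17.28, 82.44, 180.72, 225.72])
speedometer = np.array([7.64, 5.15, 22.8, 53.7, 95.7])
octane      = np.array([2776, 2924, 13167, 32280, 41571])
kraken      = 1/np.array([16248.3, 12985.7, 2138.6, 919.6, 749.8])   # runtime -> rate

for label, score in [('Jetstream', jetstream), ('Speedometer', speedometer),
                     ('Octane', octane), ('Kraken', kraken)]:
    plt.semilogy(systems, score/score[-1], 'o-', label=label)        # relative to the office system
plt.ylabel('Relative performance')
plt.legend(loc='upper left')
plt.show()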

If we restrict ourselves to x86 for the moment, Jetstream (red) and Octane (blue) scale essentially identically across the entire range of systems. As a matter of fact, they even seem way too close if we suppose that Jetstream and Octane are independent benchmarks. Kraken (yellow) scales very similarly, with the sole exception of the mini, for which it indicates only half the performance that all other benchmarks do. Finally, Speedometer (green) really seems to like the i7 4790. Perhaps it's making use of AVX2?

For the ARM architecture of the tablet, Jetstream and Octane are again almost identical, but Kraken suffers and Speedometer gains. No big deal, though: the notebook is still miles ahead. What's also interesting: a tablet from 2012 does not outperform a netbook from 2008, contrary to what the media want us to believe. But then, who believes them anymore anyway?

Compared to specialized benchmarks designed to test the number-crunching performance of systems, the current ones reflect the average system performance we can expect in everyday situations. Particularly, of course, in browsing. 😉

Pip

I usually manage my system-wide Python installation with the system's package manager, and avoid using Python's own package manager pip. Not so in a virtual environment. As described in a previous post, pip, together with pip-tools, offers a very convenient way to get and keep all tools in your virtual environment up to date.

Imagine my surprise when, a couple of months ago, 'pip-review --interactive' did not update my tools one by one as it used to, but only resulted in a "fish: Unknown command 'pip-review'". As it turned out, the developer of pip-tools (which I had dutifully kept up to date) had decided to dump pip-review in favor of two new commands, pip-compile and pip-sync.

I'm sure he had good reasons for that, and it's really no big deal for end-users like me. After a 'pip list' I knew what I needed, created a corresponding requirements.in as Vincent described in his post, and ran

pip-compile requirements.in && pip-sync requirements.txt

That's all. It's still as useful as ever.
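For reference, the requirements.in itself contains nothing but the top-level packages one actually cares about (the entries below are just an example); pip-compile pins the complete dependency tree in requirements.txt, and pip-sync makes the virtual environment match that file exactly:

ipython
numpy
scipy
matplotlib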

Strategic location

Last Friday, a record was broken: 38.9°C in Berlin. We've got five (5) fans working day and night, but it was a lost battle from the very beginning. At least they helped us to dissipate the 8 liters of mineral water we consumed over the day...

I always wonder how the cats manage. Without being able to sweat, all they can do is breathe rapidly and search for a cooler place.

Interestingly, the individual wellness zone differs greatly from cat to cat. Indy was hiding in such narrow, secluded spaces that I could not possibly get a decent photograph. Luca, on the other hand, decided to place himself right at the entrance of our bathroom, below the Noren serving as visual separation during the hottest days of the year:

Put them on ice

Using bleeding-edge distributions has the charm of getting all the great new stuff before anyone else. Such as conky 1.10, which comes with an entirely new and shining configuration syntax. Yippee ki-yay etc. etc.

Naturally, I first got 1.10 on my three Arch-based systems. Upon startup, conky tried to convert the previous config on the fly, but failed to do so. A manual conversion via convert.lua also failed. Grmpf.

Well, I thought, the new Lua syntax doesn't seem to be that different from the old one. I thus edited my config file and changed all entries according to the new rules. Took me about 10 min. Still plenty of time till the beginning of my next meeting! Let's start conky with its new config and iron out the few remaining wrinkles in the remaining 5 min.

Error. Error. Errrror!

Come on, girls. Why don't you start a new branch of Conky (2?) and keep the old one (1.9) as stable? Why force the new Lua syntax down the throats of everybody, including people like me who don't have the time to pamper immature and bawling software like yours?

After the meeting, I downgraded conky and put it on hold. In Arch, you can downgrade by issuing

pacman -U conky-1.9.0-7

in the directory holding your old package, i.e., normally /var/cache/pacman/pkg (I've moved this cache to my HDD and can thus afford to keep several versions of each package). You put the package on hold by simply adding it to the IgnorePkg entry

IgnorePkg   = conky

in /etc/pacman.conf.

A few days later, the same happened in Debian Stretch. Muuuh! I checked and saw that the Jessie repository still lists conky 1.9. Excellent! Let's downgrade by first adding the Jessie repository to /etc/apt/sources.list, and then running

wajig update
wajig install conky=1.9.0-6
wajig install conky-all=1.9.0-6

To put them on hold, use

wajig hold conky
wajig hold conky-all

Don't sabotage my conkys. I like them as they are:

sudo for polkit

Call me old-fashioned, but I usually configure my systems to have a root account, and I do everything that requires root privileges as root. With one exception: I like to be able to update the system without having to enter a password. All systems I administer are running rolling-release distributions (Arch or Debian Sid/Testing), for which updates are frequent.

For updating from the command line, sudo is the method of choice. Note that neither Arch nor Debian has sudo installed by default. Also note that manual changes are now expected to be placed in a separate file in /etc/sudoers.d instead of directly in /etc/sudoers. To edit this file, use 'visudo -f filename' (don't use an extension, since filenames containing a period "." will be ignored). Everything else is self-explanatory.
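For example, to allow members of the group wheel to run the update command without a password, a drop-in file created with 'visudo -f /etc/sudoers.d/update' could contain a single line (the group, file name, and command here are just an illustration, not my actual setup):

%wheel ALL=(ALL) NOPASSWD: /usr/bin/pacman -Syu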

But what if you're planning to use a front-end that does not respect these settings because it relies on a different framework for controlling user privileges?

I've encountered this problem with pamac, which listens to polkit. In this case, and as explained in detail in the Arch Wiki, one has to create a custom rule in /etc/polkit-1/rules.d/. To allow, for example, all users in the group wheel to update without having to enter a password, I've put the following there as 49-passwordless-pamac.rules:

/* Allow members of the wheel group to update with pamac 
 * without password authentication, similar to "sudo NOPASSWD:"
 */

polkit.addRule(function(action, subject) {
    if ((action.id == "org.manjaro.pamac.commit") &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});

Enough is not enough

People like to tell me about their digital life. Recently, I hear a lot about laptops replacing desktops. To my surprise, many are willing to spend quite a bit for this transition: most more than €1000, and some €2000 and above. All of the former assure me that the performance of their desktop replacement is "more than enough". Several of the latter actually believe that a system's performance is directly related to its price tag. One member of this group owns a Macbook 12 (sorry for the cliché, but what can I do) and tried to convince me in a particularly insistent and tenacious way that his gadget would outperform even the most powerful desktops available.

As a matter of fact, it is quite far from this feat (look here). Like all ultrabooks equipped with a Core M-5YXX processor, it performs slightly better (20–40%) than my €299 Fujitsu Lifebook, which, in turn, is miles away from a decent desktop:

Right, the last one is not your usual desktop, but a compute server with a load of 24 at the moment of the screenshot. The remaining 8 physical cores managed to outperform my desktop, if only by a slight margin. But I bet that my new office desktop (an i7 4790) will be able to complete the run in under 100 s ... let's see tomorrow. 😉

Update: even below 80 s:

In any case, my Lifebook has been thoroughly smashed and humiliated: instead of the 2 minutes required by the Xeons, it needed a staggering 20 minutes for the same result. The Lifebook is great for writing this entry, but for serious tasks I'd rather turn to a serious computer. And the same applies to your Core M-driven ultrabooks.

At this point, the more compassionate of the laptop owners are sure to secretly pity me. Just imagine: to enjoy the performance depicted above, I'd have to sit in a kind of server room all day ... how gruesome!

Well, as much as I like to sit in my study, I prefer to run my computations from where I want. How do I do that?

I have WiFi. 😄

Seriously, for computations I use an ipython server running on my desktop. The posts of Fillipo and Nikolaus have helped me to find the best (most robust and convenient) way to connect to this server.

For what follows, both the server and the client need to have mosh installed. For the server, tmux (which I also like to use for different reasons) is required in addition.

From the client, I connect to the server (blackvelvet) by issuing

mosh blackvelvet -- tmux new-session -s ipython

In this session, I then start an ipython notebook server on blackvelvet:

ipython notebook --no-browser --port=8889

and subsequently detach the session with Ctrl-a d.

I can attach again anytime by issuing

mosh blackvelvet -- tmux a

on the client. Isn't that neat?

To connect to the notebook server itself, I start an ssh tunnel on the client with

ssh -N -L localhost:8888:localhost:8889 cobra@blackvelvet

which forwards local port 8888 to port 8889 on blackvelvet, and open the notebook at http://localhost:8888/:

Thanks to MathJax, the font rendering is way better than that of Mathematica.

The ssh tunnel can be stopped with a single Ctrl-C on the client, the ipython server needs a double one on the server (after attaching the session again).

Sculpting

Suppose we have a crystal in the form of a rectangular block, given by a list of the absolute atomic coordinates with one atom per line

Ga   -9.278018850000   -9.642000000000   -7.870137870000
N    -7.422415050000   -9.642000000000   -8.518637230000
.
.
.

looking like this:

What we really want, though, is a hexagonal column which tapers down from the bottom to the top — just like a column of Doric order. How can we sculpt this column out of the block we have? Where do we get the digital chisel we need? A colleague of mine solved this quest in the most elegant fashion: with an awk one-liner.

awk '{x=sqrt($2^2);y=sqrt($3^2);d=x/2+sqrt(3)/2*y;z=$4;cutoff='$radius1'*('$zmax'-z)/'$zrange'+'$radius2'*(z-('$zmin'))/'$zrange'};d<=cutoff && x<cutoff' block.dat > column.dat

Voilà:
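In case the one-liner reads like line noise to you, here is the same chisel spelled out in Python (a sketch: radius1, radius2, zmin, and zmax correspond to the shell variables in the awk command above and carry made-up example values):

from math import sqrt

radius1, radius2 = 30.0, 20.0      # bottom and top cutoff radii (example values)
zmin, zmax = -8.0, 8.0             # vertical extent of the block (example values)
zrange = zmax - zmin

with open('block.dat') as block, open('column.dat', 'w') as column:
    for line in block:
        element, X, Y, Z = line.split()
        x, y, z = abs(float(X)), abs(float(Y)), float(Z)
        d = x/2 + sqrt(3)/2*y                   # hexagonal distance from the column axis
        # the cutoff tapers linearly from radius1 at the bottom to radius2 at the top
        cutoff = (radius1*(zmax - z) + radius2*(z - zmin))/zrange
        if d <= cutoff and x < cutoff:          # keep atoms inside the hexagonal cross section
            column.write(line)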

Season color

An early-morning shot of the Norway maple just outside my study. With a freshly pressed orange juice followed by good coffee, and my cats watching the birds in the tree, the world doesn't seem such a hostile place after all.

Printing

A century ago, printing under Linux had only one name: Postscript. Preferably spoken natively by the printer, which, in most cases, was a laser printer from Hewlett Packard with a price tag well above $2000. Color laser printers became available in the mid 90s, but it was not until 2005 that I saw them become the standard printing solution in offices.

At home, laser printers are still comparatively rare. Inkjets dominate the scene for several reasons: they are (at first glance) very affordable, they can produce printouts of photographs with astonishingly high fidelity, and they are available as multifunctional all-in-one solutions combining printer, scanner, fax, and copier.

I followed this trend without reflecting on my actual needs. Since I thought (wrongly) that I didn't need any printing at home anyway, I purchased a GDI printer and connected it via USB to the Windows-powered gaming rig of my wife. The first one, a simple Canon inkjet, ceased to function after an extended period of inactivity because of dried ink; it was followed by an Epson all-in-one, which did its job until its ink, too, had dried up. We simply don't print that much.

I was tired of these toys anyway, since I had begun to see the convenience of printing from all my devices anywhere in the LAN. A week ago, I thus acquired a Hewlett Packard LaserJet Pro 200. It speaks Postscript, has an ethernet connection and 128 MB of memory, resolves 600 dpi, and turns out 14 pages per minute. For €140.

I knew, of course, that these low-end business-class color lasers have become quite affordable over the past few years. But to see one in action, and to confirm with your own eyes that the print quality is quite on par with that of the enterprise-class model from 2010 in the office, well, that's different. At present, I cannot deny an entirely unjustified feeling of grandness when issuing Ctrl-P. 😉