Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including gnome-settings-daemon and gnome-screensaver.

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as it is started, I had to run the following to disable the offending plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false


In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the following packages:

  • pulseaudio-utils (for pactl)
  • xbacklight
  • gnome-power-manager (for gnome-power-statistics)

Keyboard layout switcher

Another thing that used to work in GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
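The script itself isn't reproduced here, but a minimal sketch of the idea (assuming a toggle between the "us" and "fr" layouts; substitute your own pair) could look like this:

```shell
#!/bin/sh
# Hypothetical sketch of toggle-xkbmap: flip between two keyboard layouts.
# The layout names below are assumptions, not the actual script's contents.

next_layout() {
    # Given the current layout, print the other one.
    if [ "$1" = "us" ]; then
        echo "fr"
    else
        echo "us"
    fi
}

# Only touch the X keymap when setxkbmap is available.
if command -v setxkbmap >/dev/null 2>&1; then
    current=$(setxkbmap -query | awk '/^layout/ {print $2}')
    setxkbmap "$(next_layout "$current")"
fi
```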

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:


Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
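The script itself isn't shown here, but a rough sketch of the sequence it describes (the command names xsel, xautolock and pm-suspend come from the text; the actual script may differ) would be:

```shell
#!/bin/sh
# Hypothetical sketch of the s2ram wrapper described above.

suspend_to_ram() {
    xsel --clipboard --clear    # clear the clipboard selection
    xsel --primary --clear      # clear the primary selection
    sync                        # flush pending writes to disk
    xautolock -locknow          # lock the screen
    sudo /usr/sbin/pm-suspend   # go to sleep
}

# Only run for real when the X tools are present.
if command -v xsel >/dev/null 2>&1; then
    suspend_to_ram
fi
```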

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:

bindsym XF86Display exec /home/francois/bin/external-monitor
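That script isn't included here, but a rough sketch of the idea (output names taken from the xrandr call above; the actual script may do more) could be:

```shell
#!/bin/sh
# Hypothetical sketch of the external-monitor script: use the dock's
# DP2 output when it is connected, otherwise the laptop panel (eDP1).

pick_primary() {
    # $1 is the list of connected output names, one per line.
    if echo "$1" | grep -q '^DP2$'; then
        echo "DP2"
    else
        echo "eDP1"
    fi
}

if command -v xrandr >/dev/null 2>&1; then
    connected=$(xrandr | awk '/ connected/ {print $1}')
    if [ "$(pick_primary "$connected")" = "DP2" ]; then
        xrandr --output DP2 --auto --primary --output eDP1 --off
    else
        xrandr --output eDP1 --auto --primary --output DP2 --off
    fi
fi
```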

CrashPlan and non-executable /tmp directories

If your computer's /tmp is non-executable, you will run into problems with CrashPlan.

For example, the temp directory on my laptop is mounted using this line in /etc/fstab:

tmpfs  /tmp  tmpfs  size=1024M,noexec,nosuid,nodev  0  0

This configuration leads to two serious problems with CrashPlan.

CrashPlan client not starting up

The first one is that while the daemon is running, the client doesn't start up and doesn't print anything out to the console.

You have to look in /usr/local/crashplan/log/ui_error.log to find the following error message:

Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
  Can't load library: /tmp/.cpswt/
  Can't load library: /tmp/.cpswt/
  no swt-gtk-4234 in java.library.path
  no swt-gtk in java.library.path
  /tmp/.cpswt/ /tmp/.cpswt/ failed to map segment from shared object: Operation not permitted

  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
  at com.backup42.desktop.CPDesktop.<init>(
  at com.backup42.desktop.CPDesktop.main(

To fix this, you must tell the client to use a different directory, one that is executable and writable by users who need to use the GUI, by adding something like this to the GUI_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:
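Something along these lines should work; -Djava.io.tmpdir is the standard Java property for relocating the temporary directory, and the path here is only an example:

```
-Djava.io.tmpdir=/home/francois/.crashplan-tmp
```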

Backup waiting forever

The second problem is that once you're able to start the client, backups are stuck at "waiting for backup" and you can see the following in /usr/local/crashplan/log/engine_error.log:

Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
  at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(
  at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(
  at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(
  at com.code42.backup.path.BackupSetsManager.access$1600(
  at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(

This time, you must tell the server to use a different directory, one that is executable and writable by the CrashPlan engine user (root on my machine), by adding something like this to the SRV_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:
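As with the client, the option to add is the standard java.io.tmpdir property pointing at a directory of your choosing (example path only):

```
-Djava.io.tmpdir=/var/crashplan-tmp
```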

What's in a debian/ directory?

If you're looking to get started at packaging free software for Debian, you should start with the excellent New Maintainers' Guide or the Introduction to Debian Packaging on the Debian wiki.

Once you know the basics, or if you prefer to learn by example, you may be interested in the full walkthrough which follows. We will look at the contents of three simple packages.


This package is a node.js library for the Libravatar service.

Version 2.0.0-3 of that package contains the following files in its debian/ directory:

  • changelog
  • compat
  • control
  • copyright
  • docs
  • node-libravatar.install
  • rules
  • source/format
  • watch


Source: node-libravatar
Priority: extra
Maintainer: Francois Marier <>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4
Section: web
Vcs-Git: git://

Package: node-libravatar
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, nodejs
Description: libravatar library for NodeJS
 This library allows web application authors to make use of the free Libravatar
 service ( This service hosts avatar images for
 users and allows other sites to look them up using email addresses.
 node-libravatar includes full support for federated avatar servers.

This is probably the most important file since it contains the bulk of the metadata about this package.

Maintainer is a required field listing the maintainer of that package, which can be a person or a team. It only contains a single value though; any co-maintainers will be listed under the optional Uploaders field.
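For instance, a team-maintained package might carry something like this (names and addresses are illustrative):

```
Maintainer: Debian Example Team <team@example.org>
Uploaders: Jane Doe <jane@example.org>, John Doe <john@example.org>
```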

Build-Depends lists the packages which are needed to build the package (e.g. a compiler), as opposed to those which are needed to install the binary package (e.g. a library it uses).

Standards-Version refers to the version of the Debian Policy that this package complies with.

The Homepage field refers to the upstream homepage, whereas the Vcs-* fields point to the repository where the packaging is stored. If you take a look at the node-libravatar packaging repository you will see that it contains three branches:

  • upstream is the source as it was in the tarball downloaded from upstream.
  • master is the upstream branch along with all of the Debian customizations.
  • pristine-tar is unrelated to the other two branches and is used by the pristine-tar tool to reconstitute the original upstream tarball as needed.

After these fields comes a new section which starts with a Package field. This is the definition of a binary package, not to be confused with the Source field at the top of this file, which refers to the name of the source package. In this particular example, they are both the same and there is only one of each, however this is not always the case, as we'll see later.

Inside that binary package definition, lives the Architecture field which is normally one of these two:

  • all for a binary package that will work on all architectures but only needs to be built once
  • any for a binary package that will work everywhere but that will need to be built separately for each architecture

Finally, the last field worth pointing out is the Depends field which lists all of the runtime dependencies that the binary package has. This is what will be pulled in by apt-get when you apt-get install node-libravatar. The two variables will be substituted later by debhelper.


node-libravatar (2.0.0-3) unstable; urgency=low

  * debian/watch: poll github directly
  * Bump Standards-Version up to 3.9.4

 -- Francois Marier <>  Mon, 20 May 2013 12:07:49 +1200

node-libravatar (2.0.0-2) unstable; urgency=low

  * More precise license tag and upstream contact in debian/copyright

 -- Francois Marier <>  Tue, 29 May 2012 22:51:03 +1200

node-libravatar (2.0.0-1) unstable; urgency=low

  * New upstream release
    - new non-backward-compatible API

 -- Francois Marier <>  Mon, 07 May 2012 14:54:19 +1200

node-libravatar (1.1.1-1) unstable; urgency=low

  * Initial release (Closes: #661771)

 -- Francois Marier <>  Fri, 02 Mar 2012 15:29:57 +1300

This may seem at first like a mundane file, but it is very important since it is the canonical source of the package version (2.0.0-3 in this case). This is the only place where you need to bump the package version when uploading a new package to the Debian archive.

The first line also includes the distribution where the package will be uploaded. It is usually one of these values:

  • unstable for the vast majority of uploads
  • stable for uploads that have been approved by the release maintainers and fix serious bugs in the stable version of Debian
  • stable-security for security fixes to the stable version of Debian that cannot wait until the next stable point release and have been approved by the security team

Packages uploaded to unstable will migrate automatically to testing provided that a few conditions are met (e.g. no release-critical bugs were introduced). The length of time before that migration is influenced by the urgency field (low, medium or high) in the changelog entry.

Another thing worth noting is that the first upload normally needs to close an ITP (Intent to Package) bug.


#!/usr/bin/make -f
# -*- makefile -*-

%:
	dh $@


As can be gathered from the first two lines of this file, this is a Makefile. This is what controls how the package is built.

There's not much to see and that's because most of its content is automatically added by debhelper. So let's look at it in action by building the package:

$ git buildpackage -us -uc

and then looking at parts of the build log (../

 fakeroot debian/rules clean
dh clean 

One of the first things we see is the debian/rules file being run with the clean target. To find out what that does, have a look at the dh_auto_clean manpage, which states that it will attempt to delete build residues and run something like make clean using the upstream Makefile.

 debian/rules build
dh build 

Next we see the build target being invoked and looking at dh_auto_configure we see that this will essentially run ./configure and its equivalents.

The dh_auto_build helper script then takes care of running make (or equivalent) on the upstream code.

This should be familiar to anybody who has ever built a piece of free software from scratch and has encountered the usual method for building from source:

./configure
make
make install

Finally, we get to actually build the .deb:

 fakeroot debian/rules binary
dh binary 
dpkg-deb: building package `node-libravatar' in `../node-libravatar_2.0.0-3_all.deb'.

Here we see a number of helpers, including dh_auto_install which takes care of running make install.

Going back to the debian/rules file, we notice that there is a manually defined target at the bottom of the file:

override_dh_auto_test:

which essentially disables dh_auto_test by replacing it with an empty set of commands.

The reason for this becomes clear when we take a look at the test target of the upstream Makefile and the dependencies it has: tap, a node.js library that is not yet available in Debian.

In other words, we can't run the test suite on the build machines so we need to disable it here.



This file simply specifies the version of debhelper that is required by the various helpers used in debian/rules. Version 9 is the latest at the moment.


Upstream-Name: node-libravatar
Upstream-Contact: Francois Marier <>

Files: *
Copyright: 2011 Francois Marier <>
License: Expat

Files: debian/*
Copyright: 2012 Francois Marier <>
License: Expat

License: Expat
 Permission is hereby granted, free of charge, to any person obtaining a copy of this
 software and associated documentation files (the "Software"), to deal in the Software
 without restriction, including without limitation the rights to use, copy, modify,
 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to the following
 conditions:
 .
 The above copyright notice and this permission notice shall be included in all copies
 or substantial portions of the Software.

This machine-readable file lists all of the different licenses encountered in this package.

It requires that the maintainer audit the upstream code for any copyright statements that might be present in addition to the license of the package as a whole.


This file contains a list of upstream files that will be copied into the /usr/share/doc/node-libravatar/ directory by dh_installdocs.


lib/*    usr/lib/nodejs/

The install file is used by dh_install to supplement the work done by dh_auto_install which, as we have seen earlier, essentially just runs make install on the upstream Makefile.

Looking at that upstream Makefile, it becomes clear that the files will need to be installed manually by the Debian package since that Makefile doesn't have an install target.


version=3 /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz

This is the file that allows Debian tools like the Package Tracking System to automatically detect that a new upstream version is available.

What it does is simply visit the upstream page which contains all of the release tarballs and look for links which have an href matching the above regular expression.

Running uscan --report --verbose will show us all of the tarballs that can be automatically discovered using this watch file:

-- Scanning for watchfiles in .
-- Found watchfile in ./debian
-- In debian/watch, processing watchfile line: /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
-- Found the following matching hrefs:
Newest version on remote site is 2.0.0, local version is 2.0.0
 => Package is up to date
-- Scan finished


This second package is the equivalent Python library for the Libravatar service.

Version 1.6-2 of that package contains similar files in its debian/ directory, but let's look at two in particular:

  • control
  • upstream/signing-key.asc


Source: pylibravatar
Section: python
Priority: optional
Maintainer: Francois Marier <>
Build-Depends: debhelper (>= 9), python-all, python3-all
Standards-Version: 3.9.5

Package: python-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-dns, python
Description: Libravatar module for Python 2
 Module to make use of the federated avatar hosting service
 from within Python applications.

Package: python3-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-dns, python3
Description: Libravatar module for Python 3
 Module to make use of the federated avatar hosting service
 from within Python applications.

Here is an example of a source package (pylibravatar) which builds two separate binary packages: python-libravatar and python3-libravatar.

This highlights the fact that a given upstream source can be split into several binary packages in the archive when it makes sense. In this case, there is no point in Python 2 applications pulling in the Python 3 files, so the two separate packages make sense.

Another common example is the use of a -doc package to separate the documentation from the rest of a package so that it doesn't need to be installed on production servers for example.


Version: GnuPG v1


This is simply the OpenPGP key that the upstream developer uses to sign release tarballs.

Since PGP signatures are available on the upstream download page, it's possible to instruct uscan to check signatures before downloading tarballs.

The way to do that is to use the pgpsigurlmangle option in debian/watch:
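A common form of that option, for upstreams that publish a detached .asc signature next to each tarball (an illustrative example, not necessarily this package's exact line), is:

```
opts=pgpsigurlmangle=s/$/.asc/
```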


which is simply a regular expression replacement string which takes the tarball URL and converts it to the URL of the matching PGP signature.


The last package we will look at is a file integrity checker. It essentially goes through all of the files in /usr/bin/ and /usr/lib/ and stores a hash of them in its database. When one of these files changes, you get an email.

In particular, we will look at the following files in the debian/ directory of version 2.7.59-18:

  • dirs
  • fcheck.cron.d
  • fcheck.postrm
  • fcheck.postinst
  • patches/
  • README.Debian
  • rules
  • source/format


This directory contains ten patches as well as a file called series which lists the patches that should be applied to the upstream source and in which order. Should you need to temporarily disable a patch, simply remove it from this file and it will no longer be applied at build time.
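The series file is just a plain list of patch filenames, one per line, in application order (illustrative contents; only the second name is taken from this package):

```
01_some_earlier_fix.patch
04_cfg_sha256.patch
```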

Let's have a look at patches/04_cfg_sha256.patch:

Description: Switch to sha256 hash algorithm
Forwarded: not needed
Author: Francois Marier <>
Last-Update: 2009-03-15

--- a/fcheck.cfg
+++ b/fcheck.cfg
@@ -149,8 +149,7 @@ TimeZone        = EST5EDT
 #$Signature      = /usr/bin/sum
 #$Signature      = /usr/bin/cksum
 #$Signature      = /usr/bin/md5sum
-$Signature      = /bin/cksum
+$Signature      = /usr/bin/sha256sum

 # Include an optional configuration file.

This is a very simple patch which changes the default configuration of fcheck to promote the use of a stronger hash function. At the top of the file is a bunch of metadata in the DEP-3 format.

Why does this package contain so many customizations to the upstream code when Debian's policy is to push fixes upstream and work towards reducing the delta between upstream and Debian's code? The answer can be found in debian/control:


This package no longer has an upstream maintainer and its original source is gone. In other words, the Debian package is where all of the new bug fixes get done.


3.0 (quilt)

This file contains what is called the source package format. What it basically says is that the patches found in debian/patches/ will be applied to the upstream source using the quilt tool at build time.


#!/bin/sh
# postrm script for fcheck
# see: dh_installdeb(1)

set -e

# summary of how this script can be called:
#        * <postrm> `remove'
#        * <postrm> `purge'
#        * <old-postrm> `upgrade' <new-version>
#        * <new-postrm> `failed-upgrade' <old-version>
#        * <new-postrm> `abort-install'
#        * <new-postrm> `abort-install' <old-version>
#        * <new-postrm> `abort-upgrade' <old-version>
#        * <disappearer's-postrm> `disappear' <overwriter>
#          <overwriter-version>
# for details, see or
# the debian-policy package

case "$1" in
    purge)
      if [ -e /var/lib/fcheck/fcheck.dbf ]; then
        echo "Purging old database file ..."
        rm -f /var/lib/fcheck/fcheck.dbf
      fi
      rm -rf /var/lib/fcheck
      rm -rf /var/log/fcheck
      rm -rf /etc/fcheck
      ;;

    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
      ;;

    *)
        echo "postrm called with unknown argument \`$1'" >&2
        exit 1
      ;;
esac

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#


exit 0

This script is one of the many possible maintainer scripts that a package can provide if needed.

This particular one, as the name suggests, will be run after the package is removed (apt-get remove fcheck) or purged (apt-get remove --purge fcheck). Looking at the case statement above, it doesn't do anything extra in the remove case, but it deletes a few files and directories when called with the purge argument.


This optional README file contains Debian-specific instructions that might be useful to users. It supplements the upstream README which is often more generic and cannot assume a particular system configuration.


#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

build: build-stamp

build-stamp:
	pod2man --section=8 $(CURDIR)/debian/fcheck.pod > $(CURDIR)/fcheck.8
	touch build-stamp

clean:
	rm -f build-stamp
	rm -f $(CURDIR)/fcheck.8

install: build
    cp $(CURDIR)/fcheck $(CURDIR)/debian/fcheck/usr/sbin/fcheck
    cp $(CURDIR)/fcheck.cfg $(CURDIR)/debian/fcheck/etc/fcheck/fcheck.cfg

# Build architecture-dependent files here.
binary-arch: build install

# Build architecture-independent files here.
binary-indep: build install
    dh_installman fcheck.8

binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install

This is an example of an old-style debian/rules file which you still encounter in packages that haven't yet been upgraded to debhelper compatibility level 9, as can be seen from the contents of debian/compat:


It does essentially the same thing as what we've seen in the build log, but in a more verbose way.



This file contains a list of directories that dh_installdirs will create in the build directory.

The reason why these directories need to be created is that files are copied into these directories in the install target of the debian/rules file.
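Based on the cp destinations in that install target, the dirs file presumably lists something like this (inferred, illustrative):

```
usr/sbin
etc/fcheck
```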

Note that this is different from directories which are created at the time of installation of the package. In that case, the directory (e.g. /var/log/fcheck/) must be created in the postinst script and removed in the postrm script.


# Regular cron job for the fcheck package
30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3 /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1; then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f /var/run/fcheck.out

This file is the cronjob which drives the checks performed by this package. It will be copied to /etc/cron.d/fcheck by dh_installcron.

Settings v. Prefs in Gaia Development

Jed and I got confused the other day when trying to add hidden prefs for a small Firefox OS application. We wanted to make a few advanced options configurable via preferences (like those found in about:config in Firefox) but couldn't figure out why it wasn't possible to access them from within our certified application.

The answer is that settings and prefs are entirely different things in FxOS land.


This is how you set prefs in Gaia:

pref("devtools.debugger.forbid-certified-apps", false);
pref("dom.inter-app-communication-api.enabled", true);

from build/config/custom-prefs.js.

These will be used by the Gecko layer like this:

if (!Preferences::GetBool("dom.inter-app-communication-api.enabled", false)) {
  return false;
}
from within C++ code, and like this:

let restrictPrivileges = Services.prefs.getBoolPref("devtools.debugger.forbid-certified-apps");

from JavaScript code.

Preferences can be strings, integers or booleans.


Settings on the other hand are JSON objects which can be set like this:

"alarm.enabled": false,

in build/config/common-settings.json and can then be read like this:

var req = navigator.mozSettings.createLock().get('alarm.enabled');
req.onsuccess = function() {
  // the value is available as req.result['alarm.enabled']
  var alarmEnabled = req.result['alarm.enabled'];
};

as long as you have the following in your application manifest:

"permissions": {
  "settings":{ "access": "readwrite" },

In other words, if you set something in build/config/custom-prefs.js, don't expect to be able to read it using navigator.mozSettings or the SettingsHelper!

Using vnc to do remote tech support over high-latency networks

If you ever find yourself doing a bit of technical support for relatives over the phone, there's nothing like actually seeing what they are doing on their computer. One of the best tools for such remote desktop sharing is vnc.

Here's the best setup I have come up with so far. If you have any suggestions, please leave a comment!

Basic vnc configuration

First off, you need two things: a vnc server on your relative's machine and a vnc client on yours. Thanks to vnc being an open protocol, there are many choices for both.

I eventually settled on x11vnc for the server and ssvnc for the client. They are both available in the standard Debian and Ubuntu repositories.

Since I have ssh access on the machine that needs to run the server, I simply login and then run x11vnc. Here's what ~/.x11vncrc contains:


That option appears to be necessary when the desktop to share is running gnome-shell / compiz.

Afterwards, I start the client on my laptop with the following command:

ssvncviewer -encodings zrle -scale 1280x775 localhost

The scaling factor is simply the resolution of the client minus any window decorations.

ssh configuration

As you can see above, the client is not connecting directly to the server. Instead it's connecting to its own vnc port (localhost:5900). That's because I'm tunnelling the traffic through the ssh connection in order to avoid relying on vnc extensions for authentication and encryption.

Here's what the client's ~/.ssh/config needs for that simple use case:

  LocalForward 5900

If the remote host (which has an internal IP address of in this example) is not connected directly to the outside world and instead goes through a gateway, then your ~/.ssh/config will look like this:

  ForwardAgent yes
  LocalForward 5900

  ProxyCommand ssh -q -a nc -q0 %h 22

and the remote host will need to open up a port on its firewall for the gateway (internal IP address of here):

iptables -A INPUT -p tcp --dport 5900 -s -j ACCEPT

Optimizing for high-latency networks

Since I do most of my tech support over a very high latency network, I tweaked the default vnc settings to reduce the amount of network traffic.

I added this to ~/.x11vncrc on the vnc server:

ncache 10

and changed the client command line to this:

ssvncviewer -compresslevel 9 -quality 3 -bgr233 -encodings zrle -use64 -scale 1280x775 -ycrop 1024 localhost

This decreases image quality (and required bandwidth) and enables client-side caching.

The magic 1024 number is simply the full vertical resolution of the remote machine, which sports a vintage 1280x1024 LCD monitor.

Hardening ssh Servers

Basic configuration

There are a few basic things that most admins will already know (and that tiger will warn you about if you forget):

  • only allow version 2 of the protocol
  • disable root logins
  • disable password authentication

This is what /etc/ssh/sshd_config should contain:

Protocol 2
PasswordAuthentication no
PermitRootLogin no

Whitelist approach to giving users ssh access

To ensure that only a few users have ssh access to the server and that newly created users don't have it enabled by default, create a new group:

addgroup sshuser

and then add the relevant users to it:

adduser francois sshuser

Finally, add this to /etc/ssh/sshd_config:

AllowGroups sshuser

Deterring brute-force (or dictionary) attacks

One way to ban attackers who try to brute-force your ssh server is to install the fail2ban package. It keeps an eye on the ssh log file (/var/log/auth.log) and temporarily blocks IP addresses after a number of failed login attempts.
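The defaults can be tightened in a drop-in file; for example (illustrative values; on wheezy the relevant jail is named "ssh"):

```
# /etc/fail2ban/jail.local
[ssh]
enabled  = true
maxretry = 3
bantime  = 600
```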

Another approach is to hide the ssh service using Single-Packet Authentication. I have fwknop installed on some of my servers and use small wrapper scripts to connect to them.

Using restricted shells

For those users who only need an ssh account on the server in order to transfer files (using scp or rsync), it's a good idea to set their shell (via chsh) to a restricted one like rssh.
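With rssh, the allowed commands are whitelisted in its configuration file, e.g. (illustrative, matching the error message below):

```
# /etc/rssh.conf: only allow rsync
allowrsync
```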

Should they attempt to log into the server, these users will be greeted with the following error message:

This account is restricted by rssh.
Allowed commands: rsync 

If you believe this is in error, please contact your system administrator.

Connection to closed.

Restricting authorized keys to certain IP addresses

In addition to listing all of the public keys that are allowed to log into a user account, the ~/.ssh/authorized_keys file also allows (as the man page points out) a user to impose a number of restrictions.

Perhaps the most useful option is "from", which allows a user to restrict the IP addresses that can log in using a specific key.

Here's what one of my authorized_keys looks like:

from="" ssh-rsa AAAAB3Nz...zvCn bot@example

You may also want to include the following options to each entry: no-X11-forwarding, no-user-rc, no-pty, no-agent-forwarding and no-port-forwarding.

Increasing the amount of logging

The first thing I'd recommend is to increase the level of verbosity in /etc/ssh/sshd_config:

LogLevel VERBOSE


which will, amongst other things, log the fingerprints of keys used to login:

sshd: Connection from port 39671
sshd: Found matching RSA key: de:ad:be:ef:ca:fe
sshd: Postponed publickey for francois from port 39671 ssh2 [preauth]
sshd: Accepted publickey for francois from port 39671 ssh2 

Secondly, if you run logcheck and would like to whitelist the "Accepted publickey" messages on your server, you'll have to start by deleting the first line of /etc/logcheck/ignore.d.server/sshd. Then you can add an entry for all of the usernames and IP addresses that you expect to see.
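Such an entry is an extended regular expression matching the whole log line, something like this (illustrative username and IP):

```
\w{3} [ :0-9]{11} [._[:alnum:]-]+ sshd\[[0-9]+\]: Accepted publickey for francois from 192\.0\.2\.1 port [0-9]+ ssh2$
```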

Finally, it is also possible to log all commands issued by a specific user over ssh by enabling the pam_tty_audit module in /etc/pam.d/sshd:

session required pam_tty_audit.so enable=francois

However this module is not included in wheezy and has only recently been re-added to Debian.

Identifying stolen keys

One thing I'd love to have is a way to identify a stolen public key. Given the IP restrictions described above, if a public key is stolen and used from a different IP, I will see something like this in /var/log/auth.log:

sshd: Connection from port 39492
sshd: Authentication tried for francois with correct key but not from a permitted host (host=, ip=
sshd: Failed publickey for francois from port 39492 ssh2
sshd: Connection closed by [preauth]

So I can get the IP address of the attacker (likely to be a random VPS or a Tor exit node), but unfortunately, the key fingerprints don't appear for failed connections like they do for successful ones. So I don't know which key to revoke.

Is there any way to identify which key was used in a failed login attempt or is the solution to only ever have a single public key in each authorized_keys file and create a separate user account for each user?

Running your own XMPP server on Debian or Ubuntu

In order to get closer to my goal of reducing my dependence on centralized services, I decided to set up my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website and here's how I put everything together.


DNS and SSL certificate

My personal domain is and so I created the following DNS records:

jabber-gw            CNAME
_xmpp-client._tcp    SRV      5 0 5222
_xmpp-server._tcp    SRV      5 0 5269

Then I went to get a free XMPP SSL certificate for from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:

openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/"

I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:

cat ssl.crt ssl.key > ejabberd.pem

ejabberd installation

Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki with an additional customization to solve the Pidgin "Not authorized" connection problems.

  1. Install the package, using "admin" as the username for the administrative user:

    apt-get install ejabberd
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):

    {acl, admin, {user, "admin", ""}}.
    {hosts, [""]}.
    {fqdn, ""}.
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:

    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
  4. Improve the client-to-server TLS configuration by adding starttls_required to this block:

        {5222, ejabberd_c2s, [
          {access, c2s},
          {shaper, c2s_shaper},
          {max_stanza_size, 65536},
          {certfile, "/etc/ejabberd/ejabberd.pem"}
  5. Restart the ejabberd daemon:

    /etc/init.d/ejabberd restart
  6. Create a new user account for yourself:

    ejabberdctl register me P@ssw0rd1!
  7. Open up the following ports on the server's firewall:

    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
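
Once the daemon is up, one way to confirm that STARTTLS works on the client port is with openssl s_client; this is a sketch, and the host name you pass in is a placeholder for your own domain:

```shell
#!/bin/sh
# Ask the server to upgrade an XMPP session on port 5222 to TLS and
# check that it presents a certificate.
check_starttls() {
    openssl s_client -connect "$1:5222" -starttls xmpp </dev/null 2>/dev/null \
        | grep -q 'BEGIN CERTIFICATE'
}
```

For example: check_starttls jabber.example.com && echo OK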

Client setup

On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:

  • Protocol: XMPP
  • Username: me
  • Domain:
  • Password: P@ssw0rd1!

and the following setting in the "Advanced" tab:

  • Connection security: Require encryption

From this, I was able to connect to the server without clicking through any certificate warnings.


Testing the setup

If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message.

In this example, the XMPP address I give to my friends is

Finally, to ensure that your TLS settings are reasonable, use this automated tool to test both the client-to-server (c2s) and the server-to-server (s2s) flows.

Creating a Linode-based VPN setup using OpenVPN on Debian or Ubuntu

Using a Virtual Private Network is a good way to work around geoIP restrictions, but also to protect your network traffic when travelling with your laptop and connecting to untrusted networks.

While you might want to use Tor for the part of your network activity where you prefer to be anonymous, a VPN is a faster way to connect to sites that already know you.

Here are my instructions for setting up OpenVPN on Debian / Ubuntu machines where the VPN server is located on a cheap Linode virtual private server. They are largely based on the instructions found on the Debian wiki.

An easier way to set up an ad-hoc VPN is to use sshuttle but for some reason, it doesn't seem to work on Linode or Rackspace virtual servers.
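
For reference, a typical sshuttle invocation looks something like this (the user and host are placeholders; this is not part of the OpenVPN setup below):

```shell
#!/bin/sh
# Tunnel all IPv4 traffic (0/0) and DNS lookups through an ssh
# connection to the given host; sshuttle needs root privileges locally.
adhoc_vpn() {
    sshuttle --dns -r "francois@$1" 0/0
}
```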

Generating the keys

Make sure you run the following on a machine with good entropy and not a VM! I personally use a machine fitted with an Entropy Key.

The first step is to install the required package:

sudo apt-get install openvpn

Then, copy the easy-rsa example files into your home directory (no need to run any of this as root):

mkdir easy-rsa
cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ easy-rsa/
cd easy-rsa/2.0

and put something like this in your ~/easy-rsa/2.0/vars:

export KEY_SIZE=2048
export KEY_CITY="Auckland"
export KEY_ORG=""
export KEY_EMAIL=""
export KEY_OU=VPN

Create this symbolic link:

ln -s openssl-1.0.0.cnf openssl.cnf

and generate the keys:

. ./vars
./build-key-server server  # press ENTER at every prompt, no password
./build-key akranes  # "akranes" as Name, no password
/usr/sbin/openvpn --genkey --secret keys/ta.key

Configuring the server

On my server, a Linode VPS called hafnarfjordur, I installed the openvpn package:

apt-get install openvpn

and then copied the following files from my high-entropy machine:

cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
chown root:root /etc/openvpn/*
chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key

Then I took the official configuration template:

cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
gunzip /etc/openvpn/server.conf.gz

and set the following in /etc/openvpn/server.conf (which includes recommendations from

dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS"
push "dhcp-option DNS"
tls-auth ta.key 0
cipher AES-256-CBC
auth SHA384
user nobody
group nogroup

(These DNS servers are the ones I found in /etc/resolv.conf on my Linode VPS.)

Finally, I added the following to these configuration files:

  • /etc/sysctl.conf:

    net.ipv4.ip_forward=1
  • /etc/rc.local (just before exit 0):

    iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE
  • /etc/default/openvpn:

    AUTOSTART="all"

and ran sysctl -p before starting OpenVPN:

/etc/init.d/openvpn start

If the server has a firewall, you'll need to open up this port:

iptables -A INPUT -p udp --dport 1194 -j ACCEPT

as well as let forwarded packets flow:

iptables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s -o eth0 -j ACCEPT

Configuring the client

The final piece of this solution is to set up my laptop, akranes, to connect to hafnarfjordur by installing the relevant Network Manager plugin:

apt-get install network-manager-openvpn-gnome

The laptop needs these files from the high-entropy machine:

cp ca.crt akranes.crt akranes.key ta.key /etc/openvpn/
chown root:francois /etc/openvpn/akranes.key /etc/openvpn/ta.key
chmod 640 /etc/openvpn/ta.key /etc/openvpn/akranes.key

and my own user needs to have read access to the secret keys.

To create a new VPN, right-click on Network-Manager and add a new VPN connection of type "OpenVPN":

  • Gateway:
  • Type: Certificates (TLS)
  • User Certificate: /etc/openvpn/akranes.crt
  • CA Certificate: /etc/openvpn/ca.crt
  • Private Key: /etc/openvpn/akranes.key
  • Available to all users: NO

then click the "Advanced" button and set the following:

  • General
    • Use LZO data compression: YES
  • Security
    • Cipher: AES-256-CBC
    • HMAC Authentication: SHA-384
  • TLS Authentication
    • Subject Match: server
    • Verify peer (server) certificate usage signature: YES
    • Remote peer certificate TLS type: Server
    • Use additional TLS authentication: YES
    • Key File: /etc/openvpn/ta.key
    • Key Direction: 1


Troubleshooting

If you run into problems, simply take a look at the logs while attempting to connect to the server:

tail -f /var/log/syslog

on both the server and the client.

In my experience, searching for the error messages you find in there is usually enough to solve the problem.

Next steps

The next thing I'm going to add to this VPN setup is a local unbound DNS resolver that will be offered to all clients.

Is there anything else you have in your setup and that I should consider adding to mine?

Things that work well with Tor

Tor is a proxy server which allows its users to hide their IP address from the websites they connect to. In order to provide this level of anonymity however, it introduces latency into these connections, an unfortunate performance-privacy trade-off which means that few users choose to do all of their browsing through Tor.

Here are a few things that I have found work quite well through Tor. If there are any other interesting use cases I've missed, please leave a comment!

Tor setup

There are already great docs on how to install and configure the Tor server and the only thing I would add is that I've found that having a Polipo proxy around is quite useful for those applications that support HTTP proxies but not SOCKS proxies.

On Debian, it's just a matter of installing the polipo package and then configuring it as it used to be recommended by the Tor project.
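
The key part of that configuration is pointing Polipo at Tor's SOCKS port; something like this in /etc/polipo/config (values assumed, with port 8008 matching the proxy settings used elsewhere in this post):

```
# Listen on the port the other applications expect
proxyPort = 8008
# Forward everything through the local Tor SOCKS proxy
socksParentProxy = "localhost:9050"
socksProxyType = socks5
```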

RSS feeds

The whole idea behind RSS feeds is that articles are downloaded in batch ahead of time. In other words, latency doesn't matter.

I use akregator to read blogs and the way to make it fetch articles over Tor is to change the KDE-wide proxy server using systemsettings and setting a manual proxy of localhost on port 8008 (i.e. the local instance of Polipo). If you don't see the proxy settings in the KDE control panel, make sure that the kde-baseapps-bin, libkonq-common and kpart-webkit packages are installed.

Similarly, I use podget to automatically fetch podcasts through this cron job in /etc/cron.d/podget-francois:

0 12 * * 1-5 francois   http_proxy=http://localhost:8008/ https_proxy=http://localhost:8008/ nocache nice ionice -n7 /usr/bin/podget -s

Prior to that, I was using hpodder and had the following in ~/.hpodder/curlrc:



GPG keyserver

For those of us using the GNU Privacy Guard to exchange encrypted emails, keeping our public keyring up to date is important since it's the only way to ensure that revoked keys are taken into account. The script I use for this runs once a day and has the unfortunate side effect of revealing the contents of my address book to the keyserver I use.

Therefore, I figured that I should at least hide my IP address by putting the following in ~/.gnupg/gpg.conf:

keyserver-options http-proxy=

However, that tends to make key submission fail, so I created a key submission alias in my ~/.bashrc which avoids sending keys through Tor:

alias gpgsendkeys='gpg --send-keys --keyserver-options http-proxy=""'

Instant messaging

Communication via XMPP is another use case that's not affected much by a bit of extra latency.

To get Pidgin to talk to an XMPP server over Tor, simply open "Tools | Preferences" and set a SOCKS5 (not Tor/Privacy) proxy of localhost on port 9050.


GMail

Finally, I found that since I am running GMail in a separate browser profile, I can take advantage of GMail's excellent caching and preloading and run the whole thing over Tor by setting that entire browser profile to run its traffic through the Tor SOCKS proxy on port 9050.

The Perils of RAID and Full Disk Encryption on Ubuntu 12.04

I've been using disk encryption (via LUKS and cryptsetup) on Debian and Ubuntu for quite some time and it has worked well for me. However, while setting up full disk encryption for a new computer on a RAID1 partition, I discovered that there are a few major problems with RAID on Ubuntu.

My Setup: RAID and LUKS

Since I was setting up a new machine on Ubuntu 12.04 LTS (Precise Pangolin), I used the alternate CD (I burned ubuntu-12.04.3-alternate-amd64+mac.iso to a blank DVD) to get access to the full disk encryption options.

First, I created a RAID1 array to mirror the data on the two hard disks. Then, I used the partition manager built into the installer to set up an unencrypted boot partition (/dev/md0 mounted as /boot) and an encrypted root partition (/dev/md1 mounted as /) on the RAID1 array.

While I had done full disk encryption and mirrored drives before, I had never done them at the same time on Ubuntu or Debian.

The problem: cannot boot an encrypted degraded RAID

After setting up the RAID, I decided to test it by booting from each drive with the other one unplugged.

The first step was to ensure that the system is configured (via dpkg-reconfigure mdadm) to boot in "degraded mode".
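
On Ubuntu, the result of that reconfiguration is a one-line setting; if you prefer to check or set it by hand, it should look something like this (path as used by the Ubuntu mdadm packaging; verify on your release):

```
# /etc/initramfs-tools/conf.d/mdadm
BOOT_DEGRADED=true
```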

When I rebooted with a single disk though, I received an "evms_activate is not available" error message instead of the usual cryptsetup password prompt. The exact problem I ran into is best described in this comment (see this bug for context).

It turns out that booting degraded RAID arrays has been plagued with several problems.

My solution: an extra initramfs boot script to start the RAID array

The underlying problem is that the RAID1 array is not started automatically when it's missing a disk and so cryptsetup cannot find the UUID of the drive to decrypt (as configured in /etc/crypttab).

My fix, based on a script I was lucky enough to stumble on, lives in /etc/initramfs-tools/scripts/local-top/cryptraid:

#!/bin/sh
PREREQ="cryptroot"
prereqs() {
     echo "$PREREQ"
}
case $1 in
prereqs)
     prereqs
     exit 0
     ;;
esac

cat /proc/mdstat
mdadm --run /dev/md1
cat /proc/mdstat

After creating that file, remember to:

  1. make the script executable (using chmod a+x) and
  2. regenerate the initramfs (using dpkg-reconfigure linux-image-KERNELVERSION).

To make sure that the script is doing the right thing:

  1. press "Shift" while booting to bring up the Grub menu
  2. then press "e" to edit the default boot line
  3. remove the "quiet" and "splash" options from the kernel arguments
  4. press F10 to boot with maximum console output

You should see the RAID array stopped (look for the output of the first cat /proc/mdstat call) and then you should see output from a running degraded RAID array.

Backing up the old initramfs

If you want to be extra safe while testing this new initramfs, make sure you only reconfigure one kernel at a time (no update-initramfs -u -k all) and make a copy of the initramfs before you reconfigure the kernel:

cp /boot/initrd.img-KERNELVERSION-generic /boot/initrd.img-KERNELVERSION-generic.original

Then if you run into problems, you can go into the Grub menu, edit the default boot option and make it load the .original initramfs.