Using unattended-upgrades on Rackspace's Debian and Ubuntu servers

I install the unattended-upgrades package on almost all of my Debian and Ubuntu servers in order to ensure that security updates are automatically applied. It works quite well except that I still need to log in manually to upgrade my Rackspace servers whenever a new rackspace-monitoring-agent is released because it comes from a separate repository that's not covered by unattended-upgrades.

It turns out that unattended-upgrades can be configured to automatically upgrade packages outside of the standard security repositories but it's not very well documented and the few relevant answers you can find online are still using the old whitelist syntax.

Initial setup

The first thing to do is to install the package if it's not already done:

apt-get install unattended-upgrades

and to answer yes to the automatic stable update question.

If you don't see the question (because your debconf threshold is too low -- change it with dpkg-reconfigure debconf), you can always trigger the question manually:

dpkg-reconfigure -plow unattended-upgrades

Once you've got that installed, the configuration file you need to look at is /etc/apt/apt.conf.d/50unattended-upgrades.

Whitelist matching criteria

Looking at the unattended-upgrades source code, I found the list of things that can be used to match on in the whitelist:

  • origin (shortcut: o)
  • label (shortcut: l)
  • archive (shortcut: a)
  • suite (which is the same as archive)
  • component (shortcut: c)
  • site (no shortcut)

You can find the value for each of these fields in the appropriate _Release file under /var/lib/apt/lists/.
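To survey those fields for every repository apt knows about, you can grep the Release files directly. A quick sketch (the glob assumes the default apt lists location):

```shell
# print the whitelist-relevant fields from one or more Release files
show_release_fields() {
    grep -E '^(Origin|Label|Suite|Codename|Components):' "$@"
}

# inspect everything apt has downloaded
show_release_fields /var/lib/apt/lists/*_Release 2>/dev/null || true
```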

Note that the value of site is the hostname of the package repository, which is also present in the first part of these *_Release filenames (stable.packages.cloudmonitoring.rackspace.com in the example below).

In my case, I was looking at the following inside /var/lib/apt/lists/stable.packages.cloudmonitoring.rackspace.com_debian-wheezy-x86%5f64_dists_cloudmonitoring_Release:

Origin: Rackspace
Codename: cloudmonitoring
Date: Fri, 23 Jan 2015 18:58:49 UTC
Architectures: i386 amd64
Components: main

which means that, in addition to site, the only things I could match on were origin and component since there are no Suite or Label fields in the Release file.

This is the line I ended up adding to my /etc/apt/apt.conf.d/50unattended-upgrades:

 Unattended-Upgrade::Origins-Pattern {
         // Archive or Suite based matching:
         // Note that this will silently match a different release after
         // migration to the specified archive (e.g. testing becomes the
         // new stable).
 //      "o=Debian,a=stable";
 //      "o=Debian,a=stable-updates";
 //      "o=Debian,a=proposed-updates";
         "origin=Rackspace,component=main";
 };

To ensure that the config is right and that unattended-upgrades will pick up rackspace-monitoring-agent the next time it runs, I used:

unattended-upgrade --dry-run --debug

which should output something like this:

Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ['origin=Debian,archive=stable,label=Debian-Security', 'origin=Debian,archive=oldstable,label=Debian-Security', 'origin=Rackspace,component=main']
Checking: rackspace-monitoring-agent (["<Origin component:'main' archive:'' origin:'Rackspace' label:'' site:'' isTrusted:True>"])
pkgs that look like they should be upgraded: rackspace-monitoring-agent
Option --dry-run given, *not* performing real actions
Packages that are upgraded: rackspace-monitoring-agent

Making sure that automatic updates are happening

In order to make sure that all of this is working and that updates are actually happening, I always install apticron on all of the servers I maintain. It runs once a day and emails me a list of packages that need to be updated and it keeps doing that until the system is fully up-to-date.

The only thing missing from this is getting a reminder whenever a package update (usually the kernel) requires a reboot to take effect. That's where the update-notifier-common package comes in.

Because that package will add a hook that will create the /var/run/reboot-required file whenever a kernel update has been installed, all you need to do is create a cronjob like this in /etc/cron.daily/reboot-required:

cat /var/run/reboot-required 2> /dev/null || true

assuming of course that you are already receiving emails sent to the root user (if not, add the appropriate alias in /etc/aliases and run newaliases).
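Because cron emails any output to root, the daily job above can also be written a little more defensively, including the package list that update-notifier-common records alongside the flag (a sketch, not a drop-in replacement; the function wrapper is only there for clarity):

```shell
#!/bin/sh
# /etc/cron.daily/reboot-required -- any output gets emailed to root by cron
check_reboot_required() {
    flag="$1"
    if [ -f "$flag" ]; then
        # print the reboot notice...
        cat "$flag"
        # ...and the list of packages that requested it, when available
        cat "${flag}.pkgs" 2>/dev/null
    fi
    return 0
}

check_reboot_required /var/run/reboot-required
```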

Making Firefox Hello work with NoScript and RequestPolicy

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology).

If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check

There are a few preferences to check in about:config:

  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false


If you use the popular NoScript add-on, you will need to whitelist the following hosts:

  • about:loopconversation


If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following destination host:


as well as the following origin to destination mappings:

  • about:loopconversation ->
  • about:loopconversation ->
  • about:loopconversation ->
  • ->
  • ->
  • ->
  • ->

I have unfortunately not been able to find a way to restrict to a set of (source, destination) pairs. I suspect that the use of websockets confuses RequestPolicy.

If you find a more restrictive policy that works, please leave a comment!

Mercurial and Bitbucket workflow for Gecko development

While it sounds like I should really switch to a bookmark-based Mercurial workflow for my Gecko development, I figured that before I do that, I should document how I currently use patch queues and Bitbucket.

Starting work on a new bug

After creating a new bug in Bugzilla, I do the following:

  1. Create a new mozilla-central-mq-BUGNUMBER repo on Bitbucket using the web interface and put as the Website in the repository settings.
  2. Create a new patch queue: hg qqueue -c BUGNUMBER
  3. Initialize the patch queue: hg init --mq
  4. Make some changes.
  5. Create a new patch: hg qnew -Ue bugBUGNUMBER.patch
  6. Commit the patch to the mq repo: hg commit --mq -m "Initial version"
  7. Push the mq repo to Bitbucket: hg push ssh://
  8. Make the above URL the default for pull/push by putting this in .hg/patches-BUGNUMBER/.hg/hgrc:

    [paths]
    default =
    default-push = ssh://

Working on a bug

I like to preserve the history of the work I did on a patch. So once I've got some meaningful changes to commit to my patch queue repo, I do the following:

  1. Add the changes to the current patch: hg qref
  2. Check that everything looks fine: hg diff --mq
  3. Commit the changes to the mq repo: hg commit --mq
  4. Push the changes to Bitbucket: hg push --mq

Switching between bugs

Since I have one patch queue per bug, I can easily work on more than one bug at a time without having to clone the repository again and work from a different directory.

Here's how I switch between patch queues:

  1. Unapply the current queue's patches: hg qpop -a
  2. Switch to the new queue: hg qqueue BUGNUMBER
  3. Apply all of the new queue's patches: hg qpush -a

Rebasing a patch queue

To rebase my patch onto the latest mozilla-central tip, I do the following:

  1. Unapply patches using hg qpop -a
  2. Update the branch: hg pull -u
  3. Reapply the first patch: hg qpush and resolve any conflicts
  4. Update the patch file in the queue: hg qref
  5. Repeat steps 3 and 4 for each patch.
  6. Commit the changes: hg commit --mq -m "Rebase patch"


Thanks to Thinker Lee for telling me about qqueue and Chris Pearce for explaining to me how he uses mq repos on Bitbucket.

Of course, feel free to leave a comment if I missed anything useful or if there's an easier way to do any of the above.

Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like as the subject name, that is the URL you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (OFTC and Mozilla):

servers = (
  {
    address = "";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:

#!/bin/bash
ssh "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    ssh killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward = 1
  2. and then applying it using:

    sysctl -p
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
  5. and applying the rules using:


Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR= lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.


What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised since every list email transits through there at some point in plain text.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to the list's sendkey address. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email:
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring:

    sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Signing the list key

In order to give assurance to list members that the list key is legitimate, it should be signed by the list admin.

  1. Export the key:

    sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ -a --export LISTNAME@HOSTNAME > ~/LISTNAME.asc
  2. Copy the key onto your own machine using scp and import it: gpg --import < LISTNAME.asc
  3. Sign the key locally by editing the key gpg --edit-key KEYID and using the sign command.
  4. Export your signature: gpg -a --export KEYID > LISTNAME.asc
  5. Copy the signed key back onto the server using scp.
  6. Import your signature into the schleuder public keyring:

    sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import < ~/LISTNAME.asc

After that, anybody requesting the list key will get your signature as well.

Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256
What if one of these 256 external components has a security vulnerability? How would you know and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this lesson has not propagated to the web, where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.


Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack:

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.
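Concretely, such a delegation can be expressed with a single zone file entry; the names below are illustrative, with _avatars._tcp being the service label used by the Libravatar protocol:

```
; serve avatar requests for example.com from avatars.example.com, port 80
_avatars._tcp.example.com. 3600 IN SRV 0 0 80 avatars.example.com.
```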

For example, hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

whereas hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

due to the presence of an SRV record on
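The hashing step itself is easy to reproduce in a few lines of shell. Here is a sketch (the helper name and base URL are made up; a real client would first do the SRV lookup described above):

```shell
# build an avatar URL from a base URL and an email address:
# trim whitespace and lowercase the address, then MD5 it
libravatar_url() {
    base="$1"
    email="$2"
    hash=$(printf '%s' "$email" | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]' \
        | md5sum | cut -d' ' -f1)
    printf '%s/avatar/%s\n' "$base" "$hash"
}

libravatar_url "http://avatars.example.com" "user@example.com"
```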

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.


Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.


The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent users of Fedora (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including:

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false


In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the following packages:

Keyboard layout switcher

Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
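The script itself is short; a minimal version of toggle-xkbmap could look like this (the us/fr layout pair is just an example):

```shell
#!/bin/sh
# toggle-xkbmap -- flip between two keyboard layouts using setxkbmap
next_layout() {
    # given the current layout, return the other one
    if [ "$1" = "us" ]; then
        echo "fr"
    else
        echo "us"
    fi
}

# only touch the keyboard when running inside an X session
if [ -n "$DISPLAY" ] && command -v setxkbmap >/dev/null 2>&1; then
    current=$(setxkbmap -query | awk '/^layout/ {print $2}')
    setxkbmap "$(next_layout "$current")"
fi
```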

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:

HandleLidSwitch=ignore
Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
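In outline, such a suspend script might look like this (the availability checks are mine; adjust the commands to your environment):

```shell
#!/bin/sh
# s2ram -- clear clipboards, flush disks, lock the screen, then suspend
clear_clipboards() {
    # xsel only works inside an X session
    if [ -n "$DISPLAY" ] && command -v xsel >/dev/null 2>&1; then
        xsel --clipboard --clear
        xsel --primary --clear
    fi
}

clear_clipboards
sync    # flush pending writes to disk
if [ -n "$DISPLAY" ] && command -v gnome-screensaver-command >/dev/null 2>&1; then
    gnome-screensaver-command --lock
fi
if command -v pm-suspend >/dev/null 2>&1; then
    sudo /usr/sbin/pm-suspend
fi
```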

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:

bindsym XF86Display exec /home/francois/bin/external-monitor
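That script isn't reproduced here, but in outline it can be as simple as checking whether the external output is connected and enabling the right combination (output names match the workspace config above; the layout choices are mine):

```shell
#!/bin/sh
# external-monitor -- enable whichever outputs are currently connected
docked() {
    # decide from xrandr's output whether the external monitor is attached
    printf '%s\n' "$1" | grep -q '^DP2 connected'
}

if [ -n "$DISPLAY" ] && command -v xrandr >/dev/null 2>&1; then
    if docked "$(xrandr --display :0)"; then
        # external monitor on the right, laptop panel on the left
        xrandr --output eDP1 --auto --output DP2 --auto --primary --right-of eDP1
    else
        xrandr --output DP2 --off --output eDP1 --auto --primary
    fi
fi
```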

CrashPlan and non-executable /tmp directories

If your computer's /tmp is non-executable, you will run into problems with CrashPlan.

For example, the temp directory on my laptop is mounted using this line in /etc/fstab:

tmpfs  /tmp  tmpfs  size=1024M,noexec,nosuid,nodev  0  0

This configuration leads to two serious problems with CrashPlan.

CrashPlan client not starting up

The first one is that while the daemon is running, the client doesn't start up and doesn't print anything out to the console.

You have to look in /usr/local/crashplan/log/ui_error.log to find the following error message:

Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
  Can't load library: /tmp/.cpswt/
  Can't load library: /tmp/.cpswt/
  no swt-gtk-4234 in java.library.path
  no swt-gtk in java.library.path
  /tmp/.cpswt/ /tmp/.cpswt/ failed to map segment from shared object: Operation not permitted

  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
  at com.backup42.desktop.CPDesktop.<init>(
  at com.backup42.desktop.CPDesktop.main(

To fix this, you must tell the client to use a different directory, one that is executable and writable by users who need to use the GUI, by adding something like this to the GUI_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:
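The snippet boils down to pointing Java's temporary directory somewhere else via the standard java.io.tmpdir property, so it presumably looks something like this (the path is only a placeholder for a directory that meets those requirements):

```shell
# /usr/local/crashplan/bin/run.conf (excerpt); the path is an example of a
# user-writable, executable directory, not CrashPlan's documented default.
GUI_JAVA_OPTS="${GUI_JAVA_OPTS} -Djava.io.tmpdir=/home/francois/.crashplan-tmp"
```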

Backup waiting forever

The second problem is that once you're able to start the client, backups are stuck at "waiting for backup" and you can see the following in /usr/local/crashplan/log/engine_error.log:

Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
  at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(
  at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(
  at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(
  at com.code42.backup.path.BackupSetsManager.access$1600(
  at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(

This time, you must tell the server to use a different directory, one that is executable and writable by the CrashPlan engine user (root on my machine), by adding something like this to the SRV_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:
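As with the client fix, this presumably means overriding java.io.tmpdir, this time for the engine; something like the following, where the path is only a placeholder:

```shell
# /usr/local/crashplan/bin/run.conf (excerpt); /var/crashplan-tmp is a
# placeholder for a directory writable and executable by the engine user.
SRV_JAVA_OPTS="${SRV_JAVA_OPTS} -Djava.io.tmpdir=/var/crashplan-tmp"
```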
What's in a debian/ directory?

If you're looking to get started at packaging free software for Debian, you should start with the excellent New Maintainers' Guide or the Introduction to Debian Packaging on the Debian wiki.

Once you know the basics, or if you prefer to learn by example, you may be interested in the full walkthrough which follows. We will look at the contents of three simple packages.


The first package, node-libravatar, is a node.js library for the Libravatar service.

Version 2.0.0-3 of that package contains the following files in its debian/ directory:

  • changelog
  • compat
  • control
  • copyright
  • docs
  • node-libravatar.install
  • rules
  • source/format
  • watch

debian/control

Source: node-libravatar
Priority: extra
Maintainer: Francois Marier <>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4
Section: web
Vcs-Git: git://

Package: node-libravatar
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, nodejs
Description: libravatar library for NodeJS
 This library allows web application authors to make use of the free Libravatar
 service ( This service hosts avatar images for
 users and allows other sites to look them up using email addresses.
 node-libravatar includes full support for federated avatar servers.

This is probably the most important file since it contains the bulk of the metadata about this package.

Maintainer is a required field listing the maintainer of that package, which can be a person or a team. It only contains a single value though; any co-maintainers will be listed under the optional Uploaders field.

Build-Depends lists the packages which are needed to build the package (e.g. a compiler), as opposed to those which are needed to install the binary package (e.g. a library it uses).

Standards-Version refers to the version of the Debian Policy that this package complies with.

The Homepage field refers to the upstream homepage, whereas the Vcs-* fields point to the repository where the packaging is stored. If you take a look at the node-libravatar packaging repository you will see that it contains three branches:

  • upstream is the source as it was in the tarball downloaded from upstream.
  • master is the upstream branch along with all of the Debian customizations.
  • pristine-tar is unrelated to the other two branches and is used by the pristine-tar tool to reconstitute the original upstream tarball as needed.

After these fields comes a new section which starts with a Package field. This is the definition of a binary package, not to be confused with the Source field at the top of this file, which refers to the name of the source package. In this particular example they are both the same and there is only one of each; however, this is not always the case, as we'll see later.

Inside that binary package definition, lives the Architecture field which is normally one of these two:

  • all for a binary package that will work on all architectures but only needs to be built once
  • any for a binary package that will work everywhere but that will need to be built separately for each architecture

Finally, the last field worth pointing out is the Depends field which lists all of the runtime dependencies that the binary package has. This is what will be pulled in by apt-get when you apt-get install node-libravatar. The two variables will be substituted later by debhelper.
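At build time, debhelper computes values for these variables and dpkg-gencontrol substitutes them into the generated control file. For an Architecture: all package like this one, ${shlibs:Depends} expands to nothing, so the resulting line in the binary package would be close to this (illustrative only, not copied from the real .deb):

```
Depends: nodejs
```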

debian/changelog

node-libravatar (2.0.0-3) unstable; urgency=low

  * debian/watch: poll github directly
  * Bump Standards-Version up to 3.9.4

 -- Francois Marier <>  Mon, 20 May 2013 12:07:49 +1200

node-libravatar (2.0.0-2) unstable; urgency=low

  * More precise license tag and upstream contact in debian/copyright

 -- Francois Marier <>  Tue, 29 May 2012 22:51:03 +1200

node-libravatar (2.0.0-1) unstable; urgency=low

  * New upstream release
    - new non-backward-compatible API

 -- Francois Marier <>  Mon, 07 May 2012 14:54:19 +1200

node-libravatar (1.1.1-1) unstable; urgency=low

  * Initial release (Closes: #661771)

 -- Francois Marier <>  Fri, 02 Mar 2012 15:29:57 +1300

This may seem at first like a mundane file, but it is very important since it is the canonical source of the package version (2.0.0-3 in this case). This is the only place where you need to bump the package version when uploading a new package to the Debian archive.

The first line also includes the distribution where the package will be uploaded. It is usually one of these values:

  • unstable for the vast majority of uploads
  • stable for uploads that have been approved by the release maintainers and fix serious bugs in the stable version of Debian
  • stable-security for security fixes to the stable version of Debian that cannot wait until the next stable point release and have been approved by the security team

Packages uploaded to unstable will migrate automatically to testing provided that a few conditions are met (e.g. no release-critical bugs were introduced). The length of time before that migration is influenced by the urgency field (low, medium or high) in the changelog entry.

Another thing worth noting is that the first upload normally needs to close an ITP (Intent to Package) bug.

debian/rules

#!/usr/bin/make -f
# -*- makefile -*-

%:
    dh $@


As can be gathered from the first two lines of this file, this is a Makefile. This is what controls how the package is built.

There's not much to see and that's because most of its content is automatically added by debhelper. So let's look at it in action by building the package:

$ git buildpackage -us -uc

and then looking at parts of the build log (../

 fakeroot debian/rules clean
dh clean 

One of the first things we see is the debian/rules file being run with the clean target. To find out what that does, have a look at the dh_auto_clean manpage, which states that it will attempt to delete build residues and run something like make clean using the upstream Makefile.

 debian/rules build
dh build 

Next we see the build target being invoked and looking at dh_auto_configure we see that this will essentially run ./configure and its equivalents.

The dh_auto_build helper script then takes care of running make (or equivalent) on the upstream code.

This should be familiar to anybody who has ever built a piece of free software from scratch and has encountered the usual method for building from source:

./configure && make && make install

Finally, we get to actually build the .deb:

 fakeroot debian/rules binary
dh binary 
dpkg-deb: building package `node-libravatar' in `../node-libravatar_2.0.0-3_all.deb'.

Here we see a number of helpers, including dh_auto_install which takes care of running make install.

Going back to the debian/rules, we notice that there is manually defined target at the bottom of the file:


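Based on the standard debhelper convention for skipping a helper, that target is presumably an empty override, along these lines:

```makefile
override_dh_auto_test:
```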
which essentially disables dh_auto_test by replacing it with an empty set of commands.

The reason for this becomes clear when we take a look at the test target of the upstream Makefile and the dependencies it has: tap, a node.js library that is not yet available in Debian.

In other words, we can't run the test suite on the build machines so we need to disable it here.

debian/compat

9
This file simply specifies the version of debhelper that is required by the various helpers used in debian/rules. Version 9 is the latest at the moment.

debian/copyright

Upstream-Name: node-libravatar
Upstream-Contact: Francois Marier <>

Files: *
Copyright: 2011 Francois Marier <>
License: Expat

Files: debian/*
Copyright: 2012 Francois Marier <>
License: Expat

License: Expat
 Permission is hereby granted, free of charge, to any person obtaining a copy of this
 software and associated documentation files (the "Software"), to deal in the Software
 without restriction, including without limitation the rights to use, copy, modify,
 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to the following
 conditions:
 .
 The above copyright notice and this permission notice shall be included in all copies
 or substantial portions of the Software.

This machine-readable file lists all of the different licenses encountered in this package.

It requires that the maintainer audit the upstream code for any copyright statements that might be present in addition to the license of the package as a whole.

debian/docs

This file contains a list of upstream files that will be copied into the /usr/share/doc/node-libravatar/ directory by dh_installdocs.

debian/node-libravatar.install

lib/*    usr/lib/nodejs/

The install file is used by dh_install to supplement the work done by dh_auto_install which, as we have seen earlier, essentially just runs make install on the upstream Makefile.

Looking at that upstream Makefile, it becomes clear that the files will need to be installed manually by the Debian package since that Makefile doesn't have an install target.

debian/watch

version=3
/fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz

This is the file that allows Debian tools like the Package Tracking System to automatically detect that a new upstream version is available.

What it does is simply visit the upstream page which contains all of the release tarballs and look for links which have an href matching the above regular expression.

Running uscan --report --verbose will show us all of the tarballs that can be automatically discovered using this watch file:

-- Scanning for watchfiles in .
-- Found watchfile in ./debian
-- In debian/watch, processing watchfile line: /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
-- Found the following matching hrefs:
Newest version on remote site is 2.0.0, local version is 2.0.0
 => Package is up to date
-- Scan finished


This second package, pylibravatar, is the equivalent Python library for the Libravatar service.

Version 1.6-2 of that package contains similar files in its debian/ directory, but let's look at two in particular:

  • control
  • upstream/signing-key.asc

debian/control

Source: pylibravatar
Section: python
Priority: optional
Maintainer: Francois Marier <>
Build-Depends: debhelper (>= 9), python-all, python3-all
Standards-Version: 3.9.5

Package: python-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-dns, python
Description: Libravatar module for Python 2
 Module to make use of the federated avatar hosting service
 from within Python applications.

Package: python3-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-dns, python3
Description: Libravatar module for Python 3
 Module to make use of the federated avatar hosting service
 from within Python applications.

Here is an example of a source package (pylibravatar) which builds two separate binary packages: python-libravatar and python3-libravatar.

This highlights the fact that a given upstream source can be split into several binary packages in the archive when it makes sense. In this case, there is no point in Python 2 applications pulling in the Python 3 files, so the two separate packages make sense.

Another common example is the use of a -doc package to separate the documentation from the rest of a package so that it doesn't need to be installed on production servers.

debian/upstream/signing-key.asc

Version: GnuPG v1


This is simply the OpenPGP key that the upstream developer uses to sign release tarballs.

Since PGP signatures are available on the upstream download page, it's possible to instruct uscan to check signatures before downloading tarballs.

The way to do that is to use the pgpsigurlmangle option in debian/watch:


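The exact line was not preserved above, but assuming the signatures are published by appending .asc to each tarball URL (a common convention, and the URL here is purely hypothetical), it would look something like this:

```
version=3
opts=pgpsigurlmangle=s/$/.asc/ \
  https://example.com/releases/pylibravatar-([0-9.]+)\.tar\.gz
```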
which is simply a regular expression replacement string which takes the tarball URL and converts it to the URL of the matching PGP signature.


The last package we will look at, fcheck, is a file integrity checker. It essentially goes through all of the files in /usr/bin/ and /usr/lib/ and stores a hash of them in its database. When one of these files changes, you get an email.

In particular, we will look at the following files in the debian/ directory of version 2.7.59-18:

  • dirs
  • fcheck.cron.d
  • fcheck.postrm
  • fcheck.postinst
  • patches/
  • README.Debian
  • rules
  • source/format

debian/patches/

This directory contains ten patches as well as a file called series which lists the patches that should be applied to the upstream source and in which order. Should you need to temporarily disable a patch, simply remove it from this file and it will no longer be applied at build time.
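For reference, the series file is plain text with one patch filename per line; a sketch for this package follows (only 04_cfg_sha256.patch is actually named in this walkthrough, so take the format rather than the names as authoritative):

```
# debian/patches/series: patches are applied in this order at build time
04_cfg_sha256.patch
```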

Let's have a look at patches/04_cfg_sha256.patch:

Description: Switch to sha256 hash algorithm
Forwarded: not needed
Author: Francois Marier <>
Last-Update: 2009-03-15

--- a/fcheck.cfg
+++ b/fcheck.cfg
@@ -149,8 +149,7 @@ TimeZone        = EST5EDT
 #$Signature      = /usr/bin/sum
 #$Signature      = /usr/bin/cksum
 #$Signature      = /usr/bin/md5sum
-$Signature      = /bin/cksum
+$Signature      = /usr/bin/sha256sum

 # Include an optional configuration file.

This is a very simple patch which changes the default configuration of fcheck to promote the use of a stronger hash function. At the top of the file is a bunch of metadata in the DEP-3 format.

Why does this package contain so many customizations to the upstream code when Debian's policy is to push fixes upstream and work towards reducing the delta between upstream and Debian's code? The answer can be found in debian/control:


This package no longer has an upstream maintainer and its original source is gone. In other words, the Debian package is where all of the new bug fixes get done.

debian/source/format

3.0 (quilt)

This file contains what is called the source package format. What it basically says is that the patches found in debian/patches/ will be applied to the upstream source using the quilt tool at build time.

debian/fcheck.postrm

#!/bin/sh
# postrm script for fcheck
# see: dh_installdeb(1)

set -e

# summary of how this script can be called:
#        * <postrm> `remove'
#        * <postrm> `purge'
#        * <old-postrm> `upgrade' <new-version>
#        * <new-postrm> `failed-upgrade' <old-version>
#        * <new-postrm> `abort-install'
#        * <new-postrm> `abort-install' <old-version>
#        * <new-postrm> `abort-upgrade' <old-version>
#        * <disappearer's-postrm> `disappear' <overwriter>
#          <overwriter-version>
# for details, see or
# the debian-policy package

case "$1" in
    purge)
      if [ -e /var/lib/fcheck/fcheck.dbf ]; then
        echo "Purging old database file ..."
        rm -f /var/lib/fcheck/fcheck.dbf
      fi
      rm -rf /var/lib/fcheck
      rm -rf /var/log/fcheck
      rm -rf /etc/fcheck
      ;;

    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
      ;;

    *)
        echo "postrm called with unknown argument \`$1'" >&2
        exit 1
      ;;
esac

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.

#DEBHELPER#

exit 0

This script is one of the many possible maintainer scripts that a package can provide if needed.

This particular one, as the name suggests, will be run after the package is removed (apt-get remove fcheck) or purged (apt-get remove --purge fcheck). Looking at the case statement above, it doesn't do anything extra in the remove case, but it deletes a few files and directories when called with the purge argument.

debian/README.Debian

This optional README file contains Debian-specific instructions that might be useful to users. It supplements the upstream README which is often more generic and cannot assume a particular system configuration.

debian/rules

#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

build: build-stamp

build-stamp:
    pod2man --section=8 $(CURDIR)/debian/fcheck.pod > $(CURDIR)/fcheck.8
    touch build-stamp

clean:
    rm -f build-stamp
    rm -f $(CURDIR)/fcheck.8

install: build
    cp $(CURDIR)/fcheck $(CURDIR)/debian/fcheck/usr/sbin/fcheck
    cp $(CURDIR)/fcheck.cfg $(CURDIR)/debian/fcheck/etc/fcheck/fcheck.cfg

# Build architecture-dependent files here.
binary-arch: build install

# Build architecture-independent files here.
binary-indep: build install
    dh_installman fcheck.8

binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install

This is an example of an old-style debian/rules file which you still encounter in packages that haven't yet been upgraded to the latest version of debhelper (version 9), as can be seen from the contents of debian/compat:


It does essentially the same thing as what we've seen in the build log, but in a more verbose way.

debian/dirs


This file contains a list of directories that dh_installdirs will create in the build directory.

The reason why these directories need to be created is that files are copied into these directories in the install target of the debian/rules file.

Note that this is different from directories which are created at the time of installation of the package. In that case, the directory (e.g. /var/log/fcheck/) must be created in the postinst script and removed in the postrm script.
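That pattern can be sketched as follows, using /var/log/fcheck as the example directory (a minimal sketch, not the actual fcheck.postinst; LOGDIR is parameterized only to make the sketch easy to exercise):

```shell
#!/bin/sh
# Sketch of a postinst creating a runtime directory at install time; the
# matching postrm removes it on purge. Real maintainer scripts also contain
# the #DEBHELPER# token for generated code.
set -e

LOGDIR="${LOGDIR:-/var/log/fcheck}"

case "$1" in
    configure)
        # Create the directory the package writes to at runtime.
        mkdir -p "$LOGDIR"
        ;;
esac
```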

debian/fcheck.cron.d

# Regular cron job for the fcheck package
30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3 /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1; then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f /var/run/fcheck.out

This file is the cronjob which drives the checks performed by this package. It will be copied to /etc/cron.d/fcheck by dh_installcron.