LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

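To double-check that the pieces are in place, a few read-only commands can help (assuming the default libvirt network; brctl comes from the bridge-utils package):

sysctl net.ipv4.ip_forward          # should print: net.ipv4.ip_forward = 1
virsh -c lxc:/// net-list --all     # the "default" network should be active
brctl show virbr0                   # guest veth interfaces appear here once containers run
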
Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so before you can ssh in, you'll need to install your public key via the console. Restart the container without the -d flag so that you get attached to its console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy
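
For example, if the container got 192.168.122.65 (an illustrative address from the 192.168.122.0/24 range above), you would connect with:

ssh root@192.168.122.65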

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.
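
Alternatively, you can do it non-interactively: enabling a locale amounts to uncommenting it in /etc/locale.gen and regenerating (a sketch, using the fr_CA.UTF-8 locale from the error above):

sed -i 's/^# *fr_CA.UTF-8/fr_CA.UTF-8/' /etc/locale.gen
locale-gen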

Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and re-encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised, since every list email transits through there at some point in plain text.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps
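
After editing main.cf, you can confirm the values postfix is actually using and reload the daemon:

postconf inet_interfaces local_recipient_maps
postfix reload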

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import
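
The last two steps can also be combined into a single pipeline, sketched here with the same HOSTNAME, LISTNAME and KEYID placeholders:

gpg --export -a KEYID | sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import
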
Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, these lessons have not propagated to the web, where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack: Django, Apache and Postgres, with Gearman for queuing.

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

As with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email address and append that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
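
To illustrate, the hash is simply the MD5 digest of the trimmed, lowercased email address, and the SRV lookup can be reproduced with dig (the _avatars._tcp record name is the one defined by the Libravatar API):

echo -n "francois@fmarier.org" | md5sum
# 0110e86fdb31486c22dd381326d99de9  -
dig +short SRV _avatars._tcp.fmarier.org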

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.

Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent users of Fedora (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around, and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs.

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as the daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver

In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the following packages: pulseaudio-utils (for pactl), xbacklight, and gnome-power-manager (for gnome-power-statistics).

Keyboard layout switcher

Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
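
The script itself isn't reproduced in this post, but a minimal sketch of such a toggle (assuming US and Canadian French layouts) could look like this:

#!/bin/sh
# Hypothetical toggle-xkbmap: flip between two keyboard layouts.
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap ca
else
    setxkbmap us
fi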

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:

HandleLidSwitch=lock

Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
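
That script isn't included in the post either; a minimal sketch matching the description above might be:

#!/bin/sh
# Hypothetical s2ram script: clear clipboards, flush writes to disk,
# lock the screen and then suspend to RAM.
xsel --clear --primary
xsel --clear --secondary
xsel --clear --clipboard
sync
gnome-screensaver-command --lock
sudo /usr/sbin/pm-suspend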

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:

bindsym XF86Display exec /home/francois/bin/external-monitor

CrashPlan and non-executable /tmp directories

If your computer's /tmp is non-executable, you will run into problems with CrashPlan.

For example, the temp directory on my laptop is mounted using this line in /etc/fstab:

tmpfs  /tmp  tmpfs  size=1024M,noexec,nosuid,nodev  0  0

This configuration leads to two serious problems with CrashPlan.

CrashPlan client not starting up

The first one is that while the daemon is running, the client doesn't start up and doesn't print anything out to the console.

You have to look in /usr/local/crashplan/log/ui_error.log to find the following error message:

Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
  Can't load library: /tmp/.cpswt/libswt-gtk-4234.so
  Can't load library: /tmp/.cpswt/libswt-gtk.so
  no swt-gtk-4234 in java.library.path
  no swt-gtk in java.library.path
  /tmp/.cpswt/libswt-gtk-4234.so: /tmp/.cpswt/libswt-gtk-4234.so: failed to map segment from shared object: Operation not permitted

  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
  at com.backup42.desktop.CPDesktop.<init>(CPDesktop.java:266)
  at com.backup42.desktop.CPDesktop.main(CPDesktop.java:200)

To fix this, you must tell the client to use a different directory, one that is executable and writable by users who need to use the GUI, by adding something like this to the GUI_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:

-Djava.io.tmpdir=/home/username/.crashplan-tmp
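
The directory needs to exist and be writable by your user before the client will start (adjust the path to your own setup):

mkdir -p /home/username/.crashplan-tmp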

Backup waiting forever

The second problem is that once you're able to start the client, backups are stuck at "waiting for backup" and you can see the following in /usr/local/crashplan/log/engine_error.log:

Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
  at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
  at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
  at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
  at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
  at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
  at com.code42.utils.AWorker.run(AWorker.java:158)
  at java.lang.Thread.run(Thread.java:744)

This time, you must tell the server to use a different directory, one that is executable and writable by the CrashPlan engine user (root on my machine), by adding something like this to the SRV_JAVA_OPTS variable of /usr/local/crashplan/bin/run.conf:

-Djava.io.tmpdir=/var/crashplan
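
As with the client fix, make sure the directory exists and is writable by the engine user (root in my case):

mkdir -p /var/crashplan
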
What's in a debian/ directory?

If you're looking to get started at packaging free software for Debian, you should start with the excellent New Maintainers' Guide or the Introduction to Debian Packaging on the Debian wiki.

Once you know the basics, or if you prefer to learn by example, you may be interested in the full walkthrough which follows. We will look at the contents of three simple packages.

node-libravatar

This package is a node.js library for the Libravatar service.

Version 2.0.0-3 of that package contains the following files in its debian/ directory:

  • changelog
  • compat
  • control
  • copyright
  • docs
  • node-libravatar.install
  • rules
  • source/format
  • watch

debian/control

Source: node-libravatar
Priority: extra
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4
Section: web
Homepage: https://github.com/fmarier/node-libravatar
Vcs-Git: git://git.debian.org/collab-maint/node-libravatar.git
Vcs-Browser: http://git.debian.org/?p=collab-maint/node-libravatar.git;a=summary

Package: node-libravatar
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, nodejs
Description: libravatar library for NodeJS
 This library allows web application authors to make use of the free Libravatar
 service (https://www.libravatar.org). This service hosts avatar images for
 users and allows other sites to look them up using email addresses.
 .
 node-libravatar includes full support for federated avatar servers.

This is probably the most important file since it contains the bulk of the metadata about this package.

Maintainer is a required field listing the maintainer of that package, which can be a person or a team. It only contains a single value though; any co-maintainers will be listed under the optional Uploaders field.

Build-Depends lists the packages which are needed to build the package (e.g. a compiler), as opposed to those which are needed to install the binary package (e.g. a library it uses).

Standards-Version refers to the version of the Debian Policy that this package complies with.

The Homepage field refers to the upstream homepage, whereas the Vcs-* fields point to the repository where the packaging is stored. If you take a look at the node-libravatar packaging repository you will see that it contains three branches:

  • upstream is the source as it was in the tarball downloaded from upstream.
  • master is the upstream branch along with all of the Debian customizations.
  • pristine-tar is unrelated to the other two branches and is used by the pristine-tar tool to reconstitute the original upstream tarball as needed.

After these fields comes a new section which starts with a Package field. This is the definition of a binary package, not to be confused with the Source field at the top of this file, which refers to the name of the source package. In this particular example, they are both the same and there is only one of each; however, this is not always the case, as we'll see later.

Inside that binary package definition lives the Architecture field, which is normally one of these two:

  • all for a binary package that will work on all architectures but only needs to be built once
  • any for a binary package that will work everywhere but that will need to be built separately for each architecture

Finally, the last field worth pointing out is the Depends field which lists all of the runtime dependencies that the binary package has. This is what will be pulled in by apt-get when you apt-get install node-libravatar. The two variables will be substituted later by debhelper.
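
Once the package has been built, you can see what those variables were replaced with by querying the resulting .deb directly:

dpkg-deb -f ../node-libravatar_2.0.0-3_all.deb Depends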

debian/changelog

node-libravatar (2.0.0-3) unstable; urgency=low

  * debian/watch: poll github directly
  * Bump Standards-Version up to 3.9.4

 -- Francois Marier <francois@debian.org>  Mon, 20 May 2013 12:07:49 +1200

node-libravatar (2.0.0-2) unstable; urgency=low

  * More precise license tag and upstream contact in debian/copyright

 -- Francois Marier <francois@debian.org>  Tue, 29 May 2012 22:51:03 +1200

node-libravatar (2.0.0-1) unstable; urgency=low

  * New upstream release
    - new non-backward-compatible API

 -- Francois Marier <francois@debian.org>  Mon, 07 May 2012 14:54:19 +1200

node-libravatar (1.1.1-1) unstable; urgency=low

  * Initial release (Closes: #661771)

 -- Francois Marier <francois@debian.org>  Fri, 02 Mar 2012 15:29:57 +1300

This may seem at first like a mundane file, but it is very important since it is the canonical source of the package version (2.0.0-3 in this case). This is the only place where you need to bump the package version when uploading a new package to the Debian archive.

The first line also includes the distribution where the package will be uploaded. It is usually one of these values:

  • unstable for the vast majority of uploads
  • stable for uploads that have been approved by the release maintainers and fix serious bugs in the stable version of Debian
  • stable-security for security fixes to the stable version of Debian that cannot wait until the next stable point release and have been approved by the security team

Packages uploaded to unstable will migrate automatically to testing provided that a few conditions are met (e.g. no release-critical bugs were introduced). The length of time before that migration is influenced by the urgency field (low, medium or high) in the changelog entry.

Another thing worth noting is that the first upload normally needs to close an ITP (Intent to Package) bug.

debian/rules

#!/usr/bin/make -f
# -*- makefile -*-

%:
    dh $@ 

override_dh_auto_test:

As can be gathered from the first two lines of this file, this is a Makefile. This is what controls how the package is built.

There's not much to see and that's because most of its content is automatically added by debhelper. So let's look at it in action by building the package:

$ git buildpackage -us -uc

and then looking at parts of the build log (../node-libravatar_2.0.0-3_amd64.build):

 fakeroot debian/rules clean
dh clean 
   dh_testdir
   dh_auto_clean
   dh_clean

One of the first things we see is the debian/rules file being run with the clean target. To find out what that does, have a look at the dh_auto_clean man page, which states that it will attempt to delete build residues and run something like make clean using the upstream Makefile.

 debian/rules build
dh build 
   dh_testdir
   dh_auto_configure
   dh_auto_build

Next we see the build target being invoked and looking at dh_auto_configure we see that this will essentially run ./configure and its equivalents.

The dh_auto_build helper script then takes care of running make (or equivalent) on the upstream code.

This should be familiar to anybody who has ever built a piece of free software from scratch and has encountered the usual method for building from source:

./configure
make
make install

Finally, we get to actually build the .deb:

 fakeroot debian/rules binary
dh binary 
   dh_testroot
   dh_prep
   dh_installdirs
   dh_auto_install
   dh_install
...
   dh_md5sums
   dh_builddeb
dpkg-deb: building package `node-libravatar' in `../node-libravatar_2.0.0-3_all.deb'.

Here we see a number of helpers, including dh_auto_install which takes care of running make install.

Going back to the debian/rules file, we notice that there is a manually defined target at the bottom of the file:

override_dh_auto_test:

which essentially disables dh_auto_test by replacing it with an empty set of commands.

The reason for this becomes clear when we take a look at the test target of the upstream Makefile and the dependencies it has: tap, a node.js library that is not yet available in Debian.

In other words, we can't run the test suite on the build machines so we need to disable it here.

debian/compat

9

This file simply specifies the version of debhelper that is required by the various helpers used in debian/rules. Version 9 is the latest at the moment.

debian/copyright

Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: node-libravatar
Upstream-Contact: Francois Marier <francois@libravatar.org>
Source: https://github.com/fmarier/node-libravatar

Files: *
Copyright: 2011 Francois Marier <francois@libravatar.org>
License: Expat

Files: debian/*
Copyright: 2012 Francois Marier <francois@debian.org>
License: Expat

License: Expat
 Permission is hereby granted, free of charge, to any person obtaining a copy of this
 software and associated documentation files (the "Software"), to deal in the Software
 without restriction, including without limitation the rights to use, copy, modify,
 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to the following
 conditions:
 .
 The above copyright notice and this permission notice shall be included in all copies
 or substantial portions of the Software.
 .
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
 CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
 OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This machine-readable file lists all of the different licenses encountered in this package.

It requires that the maintainer audit the upstream code for any copyright statements that might be present in addition to the license of the package as a whole.

debian/docs

README.md

This file contains a list of upstream files that will be copied into the /usr/share/doc/node-libravatar/ directory by dh_installdocs.

debian/node-libravatar.install

lib/*    usr/lib/nodejs/

The install file is used by dh_install to supplement the work done by dh_auto_install which, as we have seen earlier, essentially just runs make install on the upstream Makefile.

Looking at that upstream Makefile, it becomes clear that the files will need to be installed manually by the Debian package since that Makefile doesn't have an install target.

debian/watch

version=3
https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz

This is the file that allows Debian tools like the Package Tracking System to automatically detect that a new upstream version is available.

What it does is simply visit the upstream page which contains all of the release tarballs and look for links which have an href matching the above regular expression.

Running uscan --report --verbose will show us all of the tarballs that can be automatically discovered using this watch file:

-- Scanning for watchfiles in .
-- Found watchfile in ./debian
-- In debian/watch, processing watchfile line:
   https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
-- Found the following matching hrefs:
     /fmarier/node-libravatar/archive/node-libravatar-2.0.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.0.tar.gz
Newest version on remote site is 2.0.0, local version is 2.0.0
 => Package is up to date
-- Scan finished

pylibravatar

This second package is the equivalent Python library for the Libravatar service.

Version 1.6-2 of that package contains similar files in its debian/ directory, but let's look at two in particular:

  • control
  • upstream/signing-key.asc

debian/control

Source: pylibravatar
Section: python
Priority: optional
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9), python-all, python3-all
Standards-Version: 3.9.5
Homepage: https://launchpad.net/pyLibravatar
...

Package: python-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-dns, python
Description: Libravatar module for Python 2
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...

Package: python3-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-dns, python3
Description: Libravatar module for Python 3
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...

Here is an example of a source package (pylibravatar) which builds two separate binary packages: python-libravatar and python3-libravatar.

This highlights the fact that a given upstream source can be split into several binary packages in the archive when it makes sense. In this case, there is no point in Python 2 applications pulling in the Python 3 files, so the two separate packages make sense.

Another common example is the use of a -doc package to separate the documentation from the rest of a package so that it doesn't need to be installed on production servers for example.

debian/upstream/signing-key.asc

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBEpQYz4BEAC7REQD1za69RUnkt6nRCFhSJmmoeJc+yEiWTKc9GOIMAwJDme1
+CMYgVn4Xzf1VQYwD/lE+mfWgyeMomLQjDM1mxx/LOM2a1WWPOk9+PvQwKfRJy92
...
UxDtZm/4yUmU6KvHvOGiDCMuIiB+MqhqJJ5wf80wXhzu8nmC+fyGt6nvu0ggMle8
sAMgXt/aQUTZE5zNCQ==
=RkTO
-----END PGP PUBLIC KEY BLOCK-----

This is simply the OpenPGP key that the upstream developer uses to sign release tarballs.

Since PGP signatures are available on the upstream download page, it's possible to instruct uscan to check signatures before downloading tarballs.

The way to do that is to use the pgpsigurlmangle option in debian/watch:

version=3
opts=pgpsigurlmangle=s/$/.asc/ https://pypi.python.org/pypi/pyLibravatar https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-(.*)\.tar\.gz

which is simply a regular expression replacement string which takes the tarball URL and converts it to the URL of the matching PGP signature.
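
You can see what that mangle does by applying the same substitution with sed (it simply appends .asc to the tarball URL):

echo "https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-1.6.tar.gz" | sed 's/$/.asc/'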

fcheck

The last package we will look at is a file integrity checker. It essentially goes through all of the files in /usr/bin/ and /usr/lib/ and stores a hash of them in its database. When one of these files changes, you get an email.

In particular, we will look at the following files in the debian/ directory of version 2.7.59-18:

  • dirs
  • fcheck.cron.d
  • fcheck.postrm
  • fcheck.postinst
  • patches/
  • README.Debian
  • rules
  • source/format

debian/patches

This directory contains ten patches as well as a file called series which lists the patches that should be applied to the upstream source and in which order. Should you need to temporarily disable a patch, simply remove it from this file and it will no longer be applied at build time.
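
Under the hood, these patches are managed with quilt. Here's a quick sketch of applying and unapplying them by hand in an unpacked source tree:

export QUILT_PATCHES=debian/patches
quilt push -a    # apply every patch listed in debian/patches/series
quilt pop -a     # revert them all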

Let's have a look at patches/04_cfg_sha256.patch:

Description: Switch to sha256 hash algorithm
Forwarded: not needed
Author: Francois Marier <francois@debian.org>
Last-Update: 2009-03-15

--- a/fcheck.cfg
+++ b/fcheck.cfg
@@ -149,8 +149,7 @@ TimeZone        = EST5EDT
 #$Signature      = /usr/bin/sum
 #$Signature      = /usr/bin/cksum
 #$Signature      = /usr/bin/md5sum
-$Signature      = /bin/cksum
-
+$Signature      = /usr/bin/sha256sum


 # Include an optional configuration file.

This is a very simple patch which changes the default configuration of fcheck to promote the use of a stronger hash function. At the top of the file is a bunch of metadata in the DEP-3 format.

Why does this package contain so many customizations to the upstream code when Debian's policy is to push fixes upstream and work towards reducing the delta between upstream and Debian's code? The answer can be found in debian/control:

Homepage: http://web.archive.org/web/20050415074059/www.geocities.com/fcheck2000/

This package no longer has an upstream maintainer and its original source is gone. In other words, the Debian package is where all of the new bug fixes get done.

debian/source/format

3.0 (quilt)

This file contains what is called the source package format. What it basically says is that the patches found in debian/patches/ will be applied to the upstream source using the quilt tool at build time.

debian/fcheck.postrm

#!/bin/sh
# postrm script for fcheck
#
# see: dh_installdeb(1)

set -e

# summary of how this script can be called:
#        * <postrm> `remove'
#        * <postrm> `purge'
#        * <old-postrm> `upgrade' <new-version>
#        * <new-postrm> `failed-upgrade' <old-version>
#        * <new-postrm> `abort-install'
#        * <new-postrm> `abort-install' <old-version>
#        * <new-postrm> `abort-upgrade' <old-version>
#        * <disappearer's-postrm> `disappear' <overwriter>
#          <overwriter-version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package


case "$1" in
    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
    ;;

    purge)
      if [ -e /var/lib/fcheck/fcheck.dbf ]; then
        echo "Purging old database file ..."
        rm -f /var/lib/fcheck/fcheck.dbf
      fi
      rm -rf /var/lib/fcheck
      rm -rf /var/log/fcheck
      rm -rf /etc/fcheck
    ;;

    *)
        echo "postrm called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.

#DEBHELPER#

exit 0

This script is one of the many possible maintainer scripts that a package can provide if needed.

This particular one, as the name suggests, will be run after the package is removed (apt-get remove fcheck) or purged (apt-get remove --purge fcheck). Looking at the case statement above, it doesn't do anything extra in the remove case, but it deletes a few files and directories when called with the purge argument.

debian/README.Debian

This optional README file contains Debian-specific instructions that might be useful to users. It supplements the upstream README which is often more generic and cannot assume a particular system configuration.

debian/rules

#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

build-arch:
build-indep:
build: build-stamp

build-stamp:
    dh_testdir
    pod2man --section=8 $(CURDIR)/debian/fcheck.pod > $(CURDIR)/fcheck.8
    touch build-stamp

clean:
    dh_testdir
    dh_testroot
    rm -f build-stamp 
    rm -f $(CURDIR)/fcheck.8
    dh_clean

install: build
    dh_testdir
    dh_testroot
    dh_prep
    dh_installdirs
    cp $(CURDIR)/fcheck $(CURDIR)/debian/fcheck/usr/sbin/fcheck
    cp $(CURDIR)/fcheck.cfg $(CURDIR)/debian/fcheck/etc/fcheck/fcheck.cfg

# Build architecture-dependent files here.
binary-arch: build install

# Build architecture-independent files here.
binary-indep: build install
    dh_testdir
    dh_testroot
    dh_installdocs
    dh_installcron
    dh_installman fcheck.8
    dh_installchangelogs
    dh_installexamples
    dh_installlogcheck
    dh_link
    dh_strip
    dh_compress
    dh_fixperms
    dh_installdeb
    dh_shlibdeps
    dh_gencontrol
    dh_md5sums
    dh_builddeb

binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install

This is an example of an old-style debian/rules file, which you still encounter in packages that haven't yet upgraded to the newer debhelper 9 style, as shown by the contents of debian/compat:

8

It does essentially the same thing as what we saw in the build log, but in a more verbose way.

debian/dirs

usr/sbin
etc/fcheck

This file contains a list of directories that dh_installdirs will create in the build directory.

The reason why these directories need to be created is that files are copied into these directories in the install target of the debian/rules file.

Note that this is different from directories which are created at the time of installation of the package. In that case, the directory (e.g. /var/log/fcheck/) must be created in the postinst script and removed in the postrm script.
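
For example, a minimal postinst fragment creating such a directory might look like this (a sketch, not the actual fcheck.postinst):

#!/bin/sh
set -e

case "$1" in
    configure)
        # created at installation time, removed in the purge branch of postrm
        mkdir -p /var/log/fcheck
    ;;
esac

#DEBHELPER#

exit 0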

debian/fcheck.cron.d

#
# Regular cron job for the fcheck package
#
30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3 /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1; then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f /var/run/fcheck.out

This file is the cron job which drives the checks performed by this package. It will be copied to /etc/cron.d/fcheck by dh_installcron.

Settings v. Prefs in Gaia Development

Jed and I got confused the other day when trying to add hidden prefs for a small Firefox OS application. We wanted to make a few advanced options configurable via preferences (like those found in about:config in Firefox) but couldn't figure out why it wasn't possible to access them from within our certified application.

The answer is that settings and prefs are entirely different things in FxOS land.

Preferences

This is how you set prefs in Gaia:

pref("devtools.debugger.forbid-certified-apps", false);
pref("dom.inter-app-communication-api.enabled", true);

from build/config/custom-prefs.js.

These will be used by the Gecko layer like this:

if (!Preferences::GetBool("dom.inter-app-communication-api.enabled", false)) {
  return false;
}

from within C++ code, and like this:

let restrictPrivileges = Services.prefs.getBoolPref("devtools.debugger.forbid-certified-apps");

from JavaScript code.

Preferences can be strings, integers or booleans.

Settings

Settings, on the other hand, are JSON objects which can be set like this:

"alarm.enabled": false,

in build/config/common-settings.json and can then be read like this:

var req = navigator.mozSettings.createLock().get('alarm.enabled');
req.onsuccess = function() {
  marionetteScriptFinished(req.result['alarm.enabled']);
};

as long as you have the following in your application manifest:

"permissions": {
  ...
  "settings":{ "access": "readwrite" },
  ...
}

In other words, if you set something in build/config/custom-prefs.js, don't expect to be able to read it using navigator.mozSettings or the SettingsHelper!

Using vnc to do remote tech support over high-latency networks

If you ever find yourself doing a bit of technical support for relatives over the phone, there's nothing like actually seeing what they are doing on their computer. One of the best tools for such remote desktop sharing is vnc.

Here's the best setup I have come up with so far. If you have any suggestions, please leave a comment!

Basic vnc configuration

First off, you need two things: a vnc server on your relative's machine and a vnc client on yours. Thanks to vnc being an open protocol, there are many choices for both.

I eventually settled on x11vnc for the server and ssvnc for the client. They are both available in the standard Debian and Ubuntu repositories.

Since I have ssh access on the machine that needs to run the server, I simply log in and then run x11vnc. Here's what ~/.x11vncrc contains:

noxdamage

That option appears to be necessary when the desktop to share is running gnome-shell / compiz.

Afterwards, I start the client on my laptop with the following command:

ssvncviewer -encodings zrle -scale 1280x775 localhost

The scaling factor is simply the resolution of the client minus any window decorations.

ssh configuration

As you can see above, the client is not connecting directly to the server. Instead, it's connecting to its own local vnc port (localhost:5900). That's because I'm tunnelling the traffic through the ssh connection in order to avoid relying on vnc extensions for authentication and encryption.

Here's what the client's ~/.ssh/config needs for that simple use case:

Host server.example.com
  LocalForward 5900 127.0.0.1:5900

If the remote host (which has an internal IP address of 192.168.1.2 in this example) is not connected directly to the outside world and instead goes through a gateway, then your ~/.ssh/config will look like this:

Host gateway.example.com
  ForwardAgent yes
  LocalForward 5900 192.168.1.2:5900

Host server.example.com
  ProxyCommand ssh -q -a gateway.example.com nc -q0 %h 22

and the remote host will need to open up a port on its firewall for the gateway (internal IP address of 192.168.1.1 here):

iptables -A INPUT -p tcp --dport 5900 -s 192.168.1.1/32 -j ACCEPT

Optimizing for high-latency networks

Since I do most of my tech support over a very high latency network, I tweaked the default vnc settings to reduce the amount of network traffic.

I added this to ~/.x11vncrc on the vnc server:

ncache 10
ncache_cr

and changed the client command line to this:

ssvncviewer -compresslevel 9 -quality 3 -bgr233 -encodings zrle -use64 -scale 1280x775 -ycrop 1024 localhost

This decreases image quality (and required bandwidth) and enables client-side caching.

The magic 1024 number is simply the full vertical resolution of the remote machine, which sports a vintage 1280x1024 LCD monitor.

Hardening ssh Servers

Basic configuration

There are a few basic things that most admins will already know (and that tiger will warn you about if you forget):

  • only allow version 2 of the protocol
  • disable root logins
  • disable password authentication

This is what /etc/ssh/sshd_config should contain:

Protocol 2
PasswordAuthentication no
PermitRootLogin no
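
After changing this file, validate it and restart sshd; a typo here can lock you out of a remote machine:

/usr/sbin/sshd -t && service ssh restart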

Whitelist approach to giving users ssh access

To ensure that only a few users have ssh access to the server and that newly created users don't have it enabled by default, create a new group:

addgroup sshuser

and then add the relevant users to it:

adduser francois sshuser

Finally, add this to /etc/ssh/sshd_config:

AllowGroups sshuser
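
Before restarting sshd, double-check that your own account made it into the group, or you will lock yourself out:

groups francois    # should include "sshuser"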

Deterring brute-force (or dictionary) attacks

One way to ban attackers who try to brute-force your ssh server is to install the fail2ban package. It keeps an eye on the ssh log file (/var/log/auth.log) and temporarily blocks IP addresses after a number of failed login attempts.

Another approach is to hide the ssh service using Single-Packet Authentication. I have fwknop installed on some of my servers and use small wrapper scripts to connect to them.

Using restricted shells

For those users who only need an ssh account on the server in order to transfer files (using scp or rsync), it's a good idea to set their shell (via chsh) to a restricted one like rssh.

Should they attempt to log into the server, these users will be greeted with the following error message:

This account is restricted by rssh.
Allowed commands: rsync 

If you believe this is in error, please contact your system administrator.

Connection to server.example.com closed.

Restricting authorized keys to certain IP addresses

In addition to listing all of the public keys that are allowed to log into a user account, the ~/.ssh/authorized_keys file also allows (as the man page points out) a user to impose a number of restrictions.

Perhaps the most useful option is from, which allows a user to restrict the IP addresses that can log in using a specific key.

Here's what one of my authorized_keys looks like:

from="192.0.2.2" ssh-rsa AAAAB3Nz...zvCn bot@example

You may also want to include the following options in each entry: no-X11-forwarding, no-user-rc, no-pty, no-agent-forwarding and no-port-forwarding.
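
Combined with the IP restriction, such an entry would look like this (options are comma-separated; the key is shortened as above):

from="192.0.2.2",no-X11-forwarding,no-user-rc,no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAAB3Nz...zvCn bot@example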

Increasing the amount of logging

The first thing I'd recommend is to increase the level of verbosity in /etc/ssh/sshd_config:

LogLevel VERBOSE

which will, amongst other things, log the fingerprints of keys used to login:

sshd: Connection from 192.0.2.2 port 39671
sshd: Found matching RSA key: de:ad:be:ef:ca:fe
sshd: Postponed publickey for francois from 192.0.2.2 port 39671 ssh2 [preauth]
sshd: Accepted publickey for francois from 192.0.2.2 port 39671 ssh2 

Secondly, if you run logcheck and would like to whitelist the "Accepted publickey" messages on your server, you'll have to start by deleting the first line of /etc/logcheck/ignore.d.server/sshd. Then you can add an entry for all of the usernames and IP addresses that you expect to see.

Finally, it is also possible to log all commands issued by a specific user over ssh by enabling the pam_tty_audit module in /etc/pam.d/sshd:

session required pam_tty_audit.so enable=francois

However this module is not included in wheezy and has only recently been re-added to Debian.

Identifying stolen keys

One thing I'd love to have is a way to identify the use of a stolen key. Given the IP restrictions described above, if a key is stolen and used from a different IP, I will see something like this in /var/log/auth.log:

sshd: Connection from 198.51.100.10 port 39492
sshd: Authentication tried for francois with correct key but not from a permitted host (host=198.51.100.10, ip=198.51.100.10).
sshd: Failed publickey for francois from 198.51.100.10 port 39492 ssh2
sshd: Connection closed by 198.51.100.10 [preauth]

So I can get the IP address of the attacker (likely to be a random VPS or a Tor exit node), but unfortunately, the key fingerprints don't appear for failed connections like they do for successful ones. So I don't know which key to revoke.

Is there any way to identify which key was used in a failed login attempt or is the solution to only ever have a single public key in each authorized_keys file and create a separate user account for each user?

Running your own XMPP server on Debian or Ubuntu

In order to get closer to my goal of reducing my dependence on centralized services, I decided to set up my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website and here's how I put everything together.

DNS and SSL

My personal domain is fmarier.org and so I created the following DNS records:

jabber-gw            CNAME    fmarier.org.
_xmpp-client._tcp    SRV      5 0 5222 jabber-gw.fmarier.org.
_xmpp-server._tcp    SRV      5 0 5269 jabber-gw.fmarier.org.
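
You can verify that these records resolve correctly using dig:

dig +short SRV _xmpp-client._tcp.fmarier.org
# 5 0 5222 jabber-gw.fmarier.org.
dig +short SRV _xmpp-server._tcp.fmarier.org
# 5 0 5269 jabber-gw.fmarier.org.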

Then I went to get a free XMPP SSL certificate for jabber-gw.fmarier.org from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:

openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"

I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:

cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem

ejabberd installation

Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki with an additional customization to solve the Pidgin "Not authorized" connection problems.

  1. Install the package, using "admin" as the username for the administrative user:

    apt-get install ejabberd
    
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):

    {acl, admin, {user, "admin", "fmarier.org"}}.
    {hosts, ["fmarier.org"]}.
    {fqdn, "jabber-gw.fmarier.org"}.
    
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:

    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
    
  4. Improve the client-to-server TLS configuration by adding starttls_required to this block:

    {listen,
      [
        {5222, ejabberd_c2s, [
          {access, c2s},
          {shaper, c2s_shaper},
          {max_stanza_size, 65536},
          starttls,
          starttls_required,
          {certfile, "/etc/ejabberd/ejabberd.pem"}
        ]},
    
  5. Restart the ejabberd daemon:

    /etc/init.d/ejabberd restart
    
  6. Create a new user account for yourself:

    ejabberdctl register me fmarier.org P@ssw0rd1!
    
  7. Open up the following ports on the server's firewall:

    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
    

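To confirm that the daemon is up and reachable, ejabberdctl provides a status command:

ejabberdctl status
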
Client setup

On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:

  • Protocol: XMPP
  • Username: me
  • Domain: fmarier.org
  • Password: P@ssw0rd1!

and the following setting in the "Advanced" tab:

  • Connection security: Require encryption

From this, I was able to connect to the server without clicking through any certificate warnings.

Testing

If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message.

In this example, the XMPP address I give to my friends is me@fmarier.org.

Finally, to ensure that your TLS settings are reasonable, use this automated tool to test both the client-to-server (c2s) and the server-to-server (s2s) flows.