Dynamic DNS on your own domain

I recently moved my dynamic DNS hostnames from my previous provider (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control, in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain.

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate a single sub-domain.

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate everything under the dyn sub-domain to No-IP:

dyn NS <No-IP nameserver 1>
dyn NS <No-IP nameserver 2>
dyn NS <No-IP nameserver 3>
dyn NS <No-IP nameserver 4>
dyn NS <No-IP nameserver 5>

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

use=web, web=<IP lookup URL>, web-skip='IP Address'

and edited /etc/default/ddclient so that ddclient runs as a daemon.
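
On Debian, that file controls whether ddclient runs as a daemon and how often it wakes up to check for a new IP address. Here is a minimal sketch, assuming the stock variable names shipped by the Debian package (the interval value is only illustrative):

run_daemon="true"
daemon_interval="3600"
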
Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocation made within 5 minutes of a previous one), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short machine.dyn.<your domain>
Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for the bare domain, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain.
  • Redirect anything else to the www site.

The reason for this is that the main Libravatar service listens on the www hostname, not on the bare domain, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ https://<www hostname>/$1 [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.
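
To check that both behaviours are in place, you can drop a test file into that directory and hit the vhost with curl (the domain below is just a placeholder): the test file should be served as-is, while any other URL should come back as a 301 redirect to the www site.

mkdir -p /var/www/acme/.well-known/acme-challenge
echo probe > /var/www/acme/.well-known/acme-challenge/probe
curl -i http://example.com/.well-known/acme-challenge/probe
curl -i http://example.com/some/other/page
rm /var/www/acme/.well-known/acme-challenge/probe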

Here are the relevant portions of the renewal configuration in /etc/letsencrypt/renewal/:

authenticator = webroot
account = 

[[webroot_map]]
<bare domain> = /var/www/acme
<www hostname> = /var/www/acme
LXC setup on Debian stretch

Here's how to set up LXC-based "chroots" on Debian stretch. While I wrote about this for Debian jessie, I had to make some networking changes for stretch, so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

That configuration requires that the veth kernel module be loaded. If you have any kind of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
  2. and then applying it using:

    sysctl -p
  3. Restart the network bridge:

    systemctl restart lxc-net.service
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
  5. and applying the rules using:

    iptables-restore < /etc/network/iptables.up.rules
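
To sanity-check the networking pieces before creating any containers, a couple of read-only commands are enough: forwarding should report 1 and the lxcbr0 bridge should be up with an address.

sysctl net.ipv4.ip_forward
ip addr show lxcbr0
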
Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=<your Debian mirror> lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can log in as root:

sudo lxc-attach -n sid64 passwd

Then log in as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy
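
Once you know the address, you can connect as root using the key you just added (the address below is only an example from the default lxc-net range):

ssh root@10.0.3.65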

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64
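
To confirm that the bind mount is in place, you can run a quick command inside the container from the host (lxc-attach executes a command within the running container):

sudo lxc-attach -n sid64 -- ls /home/francois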

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shut down AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start it back up once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1
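
If the container still refuses to start, the host's kernel log usually shows which AppArmor denial is responsible; for example:

dmesg | grep -i apparmor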
Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed, these channels are not entirely unlicensed: they are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different channel-width and transmit-power limits, so that's something else to consider if you want to use 40 or 80 MHz-wide channels.

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time I ssh'd into my router and looked at the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

option country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
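
For reference, the resulting radio section ends up looking something like this; only the country line is the change described above, and the device name, channel, and mode values are purely illustrative:

config wifi-device 'radio0'
        option type 'mac80211'
        option hwmode '11a'
        option htmode 'VHT80'
        option channel '100'
        option country 'CA'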

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

Proxy ACME challenges to a single machine

The Libravatar mirrors are set up using DNS round-robin, which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to be able to use Certbot's webroot plugin, I need to be able to simultaneously host a randomly-named file in the webroot of each mirror. The reason is that the verifier will connect to the mirror hostname, but there's no way to know which of the round-robin DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new acme hostname pointing to the main Libravatar server:

acme  CNAME  <main Libravatar server>

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by putting the following in the existing port 443 vhost config (/etc/apache2/sites-available/libravatar-seccdn.conf):

<VirtualHost *:80>
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/ http://<acme hostname>/.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/ http://<acme hostname>/.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

#!/bin/sh
cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true
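
Since run-parts only executes scripts that are marked executable, the cronjob also needs:

chmod +x /etc/cron.daily/commit-new-seccdn-cert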

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>

before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d <your domain>
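
To make sure that the automatic renewals will also work with this setup, certbot's dry-run mode is a handy check:

certbot renew --dry-run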

The resulting certificate will then be automatically renewed before it expires.

Test mail server on Ubuntu and Debian

I wanted to setup a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server using production data.

First, install the postfix mail server:

apt install postfix

and choose the "Local only" mail server configuration type.

Then change the following in /etc/postfix/main.cf:

default_transport = error

to:

default_transport = local:root

and restart postfix:

systemctl restart postfix.service

Once that's done, you can find all of the emails in /var/mail/root.
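
To verify that outgoing mail really is being trapped locally, you can send a test message through postfix's sendmail interface (the recipient address is just an example) and confirm that it ends up in that mailbox:

printf 'Subject: staging test\n\nHello from the staging server.\n' | /usr/sbin/sendmail someone@example.com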

To read them, you can install mutt:

apt install mutt

and then view the mailbox like this:

mutt -f /var/mail/root
Checking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to see whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.

TLS Authentication on Freenode and OFTC

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to setup both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'
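
If you want to double-check what you just generated, you can also print the certificate's subject and validity period (a generic openssl invocation, not something the IRC networks require):

openssl x509 -noout -subject -dates -in user.pem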

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD <your fingerprint>

in order to add your fingerprint to the access control list.

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:

<Network oftc>
        LoadModule = cert
</Network>
<Network freenode>
        LoadModule = cert
</Network>

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

  address = "";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";

and the Freenode one to look like this:

  address = "";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";

That's it. That's all you need to replace password authentication with a much stronger alternative.

pristine-tar and git-buildpackage Work-arounds

I recently ran into problems trying to package the latest version of my planetfilter tool.

This is how I was able to temporarily work-around bugs in my tools and still produce a package that can be built reproducibly from source and that contains a verifiable upstream signature.

pristine-tar is unable to reproduce a tarball

After importing the latest upstream tarball using gbp import-orig, I tried to build the package but ran into this pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/ line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

So I decided to throw away what I had, re-import the tarball and try again. This time, I got a different pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/ line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

I filed bug 871938 for this.

As a work-around, I simply symlinked the upstream tarball I already had and then built the package using the tarball directly instead of the upstream git branch:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
gbp buildpackage --git-tarball-dir=..

Given that only the upstream and master branches are signed, the .delta file on the pristine-tar branch could be fixed at any time in the future by committing a new .delta file once pristine-tar gets fixed. This therefore seems like a reasonable work-around.

git-buildpackage doesn't import the upstream tarball signature

The second problem I ran into was a missing upstream signature after building the package with git-buildpackage:

$ lintian -i planetfilter_0.7.4-1_amd64.changes
E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
N:    The packaging includes an upstream signing key but the corresponding
N:    .asc signature for one or more source tarballs are not included in your
N:    .changes file.
N:    Severity: important, Certainty: certain
N:    Check: changes-file, Type: changes

This problem (and, I suspect, the lintian error) is fairly new and hasn't been solved yet.

So until gbp import-orig gets proper support for upstream signatures, my work-around was to copy the upstream signature in the export-dir output directory (which I set in ~/.gbp.conf) so that it can be picked up by the final stages of gbp buildpackage:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc

If there's a better way to do this, please feel free to leave a comment (authentication not required)!

Time Synchronization with NTP and systemd

I recently ran into problems with generating TOTP 2-factor codes on my laptop. The fact that some of the codes would work and some wouldn't suggested a problem with time keeping on my laptop.

This was surprising since I've been running NTP for many years and had therefore never had to think about time synchronization. After realizing that ntpd had stopped working on my machine for some reason, I found that systemd provides an easier way to keep time synchronized.

The new systemd time synchronization daemon

On a machine running systemd, there is no need to run the full-fledged ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job just fine.

However, I noticed that the daemon wasn't actually running:

$ systemctl status systemd-timesyncd.service 
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
   Active: inactive (dead)
Condition: start condition failed at Thu 2017-08-03 21:48:13 PDT; 1 day 20h ago
     Docs: man:systemd-timesyncd.service(8)

The status output doesn't give many details, referring instead to a mysterious failed "start condition". Attempting to restart the service did provide more details though:

$ systemctl restart systemd-timesyncd.service 
$ systemctl status systemd-timesyncd.service 
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
   Active: inactive (dead)
Condition: start condition failed at Sat 2017-08-05 18:19:12 PDT; 1s ago
           └─ ConditionFileIsExecutable=!/usr/sbin/ntpd was not met
     Docs: man:systemd-timesyncd.service(8)

The above check for the presence of /usr/sbin/ntpd points to a conflict between ntpd and systemd-timesyncd. The solution of course is to remove the former before enabling the latter:

apt purge ntp

Enabling time synchronization with NTP

Once the ntp package has been removed, it is time to enable NTP support in timesyncd.

Start by choosing the NTP server pool nearest you and put it in /etc/systemd/timesyncd.conf. For example, mine reads like this:

[Time]
NTP=ca.pool.ntp.org

before restarting the daemon:

systemctl restart systemd-timesyncd.service 

That may not be enough on your machine though. To check whether or not the time has been synchronized with NTP servers, run the following:

$ timedatectl status
 Network time on: yes
NTP synchronized: no
 RTC in local TZ: no

If NTP is not enabled, then you can enable it by running this command:

timedatectl set-ntp true

Once that's done, everything should be in place and time should be kept correctly:

$ timedatectl status
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no