Mercurial commit series in Phabricator using Arcanist

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've setup arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI:

Included changes:
  M       modules/libpref/init/all.js
  M       netwerk/base/nsChannelClassifier.cpp
  M       netwerk/base/nsChannelClassifier.h
  M       toolkit/components/url-classifier/Classifier.cpp
  M       toolkit/components/url-classifier/SafeBrowsing.jsm
  M       toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       xpcom/base/

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI:

Included changes:
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI:

Included changes:
  M       toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
  M       toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html
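
Rather than typing the same two commands for every commit, the whole series can be submitted with a small shell loop. This is just a sketch: the hashes are the ones from the histedit listing above and need to be replaced with your own.

```shell
# Create one Differential revision per commit, oldest first.
# Replace the hashes with the ones from your own "hg histedit" listing.
for rev in ee4d9e9fcbad 5509b5db01a4 e40312debf76; do
    hg update "$rev"
    arc diff --no-amend
done
```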

Link all revisions together

In order to ensure that these commits depend on one another, click on that last link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason the third patch has a different commit hash is that its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
        Revision URI:

Included changes:
  M       browser/base/content/test/general/trackingPage.html
  M       netwerk/test/unit/test_trackingProtection_annotateChannels.js
  M       toolkit/components/antitracking/test/browser/browser_imageCache.js
  M       toolkit/components/antitracking/test/browser/browser_subResources.js
  M       toolkit/components/antitracking/test/browser/head.js
  M       toolkit/components/antitracking/test/browser/popup.html
  M       toolkit/components/antitracking/test/browser/tracker.js
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

Recovering from a botched hg histedit on a Mercurial bookmark

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to resort to hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.

However, if you go ahead and finish the history rewriting and only notice problems later, it's not as straightforward. With git, I'd look in the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine. Then I could collect all of the commits using the browser's back button.

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do hg export $c > ~/$c.patch ; done
hg up ff8505d177b9
hg bookmarks hashstore-crash-1434206-new
for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do hg import ~/$c.patch ; done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

hg up hashstore-crash-1434206
hg bookmarks hashstore-crash-1434206-copy
hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way if worst comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.
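
For example, assuming a second machine reachable over ssh with a repository at ssh://backuphost/repos/mozilla-unified (both hypothetical), the bookmark and its commits can be backed up with:

```shell
# Push just this bookmark (and the changesets it needs) to a backup repo.
# "backuphost" and the repository path are made-up; use your own.
hg push -B hashstore-crash-1434206 ssh://backuphost/repos/mozilla-unified
```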

Mysterious 'everybody is busy/congested at this time' error in Asterisk

I was trying to figure out why I was getting a BUSY signal from Asterisk while trying to ring a SIP phone even though that phone was not in use.

My Asterisk setup looks like this:

phone 1 <--SIP--> asterisk 1 <==IAX2==> asterisk 2 <--SIP--> phone 2

While I couldn't call SIP phone #2 from SIP phone #1, the reverse was working fine (ringing #1 from #2). So it's not a network/firewall problem. The two SIP phones can talk to one another through their respective Asterisk servers.

This is the error message I could see on the second Asterisk server:

$ asterisk -r
  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5
    -- Called SIP/12345
    -- SIP/12345-00000002 redirecting info has changed, passing it to IAX2/iaxuser-6347
    -- SIP/12345-00000002 is busy
  == Everyone is busy/congested at this time (1:1/0/0)
    -- Executing [12345@local:2] Goto("IAX2/iaxuser-6347", "in12345-BUSY,1") in new stack
    -- Goto (local,in12345-BUSY,1)
    -- Executing [in12345-BUSY@local:1] Hangup("IAX2/iaxuser-6347", "17") in new stack
  == Spawn extension (local, in12345-BUSY, 1) exited non-zero on 'IAX2/iaxuser-6347'
    -- Hungup 'IAX2/iaxuser-6347'


  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • iaxuser is the user account on server #2 that server #1 uses
  • local is the context for incoming IAX calls on server #1

This Everyone is busy/congested at this time (1:1/0/0) was surprising since looking at each SIP channel on that server showed nobody as busy:

asterisk2*CLI> sip show inuse
* Peer name               In use          Limit           
12345                     0/0/0           2               

So I enabled the raw SIP debug output and got the following (edited for clarity):

asterisk2*CLI> sip set debug on
SIP Debugging enabled

  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5

INVITE sip:12345@;line=m2vlbuoc SIP/2.0
Via: SIP/2.0/UDP
From: "Francois Marier" <sip:67890@>
To: <sip:12345@;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: Asterisk PBX
Contact: <sip:67890@>
Content-Length: 274

    -- Called SIP/12345

<--- SIP read from UDP: --->
SIP/2.0 100 Trying
Via: SIP/2.0/UDP
From: "Francois Marier" <sip:67890@>
To: <sip:12345@;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@;line=m2vlbuoc>
Content-Length: 0

--- (9 headers 0 lines) ---

<--- SIP read from UDP: --->
SIP/2.0 480 Do Not Disturb
Via: SIP/2.0/UDP
From: "Francois Marier" <sip:67890@>
To: <sip:12345@;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@;line=m2vlbuoc>
Content-Length: 0


  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • 67890 is the extension of SIP phone #1 on Asterisk server #2
  • is the IP address of SIP phone #2
  • is the IP address of Asterisk server #2

From there, I can see that SIP phone #2 is returning a status of 480 Do Not Disturb. That's what the problem was: the phone itself was in DnD mode and set to reject all incoming calls.

Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a vnc connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt
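
Both steps can also be combined into a single command, assuming the same mythtv user and machine name as above:

```shell
# Run mythtv-setup remotely with X forwarding and the qt theme painter.
ssh -X mythtv@machine mythtv-setup -O ThemePainter=qt
```
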
Using a Kenwood TH-D72A with Pat on Linux and ax25

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux, using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb

Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps

along with the systemd script that comes with Pat:



Once the packages are installed, it's time to configure everything correctly:

  1. Power cycle the radio.
  2. Enable TNC in packet12 mode (band A*).
  3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
  4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

     wl2k    CALLSIGN    9600    128    4    Winlink
  5. Set HBAUD to 1200 in /etc/default/ax25.

  6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

     gcc -o tmd710_tncsetup tmd710_tncsetup.c
  7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

     tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
  8. Start ax25 driver:

     systemctl start ax25.service
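
Putting steps 5 to 7 together, the relevant part of /etc/default/ax25 might look something like this. This is only a sketch: the serial device path and the location of the compiled tmd710_tncsetup binary are assumptions that depend on your system.

```shell
# Fragment of /etc/default/ax25 (sketch; paths are assumptions)
DEV=/dev/ttyUSB0   # serial device for the radio's TNC; check dmesg for yours
HBAUD=1200         # over-the-air speed, matching packet12 mode
# Initialize the built-in TNC on band A (-B 0) with software flow control (-s):
/usr/local/bin/tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
```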

Connecting to a winlink gateway

To monitor what is being received and transmitted:

axlisten -cart

Then create aliases like these in ~/.wl2k/config.json:

  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  }

and use them to connect to your preferred Winlink gateways.
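
With those aliases in place, connecting to one of the gateways is a single command (this assumes the alias names defined above):

```shell
# Open a Winlink session with the VA7EOC gateway via the ax25 alias.
pat connect ax25-VA7EOC
```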


If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

  • HBAUD in /etc/default/ax25 should be set to 1200
  • line speed in /etc/ax25/axports should be set to 9600
  • SERIAL_SPEED in tmd710_tncsetup should be set to 9600
  • radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).

Looking back on starting Libravatar

Update (2018-07-31): Libravatar is not going away

As noted on the official Libravatar blog, I will be shutting the service down on 2018-09-01.

It has been an incredible journey but Libravatar has been more-or-less in maintenance mode for 5 years, so it's somewhat outdated in its technological stack and I no longer have much interest in doing the work that's required every two years when migrating to a new version of Debian/Django. The free software community prides itself on transparency and so while it is a difficult decision to make, it's time to be upfront with the users who depend on the project and admit that the project is not sustainable in its current form.

Many things worked well

The most motivating aspect of running Libravatar has been the steady organic growth within the FOSS community, both in terms of traffic (in March 2018, we served a total of 5 GB of images and 12 GB of 302 redirects to Gravatar) and integration with other sites and projects (Fedora, Debian, Mozilla, Linux kernel, GitLab, Liberapay and many others), but also in terms of users:

In addition, I wanted to validate that it is possible to run a FOSS service without having to pay for anything out-of-pocket, so that it would be financially sustainable. Hosting and domain registrations have been entirely funded by the community, thanks to the generosity of sponsors and donors. Most of the donations came through Gittip/Gratipay and Liberapay. While Gratipay has now shut down, I encourage you to support Liberapay.

Finally, I made an effort to host Libravatar on FOSS infrastructure. That meant shying away from popular proprietary services in order to make a point that these convenient and well-known services aren't actually needed to run a successful project.

A few things didn't pan out

On the other hand, there were also a few disappointments.

A lot of the libraries and plugins never implemented DNS federation. That was the key part of the protocol that made Libravatar a decentralized service but unfortunately the rest of the protocol was much easier to implement and therefore many clients stopped there.

In addition, it turns out that while the DNS system is essentially a federated caching system for IP addresses, many DNS resolvers aren't doing a good job caching records and that created unnecessary latency for clients that chose to support DNS federation.

The main disappointment was that very few people stepped up to run mirrors. I designed the service so that it could scale easily in the same way that Linux distributions have coped with increasing user bases: "ftp" mirrors. By making the actual serving of images only require Apache and mod_rewrite, I had hoped that anybody running Apache would be able to add an extra vhost to their setup and start serving our static files. A few people did sign up for this over the years, but it mostly didn't work. Right now, there are no third-party mirrors online.

The other aspect that was a little disappointing was the lack of code contributions. There were a handful from friends in the first couple of months, but it's otherwise been a one-man project. I suppose that when a service works well for what people use it for, there are fewer opportunities for contributions (or less desire for them). The fact that the dev environment setup was not the easiest could definitely be a contributing factor, but I've only ever had a single person ask about it so it's not clear that this was the limiting factor. Also, while our source code repository was hosted on GitHub and open for pull requests, we never received a single drive-by contribution, hinting at the fact that GitHub is not the magic bullet for community contributions that many people think it is.

Finally, it turns out that it is harder to delegate sysadmin work (you need root, for one thing), which consumes the majority of the time in a mature project. The general administration and maintenance of Libravatar has never moved beyond its core team of one. I don't have a lot of ideas here, but I do want to join others who have flagged this as an area for "future work" in terms of project sustainability.

Personal goals

While I was originally inspired by Evan Prodromou's vision of a suite of FOSS services to replace the proprietary stack that everybody relies on, starting a free software project is an inherently personal endeavour: the shape of the project will be influenced by the personal goals of the founder.

When I started the project in 2011, I had a few goals:

This project personally taught me a lot of different technologies and allowed me to try out various web development techniques I wanted to explore at the time. That was intentional: I chose my technologies so that even if the project was a complete failure, I would still have gotten something out of it.

A few things I've learned

I learned many things along the way, but here are a few that might be useful to other people starting a new free software project:

  • Speak about your new project at every user group you can. It's important to validate that you can get other people excited about your project. User groups are a great (and cheap) way to kickstart your word of mouth marketing.

  • When speaking about your project, ask simple things of the attendees (e.g. create an account today, join the IRC channel). Often people want to support you but they can't commit to big tasks. Make sure to take advantage of all of the support you can get, especially early on.

  • Having your friends join (or lurk on!) an IRC channel means it's vibrant, instead of empty, and there are people around to field simple questions or tell people to wait until you're around. Nobody wants to be alone in a channel with a stranger.

Thank you

I do want to sincerely thank all of the people who contributed to the project over the years:

  • Jonathan Harker and Brett Wilkins for productive hack sessions in the Catalyst office.
  • Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the service.
  • Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for running mirrors on their servers.
  • Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and maintaining client libraries.
  • The Wellington Perl Mongers for their invaluable feedback on an early prototype.
  • The #equifoss group for their ongoing support and numerous ideas.
  • Nigel Babu and Melissa Draper for producing the first (and only) project stickers, as well as Chris Cormack for spreading them so effectively.
  • Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for making Libravatar speak so many languages.

I'm sure I have forgotten people who have helped over the years. If your name belongs in here and it's not, please email me or leave a comment.

Dynamic DNS on your own domain

I recently moved my dynamic DNS hostnames from (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain (

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate *

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address (e.g. for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under to No-IP:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

use=web,, web-skip='IP Address'

and the following in /etc/default/ddclient:


Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.
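
For example, the polling interval can be raised in /etc/ddclient.conf (the 600-second value is a sketch, not a No-IP-mandated minimum I have verified):

```shell
# Fragment of /etc/ddclient.conf: only check for IP changes every 10 minutes.
daemon=600
```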


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short
Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (
  • Redirect anything else to

The reason for this is that the main Libravatar service listens on and not, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.

Here are the relevant portions of /etc/letsencrypt/renewal/

authenticator = webroot
account = 

[[webroot_map]]
 = /var/www/acme
 = /var/www/acme

LXC setup on Debian stretch

Here's how to setup LXC-based "chroots" on Debian stretch. While I wrote about this on Debian jessie, I had to make some networking changes for stretch and so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

and enable networking by putting the following in a new /etc/default/lxc-net file:

USE_LXC_BRIDGE="true"

That configuration requires that the veth kernel module be loaded. If you have any kind of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

     net.ipv4.ip_forward=1

  2. and then applying it using:

    sysctl -p
  3. Restart the network bridge:

    systemctl restart lxc-net.service
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
    -A INPUT -d -s -j ACCEPT
  5. and applying the rules using:


Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR= lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can login as root:

sudo lxc-attach -n sid64 passwd

Then login as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shutdown AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start it back up later once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1
Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the errors messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options (where CA is the country code for Canada, where my router is physically located).
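
On an OpenWRT-based router like Gargoyle, that boils down to a country option on the radio in /etc/config/wireless. This is a sketch: the 'radio0' section name and the channel number are assumptions that depend on your hardware.

```shell
# Fragment of /etc/config/wireless (sketch)
config wifi-device 'radio0'
        option country 'CA'   # unlocks the Canadian DFS channels
        option channel '100'  # one of the middle 5 GHz channels (assumption)
```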

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.