Mysterious 400 Bad Request in Django debug mode

While upgrading Libravatar to a more recent version of Django, I ran into a mysterious 400 error.

In debug mode, my site was working fine, but with DEBUG = False, I would only see a page containing this error:

Bad Request (400)

with no extra details in the web server logs.

Turning on extra error logging

To see the full error message, I configured logging to a file by adding this to settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/tmp/debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

Then I got the following error message:

Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.

Temporary hack

Sure enough, putting this in settings.py would make it work outside of debug mode:

ALLOWED_HOSTS = ['*']

which means that there's a mismatch between the HTTP_HOST from Apache and the one that Django expects.

Root cause

The underlying problem was that the Libravatar config file was missing the square brackets around the ALLOWED_HOSTS setting.

I had this:

ALLOWED_HOSTS = 'www.example.com'

instead of:

ALLOWED_HOSTS = ['www.example.com']
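
The failure mode is subtle because a bare string is still iterable: Django checks the request's Host header against each element of ALLOWED_HOSTS, and iterating over a string yields single characters. Here is a minimal sketch of that membership test (a simplification, not Django's actual code, which also handles wildcards like '.example.com'):

```python
def is_allowed(host, allowed_hosts):
    # Simplified version of Django's check: accept the request if any
    # entry is '*' or matches the host exactly.
    return any(pattern == '*' or pattern == host for pattern in allowed_hosts)

# With the list, the single entry matches the host:
print(is_allowed('www.example.com', ['www.example.com']))  # True

# With the bare string, iteration yields 'w', 'w', 'w', '.', ... and no
# single character ever equals the full hostname:
print(is_allowed('www.example.com', 'www.example.com'))    # False
```

This is why the broken setting fails silently instead of raising a configuration error: a string is a perfectly valid iterable, it just never contains the hostname as an element.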

Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.04 (xenial) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.

(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  

(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found.

There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create a bootable USB disk using the latest Ubuntu installer:

  1. Download a desktop image.
  2. Copy the ISO directly onto the USB stick (overwriting it in the process):

     dd if=ubuntu.iso of=/dev/sdc1
    

and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note:

  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.
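
For reference, the relevant entry in the root partition's /etc/crypttab has this shape (the UUID below is a placeholder, not a real value); its first field is the name that must be given to cryptsetup luksOpen:

```
# /etc/crypttab format: <target name> <source device> <key file> <options>
sda3_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks
```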

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all

Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt ejabberd
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

# Generate the right certs for ejabberd and znc
if test /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem -nt /etc/ejabberd/ejabberd.pem ; then
    cat /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem > /etc/ejabberd/ejabberd.pem
fi
cat /etc/letsencrypt/live/irc.fmarier.org/privkey.pem /etc/letsencrypt/live/irc.fmarier.org/fullchain.pem > /home/francois/.znc/znc.pem

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

Finally, since my XMPP server and IRC bouncer need the private key and the full certificate chain to be in the same file, I regenerate these files at the end of the script. In the case of ejabberd, I only do so if the certificates have actually changed since overwriting ejabberd.pem changes its timestamp and triggers an fcheck notification (since it watches all files under /etc).

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

Manually expanding a RAID1 array on Ubuntu

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

$ parted /dev/sdc
unit s
print
mkpart
print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

$ parted /dev/sdd
unit s
mktable gpt
mkpart

Since I wanted the RAID1 array to be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

  • the same start position as the second partition on the old drive
  • the end position of the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID one:

mkpart
toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd2 missing

and added it to /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.
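
The duplicates appear because mdadm --detail --scan prints every active array, including the ones already listed in the file. The manual cleanup boils down to keeping a single ARRAY line per array; here is a rough sketch of that logic (the function name and the choice to keep the last entry are mine, not an mdadm feature):

```python
def dedupe_mdadm_conf(lines):
    """Keep the last ARRAY line seen for each array device, preserving
    every non-ARRAY line (DEVICE, MAILADDR, comments...) untouched."""
    seen = {}
    result = []
    for line in lines:
        fields = line.split()
        if fields and fields[0] == 'ARRAY':
            device = fields[1]  # e.g. /dev/md/1
            if device in seen:
                result[seen[device]] = line  # later entry wins
                continue
            seen[device] = len(result)
        result.append(line)
    return result
```

In practice I find it safer to edit the file by hand as described above, but the sketch shows what the edit amounts to.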

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2

I took the UUID for the temporary RAID partition:

blkid /dev/md10

and put it in /etc/crypttab as chome2.
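
The resulting crypttab line looks something like this (with the placeholder UUID replaced by the one blkid printed):

```
chome2 UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks
```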

Then, I formatted the new LUKS partition and mounted it:

mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2

Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used for the encrypted /home mount using df -h.

Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

mdadm --zero-superblock /dev/sdc2

and then change the second partition of the old drive to make it the same size as the one on the new drive:

$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print

before adding it to the new array:

mdadm /dev/md1 -a /dev/sdc2

Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1

before running this command:

mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Set up the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've set up the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration

To make the VPN server available over both IPv4 and IPv6, I changed one line in my OpenVPN configuration (/etc/openvpn/server.conf) from:

proto udp

to:

proto udp6

and added the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.

In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.

Creating a home music server using mpd

I recently set up a music server on my home server using the Music Player Daemon, a cross-platform free software project which has been around for a long time.

Basic setup

Start by installing the server and the client package:

apt install mpd mpc

then open /etc/mpd.conf and set these:

music_directory    "/path/to/music/"
bind_to_address    "192.168.1.2"
bind_to_address    "/run/mpd/socket"
zeroconf_enabled   "yes"
password           "Password1"

before replacing the alsa output:

audio_output {
   type    "alsa"
   name    "My ALSA Device"
}

with a pulseaudio one:

audio_output {
   type    "pulse"
   name    "Pulseaudio Output"
}

In order for the automatic detection (zeroconf) of your music server to work, you need to prevent systemd from creating the network socket:

systemctl stop mpd.service
systemctl stop mpd.socket
systemctl disable mpd.socket

otherwise you'll see this in /var/log/mpd/mpd.log:

zeroconf: No global port, disabling zeroconf

Once all of that is in place, start the mpd daemon:

systemctl start mpd.service

and create an index of your music files:

MPD_HOST=Password1@/run/mpd/socket mpc update

while watching the logs to notice any files that the mpd user doesn't have access to:

tail -f /var/log/mpd/mpd.log

Enhancements

I also added the following in /etc/logcheck/ignore.server.d/local-mpd to silence unnecessary log messages in logcheck emails:

^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopped Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopping Music Player Daemon...$

and created a cronjob in /etc/cron.d/mpd-francois to update the database hourly and stop the music automatically in the evening:

# Refresh DB once an hour
5 * * * *  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
# Think of the neighbours
0 22 * * 0-4  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
0 23 * * 5-6  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop

Clients

To let anybody on the local network connect, I opened port 6600 on the firewall (/etc/network/iptables.up.rules since I'm using Debian's iptables-apply):

-A INPUT -s 192.168.1.0/24 -p tcp --dport 6600 -j ACCEPT

Then I looked at the long list of clients on the mpd wiki.

Desktop

The official website suggests two clients which are available in Debian and Ubuntu:

  • gmpc
  • Ario

Both of them work well, but haven't had a release since 2011, even though there is some activity in 2013 and 2015 in their respective source control repositories.

Ario has a simpler user interface but gmpc has cover art download working out of the box, which is why I might stick with it.

In both cases, it is possible to configure a polipo proxy so that any external resources are fetched via Tor.

Android

On Android, I got these two to work:

I picked M.A.L.P. since it includes a nice widget for the homescreen.

iOS

On iOS, these are the most promising clients I found:

since MPoD and MPaD don't appear to be available on the AppStore anymore.

Using iptables with NetworkManager

I used to rely on ifupdown to bring up my iptables firewall automatically using a config like this in /etc/network/interfaces:

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules

allow-hotplug wlan0
iface wlan0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules

but that doesn't seem to work very well in the brave new NetworkManager world.

What does work reliably is a "pre-up" NetworkManager script, something that gets run before a network interface is brought up. However, despite what the documentation says, a dispatcher script in /etc/NetworkManager/dispatcher.d/ won't work on my Debian and Ubuntu machines. Instead, I had to create a new iptables script in /etc/NetworkManager/dispatcher.d/pre-up.d/:

#!/bin/sh

LOGFILE=/var/log/iptables.log

if [ "$1" = lo ]; then
    echo "$0: ignoring $1 for \`$2'" >> $LOGFILE
    exit 0
fi

case "$2" in
    pre-up)
        echo "$0: restoring iptables rules for $1" >> $LOGFILE
        /sbin/iptables-restore /etc/network/iptables.up.rules >> $LOGFILE 2>&1
        /sbin/ip6tables-restore /etc/network/ip6tables.up.rules >> $LOGFILE 2>&1
        ;;
    *)
        echo "$0: nothing to do with $1 for \`$2'" >> $LOGFILE
        ;;
esac

exit 0

and then make that script executable:

chmod a+x /etc/NetworkManager/dispatcher.d/pre-up.d/iptables

With this in place, I can put my iptables rules in the usual place (/etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules) and use the handy iptables-apply and ip6tables-apply commands to test any changes to my firewall rules.

Persona Guiding Principles

Given the impending shutdown of Persona and the lack of a clear alternative to it, I decided to write about some of the principles that guided its design and development in the hope that it may influence future efforts in some way.

Permission-less system

There was no need for reliers (sites relying on Persona to log their users in) to ask for permission before using Persona. Just like a site doesn't need to ask for permission before creating a link to another site, reliers didn't need to apply for an API key before they got started and authenticated their users using Persona.

Similarly, identity providers (the services vouching for their users' identity) didn't have to be whitelisted by reliers in order to be useful to their users.

Federation at the domain level

Just like email, Persona was federated at the domain name level and put domain owners in control. Just like they can choose who gets to manage emails for their domain, they could:

  • run their own identity provider, or
  • delegate to their favourite provider.

Site owners were also in control of the mechanism and policies involved in authenticating their users. For example, a security-sensitive corporation could decide to require 2-factor authentication for everyone or put a very short expiry on the certificates they issued.

Alternatively, a low-security domain could get away with a much simpler login mechanism (including a "0-factor" mechanism in the case of http://mockmyid.com!).

Privacy from your identity provider

While identity providers were the ones vouching for their users' identity, they didn't need to know which websites their users were visiting. This is a potential source of control or censorship, and the design of Persona eliminated it.

The downside of this design of course is that it becomes impossible for an identity provider to provide their users with a list of all of the sites where they successfully logged in for audit purposes, something that centralized systems can provide easily.

The browser as a trusted agent

The browser, whether it had native support for the BrowserID protocol or not, was the agent that the user needed to trust. It connected reliers (sites using Persona for logins) and identity providers together and got to see all aspects of the login process.

It also held your private keys and therefore was the only party that could impersonate you. This is of course a power which it already held by virtue of its role as the web browser.

Additionally, since it was the one generating and holding the private keys, your browser could also choose how long these keys are valid and may choose to vary that amount of time depending on factors like a shared computer environment or Private Browsing mode.

Other clients/agents would likely be necessary as well, especially when it comes to interacting with mobile applications or native desktop applications. Each client would have its own key, but they would all be signed by the identity provider and therefore valid.

Bootstrapping a complex system requires fallbacks

Persona was a complex system which involved a number of different actors. In order to slowly roll this out without waiting on every actor to implement the BrowserID protocol (something that would have taken an infinite amount of time), fallbacks were deemed necessary:

  • client-side JavaScript implementation for browsers without built-in support
  • centralized fallback identity provider for domains without native support or a working delegation
  • centralized verifier until local verification is done within authentication libraries

In addition, to lessen the burden on the centralized identity provider fallback, Persona experimented with a number of bridges to provide quasi-native support for a few large email providers.

Support for multiple identities

User research has shown that many users choose to present a different identity to different websites. An identity system that would restrict them to a single identity wouldn't work.

Persona handled this naturally by linking identities to email addresses. Users who wanted to present a different identity to a website could simply use a different email address. For example, a work address and a personal address.

No lock-in

Persona was an identity system which didn't stand between a site and its users. It exposed email addresses to sites and allowed them to control the relationship with their users.

Sites wanting to move away from Persona can use the email addresses they have to both:

  • notify users of the new login system, and
  • allow users to reset (or set) their password via an email flow.

Websites should not have to depend on the operator of an identity system in order to be able to talk to their users.

Short-lived certificates instead of revocation

Instead of relying on the correct use of revocation systems, Persona used short-lived certificates in an effort to simplify this critical part of any cryptographic system.

It offered three ways to limit the lifetime of crypto keys:

  • assertion expiry (set by the client)
  • key expiry (set by the client)
  • certificate expiry (set by the identity provider)
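
In other words, a login was only honoured while all three lifetimes were still current. A toy model of that combined check (the function name and timestamps are illustrative, not actual protocol code):

```python
def session_still_valid(now, assertion_expiry, key_expiry, cert_expiry):
    # A login is only acceptable while every lifetime is still current;
    # whichever expiry comes first bounds the whole chain.
    return now < min(assertion_expiry, key_expiry, cert_expiry)

print(session_still_valid(100, assertion_expiry=120, key_expiry=500, cert_expiry=300))  # True
print(session_still_valid(130, assertion_expiry=120, key_expiry=500, cert_expiry=300))  # False
```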

The main drawback of such a pure expiration-based system is the increased window of time between a password change (or a similar signal that the user would like to revoke access) and the actual termination of all sessions. A short expiry can mitigate this problem but, unlike in a centralized identity system, it cannot be eliminated entirely.

Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description

In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used to either increase or reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also expose private personally-identifiable information when it is part of the query string. One of the most high-profile examples is the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.
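
The trimming levels above can be illustrated with a small sketch (an approximation of the observable behaviour, not Firefox's implementation):

```python
from urllib.parse import urlsplit

def trim_referrer(url, policy):
    """What each network.http.referer.trimmingPolicy value leaves in the
    Referer header (approximation of the observable behaviour)."""
    parts = urlsplit(url)
    if policy == 2:  # scheme, host and port only
        return f"{parts.scheme}://{parts.netloc}/"
    if policy == 1:  # drop the query string
        return f"{parts.scheme}://{parts.netloc}{parts.path}"
    return url       # 0: full URL

print(trim_referrer("https://example.com/a/b?q=1", 2))  # https://example.com/
print(trim_referrer("https://example.com/a/b?q=1", 1))  # https://example.com/a/b
```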

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match

Breakage

If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. same ETLD/public suffix) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
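
If you want these tweaks to survive across profiles or machines, the same preferences can be set in a user.js file in your Firefox profile directory (the values shown here match the recommendations above; adjust to your own tolerance for breakage):

```
// user.js in the Firefox profile directory
user_pref("privacy.trackingprotection.enabled", true);
user_pref("network.http.referer.XOriginTrimmingPolicy", 2);
```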

Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.