IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid, for example, leaking IPv6 traffic. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (in effect, running your own IPv6 tunnel broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Set up the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-based network configuration will work fine. To add the second IPv6 block, on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've set up the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration

The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to change:

proto udp

to:

proto udp6

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.
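Pushing "route-ipv6 2000::/3" covers the whole public IPv6 Internet because 2000::/3 is the global unicast range: only the top three bits of the address (001) are fixed, so it spans 2000:: through 3fff:ffff:...:ffff while leaving link-local (fe80::/10) and unique-local (fc00::/7) traffic off the tunnel. A quick sanity check of the resulting first-byte range:

```shell
# With a /3 prefix, the first byte is 001xxxxx in binary:
# all free bits cleared -> 0x20, all free bits set -> 0x3f.
printf 'first byte range: %02x..%02x\n' "$((0x20))" "$((0x20 | 0x1f))"
# prints: first byte range: 20..3f
```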

In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.

Creating a home music server using mpd

I recently setup a music server on my home server using the Music Player Daemon, a cross-platform free software project which has been around for a long time.

Basic setup

Start by installing the server and the client package:

apt install mpd mpc

then open /etc/mpd.conf and set these:

music_directory    "/path/to/music/"
bind_to_address    "192.168.1.2"
bind_to_address    "/run/mpd/socket"
zeroconf_enabled   "yes"
password           "Password1"

before replacing the alsa output:

audio_output {
   type    "alsa"
   name    "My ALSA Device"
}

with a pulseaudio one:

audio_output {
   type    "pulse"
   name    "Pulseaudio Output"
}

In order for the automatic detection (zeroconf) of your music server to work, you need to prevent systemd from creating the network socket:

systemctl stop mpd.service
systemctl stop mpd.socket
systemctl disable mpd.socket

otherwise you'll see this in /var/log/mpd/mpd.log:

zeroconf: No global port, disabling zeroconf

Once all of that is in place, start the mpd daemon:

systemctl start mpd.service

and create an index of your music files:

MPD_HOST=Password1@/run/mpd/socket mpc update

while watching the logs to notice any files that the mpd user doesn't have access to:

tail -f /var/log/mpd/mpd.log

Enhancements

I also added the following in /etc/logcheck/ignore.server.d/local-mpd to silence unnecessary log messages in logcheck emails:

^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopped Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopping Music Player Daemon...$
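These patterns can be sanity-checked against a sample log line (hypothetical hostname and timestamp) using grep -E:

```shell
# If the pattern matches, grep echoes the line back and exits 0,
# which is what logcheck relies on to ignore it.
line='Jan  1 01:23:45 myhost systemd[1]: Started Music Player Daemon.'
echo "$line" | grep -E '^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$'
```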

and created a cronjob in /etc/cron.d/mpd-francois to update the database daily and stop the music automatically in the evening:

# Refresh DB once a day
5 1 * * *  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
# Think of the neighbours
0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop

Clients

To let anybody on the local network connect, I opened port 6600 on the firewall (/etc/network/iptables.up.rules since I'm using Debian's iptables-apply):

-A INPUT -s 192.168.1.0/24 -p tcp --dport 6600 -j ACCEPT

Then I looked at the long list of clients on the mpd wiki.

Desktop

The official website suggests two clients which are available in Debian and Ubuntu: Ario and gmpc.

Both of them work well, but neither has had a release since 2011, even though there is some activity in their respective source control repositories in 2013 and 2015.

Ario has a simpler user interface but gmpc has cover art download working out of the box, which is why I might stick with it.

In both cases, it is possible to configure a polipo proxy so that any external resources are fetched via Tor.

Android

On Android, I got two clients to work. I picked M.A.L.P. since it includes a nice widget for the homescreen.

iOS

On iOS, MPoD and MPaD don't appear to be available on the App Store anymore, so I looked for the most promising of the remaining clients.

Using iptables with NetworkManager

I used to rely on ifupdown to bring up my iptables firewall automatically using a config like this in /etc/network/interfaces:

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules

allow-hotplug wlan0
iface wlan0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules

but that doesn't seem to work very well in the brave new NetworkManager world.

What does work reliably is a "pre-up" NetworkManager script, something that gets run before a network interface is brought up. However, despite what the documentation says, a dispatcher script placed directly in /etc/NetworkManager/dispatcher.d/ won't receive pre-up events on my Debian and Ubuntu machines. Instead, I had to create a new iptables script in /etc/NetworkManager/dispatcher.d/pre-up.d/:

#!/bin/sh

LOGFILE=/var/log/iptables.log

if [ "$1" = lo ]; then
    echo "$0: ignoring $1 for \`$2'" >> $LOGFILE
    exit 0
fi

case "$2" in
    pre-up)
        echo "$0: restoring iptables rules for $1" >> $LOGFILE
        /sbin/iptables-restore /etc/network/iptables.up.rules >> $LOGFILE 2>&1
        /sbin/ip6tables-restore /etc/network/ip6tables.up.rules >> $LOGFILE 2>&1
        ;;
    *)
        echo "$0: nothing to do with $1 for \`$2'" >> $LOGFILE
        ;;
esac

exit 0

and then make that script executable:

chmod a+x /etc/NetworkManager/dispatcher.d/pre-up.d/iptables

With this in place, I can put my iptables rules in the usual place (/etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules) and use the handy iptables-apply and ip6tables-apply commands to test any changes to my firewall rules.
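For completeness, the rules files themselves use the iptables-save format. A minimal sketch of what /etc/network/iptables.up.rules might contain (assuming a default-deny inbound policy with SSH allowed; adjust to your own needs):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Loopback and already-established traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

iptables-apply loads such a file, asks for confirmation, and rolls back automatically if you lock yourself out.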

Persona Guiding Principles

Given the impending shutdown of Persona and the lack of a clear alternative to it, I decided to write about some of the principles that guided its design and development in the hope that it may influence future efforts in some way.

Permission-less system

There was no need for reliers (sites relying on Persona to log their users in) to ask for permission before using Persona. Just like a site doesn't need to ask for permission before creating a link to another site, reliers didn't need to apply for an API key before they got started and authenticated their users using Persona.

Similarly, identity providers (the services vouching for their users' identity) didn't have to be whitelisted by reliers in order to be useful to their users.

Federation at the domain level

Just like email, Persona was federated at the domain name level and put domain owners in control. Just like they can choose who gets to manage emails for their domain, they could:

  • run their own identity provider, or
  • delegate to their favourite provider.

Site owners were also in control of the mechanism and policies involved in authenticating their users. For example, a security-sensitive corporation could decide to require 2-factor authentication for everyone or put a very short expiry on the certificates they issued.

Alternatively, a low-security domain could get away with a much simpler login mechanism (including a "0-factor" mechanism in the case of http://mockmyid.com!).

Privacy from your identity provider

While identity providers were the ones vouching for their users' identity, they didn't need to know which websites their users were visiting. Such knowledge is a potential source of control or censorship, and the design of Persona eliminated it.

The downside of this design of course is that it becomes impossible for an identity provider to provide their users with a list of all of the sites where they successfully logged in for audit purposes, something that centralized systems can provide easily.

The browser as a trusted agent

The browser, whether it had native support for the BrowserID protocol or not, was the agent that the user needed to trust. It connected reliers (sites using Persona for logins) and identity providers together and got to see all aspects of the login process.

It also held your private keys and therefore was the only party that could impersonate you. This is of course a power which it already held by virtue of its role as the web browser.

Additionally, since it was the one generating and holding the private keys, your browser could also choose how long these keys remained valid and could vary that amount of time depending on factors like a shared computer environment or Private Browsing mode.

Other clients/agents would likely be necessary as well, especially when it comes to interacting with mobile applications or native desktop applications. Each client would have its own key, but they would all be signed by the identity provider and therefore valid.

Bootstrapping a complex system requires fallbacks

Persona was a complex system which involved a number of different actors. In order to slowly roll this out without waiting on every actor to implement the BrowserID protocol (something that would have taken an infinite amount of time), fallbacks were deemed necessary:

  • client-side JavaScript implementation for browsers without built-in support
  • centralized fallback identity provider for domains without native support or a working delegation
  • centralized verifier until local verification is done within authentication libraries

In addition, to lessen the burden on the centralized identity provider fallback, Persona experimented with a number of bridges to provide quasi-native support for a few large email providers.

Support for multiple identities

User research has shown that many users choose to present a different identity to different websites. An identity system that would restrict them to a single identity wouldn't work.

Persona handled this naturally by linking identities to email addresses. Users who wanted to present a different identity to a website could simply use a different email address. For example, a work address and a personal address.

No lock-in

Persona was an identity system which didn't stand between a site and its users. It exposed email addresses to sites and allowed them to control the relationship with their users.

Sites wanting to move away from Persona can use the email addresses they have to both:

  • notify users of the new login system, and
  • allow users to reset (or set) their password via an email flow.

Websites should not have to depend on the operator of an identity system in order to be able to talk to their users.

Short-lived certificates instead of revocation

Instead of relying on the correct use of revocation systems, Persona used short-lived certificates in an effort to simplify this critical part of any cryptographic system.

It offered three ways to limit the lifetime of crypto keys:

  • assertion expiry (set by the client)
  • key expiry (set by the client)
  • certificate expiry (set by the identity provider)

The main drawback of such a purely expiration-based system is the increased window of time between a password change (or a similar signal that the user would like to revoke access) and the actual termination of all sessions. A short expiry can mitigate this problem but, unlike in a centralized identity system, it cannot be eliminated entirely.

Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description

In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (e.g. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also expose private personally-identifiable information when it is part of the query string. One of the most high-profile examples was the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match

Breakage

If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. same base domain / eTLD+1) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
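If you prefer to keep these settings in a file rather than flipping them in about:config, they can also be expressed in a user.js file in your Firefox profile directory. A sketch of the stricter same-base-domain configuration described above:

```
// user.js (in the Firefox profile directory)
// Only send cross-origin referrers within the same base domain...
user_pref("network.http.referer.XOriginPolicy", 1);
// ...and trim cross-origin referrers down to scheme, host and port.
user_pref("network.http.referer.XOriginTrimmingPolicy", 2);
```

Firefox reads user.js at startup and applies each user_pref() line on top of your saved preferences.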

Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to log in again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shutdown the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s
print

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
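The new UUIDs come from blkid /dev/md1 (or the mkswap output) and mdadm --detail --scan. The /etc/fstab substitution itself can be sketched like this, with placeholder UUIDs and a throw-away copy of the file standing in for the real one:

```shell
# Hypothetical UUIDs standing in for the old and new /dev/md1 values.
old='11111111-1111-1111-1111-111111111111'
new='22222222-2222-2222-2222-222222222222'

# Work on a copy; point this at /etc/fstab once you've double-checked the UUIDs.
printf 'UUID=%s none swap sw 0 0\n' "$old" > /tmp/fstab.example
sed -i "s/$old/$new/" /tmp/fstab.example
cat /tmp/fstab.example   # shows the swap line with the new UUID
```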

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb

Cleaning up obsolete config files on Debian and Ubuntu

As part of regular operating system hygiene, I run a cron job which updates package metadata and looks for obsolete packages and configuration files.

While there is already some easily available information on how to purge unneeded or obsolete packages and how to clean up config files properly in maintainer scripts, the guidance on how to delete obsolete config files is not easy to find and somewhat incomplete.

These are the obsolete conffiles I started with:

$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/tunables/ntpd 5519e4c01535818cb26f2ef9e527f191 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
 /etc/apparmor.d/usr.sbin.ntpd a00aa055d1a5feff414bacc89b8c9f6e obsolete
 /etc/bash_completion.d/initramfs-tools 7eeb7184772f3658e7cf446945c096b1 obsolete
 /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete
 /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf 8df3896101328880517f530c11fff877 obsolete
 /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf d81013f5bfeece9858706aed938e16bb obsolete
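Before tracking down owners one by one, the listing can be reduced to bare file names (handy for feeding into dpkg -S or rm): the first field of each "obsolete" line is enough. A sketch, shown here on a single sample line rather than live dpkg-query output:

```shell
# In practice: dpkg-query -W -f='${Conffiles}\n' | awk '$3 == "obsolete" {print $1}'
printf ' /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete\n' |
    awk '$3 == "obsolete" {print $1}'
# prints: /etc/bash_completion.d/insserv
```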

To get rid of the /etc/bash_completion.d/ files, I first determined what packages they were registered to:

$ dpkg -S /etc/bash_completion.d/initramfs-tools
initramfs-tools: /etc/bash_completion.d/initramfs-tools
$ dpkg -S /etc/bash_completion.d/insserv
initramfs-tools: /etc/bash_completion.d/insserv

and then followed Paul Wise's instructions:

$ rm /etc/bash_completion.d/initramfs-tools /etc/bash_completion.d/insserv
$ apt install --reinstall initramfs-tools insserv

For some reason that didn't work for the /etc/dbus-1/system.d/ files and I had to purge and reinstall the relevant package:

$ dpkg -S /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
$ dpkg -S /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf

$ apt purge system-config-printer-common
$ apt install system-config-printer

The files in /etc/apparmor.d/ were even more complicated to deal with because purging the packages that they come from didn't help:

$ dpkg -S /etc/apparmor.d/abstractions/evince
evince: /etc/apparmor.d/abstractions/evince
$ apt purge evince
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete

I was however able to get rid of them by also purging the apparmor profile packages that are installed on my machine:

$ apt purge apparmor-profiles apparmor-profiles-extra evince ntp
$ apt install apparmor-profiles apparmor-profiles-extra evince ntp

I'm not sure why this was necessary, but I suspect that these files used to be shipped by one of the apparmor packages before eventually migrating to the evince and ntp packages directly, and that dpkg got confused.

If you're in a similar circumstance, you may want to search Google for the file you're trying to get rid of; you might end up on http://apt-browse.org/, which could lead you to the old package that used to own this file.

Simple remote mail queue monitoring

In order to monitor some of the machines I maintain, I rely on a simple email setup using logcheck. Unfortunately that system completely breaks down if mail delivery stops.

This is the simple setup I've come up with to ensure that mail doesn't pile up on the remote machine.

Server setup

The first thing I did on the server-side is to follow Sean Whitton's advice and configure postfix so that it keeps undelivered emails for 10 days (instead of 5 days, the default):

postconf -e maximal_queue_lifetime=10d

Then I created a new user:

adduser mailq-check

with a password straight out of pwgen -s 32.

I gave ssh permission to that user:

adduser mailq-check sshuser

and then authorized my new ssh key (see next section):

sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys

Laptop setup

On my laptop, the machine from where I monitor the server's mail queue, I first created a new password-less ssh key:

ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub

which I then installed on the server.

Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:

0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"

and that's it. I get a (locally delivered) email whenever the mail queue on the server is non-empty.
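The grep -v is what keeps this quiet: cron only sends an email when a job produces output, and an empty postfix queue makes mailq print exactly "Mail queue is empty". A minimal illustration:

```shell
# Empty queue: the single status line is filtered out, grep exits non-zero,
# and cron has nothing to mail.
echo "Mail queue is empty" | grep -v "Mail queue is empty" || echo "(no output)"
```

Any queued mail, on the other hand, passes through the filter unchanged and ends up in the warning email.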

There is a race condition built into this setup since it's possible that the server will want to send an email at 2am. However, all that would cause is a spurious warning email, which is a pretty small price to pay for a dirt-simple setup that's unlikely to break.

Using OpenVPN on iOS and OSX

I have written instructions on how to connect to your own OpenVPN server using Network Manager as well as Android.

Here is how to do it on iOS and OSX assuming you have followed my instructions for the server setup.

Generate new keys

From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:

./build-key iphone   # "iphone" as Name, no password

and for your laptop:

./build-key osx      # "osx" as Name, no password

Using OpenVPN Connect on iOS

The app you need to install from the App Store is OpenVPN Connect.

Once it's installed, connect your phone to your computer and transfer the following files using iTunes:

  • ca.crt
  • iphone.crt
  • iphone.key
  • iphone.ovpn
  • ta.key

You should then be able to select it after launching the app. See the official FAQ if you run into any problems.

iphone.ovpn is a configuration file that you need to supply since the OpenVPN Connect app doesn't have a configuration interface. You can use this script to generate it or write it from scratch using this template.

On Linux, if you are using Network Manager 1.2 or later, you can also export a configuration file using the following command:

nmcli connection export hafnarfjordur > iphone.ovpn

though that didn't quite work in my experience.

Here is the config I successfully used to connect to my server:

client
remote hafnarfjordur.fmarier.org 1194
ca ca.crt
cert iphone.crt
key iphone.key
cipher AES-256-CBC
auth SHA384
comp-lzo yes
proto udp
tls-remote server
remote-cert-tls server
ns-cert-type server
tls-auth ta.key 1

Using Viscosity on Mac OSX

One of the possible OpenVPN clients you can use on OSX is Viscosity.

Here are the settings you'll need to change when setting up a new VPN connection:

  • General
    • Remote server: hafnarfjordur.fmarier.org
  • Authentication
    • Type: SSL/TLS client
    • CA: ca.crt
    • Cert: osx.crt
    • Key: osx.key
    • Tls-Auth: ta.key
    • direction: 1
  • Options
    • peer certificate: require server nsCertType
    • compression: turn LZO on
  • Networking
    • send all traffic on VPN
  • Advanced
    • add the following extra OpenVPN configuration commands:

      cipher AES-256-CBC
      auth SHA384