Using vnc to do remote tech support over high-latency networks

If you ever find yourself doing a bit of technical support for relatives over the phone, there's nothing like actually seeing what they are doing on their computer. One of the best tools for such remote desktop sharing is vnc.

Here's the best setup I have come up with so far. If you have any suggestions, please leave a comment!

Basic vnc configuration

First off, you need two things: a vnc server on your relative's machine and a vnc client on yours. Thanks to vnc being an open protocol, there are many choices for both.

I eventually settled on x11vnc for the server and ssvnc for the client. They are both available in the standard Debian and Ubuntu repositories.
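Installing them is a single command on each end (assuming sudo access):

```
sudo apt-get install x11vnc    # on the remote machine (the vnc server)
sudo apt-get install ssvnc     # on your own machine (the vnc client)
```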

Since I have ssh access on the machine that needs to run the server, I simply log in and then run x11vnc. Here's what ~/.x11vncrc contains:

noxdamage

That option appears to be necessary when the desktop to share is running gnome-shell / compiz.

Afterwards, I start the client on my laptop with the following command:

ssvncviewer -encodings zrle -scale 1280x775 localhost

The scaling factor is simply the resolution of the client minus any window decorations.
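As an illustration, assuming a 1280x800 laptop display and about 25 pixels of window decorations (both numbers are placeholders here; measure your own setup), the scale argument above works out like this:

```shell
# compute the -scale argument: full client resolution minus window decorations
WIDTH=1280
HEIGHT=800
DECORATIONS=25
echo "${WIDTH}x$((HEIGHT - DECORATIONS))"    # prints 1280x775
```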

ssh configuration

As you can see above, the client is not connecting directly to the server. Instead, it's connecting to its own local vnc port (localhost:5900). That's because I'm tunnelling the traffic through the ssh connection in order to avoid relying on vnc extensions for authentication and encryption.

Here's what the client's ~/.ssh/config needs for that simple use case:

Host server.example.com
  LocalForward 5900 127.0.0.1:5900
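With that stanza in place, a plain ssh server.example.com sets up the forward automatically. The equivalent one-off command, without touching ~/.ssh/config, would be:

```
ssh -L 5900:127.0.0.1:5900 server.example.com
```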

If the remote host (which has an internal IP address of 192.168.1.2 in this example) is not connected directly to the outside world and instead goes through a gateway, then your ~/.ssh/config will look like this:

Host gateway.example.com
  ForwardAgent yes
  LocalForward 5900 192.168.1.2:5900

Host server.example.com
  ProxyCommand ssh -q -a gateway.example.com nc -q0 %h 22

and the remote host will need to open up a port on its firewall for the gateway (internal IP address of 192.168.1.1 here):

iptables -A INPUT -p tcp --dport 5900 -s 192.168.1.1/32 -j ACCEPT

Optimizing for high-latency networks

Since I do most of my tech support over a very high latency network, I tweaked the default vnc settings to reduce the amount of network traffic.

I added this to ~/.x11vncrc on the vnc server:

ncache 10
ncache_cr

and changed the client command line to this:

ssvncviewer -compresslevel 9 -quality 3 -bgr233 -encodings zrle -use64 -scale 1280x775 -ycrop 1024 localhost

This decreases image quality (and required bandwidth) and enables client-side caching.

The magic 1024 number is simply the full vertical resolution of the remote machine, which sports a vintage 1280x1024 LCD monitor.

Hardening ssh Servers

Basic configuration

There are a few basic things that most admins will already know (and that tiger will warn you about if you forget):

  • only allow version 2 of the protocol
  • disable root logins
  • disable password authentication

This is what /etc/ssh/sshd_config should contain:

Protocol 2
PasswordAuthentication no
PermitRootLogin no

Whitelist approach to giving users ssh access

To ensure that only a few users have ssh access to the server and that newly created users don't have it enabled by default, create a new group:

addgroup sshuser

and then add the relevant users to it:

adduser francois sshuser

Finally, add this to /etc/ssh/sshd_config:

AllowGroups sshuser

Deterring brute-force (or dictionary) attacks

One way to ban attackers who try to brute-force your ssh server is to install the fail2ban package. It keeps an eye on the ssh log file (/var/log/auth.log) and temporarily blocks IP addresses after a number of failed login attempts.
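Out of the box, fail2ban already watches sshd, but the thresholds can be tuned. Here's a minimal sketch of /etc/fail2ban/jail.local (section names and defaults vary between fail2ban versions, so double-check against your installed jail.conf):

```
[ssh]
enabled  = true
maxretry = 3
bantime  = 600
```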

Another approach is to hide the ssh service using Single-Packet Authentication. I have fwknop installed on some of my servers and use small wrapper scripts to connect to them.

Using restricted shells

For those users who only need an ssh account on the server in order to transfer files (using scp or rsync), it's a good idea to set their shell (via chsh) to a restricted one like rssh.
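As a sketch (the username is hypothetical), switching a user to the restricted shell looks like this:

```
chsh -s /usr/bin/rssh francois    # give the user the restricted shell
```

and then rsync is allowed (producing the message below) by making sure /etc/rssh.conf contains the allowrsync directive.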

Should they attempt to log into the server, these users will be greeted with the following error message:

This account is restricted by rssh.
Allowed commands: rsync 

If you believe this is in error, please contact your system administrator.

Connection to server.example.com closed.

Restricting authorized keys to certain IP addresses

In addition to listing all of the public keys that are allowed to log into a user account, the ~/.ssh/authorized_keys file also allows (as the man page points out) a user to impose a number of restrictions.

Perhaps the most useful option is "from", which allows a user to restrict the IP addresses that can log in using a specific key.

Here's what one of my authorized_keys looks like:

from="192.0.2.2" ssh-rsa AAAAB3Nz...zvCn bot@example

You may also want to add the following options to each entry: no-X11-forwarding, no-user-rc, no-pty, no-agent-forwarding and no-port-forwarding.
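Putting it all together, a locked-down entry for the key above would look something like this (all on one line, with the options separated by commas):

```
from="192.0.2.2",no-X11-forwarding,no-user-rc,no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAAB3Nz...zvCn bot@example
```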

Increasing the amount of logging

The first thing I'd recommend is to increase the level of verbosity in /etc/ssh/sshd_config:

LogLevel VERBOSE

which will, amongst other things, log the fingerprints of keys used to login:

sshd: Connection from 192.0.2.2 port 39671
sshd: Found matching RSA key: de:ad:be:ef:ca:fe
sshd: Postponed publickey for francois from 192.0.2.2 port 39671 ssh2 [preauth]
sshd: Accepted publickey for francois from 192.0.2.2 port 39671 ssh2 

Secondly, if you run logcheck and would like to whitelist the "Accepted publickey" messages on your server, you'll have to start by deleting the first line of /etc/logcheck/ignore.d.server/sshd. Then you can add an entry for all of the usernames and IP addresses that you expect to see.

Finally, it is also possible to log all commands issued by a specific user over ssh by enabling the pam_tty_audit module in /etc/pam.d/sshd:

session required pam_tty_audit.so enable=francois

However, this module is not included in wheezy and has only recently been re-added to Debian.
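Assuming the auditd package is installed to collect the audit events, the recorded TTY activity can then be summarized with aureport:

```
aureport --tty
```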

Identifying stolen keys

One thing I'd love to have is a way to identify a stolen public key. Given the IP restrictions described above, if a public key is stolen and used from a different IP, I will see something like this in /var/log/auth.log:

sshd: Connection from 198.51.100.10 port 39492
sshd: Authentication tried for francois with correct key but not from a permitted host (host=198.51.100.10, ip=198.51.100.10).
sshd: Failed publickey for francois from 198.51.100.10 port 39492 ssh2
sshd: Connection closed by 198.51.100.10 [preauth]

So I can get the IP address of the attacker (likely to be a random VPS or a Tor exit node) but, unfortunately, the key fingerprints don't appear for failed connections like they do for successful ones, so I don't know which key to revoke.

Is there any way to identify which key was used in a failed login attempt or is the solution to only ever have a single public key in each authorized_keys file and create a separate user account for each user?

Running your own XMPP server on Debian or Ubuntu

In order to get closer to my goal of reducing my dependence on centralized services, I decided to set up my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website and here's how I put everything together.

DNS and SSL

My personal domain is fmarier.org and so I created the following DNS records:

jabber-gw            CNAME    fmarier.org.
_xmpp-client._tcp    SRV      5 0 5222 jabber-gw.fmarier.org.
_xmpp-server._tcp    SRV      5 0 5269 jabber-gw.fmarier.org.
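Once the records are live, they can be double-checked from any machine with dig:

```
dig +short _xmpp-client._tcp.fmarier.org SRV
dig +short _xmpp-server._tcp.fmarier.org SRV
```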

Then I went to get a free XMPP SSL certificate for jabber-gw.fmarier.org from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:

openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"

I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:

cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem

ejabberd installation

Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki with an additional customization to solve the Pidgin "Not authorized" connection problems.

  1. Install the package, using "admin" as the username for the administrative user:

    apt-get install ejabberd
    
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):

    {acl, admin, {user, "admin", "fmarier.org"}}.
    {hosts, ["fmarier.org"]}.
    {fqdn, "jabber-gw.fmarier.org"}.
    
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:

    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
    
  4. Improve the client-to-server TLS configuration by adding starttls_required to this block:

    {listen,
      [
        {5222, ejabberd_c2s, [
          {access, c2s},
          {shaper, c2s_shaper},
          {max_stanza_size, 65536},
          starttls,
          starttls_required,
          {certfile, "/etc/ejabberd/ejabberd.pem"}
        ]},
    
  5. Restart the ejabberd daemon:

    /etc/init.d/ejabberd restart
    
  6. Create a new user account for yourself:

    ejabberdctl register me fmarier.org P@ssw0rd1!
    
  7. Open up the following ports on the server's firewall:

    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
    

Client setup

On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:

  • Protocol: XMPP
  • Username: me
  • Domain: fmarier.org
  • Password: P@ssw0rd1!

and the following setting in the "Advanced" tab:

  • Connection security: Require encryption

With these settings, I was able to connect to the server without clicking through any certificate warnings.

Testing

If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message.

In this example, the XMPP address I give to my friends is me@fmarier.org.

Finally, to ensure that your TLS settings are reasonable, use this automated tool to test both the client-to-server (c2s) and the server-to-server (s2s) flows.

Creating a Linode-based VPN setup using OpenVPN on Debian or Ubuntu

Using a Virtual Private Network is a good way to work around geoIP restrictions and to protect your network traffic when travelling with your laptop and connecting to untrusted networks.

While you might want to use Tor for the part of your network activity where you prefer to be anonymous, a VPN is a faster way to connect to sites that already know you.

Here are my instructions for setting up OpenVPN on Debian / Ubuntu machines where the VPN server is located on a cheap Linode virtual private server. They are largely based on the instructions found on the Debian wiki.

An easier way to set up an ad-hoc VPN is to use sshuttle, but for some reason it doesn't seem to work on Linode or Rackspace virtual servers.

Generating the keys

Make sure you run the following on a machine with good entropy and not a VM! I personally use a machine fitted with an Entropy Key.

The first step is to install the required package:

sudo apt-get install openvpn

Then, run the following in your home directory (no need to run any of this as root):

mkdir easy-rsa
cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ easy-rsa/
cd easy-rsa/2.0

and put something like this in your ~/easy-rsa/2.0/vars:

export KEY_SIZE=2048
export KEY_COUNTRY="NZ"
export KEY_PROVINCE="AKL"
export KEY_CITY="Auckland"
export KEY_ORG="fmarier.org"
export KEY_EMAIL="francois@fmarier.org"
export KEY_CN=hafnarfjordur.fmarier.org
export KEY_NAME=hafnarfjordur.fmarier.org
export KEY_OU=VPN

Create this symbolic link:

ln -s openssl-1.0.0.cnf openssl.cnf

and generate the keys:

. ./vars
./clean-all
./build-ca
./build-key-server server  # press ENTER at every prompt, no password
./build-key akranes  # "akranes" as Name, no password
./build-dh
/usr/sbin/openvpn --genkey --secret keys/ta.key

Configuring the server

On my server, a Linode VPS called hafnarfjordur.fmarier.org, I installed the openvpn package:

apt-get install openvpn

and then copied the following files from my high-entropy machine:

cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
chown root:root /etc/openvpn/*
chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key

Then I took the official configuration template:

cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
gunzip /etc/openvpn/server.conf.gz

and set the following in /etc/openvpn/server.conf (which includes recommendations from BetterCrypto.org):

dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 74.207.241.5"
push "dhcp-option DNS 74.207.242.5"
tls-auth ta.key 0
tls-cipher DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-RS
cipher AES-256-CBC
auth SHA384
user nobody
group nogroup

(These DNS servers are the ones I found in /etc/resolv.conf on my Linode VPS.)

Finally, I added the following to these configuration files:

  • /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  • /etc/rc.local (just before exit 0):

    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    
  • /etc/default/openvpn:

    AUTOSTART="all"
    

and ran sysctl -p before starting OpenVPN:

/etc/init.d/openvpn start

If the server has a firewall, you'll need to open up this port:

iptables -A INPUT -p udp --dport 1194 -j ACCEPT

as well as let forwarded packets flow:

iptables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 10.8.0.0/24 -o eth0 -j ACCEPT

Configuring the client

The final piece of this solution is to set up my laptop, akranes, to connect to hafnarfjordur by installing the relevant Network Manager plugin:

apt-get install network-manager-openvpn-gnome

The laptop needs these files from the high-entropy machine:

cp ca.crt akranes.crt akranes.key ta.key /etc/openvpn/
chown root:francois /etc/openvpn/akranes.key /etc/openvpn/ta.key
chmod 640 /etc/openvpn/ta.key /etc/openvpn/akranes.key

and my own user needs to have read access to the secret keys.

To create a new VPN, right-click on Network-Manager and add a new VPN connection of type "OpenVPN":

  • Gateway: hafnarfjordur.fmarier.org
  • Type: Certificates (TLS)
  • User Certificate: /etc/openvpn/akranes.crt
  • CA Certificate: /etc/openvpn/ca.crt
  • Private Key: /etc/openvpn/akranes.key
  • Available to all users: NO

then click the "Advanced" button and set the following:

  • General
    • Use LZO data compression: YES
  • Security
    • Cipher: AES-256-CBC
    • HMAC Authentication: SHA-384
  • TLS Authentication
    • Subject Match: server
    • Verify peer (server) certificate usage signature: YES
    • Remote peer certificate TLS type: Server
    • Use additional TLS authentication: YES
    • Key File: /etc/openvpn/ta.key
    • Key Direction: 1

Debugging

If you run into problems, simply take a look at the logs while attempting to connect to the server:

tail -f /var/log/syslog

on both the server and the client.

In my experience, searching for the error messages you find in there is usually enough to solve the problem.

Next steps

The next thing I'm going to add to this VPN setup is a local unbound DNS resolver that will be offered to all clients.

Is there anything else you have in your setup and that I should consider adding to mine?

Things that work well with Tor

Tor is an anonymity network which allows its users to hide their IP address from the websites they connect to. In order to provide this level of anonymity however, it introduces latency into these connections, an unfortunate performance-privacy trade-off which means that few users choose to do all of their browsing through Tor.

Here are a few things that I have found work quite well through Tor. If there are any other interesting use cases I've missed, please leave a comment!

Tor setup

There are already great docs on how to install and configure Tor, so the only thing I would add is that having a Polipo proxy around is quite useful for those applications that support HTTP proxies but not SOCKS proxies.

On Debian, it's just a matter of installing the polipo package and then configuring it the way the Tor project used to recommend.

RSS feeds

The whole idea behind RSS feeds is that articles are downloaded in batch ahead of time. In other words, latency doesn't matter.

I use akregator to read blogs and the way to make it fetch articles over Tor is to change the KDE-wide proxy settings (via systemsettings) to a manual proxy of localhost on port 8008 (i.e. the local instance of Polipo).

Similarly, I use podget to automatically fetch podcasts through this cron job in /etc/cron.d/podget-francois:

0 12 * * 1-5 francois   http_proxy=http://localhost:8008/ https_proxy=http://localhost:8008/ nocache nice ionice -n7 /usr/bin/podget -s

Prior to that, I was using hpodder and had the following in ~/.hpodder/curlrc:

proxy=socks4a://localhost:9050

GnuPG

For those of us using the GNU Privacy Guard to exchange encrypted emails, keeping our public keyring up to date is important since it's the only way to ensure that revoked keys are taken into account. The script I use for this runs once a day and has the unfortunate side effect of revealing the contents of my address book to the keyserver I use.

Therefore, I figured that I should at least hide my IP address by putting the following in ~/.gnupg/gpg.conf:

keyserver-options http-proxy=http://127.0.0.1:8008

However, that tends to make key submission fail and so I created a key submission alias in my ~/.bashrc which avoids sending keys through Tor:

alias gpgsendkeys='gpg --send-keys --keyserver-options http-proxy=""'

Instant messaging

Communication via XMPP is another use case that's not affected much by a bit of extra latency.

To get Pidgin to talk to an XMPP server over Tor, simply open "Tools | Preferences" and set a SOCKS5 (not Tor/Privacy) proxy of localhost on port 9050.

GMail

Finally, I found that since I am running GMail in a separate browser profile, I can take advantage of GMail's excellent caching and preloading and run the whole thing over Tor by setting that entire browser profile to run its traffic through the Tor SOCKS proxy on port 9050.
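If that separate profile lives in Firefox, one way to pin the whole profile to Tor is to drop the following into the profile's user.js (these are the standard proxy preferences; socks_remote_dns keeps DNS lookups inside Tor as well):

```
user_pref("network.proxy.type", 1);
user_pref("network.proxy.socks", "127.0.0.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_remote_dns", true);
```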

The Perils of RAID and Full Disk Encryption on Ubuntu 12.04

I've been using disk encryption (via LUKS and cryptsetup) on Debian and Ubuntu for quite some time and it has worked well for me. However, while setting up full disk encryption for a new computer on a RAID1 partition, I discovered that there are a few major problems with RAID on Ubuntu.

My Setup: RAID and LUKS

Since I was setting up a new machine on Ubuntu 12.04 LTS (Precise Pangolin), I used the alternate CD (I burned ubuntu-12.04.3-alternate-amd64+mac.iso to a blank DVD) to get access to the full disk encryption options.

First, I created a RAID1 array to mirror the data on the two hard disks. Then, I used the partition manager built into the installer to set up an unencrypted boot partition (/dev/md0 mounted as /boot) and an encrypted root partition (/dev/md1 mounted as /) on the RAID1 array.

While I had done full disk encryption and mirrored drives before, I had never done them at the same time on Ubuntu or Debian.

The problem: cannot boot an encrypted degraded RAID

After setting up the RAID, I decided to test it by booting from each drive with the other one unplugged.

The first step was to ensure that the system is configured (via dpkg-reconfigure mdadm) to boot in "degraded mode".
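On Ubuntu, that reconfiguration boils down to making sure something like this ends up in /etc/initramfs-tools/conf.d/mdadm (the exact file location may vary between releases, so verify on your system):

```
BOOT_DEGRADED=true
```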

When I rebooted with a single disk though, I received an "evms_activate is not available" error message instead of the usual cryptsetup password prompt. The exact problem I ran into is best described in this comment (see this bug for context).

It turns out that booting degraded RAID arrays has been plagued with several problems.

My solution: an extra initramfs boot script to start the RAID array

The underlying problem is that the RAID1 array is not started automatically when it's missing a disk and so cryptsetup cannot find the UUID of the drive to decrypt (as configured in /etc/crypttab).

My fix, based on a script I was lucky enough to stumble on, lives in /etc/initramfs-tools/scripts/local-top/cryptraid:

#!/bin/sh
PREREQ="mdadm"
prereqs()
{
     echo "$PREREQ"
}
case $1 in
prereqs)
     prereqs
     exit 0
     ;;
esac

cat /proc/mdstat
mdadm --run /dev/md1
cat /proc/mdstat

After creating that file, remember to:

  1. make the script executable (using chmod a+x) and
  2. regenerate the initramfs (using dpkg-reconfigure linux-image-KERNELVERSION).

To make sure that the script is doing the right thing:

  1. press "Shift" while booting to bring up the Grub menu
  2. then press "e" to edit the default boot line
  3. remove the "quiet" and "splash" options from the kernel arguments
  4. press F10 to boot with maximum console output

You should see the RAID array stopped (look for the output of the first cat /proc/mdstat call) and then you should see output from a running degraded RAID array.

Backing up the old initramfs

If you want to be extra safe while testing this new initramfs, make sure you only reconfigure one kernel at a time (no update-initramfs -u -k all) and make a copy of the initramfs before you reconfigure the kernel:

cp /boot/initrd.img-KERNELVERSION-generic /boot/initrd.img-KERNELVERSION-generic.original

Then if you run into problems, you can go into the Grub menu, edit the default boot option and make it load the .original initramfs.

Presenting from a separate user account

While I suspect that professional speakers have separate presentation laptops that they use only to give talks, I don't do this often enough to justify the hassle and cost of a separate machine. However, I do use a separate user account to present from.

It allows me to focus on my presentation and not stress out about running into configuration problems or exposing private information. But mostly, I think it's about removing anything that could be distracting for the audience.

The great thing about having a separate user for this is that you can do whatever you want in your normal account and still know that the other account is ready to go and configured for presenting on a big screen.

Basics

The user account I use when giving talks is called presenter and it has the same password as my main user account, just to keep things simple. However, it doesn't need to belong to any of the UNIX groups that my main user account belongs to.

In terms of configuration, it looks like this:

  • power management and screen saver are turned off
  • all sound effects are turned off
  • window manager set to the default one (as opposed to a weird tiling one)
  • desktop notifications are turned off

Of course, this user account only has the software I need while presenting. You won't find an instant messaging, IRC or Twitter client running on there.

The desktop only has the icons I need for the presentation: slides and backup videos (in case the network is down and/or prevents me from doing a live demo).

Web browsers

While I usually keep my presentations in PDF format for maximum compatibility (you never know when you'll have to borrow someone else's laptop), I use web browsers all the time for demos, and several different ones to show that whatever I'm demoing works in all of them.

Each browser:

  • clears everything (cookies, history, cache, etc.) at the end of the session
  • has the site I want to demo as the homepage
  • only contains add-ons or extensions I need for the demos
  • has the minimal number of toolbars and doesn't have any bookmarks
  • has search suggestions turned off
  • never asks to remember passwords

Terminal and editors

Some of my talks feature live coding or running scripts, which is why I have my terminal and editor set to:

  • a large font size
  • a color scheme with lots of contrast

It's all about making sure that the audience can see everything and follow along easily.

Also, if you have the sl package installed system-wide, you'll probably want to put the following in your ~/.bashrc:

alias sl="ls"
alias LS="ls"

Rehearsal

It's very important to rehearse the whole presentation using this account to make sure that you have everything you need and that you are comfortable with the configuration (for example, the large font size).

If you have access to a projector or a digital TV, try connecting your laptop to it. This will ensure that you know how to change the resolution of the external monitor and to turn mirroring ON and OFF ahead of time (especially important if you never use the default window manager or desktop environment). I keep a shortcut to the display settings in the sidebar.

Localization

Another thing I like to do is to set my operating system and browser locales to those of the place where I am giving the talk, assuming that it's a western language I can understand to some extent.

It probably doesn't make a big difference, but I think it's a nice touch and a few people have commented on this in the past. My theory is that it might be less distracting to the audience if they are shown the browser UI and menus they see every day. I'd love to have other people's thoughts on this point though.

Also, pay attention to the timezone since it could be another source of distraction as audience members try to guess what timezone your computer is set to.

Anything else?

If you also use a separate account for your presentations, I'd be curious to know whether there are other things you've customised. If there's anything I missed, please leave a comment!

How many Australasian banks use HSTS?

HTTP Strict Transport Security is a simple mechanism that secure sites can use to protect their users against an sslstrip-style HTTPS-to-HTTP downgrade attack.

Typical attack

The typical HTTPS-to-HTTP downgrade attack looks like this:

  1. victim connects to a compromised wifi access point
  2. victim connects to bank.com using attacker's DNS resolver
  3. attacker directs victim to a local server proxying the bank.com homepage
  4. victim clicks on "online banking" link as usual not noticing that it's an HTTP link instead of the usual HTTPS link
  5. attacker mounts a man-in-the-middle attack over that HTTP online banking login page
  6. victim leaks credentials to attacker

You can watch a short video demo of this attack, but if you don't want to set any of this up on your server, it turns out you can buy a little USB device that does it all for you.

What HSTS does

The fix is simple: let the browser know that it should never connect to the online banking site over plain HTTP. It should automatically upgrade to an encrypted HTTPS connection.

How should a site let the browser know? By including an HTTP header in its responses:

Strict-Transport-Security: max-age=10886400

It works in Chrome, Firefox and Opera. Other browsers don't benefit from this protection, but the header doesn't interfere with them either, so anybody with an HTTPS-only site should make use of it.
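For example, on an Apache server with mod_headers enabled (a2enmod headers), a minimal sketch of the configuration, reusing the max-age value above, would be:

```
<IfModule mod_headers.c>
    Header always set Strict-Transport-Security "max-age=10886400"
</IfModule>
```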

How many banks use it?

Given how easy it is to implement (and the fact that it's been in browsers since Chrome 4 and Firefox 4), how many of the Australasian banks actually make use of it? After all, almost all of the documentation explaining the motivation behind HSTS uses online banking as an example.
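A single site can also be checked by hand; no output means no HSTS header:

```
curl -sI https://fnc.asbbank.co.nz/1/User/LogOn | grep -i strict-transport-security
```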

Here are all of the New Zealand banks I tested:

Bank Online Banking URL Header?
ASB https://fnc.asbbank.co.nz/1/User/LogOn YES!
ANZ https://secure.anz.co.nz/IBCS/pgLogin no
BankDirect https://vault.bankdirect.co.nz/default.asp no
BNZ https://www.bnz.co.nz/ib/app/login no
HSBC https://www.hsbc.co.nz/1/2/HUB_IDV2/IDV_EPP... no
Kiwibank https://www.ib.kiwibank.co.nz/ no
Rabobank https://secure1.rabodirect.co.nz/exp/authenticationDGPEN.jsp no
SBS https://sbsbanking.sbs.net.nz/secure/ no
TSB https://homebank.tsbbank.co.nz/online/ no
Westpac https://sec.westpac.co.nz/IOLB/Login.jsp no

and the Australian banks I looked at:

Bank Online Banking URL Header?
ANZ https://www.anz.com/INETBANK/bankmain.asp no
Bank of China https://ebs.boc.cn/BocnetClient/LoginFrameAbroad.do?_locale=en_US no
Bank of Melbourne https://ibanking.bankofmelbourne.com.au/ibank/loginPage.action no
Bankwest https://ibs.bankwest.com.au/BWLogin/rib.aspx no
Bendigobank https://www.bendigobank.com.au/banking/BBLIBanking/ no
Bank of Queensland https://www.ib.boq.com.au/boqbl no
Citibank https://www.citibank.com.au/AUGCB/JSO/signon/DisplayUsernameSignon.do no
Commonwealth Bank https://www.my.commbank.com.au/netbank/Logon/Logon.aspx no
Heritage Bank https://online.hbs.net.au/hbsv47/ntv471.asp?wci=entry no
HSBC https://www.hsbc.com.au/1/2/HUB_IDV2/IDV_EPP... no
Mebank https://ib.mebank.com.au/ME no
NAB https://ib.nab.com.au/nabib/index.jsp no
Rabobank https://secure.rabodirect.com.au/exp/policyenforcer/pages/loginB2CDGPEN.jsf?login no
St. George https://ibanking.stgeorge.com.au/ibank/loginPage.action no
Suncorp Bank https://internetbanking.suncorpbank.com.au/ no
Westpac https://online.westpac.com.au/esis/Login/SrvPage no

Conclusion

So, well done ASB! Not only do you stand out from your peers, but you also allowed New Zealand to beat Australia in terms of HSTS coverage :)

Here's the script I used to generate these results: https://github.com/fmarier/hsts-check. Feel free to leave a comment or email me if I missed an Australasia-based banking site.

Server Migration Plan

I recently had to migrate the main Libravatar server to a new virtual machine. In order to minimize risk and downtime, I decided to write a migration plan ahead of time.

I am sharing this plan here in case it gives any ideas to others who have to go through a similar process.

Prepare DNS

  • Change the TTL on the DNS entry for libravatar.org to 3600 seconds.
  • Remove the mirrors I don't control from the DNS load balancer (cdn and seccdn).
  • Remove the main server from cdn and seccdn in DNS.

Preparing the new server

  • Setup the new server.
  • Copy the database from the old site and restore it.
  • Copy /var/lib/libravatar from the old site.
  • Hack my local /etc/hosts file to point to the new server's IP address:

    xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org
    
  • Test all functionality on the new site.

Preparing the old server

  • Prepare a static "under migration" Apache config in /etc/apache2/sites-enabled.static/:

    <VirtualHost *:80>
        RewriteEngine On
        RewriteRule ^ https://www.libravatar.org [redirect=301,last]
    </VirtualHost>
    
    <VirtualHost *:443>
        SSLEngine on
        SSLProtocol TLSv1
        SSLHonorCipherOrder On
        SSLCipherSuite RC4-SHA:HIGH:!kEDH
    
        SSLCertificateFile /etc/libravatar/www.crt
        SSLCertificateKeyFile /etc/libravatar/www.pem
        SSLCertificateChainFile /etc/libravatar/www-chain.pem
    
        RewriteEngine On
        RewriteRule ^ /var/www/migration.html [last]
    
        <Directory /var/www>
            Allow from all
            Options -Indexes
        </Directory>
    </VirtualHost>
    
  • Put this static file in /var/www/migration.html:

    <html>
    <body>
    <p>We are migrating to a new server. See you soon!</p>
    <p>- <a href="http://identi.ca/libravatar">@libravatar</a></p>
    </body>
    </html>
    
  • Enable the rewrite module:

    a2enmod rewrite
    
  • Prepare an Apache config proxying to the new server in /etc/apache2/sites-enabled.proxy/:

    <VirtualHost *:80>
        RewriteEngine On
        RewriteRule ^ https://www.libravatar.org [redirect=301,last]
    </VirtualHost>
    
    <VirtualHost *:443>
        SSLEngine on
        SSLProtocol TLSv1
        SSLHonorCipherOrder On
        SSLCipherSuite RC4-SHA:HIGH:!kEDH
    
        SSLCertificateFile /etc/libravatar/www.crt
        SSLCertificateKeyFile /etc/libravatar/www.pem
        SSLCertificateChainFile /etc/libravatar/www-chain.pem
    
        SSLProxyEngine on
        ProxyPass / https://www.libravatar.org/
        ProxyPassReverse / https://www.libravatar.org/
    </VirtualHost>
    
  • Enable the proxy-related modules for Apache:

    a2enmod proxy
    a2enmod proxy_connect
    a2enmod proxy_http
    

Migrating servers

  • Tweet and dent about the upcoming migration.

  • Enable the static file config on the old server (disabling the Django app).

  • Copy the database from the old server and restore it on the new server.

  • Copy /var/lib/libravatar from the old server to the new one.

Disable mirror sync

  • Log into each mirror and comment out the sync cron jobs in /etc/cron.d/libravatar-slave.
  • Make sure mirrors are no longer able to connect to the old server by deleting /var/lib/libravatar/master/.ssh/authorized_keys on the old server.
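
After commenting the jobs out, /etc/cron.d/libravatar-slave on a mirror looks something like this (the job line shown is illustrative; the point is the leading `#`):

```
# */15 * * * * libravatar-slave /usr/bin/libravatar-slave-sync
```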

Testing the main site

  • Hack my local /etc/hosts file to point to the new server's IP address:

    xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org
    
  • Test all functionality on the new site.

  • If testing is successful, update DNS to point to the new server with a short TTL (in case we need to revert).

  • Enable the proxy config on the old server.

  • Hack my local /etc/hosts file to point to the old server's IP address.
  • Test basic functionality going through the proxy.
  • Remove local /etc/hosts hacks.
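
Incidentally, these hosts hacks are easier to add and remove reliably if the override line carries a marker comment. A small sketch, run here against a scratch file standing in for /etc/hosts:

```shell
set -e
HOSTS_FILE=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

# Add the override (IP elided as in the plan above), tagged with a marker.
echo 'xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org # migration-hack' >> "$HOSTS_FILE"

# ... test the site ...

# Remove every line carrying the marker in one go.
sed -i '/# migration-hack$/d' "$HOSTS_FILE"
cat "$HOSTS_FILE"   # back to just the localhost line
```

On the real machine, the same sed invocation run as root against /etc/hosts undoes all of the hacks at once.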

Re-enable mirror sync

  • Build a new libravatar-slave package with an updated known_hosts file for the new server.
  • Log into each server I control and update that package.
  • Test the connection to the master (hacking /etc/hosts on the mirror if needed):

    sudo -u libravatar-slave ssh libravatar-master@0.cdn.libravatar.org
    
  • Uncomment the sync cron jobs in /etc/cron.d/libravatar-slave.

  • An hour later, make sure that new images are copied over and that the TLS certs are still working.
  • Remove /etc/hosts hacks from all mirrors.

Post migration steps

  • Tweet and dent about the fact that the migration was successful.
  • Send a test email to the support address included in the tweet/dent.

  • Take a backup of config files and data on the old server in case I forgot to copy something to the new one.

  • Get in touch with mirror owners to tell them to update libravatar-slave package and test ssh configuration.

  • Add third-party controlled mirrors back to the DNS load-balancer once they are up to date.

  • A few days later, change the TTL for the main site back to 43200 seconds.

  • A week later, kill the proxy on the old server by shutting it down.
Debugging Gearman configuration

Gearman is a queuing system that has been in Debian for a long time and is quite reliable.

However, I ran into problems when upgrading a server from Debian squeeze to wheezy. Here's how I debugged my Gearman setup.

Log verbosity

First of all, I started by increasing the verbosity level of the daemon by adding --verbose=INFO to /etc/default/gearman-job-server (the possible values of the verbose option are in the libgearman documentation) and restarting the daemon:

/etc/init.d/gearman-job-server restart
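
For reference, assuming the stock Debian defaults file (which passes its PARAMS variable straight through to gearmand), /etc/default/gearman-job-server ends up containing:

```
PARAMS="--verbose=INFO"
```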

I opened a second terminal to keep an eye on the logs:

tail -f /var/log/gearman-job-server/gearman.log

Listing available workers

Next, I registered a very simple worker:

gearman -w -f mytest cat

and made sure it was connected properly by telnetting into the Gearman process:

telnet localhost 4730

and listing all currently connected workers using the workers command (one of the commands available in the Gearman TEXT protocol).

There should be an entry similar to this one (file descriptor, client IP address, client ID, then the list of registered functions):

30 127.0.0.1 - : mytest

Because there are no exit or quit commands in the TEXT protocol, you need to terminate the telnet connection like this:

  1. Press Return
  2. Press Ctrl + ]
  3. Press Return
  4. Type quit at the telnet prompt and press Return.

Finally, I sent some input to the simple worker I set up earlier:

echo "hi there" | gearman -f mytest

and got my input repeated on the terminal:

hi there

Gearman bug

I traced my problems down to this error message when I sent input to the worker:

gearman: gearman_client_run_tasks : connect_poll(Connection refused)
getsockopt() failed -> libgearman/connection.cc:104

It turns out that it is a known bug that was fixed upstream but still affects Debian wheezy and some versions of Ubuntu. The bug report is pretty unhelpful since the work-around is hidden away in the comments of this "invalid" answer: be explicit about the hostname and port number in both gearman calls.

So I was able to make it work like this:

gearman -w -h 127.0.0.1 -p 4730 -f mytest cat
echo "hi there" | gearman -h 127.0.0.1 -p 4730 -f mytest

where the hostname matches exactly what's in /etc/default/gearman-job-server.