Using OpenVPN on iOS and OSX

I have written instructions on how to connect to your own OpenVPN server using Network Manager as well as Android.

Here is how to do it on iOS and OSX assuming you have followed my instructions for the server setup.

Generate new keys

From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:

./build-key iphone   # "iphone" as Name, no password

and for your laptop:

./build-key osx      # "osx" as Name, no password

Using OpenVPN Connect on iOS

The app you need to install from the App Store is OpenVPN Connect.

Once it's installed, connect your phone to your computer and transfer the following files using iTunes:

  • ca.crt
  • iphone.crt
  • iphone.key
  • iphone.ovpn
  • ta.key

You should then be able to select it after launching the app. See the official FAQ if you run into any problems.

iphone.ovpn is a configuration file that you need to supply since the OpenVPN Connect app doesn't have a configuration interface. You can use this script to generate it or write it from scratch using this template.

On Linux, you can also create a configuration file using Network Manager 1.2 or later with the following command:

nmcli connection export hafnarfjordur > iphone.ovpn

though that didn't quite work in my experience.

Here is the config I successfully used to connect to my server (replace my-server.example.com with your own server's hostname):

remote my-server.example.com 1194
ca ca.crt
cert iphone.crt
key iphone.key
cipher AES-256-CBC
auth SHA384
comp-lzo yes
proto udp
tls-remote server
remote-cert-tls server
ns-cert-type server
tls-auth ta.key 1

Using Viscosity on Mac OSX

One of the possible OpenVPN clients you can use on OSX is Viscosity.

Here are the settings you'll need to change when setting up a new VPN connection:

  • General
    • Remote server:
  • Authentication
    • Type: SSL/TLS client
    • CA: ca.crt
    • Cert: osx.crt
    • Key: osx.key
    • Tls-Auth: ta.key
    • direction: 1
  • Options
    • peer certificate: require server nsCertType
    • compression: turn LZO on
  • Networking
    • send all traffic on VPN
  • Advanced
    • add the following extra OpenVPN configuration commands:

      cipher AES-256-CBC
      auth SHA384

Using DNSSEC and DNSCrypt in Debian

While there is real progress being made towards eliminating insecure HTTP traffic, DNS is a fundamental Internet service that still usually relies on unauthenticated cleartext. There are however a few efforts to try and fix this problem. Here is the setup I use on my Debian laptop to make use of both DNSSEC and DNSCrypt.


DNSCrypt

DNSCrypt was created to enable end-users to encrypt the traffic between themselves and their chosen DNS resolver.

To switch away from your ISP's default DNS resolver to a DNSCrypt resolver, simply install the dnscrypt-proxy package and then set it as the default resolver either in /etc/resolv.conf:


if you are using a static network configuration or in /etc/dhcp/dhclient.conf:

supersede domain-name-servers;

if you rely on dynamic network configuration via DHCP.

There are two things you might want to keep in mind when choosing your DNSCrypt resolver:

  • whether or not they keep any logs of the DNS traffic
  • whether or not they support DNSSEC

I have personally selected a resolver located in Iceland by setting the following in /etc/default/dnscrypt-proxy:


DNSSEC

While DNSCrypt protects the confidentiality of our DNS queries, it doesn't give us any assurance that the results of such queries are the right ones. In order to authenticate results in that way and prevent DNS poisoning, a hierarchical cryptographic system was created: DNSSEC.

In order to enable it, I have set up a local unbound DNSSEC resolver on my machine and pointed /etc/resolv.conf (or /etc/dhcp/dhclient.conf) to my local unbound installation.

Then I put the following in /etc/unbound/unbound.conf.d/dnscrypt.conf:

    # Remove localhost from the donotquery list
    do-not-query-localhost: no

    forward-zone:
        name: "."
        # forward-addr: <address of your dnscrypt-proxy listener>

to stop unbound from resolving DNS directly and to instead go through the encrypted DNSCrypt proxy.


In my experience, unbound and dnscrypt-proxy are fairly reliable but they eventually get confused (presumably) by network changes and start returning errors.

The ugly but dependable work-around I have found is to create a cron job at /etc/cron.d/restart-dns (note that cron ignores files in that directory whose names contain a dot) that restarts both services once a day:

0 3 * * *    root    /usr/sbin/service dnscrypt-proxy restart
1 3 * * *    root    /usr/sbin/service unbound restart

Captive portals

The one remaining problem I need to solve has to do with captive portals. This can be quite annoying when travelling because it requires me to use the portal's DNS resolver in order to connect to the splash screen that unlocks the wifi connection.

The dnssec-trigger package looked promising but when I tried it on my jessie laptop, it wasn't particularly reliable.

My temporary work-around is to comment out this line in /etc/dhcp/dhclient.conf whenever I need to connect to such annoying wifi networks:

#supersede domain-name-servers;

If you've found a better solution to this problem, please leave a comment!

How Safe Browsing works in Firefox

Firefox has had support for Google's Safe Browsing since 2005 when it started as a stand-alone Firefox extension. At first it was only available in the USA, but it was opened up to the rest of the world in 2006 and moved to the Google Toolbar. It then got integrated directly into Firefox 2.0 before the public launch of the service in 2007.

Many people seem confused by this phishing and malware protection system and while there is a pretty good explanation of how it works on our support site, it doesn't go into technical details. This will hopefully be of interest to those who have more questions about it.

Browsing Protection

The main part of the Safe Browsing system is the one that watches for bad URLs as you're browsing. Browsing protection currently protects users from:

  • malware sites
  • sites hosting unwanted software
  • phishing attacks

If a Firefox user attempts to visit one of these sites, a warning page will show up instead.

The first two warnings can be toggled using the browser.safebrowsing.malware.enabled preference (in about:config) whereas the last one is controlled by browser.safebrowsing.enabled.

List updates

It would be too slow (and privacy-invasive) to contact a trusted server every time the browser wants to establish a connection with a web server. Instead, Firefox downloads a list of bad URLs every 30 minutes from the Safe Browsing server and does a lookup against its local database before displaying a page to the user.

Downloading the entire list of sites flagged by Safe Browsing would be impractical due to its size so the following transformations are applied:

  1. each URL on the list is canonicalized,
  2. then hashed,
  3. of which only the first 32 bits of the hash are kept.
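
As a rough sketch (in Python, with an invented helper name, and assuming the URL has already been canonicalized), the transformation looks like this:

```python
import hashlib

def url_hash_prefix(canonical_url: str) -> bytes:
    """Hash an already-canonicalized URL and keep only the
    first 32 bits (4 bytes) of the SHA-256 digest."""
    full_hash = hashlib.sha256(canonical_url.encode("utf-8")).digest()
    return full_hash[:4]

# The local database only stores these short prefixes,
# which keeps the downloaded list small.
prefix = url_hash_prefix("malware.example.com/")
```

The real canonicalization rules (lowercasing, percent-decoding, host/path variations) are more involved than this sketch suggests.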

The lists that are requested from the Safe Browsing server and used to flag pages as malware/unwanted or phishing can be found in urlclassifier.malwareTable and urlclassifier.phishTable respectively.

If you want to see some debugging information in your terminal while Firefox is downloading updated lists, turn on browser.safebrowsing.debug.

Once downloaded, the lists can be found in the cache directory:

  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/ on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/ on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\ on Windows

Resolving partial hash conflicts

Because the Safe Browsing database only contains partial hashes, it is possible for a safe page to share the same 32-bit hash prefix as a bad page. Therefore when a URL matches the local list, the browser needs to know whether or not the rest of the hash matches the entry on the Safe Browsing list.

In order to resolve such conflicts, Firefox requests from the Safe Browsing server (browser.safebrowsing.provider.mozilla.gethashURL) all of the hashes that start with the affected 32-bit prefix and adds these full-length hashes to its local database. Turn on browser.safebrowsing.debug to see some debugging information on the terminal while these "completion" requests are made.

If the current URL doesn't match any of these full hashes, the load proceeds as normal. If it does match one of them, a warning interstitial page is shown and the load is canceled.
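
Putting the prefix lookup and the completion request together, the logic can be sketched like this (all names are invented for illustration):

```python
import hashlib

def is_flagged(url, local_prefixes, request_completions):
    """Sketch of the lookup: local_prefixes is the set of 4-byte
    prefixes from the local database, and request_completions stands
    in for the network request that returns every full-length hash
    starting with a given prefix."""
    full_hash = hashlib.sha256(url.encode("utf-8")).digest()
    prefix = full_hash[:4]
    if prefix not in local_prefixes:
        # No local match: the page loads without any network request.
        return False
    # Partial match: only a full-length hash match flags the page.
    return full_hash in request_completions(prefix)
```

Note how a page that merely collides on the 32-bit prefix still loads normally after the completion check.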

Download Protection

The second part of the Safe Browsing system protects users against malicious downloads. It was launched in 2011, implemented in Firefox 31 on Windows and enabled in Firefox 39 on Mac and Linux.

It roughly works like this:

  1. Download the file.
  2. Check the main URL, referrer and redirect chain against a local blocklist (urlclassifier.downloadBlockTable) and block the download in case of a match.
  3. On Windows, if the binary is signed, check the signature against a local whitelist (urlclassifier.downloadAllowTable) of known good publishers and release the download if a match is found.
  4. If the file is not a binary file then release the download.
  5. Otherwise, send the binary file's metadata to the remote application reputation server (browser.safebrowsing.downloads.remote.url) and block the download if the server indicates that the file isn't safe.
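
The five steps above can be sketched as a simple decision function (all names invented for illustration):

```python
def should_block_download(url_chain, local_blocklist,
                          signed_by_known_publisher, is_binary,
                          remote_lookup):
    """Sketch of the download protection decision flow.

    url_chain: main URL, referrer and redirect chain.
    remote_lookup: stands in for the application reputation request;
    it returns True when the server says the file isn't safe."""
    # Step 2: block on any local blocklist match.
    if any(url in local_blocklist for url in url_chain):
        return True
    # Step 3: release binaries signed by a whitelisted publisher.
    if signed_by_known_publisher:
        return False
    # Step 4: release non-binary files without a remote lookup.
    if not is_binary:
        return False
    # Step 5: only now is any metadata sent to the remote server.
    return remote_lookup()
```

The ordering matters: the remote lookup in step 5 only ever happens for unsigned binaries that didn't match the local lists.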

Blocked downloads can be unblocked by right-clicking on them in the download manager and selecting "Unblock".

While the download protection feature is automatically disabled when malware protection (browser.safebrowsing.malware.enabled) is turned off, it can also be disabled independently via the browser.safebrowsing.downloads.enabled preference.

Note that Step 5 is the only point at which any information about the download is shared with Google. That remote lookup can be suppressed via the browser.safebrowsing.downloads.remote.enabled preference for those users concerned about sending that metadata to a third party.

Types of malware

The original application reputation service would protect users against "dangerous" downloads, but it has recently been expanded to also warn users about unwanted software and software that's not commonly downloaded.

These various warnings can be turned on and off in Firefox through the following preferences:

  • browser.safebrowsing.downloads.remote.block_dangerous
  • browser.safebrowsing.downloads.remote.block_dangerous_host
  • browser.safebrowsing.downloads.remote.block_potentially_unwanted
  • browser.safebrowsing.downloads.remote.block_uncommon

and tested using Google's test page.

If you want to see how often each "verdict" is returned by the server, you can have a look at the telemetry results for Firefox Beta.


Privacy

One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe.

While this was an option in version 1 of the Safe Browsing protocol (as disclosed in their privacy policy at the time), support for this "enhanced mode" was removed in Firefox 3 and the version 1 server was decommissioned in late 2011 in favor of version 2 of the Safe Browsing API which doesn't offer this type of real-time lookup.

Google explicitly states that the information collected as part of operating the Safe Browsing service "is only used to flag malicious activity and is never used anywhere else at Google" and that "Safe Browsing requests won't be associated with your Google Account". In addition, Firefox adds a few privacy protections:

  • Query string parameters are stripped from URLs we check as part of the download protection feature.
  • Cookies set by the Safe Browsing servers to protect the service from abuse are stored in a separate cookie jar so that they are not mixed with regular browsing/session cookies.
  • When requesting complete hashes for a 32-bit prefix, Firefox throws in a number of extra "noise" entries to obfuscate the original URL further.

On balance, we believe that most users will want to keep Safe Browsing enabled, but we also make it easy for users with particular needs to turn it off.

Learn More

If you want to learn more about how Safe Browsing works in Firefox, you can find all of the technical details on the Safe Browsing and Application Reputation pages of the Mozilla wiki or you can ask questions on our mailing list.

Google provides some interesting statistics about what their systems detect in their transparency report and offers a tool to find out why a particular page has been blocked. Some information on how phishing sites are detected is also available on the Google Security blog, but for more detailed information about all parts of the Safe Browsing system, see the following papers:

Extracting Album Covers from the iTunes Store

The iTunes store is a good source of high-quality album cover art. If you search for the album on Google Images, then visit the page and right-click on the cover image, you will get a 170 px by 170 px image. Change the 170x170 in the URL to one of the following values to get a higher resolution image:

  • 170x170
  • 340x340
  • 600x600
  • 1200x1200
  • 1400x1400
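
For example, a small Python helper (with a made-up example URL) could rewrite the size for you:

```python
def resize_artwork_url(url: str, size: str = "1400x1400") -> str:
    """Replace the default 170x170 size component of an iTunes
    artwork URL with a higher-resolution one."""
    return url.replace("170x170", size)

# Hypothetical example URL; real iTunes artwork URLs follow a
# similar pattern with the size embedded in the path.
resize_artwork_url("https://example.com/cover/170x170bb.jpg", "600x600")
```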

Alternatively, use this handy webapp to query the iTunes search API and get to the source image directly.

Streamzap remotes and evdev in MythTV

Modern versions of Linux and MythTV enable infrared remote controls without the need for lirc. Here's how I migrated my Streamzap remote to evdev.

Installing packages

In order to avoid conflicts between evdev and lirc, I started by removing lirc and its config:

apt purge lirc

and then I installed this tool:

apt install ir-keytable

Remapping keys

While my Streamzap remote works out of the box with kernel 3.16, the keycodes that it sends to Xorg are not the ones that MythTV expects.

I therefore copied the existing mapping:

cp /lib/udev/rc_keymaps/streamzap /home/mythtv/

and changed it to this:

0x28c0 KEY_0
0x28c1 KEY_1
0x28c2 KEY_2
0x28c3 KEY_3
0x28c4 KEY_4
0x28c5 KEY_5
0x28c6 KEY_6
0x28c7 KEY_7
0x28c8 KEY_8
0x28c9 KEY_9
0x28ca KEY_ESC
0x28cb KEY_MUTE # |
0x28cc KEY_UP
0x28ce KEY_DOWN
0x28d0 KEY_UP
0x28d1 KEY_LEFT
0x28d2 KEY_ENTER
0x28d3 KEY_RIGHT
0x28d4 KEY_DOWN
0x28d5 KEY_M
0x28d6 KEY_ESC
0x28d7 KEY_L
0x28d8 KEY_P
0x28d9 KEY_ESC
0x28da KEY_BACK # <
0x28db KEY_FORWARD # >
0x28dc KEY_R
0x28e0 KEY_D
0x28e1 KEY_I
0x28e2 KEY_END
0x28e3 KEY_A

The complete list of all EV_KEY keycodes can be found in the kernel's input-event-codes.h header file.

The following command will write this mapping to the driver:

/usr/bin/ir-keytable -w /home/mythtv/streamzap -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00

and the new keycodes should take effect once MythTV is restarted.

Applying the mapping at boot

While the naïve solution is to apply the mapping at boot (for example, by sticking it in /etc/rc.local), that only works if the right modules are loaded before rc.local runs.

A much better solution is to write a udev rule so that the mapping is written after the driver is loaded.

I created /etc/udev/rules.d/streamzap.rules with the following:

# Configure remote control for MythTV
ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -c -w /home/mythtv/streamzap -D 1000 -P 250 -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00"

and got the vendor and product IDs using:

grep '^[IN]:' /proc/bus/input/devices

The -D and -P parameters control what happens when a button on the remote is held down and the keypress must be repeated. These delays are in milliseconds.

Linux kernel module options on Debian

Linux kernel modules often have options that can be set. Here's how to make use of them on Debian-based systems, using the i915 Intel graphics driver as an example.

To get the list of all available options:

modinfo -p i915

To check the current value of a particular option:

cat /sys/module/i915/parameters/enable_ppgtt

To give that option a value when the module is loaded, create a new /etc/modprobe.d/i915.conf file and put the following in it:

options i915 enable_ppgtt=0

and then re-generate the initial RAM disks:

update-initramfs -u -k all

Alternatively, that option can be set at boot time on the kernel command line by setting the following in /etc/default/grub:


and then updating the grub config:

update-grub

Tweaking Cookies For Privacy in Firefox

Cookies are an important part of the Web since they are the primary mechanism that websites use to maintain user sessions. Unfortunately, they are also abused by surveillance marketing companies to follow you around the Web. Here are a few things you can do in Firefox to protect your privacy.

Cookie Expiry

Cookies are sent from the website to your browser via a Set-Cookie HTTP header on the response. It looks like this:

HTTP/1.1 200 OK
Date: Mon, 07 Dec 2015 16:55:43 GMT
Server: Apache
Set-Cookie: SESSIONID=65576c6c64206e6f2c657920756f632061726b636465742065686320646f2165
Content-Length: 2036
Content-Type: text/html;charset=UTF-8

When your browser sees this, it saves that cookie for the given hostname and keeps it until you close the browser.

Should a site want to persist their cookie for longer, they can add an Expires attribute:

Set-Cookie: SESSIONID=65576c...; expires=Tue, 06-Dec-2016 22:38:26 GMT

in which case the browser will retain the cookie until the server-provided expiry date (which could be in a few years). Of course, that's if you don't instruct your browser to do things differently.

In order to change your cookie settings, you must open the Firefox preferences, click on "Privacy" and then choose "Use custom settings for history" under the "History" heading.

There, you will have the ability to turn off cookies entirely (network.cookie.cookieBehavior = 2), which I don't recommend you do in your browser since you won't be able to log in anywhere. On the other hand, turning off cookies is what I do (and recommend) in Thunderbird since I can't think of a legitimate reason for an email to leave a cookie in my mail client.

Another control you'll find there is "Keep until" which defaults to honoring the server-provided expiry ("they expire" aka network.cookie.lifetimePolicy = 0) or making them expire at the end of the browsing session ("I close Firefox" aka network.cookie.lifetimePolicy = 2).

A third option becomes available if you type about:config into your URL bar and look for the network.cookie.lifetimePolicy preference. Setting this to 3 will honor the server-provided expiry up to a maximum lifetime of 90 days. You can also change that 90-day cap to anything you want via the network.cookie.lifetime.days preference.
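
In other words, with that policy the effective expiry is simply the earlier of the server-provided date and the configured cap, which a short sketch makes concrete:

```python
from datetime import datetime, timedelta

def effective_expiry(server_expiry: datetime, now: datetime,
                     lifetime_days: int = 90) -> datetime:
    """With network.cookie.lifetimePolicy = 3, the cookie expires at
    the server-provided date, but never more than lifetime_days
    (network.cookie.lifetime.days) from now."""
    cap = now + timedelta(days=lifetime_days)
    return min(server_expiry, cap)
```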

Regardless of the settings you choose, you can always tell Firefox to clear cookies when you close it by selecting the "Clear history when Firefox closes" checkbox, clicking the "Settings" button and making sure that "Cookies" is selected (privacy.clearOnShutdown.cookies = true). This could be useful for example if you'd like to ensure that cookies never last longer than 5 days but are also cleared whenever you shut down Firefox.

Third-Party Cookies

So far, we've only looked at first-party cookies: the ones set by the website you visit and which are typically used to synchronize your login state with the server.

There is however another kind: third-party cookies. These ones are set by the third-party resources that a page loads. For example, if a page loads JavaScript from a third-party ad network, you can be pretty confident that they will set their own cookie in order to build a profile on you and serve you "better and more relevant ads".

Controlling Third-Party Cookies

If you'd like to opt out of these, you have a couple of options. The first one is to turn off third-party cookies entirely by going back into the Privacy preferences and selecting "Never" next to the "Accept third-party cookies" setting (network.cookie.cookieBehavior = 1). Unfortunately, turning off third-party cookies entirely tends to break a number of sites which rely on this functionality (for example, as part of their login process).

A more forgiving option is to accept third-party cookies only for sites which you have actually visited directly. For example, if you visit Facebook and login, you will get a cookie from them. Then when you visit other sites which include Facebook widgets they will not recognize you unless you allow cookies to be sent in a third-party context. To do that, choose the "From visited" option (network.cookie.cookieBehavior = 3). However, note that a few payment gateways are still relying on arbitrary third-party cookies and will break unless you keep the default (network.cookie.cookieBehavior = 0).

In addition to this setting, you can also choose to make all third-party cookies automatically expire when you close Firefox by setting the network.cookie.thirdparty.sessionOnly option to true in about:config.

Other Ways to Limit Third-Party Cookies

Another way to limit undesirable third-party cookies is to tell the browser to avoid connecting to trackers in the first place. This functionality is now built into Private Browsing mode and enabled by default. To enable it outside of Private Browsing too, simply go into about:config and set privacy.trackingprotection.enabled to true.

You could also install the EFF's Privacy Badger add-on which uses heuristics to detect and block trackers, unlike Firefox tracking protection which uses a blocklist of known trackers.

My Recommended Settings

On my work computer I currently use the following:

network.cookie.cookieBehavior = 0
network.cookie.lifetimePolicy = 3
network.cookie.lifetime.days = 5
network.cookie.thirdparty.sessionOnly = true
privacy.trackingprotection.enabled = true

which allows me to stay logged into most sites for the whole week (no matter how often I restart Firefox Nightly) while limiting tracking and other undesirable cookies as much as possible.

How Tracking Protection works in Firefox

Firefox 42, which was released last week, introduced a new feature in its Private Browsing mode: tracking protection.

If you are interested in how the list of blocked trackers is put together and then used in Firefox, this post is for you.

Safe Browsing lists

There are many possible ways to download URL lists to the browser and check against that list before loading anything. One of those is already implemented as part of our malware and phishing protection. It uses the Safe Browsing v2.2 protocol.

In a nutshell, the way that this works is that each URL on the block list is hashed (using SHA-256) and then that list of hashes is downloaded by Firefox and stored into a data structure on disk:

  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/mozstd-track* on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/mozstd-track* on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\mozstd-track* on Windows

This sbdbdump script can be used to extract the hashes contained in these files and will output something like this:

$ ~/sbdbdump/ -v .
- Reading sbstore: mozstd-track-digest256
[mozstd-track-digest256] magic 1231AF3B Version 3 NumAddChunk: 1 NumSubChunk: 0 NumAddPrefix: 0 NumSubPrefix: 0 NumAddComplete: 1696 NumSubComplete: 0
[mozstd-track-digest256] AddChunks: 1445465225
[mozstd-track-digest256] SubChunks:
[mozstd-track-digest256] addComplete[chunk:1445465225] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5
[mozstd-track-digest256] MD5: 81a8becb0903de19351427b24921a772

The name of the blocklist being dumped here (mozstd-track-digest256) is set in the urlclassifier.trackingTable preference which you can find in about:config. The most important part of the output shown above is the addComplete line which contains a hash that we will see again in a later section.

List lookups

Once it's time to load a resource, Firefox hashes the URL, as well as a few variations of it, and then looks for it in the local lists.

If there's no match, then the load proceeds. If there's a match, then we do an additional check against a pairwise allowlist.

The pairwise allowlist (hardcoded in the urlclassifier.trackingWhitelistTable pref) is designed to encode what we call "entity relationships". The list groups related domains together for the purpose of checking whether a load is first or third party (e.g. a site and its CDN domain both belong to the same entity).

Entries on this list (named mozstd-trackwhite-digest256) look like this:

which translates to "if you're on one of this entity's sites, then don't block resources belonging to that same entity".

If there's a match on the second list, we don't block the load. It's only when we get a match on the first list and not the second one that we go ahead and cancel the network load.
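
The two-list decision can be sketched like this (hypothetical names; entity_map stands in for the entity list's grouping of domains under their owner):

```python
def should_cancel_load(resource_host, page_host,
                       tracking_blocklist, entity_map):
    """Sketch of the tracking protection decision: cancel the load
    only when the resource is on the blocklist AND does not belong
    to the same entity as the page being visited."""
    if resource_host not in tracking_blocklist:
        return False
    owner = entity_map.get(resource_host)
    # A match on the entity (allow) list means the load is
    # effectively first-party: don't block it.
    if owner is not None and owner == entity_map.get(page_host):
        return False
    return True
```

(The real implementation compares hashes of canonicalized URLs rather than bare hostnames, as described above.)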

If you visit our test page, you will see tracking protection in action with a shield icon in the URL bar. Opening the developer tool console will expose the URL of the resource that was blocked:

The resource at "" was blocked because tracking protection is enabled.

Creating the lists

The blocklist is created by Disconnect according to their definition of tracking.

The Disconnect list is on their Github page, but the copy we use in Firefox is the copy we have in our own repository. Similarly the Disconnect entity list is from here but our copy is in our repository. Should you wish to be notified of any changes to the lists, you can simply subscribe to this Atom feed.

To convert this JSON-formatted list into the binary format needed by the Safe Browsing code, we run a custom list generation script whenever the list changes on GitHub.

If you run that script locally using the same configuration as our server stack, you can see the conversion from the original list to the binary hashes.

Here's a sample entry from the mozstd-track-digest256.log file:

[m] >>
[hash] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5

and one from mozstd-trackwhite-digest256.log:

[entity] Twitter >> (canonicalized), hash a8e9e3456f46dbe49551c7da3860f64393d8f9d96f42b5ae86927722467577df

This in combination with the sbdbdump script mentioned earlier, will allow you to audit the contents of the local lists.

Serving the lists

The way that the binary lists are served to Firefox is through a custom server component written by Mozilla: shavar.

Every hour, Firefox requests updates from the shavar server. If new data is available, then the whole list is downloaded again. Otherwise, all it receives in return is an empty 204 response.

To replicate how Firefox downloads the list, you can use this download script to ask the server for a copy of the full TP list:

$ ./

and then follow the URL redirection to get the actual list payload from the CDN:

$ wget

Once you've downloaded that binary file, you can examine its content using this extractor script:

$ ./ 1445465225
Parsing a 54294-byte response file
Processing control line...
Add chunk 1445465225 contains 54272 bytes of 32-byte hashes
Found 1696 prefixes in 54272 bytes

and dump all of the hashes it contains using the --verbose argument:

$ ./ --verbose 1445465225
Parsing a 54294-byte response file
Processing control line...
Add chunk 1445465225 contains 54272 bytes of 32-byte hashes
Found 1696 prefixes in 54272 bytes

which contains the full-length hashes, including the one we have seen earlier.

Should you want to play with the server backend and run your own instance, follow the installation instructions and then go into about:config to change these preferences to point to your own instance:


Note that on Firefox 43 and later, these prefs have been renamed to:


Learn more

If you want to learn more about how tracking protection works in Firefox, you can find all of the technical details on the Mozilla wiki or you can ask questions on our mailing list.

Thanks to Tanvi Vyas for reviewing a draft of this post.

Introducing reboot-notifier for jessie and stretch

One of the packages that got lost in the transition from Debian wheezy to jessie was the update-notifier-common package which could be used to receive notifications when a reboot is needed (for example, after installing a kernel update).

I decided to wrap this piece of functionality along with a simple cron job and create a new package: reboot-notifier.

Because it uses the same file (/var/run/reboot-required) to indicate that a reboot is needed, it should work fine with any custom scripts that admins might have written prior to jessie.

If you're running sid or stretch, all you need to do is:

apt install reboot-notifier

On jessie, you'll need to add the backports repository to /etc/apt/sources.list:

deb jessie-backports main

Hooking into docking and undocking events to run scripts

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.

This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock.

The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events

Create the following ACPI event scripts as suggested in this guide.

Firstly, /etc/acpi/events/thinkpad-dock:

event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

Secondly, /etc/acpi/events/thinkpad-undock:

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"

then restart acpid so that it picks up the new event scripts:

sudo service acpid restart

Finding the right events

To make sure the events are the right ones, watch the output of:

sudo acpi_listen

and ensure that your script is actually running by adding:

logger "ACPI event: $*"

at the beginning of it and then looking in /var/log/syslog for lines like:

logger: external-monitor undock
logger: external-monitor dock

If that doesn't work for some reason, try using an ACPI event script like this:

action=logger %e

to see which event you should hook into.

Using xrandr inside an ACPI event script

Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:

logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1