Recent changes to this wiki:

Comment moderation
diff --git a/posts/checking-your-passwords-against-hibp/comment_1_557b3f6294c8fdca37f5d69c9b0a91fd._comment b/posts/checking-your-passwords-against-hibp/comment_1_557b3f6294c8fdca37f5d69c9b0a91fd._comment
new file mode 100644
index 0000000..359a3b7
--- /dev/null
+++ b/posts/checking-your-passwords-against-hibp/comment_1_557b3f6294c8fdca37f5d69c9b0a91fd._comment
@@ -0,0 +1,21 @@
+[[!comment format=mdwn
+ ip="202.61.72.50"
+ claimedauthor="Russell Stuart"
+ subject="Taking a sledgehammer to an egg?"
+ date="2017-10-19T03:47:28Z"
+ content="""
+That pwned password list is a fantastic resource.  Thanks for posting a pointer to it.
+
+But Egad! - using postgres to index and search it??  You must have the patience of a saint.
+
+Given a false positive isn't a death sentence, a bloom filter is a better choice.  Setting the parameters to give a false positive rate of 1e-9 (roughly 50/50 chance of getting 1 false positive if I checked a password with it every second for my entire life), the resulting filter occupies 2.6G - about 1/2 the size of the compressed original.  Creating the filter takes about 3 hours on my laptop (please forgive the butt ugly inline python):
+
+    sudo apt-get install python python-pybloomfilter
+    wget http://.../pwned-*.txt.7z; for f in *.7z; do 7z x $f; done
+    python -c \"import pybloomfilter, sys; b = pybloomfilter.BloomFilter(500000000, 0.000000001, 'pwned.bf'); [b.update(open(f)) for f in sys.argv[1:]]\" pwned-passwords-*.txt
+
+Querying it:
+
+    python -c 'import hashlib,sys,pybloomfilter; b = pybloomfilter.BloomFilter.open(\"pwned.bf\"); sys.stdout.write(\"\".join(\"%s is pwned: %r\n\" % (p, hashlib.sha1(p).hexdigest().upper() + \"\r\n\" in b) for p in sys.argv[1:]))' password1 password2 ...
+
+"""]]
diff --git a/posts/checking-your-passwords-against-hibp/comment_2_5619343f4064a0aed19b23f8e91f223a._comment b/posts/checking-your-passwords-against-hibp/comment_2_5619343f4064a0aed19b23f8e91f223a._comment
new file mode 100644
index 0000000..f389ab3
--- /dev/null
+++ b/posts/checking-your-passwords-against-hibp/comment_2_5619343f4064a0aed19b23f8e91f223a._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="2a00:23c5:69ce:df00:b7:aa91:48db:f9da"
+ claimedauthor="Jonathan"
+ url="jmtd.net"
+ subject="magnet URL for data"
+ date="2017-10-17T09:22:11Z"
+ content="""
+If it helps, I can vouch that this torrent magnet URL corresponds to the initial release of the password list. I found it the most convenient way to obtain the data. magnet:?xt=urn:btih:88145066d8d89cf426a22cfbeb1983dacb2a45d7&dn=pwned-passwords-1.0.txt.7z&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Fzer0day.ch%3A1337&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fexodus.desync.com%3A6969
+"""]]

Post about my HIBP lookup tool
diff --git a/posts/checking-your-passwords-against-hibp.mdwn b/posts/checking-your-passwords-against-hibp.mdwn
new file mode 100644
index 0000000..adfa2bb
--- /dev/null
+++ b/posts/checking-your-passwords-against-hibp.mdwn
@@ -0,0 +1,31 @@
+[[!meta title="Checking Your Passwords Against the Have I Been Pwned List"]]
+[[!meta date="2017-10-16T22:10:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Two months ago, Troy Hunt, the security professional behind
+[Have I been pwned?](https://haveibeenpwned.com/),
+[released](https://www.troyhunt.com/introducing-306-million-freely-downloadable-pwned-passwords/)
+an incredibly comprehensive
+[password list](https://haveibeenpwned.com/Passwords) in the hope that it
+would allow web developers to steer their users away from passwords that
+have been compromised in past breaches.
+
+While the list released by HIBP is hashed, the plaintext passwords are out
+there and one should assume that password crackers have access to them.
+So if you use a password on that list, you can be fairly confident
+that it's very easy to guess or crack your password.
+
+I wanted to check my **active** passwords against that list to check whether
+or not any of them are compromised and should be changed immediately. This
+meant that I needed to download the list and do these lookups locally since
+it's not a good idea to send your current passwords to this third-party
+service.
+
+I put my tool up on [Launchpad](https://launchpad.net/hibp-pwlookup) /
+[PyPI](https://pypi.python.org/pypi/hibp-pwlookup) and you are more than
+welcome to give it a go. Install [Postgres](https://www.postgresql.org/) and
+[Psycopg2](http://initd.org/psycopg/) and then follow the
+[README instructions](https://git.launchpad.net/hibp-pwlookup/tree/README.txt)
+to set up your database.
+
+[[!tag debian]] [[!tag nzoss]] [[!tag mozilla]] [[!tag security]]
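The core of such a local lookup is small: hash the candidate password with SHA-1 and look for the uppercase hex digest in the downloaded list. A minimal sketch follows; the in-memory set is an illustration only, the actual tool indexes the list in Postgres:

```python
import hashlib

def sha1_upper(password):
    """Hash a password the way the HIBP list stores it."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

def load_index(path):
    """Load one pwned-passwords-*.txt file into a set of hex digests."""
    with open(path) as f:
        return {line.strip() for line in f}

def is_pwned(password, index):
    """True if the password's SHA-1 digest appears in the index."""
    return sha1_upper(password) in index
```

Loading the full list into a set needs a lot of RAM, which is one reason to prefer a real database or a Bloom filter for the complete dataset.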

Add license notice to the frontpage
diff --git a/index.mdwn b/index.mdwn
index d08446d..b37741c 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -1,3 +1,4 @@
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 [[!if test="enabled(sidebar)" then="""
 [[!sidebar]]
 """ else="""

libvirt-bin is now called libvirt-clients
Even in jessie, libvirt-bin is a transitional package:
https://packages.debian.org/jessie/libvirt-bin
diff --git a/posts/lxc-setup-on-debian-jessie.mdwn b/posts/lxc-setup-on-debian-jessie.mdwn
index 417d043..5d764bf 100644
--- a/posts/lxc-setup-on-debian-jessie.mdwn
+++ b/posts/lxc-setup-on-debian-jessie.mdwn
@@ -8,7 +8,7 @@ a few things to get the networking to work on my machine.
 
 Start by installing (as root) the necessary packages:
 
-    apt install lxc libvirt-bin debootstrap
+    apt install lxc libvirt-clients debootstrap
 
 # Network setup
 

Comment moderation
diff --git a/posts/tls_authentication_freenode_and_oftc/comment_2_dac77c215afa19d55048c700d8fdd922._comment b/posts/tls_authentication_freenode_and_oftc/comment_2_dac77c215afa19d55048c700d8fdd922._comment
new file mode 100644
index 0000000..34cf11c
--- /dev/null
+++ b/posts/tls_authentication_freenode_and_oftc/comment_2_dac77c215afa19d55048c700d8fdd922._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ ip="38.109.115.130"
+ claimedauthor="Daniel Kahn Gillmor"
+ subject="Followup: "
+ date="2017-09-13T21:57:33Z"
+ content="""
+Thanks to this discussion, i just opened a [bug report on irssi](https://github.com/irssi/irssi/issues/756) to try to resolve the second issue above by sending client certificates in a renegotiated handshake.
+
+I've tested irssi, and it definitely does leak the user's public certificate to a passive network monitor.
+
+I haven't tested ZNC yet -- if someone wanted to open a similar report for ZNC, i'd appreciate it.
+
+If you want to test to see whether it's dumping traffic, you can do this with tshark:
+
+    tshark -O ssl  -Y 'ssl.handshake.certificates_length > 1 && ssl.record.content_type == 22'  -o http.ssl.port:6697 port 6697
+
+I don't have a patch to propose for either irssi or ZNC yet, and don't have much time to work on it myself.  I'd be happy to see that happen, because it would remove one of the major downsides to using certificates for IRC.
+"""]]

Comment moderation
diff --git a/posts/tls_authentication_freenode_and_oftc/comment_1_c11c1c8d07ec6290bdc3fe0a5c305de2._comment b/posts/tls_authentication_freenode_and_oftc/comment_1_c11c1c8d07ec6290bdc3fe0a5c305de2._comment
new file mode 100644
index 0000000..329d433
--- /dev/null
+++ b/posts/tls_authentication_freenode_and_oftc/comment_1_c11c1c8d07ec6290bdc3fe0a5c305de2._comment
@@ -0,0 +1,30 @@
+[[!comment format=mdwn
+ ip="38.109.115.130"
+ claimedauthor="Daniel Kahn Gillmor"
+ subject="problems with certificate-based TLS authentication for IRC"
+ date="2017-09-11T15:13:56Z"
+ content="""
+I used to use this approach myself, but i stopped using it a few years
+ago, for two reasons:
+
+ * certificate expiration -- when my registered certificate expires, i
+   still need to update the server with my new certificate.  to do that,
+   i need my password.  so my password still works, and i still have to
+   retain it and send it to (what i hope is the correct) nickserv
+   service at each cert renewal time.  so this doesn't actually remove
+   my need to remember/retain/record a password, nor does it make my
+   remembered/recorded password less powerful.
+
+ * client certificate leakage -- in TLS versions 1.2 and earlier (all
+   deployed versions of TLS), the client certificate is exchanged in the
+   clear, during the handshake.  (TLS 1.3 will fix this, but it is not yet fully standardized or widely deployed.)  This means that client cert
+   authentication actually leaks your identity to any passive network
+   observer, whereas password-based authentication to nickserv does not.
+
+This pains me, because i generally *strongly* prefer pubkey-based
+authentication over password-based authentication.  But in this case, i
+think it's not enough of a win overall to make the transition.
+
+What do you think about these tradeoffs?  Are there mitigating factors that i should know about that make them less troubling?
+
+"""]]

creating tag page tags/znc
diff --git a/tags/znc.mdwn b/tags/znc.mdwn
new file mode 100644
index 0000000..d2b3d6c
--- /dev/null
+++ b/tags/znc.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged znc"]]
+
+[[!inline pages="tagged(znc)" actions="no" archive="yes"
+feedshow=10]]

Add a post on TLS authentication for IRC
diff --git a/posts/hiding-network-disconnections-using-irc-bouncer.mdwn b/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
index b4b35ee..7a705e3 100644
--- a/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
+++ b/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
@@ -107,4 +107,5 @@ kernel update, I keep the bouncer running. At the end of the day, I say yes
 to killing the bouncer. That way, I don't have a backlog to go through when
 I wake up the next day.
 
-[[!tag mozilla]] [[!tag debian]] [[!tag irc]] [[!tag irssi]] [[!tag nzoss]] [[!tag letsencrypt]]
+[[!tag mozilla]] [[!tag debian]] [[!tag irc]] [[!tag irssi]] [[!tag nzoss]]
+[[!tag letsencrypt]] [[!tag znc]]
diff --git a/posts/tls_authentication_freenode_and_oftc.mdwn b/posts/tls_authentication_freenode_and_oftc.mdwn
new file mode 100644
index 0000000..31f7f1a
--- /dev/null
+++ b/posts/tls_authentication_freenode_and_oftc.mdwn
@@ -0,0 +1,82 @@
+[[!meta title="TLS Authentication on Freenode and OFTC"]]
+[[!meta date="2017-09-08T21:50:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+In order to easily authenticate with IRC networks such as
+[OFTC](https://www.oftc.net/NickServ/CertFP/) and
+[Freenode](https://freenode.net/kb/answer/certfp), it is possible to use
+*client TLS certificates* (also known as *SSL certificates*). In fact, it
+turns out that it's very easy to set up both on [irssi](https://irssi.org/)
+and on [znc](https://wiki.znc.in/).
+
+# Generate your TLS certificate
+
+On a machine with [good entropy](http://altusmetrum.org/ChaosKey/), run the
+following command to create a keypair that will last for 10 years:
+
+    openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"
+
+Then extract your key fingerprint using this command:
+
+    openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'
+
+# Share your fingerprints with NickServ
+
+On each IRC network, do this:
+
+    /msg NickServ IDENTIFY Password1!
+    /msg NickServ CERT ADD <your fingerprint>
+
+in order to add your fingerprint to the access control list.
+
+# Configure ZNC
+
+To configure znc, start by putting the key in the right place:
+
+    cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/
+
+and then enable the built-in [cert plugin](https://wiki.znc.in/Cert) for
+each network in `~/.znc/configs/znc.conf`:
+
+    <Network oftc>
+        ...
+        LoadModule = cert
+        ...
+    </Network>
+    <Network freenode>
+        ...
+        LoadModule = cert
+        ...
+    </Network>
+
+# Configure irssi
+
+For irssi, do the same thing but put the cert in `~/.irssi/user.pem` and
+then change the OFTC entry in `~/.irssi/config` to look like this:
+
+    {
+      address = "irc.oftc.net";
+      chatnet = "OFTC";
+      port = "6697";
+      use_tls = "yes";
+      tls_cert = "~/.irssi/user.pem";
+      tls_verify = "yes";
+      autoconnect = "yes";
+    }
+
+and the Freenode one to look like this:
+
+    {
+      address = "chat.freenode.net";
+      chatnet = "Freenode";
+      port = "7000";
+      use_tls = "yes";
+      tls_cert = "~/.irssi/user.pem";
+      tls_verify = "yes";
+      autoconnect = "yes";
+    }
+
+That's it. That's all you need to replace password authentication with a
+much stronger alternative.
+
+[[!tag debian]] [[!tag nzoss]] [[!tag irc]] [[!tag irssi]] [[!tag znc]]
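The fingerprint that `CERT ADD` stores is just the SHA-1 digest of the certificate's DER encoding, as uppercase hex without colons. If you want to cross-check the `openssl` output, here is a rough Python equivalent; the PEM parsing is deliberately minimal and assumes a single certificate in the file:

```python
import base64
import hashlib

def cert_fingerprint(pem_text):
    """SHA-1 fingerprint of the first certificate in a PEM string,
    uppercase hex with no colons (the form NickServ's CERT ADD wants)."""
    inside = False
    b64_lines = []
    for line in pem_text.splitlines():
        if "-----BEGIN CERTIFICATE-----" in line:
            inside = True
        elif "-----END CERTIFICATE-----" in line:
            break
        elif inside:
            b64_lines.append(line.strip())
    der = base64.b64decode("".join(b64_lines))
    return hashlib.sha1(der).hexdigest().upper()
```

Running this over `~/.irssi/user.pem` should match the `openssl x509 -sha1 -fingerprint` output after the `sed` cleanup shown above.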

Mention the requirement for the veth kernel module
https://github.com/lxc/lxc/issues/1604
diff --git a/posts/lxc-setup-on-debian-jessie.mdwn b/posts/lxc-setup-on-debian-jessie.mdwn
index b46eab6..417d043 100644
--- a/posts/lxc-setup-on-debian-jessie.mdwn
+++ b/posts/lxc-setup-on-debian-jessie.mdwn
@@ -21,8 +21,14 @@ change needed here):
     lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
     lxc.network.ipv4 = 0.0.0.0/24
 
-but I had to make sure that the "guests" could connect to the outside world
-through the "host":
+That configuration requires that the `veth` kernel module be loaded. If
+you have any kinds of module-loading restrictions enabled, you probably
+need to add the following to `/etc/modules` and **reboot**:
+
+    veth
+
+Next, I had to make sure that the "guests" could connect to the outside
+world through the "host":
 
 1. Enable IPv4 forwarding by putting this in `/etc/sysctl.conf`:
 

Add a section on fixing permissions for the scanner
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index 8dcaa26..e1c7147 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -29,8 +29,8 @@ detects your scanner:
 
     scanimage -L
 
-Note that you'll need to be in the `scanner` group for this to work
-(`adduser username scanner`).
+Note that you may need to be **root** for this to work. We'll fix that in
+the next section.
 
 This should give you output similar to this:
 
@@ -53,6 +53,33 @@ To do a test scan, simply run:
 
 and then take a look at the (greyscale) image it produced (`test.ppm`).
 
+# Letting normal users access the scanner
+
+In order for users to be able to see the scanner, they will need to be in
+the `scanner` group:
+
+    adduser francois scanner
+    adduser saned scanner
+
+with the second one being for remote users.
+
+Next, you'll need to put this in `/etc/udev/rules.d/55-libsane.rules`:
+
+    SUBSYSTEM=="usb", ATTRS{idVendor}=="04a9", MODE="0660", GROUP="scanner", ENV{libsane_matched}="yes"
+
+and then restart udev:
+
+    systemctl restart udev.service
+
+That `04a9` ID is the first part of what you saw in `lsusb`, but you can
+also see it in the output of `sane-find-scanner`.
+
+Finally, test the scanner as your normal user:
+
+    scanimage > test.ppm
+
+to confirm that everything is working.
+
 # Configure the server
 
 With the scanner working locally, it's time to expose it to network clients
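The vendor/product pair used in the udev rule can be pulled out of `lsusb` output mechanically. A small sketch of that extraction; the sample line in the comment is hypothetical:

```python
import re

def usb_ids(lsusb_line):
    """Extract (vendor, product) from one `lsusb` output line, e.g.
    'Bus 001 Device 004: ID 04a9:190d Canon, Inc. ...' -> ('04a9', '190d')."""
    m = re.search(r"ID ([0-9a-f]{4}):([0-9a-f]{4})", lsusb_line)
    return (m.group(1), m.group(2)) if m else None
```

The first element of the returned pair is what goes into `ATTRS{idVendor}` in the udev rule.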

Add missing firewall ports for SANE
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index bd8dfe8..8dcaa26 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -61,10 +61,11 @@ by adding the client IP addresses to `/etc/sane.d/saned.conf`:
     ## Access list
     192.168.1.3
 
-and then opening the appropriate port on your firewall
+and then opening the appropriate ports on your firewall
 (typically `/etc/network/iptables` in Debian):
 
     -A INPUT -s 192.168.1.3 -p tcp --dport 6566 -j ACCEPT
+    -A INPUT -s 192.168.1.3 -p udp -j ACCEPT
 
 Then you need to ensure that the SANE server is running by setting the
 following in `/etc/default/saned`:
@@ -98,7 +99,7 @@ where `myserver` is the hostname or IP address of the server running saned.
 If you have a firewall running on the client, make sure you allow
 SANE traffic from the server:
 
-    -A INPUT -s 192.168.1.2 -p tcp --sport 6566  -j ACCEPT
+    -A INPUT -s 192.168.1.2 -p tcp --sport 6566 -j ACCEPT
 
 # Test the scanner remotely
 

Restore lost comment
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
new file mode 100644
index 0000000..4cc2a1a
--- /dev/null
+++ b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
@@ -0,0 +1,47 @@
+[[!comment format=mdwn
+ ip="162.243.251.96"
+ claimedauthor="Eldin Hadzic"
+ subject="Solution"
+ date="2017-08-26T23:33:27Z"
+ content="""
+I figured it out.
+
+In order for OpenVPN to use the locally installed Unbound DNS resolver, do this:
+
+First check for the IP we should use with: `sudo ifconfig`
+
+The IP we need is the one listed at 
+
+    tun0: inet 10.8.0.1
+
+## UNBOUND
+
+Add this to `/etc/unbound/unbound.conf`:
+
+    server:
+        interface: 127.0.0.1
+        interface: 10.8.0.1
+        access-control: 127.0.0.1 allow
+        access-control: 10.8.0.1/24 allow
+
+Then restart Unbound with: `sudo service unbound restart`
+
+Test with: `dig @10.8.0.1 google.com`
+
+(SERVER should read: `SERVER: 10.8.0.1#53(10.8.0.1)`)
+
+## OPENVPN
+
+Add this to (or modify) `/etc/openvpn/server.conf`:
+
+    push \"redirect-gateway def1 bypass-dhcp\"
+    push \"dhcp-option DNS 10.8.0.1\"
+    push \"register-dns\"
+
+Then restart OpenVPN with: `sudo service openvpn restart`
+
+OpenVPN clients should now be using Unbound. Test at <http://dnsleak.com/>.
+
+Eldin Hadzic
+eldinhadzic@protonmail.com
+"""]]

removed
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
deleted file mode 100644
index 42db2f0..0000000
--- a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
+++ /dev/null
@@ -1,43 +0,0 @@
-[[!comment format=mdwn
- ip="162.243.251.96"
- subject="Re: OpenVPN settings"
- date="2017-08-26T22:19:01Z"
- content="""
-We figured it out:
-
-In order for OpenVPN to use the locally installed Unbound DNS resolver, do this:
-
-First check for the IP we should use with: `sudo ifconfig`
-
-The IP we need is the one listed at 
-
-    tun0: inet 10.8.0.1
-
-## UNBOUND
-
-Add this to `/etc/unbound/unbound.conf`:
-
-    server:
-        interface: 127.0.0.1
-        interface: 10.8.0.1
-        access-control: 127.0.0.1 allow
-        access-control: 10.8.0.1/24 allow
-
-Then restart Unbound with: `sudo service unbound restart`
-
-Test with: `dig @10.8.0.1 google.com`
-
-(SERVER should read: `SERVER: 10.8.0.1#53(10.8.0.1)`)
-
-## OPENVPN
-
-Add this to (or modify) `/etc/openvpn/server.conf`:
-
-    push \"redirect-gateway def1 bypass-dhcp\"
-    push \"dhcp-option DNS 10.8.0.1\"
-    push \"register-dns\"
-
-Then restart OpenVPN with: `sudo service openvpn restart`
-
-OpenVPN clients should now be using Unbound. Test at <http://dnsleak.com/>.
-"""]]

Add a blurb about integrating with OpenVPN
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 247ca72..807277c 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -9,7 +9,7 @@ Now that the root DNS servers are [signed,](http://www.root-dnssec.org/2010/07/1
 Being already packaged in [Debian](http://packages.debian.org/source/unstable/unbound) and [Ubuntu](https://launchpad.net/ubuntu/+source/unbound), unbound is only an `apt-get` away:
 
 
-    apt-get install unbound
+    apt install unbound
 
 ## Optional settings
 
@@ -76,7 +76,6 @@ If you're not using DHCP, then you simply need to put this in your `/etc/resolv.
 
 
     nameserver 127.0.0.1
-  
 
 ## Testing DNSSEC resolution
 
@@ -94,4 +93,31 @@ $ dig +dnssec A www.dnssec.cz | grep ad
   
 Are there any other ways of making sure that DNSSEC is fully functional?
 
-[[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag security]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag dns]] [[!tag dnssec]]
+## Integration with OpenVPN
+
+If you are [running your own OpenVPN server](https://feeding.cloud.geek.nz/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu/),
+you can tell clients to connect to the local unbound DNS client by putting the following in `/etc/unbound/unbound.conf.d/openvpn.conf`:
+
+    server:
+        interface: 127.0.0.1
+        interface: 10.8.0.1
+        access-control: 127.0.0.1 allow
+        access-control: 10.8.0.1/24 allow
+
+the following in `/etc/openvpn/server.conf`:
+
+    push "dhcp-option DNS 10.8.0.1"
+    push "register-dns"
+
+and opening the following port on your firewall (typically `/etc/network/iptables.up.rules` on Debian):
+
+    -A INPUT -p udp --dport 53 -s 10.8.0.0/24 -j ACCEPT
+
+Then restart both services and everything should work:
+
+    systemctl restart unbound.service
+    systemctl restart openvpn.service
+
+You can test it on <http://dnsleak.com>.
+
+[[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag security]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag dns]] [[!tag dnssec]] [[!tag openvpn]]

Update unbound config for stretch
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index a5b458b..247ca72 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -38,8 +38,9 @@ and turned on prefetching to hopefully keep in cache the sites I visit regularly
 
 Finally, I also enabled the control interface:
 
-    control-enable: yes
-    control-interface: 127.0.0.1
+    remote-control:
+        control-enable: yes
+        control-interface: 127.0.0.1
 
 and increased the amount of debugging information:
 

Reformat comment using markdown
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
index f5a93b7..42db2f0 100644
--- a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
+++ b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
@@ -7,36 +7,37 @@ We figured it out:
 
 In order for OpenVPN to use the locally installed Unbound DNS resolver, do this:
 
-First check for the IP we should use with: sudo ifconfig
+First check for the IP we should use with: `sudo ifconfig`
 
 The IP we need is the one listed at 
 
-tun0: inet 10.8.0.1 
+    tun0: inet 10.8.0.1
 
-UNBOUND
+## UNBOUND
 
-Add this to /etc/unbound/unbound.conf
+Add this to `/etc/unbound/unbound.conf`:
 
-server:
-    interface: 127.0.0.1
-    interface: 10.8.0.1
-    access-control: 127.0.0.1 allow
-    access-control: 10.8.0.1/24 allow
+    server:
+        interface: 127.0.0.1
+        interface: 10.8.0.1
+        access-control: 127.0.0.1 allow
+        access-control: 10.8.0.1/24 allow
 
-Then restart Unbound with: sudo service unbound restart
+Then restart Unbound with: `sudo service unbound restart`
 
-Test with: dig @10.8.0.1 google.com
-(SERVER should read: SERVER: 10.8.0.1#53(10.8.0.1))
+Test with: `dig @10.8.0.1 google.com`
 
-OPENVPN
+(SERVER should read: `SERVER: 10.8.0.1#53(10.8.0.1)`)
 
-Add this to (or modify) /etc/openvpn/server.conf
+## OPENVPN
 
-push \"redirect-gateway def1 bypass-dhcp\"
-push \"dhcp-option DNS 10.8.0.1\"
-push \"register-dns\"
+Add this to (or modify) `/etc/openvpn/server.conf`:
 
-Then restart OpenVPN with: sudo service openvpn restart
+    push \"redirect-gateway def1 bypass-dhcp\"
+    push \"dhcp-option DNS 10.8.0.1\"
+    push \"register-dns\"
 
-OpenVPN clients should now be using Unbound. Test at http://dnsleak.com/
+Then restart OpenVPN with: `sudo service openvpn restart`
+
+OpenVPN clients should now be using Unbound. Test at <http://dnsleak.com/>.
 """]]

Comment moderation
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
new file mode 100644
index 0000000..f5a93b7
--- /dev/null
+++ b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
@@ -0,0 +1,42 @@
+[[!comment format=mdwn
+ ip="162.243.251.96"
+ subject="Re: OpenVPN settings"
+ date="2017-08-26T22:19:01Z"
+ content="""
+We figured it out:
+
+In order for OpenVPN to use the locally installed Unbound DNS resolver, do this:
+
+First check for the IP we should use with: sudo ifconfig
+
+The IP we need is the one listed at 
+
+tun0: inet 10.8.0.1 
+
+UNBOUND
+
+Add this to /etc/unbound/unbound.conf
+
+server:
+    interface: 127.0.0.1
+    interface: 10.8.0.1
+    access-control: 127.0.0.1 allow
+    access-control: 10.8.0.1/24 allow
+
+Then restart Unbound with: sudo service unbound restart
+
+Test with: dig @10.8.0.1 google.com
+(SERVER should read: SERVER: 10.8.0.1#53(10.8.0.1))
+
+OPENVPN
+
+Add this to (or modify) /etc/openvpn/server.conf
+
+push \"redirect-gateway def1 bypass-dhcp\"
+push \"dhcp-option DNS 10.8.0.1\"
+push \"register-dns\"
+
+Then restart OpenVPN with: sudo service openvpn restart
+
+OpenVPN clients should now be using Unbound. Test at http://dnsleak.com/
+"""]]

Comment moderation
diff --git a/posts/pristine-tar-and-git-buildpackage-work-arounds/comment_1_e0c2bea75571323d9b0089c173e4afef._comment b/posts/pristine-tar-and-git-buildpackage-work-arounds/comment_1_e0c2bea75571323d9b0089c173e4afef._comment
new file mode 100644
index 0000000..76b9453
--- /dev/null
+++ b/posts/pristine-tar-and-git-buildpackage-work-arounds/comment_1_e0c2bea75571323d9b0089c173e4afef._comment
@@ -0,0 +1,40 @@
+[[!comment format=mdwn
+ ip="2a02:120b:7ff:13f0:26be:5ff:fee1:2b31"
+ claimedauthor="Joël Krähemann"
+ url="http://nongnu.org/gsequencer"
+ subject="debian/rules target work-around"
+ date="2017-08-22T15:02:19Z"
+ content="""
+Hi
+
+We worked on a debian/rules target to download the upstream tarball and signature, but I don't know whether my Debian sponsor is happy with it.
+
+
+    # Gets the name of the source package
+    DEB_SOURCE_PACKAGE := $(strip $(shell egrep '^Source: ' debian/control | cut -f 2 -d ':'))
+
+    # Gets the full version of the source package including debian version
+    DEB_VERSION := $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
+    DEB_NOEPOCH_VERSION := $(shell echo $(DEB_VERSION) | cut -d: -f2-)
+
+    # Gets only the upstream version of the package
+    DEB_UPSTREAM_VERSION := $(shell echo $(DEB_NOEPOCH_VERSION) | sed 's/-[^-]*$$//')
+    DEB_SOURCE_PACKAGE := $(strip $(shell egrep '^Source: ' debian/control | cut -f 2 -d ':'))
+    DEB_UPSTREAM_MINOR_VERSION := $(shell echo $(DEB_UPSTREAM_VERSION) | sed -r 's/([0-9]+)\.([0-9]+)\.([0-9]+)/\1.\2.x/')
+
+    # Sets tarball-dir if not provided by command line
+    TARBALL_DIR ?= ../tarballs
+
+    # Sets export-dir if not provided by command line
+    EXPORT_DIR ?= ../build-area
+
+    get-orig-source:
+      mkdir -p $(TARBALL_DIR)
+      mkdir -p $(EXPORT_DIR)
+      wget -O \"$(TARBALL_DIR)/$(DEB_SOURCE_PACKAGE)_$(DEB_UPSTREAM_VERSION).orig.tar.gz\" -c \"http://download.savannah.gnu.org/releases/gsequencer/$(DEB_UPSTREAM_MINOR_VERSION)/$(DEB_SOURCE_PACKAGE)-$(DEB_UPSTREAM_VERSION).tar.gz\"
+      wget -O \"$(TARBALL_DIR)/$(DEB_SOURCE_PACKAGE)_$(DEB_UPSTREAM_VERSION).orig.tar.gz.asc\" -c \"http://download.savannah.gnu.org/releases/gsequencer/$(DEB_UPSTREAM_MINOR_VERSION)/$(DEB_SOURCE_PACKAGE)-$(DEB_UPSTREAM_VERSION).tar.gz.sig\"
+      ln -s \"$(TARBALL_DIR)/$(DEB_SOURCE_PACKAGE)_$(DEB_UPSTREAM_VERSION).orig.tar.gz.asc\" $(EXPORT_DIR)
+
+
+
+"""]]

Comment moderation
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment b/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment
new file mode 100644
index 0000000..39b5f93
--- /dev/null
+++ b/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="francois@665656f0ba400877c9b12e8fbb086e45aa01f7c0"
+ nickname="francois"
+ avatar="http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9"
+ subject="Re: OpenVPN settings"
+ date="2017-08-16T16:20:31Z"
+ content="""
+> What changes need to be made to /etc/openvpn/server.conf in order to use Unbound from within the VPN tunnel when connected to the server from an external client?
+
+I haven't yet figured out how to do that, but it's something I'd really like to add to my [OpenVPN setup](https://feeding.cloud.geek.nz/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu/).
+"""]]

Comment moderation
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment b/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment
new file mode 100644
index 0000000..b47155d
--- /dev/null
+++ b/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="162.243.251.96"
+ subject="OpenVPN settings"
+ date="2017-08-16T06:28:48Z"
+ content="""
+Dear François,
+
+Thank you so much for this! What changes need to be made to /etc/openvpn/server.conf in order to use Unbound from within the VPN tunnel when connected to the server from an external client?
+
+Thanks for your help, François!
+"""]]

Add a step to fixup the 127.0.0.1 entry in /etc/hosts
This will help ensure that the sender address is correctly set to the fully
qualified domain in outgoing emails.
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index f29443e..62b01f0 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -290,6 +290,12 @@ Configuring mail properly is tricky but the following has worked for me.
 In `/etc/hostname`, put the bare hostname (no domain), but in
 `/etc/mailname` put the fully qualified hostname.
 
+In `/etc/hosts`, make sure that the fully qualified hostname is the
+first alias for `127.0.0.1`, followed by the bare hostname and then
+anything else. For example:
+
+    127.0.0.1 hostname.example.com hostname localhost
+
 Change the following in `/etc/postfix/main.cf`:
 
     inet_interfaces = loopback-only
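The ordering matters because gethostbyname-style canonicalisation returns the first name on the matching `/etc/hosts` line. A simplified sketch of that lookup logic (real resolvers also consult DNS):

```python
def canonical_name(hosts_line, name):
    """Return the canonical (first) name from one /etc/hosts line
    if `name` appears among its names, else None."""
    fields = hosts_line.split()
    if len(fields) < 2:
        return None
    names = fields[1:]
    return names[0] if name in names else None
```

With the recommended ordering, looking up the bare hostname yields the fully qualified name, which is what Postfix uses for sender addresses.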

Filed a pristine-tar bug at Tomasz Buchert's request
diff --git a/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
index b8db6c6..1e801cd 100644
--- a/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
+++ b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
@@ -38,12 +38,10 @@ This time, I got a different `pristine-tar` error:
     pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
     pristine-tar: failed to generate tarball
 
-After looking through the
-[list of open bugs](https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=pristine-tar;dist=unstable#_0_4_4),
-I thought it was probably not worth filing a bug given how many similar ones
-are waiting to be addressed.
+I filed [bug 871938](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871938)
+for this.
 
-So as a work-around, I simply symlinked the upstream tarball I already had
+As a work-around, I simply symlinked the upstream tarball I already had
 and then built the package using the tarball directly instead of the
 `upstream` git branch:
 

Add my packaging blog post
diff --git a/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
new file mode 100644
index 0000000..b8db6c6
--- /dev/null
+++ b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
@@ -0,0 +1,92 @@
+[[!meta title="pristine-tar and git-buildpackage Work-arounds"]]
+[[!meta date="2017-08-09T22:25:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently ran into problems trying to package the
+[latest version](https://launchpad.net/planetfilter/trunk/0.7.4) of my
+[planetfilter](https://feeding.cloud.geek.nz/posts/keeping-up-with-noisy-blog-aggregators-using-planetfilter/)
+tool.
+
+This is how I was able to temporarily work around bugs in my tools and still
+produce a [package](https://tracker.debian.org/news/860953) that can be
+built reproducibly from source and that contains a verifiable upstream
+signature.
+
+# pristine-tar is unable to reproduce a tarball
+
+After importing the
+[latest upstream tarball](https://pypi.python.org/pypi/planetfilter/0.7.4)
+using `gbp import-orig`, I tried to build the package but ran into this
+[`pristine-tar`](https://packages.debian.org/sid/pristine-tar) error:
+
+    $ gbp buildpackage
+    gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
+    xdelta3: normally this indicates that the source file is incorrect
+    xdelta3: please verify the source file with sha1sum or equivalent
+    xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
+    pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
+    pristine-tar: failed to generate tarball
+
+So I decided to throw away what I had, re-import the tarball and try again.
+This time, I got a different `pristine-tar` error:
+
+    $ gbp buildpackage
+    gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
+    xdelta3: normally this indicates that the source file is incorrect
+    xdelta3: please verify the source file with sha1sum or equivalent
+    xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
+    pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
+    pristine-tar: failed to generate tarball
+
+After looking through the
+[list of open bugs](https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=pristine-tar;dist=unstable#_0_4_4),
+I thought it was probably not worth filing a bug given how many similar ones
+are waiting to be addressed.
+
+So as a work-around, I simply symlinked the upstream tarball I already had
+and then built the package using the tarball directly instead of the
+`upstream` git branch:
+
+    ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
+    gbp buildpackage --git-tarball-dir=..
+
+Given that only the `upstream` and `master` branches are signed, the
+[.delta file](https://anonscm.debian.org/cgit/collab-maint/planetfilter.git/tree/planetfilter_0.7.4.orig.tar.gz.delta?h=pristine-tar)
+on the
+[`pristine-tar` branch](https://anonscm.debian.org/cgit/collab-maint/planetfilter.git/tree/?h=pristine-tar)
+could be fixed at any time in the future by committing a new `.delta` file
+once `pristine-tar` gets fixed. This therefore seems like a reasonable
+work-around.
+
+# git-buildpackage doesn't import the upstream tarball signature
+
+The second problem I ran into was a missing upstream signature after
+building the package with
+[`git-buildpackage`](https://packages.debian.org/sid/git-buildpackage):
+
+    $ lintian -i planetfilter_0.7.4-1_amd64.changes
+    E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
+    N: 
+    N:    The packaging includes an upstream signing key but the corresponding
+    N:    .asc signature for one or more source tarballs are not included in your
+    N:    .changes file.
+    N:    
+    N:    Severity: important, Certainty: certain
+    N:    
+    N:    Check: changes-file, Type: changes
+    N: 
+
+This problem (and, I suspect, the lintian error) is fairly new and [hasn't been
+solved yet](https://lists.debian.org/debian-devel/2017/07/msg00451.html).
+
+So until `gbp import-orig` gets proper support for upstream signatures, my
+work-around was to symlink the upstream signature into the `export-dir` output
+directory (which I set in `~/.gbp.conf`) so that it can be picked up by the
+final stages of `gbp buildpackage`:
+
+    ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc
+
+If there's a better way to do this, please feel free to leave a comment
+(authentication not required)!
+
+[[!tag debian]] [[!tag nzoss]] [[!tag packaging]]

Mention that the existing cronjob needs to be disabled
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index 5fd7dbc..084aeb2 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -9,7 +9,12 @@ tool. Since I use the "temporary webserver" method of proving domain
 ownership via the [ACME protocol](https://ietf-wg-acme.github.io/acme/), I
 cannot use the cert renewal cronjob built into Certbot.
 
-Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
+To disable the built-in cronjob, I ran the following:
+
+    systemctl disable certbot.service
+    systemctl disable certbot.timer
+
+Then I put my own renewal script in `/etc/cron.daily/certbot-renew`:
 
     #!/bin/bash
 

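The script body itself is truncated in the hunk above, but the "temporary webserver" method it supports follows a stop-renew-restart pattern. Here is a hypothetical sketch of what such a renewal script might do, not the author's actual script; the `apache2` service name and the `renew_certs` helper are assumptions:

```python
from typing import Callable, List

def renew_certs(run: Callable[[List[str]], None]) -> None:
    # Stop the real webserver so certbot's standalone server can bind
    # port 80, renew, then restart it even if renewal failed.
    run(["systemctl", "stop", "apache2"])
    try:
        run(["certbot", "renew", "--quiet"])
    finally:
        run(["systemctl", "start", "apache2"])

# Dry run with a recording runner; a real cron script would pass
# something like `lambda cmd: subprocess.run(cmd, check=True)`:
calls = []
renew_certs(calls.append)
print(calls)
```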
Rephrase the introduction, as suggested by Marco d'Itri
diff --git a/posts/time-synchronization-with-ntp-and-systemd.mdwn b/posts/time-synchronization-with-ntp-and-systemd.mdwn
index a3bd6c8..84d3841 100644
--- a/posts/time-synchronization-with-ntp-and-systemd.mdwn
+++ b/posts/time-synchronization-with-ntp-and-systemd.mdwn
@@ -9,9 +9,9 @@ some wouldn't suggested a problem with time keeping on my laptop.
 
 This was surprising since I've been running [NTP](http://www.ntp.org/) for
 many years and have therefore never had to think about time synchronization.
-After looking into this though, I realized that the move to
-[systemd](https://freedesktop.org/wiki/Software/systemd/) had changed how
-this is meant to be done.
+After realizing that `ntpd` had stopped working on my machine for some reason,
+I found that [systemd](https://freedesktop.org/wiki/Software/systemd/)
+provides an easier way to keep time synchronized.
 
 # The new systemd time synchronization daemon
 
@@ -49,8 +49,6 @@ between `ntpd` and `systemd-timesyncd`. The solution of course is to remove
 the former before enabling the latter:
 
     apt purge ntp
-    systemctl enable systemd-timesyncd.service
-    systemctl restart systemd-timesyncd.service
 
 # Enabling time synchronization with NTP
 

Comment moderation
diff --git a/posts/time-synchronization-with-ntp-and-systemd/comment_1_e99afcbef4e7617574d9bf3041b265d3._comment b/posts/time-synchronization-with-ntp-and-systemd/comment_1_e99afcbef4e7617574d9bf3041b265d3._comment
new file mode 100644
index 0000000..7d3e0bc
--- /dev/null
+++ b/posts/time-synchronization-with-ntp-and-systemd/comment_1_e99afcbef4e7617574d9bf3041b265d3._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="217.193.164.68"
+ claimedauthor="EVD"
+ subject="comment 1"
+ date="2017-08-07T06:13:28Z"
+ content="""
+Not sure why, but on my freshly installed Stretch I have ntpd installed in /usr/sbin/ntpd and systemd-timesyncd seems to be running fine. Actually it looks like both are running in top?
+"""]]

creating tag page tags/systemd
diff --git a/tags/systemd.mdwn b/tags/systemd.mdwn
new file mode 100644
index 0000000..62a1852
--- /dev/null
+++ b/tags/systemd.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged systemd"]]
+
+[[!inline pages="tagged(systemd)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/ntp
diff --git a/tags/ntp.mdwn b/tags/ntp.mdwn
new file mode 100644
index 0000000..6e70f03
--- /dev/null
+++ b/tags/ntp.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ntp"]]
+
+[[!inline pages="tagged(ntp)" actions="no" archive="yes"
+feedshow=10]]

Create a new systemd tag
diff --git a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
index 1c3e781..d0f9432 100644
--- a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
+++ b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
@@ -122,4 +122,4 @@ Finally, because X sometimes fail to detect my external monitor when docking/und
 
     bindsym XF86Display exec /home/francois/bin/external-monitor
 
-[[!tag debian]] [[!tag i3]] [[!tag gnome]] [[!tag nzoss]]
+[[!tag debian]] [[!tag i3]] [[!tag gnome]] [[!tag nzoss]] [[!tag systemd]]
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index aeeaa61..0f0e02a 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -132,4 +132,4 @@ since [MPoD](http://www.katoemba.net/makesnosenseatall/mpod/) and
 [MPaD](http://www.katoemba.net/makesnosenseatall/mpad/) don't appear to be
 available on the AppStore anymore.
 
-[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag mpd]] [[!tag ios]] [[!tag android]] [[!tag tor]]
+[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag mpd]] [[!tag ios]] [[!tag android]] [[!tag tor]] [[!tag systemd]]
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index a624359..bd8dfe8 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -140,4 +140,4 @@ before finally restarting the service:
     systemctl daemon-reload
     systemctl restart saned.socket
 
-[[!tag debian]] [[!tag sane]]
+[[!tag debian]] [[!tag sane]] [[!tag systemd]]

Add NTP blog post
diff --git a/posts/time-synchronization-with-ntp-and-systemd.mdwn b/posts/time-synchronization-with-ntp-and-systemd.mdwn
new file mode 100644
index 0000000..a3bd6c8
--- /dev/null
+++ b/posts/time-synchronization-with-ntp-and-systemd.mdwn
@@ -0,0 +1,93 @@
+[[!meta title="Time Synchronization with NTP and systemd"]]
+[[!meta date="2017-08-06T13:10:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently ran into problems with generating
+[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
+2-factor codes on my laptop. The fact that some of the codes would work and
+some wouldn't suggested a problem with time keeping on my laptop.
+
+This was surprising since I've been running [NTP](http://www.ntp.org/) for
+many years and have therefore never had to think about time synchronization.
+After looking into this though, I realized that the move to
+[systemd](https://freedesktop.org/wiki/Software/systemd/) had changed how
+this is meant to be done.
+
+# The new systemd time synchronization daemon
+
+On a machine running systemd, there is no need to run the full-fledged
+`ntpd` daemon anymore. The built-in `systemd-timesyncd` can do the basic
+time synchronization job just fine.
+
+However, I noticed that the daemon wasn't actually running:
+
+    $ systemctl status systemd-timesyncd.service 
+    ● systemd-timesyncd.service - Network Time Synchronization
+       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
+      Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
+               └─disable-with-time-daemon.conf
+       Active: inactive (dead)
+    Condition: start condition failed at Thu 2017-08-03 21:48:13 PDT; 1 day 20h ago
+         Docs: man:systemd-timesyncd.service(8)
+
+referring instead to a mysterious "failed condition". Attempting to restart
+the service did provide more details though:
+
+    $ systemctl restart systemd-timesyncd.service 
+    $ systemctl status systemd-timesyncd.service 
+    ● systemd-timesyncd.service - Network Time Synchronization
+       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
+      Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
+               └─disable-with-time-daemon.conf
+       Active: inactive (dead)
+    Condition: start condition failed at Sat 2017-08-05 18:19:12 PDT; 1s ago
+               └─ ConditionFileIsExecutable=!/usr/sbin/ntpd was not met
+         Docs: man:systemd-timesyncd.service(8)
+
+The above check for the presence of `/usr/sbin/ntpd` points to a conflict
+between `ntpd` and `systemd-timesyncd`. The solution of course is to remove
+the former before enabling the latter:
+
+    apt purge ntp
+    systemctl enable systemd-timesyncd.service
+    systemctl restart systemd-timesyncd.service
+
+# Enabling time synchronization with NTP
+
+Once the `ntp` package has been removed, it is time to enable NTP support in
+`timesyncd`.
+
+Start by choosing the [NTP server pool](http://www.pool.ntp.org/en/) nearest
+you and put it in `/etc/systemd/timesyncd.conf`. For example, mine reads
+like this:
+
+    [Time]
+    NTP=ca.pool.ntp.org
+
+before restarting the daemon:
+
+    systemctl restart systemd-timesyncd.service 
+
+That may not be enough on your machine though. To check whether or not the
+time has been synchronized with NTP servers, run the following:
+
+    $ timedatectl status
+    ...
+     Network time on: yes
+    NTP synchronized: no
+     RTC in local TZ: no
+
+If NTP is not enabled, then you can enable it by running this command:
+
+    timedatectl set-ntp true
+
+Once that's done, everything should be in place and time should be kept
+correctly:
+
+    $ timedatectl status
+    ...
+     Network time on: yes
+    NTP synchronized: yes
+     RTC in local TZ: no
+
+[[!tag debian]] [[!tag nzoss]] [[!tag systemd]] [[!tag ntp]]
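
The symptom that opens the post (some TOTP codes working, others not) follows directly from how TOTP hashes the current 30-second time window. A minimal sketch of the RFC 6238/4226 algorithm shows that two clocks a window or more apart produce different codes; the secret below is the RFC 4226 test key, not anything from the post:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, unix_time // step)

secret = b"12345678901234567890"  # RFC 4226 test secret
# A clock 90 seconds off falls in a different window and disagrees:
print(totp(secret, 59), totp(secret, 59 + 90))  # 287082 338314
```

A drift of under the 30-second step often still validates (servers typically accept adjacent windows), which is why only *some* codes failed before the clock was resynchronized.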

Reword heading added in 01104034a971ac6f0bce5fe55b9893ea87b112c0
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index d29341a..a624359 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -112,7 +112,7 @@ and successfully perform a test scan using this command:
 
     scanimage > test.ppm
 
-# Troubleshooting broken
+# Troubleshooting connection problems
 
 If you see the following error in your logs (`systemctl status saned.socket`):
 

Add troubleshooting information for systemd unit bug in sane-backends
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index cf3a120..d29341a 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -112,4 +112,32 @@ and successfully perform a test scan using this command:
 
     scanimage > test.ppm
 
+# Troubleshooting broken
+
+If you see the following error in your logs (`systemctl status saned.socket`):
+
+    saned.socket: Too many incoming connections (1), dropping connection.
+
+then you can work around [this bug in the systemd
+unit](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=850649) by
+[overriding the systemd unit that comes with the
+package](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-Managing_Services_with_systemd-Unit_Files.html#sect-Managing_Services_with_systemd-Unit_File_Modify):
+
+    cp /lib/systemd/system/saned.socket /etc/systemd/system/saned.socket
+
+then replace:
+
+    [Socket]
+    MaxConnections=1
+
+with:
+
+    [Socket]
+    MaxConnections=64
+
+before finally restarting the service:
+
+    systemctl daemon-reload
+    systemctl restart saned.socket
+
 [[!tag debian]] [[!tag sane]]

Mention the scanner group in my SANE post
diff --git a/posts/setting-up-a-network-scanner-using-sane.mdwn b/posts/setting-up-a-network-scanner-using-sane.mdwn
index ca2e320..cf3a120 100644
--- a/posts/setting-up-a-network-scanner-using-sane.mdwn
+++ b/posts/setting-up-a-network-scanner-using-sane.mdwn
@@ -29,6 +29,9 @@ detects your scanner:
 
     scanimage -L
 
+Note that you'll need to be in the `scanner` group for this to work
+(`adduser username scanner`).
+
 This should give you output similar to this:
 
     device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
@@ -41,7 +44,7 @@ USB stack:
 
 and that its USB ID shows up in the SANE backend it needs:
 
-    $ grep 190f /etc/sane.d/genesys.conf 
+    $ grep 190f /etc/sane.d/genesys.conf
     usb 0x04a9 0x190f
 
 To do a test scan, simply run:

Explain how to make command-not-found work
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=857090
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 2c33033..f29443e 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -250,6 +250,11 @@ you need to restart a daemon using an obsolete library.
 Most of these tools are configuration-free, except for sysstat, which requires
 enabling data collection in `/etc/default/sysstat` to be useful.
 
+Also, [`command-not-found` won't work until you update the apt cache](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=857090):
+
+    apt update
+    update-command-not-found
+
 # Apache configuration
 
     apt install apache2

Mention nocache since it's useful for cronjobs
https://feeding.cloud.geek.nz/posts/three-wrappers-to-run-commands-without-impacting-the-rest-of-the-system/
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index d12fbf4..2c33033 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -245,7 +245,7 @@ you need to restart a daemon using an obsolete library.
 
 # Handy utilities
 
-    apt install renameutils atool iotop sysstat lsof mtr-tiny mc netcat-openbsd command-not-found
+    apt install renameutils atool iotop sysstat lsof mtr-tiny mc netcat-openbsd command-not-found nocache
 
 Most of these tools are configuration-free, except for sysstat, which requires
 enabling data collection in `/etc/default/sysstat` to be useful.

Mention that the tp_smapi module is unusable on newer ThinkPads
diff --git a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
index eb22b74..6f85734 100644
--- a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
+++ b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
@@ -8,13 +8,19 @@ hook into the ACPI events and run arbitrary scripts.
 
 This was tested on a T420 with a [ThinkPad Dock Series
 3](http://www.thinkwiki.org/wiki/ThinkPad_Port_Replicator_Series_3) as well
-as a T440p with a [ThinkPad Ultra
+as a T440p and a T460p with a [ThinkPad Ultra
 Dock](http://www.thinkwiki.org/wiki/ThinkPad_Ultra_Dock).
 
-The only requirement is the ThinkPad ACPI kernel module which you can find in
-the [tp-smapi-dkms
-package](https://packages.debian.org/stable/tp-smapi-dkms) in Debian. That's
-what generates the `ibm/hotkey` events we will listen for.
+The only requirement is the ThinkPad kernel module. On most ThinkPads
+it's the [`tp_smapi` module](http://www.thinkwiki.org/wiki/Tp_smapi)
+(which you can find in the [tp-smapi-dkms
+package](https://packages.debian.org/stable/tp-smapi-dkms) in Debian)
+but on newer hardware, [that interface is
+gone](https://github.com/evgeni/tp_smapi/issues/24) and you can simply
+use the [`thinkpad_acpi`
+module](http://www.thinkwiki.org/wiki/Thinkpad-acpi) built into the
+kernel. That's what generates the `ibm/hotkey` events we will listen
+for.
 
 ## Hooking into the events
 

Add two more useful packages from my stretch installs
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 59bedd3..d12fbf4 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -245,7 +245,7 @@ you need to restart a daemon using an obsolete library.
 
 # Handy utilities
 
-    apt install renameutils atool iotop sysstat lsof mtr-tiny mc
+    apt install renameutils atool iotop sysstat lsof mtr-tiny mc netcat-openbsd command-not-found
 
 Most of these tools are configuration-free, except for sysstat, which requires
 enabling data collection in `/etc/default/sysstat` to be useful.

Update openvpn settings for latest version of Network Manager
diff --git a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
index fd98016..e18ea5e 100644
--- a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
+++ b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
@@ -164,6 +164,7 @@ then click the "Advanced" button and set the following:
    * Cipher: `AES-256-CBC`
    * HMAC Authentication: `SHA-384`
 * TLS Authentication
+   * Server Certificate Check: Verify name exactly
    * Subject Match: `server`
    * Verify peer (server) certificate usage signature: `YES`
      * Remote peer certificate TLS type: `Server`

Fix typo in ejabberd.yml
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 88834d9..3213970 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -110,7 +110,7 @@ by adding `starttls_required` to this block:
             - "cipher_server_preference"
           ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
           tls_compression: false
-          dhfile: "/etc/ssl/ejabberd/dh2048.pem"
+          dhfile: "/etc/ejabberd/dh2048.pem"
           max_stanza_size: 65536
           shaper: c2s_shaper
           access: c2s
@@ -121,7 +121,7 @@ by adding `starttls_required` to this block:
         - "no_tlsv1"
         - "no_tlsv1_1"
         - "cipher_server_preference"
-      s2s_dhfile: /etc/ssl/ejabberd/dh2048.pem
+      s2s_dhfile: "/etc/ejabberd/dh2048.pem"
       s2s_ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
 
 5. Create the required dh2048.pem file:

creating tag page tags/ejabberd
diff --git a/tags/ejabberd.mdwn b/tags/ejabberd.mdwn
new file mode 100644
index 0000000..29a499a
--- /dev/null
+++ b/tags/ejabberd.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ejabberd"]]
+
+[[!inline pages="tagged(ejabberd)" actions="no" archive="yes"
+feedshow=10]]

Create an ejabberd tag
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 7f904e9..88834d9 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -181,4 +181,4 @@ Finally, to ensure that your TLS settings are reasonable, use this
 [automated tool](https://xmpp.net/) to test both the client-to-server (c2s)
 and the server-to-server (s2s) flows.
 
-[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag sysadmin]] [[!tag xmpp]] [[!tag letsencrypt]]
+[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag sysadmin]] [[!tag xmpp]] [[!tag letsencrypt]] [[!tag ejabberd]]

Use a local config file for fail2ban instead of hacking the main one
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index 9351138..75b715a 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -58,7 +58,7 @@ package. It keeps an eye on the ssh log file (`/var/log/auth.log`) and
 temporarily blocks IP addresses after a number of failed login attempts.
 
 To prevent your own IP addresses from being blocked, add them to
-`/etc/fail2ban/jail.conf`:
+`/etc/fail2ban/jail.d/local.conf`:
 
     [DEFAULT]
     ignoreip = 127.0.0.1/8 1.2.3.4
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index ab8abcc..59bedd3 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -97,7 +97,7 @@ work](https://github.com/paramiko/paramiko/issues/509), I also add the following
 
 Since [fail2ban](http://www.fail2ban.org/) is used to rate-limit attempts to
 brute-force ssh connections, you may want to whitelist your own IP addresses
-by adding them to `/etc/fail2ban/jail.conf`:
+by adding them to `/etc/fail2ban/jail.d/local.conf`:
 
     [DEFAULT]
     ignoreip = 127.0.0.1/8 1.2.3.4

Make sure the debian-security-support package is installed
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 722d134..ab8abcc 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -227,7 +227,7 @@ The above packages are all about catching mistakes (such as
 
 # Package updates
 
-    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges reboot-notifier popularity-contest needrestart
+    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges reboot-notifier popularity-contest needrestart debian-security-support
 
 These tools help me keep packages up to date and remove unnecessary or
 obsolete packages from servers. On Rackspace servers, a small [configuration

Add Atlassian products in the referrer breakage section
diff --git a/posts/tweaking-referrer-for-privacy-in-firefox.mdwn b/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
index 1d72634..a241807 100644
--- a/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
+++ b/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
@@ -117,16 +117,17 @@ example:
 
 - anything that uses the default [Django authentication](https://code.djangoproject.com/ticket/16870)
 - [Launchpad logins](https://bugs.launchpad.net/launchpad/+bug/560246)
+- Atlassian's [JIRA and Confluence](https://github.com/pyllyukko/user.js/issues/329)
 - [AMD driver downloads](https://bugzilla.mozilla.org/show_bug.cgi?id=970092#c7)
 - some [CDN-hosted images](https://www.capbridge.com/visit/shuttle-service/)
 - [Google Hangouts](https://github.com/pyllyukko/user.js/issues/328)
 
-The first two have been worked-around successfully by setting
+The first three have been worked-around successfully by setting
 `network.http.referer.spoofSource` to `true`, an advanced setting
 which always sends the destination URL as the referrer, thereby not leaking
 anything about the original page.
 
-Unfortunately, the last three are examples of the kind of breakage that can
+Unfortunately, the others are examples of the kind of breakage that can
 only be fixed through a whitelist (an approach supported by the [smart
 referer add-on](https://addons.mozilla.org/firefox/addon/smart-referer/)) or
 by temporarily using a different [browser

Comment moderation
diff --git a/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_5_9d3d165b503d8358a142b44f612be973._comment b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_5_9d3d165b503d8358a142b44f612be973._comment
new file mode 100644
index 0000000..c307326
--- /dev/null
+++ b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_5_9d3d165b503d8358a142b44f612be973._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="88.214.186.65"
+ subject="enable UEFI"
+ date="2017-07-25T16:28:17Z"
+ content="""
+Also, you need to enable UEFI boot, so the usb or cd can boot... and plug in the AC power
+"""]]

Add a note about Google Hangouts requiring referrers
diff --git a/posts/tweaking-referrer-for-privacy-in-firefox.mdwn b/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
index bd8ca21..1d72634 100644
--- a/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
+++ b/posts/tweaking-referrer-for-privacy-in-firefox.mdwn
@@ -119,13 +119,14 @@ example:
 - [Launchpad logins](https://bugs.launchpad.net/launchpad/+bug/560246)
 - [AMD driver downloads](https://bugzilla.mozilla.org/show_bug.cgi?id=970092#c7)
 - some [CDN-hosted images](https://www.capbridge.com/visit/shuttle-service/)
+- [Google Hangouts](https://github.com/pyllyukko/user.js/issues/328)
 
 The first two have been worked-around successfully by setting
 `network.http.referer.spoofSource` to `true`, an advanced setting
 which always sends the destination URL as the referrer, thereby not leaking
 anything about the original page.
 
-Unfortunately, the last two are examples of the kind of breakage that can
+Unfortunately, the last three are examples of the kind of breakage that can
 only be fixed through a whitelist (an approach supported by the [smart
 referer add-on](https://addons.mozilla.org/firefox/addon/smart-referer/)) or
 by temporarily using a different [browser

Add pulseaudio docking post
diff --git a/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn b/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn
new file mode 100644
index 0000000..696add0
--- /dev/null
+++ b/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn
@@ -0,0 +1,108 @@
+[[!meta title="Toggling Between Pulseaudio Outputs when Docking a Laptop"]]
+[[!meta date="2017-07-11T22:00:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+In addition to
+[selecting the right monitor after docking my ThinkPad](https://feeding.cloud.geek.nz/posts/hooking-into-docking-undocking-events-to-run-scripts/),
+I wanted to set the correct sound output since I have headphones connected
+to my Ultra Dock. This can be done fairly easily using
+[Pulseaudio](https://www.freedesktop.org/wiki/Software/PulseAudio/).
+
+# Switching to a different pulseaudio output
+
+To find the device name and the output name I need to provide to `pacmd`, I
+ran `pacmd list-sinks`:
+
+    2 sink(s) available.
+    ...
+      * index: 1
+    	name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
+    	driver: <module-alsa-card.c>
+    ...
+    	ports:
+    		analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
+    			properties:
+    				
+    		analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
+    			properties:
+    				device.icon_name = "audio-speakers"
+
+From there, I extracted the soundcard name
+(`alsa_output.pci-0000_00_1b.0.analog-stereo`) and the names of the two
+output ports (`analog-output` and `analog-output-speaker`).
+
+To switch between the headphones and the speakers, I can therefore run the
+following commands:
+
+    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
+    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
+
+# Listening for headphone events
+
+Then I looked for the ACPI event triggered when my headphones are detected
+by the laptop after docking.
+
+After looking at the output of `acpi_listen`, I found `jack/headphone HEADPHONE plug`.
+
+Combining this with the above pulseaudio names, I put the following in
+`/etc/acpi/events/thinkpad-dock-headphones`:
+
+    event=jack/headphone HEADPHONE plug
+    action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"
+
+to automatically switch to the headphones when I dock my laptop.
+
+# Finding out whether or not the laptop is docked
+
+While it is possible to
+[hook into the docking and undocking ACPI events and run scripts](https://feeding.cloud.geek.nz/posts/hooking-into-docking-undocking-events-to-run-scripts/),
+there doesn't seem to be an easy way from a shell script to tell whether or
+not the laptop is docked.
+
+In the end, I settled on detecting the presence of USB devices.
+
+I ran `lsusb` twice (once docked and once undocked) and then compared the
+output:
+
+    lsusb  > docked 
+    lsusb  > undocked 
+    colordiff -u docked undocked 
+
+This gave me a number of differences since I have a bunch of peripherals
+attached to the dock:
+
+    --- docked	2017-07-07 19:10:51.875405241 -0700
+    +++ undocked	2017-07-07 19:11:00.511336071 -0700
+    @@ -1,15 +1,6 @@
+     Bus 001 Device 002: ID 8087:8000 Intel Corp. 
+     Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+    -Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
+    -Bus 003 Device 080: ID 17ef:1010 Lenovo 
+     Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
+    -Bus 002 Device 041: ID xxxx:xxxx ...
+    -Bus 002 Device 040: ID xxxx:xxxx ...
+    -Bus 002 Device 039: ID xxxx:xxxx ...
+    -Bus 002 Device 038: ID 17ef:100f Lenovo 
+    -Bus 002 Device 037: ID xxxx:xxxx ...
+    -Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
+    -Bus 002 Device 036: ID 17ef:1010 Lenovo 
+     Bus 002 Device 002: ID xxxx:xxxx ...
+     Bus 002 Device 004: ID xxxx:xxxx ...
+     Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+
+I picked `17ef:1010` as it appeared to be some internal bus on the Ultra
+Dock (none of my USB devices were connected to Bus 003) and then ended up
+with the following
+[port toggling script](https://github.com/fmarier/user-scripts/blob/master/toggle-pulseaudio-port):
+
+    #!/bin/bash
+    
+    if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
+        # docked
+        pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
+    else
+        # undocked
+        pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
+    fi
+
+[[!tag debian]] [[!tag nzoss]] [[!tag thinkpad]]
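The docked test in the toggle script above reduces to a grep exit status. A self-contained sketch of the same logic, using canned `lsusb` output instead of the live command (the sample lines are abridged from the diff above):

```shell
#!/bin/sh
# Canned sample of `lsusb` output while docked (device list abridged)
sample="Bus 003 Device 080: ID 17ef:1010 Lenovo
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub"

# Same test as the toggle script: grep's exit status tells us whether
# the dock's internal hub (17ef:1010) is present
if printf '%s\n' "$sample" | grep -q 17ef:1010; then
    state=docked
else
    state=undocked
fi
echo "$state"
```

Swapping `"$sample"` for live `lsusb` output gives back the original script's behaviour.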

Comment moderation
diff --git a/posts/setting-up-a-network-scanner-using-sane/comment_4_11c177b11331f1d232176d768b571430._comment b/posts/setting-up-a-network-scanner-using-sane/comment_4_11c177b11331f1d232176d768b571430._comment
new file mode 100644
index 0000000..861621f
--- /dev/null
+++ b/posts/setting-up-a-network-scanner-using-sane/comment_4_11c177b11331f1d232176d768b571430._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="184.155.20.14"
+ claimedauthor="Paul K"
+ subject="Re: Point of Network Scanner... Windows"
+ date="2017-07-09T01:26:59Z"
+ content="""
+sane is supported on windows (Xsane for win32, SwingSane), but only as a network client. You can't plug a scanner into a windows machine with USB and use sane, but you can plug a scanner into a linux machine, run saned, and then connect sane on windows to that.
+
+Why would you do this? HP Multifunction printers are notorious for not supporting the latest version of windows. HP will make a \"universal print driver\" and ignore the scanner. So anyone with an older device (something made for XP or Win9x) can't scan from windows normally. saned keeps these devices alive.
+"""]]

Use another docking event which is triggered later
diff --git a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
index dc04f8d..eb22b74 100644
--- a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
+++ b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
@@ -24,7 +24,7 @@ as [suggested in this guide](http://phihag.de/2012/thinkpad-docking.html).
 
 Firstly, `/etc/acpi/events/thinkpad-dock`:
 
-    event=ibm/hotkey LEN0068:00 00000080 00004010
+    event=ibm/hotkey LEN0068:00 00000080 00006030
     action=su francois -c "/home/francois/bin/external-monitor dock"
 
 Secondly, `/etc/acpi/events/thinkpad-undock`:
@@ -36,6 +36,9 @@ then restart acpid:
 
     sudo systemctl restart acpid.service
 
+Note that I'm not using the real "docking" event (`ibm/hotkey LEN0068:00 00000080 00004010`)
+because it seems to be triggered too early and the new displays aren't ready.
+
 ## Finding the right events
 
 To make sure the events are the right ones, lift them off of:

Update instructions for systemd
diff --git a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
index b7bc5de..dc04f8d 100644
--- a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
+++ b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
@@ -32,9 +32,9 @@ Secondly, `/etc/acpi/events/thinkpad-undock`:
     event=ibm/hotkey LEN0068:00 00000080 00004011
     action=su francois -c "/home/francois/bin/external-monitor undock"
 
-then restart udev:
+then restart acpid:
 
-    sudo service udev restart
+    sudo systemctl restart acpid.service
 
 ## Finding the right events
 
@@ -46,7 +46,7 @@ and ensure that your script is actually running by adding:
 
     logger "ACPI event: $*"
 
-at the begininng of it and then looking in `/var/log/syslog` for this lines
+at the beginning of it and then looking in `/var/log/syslog` for lines
 like:
 
     logger: external-monitor undock

Replace "lenovo" tag with "thinkpad" and add it to docking post
diff --git a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
index 45d6e0d..b7bc5de 100644
--- a/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
+++ b/posts/hooking-into-docking-undocking-events-to-run-scripts.mdwn
@@ -71,4 +71,4 @@ I used:
     xrandr -d :0.0 --output eDP1 --auto
     xrandr -d :0.0 --output DP2 --left-of eDP1
 
-[[!tag debian]] [[!tag nzoss]]
+[[!tag debian]] [[!tag nzoss]] [[!tag thinkpad]]
diff --git a/posts/upgrading-lenovo-thinkpad-bios-under-linux.mdwn b/posts/upgrading-lenovo-thinkpad-bios-under-linux.mdwn
index 1b2b5f7..364bd47 100644
--- a/posts/upgrading-lenovo-thinkpad-bios-under-linux.mdwn
+++ b/posts/upgrading-lenovo-thinkpad-bios-under-linux.mdwn
@@ -45,4 +45,4 @@ partition name, for the USB stick):
 then restart and boot from the USB stick by pressing Enter, then F12 when
 you see the Lenovo logo.
 
-[[!tag debian]] [[!tag nzoss]] [[!tag lenovo]] [[!tag thinkpad]]
+[[!tag debian]] [[!tag nzoss]] [[!tag thinkpad]]

Comment moderation
diff --git a/posts/using-dnssec-and-dnscrypt-in-debian/comment_7_8d2220b92f520d2b021df5764746d2ec._comment b/posts/using-dnssec-and-dnscrypt-in-debian/comment_7_8d2220b92f520d2b021df5764746d2ec._comment
new file mode 100644
index 0000000..a6cc05f
--- /dev/null
+++ b/posts/using-dnssec-and-dnscrypt-in-debian/comment_7_8d2220b92f520d2b021df5764746d2ec._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="109.163.234.2"
+ claimedauthor="Martin"
+ url="blog.mdosch.de"
+ subject="Captive Portal"
+ date="2017-07-07T14:34:50Z"
+ content="""
+I'm doing a lot of business trips so I'm using a lot of Airport and Hotel WiFis. So far I could reach all the captive portals when directly typing http://1.1.1.1 into my browser address bar. I don't know if this a standard but so far it seems all the captive portals are reachable this way.
+If it won't work I would have a look at the IP DHCP gave, e.g. 192.168.10.42, and then would try to access 192.168.10.1 (but I never needed to try this as 1.1.1.1 always worked for me).
+"""]]

Comment moderation
diff --git a/posts/setting-up-raid-on-existing/comment_15_ea7bf9dd2aaddafefc2ca34aebd387a4._comment b/posts/setting-up-raid-on-existing/comment_15_ea7bf9dd2aaddafefc2ca34aebd387a4._comment
new file mode 100644
index 0000000..14281bc
--- /dev/null
+++ b/posts/setting-up-raid-on-existing/comment_15_ea7bf9dd2aaddafefc2ca34aebd387a4._comment
@@ -0,0 +1,21 @@
+[[!comment format=mdwn
+ ip="83.208.32.87"
+ claimedauthor="TyNyT"
+ subject="Proper Grub approach"
+ date="2017-06-25T18:33:53Z"
+ content="""
+Hi, I found the Grub reconfig too complex and not working well in case the /boot is on a separate partition, failing to rescue mode.
+
+Instead of fiddling with the grub console, one can fix the issue before reboot - just to chroot into the mounted md partitions (be aware, CHOOSE TO INSTALL GRUB TO MD-ENABLED DRIVE _ONLY_, just not to touch the \"source\" drive):
+
+    mount -t proc /proc /mnt/mntroot/proc
+    mount --rbind /sys /mnt/mntroot/sys
+    mount --make-rslave /mnt/mntroot/sys
+    mount --rbind /dev /mnt/mntroot/dev
+    mount --make-rslave /mnt/mntroot/dev
+    chroot /mnt/mntroot /bin/bash
+    source /etc/profile
+    dpkg-reconfigure grub-pc  
+
+I consider this approach to be much cleaner.
+"""]]

Remove haveged
This was removed from BetterCrypto.org:
https://github.com/BetterCrypto/Applied-Crypto-Hardening/commit/cf7cef7a870c1b77089b1bd6209ded6525b5a4e0
https://lists.cert.at/pipermail/ach/2017-May/thread.html#2255
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 4638aa6..722d134 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -212,7 +212,7 @@ before reloading these settings using `sysctl -p`.
 
 # Entropy and timekeeping
 
-    apt install haveged rng-tools ntp
+    apt install rng-tools ntp
 
 To keep the system clock accurate and increase the amount of entropy
 available to the server, I install the above packages and add the `tpm_rng`

Remove the recommendation to increase log verbosity
Newer (jessie+) versions of ssh already include the pubkey fingerprints in
the accepted connection messages.
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index f9c889a..9351138 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -108,23 +108,11 @@ You may also want to include the following options to each entry:
 
 # Increasing the amount of logging
 
-The first thing I'd recommend is to increase the level of verbosity in
-`/etc/ssh/sshd_config`:
-
-    LogLevel VERBOSE
-
-which will, amongst other things, log the fingerprints of keys used to login:
-
-    sshd: Connection from 192.0.2.2 port 39671
-    sshd: Found matching RSA key: de:ad:be:ef:ca:fe
-    sshd: Postponed publickey for francois from 192.0.2.2 port 39671 ssh2 [preauth]
-    sshd: Accepted publickey for francois from 192.0.2.2 port 39671 ssh2 
-
-Secondly, if you run [logcheck](https://packages.debian.org/stable/logcheck)
+If you run [logcheck](https://packages.debian.org/stable/logcheck)
 and would like to whitelist the "Accepted publickey" messages on your
 server, you'll have to start by deleting the first line of
 `/etc/logcheck/ignore.d.server/sshd`. Then you can add an entry for all of
-the usernames and IP addresses that you expect to see.
+the usernames, IP addresses and ssh keys that you expect to see.
 
 Finally, it is also possible to
 [log all commands issued by a specific user over ssh](http://beardyjay.co.uk/logging-all-ssh-commands/logging-ssh)

Mention healthchecks.io
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index a82cac8..4638aa6 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -311,6 +311,12 @@ to ensure that email doesn't accumulate unmonitored on this box.
 Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then
 test the whole setup using `mail root`.
 
+To monitor that mail never stops flowing, add this machine to a free
+[healthchecks.io](https://healthchecks.io) account and create a
+`/etc/cron.d/healthchecks-io` cronjob:
+
+    0 1 * * * root echo "ping" | mail xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@hchk.io
+
 # Network tuning
 
 To [reduce the server's contribution to

Add Django 400 Bad Request post
diff --git a/posts/mysterious-400-bad-request-error-django-debug.mdwn b/posts/mysterious-400-bad-request-error-django-debug.mdwn
new file mode 100644
index 0000000..2c48fd7
--- /dev/null
+++ b/posts/mysterious-400-bad-request-error-django-debug.mdwn
@@ -0,0 +1,71 @@
+[[!meta title="Mysterious 400 Bad Request in Django debug mode"]]
+[[!meta date="2017-06-10T17:20:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+While upgrading [Libravatar](https://www.libravatar.org) to a more recent
+version of [Django](https://www.djangoproject.com/), I ran into a
+mysterious 400 error.
+
+In debug mode, my site was working fine, but with `DEBUG = False`, I would
+only get a page containing this error:
+
+    Bad Request (400)
+
+with no extra details in the web server logs.
+
+# Turning on extra error logging
+
+To see the full error message, I [configured logging to a
+file](https://docs.djangoproject.com/en/1.11/topics/logging/#examples) by
+adding this to `settings.py`:
+
+```
+LOGGING = {
+    'version': 1,
+    'disable_existing_loggers': False,
+    'handlers': {
+        'file': {
+            'level': 'DEBUG',
+            'class': 'logging.FileHandler',
+            'filename': '/tmp/debug.log',
+        },
+    },
+    'loggers': {
+        'django': {
+            'handlers': ['file'],
+            'level': 'DEBUG',
+            'propagate': True,
+        },
+    },
+}
+```
+
+Then I got the following error message:
+
+    Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.
+
+# Temporary hack
+
+Sure enough, putting this in `settings.py` would make it work outside of debug mode:
+
+    ALLOWED_HOSTS = ['*']
+
+which means that there's a mismatch between the HTTP_HOST from Apache and
+[the one that Django expects](https://docs.djangoproject.com/en/1.11/topics/security/#host-headers-virtual-hosting).
+
+# Root cause
+
+The underlying problem was that the
+[Libravatar config file was missing the square brackets](https://git.launchpad.net/~libravatar/libravatar/commit/?id=a8c1002a39e7a1ef7d0ed7e5fb2ecf536ad4eede)
+around the
+[`ALLOWED_HOSTS` setting](https://docs.djangoproject.com/en/1.11/ref/settings/#allowed-hosts).
+
+I had this:
+
+    ALLOWED_HOSTS = 'www.example.com'
+
+instead of:
+
+    ALLOWED_HOSTS = ['www.example.com']
+
+[[!tag django]] [[!tag nzoss]] [[!tag debian]] [[!tag libravatar]]
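The reason the string form fails silently is that Django iterates over `ALLOWED_HOSTS`, and iterating a string yields single characters. A simplified sketch of the membership check (not Django's actual `validate_host`, which also handles wildcards and ports):

```python
def host_allowed(host, allowed_hosts):
    # Simplified stand-in for Django's host validation: compare the
    # request's Host header against each entry in ALLOWED_HOSTS
    return any(pattern == "*" or pattern == host for pattern in allowed_hosts)

# Buggy string form: iteration yields 'w', 'w', 'w', '.', ... so no
# real hostname can ever match and every request gets a 400
print(host_allowed("www.example.com", "www.example.com"))    # False

# Correct list form
print(host_allowed("www.example.com", ["www.example.com"]))  # True
```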

Commit the new ejabberd key
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index 0f8da88..5fd7dbc 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -16,7 +16,7 @@ Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
     /usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"
 
     pushd /etc/ > /dev/null
-    /usr/bin/git add letsencrypt
+    /usr/bin/git add letsencrypt ejabberd
     DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
     if [ -n "$DIFFSTAT" ] ; then
         /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"

Emphasize that both cdn and seccdn need to be removed
diff --git a/posts/server-migration-plan.mdwn b/posts/server-migration-plan.mdwn
index 4b64cf1..cb2358e 100644
--- a/posts/server-migration-plan.mdwn
+++ b/posts/server-migration-plan.mdwn
@@ -13,7 +13,7 @@ go through a similar process.
 # Prepare DNS
 
 * Change the TTL on the DNS entry for `libravatar.org` to 3600 seconds.
-* Remove the mirrors I don't control from the DNS load balancer (`cdn` and `seccdn`).
+* Remove the mirrors I don't control from the DNS load balancer (`cdn` **and** `seccdn`).
 * Remove the main server from `cdn` and `seccdn` in DNS.
 
 # Preparing the new server

Add a note about defaulting to a UTF-8 locale
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index bbfda33..a82cac8 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -64,6 +64,10 @@ the list of generated locales:
 
     dpkg-reconfigure locales
 
+Make sure the default locale is **using the UTF-8** encoding since that will
+ensure that things like Postgres default to the One True Encoding when you
+install/bootstrap them.
+
 Other than that, I [harden the ssh configuration](http://feeding.cloud.geek.nz/posts/hardening-ssh-servers/)
 and end up with the following settings in `/etc/ssh/sshd_config` (jessie):
 

Expand on how to copy the data files in /var/lib/libravatar/
diff --git a/posts/server-migration-plan.mdwn b/posts/server-migration-plan.mdwn
index a4c5597..4b64cf1 100644
--- a/posts/server-migration-plan.mdwn
+++ b/posts/server-migration-plan.mdwn
@@ -95,9 +95,25 @@ go through a similar process.
 * [Tweet](https://twitter.com/libravatar/status/364659172983308288) and [dent](https://identi.ca/libravatar/note/UFBI9ne8SsOftkYlSKPHQQ) about the upcoming migration.
 
 * Enable the static file config on the old server (disabling the Django app).
-
-* Copy the database from the old server and restore it on the new server.
+* Disable pgbouncer to ensure that Django cannot access postgres anymore.
+* Copy the database from the old server and restore it on the new server **making sure it's in the UTF8 encoding**.
 * Copy `/var/lib/libravatar` from the old server to the new one.
+  * On the new server:
+
+        chmod a+w /var/lib/libravatar/avatar
+        chmod a+w /var/lib/libravatar/user
+
+  * From laptop:
+
+        rsync -a -H -v husavik.libravatar.org:/var/lib/libravatar/avatar .
+        rsync -a -H -v husavik.libravatar.org:/var/lib/libravatar/user .
+        rsync -a -H -v avatar/* selfoss.libravatar.org:/var/lib/libravatar/avatar/
+        rsync -a -H -v user/* selfoss.libravatar.org:/var/lib/libravatar/user/
+
+  * On the new server:
+
+        chmod go-w /var/lib/libravatar/avatar
+        chmod go-w /var/lib/libravatar/user
 
 # Disable mirror sync
 

Use the HTTPS version of identi.ca
diff --git a/posts/server-migration-plan.mdwn b/posts/server-migration-plan.mdwn
index a2a2d13..a4c5597 100644
--- a/posts/server-migration-plan.mdwn
+++ b/posts/server-migration-plan.mdwn
@@ -57,7 +57,7 @@ go through a similar process.
       <html>
       <body>
       <p>We are migrating to a new server. See you soon!</p>
-      <p>- <a href="http://identi.ca/libravatar">@libravatar</a></p>
+      <p>- <a href="https://identi.ca/libravatar">@libravatar</a></p>
       </body>
       </html>
 

Update Apache configs for Apache 2.4
diff --git a/posts/server-migration-plan.mdwn b/posts/server-migration-plan.mdwn
index 0cdf022..a2a2d13 100644
--- a/posts/server-migration-plan.mdwn
+++ b/posts/server-migration-plan.mdwn
@@ -29,7 +29,7 @@ go through a similar process.
 
 # Preparing the old server
 
-* Prepare a static "under migration" Apache config in `/etc/apache2/sites-enables.static/`:
+* Prepare a static "under migration" Apache config in `/etc/apache2/sites-enabled.static/default.conf`:
 
       <VirtualHost *:80>
           RewriteEngine On
@@ -38,24 +38,21 @@ go through a similar process.
 
       <VirtualHost *:443>
           SSLEngine on
-          SSLProtocol TLSv1
-          SSLHonorCipherOrder On
-          SSLCipherSuite RC4-SHA:HIGH:!kEDH
       
           SSLCertificateFile /etc/libravatar/www.crt
           SSLCertificateKeyFile /etc/libravatar/www.pem
           SSLCertificateChainFile /etc/libravatar/www-chain.pem
       
           RewriteEngine On
-          RewriteRule ^ /var/www/migration.html [last]
+          RewriteRule ^ /var/www/html/migration.html [last]
       
-          <Directory /var/www>
+          <Directory /var/www/html>
               Allow from all
               Options -Indexes
           </Directory>
       </VirtualHost>
 
-* Put this static file in /var/www/migration.html:
+* Put this static file in /var/www/html/migration.html:
 
       <html>
       <body>
@@ -68,7 +65,7 @@ go through a similar process.
 
       a2enmod rewrite
 
-* Prepare an Apache config proxying to the new server in `/etc/apache2/sites-enabled.proxy/`:
+* Prepare an Apache config proxying to the new server in `/etc/apache2/sites-enabled.proxy/default.conf`:
 
       <VirtualHost *:80>
           RewriteEngine On
@@ -77,9 +74,6 @@ go through a similar process.
       
       <VirtualHost *:443>
           SSLEngine on
-          SSLProtocol TLSv1
-          SSLHonorCipherOrder On
-          SSLCipherSuite RC4-SHA:HIGH:!kEDH
       
           SSLCertificateFile /etc/libravatar/www.crt
           SSLCertificateKeyFile /etc/libravatar/www.pem

Remove exim4 config files after replacing it with postfix
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 123b394..bbfda33 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -274,6 +274,7 @@ and then run:
 # Mail
 
     apt install postfix
+    apt purge exim4-base exim4-daemon-light exim4-config
 
 Configuring mail properly is tricky but the following has worked for me.
 

Remove more old wheezy notes
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 97a0cd2..123b394 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -85,14 +85,6 @@ and end up with the following settings in `/etc/ssh/sshd_config` (jessie):
     LogLevel VERBOSE
     AllowGroups sshuser
 
-or the following for wheezy servers:
-
-    HostKey /etc/ssh/ssh_host_rsa_key
-    HostKey /etc/ssh/ssh_host_ecdsa_key
-    KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
-    Ciphers aes256-ctr,aes192-ctr,aes128-ctr
-    MACs hmac-sha2-512,hmac-sha2-256
-
 On those servers where I need [duplicity/paramiko to
 work](https://github.com/paramiko/paramiko/issues/509), I also add the following:
 
@@ -228,31 +220,21 @@ module to `/etc/modules`.
 
 The above packages are all about catching mistakes (such as
 [accidental deletions](http://feeding.cloud.geek.nz/posts/preventing-accidental-deletion-of/)).
-However, in order to extend the molly-guard protection to mosh sessions, one needs to
-manually [apply a patch](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=705397).
 
 # Package updates
 
-    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest needrestart
+    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges reboot-notifier popularity-contest needrestart
 
 These tools help me keep packages up to date and remove unnecessary or
 obsolete packages from servers. On Rackspace servers, a small [configuration
 change](http://feeding.cloud.geek.nz/posts/using-unattended-upgrades-on-rackspace-debian-ubuntu-servers/)
 is needed to automatically update the monitoring tools.
 
-In addition to this, I use the `update-notifier-common` package along with
-the following cronjob in `/etc/cron.daily/reboot-required`:
-
-    #!/bin/sh
-    cat /var/run/reboot-required 2> /dev/null || true
-
+On jessie or later, I install
+[reboot-notifier](http://feeding.cloud.geek.nz/posts/introducing-reboot-notifier/)
 to send me a notification whenever a kernel update requires a reboot to take
 effect.
 
-If you're on jessie or later, simply install
-[reboot-notifier](http://feeding.cloud.geek.nz/posts/introducing-reboot-notifier/)
-instead of `update-notifier-common` and you're done!
-
 In addition to knowing when you need to reboot your machine, the
 `needrestart` package will let you know (and offer to do it for you) when
 you need to restart a daemon using an obsolete library.
@@ -266,7 +248,8 @@ enabling data collection in `/etc/default/sysstat` to be useful.
 
 # Apache configuration
 
-    apt install apache2-mpm-event
+    apt install apache2
+    a2enmod mpm_event
 
 While configuring apache is often specific to each server and the services
 that will be running on it, there are a few common changes I make.
@@ -280,18 +263,6 @@ I enable these in `/etc/apache2/conf-enabled/security.conf`:
     ServerTokens Prod
     ServerSignature Off
 
-or `/etc/apache2/conf.d/security` on wheezy):
-
-    <Directory />
-        AllowOverride None
-        Order Deny,Allow
-        Deny from all
-    </Directory>
-    ServerTokens Prod
-    ServerSignature Off
-
-and remove cgi-bin directives from `/etc/apache2/sites-enabled/000-default`.
-
 I also create a new `/etc/apache2/conf-available/servername.conf` which contains:
 
     ServerName machine_hostname

Remove obsolete harden-* packages
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 79dd8e8..97a0cd2 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -189,7 +189,7 @@ and these to `/etc/rkhunter.conf.local`:
 
 # General hardening
 
-    apt install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra libpam-tmpdir
+    apt install apparmor apparmor-profiles apparmor-profiles-extra libpam-tmpdir
 
 While the harden packages are configuration-free, AppArmor must be [manually enabled](https://wiki.debian.org/AppArmor/HowToUse#Enable_AppArmor):
 

Use `apt` instead of `apt-get`
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 9088b95..79dd8e8 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -9,7 +9,7 @@ how I customize recent releases of Debian on those servers.
 
 # Hardware tests
 
-    apt-get install memtest86+ smartmontools e2fsprogs
+    apt install memtest86+ smartmontools e2fsprogs
 
 Prior to spending any time configuring a new physical server, I like to
 ensure that the hardware is fine.
@@ -24,7 +24,7 @@ Then I check the hard drives using:
 
 # Configuration
 
-    apt-get install etckeeper git sudo vim
+    apt install etckeeper git sudo vim
 
 To keep track of the configuration changes I make in `/etc/`, I use etckeeper
 to keep that directory in a git repository and make the following changes to
@@ -52,7 +52,7 @@ following to `/etc/vim/vimrc.local`:
 
 # ssh
 
-    apt-get install openssh-server mosh fail2ban
+    apt install openssh-server mosh fail2ban
 
 Since most of my servers are set to UTC time, I like to [use my local
 timezone](http://petereisentraut.blogspot.com/2012/04/setting-time-zone-on-remote-ssh-hosts.html)
@@ -121,8 +121,8 @@ and add a timeout for root sessions by putting this in `/root/.bash_profile`:
 
 # Security checks
 
-    apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper mcelog rkhunter
-    apt-get remove --purge john john-data rpcbind tripwire unhide unhide.rb
+    apt install logcheck logcheck-database fcheck tiger debsums corekeeper mcelog rkhunter
+    apt remove --purge john john-data rpcbind tripwire unhide unhide.rb
 
 Logcheck is the main tool I use to keep an eye on log files, which is why I
 add a few additional log files to the default list in
@@ -189,7 +189,7 @@ and these to `/etc/rkhunter.conf.local`:
 
 # General hardening
 
-    apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra libpam-tmpdir
+    apt install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra libpam-tmpdir
 
 While the harden packages are configuration-free, AppArmor must be [manually enabled](https://wiki.debian.org/AppArmor/HowToUse#Enable_AppArmor):
 
@@ -216,7 +216,7 @@ before reloading these settings using `sysctl -p`.
 
 # Entropy and timekeeping
 
-    apt-get install haveged rng-tools ntp
+    apt install haveged rng-tools ntp
 
 To keep the system clock accurate and increase the amount of entropy
 available to the server, I install the above packages and add the `tpm_rng`
@@ -224,7 +224,7 @@ module to `/etc/modules`.
 
 # Preventing mistakes
 
-    apt-get install molly-guard safe-rm sl
+    apt install molly-guard safe-rm sl
 
 The above packages are all about catching mistakes (such as
 [accidental deletions](http://feeding.cloud.geek.nz/posts/preventing-accidental-deletion-of/)).
@@ -233,7 +233,7 @@ manually [apply a patch](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=705397
 
 # Package updates
 
-    apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest needrestart
+    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest needrestart
 
 These tools help me keep packages up to date and remove unnecessary or
 obsolete packages from servers. On Rackspace servers, a small [configuration
@@ -259,14 +259,14 @@ you need to restart a daemon using an obsolete library.
 
 # Handy utilities
 
-    apt-get install renameutils atool iotop sysstat lsof mtr-tiny mc
+    apt install renameutils atool iotop sysstat lsof mtr-tiny mc
 
 Most of these tools are configuration-free, except for sysstat, which requires
 enabling data collection in `/etc/default/sysstat` to be useful.
 
 # Apache configuration
 
-    apt-get install apache2-mpm-event
+    apt install apache2-mpm-event
 
 While configuring apache is often specific to each server and the services
 that will be running on it, there are a few common changes I make.
@@ -302,7 +302,7 @@ and then run:
 
 # Mail
 
-    apt-get install postfix
+    apt install postfix
 
 Configuring mail properly is tricky but the following has worked for me.
 

Suggest whitelisting your own IP addresses
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index a0643ef..f9c889a 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -57,6 +57,12 @@ install the [fail2ban](http://www.fail2ban.org/wiki/index.php/Main_Page)
 package. It keeps an eye on the ssh log file (`/var/log/auth.log`) and
 temporarily blocks IP addresses after a number of failed login attempts.
 
+To prevent your own IP addresses from being blocked, add them to
+`/etc/fail2ban/jail.conf`:
+
+    [DEFAULT]
+    ignoreip = 127.0.0.1/8 1.2.3.4
+
 Another approach is to hide the ssh service using
 [Single-Packet Authentication](http://en.wikipedia.org/wiki/Single_Packet_Authorization). I
 have [fwknop](http://www.cipherdyne.org/fwknop/) installed on some of my
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 70a3473..9088b95 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -99,6 +99,13 @@ work](https://github.com/paramiko/paramiko/issues/509), I also add the following
     KexAlgorithms ...,diffie-hellman-group-exchange-sha1
     MACs ...,hmac-sha1
 
+Since [fail2ban](http://www.fail2ban.org/) is used to rate-limit attempts to
+brute-force ssh connections, you may want to whitelist your own IP addresses
+by adding them to `/etc/fail2ban/jail.conf`:
+
+    [DEFAULT]
+    ignoreip = 127.0.0.1/8 1.2.3.4
+
 Then I remove the "Accepted" filter in `/etc/logcheck/ignore.d.server/ssh`
 (first line) to get a notification whenever anybody successfully logs into
 my server.
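The `ignoreip` line above mixes a CIDR range with a single address. fail2ban's own matching is more elaborate, but the effect can be sketched with the standard `ipaddress` module (the addresses are the placeholders from the config above):

```python
import ipaddress

# Placeholder whitelist from the fail2ban example above
ignoreip = ["127.0.0.1/8", "1.2.3.4"]

def is_whitelisted(ip):
    # A /8 entry covers the whole range, not just the listed address;
    # a bare address behaves like a /32
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net, strict=False)
               for net in ignoreip)

print(is_whitelisted("127.0.0.5"))  # True: inside 127.0.0.0/8
print(is_whitelisted("1.2.3.4"))    # True: exact match
print(is_whitelisted("5.6.7.8"))    # False: would still be banned
```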

Remove duplicate Flattr meta directive from homepage
Adding it to the template in 615caebf7e603e7b2cc3c7fdf6a0ab1da0a5be4b also
adds it to the homepage.
diff --git a/index.mdwn b/index.mdwn
index ef3117a..d08446d 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -6,5 +6,3 @@
 
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
-
-[[!meta name="flattr:id" content="4j6y0v"]]

Add Flattr meta tag to the page template
diff --git a/templates/page.tmpl b/templates/page.tmpl
new file mode 100644
index 0000000..3688f11
--- /dev/null
+++ b/templates/page.tmpl
@@ -0,0 +1,223 @@
+<!DOCTYPE html>
+<TMPL_IF HTML_LANG_CODE><html lang="<TMPL_VAR HTML_LANG_CODE>" dir="<TMPL_VAR HTML_LANG_DIR>" xmlns="http://www.w3.org/1999/xhtml"><TMPL_ELSE><html xmlns="http://www.w3.org/1999/xhtml"></TMPL_IF>
+<head>
+<TMPL_IF DYNAMIC>
+<TMPL_IF FORCEBASEURL><base href="<TMPL_VAR FORCEBASEURL>" /><TMPL_ELSE>
+<TMPL_IF BASEURL><base href="<TMPL_VAR BASEURL>" /></TMPL_IF>
+</TMPL_IF>
+</TMPL_IF>
+<TMPL_IF HTML5><meta charset="utf-8" /><TMPL_ELSE><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></TMPL_IF>
+<title><TMPL_VAR TITLE></title>
+<TMPL_IF RESPONSIVE_LAYOUT><meta name="viewport" content="width=device-width, initial-scale=1" /></TMPL_IF>
+<TMPL_IF FAVICON>
+<link rel="icon" href="<TMPL_VAR BASEURL><TMPL_VAR FAVICON>" type="image/x-icon" />
+</TMPL_IF>
+<link rel="stylesheet" href="<TMPL_VAR BASEURL>style.css" type="text/css" />
+<TMPL_IF LOCAL_CSS>
+<link rel="stylesheet" href="<TMPL_VAR BASEURL><TMPL_VAR LOCAL_CSS>" type="text/css" />
+<TMPL_ELSE>
+<link rel="stylesheet" href="<TMPL_VAR BASEURL>local.css" type="text/css" />
+</TMPL_IF>
+
+<TMPL_UNLESS DYNAMIC>
+<TMPL_IF EDITURL>
+<link rel="alternate" type="application/x-wiki" title="Edit this page" href="<TMPL_VAR EDITURL>" />
+</TMPL_IF>
+<TMPL_IF FEEDLINKS><TMPL_VAR FEEDLINKS></TMPL_IF>
+<TMPL_IF RELVCS><TMPL_VAR RELVCS></TMPL_IF>
+<TMPL_IF META><TMPL_VAR META></TMPL_IF>
+<TMPL_LOOP TRAILLOOP>
+<TMPL_IF PREVPAGE>
+<link rel="prev" href="<TMPL_VAR PREVURL>" title="<TMPL_VAR PREVTITLE>" />
+</TMPL_IF>
+<link rel="up" href="<TMPL_VAR TRAILURL>" title="<TMPL_VAR TRAILTITLE>" />
+<TMPL_IF NEXTPAGE>
+<link rel="next" href="<TMPL_VAR NEXTURL>" title="<TMPL_VAR NEXTTITLE>" />
+</TMPL_IF>
+</TMPL_LOOP>
+</TMPL_UNLESS>
+
+<meta name="flattr:id" content="4j6y0v">
+</head>
+<body>
+
+<TMPL_IF HTML5><article class="page"><TMPL_ELSE><div class="page"></TMPL_IF>
+
+<TMPL_IF HTML5><section class="pageheader"><TMPL_ELSE><div class="pageheader"></TMPL_IF>
+<TMPL_IF HTML5><header class="header"><TMPL_ELSE><div class="header"></TMPL_IF>
+<span>
+<span class="parentlinks">
+<TMPL_LOOP PARENTLINKS>
+<a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a>/ 
+</TMPL_LOOP>
+</span>
+<span class="title">
+<TMPL_VAR TITLE>
+<TMPL_IF ISTRANSLATION>
+&nbsp;(<TMPL_VAR PERCENTTRANSLATED>%)
+</TMPL_IF>
+</span>
+</span>
+<TMPL_UNLESS DYNAMIC>
+<TMPL_IF SEARCHFORM>
+<TMPL_VAR SEARCHFORM>
+</TMPL_IF>
+</TMPL_UNLESS>
+<TMPL_IF HTML5></header><TMPL_ELSE></div></TMPL_IF>
+
+<TMPL_IF HAVE_ACTIONS>
+<TMPL_IF HTML5><nav class="actions"><TMPL_ELSE><div class="actions"></TMPL_IF>
+<ul>
+<TMPL_IF EDITURL>
+<li><a href="<TMPL_VAR EDITURL>" rel="nofollow">Edit</a></li>
+</TMPL_IF>
+<TMPL_IF RECENTCHANGESURL>
+<li><a href="<TMPL_VAR RECENTCHANGESURL>">RecentChanges</a></li>
+</TMPL_IF>
+<TMPL_IF HISTORYURL>
+<li><a rel="nofollow" href="<TMPL_VAR HISTORYURL>">History</a></li>
+</TMPL_IF>
+<TMPL_IF GETSOURCEURL>
+<li><a rel="nofollow" href="<TMPL_VAR GETSOURCEURL>">Source</a></li>
+</TMPL_IF>
+<TMPL_IF PREFSURL>
+<li><a rel="nofollow" href="<TMPL_VAR PREFSURL>">Preferences</a></li>
+</TMPL_IF>
+<TMPL_IF ACTIONS>
+<TMPL_LOOP ACTIONS>
+<li><TMPL_VAR ACTION></li>
+</TMPL_LOOP>
+</TMPL_IF>
+<TMPL_IF COMMENTSLINK>
+<li><TMPL_VAR COMMENTSLINK></li>
+<TMPL_ELSE>
+<TMPL_IF DISCUSSIONLINK>
+<li><TMPL_VAR DISCUSSIONLINK></li>
+</TMPL_IF>
+</TMPL_IF>
+</ul>
+<TMPL_IF HTML5></nav><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+
+<TMPL_IF OTHERLANGUAGES>
+<TMPL_IF HTML5><nav id="otherlanguages"><TMPL_ELSE><div id="otherlanguages"></TMPL_IF>
+<ul>
+<TMPL_LOOP OTHERLANGUAGES>
+<li>
+<a href="<TMPL_VAR URL>"><TMPL_VAR LANGUAGE></a>
+<TMPL_IF MASTER>
+(master)
+<TMPL_ELSE>
+&nbsp;(<TMPL_VAR PERCENT>%)
+</TMPL_IF>
+</li>
+</TMPL_LOOP>
+</ul>
+<TMPL_IF HTML5></nav><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+
+<TMPL_UNLESS DYNAMIC>
+<TMPL_VAR TRAILS>
+</TMPL_UNLESS>
+
+<TMPL_IF HTML5></section><TMPL_ELSE></div></TMPL_IF>
+
+<TMPL_UNLESS DYNAMIC>
+<TMPL_IF SIDEBAR>
+<TMPL_IF HTML5><aside class="sidebar"><TMPL_ELSE><div class="sidebar"></TMPL_IF>
+<TMPL_VAR SIDEBAR>
+<TMPL_IF HTML5></aside><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+</TMPL_UNLESS>
+
+<div id="pagebody">
+
+<TMPL_IF HTML5><section<TMPL_ELSE><div</TMPL_IF> id="content" role="main">
+<TMPL_VAR CONTENT>
+<TMPL_IF HTML5></section><TMPL_ELSE></div></TMPL_IF>
+
+<TMPL_IF ENCLOSURE>
+<TMPL_IF HTML5><section id="enclosure"><TMPL_ELSE><div id="enclosure"></TMPL_IF>
+<a href="<TMPL_VAR ENCLOSURE>">Download</a>
+<TMPL_IF HTML5></section><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+
+<TMPL_UNLESS DYNAMIC>
+<TMPL_IF COMMENTS>
+<TMPL_IF HTML5><section<TMPL_ELSE><div</TMPL_IF> id="comments" role="complementary">
+<TMPL_VAR COMMENTS>
+<TMPL_IF ADDCOMMENTURL>
+<div class="addcomment">
+<a rel="nofollow" href="<TMPL_VAR ADDCOMMENTURL>">Add a comment</a>
+</div>
+<TMPL_ELSE>
+<div class="addcomment">Comments on this page are closed.</div>
+</TMPL_IF>
+<TMPL_IF HTML5></section><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+</TMPL_UNLESS>
+
+</div>
+
+<TMPL_IF HTML5><footer<TMPL_ELSE><div</TMPL_IF> id="footer" class="pagefooter" role="contentinfo">
+<TMPL_UNLESS DYNAMIC>
+<TMPL_IF HTML5><nav id="pageinfo"><TMPL_ELSE><div id="pageinfo"></TMPL_IF>
+
+<TMPL_VAR TRAILS>
+
+<TMPL_IF TAGS>
+<TMPL_IF HTML5><nav class="tags"><TMPL_ELSE><div class="tags"></TMPL_IF>
+Tags:
+<TMPL_LOOP TAGS>
+<TMPL_VAR LINK>
+</TMPL_LOOP>
+<TMPL_IF HTML5></nav><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+
+<TMPL_IF BACKLINKS>
+<TMPL_IF HTML5><nav id="backlinks"><TMPL_ELSE><div id="backlinks"></TMPL_IF>
+Links:
+<TMPL_LOOP BACKLINKS>
+<a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a>
+</TMPL_LOOP>
+<TMPL_IF MORE_BACKLINKS>
+<span class="popup">...
+<span class="balloon">
+<TMPL_LOOP MORE_BACKLINKS>
+<a href="<TMPL_VAR URL>"><TMPL_VAR PAGE></a>
+</TMPL_LOOP>
+</span>
+</span>
+</TMPL_IF>
+<TMPL_IF HTML5></nav><TMPL_ELSE></div></TMPL_IF>
+</TMPL_IF>
+

(Diff truncated)
Revert "Move Flattr meta directive to the sidebar"
This reverts commit 0587adcf82d0ccf71bd80c6ae6a2ca064614b086.
It doesn't work from the sidebar apparently.
diff --git a/index.mdwn b/index.mdwn
index d08446d..ef3117a 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -6,3 +6,5 @@
 
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
+
+[[!meta name="flattr:id" content="4j6y0v"]]
diff --git a/sidebar.mdwn b/sidebar.mdwn
index 209a4a0..463ac41 100644
--- a/sidebar.mdwn
+++ b/sidebar.mdwn
@@ -1,5 +1,3 @@
-[[!meta name="flattr:id" content="4j6y0v"]]
-
 # Subscribe to this blog
 
 <a href="https://feeding.cloud.geek.nz/index.rss"><img src="/feed-icon.png" height="32" width="32" align="left">Subscribe in a reader</a>

Move Flattr meta directive to the sidebar
Hopefully it will show up on every page now, not just the homepage.
diff --git a/index.mdwn b/index.mdwn
index ef3117a..d08446d 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -6,5 +6,3 @@
 
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
-
-[[!meta name="flattr:id" content="4j6y0v"]]
diff --git a/sidebar.mdwn b/sidebar.mdwn
index 463ac41..209a4a0 100644
--- a/sidebar.mdwn
+++ b/sidebar.mdwn
@@ -1,3 +1,5 @@
+[[!meta name="flattr:id" content="4j6y0v"]]
+
 # Subscribe to this blog
 
 <a href="https://feeding.cloud.geek.nz/index.rss"><img src="/feed-icon.png" height="32" width="32" align="left">Subscribe in a reader</a>

Fix Flattr meta directive
https://ikiwiki.info/bugs/It__39__s_not_possible_to_add_the_new_Flattr_meta_tag_using_the_meta_directive/
diff --git a/index.mdwn b/index.mdwn
index ac02da5..ef3117a 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr:id="4j6y0v"]]
+[[!meta name="flattr:id" content="4j6y0v"]]

Leave the meta directive in the format that should work
diff --git a/index.mdwn b/index.mdwn
index ef4872c..ac02da5 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr\:id="4j6y0v"]]
+[[!meta flattr:id="4j6y0v"]]

Another attempt at parsing the correct value
diff --git a/index.mdwn b/index.mdwn
index 26d9aa5..ef4872c 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr:id = 4j6y0v]]
+[[!meta flattr\:id="4j6y0v"]]

Trying to work around the parsing of the meta directive
diff --git a/index.mdwn b/index.mdwn
index 26bc9e0..26d9aa5 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr:id=4j6y0v]]
+[[!meta flattr:id = 4j6y0v]]

Yet another attempt
diff --git a/index.mdwn b/index.mdwn
index cf96357..26bc9e0 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta "flattr:id"="4j6y0v"]]
+[[!meta flattr:id=4j6y0v]]

Another attempt at fixing the meta directive
diff --git a/index.mdwn b/index.mdwn
index a50db9e..cf96357 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr:id param="4j6y0v"]]
+[[!meta "flattr:id"="4j6y0v"]]

Another attempt at fixing the use of the meta directive
diff --git a/index.mdwn b/index.mdwn
index ac02da5..a50db9e 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta flattr:id="4j6y0v"]]
+[[!meta flattr:id param="4j6y0v"]]

Fix use of meta directive for Flattr ID
diff --git a/index.mdwn b/index.mdwn
index 1b5254e..ac02da5 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -7,4 +7,4 @@
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
 
-[[!meta  field="flattr:id" param="4j6y0v"]]
+[[!meta flattr:id="4j6y0v"]]

Add Flattr ID
diff --git a/index.mdwn b/index.mdwn
index d08446d..1b5254e 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -6,3 +6,5 @@
 
 [[!inline pages="page(./posts/*) and !*/Discussion" show="10"
 actions=yes rootpage="posts"]]
+
+[[!meta  field="flattr:id" param="4j6y0v"]]

creating tag page tags/lvm
diff --git a/tags/lvm.mdwn b/tags/lvm.mdwn
new file mode 100644
index 0000000..148c5e5
--- /dev/null
+++ b/tags/lvm.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged lvm"]]
+
+[[!inline pages="tagged(lvm)" actions="no" archive="yes"
+feedshow=10]]

Add ejabberd and znc certs to my script
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index cc0c5b1..0f8da88 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -24,6 +24,12 @@ Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
     fi
     popd > /dev/null
 
+    # Generate the right certs for ejabberd and znc
+    if test /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem -nt /etc/ejabberd/ejabberd.pem ; then
+        cat /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem > /etc/ejabberd/ejabberd.pem
+    fi
+    cat /etc/letsencrypt/live/irc.fmarier.org/privkey.pem /etc/letsencrypt/live/irc.fmarier.org/fullchain.pem > /home/francois/.znc/znc.pem
+
 It temporarily disables my [Apache](https://httpd.apache.org/) webserver while it renews the
 certificates and then only outputs something to STDOUT (since my cronjob
 will email me any output) if certs have been renewed.
@@ -32,6 +38,17 @@ Since I'm using [etckeeper](https://etckeeper.branchable.com/) to keep track of
 servers, my renewal script also commits to the repository if any certs have
 changed.
 
+Finally, since my
+[XMPP server](https://feeding.cloud.geek.nz/posts/running-your-own-xmpp-server-debian-ubuntu/)
+and
+[IRC bouncer](https://feeding.cloud.geek.nz/posts/hiding-network-disconnections-using-irc-bouncer/)
+need the private key and the full certificate chain to be in the same file,
+so I regenerate these files at the end of the script. In the case of
+ejabberd, I only do so if the certificates have actually changed since
+overwriting `ejabberd.pem` changes its timestamp and triggers an
+[fcheck](https://packages.debian.org/stable/fcheck) notification (since it
+watches all files under `/etc`).
+
 # External Monitoring
 
 In order to catch mistakes or oversights, I use

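The `test ... -nt ...` guard in the script above is the key trick: the combined PEM is only rewritten when the Let's Encrypt key is newer, which keeps the `ejabberd.pem` timestamp stable and avoids spurious fcheck alerts. A runnable sketch of that idiom using throwaway files (all paths here are placeholders):

```shell
#!/bin/sh
# Demonstrate the "only rebuild when the source is newer" guard.
tmp=$(mktemp -d)
touch "$tmp/privkey.pem"          # pretend this is the key from a past renewal
sleep 1
touch "$tmp/combined.pem"         # the combined cert is currently newer

rebuilds=0
if test "$tmp/privkey.pem" -nt "$tmp/combined.pem"; then
    rebuilds=$((rebuilds + 1))    # a real script would cat key + chain here
fi

sleep 1
touch "$tmp/privkey.pem"          # simulate a renewal: key is now newer
if test "$tmp/privkey.pem" -nt "$tmp/combined.pem"; then
    rebuilds=$((rebuilds + 1))
fi

echo "$rebuilds"                  # prints "1": only the second check fired
rm -r "$tmp"
```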
Add post about LUKS and LVM on Ubuntu
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
new file mode 100644
index 0000000..471bd2a
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
@@ -0,0 +1,93 @@
+[[!meta title="Recovering from an unbootable Ubuntu encrypted LVM root partition"]]
+[[!meta date="2017-05-15T21:10:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+A laptop that was installed using the default Ubuntu 16.10 (yakkety)
+[full-disk encryption](https://www.eff.org/deeplinks/2012/11/privacy-ubuntu-1210-full-disk-encryption)
+option stopped booting after receiving a
+kernel update somewhere on the way to Ubuntu 17.04 (zesty).
+
+After showing the boot screen for about 30 seconds, a busybox shell pops up:
+
+    BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
+    Enter 'help' for list of built-in commands.
+    
+    (initramfs)
+
+Typing `exit` will display more information about the failure before
+bringing us back to the same busybox shell:
+
+    Gave up waiting for root device. Common problems:
+      - Boot args (cat /proc/cmdline)
+        - Check rootdelay= (did the system wait long enough?)
+        - Check root= (did the system wait for the right device?)
+      - Missing modules (cat /proc/modules; ls /dev)
+    ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 
+    
+    BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
+    Enter 'help' for list of built-in commands.  
+    
+    (initramfs)
+
+which now complains that the `/dev/mapper/ubuntu--vg-root` root partition
+(which uses
+[LUKS](https://gitlab.com/cryptsetup/cryptsetup/blob/master/README.md) and
+[LVM](https://www.sourceware.org/lvm2/)) cannot be found.
+
+There is some [comprehensive advice out there](https://askubuntu.com/questions/567730/gave-up-waiting-for-root-device-ubuntu-vg-root-doesnt-exist#567897)
+but it didn't quite work for me. This is how I ended up resolving the problem.
+
+# Boot using a USB installation disk
+
+First, create a bootable USB disk using the latest Ubuntu installer:
+
+1. [Download a desktop image](https://www.ubuntu.com/download/desktop).
+2. Copy the ISO directly onto the USB stick (overwriting it in the process):
+
+        dd if=ubuntu.iso of=/dev/sdc
+
+and boot the system using that USB stick ([hold the `option` key during boot on Apple hardware](https://support.apple.com/en-us/HT201255)).
+
+# Mount the encrypted partition
+
+Assuming a drive which is partitioned this way:
+
+- `/dev/sda1`: EFI partition
+- `/dev/sda2`: unencrypted boot partition
+- `/dev/sda3`: encrypted LVM partition
+
+Open a terminal and [mount the required partitions](https://superuser.com/questions/165116/mount-dev-proc-sys-in-a-chroot-environment):
+
+    cryptsetup luksOpen /dev/sda3 sda3_crypt
+    vgchange -ay
+    mount /dev/mapper/ubuntu--vg-root /mnt
+    mount /dev/sda2 /mnt/boot
+    mount -t proc proc /mnt/proc
+    mount -o bind /dev /mnt/dev
+
+Note:
+
+- When running `cryptsetup luksOpen`, you must use the same name as the one
+  that is in `/etc/crypttab` on the root partition (`sda3_crypt` in this
+  example).
+
+- All of these mounts must be present (**including `/proc` and `/dev`**) for
+  the initramfs scripts to do all of their work. If you see errors or
+  warnings, you must resolve them.
+
+# Regenerate the initramfs on the boot partition
+
+Then "enter" the root partition using:
+
+    chroot /mnt
+
+and make sure that the [lvm2](https://launchpad.net/ubuntu/+source/lvm2)
+package is installed:
+
+    apt install lvm2
+
+before regenerating the initramfs for all of the installed kernels:
+
+    update-initramfs -c -k all
+
+[[!tag debian]] [[!tag nzoss]] [[!tag ubuntu]] [[!tag luks]] [[!tag lvm]]

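The first note above (the `luksOpen` name must match `/etc/crypttab`) is easy to get wrong from a rescue shell. A small sketch for recovering the expected mapper name from a crypttab entry — the sample line below is hypothetical; on a real rescue you would read it from `/mnt/etc/crypttab` after mounting the root partition:

```shell
#!/bin/sh
# Extract the mapper name (first field) from a crypttab entry,
# skipping comments and blank lines.
crypttab_sample='# <target name> <source device> <key file> <options>
sda3_crypt UUID=0123-4567 none luks,discard'

mapper_name=$(printf '%s\n' "$crypttab_sample" \
    | awk '!/^#/ && NF { print $1; exit }')
echo "$mapper_name"    # prints "sda3_crypt"
```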
Fix the name of the gethash pref
diff --git a/posts/how-safe-browsing-works-in-firefox.mdwn b/posts/how-safe-browsing-works-in-firefox.mdwn
index ea7c734..d48c4a8 100644
--- a/posts/how-safe-browsing-works-in-firefox.mdwn
+++ b/posts/how-safe-browsing-works-in-firefox.mdwn
@@ -86,7 +86,7 @@ whether or not the rest of the hash matches the entry on the Safe Browsing
 list.
 
 In order to resolve such conflicts, Firefox requests from the Safe Browsing
-server (`browser.safebrowsing.provider.mozilla.gethashURL`) all of the
+server (`browser.safebrowsing.provider.google.gethashURL`) all of the
 hashes that start with the affected 32-bit prefix and adds these full-length
 hashes to its local database. Turn on `browser.safebrowsing.debug` to see
 some debugging information on the terminal while these "completion" requests

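For intuition about the 32-bit prefixes mentioned in this hunk: a prefix is just the first 4 bytes (8 hex digits) of the full hash, and only that short prefix is ever sent to the server. A simplified sketch — the input and canonicalization below are placeholders; real Safe Browsing canonicalizes URLs before hashing:

```shell
#!/bin/sh
# Derive a 32-bit prefix from a full hash (simplified illustration).
full_hash=$(printf 'example.com/' | sha256sum | awk '{print $1}')
prefix=$(printf '%s' "$full_hash" | cut -c1-8)
echo "$prefix"    # 8 hex digits = 4 bytes = 32 bits
```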
Fix mpd cronjobs
The change in b3226fa96cc6e88ab6aca27aede75295abcd4c4b introduced
a permission problem since the environment variable was only
available to the test command.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 936154c..aeeaa61 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -74,10 +74,10 @@ and created a cronjob in `/etc/cron.d/mpd-francois` to update the database
 daily and stop the music automatically in the evening:
 
     # Refresh DB once an hour
-    5 * * * *  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet update
+    5 * * * *  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
     # Think of the neighbours
-    0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet stop
-    0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet stop
+    0 22 * * 0-4  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
+    0 23 * * 5-6  mpd  test -r /run/mpd/socket && MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
 
 # Clients
 

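The bug being fixed in this commit is a shell scoping rule worth spelling out: a `VAR=value cmd1 && cmd2` prefix assignment applies only to `cmd1`, never to `cmd2` after the `&&`. A self-contained demonstration (no mpd required):

```shell
#!/bin/sh
# Broken order: DEMO_VAR is in the environment of `true` only,
# not of the second command after &&.
first=$(DEMO_VAR=hello true && sh -c 'printf "%s" "${DEMO_VAR:-unset}"')

# Fixed order: the assignment is attached to the command that
# actually needs it.
second=$(true && DEMO_VAR=hello sh -c 'printf "%s" "$DEMO_VAR"')

echo "$first"     # prints "unset"
echo "$second"    # prints "hello"
```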
Recommend reboot-notifier instead of update-notifier-common
diff --git a/posts/using-unattended-upgrades-on-rackspace-debian-ubuntu-servers.mdwn b/posts/using-unattended-upgrades-on-rackspace-debian-ubuntu-servers.mdwn
index aa7c446..5f3c515 100644
--- a/posts/using-unattended-upgrades-on-rackspace-debian-ubuntu-servers.mdwn
+++ b/posts/using-unattended-upgrades-on-rackspace-debian-ubuntu-servers.mdwn
@@ -114,18 +114,10 @@ be updated and it keeps doing that until the system is fully up-to-date.
 The only thing missing from this is getting a reminder whenever a package
 update (usually the kernel) **requires a reboot** to take effect. That's
 where the
-[update-notifier-common](https://packages.debian.org/wheezy/update-notifier-common)
+[reboot-notifier](https://feeding.cloud.geek.nz/posts/introducing-reboot-notifier/)
 package comes in.
 
-Because that package will add a hook that will create the
-`/var/run/reboot-required` file whenever a kernel update has been installed,
-all you need to do is create a cronjob like this in
-`/etc/cron.daily/reboot-required`:
-
-    #!/bin/sh
-    cat /var/run/reboot-required 2> /dev/null || true
-
-assuming of course that you are already receiving emails sent to the root
+This assumes that you are already receiving emails sent to the root
 user (if not, add the appropriate alias in `/etc/aliases` and run
 `newaliases`).
 

Update mpd DB once an hour but check it's running first
By updating once an hour, I mostly avoid the need to trigger
updates manually.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 0cc4118..936154c 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -73,11 +73,11 @@ silence unnecessary log messages in
 and created a cronjob in `/etc/cron.d/mpd-francois` to update the database
 daily and stop the music automatically in the evening:
 
-    # Refresh DB once a day
-    5 1 * * *  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
+    # Refresh DB once an hour
+    5 * * * *  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet update
     # Think of the neighbours
-    0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
-    0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
+    0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet stop
+    0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket test -r /run/mpd/socket && /usr/bin/mpc --quiet stop
 
 # Clients
 

Comment moderation
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_cc5c0c144345837437be6800303ae4f1._comment b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_cc5c0c144345837437be6800303ae4f1._comment
new file mode 100644
index 0000000..17134d5
--- /dev/null
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_cc5c0c144345837437be6800303ae4f1._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="78.60.202.182"
+ claimedauthor="Marius Gedminas"
+ url="https://gedmin.as"
+ subject="Why stop Apache?"
+ date="2017-04-14T13:17:59Z"
+ content="""
+You could use the Apache or webroot plugins to do the renewals without stopping Apache.  Is there anything that prevents you from doing that?
+"""]]

Use systemctl instead of apache2ctl to restart Apache
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index d7521cf..cc0c5b1 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -13,7 +13,7 @@ Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
 
     #!/bin/bash
 
-    /usr/bin/certbot renew --quiet --pre-hook "/usr/sbin/apache2ctl stop" --post-hook "/usr/sbin/apache2ctl start"
+    /usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"
 
     pushd /etc/ > /dev/null
     /usr/bin/git add letsencrypt

Use pre and post hooks in certbot command
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index 6d2e7b8..d7521cf 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -13,9 +13,7 @@ Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
 
     #!/bin/bash
 
-    /usr/sbin/apache2ctl stop
-    /usr/bin/certbot renew --quiet
-    /usr/sbin/apache2ctl start
+    /usr/bin/certbot renew --quiet --pre-hook "/usr/sbin/apache2ctl stop" --post-hook "/usr/sbin/apache2ctl start"
 
     pushd /etc/ > /dev/null
     /usr/bin/git add letsencrypt
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment
deleted file mode 100644
index 771777e..0000000
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment
+++ /dev/null
@@ -1,30 +0,0 @@
-[[!comment format=mdwn
- ip="93.139.219.66"
- claimedauthor="Ivan"
- url="https://www.tomica.me"
- subject="Cerbot options"
- date="2017-04-13T15:55:52Z"
- content="""
-There are 
-
-```
---pre-hook PRE_HOOK   Command to be run in a shell before obtaining any
-                        certificates. Intended primarily for renewal, where it
-                        can be used to temporarily shut down a webserver that
-                        might conflict with the standalone plugin. This will
-                        only be called if a certificate is actually to be
-                        obtained/renewed. When renewing several certificates
-                        that have identical pre-hooks, only the first will be
-                        executed. (default: None)
---post-hook POST_HOOK
-                        Command to be run in a shell after attempting to
-                        obtain/renew certificates. Can be used to deploy
-                        renewed certificates, or to restart any servers that
-                        were stopped by --pre-hook. This is only run if an
-                        attempt was made to obtain/renew a certificate. If
-                        multiple renewed certificates have identical post-
-                        hooks, only one will be run. (default: None)
-```
-
-command line options for certbot. Why not just use combination of those in cron job?
-"""]]

Comment moderation
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment
new file mode 100644
index 0000000..771777e
--- /dev/null
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/comment_1_75826e56fac368db2417030a76ea2fb4._comment
@@ -0,0 +1,30 @@
+[[!comment format=mdwn
+ ip="93.139.219.66"
+ claimedauthor="Ivan"
+ url="https://www.tomica.me"
+ subject="Cerbot options"
+ date="2017-04-13T15:55:52Z"
+ content="""
+There are 
+
+```
+--pre-hook PRE_HOOK   Command to be run in a shell before obtaining any
+                        certificates. Intended primarily for renewal, where it
+                        can be used to temporarily shut down a webserver that
+                        might conflict with the standalone plugin. This will
+                        only be called if a certificate is actually to be
+                        obtained/renewed. When renewing several certificates
+                        that have identical pre-hooks, only the first will be
+                        executed. (default: None)
+--post-hook POST_HOOK
+                        Command to be run in a shell after attempting to
+                        obtain/renew certificates. Can be used to deploy
+                        renewed certificates, or to restart any servers that
+                        were stopped by --pre-hook. This is only run if an
+                        attempt was made to obtain/renew a certificate. If
+                        multiple renewed certificates have identical post-
+                        hooks, only one will be run. (default: None)
+```
+
+command line options for certbot. Why not just use combination of those in cron job?
+"""]]

creating tag page tags/letsencrypt
diff --git a/tags/letsencrypt.mdwn b/tags/letsencrypt.mdwn
new file mode 100644
index 0000000..0f54283
--- /dev/null
+++ b/tags/letsencrypt.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged letsencrypt"]]
+
+[[!inline pages="tagged(letsencrypt)" actions="no" archive="yes"
+feedshow=10]]

Add old articles to a new letsencrypt tag
diff --git a/posts/hiding-network-disconnections-using-irc-bouncer.mdwn b/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
index 9600e75..b4b35ee 100644
--- a/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
+++ b/posts/hiding-network-disconnections-using-irc-bouncer.mdwn
@@ -107,4 +107,4 @@ kernel update, I keep the bouncer running. At the end of the day, I say yes
 to killing the bouncer. That way, I don't have a backlog to go through when
 I wake up the next day.
 
-[[!tag mozilla]] [[!tag debian]] [[!tag irc]] [[!tag irssi]] [[!tag nzoss]]
+[[!tag mozilla]] [[!tag debian]] [[!tag irc]] [[!tag irssi]] [[!tag nzoss]] [[!tag letsencrypt]]
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index b79d687..7f904e9 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -181,4 +181,4 @@ Finally, to ensure that your TLS settings are reasonable, use this
 [automated tool](https://xmpp.net/) to test both the client-to-server (c2s)
 and the server-to-server (s2s) flows.
 
-[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag sysadmin]] [[!tag xmpp]]
+[[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag sysadmin]] [[!tag xmpp]] [[!tag letsencrypt]]

Add letsencrypt renewal script
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
new file mode 100644
index 0000000..6d2e7b8
--- /dev/null
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -0,0 +1,59 @@
+[[!meta title="Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot"]]
+[[!meta date="2017-04-13T08:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I use [Let's Encrypt](https://letsencrypt.org/)
+[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) certificates
+on my Debian servers along with the [Certbot](https://certbot.eff.org/)
+tool. Since I use the "temporary webserver" method of proving domain
+ownership via the [ACME protocol](https://ietf-wg-acme.github.io/acme/), I
+cannot use the cert renewal cronjob built into Certbot.
+
+Instead, this is the script I put in `/etc/cron.daily/certbot-renew`:
+
+    #!/bin/bash
+
+    /usr/sbin/apache2ctl stop
+    /usr/bin/certbot renew --quiet
+    /usr/sbin/apache2ctl start
+
+    pushd /etc/ > /dev/null
+    /usr/bin/git add letsencrypt
+    DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
+    if [ -n "$DIFFSTAT" ] ; then
+        /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
+        echo "$DIFFSTAT"
+    fi
+    popd > /dev/null
+
+It temporarily disables my [Apache](https://httpd.apache.org/) webserver while it renews the
+certificates and then only outputs something to STDOUT (since my cronjob
+will email me any output) if certs have been renewed.
+
+Since I'm using [etckeeper](https://etckeeper.branchable.com/) to keep track of config changes on my
+servers, my renewal script also commits to the repository if any certs have
+changed.
+
+# External Monitoring
+
+In order to catch mistakes or oversights, I use
+[ssl-cert-check](https://packages.debian.org/stable/ssl-cert-check) to
+monitor my domains once a day:
+
+    ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org
+
+I also signed up with [Cert Spotter](https://sslmate.com/certspotter/) which
+watches the
+[Certificate Transparency](https://www.certificate-transparency.org/) log
+and notifies me of any newly-issued certificates for my domains.
+
+In other words, I get notified:
+
+- if my cronjob fails and a cert is about to expire, or
+- as soon as a new cert is issued.
+
+The whole thing seems to work well, but if there's anything I could be doing
+better, feel free to leave a comment!
+
+[[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag mozilla]]
+[[!tag ubuntu]] [[!tag ssl]] [[!tag apache]] [[!tag letsencrypt]]

How to reset the root password
diff --git a/posts/lxc-setup-on-debian-jessie.mdwn b/posts/lxc-setup-on-debian-jessie.mdwn
index c73f7c9..b46eab6 100644
--- a/posts/lxc-setup-on-debian-jessie.mdwn
+++ b/posts/lxc-setup-on-debian-jessie.mdwn
@@ -66,8 +66,13 @@ logins, so you'll need to log into the console:
     sudo lxc-stop -n sid64
     sudo lxc-start -n sid64 -F
 
-then install a text editor inside the container because the root image
-doesn't have one by default:
+Since the root password is randomly generated, you'll need to reset it before
+you can log in as root:
+
+    sudo lxc-attach -n sid64 passwd
+
+Then log in as root and install a text editor inside the container because the
+root image doesn't have one by default:
 
     apt install vim
 

Force foreground mode for console logins
The default seems to have changed to daemon mode so it's now necessary
to specify foreground mode to log into the console.
diff --git a/posts/lxc-setup-on-debian-jessie.mdwn b/posts/lxc-setup-on-debian-jessie.mdwn
index 37c83d2..c73f7c9 100644
--- a/posts/lxc-setup-on-debian-jessie.mdwn
+++ b/posts/lxc-setup-on-debian-jessie.mdwn
@@ -64,7 +64,7 @@ The ssh server is configured to require pubkey-based authentication for root
 logins, so you'll need to log into the console:
 
     sudo lxc-stop -n sid64
-    sudo lxc-start -n sid64
+    sudo lxc-start -n sid64 -F
 
 then install a text editor inside the container because the root image
 doesn't have one by default:

Add AppArmor and homedir mounting instructions
diff --git a/posts/lxc-setup-on-debian-jessie.mdwn b/posts/lxc-setup-on-debian-jessie.mdwn
index 087ad23..37c83d2 100644
--- a/posts/lxc-setup-on-debian-jessie.mdwn
+++ b/posts/lxc-setup-on-debian-jessie.mdwn
@@ -79,6 +79,19 @@ by typing this command:
 
     sudo lxc-ls --fancy
 
+# Mounting your home directory inside a container
+
+In order to have my home directory available within the container, I
+created a user account for myself inside the container and then added
+the following to the container config file (`/var/lib/lxc/sid64/config`):
+
+    lxc.mount.entry=/home/francois /var/lib/lxc/sid64/rootfs/home/francois none bind 0 0
+
+before restarting the container:
+
+    lxc-stop -n sid64
+    lxc-start -n sid64 -d
+
 # Fixing locale errors
 
 If you see a bunch of errors like these when you start your container:
@@ -123,4 +136,11 @@ and then start it up later once the locales have been updated:
     service apparmor start
     lxc-start -n sid64 -d
 
+# AppArmor support
+
+If you are running AppArmor, your container probably won't start until you
+add the following to the container config (`/var/lib/lxc/sid64/config`):
+
+    lxc.aa_allow_incomplete = 1
+
 [[!tag debian]] [[!tag lxc]] [[!tag nzoss]]

Add missing tag on RAID1 post
diff --git a/posts/manually-expanding-raid1-array-ubuntu.mdwn b/posts/manually-expanding-raid1-array-ubuntu.mdwn
index 8b77ed4..97712c6 100644
--- a/posts/manually-expanding-raid1-array-ubuntu.mdwn
+++ b/posts/manually-expanding-raid1-array-ubuntu.mdwn
@@ -148,4 +148,4 @@ The last step was to regenerate the initramfs:
 before rebooting into something that looks exactly like the original RAID1
 array but with twice the size.
 
-[[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag raid]] [[!tag ubuntu]]
+[[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag raid]] [[!tag ubuntu]] [[!tag luks]]

Add RAID expansion post
diff --git a/posts/manually-expanding-raid1-array-ubuntu.mdwn b/posts/manually-expanding-raid1-array-ubuntu.mdwn
new file mode 100644
index 0000000..8b77ed4
--- /dev/null
+++ b/posts/manually-expanding-raid1-array-ubuntu.mdwn
@@ -0,0 +1,151 @@
+[[!meta title="Manually expanding a RAID1 array on Ubuntu"]]
+[[!meta date="2017-03-31T23:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here are the notes I took while manually expanding a non-LVM encrypted
+RAID1 array on an Ubuntu machine.
+
+My original setup consisted of a 1 TB drive along with a 2 TB drive, which
+meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of
+unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB
+drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3
+TB drive).
+
+# Partition the new drive
+
+In order to partition the new 3 TB drive, I started by creating a
+**temporary partition** on the old 2 TB drive (`/dev/sdc`) to use up all of
+the capacity on that drive:
+
+    $ parted /dev/sdc
+    unit s
+    print
+    mkpart
+    print
+
+Then I initialized the partition table and created the EFI
+partition on the new drive (`/dev/sdd`):
+
+    $ parted /dev/sdd
+    unit s
+    mktable gpt
+    mkpart
+
+Since I wanted the RAID1 array to be as large as the smaller of the two
+drives, I made sure that the second partition (`/home`) on the
+new 3 TB drive had:
+
+- the same **start position** as the second partition on the old drive
+- the **end position** of the third partition (the temporary one I just
+  created) on the old drive
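+
+The existing boundaries can be read off in sectors beforehand (a read-only
+check; the device names here match the ones used above):
+
+    parted -s /dev/sdc unit s print
+    parted -s /dev/sdd unit s print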
+
+I created the partition and gave it the RAID flag:
+
+    mkpart
+    toggle 2 raid
+
+and then deleted the temporary partition on the old 2 TB drive:
+
+    $ parted /dev/sdc
+    print
+    rm 3
+    print
+
+# Create a temporary RAID1 array on the new drive
+
+With the new drive properly partitioned, I created a new RAID array for it:
+
+    mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd2 missing
+
+and added it to `/etc/mdadm/mdadm.conf`:
+
+    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
+
+which required manual editing of that file to remove duplicate entries.
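+
+That manual clean-up can also be sketched with `awk`, keeping only the first
+`ARRAY` line seen for each device name (run it against a copy of the file
+first):
+
+    awk '!/^ARRAY/ || !seen[$2]++' /etc/mdadm/mdadm.conf > /tmp/mdadm.conf
+    mv /tmp/mdadm.conf /etc/mdadm/mdadm.conf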
+
+# Create the encrypted partition
+
+With the new RAID device in place, I created the encrypted LUKS partition:
+
+    cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
+    cryptsetup luksOpen /dev/md10 chome2
+
+I took the UUID for the temporary RAID partition:
+
+    blkid /dev/md10
+
+and put it in `/etc/crypttab` as `chome2`.
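+
+The resulting `/etc/crypttab` line looks something like this (the UUID is a
+made-up placeholder; use the one that `blkid` prints):
+
+    chome2 UUID=f1a2b3c4-d5e6-f7a8-b9c0-d1e2f3a4b5c6 none luks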
+
+Then, I formatted the new LUKS partition and mounted it:
+
+    mkfs.ext4 -m 0 /dev/mapper/chome2
+    mkdir /home2
+    mount /dev/mapper/chome2 /home2
+
+# Copy the data from the old drive
+
+With the home partitions of both drives mounted, I copied the files over to
+the new drive:
+
+    eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/
+
+making use of
+[wrappers that preserve system responsiveness](https://feeding.cloud.geek.nz/posts/three-wrappers-to-run-commands-without-impacting-the-rest-of-the-system/)
+during I/O-intensive operations.
+
+# Switch over to the new drive
+
+After the copy, I switched over to the new drive in a step-by-step way:
+
+1. Changed the UUID of `chome` in `/etc/crypttab`.
+2. Changed the UUID and name of `/dev/md1` in `/etc/mdadm/mdadm.conf`.
+3. Rebooted with both drives.
+4. Checked that the new drive was the one used in the encrypted `/home` mount using: `df -h`.
+
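+Steps 1 and 2 are plain text substitutions; a sketch using `sed` (both UUIDs
+are placeholders for the old and new values reported by `blkid`):
+
+    OLD=11111111-2222-3333-4444-555555555555
+    NEW=66666666-7777-8888-9999-000000000000
+    sed -i "s/$OLD/$NEW/" /etc/crypttab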
+
+# Add the old drive to the new RAID array
+
+With all of this working, it was time to clear the mdadm superblock from the
+old drive:
+
+    mdadm --zero-superblock /dev/sdc2
+
+and then change the second partition of the old drive to make it the same
+size as the one on the new drive:
+
+    $ parted /dev/sdc
+    rm 2
+    mkpart
+    toggle 2 raid
+    print
+
+before adding it to the new array:
+
+    mdadm /dev/md1 -a /dev/sdc2
+
+# Rename the new array
+
+To
+[change the name of the new RAID array](https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array#64356)
+back to what it was on the old drive, I first had to stop both the old and
+the new RAID arrays:
+
+    umount /home
+    cryptsetup luksClose chome
+    mdadm --stop /dev/md10
+    mdadm --stop /dev/md1
+
+before running this command:
+
+    mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2
+
+and updating the name in `/etc/mdadm/mdadm.conf`.
+
+The last step was to regenerate the initramfs:
+
+    update-initramfs -u
+
+before rebooting into something that looks exactly like the original RAID1
+array but with twice the size.
+
+[[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag raid]] [[!tag ubuntu]]