Recent changes to this wiki:

Automatically scale large images
diff --git a/local.css b/local.css
index 068fd89..3f1933d 100644
--- a/local.css
+++ b/local.css
@@ -1,4 +1,9 @@
 /* ikiwiki local style sheet */
+img {
+    max-width: 100%;
+    height: auto;
+}
+
 .blogform, .trail, .inlinefooter .pagelicense, .inlinefooter .tags, .inlinefooter .actions {
     display: none;
 }

Add my restricted passwordless guest account
diff --git a/posts/passwordless-restricted-guest-account-ubuntu.mdwn b/posts/passwordless-restricted-guest-account-ubuntu.mdwn
new file mode 100644
index 0000000..d1bcfe4
--- /dev/null
+++ b/posts/passwordless-restricted-guest-account-ubuntu.mdwn
@@ -0,0 +1,85 @@
+[[!meta title="Passwordless restricted guest account on Ubuntu"]]
+[[!meta date="2019-08-15T20:10:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here's how I created a restricted but *not ephemeral* guest account on an
+Ubuntu 18.04 desktop computer that can be used without a password.
+
+## Create a user that can login without a password
+
+First of all, I created a new user with a random password (using `pwgen -s 64`):
+
+    adduser guest
+
+Then following [these
+instructions](http://ubuntuhandbook.org/index.php/2019/02/enable-passwordless-login-ubuntu-18-04/),
+I created a new group and added the user to it:
+
+    addgroup nopasswdlogin
+    adduser guest nopasswdlogin
+
+In order to let that user login using
+[GDM](https://wiki.gnome.org/Projects/GDM) without a password, I added the
+following to the top of `/etc/pam.d/gdm-password`:
+
+    auth    sufficient      pam_succeed_if.so user ingroup nopasswdlogin
+
+Note that this user is unable to ssh into this machine since it's not part
+of the [`sshuser` group I have set up in my sshd
+configuration](https://feeding.cloud.geek.nz/posts/hardening-ssh-servers/#Whitelist_approach_to_giving_users_ssh_access).
+
+## Privacy settings
+
+In order to reduce the amount of digital traces left between guest sessions,
+I logged into the account using a GNOME session and then opened
+gnome-control-center. I set the following in the privacy section:
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/privacy-settings.png)
+
+Then I replaced Firefox with [Brave](https://brave.com) in the sidebar,
+set it as the default browser in gnome-control-center:
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/default-applications.png)
+
+and configured it to clear everything on exit:
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/brave-clear-on-exit.png)
+
+## Create a password-less system keyring
+
+In order to suppress [prompts to unlock
+gnome-keyring](https://askubuntu.com/questions/867/how-can-i-stop-being-prompted-to-unlock-the-default-keyring-on-boot),
+I opened [seahorse](https://wiki.gnome.org/Apps/Seahorse) and deleted the
+default keyring.
+
+Then I started Brave, which prompted me to create a new keyring so that it
+can save the contents of its password manager securely. I set an **empty
+password** on that new keyring, since I'm not going to be using it.
+
+I also made sure to disable saving of passwords, payment methods and
+addresses in the browser.
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/brave-passwords.png)
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/brave-payments.png)
+
+![](/posts/passwordless-restricted-guest-account-ubuntu/brave-addresses.png)
+
+## Restrict user account further
+
+Finally, taking an idea from this [similar
+solution](https://askubuntu.com/a/19696/8368), I prevented the user from
+making any system-wide changes by putting the following in
+`/etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla`:
+
+    [guest-policy]
+    Identity=unix-user:guest
+    Action=*
+    ResultAny=no
+    ResultInactive=no
+    ResultActive=no
+
+If you know of any other restrictions that could be added, please leave a
+comment!
+
+[[!tag ubuntu]] [[!tag debian]] [[!tag nzoss]] [[!tag brave]]
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/brave-addresses.png b/posts/passwordless-restricted-guest-account-ubuntu/brave-addresses.png
new file mode 100644
index 0000000..6911611
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/brave-addresses.png differ
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/brave-clear-on-exit.png b/posts/passwordless-restricted-guest-account-ubuntu/brave-clear-on-exit.png
new file mode 100644
index 0000000..e60c47a
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/brave-clear-on-exit.png differ
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/brave-passwords.png b/posts/passwordless-restricted-guest-account-ubuntu/brave-passwords.png
new file mode 100644
index 0000000..e8795b3
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/brave-passwords.png differ
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/brave-payments.png b/posts/passwordless-restricted-guest-account-ubuntu/brave-payments.png
new file mode 100644
index 0000000..eb2cdab
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/brave-payments.png differ
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/default-applications.png b/posts/passwordless-restricted-guest-account-ubuntu/default-applications.png
new file mode 100644
index 0000000..72e5844
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/default-applications.png differ
diff --git a/posts/passwordless-restricted-guest-account-ubuntu/privacy-settings.png b/posts/passwordless-restricted-guest-account-ubuntu/privacy-settings.png
new file mode 100644
index 0000000..a749079
Binary files /dev/null and b/posts/passwordless-restricted-guest-account-ubuntu/privacy-settings.png differ
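
As a quick recap, the account-creation commands from the guest-account post above can be reviewed as a dry-run plan before touching a real system. The `plan` helper below is my own sketch, not part of the post; drop it and run the commands as root to apply them for real:

```shell
# Dry-run of the account steps from the guest-account post above.
# Each command is echoed instead of executed; run them as root for real.
plan() { echo "# would run: $*"; }

plan adduser guest                 # password set interactively (pwgen -s 64)
plan addgroup nopasswdlogin
plan adduser guest nopasswdlogin   # group matched by the pam_succeed_if rule
```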

Prune old stale files
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index d3398a5..423f702 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -45,6 +45,8 @@ and here are the contents of that script:
     git fetch --prune origin
     git pull
     npm install
+    rm -rf src/brave/*
+    gclient sync -D
     npm run init
     
     echo $(date)

Bump the sccache cache size to match ccache
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index 0936c99..d3398a5 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -72,7 +72,7 @@ It references a `~/.bashrc-brave` file which contains:
     #!/bin/sh
     export PATH="${PATH}:${HOME}/bin:${HOME}/devel/brave-browser/vendor/depot_tools:${HOME}/.cargo/bin"
     export SCCACHE_DIR="${HOME}/.cache/sccache"
-    export SCCACHE_CACHE_SIZE=100G
+    export SCCACHE_CACHE_SIZE=200G
     export NO_AUTH_BOTO_CONFIG="${HOME}/.boto"
 
 ## ccache instead of sccache

creating tag page tags/ccache
diff --git a/tags/ccache.mdwn b/tags/ccache.mdwn
new file mode 100644
index 0000000..bf2591a
--- /dev/null
+++ b/tags/ccache.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ccache"]]
+
+[[!inline pages="tagged(ccache)" actions="no" archive="yes"
+feedshow=10]]

Switch to ccache
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index b4b8a8e..0936c99 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -75,4 +75,27 @@ It references a `~/.bashrc-brave` file which contains:
     export SCCACHE_CACHE_SIZE=100G
     export NO_AUTH_BOTO_CONFIG="${HOME}/.boto"
 
-[[!tag brave]] [[!tag sccache]]
+## ccache instead of sccache
+
+While I started using sccache for compiling Brave, I recently switched to
+[ccache](https://ccache.dev) as sccache turned out to be fairly
+[unreliable](https://github.com/brave/brave-browser/wiki/sccache-for-faster-builds#troubleshooting-the-install)
+at compiling Chromium.
+
+Switching to `ccache` is easy: simply install the package:
+
+    apt install ccache
+
+and then set the [environment
+variable](https://github.com/brave/brave-browser/wiki/sccache-for-faster-builds#setting-the-environment-variable)
+in `.npmrc`:
+
+    sccache = ccache
+
+Finally, you'll probably want to increase the maximum cache size:
+
+    ccache --max-size=200G
+
+in order to fit all of Chromium/Brave in the cache.
+
+[[!tag brave]] [[!tag sccache]] [[!tag ccache]]
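
Before raising the limit to 200G it's worth checking free disk space. This small helper (my own sketch, not a ccache feature) converts a suffixed size into bytes for comparison with `df -B1` output:

```shell
# Convert a ccache-style suffixed size (e.g. 200G) into bytes so it can
# be compared against the available space reported by 'df -B1'.
to_bytes() {
    case "$1" in
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *)  echo "$1" ;;
    esac
}

to_bytes 200G   # 214748364800 bytes, i.e. 200 GiB
```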

Fix two more possible update/build errors
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index 4a11ece..b4b8a8e 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -38,9 +38,11 @@ and here are the contents of that script:
     rm -rf src/out node_modules src/brave/node_modules
     git clean -f -d
     git checkout HEAD package-lock.json
+    find -name "*.pyc" -delete
     
     echo $(date)
     echo "=> Update repo"
+    git fetch --prune origin
     git pull
     npm install
     npm run init

creating tag page tags/gnubee
diff --git a/tags/gnubee.mdwn b/tags/gnubee.mdwn
new file mode 100644
index 0000000..a60ebdf
--- /dev/null
+++ b/tags/gnubee.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged gnubee"]]
+
+[[!inline pages="tagged(gnubee)" actions="no" archive="yes"
+feedshow=10]]

Add GnuBee Debian installation guide
diff --git a/posts/installing-debian-buster-on-gnubee2.mdwn b/posts/installing-debian-buster-on-gnubee2.mdwn
new file mode 100644
index 0000000..3e80d85
--- /dev/null
+++ b/posts/installing-debian-buster-on-gnubee2.mdwn
@@ -0,0 +1,192 @@
+[[!meta title="Installing Debian buster on a GnuBee PC 2"]]
+[[!meta date="2019-07-14T15:30:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here is how I installed [Debian 10 /
+buster](https://www.debian.org/releases/buster/) on my [GnuBee Personal
+Cloud 2](http://gnubee.org/), a free hardware device designed as a network
+file server / [NAS](https://en.wikipedia.org/wiki/Network-attached_storage).
+
+## Flashing the LibreCMC firmware with Debian support
+
+Before we can install Debian, we need a firmware that includes all of the
+necessary tools.
+
+On another machine, do the following:
+
+1. Download the [latest `librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin`](https://github.com/gnubee-git/gnubee-git.github.io/tree/master/debian).
+2. Mount a **vfat**-formatted USB stick.
+3. Copy the file onto it and rename it to `gnubee.bin`.
+4. Unmount the USB stick.
+
+Then plug a network cable between your laptop and the **black network port**,
+plug the USB stick into the GnuBee, and reboot the GnuBee via ssh:
+
+    ssh 192.168.10.0
+    reboot
+
+If you have a [USB serial
+cable](https://github.com/gnubee-git/GnuBee_Docs/blob/master/USB_to_UART/README.md),
+you can use it to monitor the flashing process:
+
+    screen /dev/ttyUSB0 57600
+
+otherwise keep an eye on the [LEDs and wait until they are fully done
+flashing](https://github.com/gnubee-git/GnuBee_Docs/wiki/Install-firmware#via-usb-stick).
+
+## Getting ssh access to LibreCMC
+
+Once the firmware has been updated, turn off the GnuBee manually using the
+power switch and turn it back on.
+
+Now enable SSH access via the built-in [LibreCMC](https://librecmc.org)
+firmware:
+
+1. Plug a network cable between your laptop and the **black network port**.
+2. Open the web-based admin panel at <http://192.168.10.0>.
+3. Go to *System | Administration*.
+4. Set a root password.
+5. Disable ssh password auth and root password logins.
+6. Paste in your **RSA** ssh public key.
+7. Click *Save & Apply*.
+8. Go to *Network | Firewall*.
+9. Select "accept" for WAN Input.
+10. Click *Save & Apply*.
+
+Finally, go to *Network | Interfaces* and note the IPv4 address of the WAN
+port, since it will be needed in the next step.
+
+## Installing Debian
+
+The first step is to [install Debian
+jessie](https://github.com/gnubee-git/GnuBee_Docs/wiki/Debian) on the
+GnuBee.
+
+Connect the **blue network port** into your router/switch and ssh into the
+GnuBee using the IP address you noted earlier:
+
+    ssh root@192.168.1.xxx
+
+and the root password you set in the previous section.
+
+Then use `fdisk /dev/sda` to create the following partition layout on the
+first drive:
+
+    Device       Start       End   Sectors   Size Type
+    /dev/sda1     2048   8390655   8388608     4G Linux swap
+    /dev/sda2  8390656 234441614 226050959 107.8G Linux filesystem
+
+Note that I used a 120GB solid-state drive as the system drive in order to
+minimize noise levels.
+
+Then format the swap partition:
+
+    mkswap /dev/sda1
+
+and download the latest version of the jessie installer:
+
+    wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install
+
+(Yes, the `--no-check-certificate` is really unfortunate. Please leave a
+comment if you find a way to work around it.)
+
+The stock installer fails to bring up the correct networking configuration
+on my network and so I have [modified the
+install script](https://github.com/gnubee-git/GnuBee_Docs/pull/102) by changing
+the `eth0.1` blurb to:
+
+    auto eth0.1
+    iface eth0.1 inet static
+        address 192.168.10.1
+        netmask 255.255.255.0
+
+Then you should be able to run the installer successfully:
+
+    sh ./debian-jessie-install
+
+and reboot:
+
+    reboot
+
+## Restore ssh access in Debian jessie
+
+Once the GnuBee has finished booting, login using the [serial console](https://github.com/gnubee-git/GnuBee_Docs/blob/master/USB_to_UART/README.md):
+
+- username: `root`
+- password: `GnuBee`
+
+and change the root password using `passwd`.
+
+Look for the IPv4 address of `eth0.2` in the output of the `ip addr` command
+and then ssh into the GnuBee from your desktop computer:
+
+    ssh root@192.168.1.xxx  # type password set above
+    mkdir .ssh
+    vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey
+
+## Finish the jessie installation
+
+With this in place, you should be able to ssh into the GnuBee using your
+public key:
+
+    ssh root@192.168.1.172
+
+and then finish the jessie installation:
+
+    wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
+    bash ./debian-modules-install
+    reboot
+
+After rebooting, I made a few tweaks to make the system more pleasant to
+use:
+
+    update-alternatives --config editor  # choose vim.basic
+    dpkg-reconfigure locales  # enable the locale that your desktop is using
+
+## Upgrade to stretch and then buster
+
+To upgrade to stretch, put this in `/etc/apt/sources.list`:
+
+    deb http://httpredir.debian.org/debian stretch main
+    deb http://httpredir.debian.org/debian stretch-updates main
+    deb http://security.debian.org/ stretch/updates main
+
+Then upgrade the packages:
+
+    apt update
+    apt full-upgrade
+    apt autoremove
+    reboot
+
+To upgrade to buster, put this in `/etc/apt/sources.list`:
+
+    deb http://httpredir.debian.org/debian buster main
+    deb http://httpredir.debian.org/debian buster-updates main
+    deb http://security.debian.org/debian-security buster/updates main
+
+and upgrade the packages:
+
+    apt update
+    apt full-upgrade
+    apt autoremove
+    reboot
+
+## Next steps
+
+At this point, my GnuBee is running the latest version of Debian stable;
+however, there are two remaining issues to fix:
+
+1. [openssh-server doesn't
+   work](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932089) and I am
+   forced to access the GnuBee via the serial interface.
+
+2. The firmware is running an outdated version of the Linux kernel though
+   this is [being worked
+   on](https://groups.google.com/d/topic/gnubee/YVM08lfWUUc/discussion) by
+   community members.
+
+I hope to resolve these issues soon, and will update this blog post once I
+do, but you are more than welcome to leave a comment if you know of a
+solution I may have overlooked.
+
+[[!tag debian]] [[!tag gnubee]] [[!tag nzoss]]
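
The partition sizes quoted in the GnuBee post above can be sanity-checked with shell arithmetic, assuming the usual 512-byte sectors:

```shell
# Sanity-check the fdisk layout quoted in the post (512-byte sectors).
swap_gib=$(( 8388608 * 512 / 1024 / 1024 / 1024 ))
root_gib=$(( 226050959 * 512 / 1024 / 1024 / 1024 ))
echo "swap: ${swap_gib} GiB"    # 4 GiB, matching the fdisk output
echo "root: ~${root_gib} GiB"   # ~107 GiB, i.e. the 107.8G fdisk shows
```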

Enable apt sandboxing which is now available in buster
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 07d944d..79ca04d 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -238,6 +238,10 @@ and the following to harden the TCP stack:
 
 before reloading these settings using `sysctl -p`.
 
+Sandboxing in apt can be enabled by putting the following in `/etc/apt/apt.conf.d/30-seccomp`:
+
+    APT::Sandbox::Seccomp "true";
+
 I also restrict the use of cron to the `root` user by putting the following in `/etc/cron.allow`:
 
     root

Add the replacement for mcelog
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index c8627d8..07d944d 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -9,7 +9,7 @@ how I customize recent releases of Debian on those servers.
 
 # Hardware tests
 
-    apt install memtest86+ smartmontools e2fsprogs
+    apt install memtest86+ smartmontools e2fsprogs rasdaemon
 
 Prior to spending any time configuring a new physical server, I like to
 ensure that the hardware is fine.

Include the contents of ~/.bashrc-brave
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index 77bb125..4a11ece 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -65,4 +65,12 @@ and here are the contents of that script:
     echo "=> Delete build output"
     rm -rf src/out
 
+It references a `~/.bashrc-brave` file which contains:
+
+    #!/bin/sh
+    export PATH="${PATH}:${HOME}/bin:${HOME}/devel/brave-browser/vendor/depot_tools:${HOME}/.cargo/bin"
+    export SCCACHE_DIR="${HOME}/.cache/sccache"
+    export SCCACHE_CACHE_SIZE=100G
+    export NO_AUTH_BOTO_CONFIG="${HOME}/.boto"
+
 [[!tag brave]] [[!tag sccache]]

Add SIP TLS/SRTP post
diff --git a/posts/sip-encryption-on-voip-ms.mdwn b/posts/sip-encryption-on-voip-ms.mdwn
new file mode 100644
index 0000000..2cb3209
--- /dev/null
+++ b/posts/sip-encryption-on-voip-ms.mdwn
@@ -0,0 +1,73 @@
+[[!meta title="SIP Encryption on VoIP.ms"]]
+[[!meta date="2019-07-06T16:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+My [VoIP provider](https://voip.ms) recently added [support for
+TLS/SRTP-based call
+encryption](https://wiki.voip.ms/article/Call_Encryption_-_TLS/SRTP). Here's
+what I did to enable this feature on my
+[Asterisk](https://www.asterisk.org/) server.
+
+First of all, I changed the registration line in `/etc/asterisk/sip.conf` to
+use the "tls" scheme:
+
+    [general]
+    register => tls://mydid:mypassword@servername.voip.ms
+
+then I enabled incoming TCP connections:
+
+    tcpenable=yes
+
+and TLS:
+
+    tlsenable=yes
+    tlscapath=/etc/ssl/certs/
+
+Finally, I changed my provider entry in the same file to:
+
+    [voipms]
+    type=friend
+    host=servername.voip.ms
+    secret=mypassword
+    username=mydid
+    context=from-voipms
+    allow=ulaw
+    allow=g729
+    insecure=port,invite
+    transport=tls
+    encryption=yes
+
+(Note the last two lines.)
+
+The dialplan didn't change and so I still have the following in
+`/etc/asterisk/extensions.conf`:
+
+    [pstn-voipms]
+    exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
+    exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
+    exten => _1NXXNXXXXXX,n,Hangup()
+    exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
+    exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
+    exten => _NXXNXXXXXX,n,Hangup()
+    exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
+    exten => _011X.,n,Authenticate(1234) ; require password for international calls
+    exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
+    exten => _011X.,n,Hangup(16)
+
+## Server certificate
+
+The only thing I still need to fix is to make this error message go away in
+my logs:
+
+    asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>
+
+It appears to be related to the fact that I didn't set `tlscertfile` in
+`/etc/asterisk/sip.conf` and that it's using its default value of
+`asterisk.pem`, a non-existent file.
+
+Since my Asterisk server is only acting as a TLS *client*, and not a TLS
+*server*, there's probably no harm in not having a certificate. That said,
+it looks pretty easy to [use a Let's Encrypt cert with
+Asterisk](https://community.asterisk.org/t/has-anyone-used-letsencrypt-to-setup-ssl-for-asterisk/67145/6).
+
+[[!tag debian]] [[!tag asterisk]] [[!tag nzoss]] [[!tag letsencrypt]]
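
The post notes the missing-certificate error is harmless for a TLS client. If you would still rather silence it, one option (my suggestion, not from the post) is to give Asterisk a throwaway self-signed key+cert bundle and point `tlscertfile` at it:

```shell
# Generate a throwaway key and self-signed cert, then combine them into
# a single bundle; as a TLS *client*, Asterisk never presents this cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout /tmp/asterisk.key -out /tmp/asterisk.crt
cat /tmp/asterisk.key /tmp/asterisk.crt > /tmp/asterisk.pem
chmod 600 /tmp/asterisk.pem
```

After moving the bundle somewhere permanent (e.g. `/etc/asterisk/`), set `tlscertfile=/etc/asterisk/asterisk.pem` in `sip.conf`.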

Add distro-info to get EOL info
https://askubuntu.com/a/1126933
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 8a9cf7a..c8627d8 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -261,7 +261,7 @@ The above packages are all about catching mistakes (such as
 
 # Package updates
 
-    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges reboot-notifier popularity-contest needrestart debian-security-support
+    apt install apticron unattended-upgrades deborphan debfoster apt-listchanges reboot-notifier popularity-contest needrestart debian-security-support distro-info
 
 These tools help me keep packages up to date and remove unnecessary or
 obsolete packages from servers. On Rackspace servers, a small [configuration

Use the Cloudflare NTP server
https://blog.cloudflare.com/secure-time/
diff --git a/posts/time-synchronization-with-ntp-and-systemd.mdwn b/posts/time-synchronization-with-ntp-and-systemd.mdwn
index 3f54067..f548d26 100644
--- a/posts/time-synchronization-with-ntp-and-systemd.mdwn
+++ b/posts/time-synchronization-with-ntp-and-systemd.mdwn
@@ -60,7 +60,7 @@ you and put it in `/etc/systemd/timesyncd.conf`. For example, mine reads
 like this:
 
     [Time]
-    NTP=ca.pool.ntp.org
+    NTP=time.cloudflare.com
 
 before restarting the daemon:
 

Add LXC post for OpenSUSE 15 on Ubuntu 18.04
diff --git a/posts/opensuse15-lxc-setup-on-ubuntu-bionic.mdwn b/posts/opensuse15-lxc-setup-on-ubuntu-bionic.mdwn
new file mode 100644
index 0000000..9e45f32
--- /dev/null
+++ b/posts/opensuse15-lxc-setup-on-ubuntu-bionic.mdwn
@@ -0,0 +1,101 @@
+[[!meta title="OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04"]]
+[[!meta date="2019-06-14T20:15:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Similarly to what I wrote for [Fedora](https://feeding.cloud.geek.nz/posts/fedora29-lxc-setup-on-ubuntu-bionic/),
+here is how I was able to create an [OpenSUSE](https://www.opensuse.org) 15 LXC
+container on an Ubuntu 18.04 (bionic) laptop.
+
+# Setting up LXC on Ubuntu
+
+First of all, install lxc:
+
+    apt install lxc
+    echo "veth" >> /etc/modules
+    modprobe veth
+
+turn on bridged networking by putting the following in
+`/etc/sysctl.d/local.conf`:
+
+    net.ipv4.ip_forward=1
+
+and applying it using:
+
+    sysctl -p /etc/sysctl.d/local.conf
+
+Then allow the right traffic in your firewall
+(`/etc/network/iptables.up.rules` in my case):
+
+    # LXC containers
+    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
+    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
+
+and apply these changes:
+
+    iptables-apply
+
+before restarting the lxc networking:
+
+    systemctl restart lxc-net.service
+
+# Creating the container
+
+Once that's in place, you can finally create the OpenSUSE 15 container:
+
+    lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64
+
+To see a list of all distros available with the `download` template:
+
+    lxc-create -n foo --template=download -- --list
+
+# Logging in as root
+
+Start up the container and get a login console:
+
+    lxc-start -n opensuse15 -F
+
+In another terminal, set a password for the root user:
+
+    lxc-attach -n opensuse15 passwd
+
+You can now use this password to log into the console you started earlier.
+
+# Logging in as an unprivileged user via ssh
+
+As root, install a few packages:
+
+    zypper install vim openssh sudo man
+    systemctl start sshd
+    systemctl enable sshd
+
+and then create an unprivileged user:
+
+    useradd francois
+    passwd francois
+    cd /home
+    mkdir francois
+    chown francois:100 francois/
+
+and give that user [sudo access](https://en.opensuse.org/SDB:Administer_with_sudo):
+
+    visudo  # uncomment "wheel" line
+    groupadd wheel
+    usermod -aG wheel francois
+
+Now login as that user from the console and add an ssh public key:
+
+    mkdir .ssh
+    chmod 700 .ssh
+    echo "<your public key>" > .ssh/authorized_keys
+    chmod 644 .ssh/authorized_keys
+
+You can now login via ssh. The IP address to use can be seen in the output
+of:
+
+    lxc-ls --fancy
+
+[[!tag debian]] [[!tag lxc]] [[!tag nzoss]] [[!tag ubuntu]]
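
The last step above can be scripted. The helper below is hypothetical (the awk column index assumes the IPV4 column is fifth in `lxc-ls --fancy` output, which varies between lxc versions) and is exercised here against a captured sample rather than a live container:

```shell
# Pull a container's IPv4 address out of 'lxc-ls --fancy'-style output.
# The column position is an assumption -- check your lxc version's output.
container_ip() {
    awk -v name="$1" '$1 == name { print $5 }'
}

# Exercised against a captured sample line, not a live container:
sample='opensuse15 RUNNING 0 - 10.0.3.134 -'
printf '%s\n' "$sample" | container_ip opensuse15
```

With a real container you would pipe `lxc-ls --fancy` into `container_ip opensuse15` instead.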

Comment moderation
diff --git a/posts/installing-vidyo-on-ubuntu-1804/comment_3_ea0a0f985040e8679fed6f98209126f1._comment b/posts/installing-vidyo-on-ubuntu-1804/comment_3_ea0a0f985040e8679fed6f98209126f1._comment
new file mode 100644
index 0000000..3482398
--- /dev/null
+++ b/posts/installing-vidyo-on-ubuntu-1804/comment_3_ea0a0f985040e8679fed6f98209126f1._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="2001:610:120:3000::192:161"
+ claimedauthor="Frans Schreuder"
+ subject="Topicons plus"
+ date="2019-06-14T06:33:04Z"
+ content="""
+The instructions work, but one important thing is missing. On Ubuntu with GNOME (both 18.04 and 19.04) you will need the GNOME extension TopIcons Plus installed in order to launch VidyoDesktop; otherwise the application will fail to start (at least for me).
+"""]]

Fix typo
diff --git a/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment b/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment
index c735c96..6d9f427 100644
--- a/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment
+++ b/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment
@@ -4,5 +4,5 @@
  subject="Re: comment 1"
  date="2018-11-08T06:32:12Z"
  content="""
-I'm not sure why you're saying that it's sloppy for a system-wide binary to be owned by root. That's both [the policy in Debian](https://www.debian.org/doc/debian-policy/ch-files.html#permissions-and-owners) and also it prevents an ordinary user from tampering a binary that could be used by other users.
+I'm not sure why you're saying that it's sloppy for a system-wide binary to be owned by root. That's both [the policy in Debian](https://www.debian.org/doc/debian-policy/ch-files.html#permissions-and-owners) and also it prevents an ordinary user from tampering with a binary that could be used by other users.
 """]]

Disable modelines in vim
https://nvd.nist.gov/vuln/detail/CVE-2019-12735
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 9f5bad3..8a9cf7a 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -64,6 +64,7 @@ following to `/etc/vim/vimrc.local`:
     syntax on
     set background=dark
     set visualbell
+    set nomodeline
 
 # ssh
 

More thorough cleaning of old builds / checkouts
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index aef0e28..77bb125 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -34,16 +34,18 @@ and here are the contents of that script:
     echo
     
     echo $(date)
+    echo "=> Clean up repo and delete old build output"
+    rm -rf src/out node_modules src/brave/node_modules
+    git clean -f -d
+    git checkout HEAD package-lock.json
+    
+    echo $(date)
     echo "=> Update repo"
     git pull
     npm install
     npm run init
     
     echo $(date)
-    echo "=> Delete any old build output"
-    rm -rf src/out
-    
-    echo $(date)
     echo "=> Debug build"
     killall sccache || true
     ionice nice timeout 4h npm run build || ionice nice timeout 4h npm run build
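
The `… || …` construct on the build line above reruns the 4-hour build once if the first attempt fails. The same pattern as a reusable helper (a sketch, not part of the post's script):

```shell
# Run a command, retrying a single time on failure -- mirrors the
# '<cmd> || <cmd>' pattern used in the build script above.
retry_once() {
    "$@" || "$@"
}

retry_once echo "build step"
```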

Comment moderation
diff --git a/posts/setting-up-a-network-scanner-using-sane/comment_10_10aacafabba32f9596e18f4163fe9fd9._comment b/posts/setting-up-a-network-scanner-using-sane/comment_10_10aacafabba32f9596e18f4163fe9fd9._comment
new file mode 100644
index 0000000..49b0938
--- /dev/null
+++ b/posts/setting-up-a-network-scanner-using-sane/comment_10_10aacafabba32f9596e18f4163fe9fd9._comment
@@ -0,0 +1,141 @@
+[[!comment format=mdwn
+ ip="88.207.218.2"
+ claimedauthor="Anonymous Coward"
+ subject="Revised configuration necessary for saned under systemd"
+ date="2019-05-29T21:11:00Z"
+ content="""
+After encountering numerous problems setting up saned network access with systemd on Linux Mint 18.3, here are some important points:
+
+1) Make sure you do not have saned configured to run under inetd or xinetd, as might be the case from an upgraded installation.
+
+2) systemd needs a socket file and an instance service file.  To avoid losing customizations with package upgrades, put these in /etc/systemd/system rather than overwriting the ones in /lib/systemd/system.
+
+    #*****************************************************************************#
+    #|
+    #|  file : /etc/systemd/system/saned.socket
+    #|
+    #*---------------------------------------------------------------------------*#
+
+    [Unit]
+    Description=SANED network daemon activation socket
+
+    [Socket]
+    Accept=yes
+    ListenStream=6566
+    MaxConnections=1
+
+    [Install]
+    WantedBy=sockets.target
+
+    #*****************************************************************************#
+
+The second item needed is an instance service file, NOT a plain service file.  This means that the file name contains an \"@\", as in saned@.service, with the following contents:
+
+    #*****************************************************************************#
+    #|
+    #|  file : /etc/systemd/system/saned@.service
+    #|
+    #*---------------------------------------------------------------------------*#
+
+    [Unit]
+    Description=SANE network daemon instance %i
+    Documentation=man:saned(8)
+    After=local-fs.target network-online.target
+    Requires=saned.socket
+
+    [Service]
+    Environment=SANE_CONFIG_DIR=/etc/sane.d
+    ExecStart=/usr/sbin/saned
+    Group=saned
+    User=saned
+    StandardInput=null
+    StandardOutput=syslog
+    StandardError=syslog
+
+    [Install]
+    Also=saned.socket
+
+    #*****************************************************************************#
+
+
+If you want to do debugging, you can add additional Environment lines:
+
+    Environment=SANE_DEBUG_DLL=255
+    Environment=SANE_DEBUG_NET=255
+
+For completeness, to mirror the setup in /lib/systemd/system and to make it clear when looking in /etc/systemd/system that the file is not missing and that saned@.service is not misnamed, symbolically link /etc/systemd/system/saned.service to /dev/null:
+
+     /etc/systemd/system/saned.service -> /dev/null
+
+This is unnecessary because of the link present in /lib/systemd/system, but it makes it clear, when looking at the /etc/systemd/system directory, which configuration is being used.
+
+Do a systemctl enable of saned.socket and it will create a symbolic link under the sockets.target.wants directory.
+
+/etc/services should have the entry
+
+     sane-port		 6566/tcp	sane saned
+    
+Ensure that /proc/sys/net/ipv6/bindv6only is set to 0 (and not set to 1 by some hack from an old bindipv6only.conf file in /etc/sysctl.d) if you want network connections over IPv4 to work.
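+
+A quick way to check the current value, and to reset it for the running
+system if needed:
+
+    sysctl net.ipv6.bindv6only
+    sysctl -w net.ipv6.bindv6only=0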
+
+It may be necessary to ensure that the \"net\" backend is enabled in the /etc/sane.d/dll.conf configuration file (or in a drop-in under /etc/sane.d/dll.d/) if your scanner has some non-standard configuration.
+
+In your /etc/sane.d/saned.conf, ensure you have a \"localhost\" entry -- 127.0.0.1 should also work, but saned reports checking for localhost when it starts up, so using the name keeps things consistent.
+Then add the host names or network IP address ranges permitted to access the service.
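+
+For example, a minimal /etc/sane.d/saned.conf could look like this (the
+192.168.11.0/24 range is only an illustration):
+
+    localhost
+    192.168.11.0/24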
+
+Adjust your firewall rules if necessary.
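+
+With ufw, for instance, something like the following should work (again,
+adjust the source range for your network):
+
+    ufw allow from 192.168.11.0/24 to any port 6566 proto tcp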
+
+You do not need to add the host IP address to the /etc/sane.d/net.conf file, this will result in you being offered both a local and a network connection to the scanner from the host, so keep it simple for the server host, but do add the server host IP address or name to the net.conf file on the client hosts which need to access the service.
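+
+On a client host, after adding the server to /etc/sane.d/net.conf, the remote
+scanner should show up in the device list (server_name is a placeholder):
+
+    echo "server_name" >> /etc/sane.d/net.conf
+    scanimage -L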
+
+If you then do a systemctl start saned.socket followed by a systemctl status saned.socket you should see
+
+     saned.socket - SANED network daemon activation socket
+       Loaded: loaded (/etc/systemd/system/saned.socket; enabled; vendor preset: enabled)
+       Active: active (listening) since Wed 2019-05-29 20:12:40 BST; 30min ago
+       Listen: [::]:6566 (Stream)
+     Accepted: 21; Connected: 0
+
+Now here is where the \"magic\" comes in: when a connection is made on the socket, systemd fires up an **instance** of the saned service using the socket name as the instance name
+(which is why the standard input is set to \"null\" and NOT \"socket\" in the saned@.service file).
+
+So when a remote connection is made, if you run
+
+    systemctl --all -l --no-pager | grep saned
+
+you should see
+
+      saned@21-192.168.21.12:6566-192.168.11.12:49314.service                                    loaded    active   running   SANE network daemon instance 3 (192.168.11.12:49314)
+      system-saned.slice                                                                        loaded    active   active    system-saned.slice
+      saned.socket                                                                              loaded    active   listening SANED network daemon activation socket
+
+where the instance number increases by one for each connection.
+
+If you want to advertise the service via Avahi, you could add a service file under /etc/avahi/services
+
+    <?xml version=\"1.0\" standalone='no'?>
+    <!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">
+
+    <!-- #********************************************************************# -->
+    <!-- #|                                                                  |# -->
+    <!-- #|  file : /etc/avahi/services/saned.service                        |# -->
+    <!-- #|                                                                  |# -->
+    <!-- #|__________________________________________________________________|# -->
+
+    <service-group>
+
+        <name replace-wildcards=\"yes\">%h.example.COM Network Scanning</name>
+
+        <service>
+            <domain-name>local</domain-name>
+            <host-name>server_name.local</host-name>
+            <port>6566</port>
+            <type>_scanner._tcp</type>
+        </service>
+
+    </service-group>
+
+    <!-- #********************************************************************# -->
+
+
+replacing example.COM and server_name as appropriate.  The value for \"type\" was taken from
+
+     http://www.dns-sd.org/ServiceTypes.html
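+
+After restarting avahi-daemon, you can check that the service is being
+advertised with:
+
+    avahi-browse -rt _scanner._tcp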
+
+"""]]

Simplify sha256sum instructions
https://linuxmint.com/verify.php
diff --git a/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
index b07b47d..9b983b5 100644
--- a/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
+++ b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
@@ -36,10 +36,8 @@ signature](https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu):
 
 3. Verify the hash of the ISO file:
 
-        $ sha256sum ubuntu-18.04.2-server-amd64.iso 
-        a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5  ubuntu-18.04.2-server-amd64.iso
-        $ grep ubuntu-18.04.2-server-amd64.iso SHA256SUMS
-        a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 *ubuntu-18.04.2-server-amd64.iso
+        $ sha256sum --ignore-missing -c SHA256SUMS
+        ubuntu-18.04.2-server-amd64.iso: OK
 
 Then copy it to a USB drive:
 

Add extra heading
diff --git a/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
index a5acfd1..b07b47d 100644
--- a/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
+++ b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
@@ -47,6 +47,8 @@ Then copy it to a USB drive:
 
 and boot with it.
 
+## Manual partitioning
+
 Inside the installer, use manual partitioning to:
 
 1. Configure the physical partitions.

Comment moderation
diff --git a/posts/installing-ubuntu-bionic-on-encrypted-raid1/comment_1_d178526dcf96e252ab0196c59c93d1b0._comment b/posts/installing-ubuntu-bionic-on-encrypted-raid1/comment_1_d178526dcf96e252ab0196c59c93d1b0._comment
new file mode 100644
index 0000000..4193372
--- /dev/null
+++ b/posts/installing-ubuntu-bionic-on-encrypted-raid1/comment_1_d178526dcf96e252ab0196c59c93d1b0._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="82.141.154.4"
+ claimedauthor="random from planet debian"
+ subject="LVM"
+ date="2019-05-24T09:43:23Z"
+ content="""
+If you set up [MD-raid> LUKS > LVM > {pool/swap, pool/root etc}] stack then you get one PW prompt and suspend to disk still works...
+
+br: a random guy
+"""]]
diff --git a/posts/mercurial-commit-series-phabricator-using-arcanist/comment_2_4b6c0f885e5dea08a3d6d4cf4e2793a9._comment b/posts/mercurial-commit-series-phabricator-using-arcanist/comment_2_4b6c0f885e5dea08a3d6d4cf4e2793a9._comment
new file mode 100644
index 0000000..7c9bf9b
--- /dev/null
+++ b/posts/mercurial-commit-series-phabricator-using-arcanist/comment_2_4b6c0f885e5dea08a3d6d4cf4e2793a9._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ ip="83.56.36.123"
+ claimedauthor="leplatrem"
+ subject="How to manage updates after review?"
+ date="2019-05-08T10:00:16Z"
+ content="""
+After everything is submitted, I made some changes to my commits.
+
+When I run `arc diff` for a particular commit, the process is really confusing. For example, with `arc diff --update DXXX`, I get prompted with the whole list of revisions... 
+
+
+"""]]

Expand the dunst comments and turn them into a "notifications" section
diff --git a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
index 4dc4b8e..e3a00d7 100644
--- a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
+++ b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
@@ -20,11 +20,24 @@ Because of [a bug in gnome-settings-daemon](https://ask.fedoraproject.org/en/que
 
     dconf write /org/gnome/settings-daemon/plugins/cursor/active false
 
+# Notifications
+
 While my startup script doesn't run this tool directly, installing the
 [dunst package](https://packages.debian.org/stable/dunst) is required to receive desktop notifications:
 
     apt install dunst
 
+You will probably also want to set the following in `/etc/xdg/dunst/dunstrc` to ensure that notifications use your default web browser:
+
+    browser = /usr/bin/sensible-browser
+
+Here are the keyboard shortcuts you'll need to interact with the notifications that popup:
+
+- `Ctrl-Space` to close the current notification
+- `Ctrl-Shift-Space` to close all notifications
+- `Ctrl-\`` to show the last notification
+- `Ctrl-Shift-.` to show the context menu for the current notification
+
 # Screensaver
 
 In addition, gnome-screensaver didn't automatically lock my screen, so I installed [xautolock](https://packages.debian.org/stable/xautolock) and added it to my startup script:

Add re-try logic and compile tests too
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index ea742c1..aef0e28 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -45,12 +45,18 @@ and here are the contents of that script:
     
     echo $(date)
     echo "=> Debug build"
-    ionice nice timeout 4h npm run build
+    killall sccache || true
+    ionice nice timeout 4h npm run build || ionice nice timeout 4h npm run build
+    ionice nice ninja -C src/out/Debug brave_unit_tests
+    ionice nice ninja -C src/out/Debug brave_browser_tests
     echo
     
     echo $(date)
     echo "=>Release build"
-    ionice nice timeout 5h npm run build Release
+    killall sccache || true
+    ionice nice timeout 5h npm run build Release || ionice nice timeout 5h npm run build Release
+    ionice nice ninja -C src/out/Release brave_unit_tests
+    ionice nice ninja -C src/out/Release brave_browser_tests
     echo
     
     echo $(date)

Add my RAID1+LUKS post on Ubuntu
diff --git a/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
new file mode 100644
index 0000000..a5acfd1
--- /dev/null
+++ b/posts/installing-ubuntu-bionic-on-encrypted-raid1.mdwn
@@ -0,0 +1,186 @@
+[[!meta title="Installing Ubuntu 18.04 using both full-disk encryption and RAID1"]]
+[[!meta date="2019-05-22T21:30:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently setup a desktop computer with two SSDs using a software RAID1 and
+full-disk encryption (i.e. [LUKS](https://en.wikipedia.org/wiki/LUKS)).
+Since this is not a supported configuration in Ubuntu desktop, I had to use
+the server installation medium.
+
+This is my version of these [excellent
+instructions](https://askubuntu.com/questions/1066028/install-ubuntu-18-04-desktop-with-raid-1-and-lvm-on-machine-with-uefi-bios#1066041).
+
+## Server installer
+
+Start by downloading the [alternate server
+installer](http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/) and
+[verifying its
+signature](https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu):
+
+1. Download the required files:
+
+        wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
+        wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
+        wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
+
+2. Verify the signature on the hash file:
+
+        $ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
+        $ gpg --verify SHA256SUMS.gpg SHA256SUMS
+        gpg: Signature made Fri Feb 15 08:32:38 2019 PST
+        gpg:                using RSA key D94AA3F0EFE21092
+        gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
+        gpg: WARNING: This key is not certified with a trusted signature!
+        gpg:          There is no indication that the signature belongs to the owner.
+        Primary key fingerprint: 8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
+
+3. Verify the hash of the ISO file:
+
+        $ sha256sum ubuntu-18.04.2-server-amd64.iso 
+        a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5  ubuntu-18.04.2-server-amd64.iso
+        $ grep ubuntu-18.04.2-server-amd64.iso SHA256SUMS
+        a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 *ubuntu-18.04.2-server-amd64.iso
+
+Then copy it to a USB drive:
+
+    dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX
+
+and boot with it.
+
+Inside the installer, use manual partitioning to:
+
+1. Configure the physical partitions.
+2. Configure the RAID array second.
+3. Configure the encrypted partitions last.
+
+Here's the exact configuration I used:
+
+- `/dev/sda1` is 512 MB and used as the EFI partition
+- `/dev/sdb1` is 512 MB but **not used for anything**
+- `/dev/sda2` and `/dev/sdb2` are both 4 GB (RAID)
+- `/dev/sda3` and `/dev/sdb3` are both 512 MB (RAID)
+- `/dev/sda4` and `/dev/sdb4` use up the rest of the disk (RAID)
+
+I only set `/dev/sda1` as the EFI partition because I found that **adding a
+second EFI partition would break the installer**.
+
+I created the following RAID1 arrays:
+
+- `/dev/sda2` and `/dev/sdb2` for `/dev/md2`
+- `/dev/sda3` and `/dev/sdb3` for `/dev/md0`
+- `/dev/sda4` and `/dev/sdb4` for `/dev/md1`
+
+I used `/dev/md0` as my **unencrypted** `/boot` partition.
+
+Then I created the following LUKS partitions:
+
+- `md1_crypt` as the `/` partition using `/dev/md1`
+- `md2_crypt` as the *swap* partition (4 GB) with a **random
+  encryption key** using `/dev/md2`
+
+## Post-installation configuration
+
+Once your new system is up, sync the EFI partitions using DD:
+
+    dd if=/dev/sda1 of=/dev/sdb1
+
+and create a second EFI boot entry:
+
+    efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \EFI\ubuntu\shimx64.efi
+
+Ensure that the RAID drives are fully sync'ed by keeping an eye on
+`/proc/mdstat` and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
+
+Once you have rebooted, remove the following package to speed up future boots:
+
+    apt purge btrfs-progs
+
+To switch to the desktop variant of Ubuntu, install these meta-packages:
+
+    apt install ubuntu-desktop gnome
+
+then use `debfoster` to remove unnecessary packages (in particular the ones
+that only come with the default Ubuntu server installation).
+
+## Fixing booting with degraded RAID arrays
+
+Since I have run into [RAID startup problems in the
+past](https://feeding.cloud.geek.nz/posts/the-perils-of-raid-and-full-disk-encryption-on-ubuntu/),
+I expected having to fix up a few things to make degraded RAID arrays
+boot correctly.
+
+I did not use [LVM](https://en.wikipedia.org/wiki/LVM2) since I
+didn't really feel the need to add yet another layer of abstraction on top
+of my setup, but I found that the `lvm2` package must still be installed:
+
+    apt install lvm2
+
+with `use_lvmetad = 0` in `/etc/lvm/lvm.conf`.
+
+Then in order to automatically bring up the RAID arrays with 1 out of 2
+drives, I added the following script in
+`/etc/initramfs-tools/scripts/local-top/cryptraid`:
+
+     #!/bin/sh
+     PREREQ="mdadm"
+     prereqs()
+     {
+          echo "$PREREQ"
+     }
+     case $1 in
+     prereqs)
+          prereqs
+          exit 0
+          ;;
+     esac
+     
+     mdadm --run /dev/md0
+     mdadm --run /dev/md1
+     mdadm --run /dev/md2
+
+before making that script executable:
+
+    chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid
+
+and refreshing the initramfs:
+
+    update-initramfs -u -k all
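+
+You can check that the script made it into the initramfs; this should list
+scripts/local-top/cryptraid:
+
+    lsinitramfs /boot/initrd.img-$(uname -r) | grep cryptraid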
+
+## Disable suspend-to-disk
+
+Since I use a [random encryption key for the swap
+partition](https://feeding.cloud.geek.nz/posts/encrypted-swap-partition-on/)
+(to avoid having a second password prompt at boot time), it means that
+suspend-to-disk is not going to work and so I disabled it by putting the
+following in `/etc/initramfs-tools/conf.d/resume`:
+
+    RESUME=none
+
+and by adding `noresume` to the `GRUB_CMDLINE_LINUX` variable in
+`/etc/default/grub` before applying these changes:
+
+    update-grub
+    update-initramfs -u -k all
+
+## Test your configuration
+
+With all of this in place, you should be able to do a final test of your
+setup:
+
+1. Shutdown the computer and unplug the second drive.
+2. Boot with only the first drive.
+3. Shutdown the computer and plug the second drive back in.
+4. Boot with both drives and re-add the second drive to the RAID array:
+
+        mdadm /dev/md0 -a /dev/sdb3
+        mdadm /dev/md1 -a /dev/sdb4
+        mdadm /dev/md2 -a /dev/sdb2
+
+5. Wait until the RAID is done re-syncing and shutdown the computer.
+6. Repeat steps 2-5 with the first drive unplugged instead of the second.
+7. Reboot with both drives plugged in.
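+
+The re-syncing in step 5 can be monitored with:
+
+    watch -n5 cat /proc/mdstat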
+
+At this point, you have a working setup that will gracefully degrade to a
+one-drive RAID array should one of your drives fail.
+
+[[!tag debian]] [[!tag nzoss]] [[!tag ubuntu]] [[!tag raid]]

Fix list command
diff --git a/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
index b7c4b22..7b87943 100644
--- a/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
+++ b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
@@ -51,7 +51,7 @@ Once that's in place, you can finally create the Fedora 29 container:
 
 To see a list of all distros available with the `download` template:
 
-    lxc-create --template=download -- --list
+    lxc-create -n foo --template=download -- --list
 
 # Logging in as root
 

Add post about AnyTone programming in a Windows VM
diff --git a/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox.mdwn b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox.mdwn
new file mode 100644
index 0000000..1d0463a
--- /dev/null
+++ b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox.mdwn
@@ -0,0 +1,78 @@
+[[!meta title="Programming an AnyTone AT-D878UV on Linux using Windows 10 and VirtualBox"]]
+[[!meta date="2019-04-16T22:15:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently acquired an [AnyTone AT-D878UV DMR
+radio](https://www.bridgecomsystems.com/collections/amateur-handheld-radios/products/anytone-at-d878uv-dual-band-dmr-handheld-radio-w-gps-programming-cable)
+which is unfortunately not supported by
+[chirp](https://chirp.danplanet.com/projects/chirp/wiki/Home), my usual
+go-to free software package for programming amateur radios.
+
+Instead, I had to setup a Windows 10 virtual machine so that I could setup
+the radio using the manufacturer's computer programming software (CPS).
+
+# Install VirtualBox
+
+Install [VirtualBox](https://www.virtualbox.org):
+
+    apt install virtualbox virtualbox-guest-additions-iso
+
+and add your user account to the `vboxusers` group:
+
+    adduser francois vboxusers
+
+to make file sharing between the host and the guest work.
+
+Finally, **reboot** to ensure that group membership and kernel modules are
+all set.
+
+# Create a Windows 10 virtual machine
+
+Create a new Windows 10 virtual machine within VirtualBox. Then, [download Windows
+10](https://www.microsoft.com/en-in/software-download/windows10ISO) from
+Microsoft and start the virtual machine, mounting the `.iso` file as an
+optical drive.
+
+Follow the instructions to install Windows 10, paying attention to the
+[various privacy options you will be
+offered](https://askleo.com/setting-up-windows-10-for-privacy/).
+
+Once Windows is installed, mount the host's
+`/usr/share/virtualbox/VBoxGuestAdditions.iso` as a virtual optical drive
+and install the VirtualBox guest additions.
+
+# Installing the CPS
+
+With Windows fully setup, it's time to download the latest version of the
+[computer programming
+software](https://www.bridgecomsystems.com/pages/anytone-at-d878uv-support-page).
+
+Unpack the downloaded file and then install it as Admin (right-click on the
+`.exe`).
+
+Do NOT install the GD driver update or the USB driver, they do not appear to
+be necessary.
+
+# Program the radio
+
+First, you'll want to download from the radio to get a starting configuration
+that you can change.
+
+To do this:
+
+1. Turn the radio on and wait until it has finished booting.
+2. Plug the USB programming cable into the computer and the radio.
+3. From the CPS menu choose "Set COM port".
+4. From the CPS menu choose "Read from radio".
+
+**Save this original codeplug to a file as a backup** in case you need to
+easily reset back to the factory settings.
+
+To program the radio, follow this [handy third-party
+guide](https://www.bridgecomsystems.com/blogs/bridgecom-tx-rx-blog/anytone-868-878-programming-guide-v1-33)
+since it's much better than the official manual.
+
+You should be able to use the "Write to radio" menu option without any
+problems once you're done creating your codeplug.
+
+[[!tag ham]]

Mention how to get the list of available distros
diff --git a/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
index 8bf166c..b7c4b22 100644
--- a/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
+++ b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
@@ -49,6 +49,10 @@ Once that's in place, you can finally create the Fedora 29 container:
 
     lxc-create -n fedora29 -t download -- -d fedora -r 29 -a amd64
 
+To see a list of all distros available with the `download` template:
+
+    lxc-create --template=download -- --list
+
 # Logging in as root
 
 Start up the container and get a login console:

Improve quality of avatar
diff --git a/avatar.jpg b/avatar.jpg
index d2f346a..ab6c37b 100644
Binary files a/avatar.jpg and b/avatar.jpg differ

Update avatar
diff --git a/avatar.jpg b/avatar.jpg
index 5fb6671..d2f346a 100644
Binary files a/avatar.jpg and b/avatar.jpg differ

Improve formatting of user comment
diff --git a/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment b/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment
index f47d529..8822ece 100644
--- a/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment
+++ b/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment
@@ -5,7 +5,7 @@
  subject="comment 1"
  date="2019-04-13T15:41:02Z"
  content="""
-The -c option is a great recommendation, but I've been trying out https://github.com/StanfordSNR/guardian-agent and I like it even better; it gives you much more information about what is happening: which computer is asking for permission, which key they want to use, what server they're going to connect to, and what command they want to run using it. You can make a much more informed decision, and you can save those decisions so that you only have to decide for novel situations.
+The `-c` option is a great recommendation, but I've been trying out <https://github.com/StanfordSNR/guardian-agent> and I like it even better; it gives you much more information about what is happening: which computer is asking for permission, which key they want to use, what server they're going to connect to, and what command they want to run using it. You can make a much more informed decision, and you can save those decisions so that you only have to decide for novel situations.
 
-Also, the ProxyJump command is much nicer than ProxyCommand, but also newer. It's easier to use and harder to misuse.
+Also, the `ProxyJump` command is much nicer than ProxyCommand, but also newer. It's easier to use and harder to misuse.
 """]]

Add a few hardening steps and link to Mozilla guidelines
Disabling authorized_keys2 is from https://github.com/matrix-org/matrix.org/issues/371
while the rest are from Mozilla.
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index e39622d..7680ac3 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -2,6 +2,9 @@
 [[!meta date="2014-02-20T23:52:00.000+13:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
+These are the settings that I use on my servers. You may want to also review
+the [settings that are used at Mozilla](https://infosec.mozilla.org/guidelines/openssh).
+
 # Basic configuration
 
 There are a few basic things that most admins will already know (and that
@@ -20,6 +23,7 @@ This is what `/etc/ssh/sshd_config` should contain:
     AuthenticationMethods publickey
     PasswordAuthentication no
     PermitRootLogin no
+    AuthorizedKeysFile      .ssh/authorized_keys
 
 Once you've done that, make sure you have all of the required host keys by running:
 
@@ -33,6 +37,16 @@ You may also want to ensure you are using [strong ciphers and hash functions](ht
     Ciphers <insert Mozilla list>
     MACs <insert Mozilla list>
 
+and that you deactivate short Diffie-Hellman moduli:
+
+    awk '$5 >= 3071' /etc/ssh/moduli > /etc/ssh/moduli.tmp
+    mv /etc/ssh/moduli.tmp /etc/ssh/moduli
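+
+You can confirm that no short moduli remain afterwards (this should print 0):
+
+    awk '$5 < 3071' /etc/ssh/moduli | wc -l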
+
+Finally, if you don't need `sftp` support, Mozilla recommends disabling it,
+which can be done by commenting out this line:
+
+    #Subsystem     sftp    /usr/lib/openssh/sftp-server
+
 # Whitelist approach to giving users ssh access
 
 To ensure that only a few users have ssh access to the server and that newly

Comment moderation
diff --git a/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment b/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment
new file mode 100644
index 0000000..f47d529
--- /dev/null
+++ b/posts/secure-ssh-agent-usage/comment_1_a169f55fa99dd3d9832d21102ebba053._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="96.86.171.70"
+ claimedauthor="db48x"
+ url="http://db48x.net/"
+ subject="comment 1"
+ date="2019-04-13T15:41:02Z"
+ content="""
+The -c option is a great recommendation, but I've been trying out https://github.com/StanfordSNR/guardian-agent and I like it even better; it gives you much more information about what is happening: which computer is asking for permission, which key they want to use, what server they're going to connect to, and what command they want to run using it. You can make a much more informed decision, and you can save those decisions so that you only have to decide for novel situations.
+
+Also, the ProxyJump command is much nicer than ProxyCommand, but also newer. It's easier to use and harder to misuse.
+"""]]

Add ssh-agent post
diff --git a/posts/secure-ssh-agent-usage.mdwn b/posts/secure-ssh-agent-usage.mdwn
new file mode 100644
index 0000000..331c2ff
--- /dev/null
+++ b/posts/secure-ssh-agent-usage.mdwn
@@ -0,0 +1,63 @@
+[[!meta title="Secure ssh-agent usage"]]
+[[!meta date="2019-04-13T06:45:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+`ssh-agent` was in the news recently due to the [matrix.org
+compromise](https://github.com/matrix-org/matrix.org/issues/371). The main
+takeaway from that incident was that one should [avoid the `ForwardAgent`
+(or `-A`) functionality when `ProxyCommand` can
+do](https://heipei.io/2015/02/26/SSH-Agent-Forwarding-considered-harmful/)
+and consider multi-factor authentication on the server-side, for example
+using
+[libpam-google-authenticator](https://wiki.archlinux.org/index.php/Google_Authenticator)
+or [libpam-yubico](https://developers.yubico.com/yubico-pam/YubiKey_and_SSH_via_PAM.html).
+
+That said, there are also two options to `ssh-add` that can help reduce the
+risk of someone else with elevated privileges hijacking your agent to make
+use of your ssh credentials.
+
+## Prompt before each use of a key
+
+The first option is `-c` which will require you to confirm each use of your
+ssh key by pressing Enter when a graphical prompt shows up.
+
+Simply install an `ssh-askpass` frontend like
+[ssh-askpass-gnome](https://packages.debian.org/stable/ssh-askpass-gnome):
+
+    apt install ssh-askpass-gnome
+
+and then use this when adding your key to the agent:
+
+    ssh-add -c ~/.ssh/key
+
+## Automatically removing keys after a timeout
+
+`ssh-add -D` will remove all identities (i.e. keys) from your ssh agent, but
+requires that you remember to run it manually once you're done.
+
+That's where the second option comes in. Specifying `-t` when adding a key
+will automatically remove that key from the agent after a while.
+
+For example, I have found that this setting works well at work:
+
+    ssh-add -t 10h ~/.ssh/key
+
+where I don't want to have to type my ssh password every time I push a git
+branch.
+
+At home on the other hand, my use of ssh is more sporadic and so I don't
+mind a shorter timeout:
+
+    ssh-add -t 4h ~/.ssh/key
+
+## Making these options the default
+
+I couldn't find a configuration file to make these settings the default and
+so I ended up putting the following line in my `~/.bash_aliases`:
+
+    alias ssh-add='ssh-add -c -t 4h'
+
+so that I can continue to use `ssh-add` as normal and not have to remember
+to include these extra options.
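+
+At any time, you can list the identities currently held by the agent with:
+
+    ssh-add -l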
+
+[[!tag debian]] [[!tag nzoss]] [[!tag mozilla ]] [[!tag ssh]] [[!tag sysadmin]] [[!tag security]]

Format URLs properly and update a broken one
diff --git a/posts/hardening-ssh-servers/comment_3_34613b0bcc79caefa7dc6e515acdc43f._comment b/posts/hardening-ssh-servers/comment_3_34613b0bcc79caefa7dc6e515acdc43f._comment
index 6306f56..d004250 100644
--- a/posts/hardening-ssh-servers/comment_3_34613b0bcc79caefa7dc6e515acdc43f._comment
+++ b/posts/hardening-ssh-servers/comment_3_34613b0bcc79caefa7dc6e515acdc43f._comment
@@ -7,5 +7,5 @@
  content="""
 You can also use 2facthor auth with ( for example ) google authenticator:
 
-http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/
+<http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/>
 """]]
diff --git a/posts/hardening-ssh-servers/comment_5_cae71a2bede8405eb29f0b6d8d4bede4._comment b/posts/hardening-ssh-servers/comment_5_cae71a2bede8405eb29f0b6d8d4bede4._comment
index e22c3d0..ed98028 100644
--- a/posts/hardening-ssh-servers/comment_5_cae71a2bede8405eb29f0b6d8d4bede4._comment
+++ b/posts/hardening-ssh-servers/comment_5_cae71a2bede8405eb29f0b6d8d4bede4._comment
@@ -5,5 +5,5 @@
  subject="2FA & cipher restrictions"
  date="2014-09-13T22:04:59Z"
  content="""
-Nice tutorial. I wrote my own recently with a little more focus on Two-Factor Authentication and strong encryption (limiting weak ciphers/MACs) but found your tutorial really strong in the auditing department.  https://joscor.com/2014/09/hardening-openssh-server-ubuntu-14-04/ 
+Nice tutorial. I wrote my own recently with a little more focus on Two-Factor Authentication and strong encryption (limiting weak ciphers/MACs) but found your tutorial really strong in the auditing department.  <https://joscor.com/blog/hardening-openssh-server-ubuntu-14-04/>
 """]]

Fix typo
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index 1d5de08..e39622d 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -120,7 +120,7 @@ by enabling the `pam_tty_audit` module in `/etc/pam.d/sshd`:
 
 However this module is [not included in wheezy](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699159) and has only recently been [re-added to Debian](http://packages.qa.debian.org/p/pam/news/20131021T001835Z.html).
 
-# Identitying stolen keys
+# Identifying stolen keys
 
 One thing I'd love to have is a way to identify a stolen public key. Given the IP
 restrictions described above, if a public key is stolen and used from a different IP,

Fix invalid dates
diff --git a/posts/checking-your-passwords-against-hibp.mdwn b/posts/checking-your-passwords-against-hibp.mdwn
index adfa2bb..cd6ac83 100644
--- a/posts/checking-your-passwords-against-hibp.mdwn
+++ b/posts/checking-your-passwords-against-hibp.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Checking Your Passwords Against the Have I Been Pwned List"]]
-[[!meta date="2017-10-16T22:10:00:00.000-07:00"]]
+[[!meta date="2017-10-16T22:10:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 Two months ago, Troy Hunt, the security professional behind
diff --git a/posts/mercurial-commit-series-phabricator-using-arcanist.mdwn b/posts/mercurial-commit-series-phabricator-using-arcanist.mdwn
index 746fe80..a1ddc64 100644
--- a/posts/mercurial-commit-series-phabricator-using-arcanist.mdwn
+++ b/posts/mercurial-commit-series-phabricator-using-arcanist.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Mercurial commit series in Phabricator using Arcanist"]]
-[[!meta date="2018-08-01T09:00:00:00.000-07:00"]]
+[[!meta date="2018-08-01T09:00:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 [Phabricator](https://www.phacility.com/phabricator/) supports multi-commit
diff --git a/posts/mysterious-400-bad-request-error-django-debug.mdwn b/posts/mysterious-400-bad-request-error-django-debug.mdwn
index 2c48fd7..cd86245 100644
--- a/posts/mysterious-400-bad-request-error-django-debug.mdwn
+++ b/posts/mysterious-400-bad-request-error-django-debug.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Mysterious 400 Bad Request in Django debug mode"]]
-[[!meta date="2017-06-10T17:20:00:00.000-07:00"]]
+[[!meta date="2017-06-10T17:20:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 While upgrading [Libravatar](https://www.libravatar.org) to a more recent
diff --git a/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
index 1e801cd..cde2ef8 100644
--- a/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
+++ b/posts/pristine-tar-and-git-buildpackage-work-arounds.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="pristine-tar and git-buildpackage Work-arounds"]]
-[[!meta date="2017-08-09T22:25:00:00.000-07:00"]]
+[[!meta date="2017-08-09T22:25:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 I recently ran into problems trying to package the
diff --git a/posts/proxy-acme-challenges-to-single-machine.mdwn b/posts/proxy-acme-challenges-to-single-machine.mdwn
index aa2ebb5..b258617 100644
--- a/posts/proxy-acme-challenges-to-single-machine.mdwn
+++ b/posts/proxy-acme-challenges-to-single-machine.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Proxy ACME challenges to a single machine"]]
-[[!meta date="2017-11-28T22:10:00:00.000-08:00"]]
+[[!meta date="2017-11-28T22:10:00.000-08:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 The [Libravatar mirrors](https://wiki.libravatar.org/run_a_mirror/) are
diff --git a/posts/recovering-from-botched-mercurial-bookmark-histedit.mdwn b/posts/recovering-from-botched-mercurial-bookmark-histedit.mdwn
index a80567d..7cecb5b 100644
--- a/posts/recovering-from-botched-mercurial-bookmark-histedit.mdwn
+++ b/posts/recovering-from-botched-mercurial-bookmark-histedit.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Recovering from a botched hg histedit on a mercurial bookmark"]]
-[[!meta date="2018-07-26T22:42:00:00.000-07:00"]]
+[[!meta date="2018-07-26T22:42:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 If you are in the middle of a failed
diff --git a/posts/test-mail-server-ubuntu-debian.mdwn b/posts/test-mail-server-ubuntu-debian.mdwn
index 6028b53..3fe9f26 100644
--- a/posts/test-mail-server-ubuntu-debian.mdwn
+++ b/posts/test-mail-server-ubuntu-debian.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Test mail server on Ubuntu and Debian"]]
-[[!meta date="2017-11-13T17:30:00:00.000+08:00"]]
+[[!meta date="2017-11-13T17:30:00.000+08:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 I wanted to setup a mail service on a staging server that would send all
diff --git a/posts/time-synchronization-with-ntp-and-systemd.mdwn b/posts/time-synchronization-with-ntp-and-systemd.mdwn
index 84d3841..3f54067 100644
--- a/posts/time-synchronization-with-ntp-and-systemd.mdwn
+++ b/posts/time-synchronization-with-ntp-and-systemd.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Time Synchronization with NTP and systemd"]]
-[[!meta date="2017-08-06T13:10:00:00.000-07:00"]]
+[[!meta date="2017-08-06T13:10:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 I recently ran into problems with generating
diff --git a/posts/tls_authentication_freenode_and_oftc.mdwn b/posts/tls_authentication_freenode_and_oftc.mdwn
index 31f7f1a..02e7506 100644
--- a/posts/tls_authentication_freenode_and_oftc.mdwn
+++ b/posts/tls_authentication_freenode_and_oftc.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="TLS Authentication on Freenode and OFTC"]]
-[[!meta date="2017-09-08T21:50:00:00.000-07:00"]]
+[[!meta date="2017-09-08T21:50:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 In order to easily authenticate with IRC networks such as
diff --git a/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn b/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn
index 696add0..e5434b5 100644
--- a/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn
+++ b/posts/toggling-between-pulseaudio-outputs-when-docking-a-laptop.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Toggling Between Pulseaudio Outputs when Docking a Laptop"]]
-[[!meta date="2017-07-11T22:00:00:00.000-07:00"]]
+[[!meta date="2017-07-11T22:00:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 In addition to
diff --git a/posts/using-all-5ghz-wifi-frequencies-in-gargoyle-router.mdwn b/posts/using-all-5ghz-wifi-frequencies-in-gargoyle-router.mdwn
index d22ff84..c65cae1 100644
--- a/posts/using-all-5ghz-wifi-frequencies-in-gargoyle-router.mdwn
+++ b/posts/using-all-5ghz-wifi-frequencies-in-gargoyle-router.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Using all of the 5 GHz WiFi frequencies in a Gargoyle Router"]]
-[[!meta date="2017-12-10T18:00:00:00.000-08:00"]]
+[[!meta date="2017-12-10T18:00:00.000-08:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 WiFi in the 2.4 GHz range is usually fairly congested in urban environments.
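
The malformed timestamps fixed in the commit above all share one pattern: an extra `:00` field in the time portion (`22:10:00:00` instead of `22:10:00`). Any remaining offenders could be caught with a small scan; this is a hypothetical sketch (the regex is an assumption, not part of the wiki tooling):

```python
import re

# A valid [[!meta date]] looks like 2017-10-16T22:10:00.000-07:00;
# the broken ones had a fourth time field: 22:10:00:00.000-07:00.
BROKEN = re.compile(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}:\d{2}')

def find_broken_dates(text):
    """Return any malformed timestamps found in a page's source."""
    return BROKEN.findall(text)
```

Running it over each `.mdwn` file would flag exactly the dates corrected here.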

Add screenshot keybindings
https://faq.i3wm.org/question/202/what-do-you-guys-use-for-printscreen.1.html
https://unix.stackexchange.com/questions/497897/print-screen-key-in-i3
diff --git a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
index 5e52f51..4dc4b8e 100644
--- a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
+++ b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
@@ -65,7 +65,12 @@ While keyboard shortcuts can be configured in GNOME, they don't work within i3,
     # show battery stats
     bindsym XF86Battery exec gnome-power-statistics
 
-to make volume control, screen brightness and battery status buttons work as expected on my laptop.
+    # interactive screenshot by pressing printscreen
+    bindsym Print exec /usr/bin/gnome-screenshot -i
+    # crop-area screenshot by pressing Mod + printscreen
+    bindsym --release $mod+Print exec /usr/bin/gnome-screenshot -a
+
+to make volume control, screen brightness, battery status and print screen buttons work as expected on my laptop.
 
 These bindings require the following packages or scripts:
 

Revert "Comment moderation"
This reverts spam commit cd974a0dee9553c237b876c75ae21c96f206190c.
diff --git a/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment b/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment
deleted file mode 100644
index b597820..0000000
--- a/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment
+++ /dev/null
@@ -1,9 +0,0 @@
-[[!comment format=mdwn
- ip="46.185.122.180"
- claimedauthor="obazxopsum"
- url="http://theprettyguineapig.com/amoxicillin/"
- subject="Raynaud's generations rickettsial grief malignant re-intervention continuing. "
- date="2019-03-07T15:53:40Z"
- content="""
-http://theprettyguineapig.com/amoxicillin/ - Amoxicillin <a href=\"http://theprettyguineapig.com/amoxicillin/\">Amoxicillin No Prescription</a> http://theprettyguineapig.com/amoxicillin/
-"""]]

Comment moderation
diff --git a/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment b/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment
new file mode 100644
index 0000000..b597820
--- /dev/null
+++ b/posts/test-mail-server-ubuntu-debian/comment_1_8d46bd0fae22e7b429d2e7a93b619a52._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="46.185.122.180"
+ claimedauthor="obazxopsum"
+ url="http://theprettyguineapig.com/amoxicillin/"
+ subject="Raynaud's generations rickettsial grief malignant re-intervention continuing. "
+ date="2019-03-07T15:53:40Z"
+ content="""
+http://theprettyguineapig.com/amoxicillin/ - Amoxicillin <a href=\"http://theprettyguineapig.com/amoxicillin/\">Amoxicillin No Prescription</a> http://theprettyguineapig.com/amoxicillin/
+"""]]

Link to another post on how to secure OpenVPN
diff --git a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
index 8013d6c..3f373f3 100644
--- a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
+++ b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
@@ -95,7 +95,7 @@ Then I took the official configuration template:
     cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
     gunzip /etc/openvpn/server.conf.gz
 
-and set the following in `/etc/openvpn/server.conf` (which includes recommendations from [BetterCrypto.org](https://bettercrypto.org/)):
+and set the following in `/etc/openvpn/server.conf` (which includes recommendations from [BetterCrypto.org](https://bettercrypto.org/) and [Gert van Dijk](https://blog.g3rt.nl/openvpn-security-tips.html)):
 
     dh dh2048.pem
     push "redirect-gateway def1 bypass-dhcp"

creating tag page tags/sccache
diff --git a/tags/sccache.mdwn b/tags/sccache.mdwn
new file mode 100644
index 0000000..f8d7fbd
--- /dev/null
+++ b/tags/sccache.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged sccache"]]
+
+[[!inline pages="tagged(sccache)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/brave
diff --git a/tags/brave.mdwn b/tags/brave.mdwn
new file mode 100644
index 0000000..d3b44d6
--- /dev/null
+++ b/tags/brave.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged brave"]]
+
+[[!inline pages="tagged(brave)" actions="no" archive="yes"
+feedshow=10]]

Add sccache seeding post for brave-browser
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
new file mode 100644
index 0000000..ea742c1
--- /dev/null
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -0,0 +1,60 @@
+[[!meta title="Seeding sccache for Faster Brave Browser Builds"]]
+[[!meta date="2019-03-22T16:25:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+[Compiling the
+Brave Browser](https://github.com/brave/brave-browser/wiki#build-brave)
+(based on [Chromium](https://www.chromium.org/Home)) on Linux can take a
+really long time and so most developers use
+[sccache](https://github.com/brave/brave-browser/wiki/sccache-for-faster-builds)
+to cache object files and speed up future recompilations.
+
+Here's the cronjob I wrote to seed my local cache every work day to
+pre-compile the latest builds:
+
+    30 23 * * 0-4   francois  /usr/bin/chronic /home/francois/bin/seed-brave-browser-cache
+
+and here are the contents of that script:
+
+    #!/bin/bash
+    set -e
+    
+    # Set the path and sccache environment variables correctly
+    source ${HOME}/.bashrc-brave
+    export LANG=en_CA.UTF-8
+    
+    cd ${HOME}/devel/brave-browser-cache
+    
+    echo "Environment:"
+    echo "- HOME = ${HOME}"
+    echo "- PATH = ${PATH}"
+    echo "- PWD = ${PWD}"
+    echo "- SHELL = ${SHELL}"
+    echo "- BASH_ENV = ${BASH_ENV}"
+    echo
+    
+    echo $(date)
+    echo "=> Update repo"
+    git pull
+    npm install
+    npm run init
+    
+    echo $(date)
+    echo "=> Delete any old build output"
+    rm -rf src/out
+    
+    echo $(date)
+    echo "=> Debug build"
+    ionice nice timeout 4h npm run build
+    echo
+    
+    echo $(date)
+    echo "=> Release build"
+    ionice nice timeout 5h npm run build Release
+    echo
+    
+    echo $(date)
+    echo "=> Delete build output"
+    rm -rf src/out
+
+[[!tag brave]] [[!tag sccache]]
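
The `30 23 * * 0-4` schedule in the cronjob above fires at 23:30 from Sunday through Thursday (cron numbers days of the week from 0 = Sunday), i.e. the night before each work day. A minimal sketch of that day-of-week logic, assuming nothing beyond the standard library:

```python
from datetime import datetime

def matches_seed_schedule(dt):
    """True when dt falls on the `30 23 * * 0-4` cron schedule.

    cron numbers days 0-6 starting on Sunday, while Python's
    weekday() numbers them 0-6 starting on Monday, hence the shift.
    """
    cron_dow = (dt.weekday() + 1) % 7
    return dt.hour == 23 and dt.minute == 30 and cron_dow <= 4
```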

Turn on DNS rebinding protection
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 8eb5a06..a33321c 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -20,6 +20,15 @@ In `/etc/unbound/unbound.conf.d/francois.conf`, I enabled the following security
         use-caps-for-id: no # makes lots of queries fail
         hide-identity: yes
         hide-version: yes
+        private-address: 10.0.0.0/8
+        private-address: 100.64.0.0/10
+        private-address: 127.0.0.0/8
+        private-address: 169.254.0.0/16
+        private-address: 172.16.0.0/12
+        private-address: 192.168.0.0/16
+        private-address: fc00::/7
+        private-address: fe80::/10
+        private-address: ::ffff:0:0/96
 
 and turned on prefetching to hopefully keep in cache the sites I visit regularly:
 
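With `private-address` set, unbound strips answers that resolve into those ranges, which is what defeats DNS rebinding. The check itself is simple to express directly; here is a sketch using Python's `ipaddress` module, with the range list mirroring the configuration above:

```python
import ipaddress

# Same ranges as the unbound private-address configuration above.
PRIVATE_RANGES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8", "169.254.0.0/16",
    "172.16.0.0/12", "192.168.0.0/16",
    "fc00::/7", "fe80::/10", "::ffff:0:0/96",
)]

def is_rebinding_answer(ip_text):
    """True if a DNS answer points into a private range and should
    therefore be rejected for a public hostname."""
    ip = ipaddress.ip_address(ip_text)
    # ipaddress membership tests return False across IP versions,
    # so mixing v4 addresses with v6 networks is safe here.
    return any(ip in net for net in PRIVATE_RANGES)
```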

Add another Asterisk post
diff --git a/posts/connecting-voip-phone-directly-to-asterisk-server.mdwn b/posts/connecting-voip-phone-directly-to-asterisk-server.mdwn
new file mode 100644
index 0000000..61280da
--- /dev/null
+++ b/posts/connecting-voip-phone-directly-to-asterisk-server.mdwn
@@ -0,0 +1,62 @@
+[[!meta title="Connecting a VoIP phone directly to an Asterisk server"]]
+[[!meta date="2019-02-28T22:25:00.000-08:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+On my [Asterisk](https://www.asterisk.org/) server, I happen to have two
+on-board ethernet boards. Since I only used one of these, I decided to move
+my VoIP phone from the local network switch to being connected directly to
+the Asterisk server.
+
+The main advantage is that this phone, running proprietary software of
+unknown quality, is no longer reachable from my general home network. Most
+importantly, it no longer has access to the Internet without my having to
+firewall it manually.
+
+Here's how I configured everything.
+
+# Private network configuration
+
+On the server, I started by giving the second network interface a static IP
+address in `/etc/network/interfaces`:
+
+    auto eth1
+    iface eth1 inet static
+        address 192.168.2.2
+        netmask 255.255.255.0
+
+On the VoIP phone itself, I set the static IP address to `192.168.2.3` and
+the DNS server to `192.168.2.2`. I then updated the SIP registrar IP address
+to `192.168.2.2`.
+
+The DNS server actually refers to an [unbound
+daemon](https://feeding.cloud.geek.nz/posts/setting-up-your-own-dnssec-aware/)
+running on the Asterisk server. The only configuration change I had to make
+was to listen on the second interface and allow the VoIP phone in:
+
+    server:
+        interface: 127.0.0.1
+        interface: 192.168.2.2
+        access-control: 0.0.0.0/0 refuse
+        access-control: 127.0.0.1/32 allow
+        access-control: 192.168.2.3/32 allow
+
+Finally, I opened the right ports on the server's firewall in
+`/etc/network/iptables.up.rules`:
+
+    -A INPUT -s 192.168.2.3/32 -p udp --dport 5060 -j ACCEPT
+    -A INPUT -s 192.168.2.3/32 -p udp --dport 10000:20000 -j ACCEPT
+
+# Accessing the admin page
+
+Now that the VoIP phone is no longer available on the local network, it's
+not possible to access its admin page. That's a good thing from a security
+point of view, but it's somewhat inconvenient.
+
+Therefore I put the following in my `~/.ssh/config` to make the admin page
+available on `http://localhost:8081` after I connect to the Asterisk server
+via ssh:
+
+    Host asterisk
+        LocalForward 8081 192.168.2.3:80
+
+[[!tag debian]] [[!tag asterisk]] [[!tag nzoss]] [[!tag ubuntu]]

Make a note on how to prevent the CRL from expiring early
https://forums.openvpn.net/viewtopic.php?t=23166
diff --git a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
index fa0c8ec..8013d6c 100644
--- a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
+++ b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
@@ -58,7 +58,15 @@ Create this symbolic link:
 
     ln -s openssl-1.0.0.cnf openssl.cnf
 
-and generate the keys:
+and set the following in `openssl.cnf`:
+
+    default_crl_days= 3650
+
+to avoid having the [CRL expire after one month](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=849909) and throw this error in the logs:
+
+    VERIFY ERROR: depth=0, error=CRL has expired:
+
+Finally, generate the keys:
 
     . ./vars
     ./clean-all
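
The easy-rsa default of 30 for `default_crl_days` is why the CRL expires after one month; `3650` pushes the next update out roughly ten years. The arithmetic is plain date addition; a sketch using Python's `datetime` (the issue date is illustrative):

```python
from datetime import date, timedelta

def crl_next_update(issued, default_crl_days):
    """Date after which OpenVPN starts logging 'CRL has expired'."""
    return issued + timedelta(days=default_crl_days)
```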

Add another DNSSEC test page
I found it here: https://github.com/bsclifton/home-router-config/commit/8241ba2e51291a3de3e703d277a1aba591cd2417#diff-a87b1e8c9fb22c6cce19716353a4a989R26
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 727b27d..8eb5a06 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -94,6 +94,7 @@ Once everything is configured properly, the best way I found to test that this s
 
   * [http://www.dnssec.cz/](http://www.dnssec.cz/) should show a green key
   * [http://www.rhybar.cz/](http://www.rhybar.cz/) should not be reachable
+  * <https://dnssec.vs.uni-due.de/>
 
 and using dig:
 

Update both ETLD+1, not just the `.org`
diff --git a/posts/server-migration-plan.mdwn b/posts/server-migration-plan.mdwn
index 8d67302..3f3cc89 100644
--- a/posts/server-migration-plan.mdwn
+++ b/posts/server-migration-plan.mdwn
@@ -12,7 +12,7 @@ go through a similar process.
 
 # Prepare DNS
 
-* Change the TTL on the DNS entry for `libravatar.org` (i.e. bare `A` and `AAAA` records) to **3600** seconds.
+* Change the TTL on the DNS entries for `libravatar.org` and `libravatar.com` (i.e. bare `A` and `AAAA` records) to **3600** seconds.
 * Remove the mirrors I don't control from the DNS load balancer (`cdn` **and** `seccdn`).
 * Remove the main server from `cdn` and `seccdn` in DNS.
 
@@ -146,7 +146,7 @@ go through a similar process.
 
 * Test all functionality on the new site.
 * Do a basic version of the previous test using IPv6.
-* If testing is successful, update DNS A and AAAA records to point to the new server with a short TTL (in case we need to revert).
+* If testing is successful, update DNS A and AAAA records (`libravatar.org` and `libravatar.com`) to point to the new server with a short TTL (in case we need to revert).
 
 * Enable the proxy config on the old server.
 

Disable remote control and enable ACLs
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 7a6c320..727b27d 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -35,11 +35,12 @@ and turned on prefetching to hopefully keep in cache the sites I visit regularly
         cache-min-ttl: 3600
         num-threads: 2
 
-Finally, I also enabled the control interface:
+Finally, I also restricted the server to the local machine:
 
-    remote-control:
-        control-enable: yes
-        control-interface: 127.0.0.1
+    server:
+        interface: 127.0.0.1
+        access-control: 0.0.0.0/0 refuse
+        access-control: 127.0.0.1/32 allow
 
 and increased the amount of debugging information:
 

Comment moderation
diff --git a/posts/setting-up-raid-on-existing/comment_17_fb75295319019a2bfe1f33c18559d945._comment b/posts/setting-up-raid-on-existing/comment_17_fb75295319019a2bfe1f33c18559d945._comment
new file mode 100644
index 0000000..9105a5b
--- /dev/null
+++ b/posts/setting-up-raid-on-existing/comment_17_fb75295319019a2bfe1f33c18559d945._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="219.85.234.104"
+ subject="good guide !"
+ date="2019-01-14T06:39:35Z"
+ content="""
+I have followed the guide on Debian 9 and everything works perfectly, thanks!
+But in the modify /tmp/mntroot/etc/fstab section, Debian 9 uses UUID instead of sda; everything else was almost the same as the guide.
+"""]]
diff --git a/posts/the-perils-of-raid-and-full-disk-encryption-on-ubuntu/comment_2_b0b73314aa7f066466ec1c2490392724._comment b/posts/the-perils-of-raid-and-full-disk-encryption-on-ubuntu/comment_2_b0b73314aa7f066466ec1c2490392724._comment
new file mode 100644
index 0000000..909fe32
--- /dev/null
+++ b/posts/the-perils-of-raid-and-full-disk-encryption-on-ubuntu/comment_2_b0b73314aa7f066466ec1c2490392724._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="31.16.63.150"
+ claimedauthor="mac"
+ subject="works fine"
+ date="2019-02-09T20:22:30Z"
+ content="""
+Thanks, works in ubuntu server 18.04.1 !
+but i need to set the md device --readwrite, too. 
+
+(this \"feature\" is unfixed for over 9 years )
+"""]]

Mention MXToolbox.com domain checker
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 1a9eac5..9f5bad3 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -353,7 +353,9 @@ Create a new cronjob (`/etc/cron.hourly/checkmail`):
 to ensure that email doesn't accumulate unmonitored on this box.
 
 Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then
-test the whole setup using `mail root`.
+test the whole setup using `mail root`. You should also use
+[this online tool](https://mxtoolbox.com/domain) to make sure everything
+looks good.
 
 To monitor that mail never stops flowing, add this machine to a free
 [healthchecks.io](https://healthchecks.io) account and create a

Link to my new Asterisk setup post
diff --git a/posts/asterisk-everyone-busy-congested-at-this-time.mdwn b/posts/asterisk-everyone-busy-congested-at-this-time.mdwn
index 4fad7a3..8ff408a 100644
--- a/posts/asterisk-everyone-busy-congested-at-this-time.mdwn
+++ b/posts/asterisk-everyone-busy-congested-at-this-time.mdwn
@@ -6,7 +6,7 @@ I was trying to figure out why I was getting a BUSY signal from
 [Asterisk](https://www.asterisk.org/) while trying to ring a SIP phone even
 though that phone was not in use.
 
-My asterisk setup looks like this:
+My [asterisk setup](https://feeding.cloud.geek.nz/posts/encrypted-connection-between-sip-phones-using-asterisk/) looks like this:
 
     phone 1 <--SIP--> asterisk 1 <==IAX2==> asterisk 2 <--SIP--> phone 2
 

Add post about IAX encryption setup
diff --git a/posts/encrypted-connection-between-sip-phones-using-asterisk.mdwn b/posts/encrypted-connection-between-sip-phones-using-asterisk.mdwn
new file mode 100644
index 0000000..6d8606e
--- /dev/null
+++ b/posts/encrypted-connection-between-sip-phones-using-asterisk.mdwn
@@ -0,0 +1,150 @@
+[[!meta title="Encrypted connection between SIP phones using Asterisk"]]
+[[!meta date="2019-02-05T22:40:00.000-08:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here is the setup I put together to have two SIP phones connect together
+over an encrypted channel. Since the two phones do not support encryption, I
+used [Asterisk](https://www.asterisk.org/) to provide the encrypted channel
+over the Internet.
+
+# Installing Asterisk
+
+First of all, each VoIP phone is in a different physical location and so I
+installed an Asterisk server in each house.
+
+One of the servers is a Debian stretch machine and the other runs Ubuntu
+bionic 18.04. Regardless, I used a fairly standard configuration and simply
+installed the `asterisk` package on both machines:
+
+    apt install asterisk
+
+# SIP phones
+
+The two phones, both [Snom 300](http://wiki.snom.com/Snom300/Documentation),
+connect to their local asterisk server on its local IP address and use the
+same details as I have put in `/etc/asterisk/sip.conf`:
+
+    [1000]
+    type=friend
+    qualify=yes
+    secret=password1
+    encryption=no
+    context=internal
+    host=dynamic
+    nat=no
+    canreinvite=yes
+    mailbox=1000@internal
+    vmexten=707
+    dtmfmode=rfc2833
+    call-limit=2
+    disallow=all
+    allow=g722
+    allow=ulaw
+
+# Dialplan and voicemail
+
+The extension number above (`1000`) maps to the following configuration
+blurb in `/etc/asterisk/extensions.conf`:
+
+    [home]
+    exten => 1000,1,Dial(SIP/1000,20)
+    exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
+    exten => 1000,n,Hangup
+    exten => in1000-BUSY,1,Hangup(17)
+    exten => in1000-CONGESTION,1,Hangup(3)
+    exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@mailboxes,su)
+    exten => in1000-CHANUNAVAIL,n,Hangup(3)
+    exten => in1000-NOANSWER,1,VoiceMail(1000@mailboxes,su)
+    exten => in1000-NOANSWER,n,Hangup(16)
+    exten => _in1000-.,1,Hangup(16)
+
+the `internal` [context](http://the-asterisk-book.com/1.6/der-context.html)
+maps to the following blurb in `/etc/asterisk/extensions.conf`:
+
+    [internal]
+    include => home
+    include => iax2users
+    exten => 707,1,VoiceMailMain(1000@mailboxes)
+
+and `1000@mailboxes` maps to the following entry in
+`/etc/asterisk/voicemail.conf`:
+
+    [mailboxes]
+    1000 => 1234,home,person@email.com
+
+(with `1234` being the voicemail PIN).
+
+# Encrypted IAX links
+
+In order to create a virtual link between the two servers using the
+[IAX](https://en.wikipedia.org/wiki/Inter-Asterisk_eXchange) protocol, I
+created user credentials on each server in `/etc/asterisk/iax.conf`:
+
+    [iaxuser]
+    type=user
+    auth=md5
+    secret=password2
+    context=iax2users
+    allow=g722
+    allow=speex
+    encryption=aes128
+    trunk=no
+
+then I created an entry for the other server in the same file:
+
+    [server2]
+    type=peer
+    host=server2.dyn.fmarier.org
+    auth=md5
+    secret=password2
+    username=iaxuser
+    allow=g722
+    allow=speex
+    encryption=yes
+    forceencrypt=yes
+    trunk=no
+    qualify=yes
+
+The second machine contains the same configuration with the exception of the
+server name (`server1` instead of `server2`) and hostname
+(`server1.dyn.fmarier.org` instead of `server2.dyn.fmarier.org`).
+
+# Speed dial for the other phone
+
+Finally, to allow each phone to ring one another by dialing `2000`, I put
+the following in `/etc/asterisk/extensions.conf`:
+
+    [iax2users]
+    include => home
+    exten => 2000,1,Set(CALLERID(all)=Francois Marier <2000>)
+    exten => 2000,2,Dial(IAX2/server1/1000)
+
+and of course a similar blurb on the other machine:
+
+    [iax2users]
+    include => home
+    exten => 2000,1,Set(CALLERID(all)=Other Person <2000>)
+    exten => 2000,2,Dial(IAX2/server2/1000)
+
+# Firewall rules
+
+Since we are using the IAX protocol instead of SIP, there is only one port
+to open in `/etc/network/iptables.up.rules` for the remote server:
+
+    # IAX2 protocol
+    -A INPUT -s x.x.x.x/y -p udp --dport 4569 -j ACCEPT
+
+where `x.x.x.x/y` is the IP range allocated to the ISP that the other
+machine is behind.
+
+If you want to restrict traffic on the local network as well, then these
+ports need to be open for the SIP phone to be able to connect to its local
+server:
+
+    # VoIP phones (internal)
+    -A INPUT -s 192.168.1.3/32 -p udp --dport 5060 -j ACCEPT
+    -A INPUT -s 192.168.1.3/32 -p udp --dport 10000:20000 -j ACCEPT
+
+where `192.168.1.3` is the static IP address allocated to the SIP phone.
+
+[[!tag debian]] [[!tag asterisk]] [[!tag nzoss]] [[!tag ubuntu]]

Restrict the use of cron to the root user
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index f36c74b..1a9eac5 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -237,6 +237,10 @@ and the following to harden the TCP stack:
 
 before reloading these settings using `sysctl -p`.
 
+I also restrict the use of cron to the `root` user by putting the following in `/etc/cron.allow`:
+
+    root
+
 # Entropy and timekeeping
 
     apt install rng-tools chrony

Add LXC setup for Fedora on Ubuntu
diff --git a/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
new file mode 100644
index 0000000..8bf166c
--- /dev/null
+++ b/posts/fedora29-lxc-setup-on-ubuntu-bionic.mdwn
@@ -0,0 +1,100 @@
+[[!meta title="Fedora 29 LXC setup on Ubuntu Bionic 18.04"]]
+[[!meta date="2019-01-26T08:00:00.000-08:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Similarly to what I wrote for [Debian stretch](https://feeding.cloud.geek.nz/posts/lxc-setup-on-debian-stretch/)
+and [jessie](https://feeding.cloud.geek.nz/posts/lxc-setup-on-debian-jessie/),
+here is how I was able to create a [Fedora](https://getfedora.org/) 29 LXC
+container on an Ubuntu 18.04 (bionic) laptop.
+
+# Setting up LXC on Ubuntu
+
+First of all, install lxc:
+
+    apt install lxc
+    echo "veth" >> /etc/modules
+    modprobe veth
+
+turn on bridged networking by putting the following in
+`/etc/sysctl.d/local.conf`:
+
+    net.ipv4.ip_forward=1
+
+and applying it using:
+
+    sysctl -p /etc/sysctl.d/local.conf
+
+Then allow the right traffic in your firewall
+(`/etc/network/iptables.up.rules` in my case):
+
+    # LXC containers
+    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
+    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
+    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
+
+and apply these changes:
+
+    iptables-apply
+
+before restarting the lxc networking:
+
+    systemctl restart lxc-net.service
+
+# Create the container
+
+Once that's in place, you can finally create the Fedora 29 container:
+
+    lxc-create -n fedora29 -t download -- -d fedora -r 29 -a amd64
+
+# Logging in as root
+
+Start up the container and get a login console:
+
+    lxc-start -n fedora29 -F
+
+In another terminal, set a password for the root user:
+
+    lxc-attach -n fedora29 passwd
+
+You can now use this password to log into the console you started earlier.
+
+# Logging in as an unprivileged user via ssh
+
+As root, install a few packages:
+
+    dnf install openssh-server vim sudo man
+
+and then create an unprivileged user with sudo access:
+
+    adduser francois -G wheel
+    passwd francois
+
+Now login as that user from the console and add an ssh public key:
+
+    mkdir .ssh
+    chmod 700 .ssh
+    echo "<your public key>" > .ssh/authorized_keys
+    chmod 644 .ssh/authorized_keys
+
+You can now login via ssh. The IP address to use can be seen in the output
+of:
+
+    lxc-ls --fancy
+
+# Enabling all necessary locales
+
+To ensure that you have all available locales and don't see ugly perl
+warnings such as:
+
+    perl: warning: Setting locale failed.
+    perl: warning: Falling back to the standard locale ("C").
+
+install the appropriate language packs:
+
+    dnf install langpacks-en.noarch
+    dnf reinstall dnf
+
+[[!tag debian]] [[!tag lxc]] [[!tag nzoss]] [[!tag ubuntu]]

Tag backup-to-DVD post with luks
diff --git a/posts/encrypted-system-backup-to-dvd.mdwn b/posts/encrypted-system-backup-to-dvd.mdwn
index 85abf06..190c8db 100644
--- a/posts/encrypted-system-backup-to-dvd.mdwn
+++ b/posts/encrypted-system-backup-to-dvd.mdwn
@@ -105,4 +105,4 @@ and remove the temporary files:
     rm /backup.dat /backup.key
 
 
-[[!tag catalyst]] [[!tag debian]] [[!tag backup]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag cryptmount]]
+[[!tag catalyst]] [[!tag debian]] [[!tag backup]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag cryptmount]] [[!tag luks]]

Mention the official 1.1.1.1 test page
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 0035ba2..7a6c320 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -130,6 +130,9 @@ your ISP and other potential on-path attackers. Therefore, forwarding
 traffic to a non-logging trusted recursive resolver appears to be the best
 solution at the moment.
 
+To test that DNS queries are being correctly forwarded to Cloudflare, use
+their [official test page](https://1.1.1.1/help).
+
 ## Integration with OpenVPN
 
 If you are [running your own OpenVPN server](https://feeding.cloud.geek.nz/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu/),

Always put the systray on the main display
https://christopherdecoster.com/posts/i3-wm/
https://github.com/i3/i3/issues/1329
diff --git a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
index 03913c8..5e52f51 100644
--- a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
+++ b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
@@ -139,4 +139,15 @@ Finally, because X sometimes fail to detect my external monitor when docking/und
 
     bindsym XF86Display exec /home/francois/bin/external-monitor
 
+## Putting the system tray on the right monitor
+
+If you find your systray on the wrong display after plugging an external monitor, try adding the following to your i3 config file:
+
+    bar {
+        status_command i3status
+        tray_output primary
+    }
+
+and then restarting i3.
+
 [[!tag debian]] [[!tag i3]] [[!tag gnome]] [[!tag nzoss]] [[!tag systemd]]

Add missing sysctl network toggles
https://askubuntu.com/a/893570
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 991c537..f36c74b 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -225,10 +225,15 @@ and the following to harden the TCP stack:
     net.ipv4.conf.all.accept_source_route = 0
     net.ipv4.conf.all.rp_filter=1
     net.ipv4.conf.all.send_redirects = 0
-    net.ipv4.conf.default.rp_filter=1
+    net.ipv4.conf.default.accept_redirects = 0
+    net.ipv4.conf.default.accept_source_route = 0
+    net.ipv4.conf.default.rp_filter = 1
+    net.ipv4.conf.default.send_redirects = 0
     net.ipv4.tcp_syncookies=1
     net.ipv6.conf.all.accept_redirects = 0
     net.ipv6.conf.all.accept_source_route = 0
+    net.ipv6.conf.default.accept_redirects = 0
+    net.ipv6.conf.default.accept_source_route = 0
 
 before reloading these settings using `sysctl -p`.
 

Comment moderation
diff --git a/posts/setting-up-raid-on-existing/comment_16_1d9137b7a808efcb8148bf2f5ceb7019._comment b/posts/setting-up-raid-on-existing/comment_16_1d9137b7a808efcb8148bf2f5ceb7019._comment
new file mode 100644
index 0000000..ae2de8d
--- /dev/null
+++ b/posts/setting-up-raid-on-existing/comment_16_1d9137b7a808efcb8148bf2f5ceb7019._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="2a02:2f0b:bc0d:8000:7062:9de2:cda:370"
+ claimedauthor="Erwin"
+ url="www.astro.ro"
+ subject="comment 16"
+ date="2019-01-09T19:10:30Z"
+ content="""
+I've followed this tutorial and everything went OK except grub. This part was difficult for two reasons:
+1) I use GPT and the error was that the boot partition doesn't have the bios_grub flag - don't forget to assign it when creating partitions on the new disk
+2) the new 16.04 system has different raid modules to be loaded at grub start, like dm_raid, megaraid etc. This part should be updated.
+"""]]

pm-suspend has been obsoleted by systemd in my s2ram script
diff --git a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
index 8e12cd5..03913c8 100644
--- a/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
+++ b/posts/creating-a-modern-tiling-desktop-environment-using-i3.mdwn
@@ -97,10 +97,6 @@ Instead, when I want to suspend to ram, I use the following keyboard shortcut:
 
 which executes a [custom suspend script](https://github.com/fmarier/user-scripts/blob/master/s2ram) to clear the clipboards (using [xsel](https://packages.debian.org/stable/xsel)), flush writes to disk and lock the screen before going to sleep.
 
-To avoid having to type my sudo password every time [pm-suspend](https://packages.debian.org/stable/pm-utils) is invoked, I added the following line to `/etc/sudoers`:
-
-    francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend
-
 # Window and workspace placement hacks
 
 While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Add missing LUKS link
diff --git a/posts/erasing-persistent-storage-securely.mdwn b/posts/erasing-persistent-storage-securely.mdwn
index 671cd7b..98ae802 100644
--- a/posts/erasing-persistent-storage-securely.mdwn
+++ b/posts/erasing-persistent-storage-securely.mdwn
@@ -8,7 +8,7 @@ thing to do before giving away (or throwing away) old disks.
 
 Ideally though, it's better not to have to rely on secure erasure at all and
 to use full-disk encryption right from the start, for example, using
-[LUKS](TODO). That way if the secure deletion fails for whatever reason, or
+[LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup). That way if the secure deletion fails for whatever reason, or
 can't be performed (e.g. the drive is dead), then it's not a big deal.
 
 # Rotating hard drives

Add post on wiping disks
diff --git a/posts/erasing-persistent-storage-securely.mdwn b/posts/erasing-persistent-storage-securely.mdwn
new file mode 100644
index 0000000..671cd7b
--- /dev/null
+++ b/posts/erasing-persistent-storage-securely.mdwn
@@ -0,0 +1,54 @@
+[[!meta title="Erasing Persistent Storage Securely on Linux"]]
+[[!meta date="2019-01-08T08:55:00.000-08:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here are some notes on how to securely delete computer data in a way that
+makes it impractical for anybody to recover that data. This is an important
+thing to do before giving away (or throwing away) old disks.
+
+Ideally though, it's better not to have to rely on secure erasure at all and
+to use full-disk encryption right from the start, for example, using
+[LUKS](TODO). That way if the secure deletion fails for whatever reason, or
+can't be performed (e.g. the drive is dead), then it's not a big deal.
+
+# Rotating hard drives
+
+With ATA or SCSI hard drives, [DBAN](https://sourceforge.net/projects/dban/)
+seems to be the ideal solution.
+
+1. Burn it to a CD,
+2. boot from it,
+3. and follow the instructions.
+
+Note that you should **disconnect any drives you don't want to erase**
+before booting with that CD.
+
+This is probably the most trustworthy method of wiping since it uses free and
+open source software to write to each sector of the drive several times. The
+methods that follow rely on proprietary software built into the
+firmware of the devices and so you have to trust that it is implemented
+properly and not backdoored.
+
+# ATA / SATA solid-state drives
+
+Due to the nature of solid-state storage (i.e. the lifetime number of writes
+is limited), it's not a good idea to use DBAN for those. Instead, we must
+rely on the vendor's implementation of [ATA Secure
+Erase](https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase).
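Before issuing the erase commands, it's worth checking that the drive isn't "frozen" (many BIOSes freeze the ATA security feature set at boot; a suspend/resume cycle usually unfreezes it). A small sketch, with a placeholder device name:

```shell
DEV=/dev/sdX  # placeholder -- substitute your actual drive
# "hdparm -I" lists the security state; a frozen drive shows a line
# reading "frozen" while a usable one shows "not frozen".
frozen="$(hdparm -I "$DEV" 2>/dev/null | grep -c '^[[:space:]]*frozen')" || true
if [ "$frozen" -gt 0 ]; then
    echo "drive is frozen: suspend/resume the machine and retry"
else
    echo "drive not frozen (or device not readable)"
fi
```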
+
+First, set a password on the drive:
+
+    hdparm --user-master u --security-set-pass p /dev/sdX
+
+and then issue a Secure Erase command:
+
+    hdparm --user-master u --security-erase-enhanced p /dev/sdX
+
+# NVMe solid-state drives
+
+For SSDs using an NVMe connector, simply request a [User Data
+Erase](https://www.mankier.com/1/nvme-format):
+
+    nvme format -s1 /dev/nvme0n1
+
+[[!tag debian]] [[!tag nzoss]] [[!tag luks]]

Comment moderation
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment
new file mode 100644
index 0000000..2e03196
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="75.133.119.91"
+ claimedauthor="thankful"
+ subject="Worked for me with minor tweaks"
+ date="2019-01-07T01:22:19Z"
+ content="""
+I didn't need to install lvm2, as it was on my unbootable system.  I also had some minor partition/volume differences.
+
+My issue is documented at the [Ubuntu forums](https://ubuntuforums.org/showthread.php?t=2409754)
+
+That all said, **I did have a major issue with DNS resolution not functioning** after this was done.  I'm wondering if \"update-initramfs\" led to this issue specifically (I made other changes I can't recall clearly).
+
+If others experience loss of DNS via systemd-resolved failure, please note it here and on my post in the Ubuntu Forums.  My fix is listed there, although I'm effectively disabling systemd-resolved.
+"""]]

calendar update
diff --git a/archives/2019.mdwn b/archives/2019.mdwn
new file mode 100644
index 0000000..ccc1c91
--- /dev/null
+++ b/archives/2019.mdwn
@@ -0,0 +1 @@
+[[!calendar type=year year=2019 pages="page(posts/*) and !*/Discussion"]]
diff --git a/archives/2019/01.mdwn b/archives/2019/01.mdwn
new file mode 100644
index 0000000..56a59b0
--- /dev/null
+++ b/archives/2019/01.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=01 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(01) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/02.mdwn b/archives/2019/02.mdwn
new file mode 100644
index 0000000..72dbd17
--- /dev/null
+++ b/archives/2019/02.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=02 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(02) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/03.mdwn b/archives/2019/03.mdwn
new file mode 100644
index 0000000..534ad69
--- /dev/null
+++ b/archives/2019/03.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=03 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(03) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/04.mdwn b/archives/2019/04.mdwn
new file mode 100644
index 0000000..b2a8a7e
--- /dev/null
+++ b/archives/2019/04.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=04 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(04) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/05.mdwn b/archives/2019/05.mdwn
new file mode 100644
index 0000000..53f5953
--- /dev/null
+++ b/archives/2019/05.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=05 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(05) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/06.mdwn b/archives/2019/06.mdwn
new file mode 100644
index 0000000..060e3e3
--- /dev/null
+++ b/archives/2019/06.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=06 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(06) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/07.mdwn b/archives/2019/07.mdwn
new file mode 100644
index 0000000..ea6d5c5
--- /dev/null
+++ b/archives/2019/07.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=07 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(07) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/08.mdwn b/archives/2019/08.mdwn
new file mode 100644
index 0000000..aba5c11
--- /dev/null
+++ b/archives/2019/08.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=08 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(08) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/09.mdwn b/archives/2019/09.mdwn
new file mode 100644
index 0000000..664ffb5
--- /dev/null
+++ b/archives/2019/09.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=09 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(09) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/10.mdwn b/archives/2019/10.mdwn
new file mode 100644
index 0000000..020dc4a
--- /dev/null
+++ b/archives/2019/10.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=10 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(10) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/11.mdwn b/archives/2019/11.mdwn
new file mode 100644
index 0000000..86c621a
--- /dev/null
+++ b/archives/2019/11.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=11 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(11) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]
diff --git a/archives/2019/12.mdwn b/archives/2019/12.mdwn
new file mode 100644
index 0000000..46c9d95
--- /dev/null
+++ b/archives/2019/12.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=12 year=2019 pages="page(posts/*) and !*/Discussion"]]
+"""]]
+
+[[!inline pages="creation_month(12) and creation_year(2019) and page(posts/*) and !*/Discussion" show=0 feeds=no reverse=yes]]

Remove bogus user URL
diff --git a/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment
index 113939a..6a62bdf 100644
--- a/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment
+++ b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment
@@ -1,7 +1,7 @@
 [[!comment format=mdwn
  ip="82.137.54.181"
  claimedauthor="johanna"
- url="doe"
+ url=""
  subject="ideapad BIOS update under Linux"
  date="2018-12-31T18:49:05Z"
  content="""

Comment moderation
diff --git a/posts/setting-up-a-network-scanner-using-sane/comment_9_dffdef654f6dca98ce343a1752c827ee._comment b/posts/setting-up-a-network-scanner-using-sane/comment_9_dffdef654f6dca98ce343a1752c827ee._comment
new file mode 100644
index 0000000..336f9d5
--- /dev/null
+++ b/posts/setting-up-a-network-scanner-using-sane/comment_9_dffdef654f6dca98ce343a1752c827ee._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="96.37.225.11"
+ claimedauthor="Bgstack15"
+ url="https://bgstack15.wordpress.com/"
+ subject="CentOS 7, systemd units, and usb permission"
+ date="2018-12-28T21:06:26Z"
+ content="""
+Thank you for the useful instructions. This page alone provided all the steps that helped me share my scanner on CentOS 7 to a Fedora client. I struggled with my scanner trying to print something, and I didn't realize it for the longest time, but once I canceled that job, it all works with the instructions here, even with selinux.
+"""]]
diff --git a/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment
new file mode 100644
index 0000000..113939a
--- /dev/null
+++ b/posts/upgrading-lenovo-thinkpad-bios-under-linux/comment_7_8005230f01f554f0ef41803088454832._comment
@@ -0,0 +1,37 @@
+[[!comment format=mdwn
+ ip="82.137.54.181"
+ claimedauthor="johanna"
+ url="doe"
+ subject="ideapad BIOS update under Linux"
+ date="2018-12-31T18:49:05Z"
+ content="""
+Hello,
+
+After some searching online I tried to update the BIOS on an Ideapad 120s via USB bootable stick with DOS.
+
+How to flash via DOS boot usb.........
+
+
+01. First you need Rufus and a 1GB USB stick - what I had for a stick
+
+01a. create a usb dos boot disk with rufus
+
+02. You need the BIOS, ex 6gcn25ww.exe
+
+03. on a Linux machine you use innoextract to extract the executable from the self-extracting archive; it will have the same name and extension
+
+04. you put the newly obtained exec in the root of the dos boot usb stick
+
+05. boot the machine with said usb stick
+
+06. type the name of the .exe file at the prompt, wait for the extract process, wait for reboot, let the exe do its job - a new boot window will appear with the progress bar.....wait some more.....wait til machine reboots......check new bios......success !
+
+
+finish
+
+My 2 cents for people with noob skills on Linux.
+
+I hope it will help someone....
+
+Best regards
+"""]]

Syndicate my SSRF blogpost on Planet NZOSS
diff --git a/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn b/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn
index 878631e..f1ecfa3 100644
--- a/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn
+++ b/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn
@@ -114,4 +114,4 @@ URLs that include a port number, and there is [an open issue in
 python-openid](https://github.com/openid/python-openid/issues/83) around
 proxies and OpenID.
 
-[[!tag libravatar]] [[!tag openid]] [[!tag web]] [[!tag owasp]] [[!tag squid]] [[!tag django]]
+[[!tag libravatar]] [[!tag openid]] [[!tag web]] [[!tag owasp]] [[!tag squid]] [[!tag django]] [[!tag nzoss]]

creating tag page tags/openid
diff --git a/tags/openid.mdwn b/tags/openid.mdwn
new file mode 100644
index 0000000..60aced0
--- /dev/null
+++ b/tags/openid.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged openid"]]
+
+[[!inline pages="tagged(openid)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/squid
diff --git a/tags/squid.mdwn b/tags/squid.mdwn
new file mode 100644
index 0000000..7297870
--- /dev/null
+++ b/tags/squid.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged squid"]]
+
+[[!inline pages="tagged(squid)" actions="no" archive="yes"
+feedshow=10]]

Add SSRF post
diff --git a/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn b/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn
new file mode 100644
index 0000000..878631e
--- /dev/null
+++ b/posts/restricting-outgoing-webapp-requests-using-squid-proxy.mdwn
@@ -0,0 +1,117 @@
+[[!meta title="Restricting outgoing HTTP traffic in a web application using a squid proxy"]]
+[[!meta date="2018-12-27T18:00:00.000-05:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently had to fix a [Server-Side Request Forgery
+bug](https://bugs.launchpad.net/libravatar/+bug/1808720) in Libravatar's
+[OpenID](https://en.wikipedia.org/wiki/OpenID) support. In addition to
+**enabling authentication on internal services** whenever possible, I also
+forced all outgoing network requests from the Django web-application to go
+through a restrictive egress proxy.
+
+# OpenID logins are prone to SSRF
+
+[Server-Side Request
+Forgeries](https://www.acunetix.com/blog/articles/server-side-request-forgery-vulnerability/)
+are vulnerabilities which allow attackers to issue arbitrary [GET
+requests](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods)
+on the server side. Unlike a [Cross-Site Request
+Forgery](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF%29),
+SSRF requests do not include user credentials (e.g. cookies). On the other
+hand, since these requests are done by the server, they typically originate
+from inside the firewall.
+
+This allows attackers to target internal resources and issue arbitrary GET
+requests to them. One could use this to leak information, especially when
+error reports include the request payload, tamper with the state of internal
+services or portscan an internal network.
+
+OpenID 1.x logins are prone to these vulnerabilities because of the way they
+are initiated:
+
+1. Users visit a site's login page.
+2. They enter their OpenID URL in a text field.
+3. The server fetches the given URL to discover the OpenID endpoints.
+4. The server redirects the user to their OpenID provider to continue the
+   rest of the login flow.
+
+The third step is the potentially problematic one since it requires a
+server-side fetch.
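The fetch in step 3 can be sketched in a few lines of Python. `discover_endpoints` is an illustrative name, not Libravatar's actual code, but it shows how an attacker-controlled URL reaches the server's HTTP client unchecked:

```python
import tempfile
from urllib.request import urlopen

def discover_endpoints(openid_url: str) -> bytes:
    # Step 3 above: the server fetches whatever URL the user typed in.
    with urlopen(openid_url) as resp:
        return resp.read()

# urlopen() happily handles non-HTTP schemes too, which is exactly the
# sort of thing an SSRF attacker exploits:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"secret contents")
print(discover_endpoints("file://" + f.name))  # b'secret contents'
```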
+
+# Filtering URLs in the application is not enough
+
+At first, I thought I would filter out undesirable URLs inside the
+application:
+
+- hostnames like `localhost`, `127.0.0.1` or `::1`
+- non-HTTP schemes like `file` or `gopher`
+- non-standard ports like `5432` or `11211`
+
+However, this filtering is very easy to bypass:
+
+1. Add a hostname in your DNS zone which resolves to `127.0.0.1`.
+2. Set up a redirect to a blacklisted URL such as `file:///etc/passwd`.
+
+Applying the filter on the original URL is clearly not enough.
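To make the bypass concrete, here is a minimal version of that kind of filter (a hypothetical helper, not the actual Libravatar code). It correctly rejects the obviously bad URLs, but a harmless-looking URL passes and can then redirect, or resolve, to an internal target after the check:

```python
from urllib.parse import urlparse

BLOCKED_HOSTS = {"localhost", "127.0.0.1", "::1"}
SAFE_SCHEMES = {"http", "https"}
SAFE_PORTS = {None, 80, 443}

def is_blocked(url: str) -> bool:
    # Checks only the URL string as submitted, before any fetch happens.
    parts = urlparse(url)
    return (parts.scheme not in SAFE_SCHEMES
            or parts.hostname in BLOCKED_HOSTS
            or parts.port not in SAFE_PORTS)

print(is_blocked("http://localhost/admin"))        # True: caught
print(is_blocked("file:///etc/passwd"))            # True: caught
print(is_blocked("http://internal:5432/"))         # True: caught
# The filter never sees what this URL redirects or resolves to:
print(is_blocked("http://evil.example/redirect"))  # False: bypass
```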
+
+# Install and configure a Squid proxy
+
+In order to fully restrict outgoing OpenID requests from the web
+application, I used a [Squid](http://www.squid-cache.org) HTTP proxy.
+
+First, install the package:
+
+    apt install squid3
+
+and set the following in `/etc/squid3/squid.conf`:
+
+    acl to_localnet dst 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
+    acl to_localnet dst 10.0.0.0/8            # RFC 1918 local private network (LAN)
+    acl to_localnet dst 100.64.0.0/10         # RFC 6598 shared address space (CGN)
+    acl to_localnet dst 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
+    acl to_localnet dst 172.16.0.0/12         # RFC 1918 local private network (LAN)
+    acl to_localnet dst 192.168.0.0/16        # RFC 1918 local private network (LAN)
+    acl to_localnet dst fc00::/7              # RFC 4193 local private network range
+    acl to_localnet dst fe80::/10             # RFC 4291 link-local (directly plugged) machines
+    
+    acl SSL_ports port 443
+    acl Safe_ports port 80
+    acl Safe_ports port 443
+    acl CONNECT method CONNECT
+    
+    http_access deny !Safe_ports
+    http_access deny CONNECT !SSL_ports
+    http_access deny manager
+    http_access deny to_localhost
+    http_access deny to_localnet
+    http_access allow localhost
+    http_access deny all
+    
+    http_port 127.0.0.1:3128
+
+Ideally, I would like to use a whitelist approach to restrict requests to a
+small set of valid URLs, but in the case of OpenID, the set of valid URLs is
+not fixed. Therefore the only workable approach is a blacklist. The above
+snippet whitelists port numbers (`80` and `443`) and blacklists requests to
+`localhost` (a built-in squid
+[acl](http://www.squid-cache.org/Doc/config/acl/) variable which resolves to
+`127.0.0.1` and `::1`) as well as known local IP ranges.
+
+# Expose the proxy to Django in the WSGI configuration
+
+In order to force all outgoing requests from Django to [go through the
+proxy](https://stackoverflow.com/questions/14284824/working-with-django-proxy-setup),
+I put the following in my [WSGI](http://wsgi.org/) application
+(`/etc/libravatar/django.wsgi`):
+
+    os.environ['ftp_proxy'] = "http://127.0.0.1:3128"
+    os.environ['http_proxy'] = "http://127.0.0.1:3128"
+    os.environ['https_proxy'] = "http://127.0.0.1:3128"
+
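One quick way to confirm that Python's HTTP stack will honour those variables is `urllib.request.getproxies()`, which reads them from the environment (shown here setting the same proxy address as above for illustration):

```python
import os
from urllib.request import getproxies

# Mirror what the WSGI snippet above sets up:
os.environ["http_proxy"] = "http://127.0.0.1:3128"
os.environ["https_proxy"] = "http://127.0.0.1:3128"

# urllib (and libraries built on it) consult these when opening URLs.
print(getproxies()["http"])   # http://127.0.0.1:3128
```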
+The whole thing seemed to work well in my limited testing. There is however
+[a bug in urllib2](https://bugs.python.org/issue24311) with proxying HTTPS
+URLs that include a port number, and there is [an open issue in
+python-openid](https://github.com/openid/python-openid/issues/83) around
+proxies and OpenID.
+
+[[!tag libravatar]] [[!tag openid]] [[!tag web]] [[!tag owasp]] [[!tag squid]] [[!tag django]]

Update name of Viscosity option and explicitly disable compression
diff --git a/posts/using-openvpn-on-ios-and-osx.mdwn b/posts/using-openvpn-on-ios-and-osx.mdwn
index 990f70b..5c9c99f 100644
--- a/posts/using-openvpn-on-ios-and-osx.mdwn
+++ b/posts/using-openvpn-on-ios-and-osx.mdwn
@@ -83,7 +83,8 @@ connection:
    - Tls-Auth: `ta.key`
    - direction: 1
 - **Options**
-   - peer certificate: require server nsCertType
+   - peer certificate: Require certificate was signed for server use
+   - Compression: Off
 - **Networking**
    - send all traffic on VPN
 - **Advanced**

Improve congestion control
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index da30bd7..991c537 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -383,4 +383,10 @@ queueing discipline (jessie or later) by putting the following in `/etc/sysctl.d
 
     net.core.default_qdisc=fq_codel
 
+and the following to improve congestion control and
+[HTTP/2 prioritization](https://blog.cloudflare.com/http-2-prioritization-with-nginx/):
+
+    net.ipv4.tcp_congestion_control = bbr
+    net.ipv4.tcp_notsent_lowat = 16384
+
 [[!tag sysadmin]] [[!tag debian]] [[!tag nzoss]]

Add a section on forwarding to 1.1.1.1 for DNS-over-TLS
https://www.ctrl.blog/entry/unbound-tls-forwarding
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 500e326..0035ba2 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -7,7 +7,7 @@ Now that the root DNS servers are [signed,](http://www.root-dnssec.org/2010/07/1
 
 Being already packaged in [Debian](http://packages.debian.org/source/unstable/unbound) and [Ubuntu](https://launchpad.net/ubuntu/+source/unbound), unbound is only an `apt-get` away:
 
-    apt install unbound
+    apt install unbound ca-certificates
 
 ## Optional settings
 
@@ -103,6 +103,33 @@ $ dig +dnssec A www.dnssec.cz | grep ad
   
 Are there any other ways of making sure that DNSSEC is fully functional?
 
+## Using DNS-over-TLS using Cloudflare's `1.1.1.1`
+
+In order to make use of [DNS over
+TLS](https://en.wikipedia.org/wiki/DNS_over_TLS) and effectively hide DNS
+queries from anybody looking at your network traffic, one option is to
+forward your queries to [Cloudflare's
+`1.1.1.1`](https://cloudflare-dns.com):
+
+    server:
+        tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
+    
+    forward-zone:
+        name: "."
+        forward-tls-upstream: yes
+        # Cloudflare DNS
+        forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com
+        forward-addr: 1.1.1.1@853#cloudflare-dns.com
+        forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com
+        forward-addr: 1.0.0.1@853#cloudflare-dns.com
+
+While Unbound appears to support DNS over TLS natively, it's not clear to me
+that it will connect to DNS servers over TLS while doing a recursive name
+resolution. Additionally, it will leak queries to non-encrypted servers to
+your ISP and other potential on-path attackers. Therefore, forwarding
+traffic to a non-logging trusted recursive resolver appears to be the best
+solution at the moment.
+
 ## Integration with OpenVPN
 
 If you are [running your own OpenVPN server](https://feeding.cloud.geek.nz/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu/),

Remove comments which have been merged into the article
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment b/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment
deleted file mode 100644
index b47155d..0000000
--- a/posts/setting-up-your-own-dnssec-aware/comment_3_cc2943361afc1181a8920ffbfd028465._comment
+++ /dev/null
@@ -1,11 +0,0 @@
-[[!comment format=mdwn
- ip="162.243.251.96"
- subject="OpenVPN settings"
- date="2017-08-16T06:28:48Z"
- content="""
-Dear François,
-
-Thank you so much for this! What changes need to be made to /etc/openvpn/server.conf in order to use Unbound from within the VPN tunnel when connected to the server from an external client?
-
-Thanks for your help, François!
-"""]]
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment b/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment
deleted file mode 100644
index 39b5f93..0000000
--- a/posts/setting-up-your-own-dnssec-aware/comment_4_76f7656b5ca945dc2cf6a11ee9402d12._comment
+++ /dev/null
@@ -1,11 +0,0 @@
-[[!comment format=mdwn
- username="francois@665656f0ba400877c9b12e8fbb086e45aa01f7c0"
- nickname="francois"
- avatar="http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9"
- subject="Re: OpenVPN settings"
- date="2017-08-16T16:20:31Z"
- content="""
-> What changes need to be made to /etc/openvpn/server.conf in order to use Unbound from within the VPN tunnel when connected to the server from an external client?
-
-I haven't yet figured out how to do that, but it's something I'd really like to add to my [OpenVPN setup](https://feeding.cloud.geek.nz/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu/).
-"""]]
diff --git a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment b/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
deleted file mode 100644
index 4cc2a1a..0000000
--- a/posts/setting-up-your-own-dnssec-aware/comment_5_650c2de462eaf647cf57a7989e8f67fd._comment
+++ /dev/null
@@ -1,47 +0,0 @@
-[[!comment format=mdwn
- ip="162.243.251.96"
- claimedauthor="Eldin Hadzic"
- subject="Solution"
- date="2017-08-26T23:33:27Z"
- content="""
-I figured it out.
-
-In order for OpenVPN to use the locally installed Unbound DNS resolver, do this:
-
-First check for the IP we should use with: `sudo ifconfig`
-
-The IP we need is the one listed at 
-
-    tun0: inet 10.8.0.1
-
-## UNBOUND
-
-Add this to `/etc/unbound/unbound.conf`:
-
-    server:
-        interface: 127.0.0.1
-        interface: 10.8.0.1
-        access-control: 127.0.0.1 allow
-        access-control: 10.8.0.1/24 allow
-
-Then restart Unbound with: `sudo service unbound restart`
-
-Test with: `dig @10.8.0.1 google.com`
-
-(SERVER should read: `SERVER: 10.8.0.1#53(10.8.0.1)`)
-
-## OPENVPN
-
-Add this to (or modify) `/etc/openvpn/server.conf`:
-
-    push \"redirect-gateway def1 bypass-dhcp\"
-    push \"dhcp-option DNS 10.8.0.1\"
-    push \"register-dns\"
-
-Then restart OpenVPN with: `sudo service openvpn restart`
-
-OpenVPN clients should now be using Unbound. Test at <http://dnsleak.com/>.
-
-Eldin Hadzic
-eldinhadzic@protonmail.com
-"""]]

Fix config file blurbs and remove unnecessary lines
diff --git a/posts/setting-up-your-own-dnssec-aware.mdwn b/posts/setting-up-your-own-dnssec-aware.mdwn
index 86d07ba..500e326 100644
--- a/posts/setting-up-your-own-dnssec-aware.mdwn
+++ b/posts/setting-up-your-own-dnssec-aware.mdwn
@@ -2,39 +2,38 @@
 [[!meta date="2010-09-12T18:00:00.000+12:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 Now that the root DNS servers are [signed,](http://www.root-dnssec.org/2010/07/16/status-update-2010-07-16/) I thought it was time I started using [DNSSEC](https://secure.wikimedia.org/wikipedia/en/wiki/Dnssec) on my own PC. However, not wanting to wait for my ISP to enable it, I decided to setup a private recursive DNS resolver for myself using [Unbound](http://unbound.net/).  
-  
 
 ## Installing Unbound
 
 Being already packaged in [Debian](http://packages.debian.org/source/unstable/unbound) and [Ubuntu](https://launchpad.net/ubuntu/+source/unbound), unbound is only an `apt-get` away:
 
-
     apt install unbound
 
 ## Optional settings
 
 In `/etc/unbound/unbound.conf.d/francois.conf`, I enabled the following security options:
 
-    harden-below-nxdomain: yes
-    harden-referral-path: yes
-    harden-algo-downgrade: no # false positives with improperly configured zones
-    use-caps-for-id: no # makes lots of queries fail
-    hide-identity: yes
-    hide-version: yes
+    server:
+        harden-below-nxdomain: yes
+        harden-referral-path: yes
+        harden-algo-downgrade: no # false positives with improperly configured zones
+        use-caps-for-id: no # makes lots of queries fail
+        hide-identity: yes
+        hide-version: yes
 
 and turned on prefetching to hopefully keep in cache the sites I visit regularly:
 
-
-    prefetch: yes
-    prefetch-key: yes
-    msg-cache-size: 128k
-    msg-cache-slabs: 2
-    rrset-cache-size: 8m
-    rrset-cache-slabs: 2
-    key-cache-size: 32m
-    key-cache-slabs: 2
-    cache-min-ttl: 3600
-    num-threads: 2
+    server:
+        prefetch: yes
+        prefetch-key: yes
+        msg-cache-size: 128k
+        msg-cache-slabs: 2
+        rrset-cache-size: 8m
+        rrset-cache-slabs: 2
+        key-cache-size: 32m
+        key-cache-slabs: 2
+        cache-min-ttl: 3600
+        num-threads: 2
 
 Finally, I also enabled the control interface:
 
@@ -44,37 +43,31 @@ Finally, I also enabled the control interface:
 
 and increased the amount of debugging information:
 
-    val-log-level: 2
-    use-syslog: yes
-    verbosity: 1
+    server:
+        val-log-level: 2
+        use-syslog: yes
+        verbosity: 1
 
 before running `sudo unbound-control-setup` to generate the necessary keys.
   
 Once unbound is restarted (`sudo service unbound restart`) stats can be queried to make sure that the DNS resolver is working:
 
-
     unbound-control stats
   
-
 ## Overriding DHCP settings
 
 In order to use my own unbound server for DNS lookups and not the one received via [DHCP](https://secure.wikimedia.org/wikipedia/en/wiki/Dhcp), I added this line to `/etc/dhcp/dhclient.conf`:
 
-
     supersede domain-name-servers 127.0.0.1;
 
-
 and restarted dhclient:
 
-
     sudo killall dhclient
     sudo killall dhclient
     sudo /etc/init.d/network-manager restart
 
-
 If you're not using DHCP, then you simply need to put this in your `/etc/resolv.conf`:
 
-
     nameserver 127.0.0.1
 
 or on more recent distros, the following in `/etc/systemd/resolved.conf`:
@@ -135,6 +128,4 @@ Then restart both services and everything should work:
     systemctl restart unbound.service
     systemctl restart openvpn.service
 
-You can test it on <http://dnsleak.com>.
-
 [[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag security]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag dns]] [[!tag dnssec]] [[!tag openvpn]]
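After the `supersede domain-name-servers 127.0.0.1;` override from the post above, it's worth confirming which resolver is actually in use. A small sketch that extracts the first active nameserver from a resolv.conf-style file (written to `/tmp` here so it doesn't touch the real one):

```shell
# Print the first nameserver listed in a resolv.conf-style file.
active_ns() {
    awk '/^nameserver/ { print $2; exit }' "$1"
}

cat > /tmp/resolv.conf.test <<'EOF'
# Generated by dhclient
nameserver 127.0.0.1
nameserver 192.0.2.53
EOF

active_ns /tmp/resolv.conf.test   # prints 127.0.0.1
```

Running `active_ns /etc/resolv.conf` on the real file should print `127.0.0.1` once the DHCP override has taken effect.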

Disable LZO compression on OpenVPN
diff --git a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
index 2ec0414..fa0c8ec 100644
--- a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
+++ b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
@@ -159,8 +159,6 @@ connection of type "OpenVPN":
 
 then click the "Advanced" button and set the following:
 
-* General
-   * Use LZO data compression: `YES`
 * Security
    * Cipher: `AES-256-GCM`
 * TLS Authentication
diff --git a/posts/using-openvpn-on-android-lollipop.mdwn b/posts/using-openvpn-on-android-lollipop.mdwn
index 4a08a0f..631c4c8 100644
--- a/posts/using-openvpn-on-android-lollipop.mdwn
+++ b/posts/using-openvpn-on-android-lollipop.mdwn
@@ -36,7 +36,6 @@ you'll need to use on your phone:
 
 Basic:
 
-- LZO Compression: `YES`
 - Type: `Certificates`
 - CA Certificate: `ca.crt`
 - Client Certificate: `nexus6.crt`
diff --git a/posts/using-openvpn-on-ios-and-osx.mdwn b/posts/using-openvpn-on-ios-and-osx.mdwn
index 41eb522..990f70b 100644
--- a/posts/using-openvpn-on-ios-and-osx.mdwn
+++ b/posts/using-openvpn-on-ios-and-osx.mdwn
@@ -59,7 +59,6 @@ Here is the config I successfully used to connect to my server:
     cert iphone.crt
     key iphone.key
     cipher AES-256-GCM
-    comp-lzo yes
     proto udp
     tls-remote server
     remote-cert-tls server
@@ -85,7 +84,6 @@ connection:
    - direction: 1
 - **Options**
    - peer certificate: require server nsCertType
-   - compression: turn LZO on
 - **Networking**
    - send all traffic on VPN
 - **Advanced**

Add CRL to OpenVPN config
diff --git a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
index 853ef79..2ec0414 100644
--- a/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
+++ b/posts/creating-a-linode-based-vpn-setup-using_openvpn_on_debian_or_ubuntu.mdwn
@@ -78,6 +78,7 @@ On my server, called `hafnarfjordur.fmarier.org`, I installed the
 and then copied the following files from my high-entropy machine:
 
     cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
+    touch /etc/openvpn/crl.pem
     chown root:root /etc/openvpn/*
     chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key
 
@@ -99,6 +100,7 @@ and set the following in `/etc/openvpn/server.conf` (which includes recommendati
     ncp-disable
     user nobody
     group nogroup
+    crl-verify crl.pem
 
 (These DNS servers are the ones I found in `/etc/resolv.conf` on my Linode VPS.)
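The `touch /etc/openvpn/crl.pem` above presumably just creates a placeholder so that `crl-verify crl.pem` doesn't stop OpenVPN from starting; eventually it should be replaced by a real CRL signed by the CA. A self-contained sketch of producing one with plain `openssl` (file names and the `/tmp` working directory are illustrative; an empty `index.txt` means no certificates have been revoked yet):

```shell
# Minimal openssl "ca" setup for CRL generation.
mkdir -p /tmp/crl-demo && cd /tmp/crl-demo
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=DemoCA \
    -keyout ca.key -out ca.crt -days 30 2>/dev/null
touch index.txt
cat > ca.cnf <<'EOF'
[ca]
default_ca = demo
[demo]
database = index.txt
default_md = sha256
default_crl_days = 30
EOF
# Sign an (empty) revocation list with the CA key.
openssl ca -config ca.cnf -gencrl -keyfile ca.key -cert ca.crt \
    -out crl.pem 2>/dev/null
grep 'BEGIN X509 CRL' crl.pem && echo "CRL written"
```

In a real setup the CRL would be regenerated (and copied to `/etc/openvpn/`) whenever a client certificate is revoked.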
 

Comment moderation
diff --git a/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment b/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment
new file mode 100644
index 0000000..c735c96
--- /dev/null
+++ b/posts/installing-vidyo-on-ubuntu-1804/comment_2_93c96cdc7713032646438fe0a172a56c._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="francois@665656f0ba400877c9b12e8fbb086e45aa01f7c0"
+ nickname="francois"
+ subject="Re: comment 1"
+ date="2018-11-08T06:32:12Z"
+ content="""
+I'm not sure why you're saying that it's sloppy for a system-wide binary to be owned by root. That's [the policy in Debian](https://www.debian.org/doc/debian-policy/ch-files.html#permissions-and-owners), and it also prevents an ordinary user from tampering with a binary that could be used by other users.
+"""]]

Comment moderation
diff --git a/posts/installing-vidyo-on-ubuntu-1804/comment_1_03e04002d4cb78385f28970bc70bb8ee._comment b/posts/installing-vidyo-on-ubuntu-1804/comment_1_03e04002d4cb78385f28970bc70bb8ee._comment
new file mode 100644
index 0000000..ec7728d
--- /dev/null
+++ b/posts/installing-vidyo-on-ubuntu-1804/comment_1_03e04002d4cb78385f28970bc70bb8ee._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="2620:101:80f8:224:b92d:19e8:b46d:ea95"
+ subject="comment 1"
+ date="2018-11-07T19:02:03Z"
+ content="""
+    sudo chown root:root /usr/bin/VidyoDesktop  
+Why, specifically, does it need to be root?  Simple chown-to-root is operationally sloppy/Windows-think.  Do you have a setcap(8) procedure that could yield a viable result?
+"""]]
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu/comment_6_f1867c6f2b06324f6bb268a4ba839219._comment b/posts/running-your-own-xmpp-server-debian-ubuntu/comment_6_f1867c6f2b06324f6bb268a4ba839219._comment
new file mode 100644
index 0000000..3a8c2f0
--- /dev/null
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu/comment_6_f1867c6f2b06324f6bb268a4ba839219._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="86.42.105.221"
+ claimedauthor="lsjmhar "
+ subject="ejabberd"
+ date="2018-10-31T00:28:40Z"
+ content="""
+You can install freedombox on debian now and it provides apps to bypass all this - ejabberd, matrix, lets encrypt and more.
+"""]]

Add post about lean data at Mozilla
diff --git a/posts/lean-data-in-practice.mdwn b/posts/lean-data-in-practice.mdwn
new file mode 100644
index 0000000..feaf715
--- /dev/null
+++ b/posts/lean-data-in-practice.mdwn
@@ -0,0 +1,83 @@
+[[!meta title="Lean data in practice"]]
+[[!meta date="2018-11-01T08:05:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Mozilla has been promoting the idea of [lean
+data](https://www.mozilla.org/about/policy/lean-data/) for a while. It's
+about recognizing both that data is valuable and that it is a dangerous
+thing to hold on to. Following these lean data principles forces you to
+clarify the questions you want to answer and think hard about the minimal
+set of information you need to answer these questions.
+
+Out of these general principles came the [Firefox data collection
+guidelines](https://wiki.mozilla.org/Firefox/Data_Collection). These are the
+guidelines that every team must follow when they want to collect [data about
+our users](https://data.firefox.com/) and that are enforced through the data
+stewardship program.
+
+As one of the data stewards for Firefox, I have reviewed hundreds of data
+collection requests and can attest to the fact that Mozilla does follow the
+lean data principles it promotes. Mozillians are already aware of the
+problems with collecting large amounts of data, but the Firefox data review
+process provides an additional opportunity for an outsider to question the
+necessity of each piece of data. In my experience, this system is quite
+effective at reducing the data footprint of Firefox.
+
+What does lean data look like in practice? Here are a few examples of
+changes that were made to restrict the data collected by Firefox to what is
+truly needed:
+
+- Collecting a user's country is not particularly identifying in the case of
+  large countries like the USA, but it can be when it comes to very small
+  island nations. How many Firefox users are there in
+  [Niue](https://en.wikipedia.org/wiki/Niue)? Hard to know, but it's
+  definitely less than the number of Firefox users in Germany. After I
+  raised that issue, the team decided to [put all of the small countries
+  into a single "other"
+  bucket](https://github.com/mozilla/activity-stream/pull/3877/commits/9a48cbec1cc1686758fec5cdfae5995f10918904).
+
+- Similarly, cities generally have enough users to be non-identifying.
+  However, some municipalities are quite small and can lead to the same
+  problems. There are lots of Firefox users in [Portland,
+  Oregon](https://en.wikipedia.org/wiki/Portland,_Oregon) for example, but
+  probably not that many in [Portland,
+  Arkansas](https://en.wikipedia.org/wiki/Portland,_Arkansas) or [Portland,
+  Pennsylvania](https://en.wikipedia.org/wiki/Portland,_Pennsylvania). If
+  you want to tell the [Oregonian
+  Portlanders](https://www.youtube.com/watch?v=cnVjkE87FDY) apart, it might
+  be sufficient to bucket Portland users into "Oregon" and "not Oregon",
+  instead of recording both the city and the state.
+
+- When collecting window sizes and other pixel-based measurements, it's
+  easier to collect the exact value. However, that exact value could be
+  stable for a while and create a temporary
+  [fingerprint](https://en.wikipedia.org/wiki/Device_fingerprint) for a
+  user. In most cases, teams wanting to collect this kind of data have
+  agreed to round the value in order to increase the number of users in each
+  "bucket" without affecting their ability to answer their underlying
+  questions.
+
+- Firefox occasionally runs studies which involve collecting specific URLs
+  that users have consented to share with us (e.g. "this site crashes my
+  Firefox"). In most cases though, the full URL is not needed and so I have
+  often been able to get teams to restrict the collection to the hostname,
+  or to at least remove the query string, which could include usernames
+  and passwords on badly-designed websites.
+
+- When [making use of Google
+  Analytics](https://hacks.mozilla.org/2016/01/google-analytics-privacy-and-event-tracking/),
+  it may not be necessary to collect everything it supports by default. For
+  example, my [suggestion to trim the
+  referrers](https://github.com/mozilla-services/screenshots/issues/2579)
+  was implemented by one of the teams using Google Analytics: while it
+  would have been an interesting data point, it wasn't necessary to answer
+  the questions they had in mind.
+
+Some of these might sound like small wins, but to me they are a sign that
+the process is working. In most cases, requests are
+very easy to approve because developers have already done the hard work of
+data minimization. In a few cases, by asking questions and getting familiar
+with the problem, the data steward can point out opportunities for further
+reductions in data collection that the team may have missed.
+
+[[!tag mozilla]] [[!tag privacy]]
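The pixel-rounding idea in the post above can be sketched in a couple of lines; bucketing a window width to the nearest 100px might look like this (the bucket size of 100 is arbitrary and would depend on the question being answered):

```shell
# Round a pixel measurement to the nearest 100px bucket so that many
# users share each reported value, reducing its fingerprinting power.
round_to_bucket() {
    echo $(( ($1 + 50) / 100 * 100 ))
}

round_to_bucket 1366   # → 1400
round_to_bucket 1920   # → 1900
```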

Fix time of Vidyo post
diff --git a/posts/installing-vidyo-on-ubuntu-1804.mdwn b/posts/installing-vidyo-on-ubuntu-1804.mdwn
index 19291df..3121a0f 100644
--- a/posts/installing-vidyo-on-ubuntu-1804.mdwn
+++ b/posts/installing-vidyo-on-ubuntu-1804.mdwn
@@ -1,5 +1,5 @@
 [[!meta title="Installing Vidyo on Ubuntu 18.04"]]
-[[!meta date="2018-10-29T15:45:00:00.000-07:00"]]
+[[!meta date="2018-10-29T15:45:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
 Following [these

creating tag page tags/vidyo
diff --git a/tags/vidyo.mdwn b/tags/vidyo.mdwn
new file mode 100644
index 0000000..54dd80c
--- /dev/null
+++ b/tags/vidyo.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged vidyo"]]
+
+[[!inline pages="tagged(vidyo)" actions="no" archive="yes"
+feedshow=10]]

Add a post about Vidyo on Ubuntu 18.04
diff --git a/posts/installing-vidyo-on-ubuntu-1804.mdwn b/posts/installing-vidyo-on-ubuntu-1804.mdwn
new file mode 100644
index 0000000..19291df
--- /dev/null
+++ b/posts/installing-vidyo-on-ubuntu-1804.mdwn
@@ -0,0 +1,60 @@
+[[!meta title="Installing Vidyo on Ubuntu 18.04"]]
+[[!meta date="2018-10-29T15:45:00:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Following [these
+instructions](http://information-technology.web.cern.ch/services/fe/howto/users-use-vidyo-linux)
+as well as the comments in there, I was able to get
+[Vidyo](https://www.vidyo.com/), the proprietary videoconferencing system
+that Mozilla uses internally, to work on [Ubuntu](https://www.ubuntu.com/)
+18.04 (Bionic Beaver). The same instructions should work on recent versions
+of [Debian](https://www.debian.org/) too.
+
+# Installing dependencies
+
+First of all, install all of the package dependencies:
+
+    sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module
+
+Then, ensure you have a [system tray application
+running](https://bugzilla.mozilla.org/show_bug.cgi?id=989811#c3). This
+should be the case for most desktop environments.
+
+# Building a custom Vidyo package
+
+Download [version 3.6.3](https://vidyoportal.cern.ch/upload/VidyoDesktopInstaller-ubuntu64-TAG_VD_3_6_3_017.deb)
+from the [CERN Vidyo
+Portal](https://vidyoportal.cern.ch/download.html?lang=en) but don't expect
+to be able to install it right away.
+
+You need to first hack the package in order to [remove obsolete
+dependencies](https://support.vidyocloud.com/hc/en-us/articles/226103528-VidyoDesktop-3-6-3-for-Linux-and-Ubuntu-15-04-and-higher).
+
+Once that's done, install the resulting package:
+
+    sudo dpkg -i vidyodesktop-custom.deb
+
+# Packaging fixes and configuration
+
+There are a few more things to fix before it's ready to be used.
+
+First, fix the ownership on the main executable:
+
+    sudo chown root:root /usr/bin/VidyoDesktop
+
+Then disable autostart since you probably don't want to keep the
+client running all of the time (and listening on the network) given it
+hasn't received any updates in a long time and has apparently been abandoned
+by Vidyo:
+
+    sudo rm /etc/xdg/autostart/VidyoDesktop.desktop
+
+Remove any old configs in your home directory that could interfere with this
+version:
+
+    rm -rf ~/.vidyo ~/.config/Vidyo
+
+Finally, launch `VidyoDesktop` and go into the settings to check "Always use
+VidyoProxy".
+
+[[!tag mozilla]] [[!tag vidyo]]
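The "hack the package" step in the post above boils down to unpacking the `.deb`, editing the `Depends:` line in `DEBIAN/control`, and rebuilding. A sketch of the mechanics on a toy control file (`libqt4-gui` here stands in for whichever obsolete dependency the linked Vidyo article tells you to drop; a real rebuild also needs the rest of the package contents):

```shell
# dpkg-deb -R VidyoDesktopInstaller-ubuntu64-TAG_VD_3_6_3_017.deb /tmp/vidyo   # unpack the real package
mkdir -p /tmp/vidyo/DEBIAN
cat > /tmp/vidyo/DEBIAN/control <<'EOF'
Package: vidyodesktop
Depends: libqt4-gui, libqtgui4, libqt4-svg
EOF

# Drop the obsolete dependency from the Depends: field.
sed -i 's/libqt4-gui, //' /tmp/vidyo/DEBIAN/control
grep '^Depends:' /tmp/vidyo/DEBIAN/control

# dpkg-deb -b /tmp/vidyo vidyodesktop-custom.deb   # rebuild the package
```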

Create the tmp directory if it doesn't exist already
diff --git a/posts/crashplan-and-non-executable-tmp-directories.mdwn b/posts/crashplan-and-non-executable-tmp-directories.mdwn
index c63036c..e03925b 100644
--- a/posts/crashplan-and-non-executable-tmp-directories.mdwn
+++ b/posts/crashplan-and-non-executable-tmp-directories.mdwn
@@ -65,6 +65,12 @@ machine), by adding something like this to the `SRV_JAVA_OPTS` variable of
 
     -Djava.io.tmpdir=/var/tmp/crashplan
 
+To ensure that the directory exists, you can put the following in `/etc/rc.local`:
+
+    #!/bin/sh -e
+    mkdir -p /var/tmp/crashplan
+    exit 0
+
 Finally, it seems like you **need to restart the machine** before this
 starts working. I'm not sure why restarting crashplan isn't enough.
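On systemd-based machines, a tmpfiles.d entry is a tidier alternative to `rc.local` for recreating that directory at boot. A sketch, dropped into e.g. `/etc/tmpfiles.d/crashplan.conf` (the `0700 root root` mode/ownership is an assumption, matching a service that runs as root):

```
d /var/tmp/crashplan 0700 root root -
```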
 

Ensure that mon isn't listening on all network interfaces
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 3807f51..da30bd7 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -359,6 +359,10 @@ To monitor that mail never stops flowing, add this machine to a free
 In order to ensure that the root partition never has less than 1G of free
 space, I put the following in `/etc/mon/mon.cf`:
 
+    serverbind = 127.0.0.1
+    trapbind = 127.0.0.1
+    clientallow = 127.0.0.1
+    
     watch localhost
         service freespace
             interval 10m

Add diskspace monitoring with mon
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 19582f0..3807f51 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -352,6 +352,25 @@ To monitor that mail never stops flowing, add this machine to a free
 
     0 1 * * * root echo "ping" | mail xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@hchk.io
 
+# Monitoring
+
+    apt install --no-install-recommends mon libfilesys-diskspace-perl
+
+In order to ensure that the root partition never has less than 1G of free
+space, I put the following in `/etc/mon/mon.cf`:
+
+    watch localhost
+        service freespace
+            interval 10m
+            monitor freespace.monitor /:1048576 ;;
+            period
+                numalerts 10
+                alert mail.alert root
+                upalert mail.alert root
+                alertevery 60m
+
+and then `systemctl restart mon.service`.
+
 # Network tuning
 
 To [reduce the server's contribution to
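In the mon config above, the `/:1048576` argument to `freespace.monitor` is a threshold in KiB (1048576 KiB = 1 GiB). The check it performs is roughly equivalent to this sketch (using GNU `df`):

```shell
# Alert when the root filesystem has less than 1 GiB available,
# mirroring "monitor freespace.monitor /:1048576".
threshold_kib=1048576
avail_kib=$(df -k --output=avail / | tail -1 | tr -d ' ')
if [ "$avail_kib" -lt "$threshold_kib" ]; then
    echo "ALERT: only ${avail_kib} KiB free on /"
else
    echo "OK: ${avail_kib} KiB free on /"
fi
```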

Comment moderation
diff --git a/posts/tls_authentication_freenode_and_oftc/comment_3_ff61fc9636d7df82c721a08c8be0b9a7._comment b/posts/tls_authentication_freenode_and_oftc/comment_3_ff61fc9636d7df82c721a08c8be0b9a7._comment
new file mode 100644
index 0000000..77af10a
--- /dev/null
+++ b/posts/tls_authentication_freenode_and_oftc/comment_3_ff61fc9636d7df82c721a08c8be0b9a7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="185.220.102.6"
+ claimedauthor="austere"
+ subject="tls authentication freenode and oftc"
+ date="2018-10-08T00:55:14Z"
+ content="""
+Has the irssi certificate leakage been fixed yet?
+"""]]

Improving formatting of filenames and package name.
diff --git a/posts/encrypted-swap-partition-on.mdwn b/posts/encrypted-swap-partition-on.mdwn
index a61e38b..9204e2a 100644
--- a/posts/encrypted-swap-partition-on.mdwn
+++ b/posts/encrypted-swap-partition-on.mdwn
@@ -5,15 +5,15 @@ The swap partition can hold a lot of unencrypted confidential information and th
   
 Encrypting a swap partition however is slightly tricky if one wants to also support suspend-to-disk (also called hibernation). Here's a procedure that worked for me on both Debian Stretch and Ubuntu 18.04 (Bionic Beaver):
   
-1. Install the cryptsetup package:  
+1. Install the [cryptsetup package](https://packages.debian.org/stable/cryptsetup):
 
        apt install cryptsetup
 
-2. Add this line to /etc/crypttab:  
+2. Add this line to `/etc/crypttab`:
 
        sda2_crypt /dev/sda2 /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard
 
-3. Set the swap partition to be this in /etc/fstab:  
+3. Set the swap partition to be this in `/etc/fstab`:
 
        /dev/mapper/sda2_crypt none swap sw 0 0
 

Update for newer Ubuntu and Debian, switch to a random key.
diff --git a/posts/encrypted-swap-partition-on.mdwn b/posts/encrypted-swap-partition-on.mdwn
index a21c399..a61e38b 100644
--- a/posts/encrypted-swap-partition-on.mdwn
+++ b/posts/encrypted-swap-partition-on.mdwn
@@ -3,33 +3,22 @@
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 The swap partition can hold a lot of unencrypted confidential information and the fact that it persists after shutting down the computer can be a problem.  
   
-Encrypting a swap partition however is slightly tricky if one wants to also support suspend-to-disk (also called hibernation). Here's a procedure that worked for me on both Debian Lenny and Ubuntu 7.10 (Gutsy Gibbon):  
+Encrypting a swap partition however is slightly tricky if one wants to also support suspend-to-disk (also called hibernation). Here's a procedure that worked for me on both Debian Stretch and Ubuntu 18.04 (Bionic Beaver):
   
 1. Install the cryptsetup package:  
 
-       apt-get install cryptsetup
-
-1. Setup the encrypted partition as root:  
-
-       swapoff -a
-       cryptsetup -h sha256 -c aes-cbc-essiv:sha256 -s 256 luksFormat /dev/hda2
-       cryptsetup luksOpen /dev/hda2 cswap
-       mkswap /dev/mapper/cswap
+       apt install cryptsetup
 
 2. Add this line to /etc/crypttab:  
 
-       cswap /dev/hda2 none swap,luks,timeout=30
+       sda2_crypt /dev/sda2 /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard
 
 3. Set the swap partition to be this in /etc/fstab:  
 
-       /dev/mapper/cswap none swap sw 0 0
+       /dev/mapper/sda2_crypt none swap sw 0 0
 
-4. Configure uswsusp to use **/dev/mapper/cswap** and **write unencrypted data**  
+You will of course want to replace `/dev/sda2` with the partition that currently holds your unencrypted swap.  
 
-       dpkg-reconfigure -plow uswsusp
-
-You will of course want to replace `/dev/hda2` with the partition that currently holds your unencrypted swap.  
-  
-(This is loosely based on a similar [procedure for Ubuntu 6.10](http://www.c3l.de/linux/howto-completly-encrypted-harddisk-including-suspend-to-encrypted-disk-with-ubuntu-6.10-edgy-eft.html).)
+This is loosely based on a similar [procedure for Ubuntu 6.10](http://www.c3l.de/linux/howto-completly-encrypted-harddisk-including-suspend-to-encrypted-disk-with-ubuntu-6.10-edgy-eft.html), but I don't use suspend-to-disk and so I simplified the setup and use a random encryption key instead of a passphrase.
 
 [[!tag debian]] [[!tag security]] [[!tag ubuntu]] [[!tag luks]]
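The crypttab line in the diff above packs a lot into four whitespace-separated fields: target name, source device, key file (`/dev/urandom` means a fresh random key on every boot, which is what makes suspend-to-disk impossible with this setup), and options. A sketch that pulls them apart:

```shell
# The four fields of the /etc/crypttab entry for the random-key swap.
line='sda2_crypt /dev/sda2 /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard'
set -- $line
echo "target:  $1"   # name under /dev/mapper
echo "source:  $2"   # underlying partition
echo "keyfile: $3"   # /dev/urandom = new random key each boot
echo "options: $4"   # cipher, key size, re-mkswap, TRIM passthrough
```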

Fix formatting
diff --git a/posts/encrypted-swap-partition-on.mdwn b/posts/encrypted-swap-partition-on.mdwn
index a3b2712..a21c399 100644
--- a/posts/encrypted-swap-partition-on.mdwn
+++ b/posts/encrypted-swap-partition-on.mdwn
@@ -7,30 +7,29 @@ Encrypting a swap partition however is slightly tricky if one wants to also supp
   
 1. Install the cryptsetup package:  
 
-        apt-get install cryptsetup
+       apt-get install cryptsetup
 
 1. Setup the encrypted partition as root:  
 
-        swapoff -a
-        cryptsetup -h sha256 -c aes-cbc-essiv:sha256 -s 256 luksFormat /dev/hda2
-        cryptsetup luksOpen /dev/hda2 cswap
-        mkswap /dev/mapper/cswap
+       swapoff -a
+       cryptsetup -h sha256 -c aes-cbc-essiv:sha256 -s 256 luksFormat /dev/hda2
+       cryptsetup luksOpen /dev/hda2 cswap
+       mkswap /dev/mapper/cswap
 
 2. Add this line to /etc/crypttab:  
 
-        cswap /dev/hda2 none swap,luks,timeout=30
+       cswap /dev/hda2 none swap,luks,timeout=30
 
 3. Set the swap partition to be this in /etc/fstab:  
 
-        /dev/mapper/cswap none swap sw 0 0
+       /dev/mapper/cswap none swap sw 0 0
 
 4. Configure uswsusp to use **/dev/mapper/cswap** and **write unencrypted data**  
 
-        dpkg-reconfigure -plow uswsusp
+       dpkg-reconfigure -plow uswsusp
 
 You will of course want to replace `/dev/hda2` with the partition that currently holds your unencrypted swap.  
   
 (This is loosely based on a similar [procedure for Ubuntu 6.10](http://www.c3l.de/linux/howto-completly-encrypted-harddisk-including-suspend-to-encrypted-disk-with-ubuntu-6.10-edgy-eft.html).)
 
-
 [[!tag debian]] [[!tag security]] [[!tag ubuntu]] [[!tag luks]]

Fix formatting
diff --git a/posts/encrypting-your-home-directory-using.mdwn b/posts/encrypting-your-home-directory-using.mdwn
index 96873ba..9f2e21e 100644
--- a/posts/encrypting-your-home-directory-using.mdwn
+++ b/posts/encrypting-your-home-directory-using.mdwn
@@ -5,41 +5,32 @@ Laptops are easily lost or stolen and in order to protect your emails, web passw
 
 If you happen to have `/home` on a separate partition already (`/dev/sda5` in this example), then it's a really easy process:
 
-  1. Copy your home directory to a temporary directory on a different partition:
+1. Copy your home directory to a temporary directory on a different partition:
 
+       mkdir /homebackup
+       cp -a /home/* /homebackup
 
-        mkdir /homebackup
-        cp -a /home/* /homebackup
+2. Encrypt your home partition:
 
+       umount /home
+       cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/sda5
+       cryptsetup luksOpen /dev/sda5 chome
+       mkfs.ext4 -m 0 /dev/mapper/chome
 
-  2. Encrypt your home partition:
+3. Add this line to `/etc/crypttab`:
 
-        umount /home
-        cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/sda5
-        cryptsetup luksOpen /dev/sda5 chome
-        mkfs.ext4 -m 0 /dev/mapper/chome
+       chome    /dev/sda5    none    luks,timeout=30
 
-  3. Add this line to `/etc/crypttab`:
+4. Set the home partition to this in `/etc/fstab`:
 
+       /dev/mapper/chome /home ext4 nodev,nosuid,noatime 0 2
 
-        chome    /dev/sda5    none    luks,timeout=30
-
-
-  4. Set the home partition to this in `/etc/fstab`:
-
-
-        /dev/mapper/chome /home ext4 nodev,nosuid,noatime 0 2
-
-
-  5. Copy your home data back into the encrypted partition:
-
-
-        mount /home
-        cp -a /homebackup/* /home
-        rm -rf /homebackup
+5. Copy your home data back into the encrypted partition:
 
+       mount /home
+       cp -a /homebackup/* /home
+       rm -rf /homebackup
 
 That's it. Now to fully secure your laptop against theft, you should think about an [encrypted backup strategy](http://packages.debian.org/sid/duplicity) for your data...
 
-
 [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag luks]]

Install apt-file as a handy utility
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index c83d5e0..19582f0 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -269,7 +269,7 @@ you need to restart a daemon using an obsolete library.
 
 # Handy utilities
 
-    apt install renameutils atool iotop sysstat lsof mtr-tiny mc netcat-openbsd command-not-found nocache
+    apt install renameutils atool iotop sysstat lsof mtr-tiny mc netcat-openbsd command-not-found nocache apt-file
 
 Most of these tools are configuration-free, except for sysstat, which requires
 enabling data collection in `/etc/default/sysstat` to be useful.

Turn off commit signing in etckeeper
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 0024fdf..c83d5e0 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -43,6 +43,11 @@ and then put these config files in `/etc/.gitignore`:
     /subgid-
     /subuid-
 
+and this in `/etc/.git/config`:
+
+    [commit]
+        gpgsign = false
+
 To get more control over the various packages I install, I change the
 default debconf level to medium:
 

Include a copy of my /etc/.gitignore for etckeeper
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 0b0bf0a..0024fdf 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -33,6 +33,16 @@ the default `/etc/etckeeper/etckeeper.conf`:
 - turn off daily auto-commits
 - turn off auto-commits before package installs
 
+and then put these config files in `/etc/.gitignore`:
+
+    /aliases.db
+    /group-
+    /gshadow-
+    /passwd-
+    /shadow-
+    /subgid-
+    /subuid-
+
 To get more control over the various packages I install, I change the
 default debconf level to medium:
 

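To confirm that etckeeper really skips the files listed in that `.gitignore`, git can be asked directly with `check-ignore`. A self-contained sketch using a scratch repository (against a real setup you would run `git -C /etc check-ignore` instead):

```shell
tmp=$(mktemp -d)
git -C "$tmp" init -q

# Same patterns as the /etc/.gitignore above (abbreviated).
printf '/passwd-\n/shadow-\n' > "$tmp/.gitignore"

git -C "$tmp" check-ignore -q shadow- && echo "shadow- is ignored"
git -C "$tmp" check-ignore -q passwd  || echo "passwd is still tracked"

rm -rf "$tmp"
```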
Comment moderation
diff --git a/posts/creating-freedos-bootable-usb-stick-to/comment_4_22e10c8246f646a56e100d6e23cc84cf._comment b/posts/creating-freedos-bootable-usb-stick-to/comment_4_22e10c8246f646a56e100d6e23cc84cf._comment
new file mode 100644
index 0000000..5625269
--- /dev/null
+++ b/posts/creating-freedos-bootable-usb-stick-to/comment_4_22e10c8246f646a56e100d6e23cc84cf._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="2a02:1810:3f24:700:fdf1:b732:6e67:2697"
+ claimedauthor="Hamid"
+ subject="Awesome tutorial"
+ date="2018-09-25T07:10:16Z"
+ content="""
+I was looking for this so badly. My BIOS update images are too big to fit into the default floppy or USB installer of FreeDOS, especially since I want to boot it over PXE. This definitely helped me.
+Thank you.
+"""]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment
new file mode 100644
index 0000000..ebafd4c
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="2607:fea8:a51f:f592::2"
+ claimedauthor="JC"
+ subject="Got mine back with a Btrfs root"
+ date="2018-09-27T20:41:52Z"
+ content="""
+I got the same problem after upgrading to 18.04. I don't use LVM but Btrfs; all I had to change was
+
+```apt install btrfs-progs```
+
+Everything else was exactly the same.
+
+Thank you.
+"""]]

Update my network hardening settings to what I currently use
Also hide martian packets since they are too common and annoying.
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index e54879e..0b0bf0a 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -206,11 +206,14 @@ from unprivileged processes:
 
 and the following to harden the TCP stack:
 
-    net.ipv4.conf.all.send_redirects = 0
+    net.ipv4.conf.all.accept_redirects = 0
     net.ipv4.conf.all.accept_source_route = 0
-    net.ipv6.conf.all.accept_source_route = 0
-    net.ipv4.conf.all.log_martians = 1
+    net.ipv4.conf.all.rp_filter=1
+    net.ipv4.conf.all.send_redirects = 0
+    net.ipv4.conf.default.rp_filter=1
     net.ipv4.tcp_syncookies=1
+    net.ipv6.conf.all.accept_redirects = 0
+    net.ipv6.conf.all.accept_source_route = 0
 
 before reloading these settings using `sysctl -p`.
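Each of those sysctl keys corresponds to a file under `/proc/sys` (dots become slashes), which makes it easy to apply a hardening snippet only where the kernel actually exposes the key — e.g. the `net.ipv6.*` entries would be skipped on a host without IPv6. A sketch (the key list is a sample, not the full snippet):

```shell
# Map a sysctl key name to its /proc/sys path.
sysctl_path() {
    echo "/proc/sys/$(echo "$1" | tr . /)"
}

for key in net.ipv4.tcp_syncookies net.ipv6.conf.all.accept_redirects; do
    if [ -e "$(sysctl_path "$key")" ]; then
        echo "would set $key"
        # sysctl -w "$key=..."   # the real run
    else
        echo "skipping $key (not present on this kernel)"
    fi
done
```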