Recent changes to this wiki:

Add necessary firewall rules.
diff --git a/posts/sip-encryption-on-voip-ms.mdwn b/posts/sip-encryption-on-voip-ms.mdwn
index 0d99118..37697ae 100644
--- a/posts/sip-encryption-on-voip-ms.mdwn
+++ b/posts/sip-encryption-on-voip-ms.mdwn
@@ -70,4 +70,22 @@ Since my Asterisk server is only acting as a TLS *client*, and not a TLS
 it looks pretty easy to [use a Let's Encrypt cert with
 Asterisk](https://community.asterisk.org/t/has-anyone-used-letsencrypt-to-setup-ssl-for-asterisk/67145/6).
 
-[[!tag debian]] [[!tag asterisk]] [[!tag nzoss]] [[!tag letsencrypt]] [[!tag voipms]]
+## Firewall
+
+This originally appeared not to be necessary, but I ran into a number of
+intermittent connection errors such as:
+
+    asterisk[1280841]: ERROR[1537920]: tcptls.c:553 in ast_tcptls_client_start: Unable to connect SIP socket to w.x.y.z:5061: Connection reset by peer
+
+and so I put the [official firewall
+recommendations](https://wiki.voip.ms/article/Firewall) in
+`/etc/network/iptables.up.rules`:
+
+    # SIP and RTP on TCP/UDP (servername.voip.ms)
+    -A INPUT -s w.x.y.z/32 -p tcp --dport 5061 -j ACCEPT
+    -A INPUT -s w.x.y.z/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT
+
+where `w.x.y.z` is the IP address of `servername.voip.ms` as returned by
+`dig +short servername.voip.ms`.
+
+[[!tag debian]] [[!tag asterisk]] [[!tag letsencrypt]] [[!tag voipms]]
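Since the rules above hardcode the resolved IP, they need regenerating if voip.ms ever moves the server. A minimal sketch of a helper that does this (the hostname is the same placeholder as in the post, and the IP here is a documentation address standing in for `w.x.y.z`; in real use it would come from `dig +short`):

```shell
# Hypothetical helper: regenerate the voip.ms firewall rules from the
# server's current IP address.
SERVER="servername.voip.ms"
# In real use: IP=$(dig +short "$SERVER"); hardcoded here for illustration.
IP="198.51.100.10"

RULES=$(cat <<EOF
# SIP and RTP on TCP/UDP ($SERVER)
-A INPUT -s $IP/32 -p tcp --dport 5061 -j ACCEPT
-A INPUT -s $IP/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT
EOF
)
echo "$RULES"
```

The output could then be spliced into `/etc/network/iptables.up.rules`, though it's worth eyeballing the resolved address first since a stale DNS answer would lock the modem out.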

creating tag page tags/ext4
diff --git a/tags/ext4.mdwn b/tags/ext4.mdwn
new file mode 100644
index 0000000..57c407f
--- /dev/null
+++ b/tags/ext4.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ext4"]]
+
+[[!inline pages="tagged(ext4)" actions="no" archive="yes"
+feedshow=10]]

Add post about ext4 root partition corruption.
diff --git a/posts/repairing-corrupt-ext4-root-partition.mdwn b/posts/repairing-corrupt-ext4-root-partition.mdwn
new file mode 100644
index 0000000..00d3ef7
--- /dev/null
+++ b/posts/repairing-corrupt-ext4-root-partition.mdwn
@@ -0,0 +1,112 @@
+[[!meta title="Repairing a corrupt ext4 root partition"]]
+[[!meta date="2020-09-26T12:45:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I ran into filesystem corruption
+([ext4](https://en.wikipedia.org/wiki/Ext4)) on the root partition of my
+[backup server](https://feeding.cloud.geek.nz/posts/backing-up-to-gnubee2/)
+which caused it to go into read-only mode. Since it's the root partition,
+it's not possible to unmount it and repair it while it's running. Normally I
+would boot from an [Ubuntu live CD / USB
+stick](https://ubuntu.com/download/alternative-downloads), but in this case
+the machine is using the
+[`mipsel`](https://en.wikipedia.org/wiki/MIPS_architecture) architecture and
+so that's not an option.
+
+# Repair using a USB enclosure
+
+I had to shut down the server and pull the SSD drive out. I
+then moved it to an external USB enclosure and connected it to my laptop.
+
+I started with an automatic filesystem repair:
+
+    fsck.ext4 -pf /dev/sde2
+
+which failed for some reason and so I moved to an interactive repair:
+
+    fsck.ext4 -f /dev/sde2
+
+Once all of the errors were fixed, I ran a full surface scan to update the
+list of bad blocks:
+
+    fsck.ext4 -c /dev/sde2
+
+Finally, I forced another check to make sure that everything was fixed at
+the filesystem level:
+
+    fsck.ext4 -f /dev/sde2
+
+# Fix invalid alternate GPT
+
+The other thing I noticed is this message in my `dmesg` log:
+
+    scsi 8:0:0:0: Direct-Access     KINGSTON  SA400S37120     SBFK PQ: 0 ANSI: 6
+    sd 8:0:0:0: Attached scsi generic sg4 type 0
+    sd 8:0:0:0: [sde] 234441644 512-byte logical blocks: (120 GB/112 GiB)
+    sd 8:0:0:0: [sde] Write Protect is off
+    sd 8:0:0:0: [sde] Mode Sense: 31 00 00 00
+    sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+    sd 8:0:0:0: [sde] Optimal transfer size 33553920 bytes
+    Alternate GPT is invalid, using primary GPT.
+     sde: sde1 sde2
+
+I therefore checked to see if the partition table looked fine and got the
+following:
+
+    $ fdisk -l /dev/sde
+    GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
+    The backup GPT table is not on the end of the device. This problem will be corrected by write.
+    Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
+    Disk model: KINGSTON SA400S3
+    Units: sectors of 1 * 512 = 512 bytes
+    Sector size (logical/physical): 512 bytes / 512 bytes
+    I/O size (minimum/optimal): 512 bytes / 512 bytes
+    Disklabel type: gpt
+    Disk identifier: 799CD830-526B-42CE-8EE7-8C94EF098D46
+    
+    Device       Start       End   Sectors   Size Type
+    /dev/sde1     2048   8390655   8388608     4G Linux swap
+    /dev/sde2  8390656 234441614 226050959 107.8G Linux filesystem
+
+Since only the backup / alternate GPT partition table was corrupt and the
+primary one was fine, all I had to do was re-write the partition table:
+
+    $ fdisk /dev/sde
+    
+    Welcome to fdisk (util-linux 2.33.1).
+    Changes will remain in memory only, until you decide to write them.
+    Be careful before using the write command.
+    
+    GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
+    The backup GPT table is not on the end of the device. This problem will be corrected by write.
+    
+    Command (m for help): w
+    
+    The partition table has been altered.
+    Syncing disks.
+
+# Run SMART checks
+
+Since I still didn't know what caused the filesystem corruption in the first
+place, I decided to do one last check for
+[SMART](https://en.wikipedia.org/wiki/S.M.A.R.T.) errors.
+
+I couldn't do this via the USB enclosure since the SMART commands aren't
+forwarded to the drive and so I popped the drive back into the backup
+server and booted it up.
+
+First, I checked whether any SMART errors had been reported using
+[smartmontools](https://www.smartmontools.org/):
+
+    smartctl -a /dev/sda
+
+That didn't show any errors and so I kicked off an extended test:
+
+    smartctl -t long /dev/sda
+
+which ran for 30 minutes and then passed without any errors.
+
+The mystery remains unsolved.
+
+[[!tag gnubee]] [[!tag smart]] [[!tag ext4]] [[!tag debian]]
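As background on why a simple re-write fixed the GPT warning: GPT keeps a backup header on the very last LBA of the device, and fdisk's "PMBR size mismatch (234441643 != 234441647)" means the recorded size no longer matches the actual device, so the backup header isn't where the kernel expects it. A rough sketch of the arithmetic, using the sector counts from the fdisk output in the post:

```shell
# Sketch of the arithmetic behind the "GPT PMBR size mismatch" warning.
# Sector counts are taken from the fdisk output above.
TOTAL_SECTORS=234441648                 # current device size, in sectors
PMBR_LAST_LBA=234441643                 # last LBA recorded in the PMBR
ACTUAL_LAST_LBA=$((TOTAL_SECTORS - 1))  # where the backup GPT should live
OFFSET=$((ACTUAL_LAST_LBA - PMBR_LAST_LBA))
echo "backup GPT header is $OFFSET sectors short of the end of the disk"
```

Writing the table with fdisk relocates the backup header to the true last LBA and updates the PMBR, which is why both warnings say "will be corrected by write".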

Create a new ext4 tag.
diff --git a/posts/encrypting-your-home-directory-using.mdwn b/posts/encrypting-your-home-directory-using.mdwn
index 9f2e21e..44e3f99 100644
--- a/posts/encrypting-your-home-directory-using.mdwn
+++ b/posts/encrypting-your-home-directory-using.mdwn
@@ -33,4 +33,4 @@ If you happen to have `/home` on a separate partition already (`/dev/sda5` in th
 
 That's it. Now to fully secure your laptop against theft, you should think about an [encrypted backup strategy](http://packages.debian.org/sid/duplicity) for your data...
 
-[[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag luks]]
+[[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag luks]] [[!tag ext4]]
diff --git a/posts/manually-expanding-raid1-array-ubuntu.mdwn b/posts/manually-expanding-raid1-array-ubuntu.mdwn
index 97712c6..58df32d 100644
--- a/posts/manually-expanding-raid1-array-ubuntu.mdwn
+++ b/posts/manually-expanding-raid1-array-ubuntu.mdwn
@@ -148,4 +148,4 @@ The last step was to regenerate the initramfs:
 before rebooting into something that looks exactly like the original RAID1
 array but with twice the size.
 
-[[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag raid]] [[!tag ubuntu]] [[!tag luks]]
+[[!tag ext4]] [[!tag sysadmin]] [[!tag debian]] [[!tag raid]] [[!tag ubuntu]] [[!tag luks]]
diff --git a/posts/raid1-alternative-for-ssd-drives.mdwn b/posts/raid1-alternative-for-ssd-drives.mdwn
index 2176781..a29371b 100644
--- a/posts/raid1-alternative-for-ssd-drives.mdwn
+++ b/posts/raid1-alternative-for-ssd-drives.mdwn
@@ -56,4 +56,4 @@ Finally, after reading this [excellent LWN article](http://lwn.net/Articles/4084
   
 Is there anything else I should be doing to make sure I get the most out of my SSD?
 
-[[!tag grub]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag raid]]
+[[!tag grub]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag raid]] [[!tag ext4]]
diff --git a/posts/setting-up-raid-on-existing.mdwn b/posts/setting-up-raid-on-existing.mdwn
index a1a40d5..e02a640 100644
--- a/posts/setting-up-raid-on-existing.mdwn
+++ b/posts/setting-up-raid-on-existing.mdwn
@@ -217,4 +217,4 @@ Something else you should seriously consider is to install the `smartmontools` p
 
 These checks, performed by the hard disk controllers directly, could warn you of imminent failures ahead of time. Personally, when I start seeing errors in the SMART log (`smartctl -a /dev/sda`), I order a new drive straight away.
 
-[[!tag grub]] [[!tag raid]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag nzoss]]
+[[!tag grub]] [[!tag raid]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag ext4]]
diff --git a/posts/two-tier-encryption-strategy-archiving.mdwn b/posts/two-tier-encryption-strategy-archiving.mdwn
index 7502669..7f68ae0 100644
--- a/posts/two-tier-encryption-strategy-archiving.mdwn
+++ b/posts/two-tier-encryption-strategy-archiving.mdwn
@@ -47,4 +47,4 @@ and:
 
     cryptmount -u archives
 
-[[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag security]] [[!tag ubuntu]] [[!tag cryptmount]]
+[[!tag ext4]] [[!tag debian]] [[!tag sysadmin]] [[!tag security]] [[!tag ubuntu]] [[!tag cryptmount]]

Poor man's RAID-1 on the GnuBee.
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index 2226a34..3fcac7c 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -62,6 +62,28 @@ and added the following to `/etc/fstab`:
 
     /dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2
 
+### Keeping a copy of the root partition
+
+In order to survive a failing SSD drive, I could have bought a second SSD
+and gone for a
+[RAID-1](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1) setup.
+Instead, I went for a cheaper option, a [poor man's
+RAID-1](https://feeding.cloud.geek.nz/posts/poor-mans-raid1-between-ssd-and-hard-drive/),
+where, if the SSD fails, I will have to reinstall the machine, but the
+reinstall will be very quick and I won't lose any of my configuration.
+
+The way that it works is that I periodically sync the contents of the root
+partition onto the RAID-5 array using a cronjob in `/etc/cron.d/hdd-sync`:
+
+    0 10 * * *     root    /usr/local/sbin/ssd_root_backup
+
+which runs the `/usr/local/sbin/ssd_root_backup` script:
+
+    #!/bin/sh
+    nocache nice ionice -c3 rsync -aHx --delete --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/media/* --exclude=/var/tmp/* /* /mnt/data/root/
+
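As a hypothetical rework of the `ssd_root_backup` script above, the exclude list could be built in one loop so it stays easy to audit (shown with `echo` instead of executing, since the real command deletes files on the destination):

```shell
#!/bin/sh
# Hypothetical variant of ssd_root_backup: same rsync invocation, but
# with the exclude list generated from a single directory list.
DEST=/mnt/data/root/
EXCLUDES=""
for d in dev proc sys tmp mnt lost+found media var/tmp; do
    EXCLUDES="$EXCLUDES --exclude=/$d/*"
done
CMD="nocache nice ionice -c3 rsync -aHx --delete$EXCLUDES /* $DEST"
echo "$CMD"   # echo instead of exec, for illustration
```

The `-x` flag already keeps rsync on the root filesystem, so the excludes mostly guard against pseudo-filesystems and mount points that exist even before anything is mounted on them.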
+### Drive spin down
+
 To reduce unnecessary noise and reduce power consumption, I also installed
 [hdparm](https://sourceforge.net/projects/hdparm/):
 
@@ -86,6 +108,8 @@ and then reloaded the configuration:
 
      /usr/lib/pm-utils/power.d/95hdparm-apm resume
 
+### Monitoring drive health
+
 Finally I setup [smartmontools](https://www.smartmontools.org/) by putting
 the following in `/etc/smartd.conf`:
 

Remove superfluous words.
diff --git a/posts/npr-modem-setup-testing-linux.mdwn b/posts/npr-modem-setup-testing-linux.mdwn
index 8ef6ca2..c2b1318 100644
--- a/posts/npr-modem-setup-testing-linux.mdwn
+++ b/posts/npr-modem-setup-testing-linux.mdwn
@@ -76,7 +76,7 @@ and confirmed that they were able to successfully connect to each other:
 
 # Monitoring RF
 
-To monitor what is happening on the air and check and quickly determine
+To monitor what is happening on the air and quickly determine
 whether or not the modems are chatting, you can use a [software-defined
 radio](https://www.nooelec.com/store/sdr/sdr-receivers/nesdr/nesdr-mini.html)
 along with [gqrx](https://gqrx.dk/) with the following settings:

Small fixes to NPR post.
diff --git a/posts/npr-modem-setup-testing-linux.mdwn b/posts/npr-modem-setup-testing-linux.mdwn
index e9f4088..8ef6ca2 100644
--- a/posts/npr-modem-setup-testing-linux.mdwn
+++ b/posts/npr-modem-setup-testing-linux.mdwn
@@ -1,9 +1,9 @@
-[[!meta title="Setting and testing an NPR modem on Linux"]]
-[[!meta date="2020-09-17T23:20:00.000-07:00"]]
+[[!meta title="Setting up and testing an NPR modem on Linux"]]
+[[!meta date="2020-09-17T23:35:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
-After acquiring a [New Packet Radio
-modem](https://hackaday.io/project/164092-npr-new-packet-radio) on behalf of
+After acquiring a pair of [New Packet Radio
+modems](https://hackaday.io/project/164092-npr-new-packet-radio) on behalf of
 [VECTOR](https://vectorradio.ca), I set it up on my Linux machine and ran
 some basic tests to check whether it could achieve the advertised 500 kbps
 transfer rates, which are much higher than

creating tag page tags/iperf
diff --git a/tags/iperf.mdwn b/tags/iperf.mdwn
new file mode 100644
index 0000000..6ab9b02
--- /dev/null
+++ b/tags/iperf.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged iperf"]]
+
+[[!inline pages="tagged(iperf)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/npr
diff --git a/tags/npr.mdwn b/tags/npr.mdwn
new file mode 100644
index 0000000..ac08df6
--- /dev/null
+++ b/tags/npr.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged npr"]]
+
+[[!inline pages="tagged(npr)" actions="no" archive="yes"
+feedshow=10]]

Add NPR setup post.
diff --git a/posts/npr-modem-setup-testing-linux.mdwn b/posts/npr-modem-setup-testing-linux.mdwn
new file mode 100644
index 0000000..e9f4088
--- /dev/null
+++ b/posts/npr-modem-setup-testing-linux.mdwn
@@ -0,0 +1,245 @@
+[[!meta title="Setting and testing an NPR modem on Linux"]]
+[[!meta date="2020-09-17T23:20:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+After acquiring a [New Packet Radio
+modem](https://hackaday.io/project/164092-npr-new-packet-radio) on behalf of
+[VECTOR](https://vectorradio.ca), I set it up on my Linux machine and ran
+some basic tests to check whether it could achieve the advertised 500 kbps
+transfer rates, which are much higher than
+[AX25](https://en.wikipedia.org/wiki/AX.25) packet radio.
+
+The exact equipment I used was:
+
+- [NPR-70 v05 modems](https://elekitsorparts.com/product/npr-70-modem-by-f4hdk-new-packet-radio-over-70cm-band-amateur-radio-packet-radio)
+- [Bingfu Dual Band antennas](https://www.amazon.ca/gp/product/B07WPWK5JK/)
+- [Alinco DM-330MV power supply](https://www.radioworld.ca/ali-dm330mvt)
+
+![](/posts/npr-modem-setup-testing-linux/physical_setup.jpg)
+
+# Radio setup
+
+After connecting the modems to the power supply and their respective
+antennas, I connected both modems to my laptop via micro-USB cables and used
+[minicom](https://salsa.debian.org/minicom-team/minicom) to connect to their
+console on `/dev/ttyACM[01]`:
+
+    minicom -8 -b 921600 -D /dev/ttyACM0
+    minicom -8 -b 921600 -D /dev/ttyACM1
+
+To confirm that the firmware was the latest one, I used the following command:
+
+    ready> version
+    firmware: 2020_02_23
+    freq band: 70cm
+
+then I immediately turned off the radio:
+
+    radio off
+
+which can be verified with:
+
+    status
+
+Following the [British Columbia 70 cm band
+plan](http://bcarcc.org/440planA.pdf), I picked the following frequency,
+modulation (bandwidth of 360 kHz), and power (0.05 W):
+
+    set frequency 433.500
+    set modulation 22
+    set RF_power 7
+
+and then did the rest of the configuration for the master:
+
+    set callsign VA7GPL_0
+    set is_master yes
+    set DHCP_active no
+    set telnet_active no
+
+and the client:
+
+    set callsign VA7GPL_1
+    set is_master no
+    set DHCP_active yes
+    set telnet_active no
+
+and that was enough to get the two modems to talk to one another.
+
+On both of them, I ran the following:
+
+    save
+    reboot
+
+and confirmed that they were able to successfully connect to each other:
+
+    who
+
+# Monitoring RF
+
+To monitor what is happening on the air and check and quickly determine
+whether or not the modems are chatting, you can use a [software-defined
+radio](https://www.nooelec.com/store/sdr/sdr-receivers/nesdr/nesdr-mini.html)
+along with [gqrx](https://gqrx.dk/) with the following settings:
+
+    frequency: 433.500 MHz
+    filter width: user (80k)
+    filter shape: normal
+    mode: Raw I/Q
+
+I found it quite helpful to keep this running the whole time I was working
+with these modems. The background "keep alive" sounds are quite distinct
+from the heavy traffic sounds.
+
+# IP setup
+
+With the radio bits out of the way, I turned to the networking configuration.
+
+On the master, I set the following so that I could connect the master to my
+home network (`192.168.1.0/24`) without conflicts: 
+
+    set def_route_active yes
+    set DNS_active no
+    set modem_IP 192.168.1.254
+    set IP_begin 192.168.1.225
+    set master_IP_size 29
+    set netmask 255.255.255.0
+
+(My router's DHCP server is configured to allocate dynamic IP addresses from
+`192.168.1.100` to `192.168.1.224`.)
+
+At this point, I connected my laptop to the client using a
+[CAT-5](https://en.wikipedia.org/wiki/Category_5_cable) network cable and
+the master to the ethernet switch, essentially following *Annex 5* of the
+[Advanced User
+Guide](https://cdn.hackaday.io/files/1640927020512128/NPR_advanced_guide_v2.14.pdf).
+
+My laptop got assigned IP address `192.168.1.225` and so I used another
+computer on the same network to ping my laptop via the NPR modems:
+
+    ping 192.168.1.225
+
+This gave me a round-trip time of around 150-250 ms.
+
+# Performance test
+
+Having successfully established an
+[IP](https://en.wikipedia.org/wiki/Internet_Protocol) connection between the
+two machines, I decided to run a quick test to measure the available
+bandwidth in an ideal setting (i.e. the two antennas very close to each
+other).
+
+On both computers, I installed [iperf](https://iperf.fr/):
+
+    apt install iperf
+
+and then setup the iperf server on my desktop computer:
+
+    sudo iptables -A INPUT -s 192.168.1.0/24 -p TCP --dport 5001 -j ACCEPT
+    sudo iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 5001 -j ACCEPT
+    iperf --server
+
+On the laptop, I set the MTU to `750` in NetworkManager:
+
+![](/posts/npr-modem-setup-testing-linux/mtu-750-networkmanager.png)
+
+and restarted the network.
+
+Then I created a new user account (`npr` with a uid of `1001`):
+
+    sudo adduser npr
+
+and made sure that only that account could access the network by running the
+following as `root`:
+
+    # Flush all chains.
+    iptables -F
+    
+    # Set defaults policies.
+    iptables -P INPUT DROP
+    iptables -P OUTPUT DROP
+    iptables -P FORWARD DROP
+    
+    # Don't block localhost and ICMP traffic.
+    iptables -A INPUT -i lo -j ACCEPT
+    iptables -A INPUT -p icmp -j ACCEPT
+    iptables -A OUTPUT -o lo -j ACCEPT
+    
+    # Don't re-evaluate already accepted connections.
+    iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+    iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+    
+    # Allow connections to/from the test user.
+    iptables -A OUTPUT -m owner --uid-owner 1001 -m conntrack --ctstate NEW -j ACCEPT
+    
+    # Log anything that gets blocked.
+    iptables -A INPUT -j LOG
+    iptables -A OUTPUT -j LOG
+    iptables -A FORWARD -j LOG
+
+then I started the test as the `npr` user:
+
+    sudo -i -u npr
+    iperf --client 192.168.1.8
+
+# Results
+
+The results were as good as advertised both with modulation 22 (360 kHz
+bandwidth):
+
+    $ iperf --client 192.168.1.8 --time 30
+    ------------------------------------------------------------
+    Client connecting to 192.168.1.8, TCP port 5001
+    TCP window size: 85.0 KByte (default)
+    ------------------------------------------------------------
+    [  3] local 192.168.1.225 port 58462 connected with 192.168.1.8 port 5001

(Diff truncated)
Comment moderation
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_88b2fb718e4fb1b9b1f2c4f6ff9b0128._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_88b2fb718e4fb1b9b1f2c4f6ff9b0128._comment
new file mode 100644
index 0000000..9354a9c
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_88b2fb718e4fb1b9b1f2c4f6ff9b0128._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="80.123.19.32"
+ claimedauthor="FlascheLeer"
+ subject="comment 10"
+ date="2020-09-08T09:48:42Z"
+ content="""
+This helped a lot. Thanks!
+For a younger Ubuntu (20.04), I also had to mount /sys:
+
+    mount --rbind /sys /mnt/sys/
+
+I didn't try the mount -o bind method with sys.
+"""]]

Link to the Debian bug report for user services.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 2682895..38fba01 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -29,8 +29,7 @@ then open `/etc/mpd.conf` and set these:
 
 Note that you can find the right sound device on your machine using the `aplay -L` command.
 
-Since this is a headless system setup, it may be necessary to disable the
-user service:
+Since this is a headless system setup, it may be necessary to [disable the user service](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=959693):
 
     rm /etc/xdg/autostart/mpd.desktop
     systemctl --global disable mpd.service

Use the correct sound device in mpd.conf.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index d46a7ea..2682895 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -23,10 +23,12 @@ then open `/etc/mpd.conf` and set these:
     audio_output {
        type       "alsa"
        name       "My ALSA Device"
-       device     "hw:0,0"
+       device     "hw:CARD=DAC,DEV=0"
        mixer_type "software"
     }
 
+Note that you can find the right sound device on your machine using the `aplay -L` command.
+
 Since this is a headless system setup, it may be necessary to disable the
 user service:
 

Remove zeroconf since it doesn't work with systemd sockets
Sep 06 11:40 : zeroconf: No global port, disabling zeroconf
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index cdac277..d46a7ea 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -18,7 +18,6 @@ then open `/etc/mpd.conf` and set these:
     music_directory    "/path/to/music/"
     bind_to_address    "0.0.0.0"
     bind_to_address    "/run/mpd/socket"
-    zeroconf_enabled   "yes"
     password           "Password1"
     
     audio_output {

Disable user service interfering with main mpd service.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 3f03efd..cdac277 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -28,6 +28,24 @@ then open `/etc/mpd.conf` and set these:
        mixer_type "software"
     }
 
+Since this is a headless system setup, it may be necessary to disable the
+user service:
+
+    rm /etc/xdg/autostart/mpd.desktop
+    systemctl --global disable mpd.service
+
+in order to prevent systemd from launching the mpd service whenever a user
+logs in, leading to error messages like:
+
+    systemd[324808]: mpd.socket: Failed to create listening socket ([::]:6600): Address already in use
+    systemd[324808]: mpd.socket: Failed to listen on sockets: Address already in use
+    systemd[324808]: mpd.socket: Failed with result 'resources'.
+    systemd[324808]: Failed to listen on mpd.socket.
+    mpd[324823]: exception: failed to open log file "/var/log/mpd/mpd.log" (config line 39): Permission denied
+    systemd[324808]: mpd.service: Main process exited, code=exited, status=1/FAILURE
+    systemd[324808]: mpd.service: Failed with result 'exit-code'.
+    systemd[324808]: Failed to start Music Player Daemon.
+
 Once all of that is in place, restart the mpd daemon:
 
     systemctl restart mpd.service

Simplify and fix the Apache configuration.
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 949dd5a..3f03efd 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -72,22 +72,21 @@ from a local web server I have installed
 
     apt install apache2
 
-and configured it to serve the covers by putting the following in
-`/etc/apache2/conf-available/mpd.conf`:
+and configured it to serve the covers by putting the following in the
+default vhost section of `/etc/apache2/sites-available/000-default.conf`:
 
+    Alias /music /path/to/music
+    
     <Directory /path/to/music>
+        Options -MultiViews -Indexes
         AllowOverride None
-        Require all granted
+        Order allow,deny
+        allow from all
     </Directory>
 
-and then the following line in the default vhost section of
-`/etc/apache2/sites-available/000-default.conf`:
-
-    Alias /music /path/to/music
-
-Finally, I enabled the new configuration and restarted Apache:
+Finally, I enabled the new vhost and restarted Apache:
 
-    a2enconf mpd.conf
+    a2ensite 000-default
     systemctl restart apache2.service
 
 # Clients

Switch to alsa to simplify headless operation
diff --git a/posts/home-music-server-with-mpd.mdwn b/posts/home-music-server-with-mpd.mdwn
index 6f443bb..949dd5a 100644
--- a/posts/home-music-server-with-mpd.mdwn
+++ b/posts/home-music-server-with-mpd.mdwn
@@ -16,47 +16,21 @@ Start by installing the server and the client package:
 then open `/etc/mpd.conf` and set these:
 
     music_directory    "/path/to/music/"
-    bind_to_address    "192.168.1.2"
+    bind_to_address    "0.0.0.0"
     bind_to_address    "/run/mpd/socket"
     zeroconf_enabled   "yes"
     password           "Password1"
-
-before replacing the alsa output:
-
-    audio_output {
-       type    "alsa"
-       name    "My ALSA Device"
-    }
-
-with a pulseaudio one:
-
+    
     audio_output {
-       type    "pulse"
-       name    "Pulseaudio Output"
-       server  "127.0.0.1"
+       type       "alsa"
+       name       "My ALSA Device"
+       device     "hw:0,0"
+       mixer_type "software"
     }
 
-and exposing pulseaudio to localhost via `/etc/pulse/default.pa`:
-
-    ### Network access (may be configured with paprefs, so leave this commented
-    ### here if you plan to use paprefs)
-    load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
-
-In order for the automatic detection (zeroconf) of your music server
-to work, you need to [prevent systemd from creating the network
-socket](https://www.mail-archive.com/mpd-devel@musicpd.org/msg00239.html):
-
-    systemctl stop mpd.service
-    systemctl stop mpd.socket
-    systemctl disable mpd.socket
-
-otherwise you'll see this in `/var/log/mpd/mpd.log`:
-
-    zeroconf: No global port, disabling zeroconf
-
-Once all of that is in place, start the mpd daemon:
+Once all of that is in place, restart the mpd daemon:
 
-    systemctl start mpd.service
+    systemctl restart mpd.service
 
 and create an index of your music files:
 

Comment moderation
diff --git a/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_8_c6382c5a5eb077a8992dbeffb9dc6f6e._comment b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_8_c6382c5a5eb077a8992dbeffb9dc6f6e._comment
new file mode 100644
index 0000000..7e08d41
--- /dev/null
+++ b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_8_c6382c5a5eb077a8992dbeffb9dc6f6e._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="francois@665656f0ba400877c9b12e8fbb086e45aa01f7c0"
+ nickname="francois"
+ subject="Re: Wine for Anytone 878"
+ date="2020-09-01T16:15:33Z"
+ content="""
+> If I understand you correctly, Wine does not let CPS read or write to the Anytone 878.
+
+I have not tried Wine so I can't comment on this.
+
+> Does that mean we need to purchase a Windows 10 license and run it from VirtualBox?
+
+You could probably use one of the [Windows 10 IE / Legacy Edge testing VMs](https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/) that Microsoft offers for free for 90 days.
+"""]]

Comment moderation
diff --git a/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_7_d9d686bb1c13a519639d76276da07451._comment b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_7_d9d686bb1c13a519639d76276da07451._comment
new file mode 100644
index 0000000..99d5d93
--- /dev/null
+++ b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_7_d9d686bb1c13a519639d76276da07451._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="134.223.230.152"
+ claimedauthor="Glen Flint"
+ url="GlenFlint@aol.com"
+ subject="Wine for Anytone 878"
+ date="2020-09-01T14:53:09Z"
+ content="""
+If I understand you correctly, Wine does not let CPS read or write to the Anytone 878.  Does that mean we need to purchase a Windows 10 license and run it from VirtualBox?  There is a different version of CPS for Windows 7.  Does that work better with Wine?
+"""]]

Comment moderation
diff --git a/posts/restricting-outgoing-webapp-requests-using-squid-proxy/comment_1_66de753ab892687677eb8740d4913f74._comment b/posts/restricting-outgoing-webapp-requests-using-squid-proxy/comment_1_66de753ab892687677eb8740d4913f74._comment
new file mode 100644
index 0000000..82dee38
--- /dev/null
+++ b/posts/restricting-outgoing-webapp-requests-using-squid-proxy/comment_1_66de753ab892687677eb8740d4913f74._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="167.123.240.150"
+ claimedauthor="Thrawn"
+ subject="How to minimise Squid overhead?"
+ date="2020-08-27T23:28:48Z"
+ content="""
+This type of filtering could be very useful for one of our applications, but there are concerns about the overhead of running an extra process on our servers, and I notice that Squid's FAQ says it uses memory fairly aggressively to improve caching. How would we configure it to discard all of the caching (and associated memory usage) and just do IP filtering?
+"""]]

Disable the /server-status Apache endpoint
https://httpd.apache.org/docs/2.4/mod/mod_status.html
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 353a62d..a8412ea 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -296,6 +296,7 @@ Also, [`command-not-found` won't work until you update the apt cache](https://bu
 
     apt install apache2
     a2enmod mpm_event
+    a2dismod status
 
 While configuring apache is often specific to each server and the services
 that will be running on it, there are a few common changes I make.

Add fake-hwclock to the GnuBee instructions.
diff --git a/posts/installing-debian-buster-on-gnubee2.mdwn b/posts/installing-debian-buster-on-gnubee2.mdwn
index 04185d0..7773915 100644
--- a/posts/installing-debian-buster-on-gnubee2.mdwn
+++ b/posts/installing-debian-buster-on-gnubee2.mdwn
@@ -247,4 +247,43 @@ following contents:
     ExecStart=
     ExecStart=-/sbin/agetty -o '-p -- \\u' 57600 %I $TERM
 
+## Fixing the hardware clock between restarts
+
+When the GnuBee boots, you may have noticed that the clock is wrong until
+`systemd-timesyncd` updates the time using
+[NTP](https://en.wikipedia.org/wiki/Network_Time_Protocol). This leads to
+messages like these:
+
+    Aug 23 02:46:15 hostname systemd-fsck[839]: GNUBEE-ROOT: Superblock last mount time is in the future.
+    Aug 23 02:46:15 hostname systemd-fsck[839]: #011(by less than a day, probably due to the hardware clock being incorrectly set)
+    ...
+    Aug 23 02:46:41 hostname systemd[1]: systemd-fsckd.service: Succeeded.
+    Aug 23 13:04:30 hostname systemd-timesyncd[1309]: Synchronized to time server for the first time 162.159.200.1:123 (time.cloudflare.com).
+
+and unnecessary executions of `fsck`.
+
+Often these hardware issues are due to a lack of a battery to keep the clock
+alive while the unit is powered down. In order to work around this, I
+installed the [`fake-hwclock`
+package](https://packages.debian.org/buster/fake-hwclock) and then edited
+the `/lib/systemd/system/fake-hwclock.service` file to change the following
+line from:
+
+    Before=sysinit.target
+
+to:
+
+    Before=sysinit.target systemd-fsck-root.service
+
+so that the clock is [restored before the filesystem
+check](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=908504).
+
+I also added the following to `/etc/.gitignore` to make
+[`etckeeper`](https://packages.debian.org/buster/etckeeper) happy:
+
+    /fake-hwclock.data
+
+since `fake-hwclock` unfortunately [keeps its data file in
+`/etc/`](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=782314).
+
 [[!tag debian]] [[!tag gnubee]]
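As an aside, the same effect could be had without editing the unit file shipped in `/lib/systemd/system/`, which a package upgrade may overwrite. An untested alternative is a drop-in override at `/etc/systemd/system/fake-hwclock.service.d/override.conf`; since `Before=` is additive in drop-ins, listing only the extra unit is enough:

    [Unit]
    Before=systemd-fsck-root.service

followed by `sudo systemctl daemon-reload` (or created interactively with `sudo systemctl edit fake-hwclock`).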

Comment moderation
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu/comment_7_64da47f3eb6457603a4ce1db5fcc814a._comment b/posts/running-your-own-xmpp-server-debian-ubuntu/comment_7_64da47f3eb6457603a4ce1db5fcc814a._comment
new file mode 100644
index 0000000..bd5507d
--- /dev/null
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu/comment_7_64da47f3eb6457603a4ce1db5fcc814a._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="cheako+feeding_cloud_geek_nz@7d91c66ae019b345d8af95e5b431b39b51b58fdc"
+ nickname="cheako+feeding_cloud_geek_nz"
+ avatar="http://cdn.libravatar.org/avatar/f1612673dd5b6775c359139483ca389e"
+ subject="About the DNS records you showed."
+ date="2020-08-18T19:37:52Z"
+ content="""
+Keep in mind that CNAME redirects \"every\" record type lookup elsewhere.  Your TLZ will have records that these hostnames should not, for example SOA, NS, and the TXT/spf.  So using CNAME in that way should be discouraged; it's most definitely not what you want.
+"""]]

Comment moderation
diff --git a/posts/time-synchronization-with-ntp-and-systemd/comment_6_c15a5d1faef39093c04def602cc68b10._comment b/posts/time-synchronization-with-ntp-and-systemd/comment_6_c15a5d1faef39093c04def602cc68b10._comment
new file mode 100644
index 0000000..29e0df3
--- /dev/null
+++ b/posts/time-synchronization-with-ntp-and-systemd/comment_6_c15a5d1faef39093c04def602cc68b10._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="71.10.210.18"
+ claimedauthor="Jeff"
+ subject="systemD time synchronization has a long way to go"
+ date="2020-08-17T14:01:45Z"
+ content="""
+Since systemd only does basic time synchronization, I think it's really, *really* misleading to say, \"there is no need to run the full-fledged ntpd daemon anymore.\"  I can think of several uses for time-slewing, and persistent time carry-over between boots is necessary.
+"""]]

Comment moderation
diff --git a/posts/extend-gpg-key-expiry/comment_1_e47e857d19697a36db39356e098db51e._comment b/posts/extend-gpg-key-expiry/comment_1_e47e857d19697a36db39356e098db51e._comment
new file mode 100644
index 0000000..af86e8e
--- /dev/null
+++ b/posts/extend-gpg-key-expiry/comment_1_e47e857d19697a36db39356e098db51e._comment
@@ -0,0 +1,16 @@
+[[!comment format=mdwn
+ ip="2001:16b8:205e:cc00:f080:3d2a:c76a:4b0a"
+ subject="Quicker method"
+ date="2020-07-31T09:27:53Z"
+ content="""
+There's a quicker method if you just want to extend the expiration date:
+
+    gpg --quick-set-expire KEYID PERIOD
+
+…and for the subkeys:
+
+    gpg --quick-set-expire KEYID PERIOD '*'
+
+
+PS: Did you know you could lint your PGP keys? The [hopenpgp-tools](https://salsa.debian.org/clint/hopenpgp-tools) include `hokey lint`.
+"""]]

Use the right post title
diff --git a/posts/set-default-web-browser-debian-ubuntu.mdwn b/posts/set-default-web-browser-debian-ubuntu.mdwn
index 9b9056c..7b46927 100644
--- a/posts/set-default-web-browser-debian-ubuntu.mdwn
+++ b/posts/set-default-web-browser-debian-ubuntu.mdwn
@@ -1,4 +1,4 @@
-[[!meta title="Extending GPG key expiry"]]
+[[!meta title="Setting the default web browser on Debian and Ubuntu"]]
 [[!meta date="2020-08-07T21:10:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 

Add a post about setting a default browser on Debian
diff --git a/posts/set-default-web-browser-debian-ubuntu.mdwn b/posts/set-default-web-browser-debian-ubuntu.mdwn
new file mode 100644
index 0000000..9b9056c
--- /dev/null
+++ b/posts/set-default-web-browser-debian-ubuntu.mdwn
@@ -0,0 +1,84 @@
+[[!meta title="Extending GPG key expiry"]]
+[[!meta date="2020-08-07T21:10:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+If you are wondering what your default web browser is set to on a
+Debian-based system, there are several things to look at:
+
+    $ xdg-settings get default-web-browser
+    brave-browser.desktop
+    
+    $ xdg-mime query default x-scheme-handler/http
+    brave-browser.desktop
+    
+    $ xdg-mime query default x-scheme-handler/https
+    brave-browser.desktop
+    
+    $ ls -l /etc/alternatives/x-www-browser
+    lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/x-www-browser -> /usr/bin/brave-browser-stable*
+    
+    $ ls -l /etc/alternatives/gnome-www-browser
+    lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/gnome-www-browser -> /usr/bin/brave-browser-stable*
+
+## Debian-specific tools
+
+The contents of `/etc/alternatives/` are system-wide defaults and must
+therefore be set as `root`:
+
+    sudo update-alternatives --config x-www-browser
+    sudo update-alternatives --config gnome-www-browser
+
+The `sensible-browser` tool (from the [`sensible-utils`
+package](https://packages.debian.org/stable/sensible-utils)) will use these
+to automatically launch the most appropriate web browser depending on the
+desktop environment.
+
+## Standard MIME tools
+
+The others can be changed as a normal user. Using `xdg-settings`:
+
+    xdg-settings set default-web-browser brave-browser-beta.desktop
+
+will also change what the two `xdg-mime` commands return:
+
+    $ xdg-mime query default x-scheme-handler/http
+    brave-browser-beta.desktop
+    
+    $ xdg-mime query default x-scheme-handler/https
+    brave-browser-beta.desktop
+
+since it puts the following in `~/.config/mimeapps.list`:
+
+    [Default Applications]
+    text/html=brave-browser-beta.desktop
+    x-scheme-handler/http=brave-browser-beta.desktop
+    x-scheme-handler/https=brave-browser-beta.desktop
+    x-scheme-handler/about=brave-browser-beta.desktop
+    x-scheme-handler/unknown=brave-browser-beta.desktop
+
+Note that if you delete these entries, then the system-wide defaults,
+defined in `/etc/mailcap`, will be used, as provided by the
+[`mime-support` package](https://packages.debian.org/stable/mime-support).
+
+Changing the `x-scheme-handler/http` (or `x-scheme-handler/https`)
+association directly using:
+
+    xdg-mime default brave-browser-nightly.desktop x-scheme-handler/http
+
+will only change that particular one. I suppose this means you could have
+one browser for [insecure HTTP
+sites](https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/)
+(hopefully with [HTTPS Everywhere
+installed](https://www.eff.org/https-everywhere)) and one for HTTPS sites though
+I'm not sure why anybody would want that.
+
+## Summary
+
+In short, if you want to set your default browser everywhere (using
+[Brave](https://brave.com) in this example), do the following:
+
+    sudo update-alternatives --config x-www-browser
+    sudo update-alternatives --config gnome-www-browser
+    xdg-settings set default-web-browser brave-browser.desktop
+
+[[!tag debian]] [[!tag brave]]

Make sure IP addresses are never cached.
diff --git a/posts/displaying-ip-address-apache-server-side-includes.mdwn b/posts/displaying-ip-address-apache-server-side-includes.mdwn
index bc95e85..7008f65 100644
--- a/posts/displaying-ip-address-apache-server-side-includes.mdwn
+++ b/posts/displaying-ip-address-apache-server-side-includes.mdwn
@@ -37,6 +37,7 @@ options to a `Location` or `Directory` section:
         SSLRequireSSL
         Header set Content-Security-Policy: "default-src 'none'"
         Header set X-Content-Type-Options: "nosniff"
+        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
     </Location>
 
 before adding the necessary modules:

Sync up with the script I use
diff --git a/posts/seeding-brave-browser-sccache.mdwn b/posts/seeding-brave-browser-sccache.mdwn
index 5697ea9..2292322 100644
--- a/posts/seeding-brave-browser-sccache.mdwn
+++ b/posts/seeding-brave-browser-sccache.mdwn
@@ -46,7 +46,10 @@ and here are the contents of that script:
     git pull
     npm install
     rm -rf src/brave/*
+    git -C src/third_party/devtools-frontend/src/ reset --hard
     gclient sync -D
+    git -C src/brave pull
+    git -C src/brave reset --hard
     npm run init
     
     echo $(date)

Tag a few more GPG-related posts
diff --git a/posts/encrypted-mailing-list-on-debian-and-ubuntu.mdwn b/posts/encrypted-mailing-list-on-debian-and-ubuntu.mdwn
index e1de05e..0953ff6 100644
--- a/posts/encrypted-mailing-list-on-debian-and-ubuntu.mdwn
+++ b/posts/encrypted-mailing-list-on-debian-and-ubuntu.mdwn
@@ -107,4 +107,4 @@ it should be signed by the list admin.
 
 After that, anybody requesting the list key will get your signature as well.
 
-[[!tag debian]] [[!tag security]] [[!tag nzoss]]
+[[!tag debian]] [[!tag security]] [[!tag gpg]]
diff --git a/posts/mutts-openpgp-support-and-firegpg.mdwn b/posts/mutts-openpgp-support-and-firegpg.mdwn
index 7174089..3a5730e 100644
--- a/posts/mutts-openpgp-support-and-firegpg.mdwn
+++ b/posts/mutts-openpgp-support-and-firegpg.mdwn
@@ -32,4 +32,4 @@ However, this didn't actually work with FireGPG and the way that it puts encrypt
 
 
 
-[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag email]]
+[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag email]] [[!tag gpg]]
diff --git a/posts/things-that-work-well-with-tor.mdwn b/posts/things-that-work-well-with-tor.mdwn
index 110a615..d28bd0c 100644
--- a/posts/things-that-work-well-with-tor.mdwn
+++ b/posts/things-that-work-well-with-tor.mdwn
@@ -139,4 +139,4 @@ I can take advantage of GMail's excellent caching and preloading and run the
 whole thing over Tor by setting that entire browser profile to run its
 traffic through the Tor SOCKS proxy on port `9050`.
 
-[[!tag debian]] [[!tag privacy]] [[!tag tor]] [[!tag nzoss]] [[!tag mozilla]] [[!tag xmpp]] [[!tag gmail]]
+[[!tag debian]] [[!tag privacy]] [[!tag tor]] [[!tag gpg]] [[!tag mozilla]] [[!tag xmpp]] [[!tag gmail]]

Also need to upload key to Debian keyserver
diff --git a/posts/extend-gpg-key-expiry.mdwn b/posts/extend-gpg-key-expiry.mdwn
index ce5bd3b..75fd062 100644
--- a/posts/extend-gpg-key-expiry.mdwn
+++ b/posts/extend-gpg-key-expiry.mdwn
@@ -13,8 +13,9 @@ Update the expiry on the main key and the subkey:
     > expire
     > save
 
-Upload the updated key to the keyserver:
+Upload the updated key to the keyservers:
 
     gpg --export KEYID | curl -T - https://keys.openpgp.org
+    gpg --keyserver keyring.debian.org --send-keys KEYID
 
 [[!tag debian]] [[!tag gpg]]

Remove link to Identica account.
diff --git a/sidebar.mdwn b/sidebar.mdwn
index d42b2e6..52a5d66 100644
--- a/sidebar.mdwn
+++ b/sidebar.mdwn
@@ -13,7 +13,7 @@
 <br><a href="mailto:francois@fmarier.org">francois@fmarier.org</a>
 <br>Free and Open Source software developer
 <br>
-<br>[Twitter](https://twitter.com/fmarier) / [Identica](https://identi.ca/fmarier)
+<br>[Twitter](https://twitter.com/fmarier)
 <br>[Linked In](https://linkedin.com/in/fmarier)
 
 # More from this blog

creating tag page tags/gpg
diff --git a/tags/gpg.mdwn b/tags/gpg.mdwn
new file mode 100644
index 0000000..c594b70
--- /dev/null
+++ b/tags/gpg.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged gpg"]]
+
+[[!inline pages="tagged(gpg)" actions="no" archive="yes"
+feedshow=10]]

Add GPG key expiry blog post
diff --git a/posts/extend-gpg-key-expiry.mdwn b/posts/extend-gpg-key-expiry.mdwn
new file mode 100644
index 0000000..ce5bd3b
--- /dev/null
+++ b/posts/extend-gpg-key-expiry.mdwn
@@ -0,0 +1,20 @@
+[[!meta title="Extending GPG key expiry"]]
+[[!meta date="2020-07-30T20:45:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Extending the expiry on a GPG key is not very hard, but it's easy to forget
+a step. Here's how I did my last expiry bump.
+
+Update the expiry on the main key and the subkey:
+
+    gpg --edit-key KEYID
+    > expire
+    > key 1
+    > expire
+    > save
+
+Upload the updated key to the keyserver:
+
+    gpg --export KEYID | curl -T - https://keys.openpgp.org
+
+[[!tag debian]] [[!tag gpg]]

Mention the tmd710-tncsetup package now in Debian.
diff --git a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
index a4f8910..e37609a 100644
--- a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
+++ b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
@@ -51,11 +51,18 @@ correctly:
    mentioned in a comment in `/etc/default/ax25`:
 
         gcc -o tmd710_tncsetup tmd710_tncsetup.c
+        sudo cp tmd710_tncsetup /usr/local/bin
+
+   Note: on a [Debian bullseye](https://www.debian.org/releases/bullseye/) or later system, all you need to do is install the [`tmd710-tncsetup` package](https://packages.debian.org/bullseye/tmd710-tncsetup):
+
+        apt install tmd710-tncsetup
 
 7. Add the `tmd710_tncsetup` script in `/etc/default/ax25` and use these command
    line parameters (`-B 0` specifies band A, use `-B 1` for band B):
 
-        tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
+        /usr/local/bin/tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
+
+   Note: the path is `/usr/bin/tmd710_tncsetup` if using the official Debian package.
 
 8. Start ax25 driver:
 

Rephrase a sentence that's now slightly obsolete
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
index 57de28e..9304921 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
@@ -81,8 +81,7 @@ Then "enter" the root partition using:
 
     chroot /mnt
 
-and make sure that the [lvm2](https://launchpad.net/ubuntu/+source/lvm2)
-package is installed:
+and make sure that you have the necessary packages installed:
 
     apt install lvm2 cryptsetup-initramfs
 

Improve user comment formatting
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_1_dbf4ae9f9fe087f9b03cfb0961a4fe57._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_1_dbf4ae9f9fe087f9b03cfb0961a4fe57._comment
index 76a00f1..6624804 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_1_dbf4ae9f9fe087f9b03cfb0961a4fe57._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_1_dbf4ae9f9fe087f9b03cfb0961a4fe57._comment
@@ -5,7 +5,7 @@
  subject="Without using a live instance"
  date="2018-05-06T20:54:15Z"
  content="""
-I successfully used your recommended approach without booting via USB. This can be accomplished by selecting to boot into a previous kernel via the Grub boot menu during startup, and then (without the need to mount local partitions) simply ensure the latest version of lvm2 is installed and regenerating the initramfs for all of the installed kernels (as recommended). I also have a fully encrypted drive configuration and found no issues when performing these steps. 
+I successfully used your recommended approach without booting via USB. This can be accomplished by selecting to boot into a previous kernel via the Grub boot menu during startup, and then (without the need to mount local partitions) simply ensure the latest version of `lvm2` is installed and regenerating the initramfs for all of the installed kernels (as recommended). I also have a fully encrypted drive configuration and found no issues when performing these steps. 
 
 Thank you for putting this article together. While I normally find the forums to be of great assistance, this issue was not one that is easy to find real working solutions for. Keep up the great work.
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_2_344f04840164a73701084d11ef52358c._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_2_344f04840164a73701084d11ef52358c._comment
index 8ce3c8b..bac5d01 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_2_344f04840164a73701084d11ef52358c._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_2_344f04840164a73701084d11ef52358c._comment
@@ -6,7 +6,7 @@
  content="""
 I wanted to make sure the next time it happens I could recover quickly with just the LiveCD available.
 
-I wrote it to detect the correct name from the /mnt/etc/crypttab to ensure the `update-initramfs` command can properly update. 
+I wrote it to detect the correct name from the `/mnt/etc/crypttab` to ensure the `update-initramfs` command can properly update. 
 
-https://gist.github.com/dragon788/e777ba64d373210e4f6306ad40ee0e80
+<https://gist.github.com/dragon788/e777ba64d373210e4f6306ad40ee0e80>
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment
index ebafd4c..4708577 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_3_cbd36f2900e966992f874221a5182e8e._comment
@@ -6,7 +6,7 @@
  content="""
 I got the same problem after upgrading to 18.04, I don't use LVM but Btrfs, all I had to change was
 
-```apt install btrfs-progs```
+    apt install btrfs-progs
 
 Everything else was exactly the same.
 
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment
index 2e03196..5ae7ec8 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_4_0dcba6e86d49f32540ebb57d54fc49e4._comment
@@ -4,11 +4,11 @@
  subject="Worked for me with minor tweaks"
  date="2019-01-07T01:22:19Z"
  content="""
-I didn't need to install lvm2, as it was on my unbootable system.  I also had some minor partition/volume differences.
+I didn't need to install `lvm2`, as it was on my unbootable system.  I also had some minor partition/volume differences.
 
 My issue is documented at the [Ubuntu forums](https://ubuntuforums.org/showthread.php?t=2409754)
 
-That all said, **I did have a major issue with DNS resolution not functioning** after this was done.  I'm wondering if \"update-initramfs\" lead to this issue specifically (I made other changes I can't recall clearly).
+That all said, **I did have a major issue with DNS resolution not functioning** after this was done.  I'm wondering if `update-initramfs` led to this issue specifically (I made other changes I can't recall clearly).
 
 If others experience loss of DNS via a systemd-resolved failure, please note it here and on my post in the Ubuntu Forums.  My fix is listed there, although I'm effectively disabling systemd-resolved.
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_5_0367ad2561d124b47e307502f1b85a96._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_5_0367ad2561d124b47e307502f1b85a96._comment
index 70b0845..1fcec11 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_5_0367ad2561d124b47e307502f1b85a96._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_5_0367ad2561d124b47e307502f1b85a96._comment
@@ -4,16 +4,15 @@
  subject="This really woks with minor change"
  date="2019-11-22T08:09:19Z"
  content="""
-First I recovered the /etc/fstab and /etc/crypttab from the backup with backup-tool because I had tried something and messed up these files.
+First I recovered the `/etc/fstab` and `/etc/crypttab` from the backup with backup-tool because I had tried something and messed up these files.
 
-Then I followed these instructions but I left the command 'vgchange -ay' out. Reason for that was because after that I couldn't mount my partitions to anything. Without it mounting was done nicely and the rest of the steps could be done.
+Then I followed these instructions but I left the command `vgchange -ay` out. Reason for that was because after that I couldn't mount my partitions to anything. Without it mounting was done nicely and the rest of the steps could be done.
 
 It had the consequence that in the end I couldn't unmount and close the partition but that wasn't in this instruction and so I paid no attention to that.
 
-I encountered the problem when updating the initramfs (I was missing some firmware library and it gave some warnings). Solution to that was found here: https://askubuntu.com/questions/832524/possible-missing-frmware-lib-firmware-i915/832528 and the updating of initramfs were done without warnings.
+I encountered the problem when updating the initramfs (I was missing some firmware library and it gave some warnings). The solution to that was [found here](https://askubuntu.com/questions/832524/possible-missing-frmware-lib-firmware-i915/832528) and the initramfs update was then done without warnings.
 
 In the end I prayed a little and rebooted and everything was fine after these changes and now I can log in to my ubuntu again. 
 
-
 Thanks for clear instructions!
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_7_6d782ed81d9fbdfbcc70fdf6da15fbed._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_7_6d782ed81d9fbdfbcc70fdf6da15fbed._comment
index 2b8ef36..45d5fa2 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_7_6d782ed81d9fbdfbcc70fdf6da15fbed._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_7_6d782ed81d9fbdfbcc70fdf6da15fbed._comment
@@ -4,5 +4,5 @@
  subject="Boot to earlier kernel worked better"
  date="2020-02-05T02:19:59Z"
  content="""
-As William (William — 13:54, 06 May 2018) did, I booted to the preceding kernel version, and as I logged in, I saw LivePatch flash by saying it had just updated something. I used apt to update everything and restarted, my machine is now runnign like a Swiss watch!
+As William (William — 13:54, 06 May 2018) did, I booted to the preceding kernel version, and as I logged in, I saw LivePatch flash by saying it had just updated something. I used `apt` to update everything and restarted, and my machine is now running like a Swiss watch!
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment
index 135bc6d..87154e7 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment
@@ -6,9 +6,8 @@
 Hello,
 
 I have tried out your solution after failing with the one which did not help you either. 
-But seems i am stuck in this update nightmare: https://askubuntu.com/questions/1256247/ubuntu-20-kernel-upgrade-encrypted-volume-group-cannot-be-found-crypttab-em
+But seems i am stuck in [this update nightmare](https://askubuntu.com/questions/1256247/ubuntu-20-kernel-upgrade-encrypted-volume-group-cannot-be-found-crypttab-em).
 
 If you have any idea how to solve this I would be very grateful! 
 
-
 """]]
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment
index 19cf755..e27f1ed 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment
@@ -7,8 +7,8 @@
 Had a very similar issue after an update of Ubuntu 20.04 on a Dell XPS13 (2020).
 Searched for hours, the solution was actually super easy.
 
-reboot and go to BIOS using \"fn and F2\"  
-BIOS > System Configuration > Sata Operation > switch to \"AHCI\" from \"RAID On\"
+1. reboot and go to BIOS using \"fn and F2\"  
+2. BIOS > System Configuration > Sata Operation > switch to \"AHCI\" from \"RAID On\"
 
 For some reason, this BIOS setting was switched.
 """]]

Add missing package based on Vitaalz's comment
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
index 471bd2a..57de28e 100644
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition.mdwn
@@ -84,7 +84,7 @@ Then "enter" the root partition using:
 and make sure that the [lvm2](https://launchpad.net/ubuntu/+source/lvm2)
 package is installed:
 
-    apt install lvm2
+    apt install lvm2 cryptsetup-initramfs
 
 before regenerating the initramfs for all of the installed kernels:
 
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment
deleted file mode 100644
index b1c5f0b..0000000
--- a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- ip="62.63.132.50"
- claimedauthor="Vitaalz"
- subject="comment 10"
- date="2020-07-29T14:32:15Z"
- content="""
-On Debian-based systems before running `update-initramfs` command make sure that `cryptsetup-initramfs` package is installed. If not, install it first:
-
-`apt-get install -y cryptsetup-initramfs`
-"""]]

Comment moderation
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment
new file mode 100644
index 0000000..b1c5f0b
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_10_a1eef6d3212d51402a4817e2e2432ec9._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="62.63.132.50"
+ claimedauthor="Vitaalz"
+ subject="comment 10"
+ date="2020-07-29T14:32:15Z"
+ content="""
+On Debian-based systems before running `update-initramfs` command make sure that `cryptsetup-initramfs` package is installed. If not, install it first:
+
+`apt-get install -y cryptsetup-initramfs`
+"""]]

Comment moderation
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment
new file mode 100644
index 0000000..19cf755
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_9_f534393495aa28bd0f034b20d2ae0704._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="81.164.136.123"
+ claimedauthor="Koen"
+ subject="comment 9"
+ date="2020-07-14T16:13:52Z"
+ content="""
+Had a very similar issue after an update of Ubuntu 20.04 on a Dell XPS13 (2020).
+Searched for hours, the solution was actually super easy.
+
+reboot and go to BIOS using \"fn and F2\"  
+BIOS > System Configuration > Sata Operation > switch to \"AHCI\" from \"RAID On\"
+
+For some reason, this BIOS setting was switched.
+"""]]

Stopping the etckeeper timer is also necessary to fully disable it
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index a29424f..353a62d 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -51,6 +51,7 @@ and this in `/etc/.git/config`:
 Note that in order to fully turn off auto-commits, it's also necessary
 to run the following:
 
+    systemctl stop etckeeper.timer
     systemctl disable etckeeper.timer
 
 To get more control over the various packages I install, I change the

Comment moderation
diff --git a/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment
new file mode 100644
index 0000000..135bc6d
--- /dev/null
+++ b/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/comment_8_556ade9bd0b423bbba3a4791ce49b6c2._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="88.153.228.176"
+ subject="recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition"
+ date="2020-07-05T10:49:43Z"
+ content="""
+Hello,
+
+I have tried out your solution after failing with the one which did not help you either. 
+But seems i am stuck in this update nightmare: https://askubuntu.com/questions/1256247/ubuntu-20-kernel-upgrade-encrypted-volume-group-cannot-be-found-crypttab-em
+
+If you have any idea how to solve this I would be very grateful! 
+
+
+"""]]

Refer to the correct step
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 52c35e1..cfbf1b9 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -118,7 +118,7 @@ to requests after running for a while:
 
 Note that if you'd like to be able to talk to contacts via the GMail XMPP
 server, you will unfortunately need to change the `s2s_use_starttls`
-setting in step 3 to the following:
+setting in step 4 to the following:
 
       s2s_use_starttls: optional
 

Fix formatting
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index d4db455..52c35e1 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -58,14 +58,16 @@ to solve the [Pidgin](http://pidgin.im) "Not authorized" connection problems.
 
 2. Set the following in `/etc/ejabberd/ejabberd.yml`:
 
-      acl:
-        admin:
-           user:
-               - "admin@fmarier.org"
-      hosts:
-        - "fmarier.org"
-      auth_password_format: scram
-      fqdn: "jabber-gw.fmarier.org"
+       acl:
+         admin:
+            user:
+                - "admin@fmarier.org"
+       
+       hosts:
+         - "fmarier.org"
+       
+       auth_password_format: scram
+       fqdn: "jabber-gw.fmarier.org"
 
 3. Copy the SSL certificate into the `/etc/ejabberd/` directory and set the
 permissions correctly:
@@ -75,21 +77,21 @@ permissions correctly:
 
 4. Improve the client-to-server and server-to-server TLS configuration:
 
-      define_macro:
-        # ...
-        'DH_FILE': "/etc/ejabberd/dhparams.pem"
+       define_macro:
+         # ...
+         'DH_FILE': "/etc/ejabberd/dhparams.pem"
+       
+       c2s_dhfile: 'DH_FILE'
+       s2s_dhfile: 'DH_FILE'
+       
+       listen:
+         -
+           port: 5222
+           ip: "::"
+           module: ejabberd_c2s
+           starttls_required: true
       
-      c2s_dhfile: 'DH_FILE'
-      s2s_dhfile: 'DH_FILE'
-      
-      listen:
-        -
-          port: 5222
-          ip: "::"
-          module: ejabberd_c2s
-          starttls_required: true
-      
-      s2s_use_starttls: required
+       s2s_use_starttls: required
 
 5. Create the required `dhparams.pem` file:
 

Switch to Apache authenticator and add echo.fmarier.org domain
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 95184cc..d4db455 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -14,22 +14,23 @@ put everything together.
 My personal domain is `fmarier.org` and so I created the following DNS
 records:
 
+    echo                 CNAME    fmarier.org.
     jabber-gw            CNAME    fmarier.org.
     _xmpp-client._tcp    SRV      5 0 5222 jabber-gw.fmarier.org.
     _xmpp-server._tcp    SRV 	  5 0 5269 jabber-gw.fmarier.org.
 
-Then I went to get a free TLS certificate for `jabber-gw.fmarier.org` and `fmarier.org`.
+Then I went to get a free TLS certificate for the above.
 
 ## Let's Encrypt
 
 The easiest way to get a certificate is to install [certbot](https://certbot.eff.org/):
 
-    apt install certbot
+    apt install certbot python3-certbot-apache
 
 Then, shutdown your existing webserver if you have one running and request
 a cert like this:
 
-    certbot certonly -d jabber-gw.fmarier.org,fmarier.org --standalone
+    certbot --duplicate certonly --apache -d jabber-gw.fmarier.org -d echo.fmarier.org -d fmarier.org
 
 Once you have the cert, you can merge the private and public keys
 into the file that ejabberd expects:

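The merge that ejabberd needs is just a concatenation of certbot's `fullchain.pem` and `privkey.pem`. A hedged sketch of that step (the Let's Encrypt paths are certbot's defaults rather than something stated in the post, and stand-in files in a temp directory are used here so the snippet can be tried anywhere):

```shell
# Sketch of the "merge the private and public keys" step. On a real host
# the inputs would live in /etc/letsencrypt/live/<domain>/ and the output
# would be /etc/ejabberd/ejabberd.pem; stand-in files are used here.
live=$(mktemp -d)
printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n' > "$live/fullchain.pem"
printf -- '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n' > "$live/privkey.pem"

# ejabberd reads a single PEM containing the chain followed by the key.
cat "$live/fullchain.pem" "$live/privkey.pem" > "$live/ejabberd.pem"
chmod 640 "$live/ejabberd.pem"
```

On a real server the output file would also need its group set to `ejabberd` so the daemon can read it.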
Remove CertSpotter (no longer free) and add ejabberd tag
diff --git a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
index 084aeb2..6d6f623 100644
--- a/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
+++ b/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot.mdwn
@@ -62,18 +62,9 @@ monitor my domains once a day:
 
     ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org
 
-I also signed up with [Cert Spotter](https://sslmate.com/certspotter/) which
-watches the
-[Certificate Transparency](https://www.certificate-transparency.org/) log
-and notifies me of any newly-issued certificates for my domains.
-
-In other words, I get notified:
-
-- if my cronjob fails and a cert is about to expire, or
-- as soon as a new cert is issued.
-
 The whole thing seems to work well, but if there's anything I could be doing
 better, feel free to leave a comment!
 
 [[!tag nzoss]] [[!tag sysadmin]] [[!tag debian]] [[!tag mozilla]]
 [[!tag ubuntu]] [[!tag ssl]] [[!tag apache]] [[!tag letsencrypt]]
+[[!tag ejabberd]]

Add DB backups
diff --git a/posts/automated-mythtv-maintenance-tasks.mdwn b/posts/automated-mythtv-maintenance-tasks.mdwn
index 3773d04..43cce7c 100644
--- a/posts/automated-mythtv-maintenance-tasks.mdwn
+++ b/posts/automated-mythtv-maintenance-tasks.mdwn
@@ -6,7 +6,20 @@ Here is the daily/weekly cronjob I put together over the years to perform
 [MythTV](https://www.mythtv.org)-related maintenance tasks on my backend
 server.
 
-The first part runs a contrib script to [optimize the database
+The first part performs a [database backup](https://www.mythtv.org/wiki/User_Manual:Periodic_Maintenance#The_database):
+
+    5 1 * * *  mythtv  /usr/share/mythtv/mythconverg_backup.pl
+
+which I previously configured by putting the following in `/home/mythtv/.mythtv/backuprc`:
+
+    DBBackupDirectory=/var/backups/mythtv
+
+and creating a new directory for it:
+
+    mkdir /var/backups/mythtv
+    chown mythtv:mythtv /var/backups/mythtv
+
+The second part of `/etc/cron.d/mythtv-maintenance` runs a contrib script to [optimize the database
 tables](https://www.mythtv.org/wiki/User_Manual:Periodic_Maintenance#Optimize_the_Database):
 
     10 1 * * *  mythtv  /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl

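A hedged companion to the backup cronjob above: a check that dumps are actually landing in the backup directory. The `BACKUP_DIR` variable corresponds to the `DBBackupDirectory` setting from `backuprc`, and a temp dir with a fake dump is used here so the check can be exercised without a MythTV backend:

```shell
# Warn if no mythconverg dump newer than two days exists. BACKUP_DIR stands
# in for /var/backups/mythtv; the touched file mimics the timestamped
# .sql.gz names that mythconverg_backup.pl produces.
BACKUP_DIR=${BACKUP_DIR:-$(mktemp -d)}
touch "$BACKUP_DIR/mythconverg-1344-20200624093000.sql.gz"

newest=$(find "$BACKUP_DIR" -name '*.sql.gz' -mtime -2 | head -n 1)
if [ -n "$newest" ]; then
    echo "OK: recent backup found: $newest"
else
    echo "WARNING: no recent mythconverg backup in $BACKUP_DIR" >&2
fi
```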
creating tag page tags/xfs
diff --git a/tags/xfs.mdwn b/tags/xfs.mdwn
new file mode 100644
index 0000000..73aa86f
--- /dev/null
+++ b/tags/xfs.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged xfs"]]
+
+[[!inline pages="tagged(xfs)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/smart
diff --git a/tags/smart.mdwn b/tags/smart.mdwn
new file mode 100644
index 0000000..c75b300
--- /dev/null
+++ b/tags/smart.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged smart"]]
+
+[[!inline pages="tagged(smart)" actions="no" archive="yes"
+feedshow=10]]

Add MythTV cronjob post
diff --git a/posts/automated-mythtv-maintenance-tasks.mdwn b/posts/automated-mythtv-maintenance-tasks.mdwn
new file mode 100644
index 0000000..3773d04
--- /dev/null
+++ b/posts/automated-mythtv-maintenance-tasks.mdwn
@@ -0,0 +1,56 @@
+[[!meta title="Automated MythTV-related maintenance tasks"]]
+[[!meta date="2020-06-24T09:45:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+Here is the daily/weekly cronjob I put together over the years to perform
+[MythTV](https://www.mythtv.org)-related maintenance tasks on my backend
+server.
+
+The first part runs a contrib script to [optimize the database
+tables](https://www.mythtv.org/wiki/User_Manual:Periodic_Maintenance#Optimize_the_Database):
+
+    10 1 * * *  mythtv  /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl
+
+once a day. It requires the `libmythtv-perl` and `libxml-simple-perl` packages
+to be installed on Debian-based systems.
+
+It is quickly followed by a check of the recordings and [automatic repair of
+the seektable](https://www.mythtv.org/wiki/Repairing_the_Seektable) (when possible):
+
+    20 1 * * *  mythtv  /usr/bin/chronic /usr/bin/mythutil --checkrecordings --fixseektable
+
+Next, I force a scan of the music and video databases to pick up anything new
+that may have been added externally via
+[NFS](https://en.wikipedia.org/wiki/Network_File_System) mounts:
+
+    30 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanvideos
+    31 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanmusic
+
+Finally, I [defragment the XFS
+partition](https://www.mythtv.org/wiki/Optimizing_Performance#XFS-Specific_Tips)
+for two hours every day except Friday:
+
+    45 1 * * 1-4,6-7  root  /usr/sbin/xfs_fsr
+
+and resync the
+[RAID-1](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1) arrays
+once a week to ensure that they stay consistent and error-free:
+
+    15 3 * * 2  root  /usr/local/sbin/raid_parity_check md0
+    15 3 * * 4  root  /usr/local/sbin/raid_parity_check md2
+
+using a [trivial
+script](https://github.com/fmarier/root-scripts/blob/master/raid_parity_check).
+
+In addition to that cronjob, I also have
+[smartmontools](https://packages.debian.org/stable/smartmontools) run daily
+short and weekly long [SMART](https://en.wikipedia.org/wiki/S.M.A.R.T.)
+tests via this blurb in `/etc/smartd.conf`:
+
+    /dev/sda -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
+    /dev/sdb -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
+
+If there are any other automated maintenance tasks you do on your MythTV
+server, please leave a comment!
+
+[[!tag debian]] [[!tag mythtv]] [[!tag raid]] [[!tag xfs]] [[!tag smart]]

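The linked `raid_parity_check` script is described as trivial; at its core, a scrub of a Linux md array only needs one write to sysfs. A hedged sketch of that core (the real script may do more; `SYSFS` is overridable here so the snippet can be dry-run without root):

```shell
# Core of a raid_parity_check-style scrub: write "check" to the array's
# sync_action node. dev and SYSFS are parameters; the else branch makes
# this safe to run unprivileged as a dry-run.
dev=${dev:-md0}
SYSFS=${SYSFS:-/sys}
node=$SYSFS/block/$dev/md/sync_action
if [ -w "$node" ]; then
    echo check > "$node"
    echo "scrub started on $dev"
else
    echo "dry-run: would write 'check' to $node"
fi
```

Progress of a running check can be watched in `/proc/mdstat`, and any mismatches found are reported in `mismatch_cnt` next to `sync_action`.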
Note a way to confirm that the computer and the radio can talk
diff --git a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
index f5e3ced..a4f8910 100644
--- a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
+++ b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
@@ -61,6 +61,10 @@ correctly:
 
         systemctl start ax25.service
 
+As the AX25 unit starts up and initializes the TNC using `tmd710_tncsetup`,
+you should see `STA` and `CON` indicators flash briefly in the top-right
+corner of the screen.
+
 # Connecting to a winlink gateway
 
 To monitor what is being received and transmitted:

Add a few obvious but important steps
diff --git a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
index 5dafb5d..f5e3ced 100644
--- a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
+++ b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
@@ -31,6 +31,7 @@ along with the systemd script that comes with Pat:
 Once the packages are installed, it's time to configure everything
 correctly:
 
+0. Plug the radio into the computer using a mini-USB cable.
 1. Power cycle the radio.
 2. Enable TNC in `packet12` mode (**band A***).
 3. Tune band A to [VECTOR
@@ -42,7 +43,9 @@ correctly:
 
         wl2k    CALLSIGN    9600    128    4    Winlink
 
-5. Set `HBAUD` to **`1200`** in `/etc/default/ax25`.
+5. Set `HBAUD` to **`1200`** in `/etc/default/ax25` and make sure that the
+   `DEV` variable is set to the correct `/dev/ttyUSBx` device (check the
+   output of `dmesg` after turning on the radio).
 6. Download and compile the [`tmd710_tncsetup`
    script](https://github.com/fmarier/tmd710_tncsetup/blob/master/tmd710_tncsetup.c)
    mentioned in a comment in `/etc/default/ax25`:
@@ -73,7 +76,12 @@ Then create aliases like these in `~/.wl2k/config.json`:
       },
     }
 
-and use them to connect to your preferred Winlink gateways.
+and use them to connect to your preferred Winlink gateways by starting pat
+using the following:
+
+    pat http
+
+and then opening the interface in a web browser: <http://localhost:8080/>
 
 # Troubleshooting
 

Point to my new tmd710_tncsetup repository
diff --git a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
index d97037a..5dafb5d 100644
--- a/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
+++ b/posts/using-kenwood-th-d72a-with-pat-linux-ax25.mdwn
@@ -44,7 +44,7 @@ correctly:
 
 5. Set `HBAUD` to **`1200`** in `/etc/default/ax25`.
 6. Download and compile the [`tmd710_tncsetup`
-   script](http://www.trinityos.com/HAM/CentosDigitalModes/usr/src/misc/D710/tmd710_tncsetup.c)
+   script](https://github.com/fmarier/tmd710_tncsetup/blob/master/tmd710_tncsetup.c)
    mentioned in a comment in `/etc/default/ax25`:
 
         gcc -o tmd710_tncsetup tmd710_tncsetup.c

Remove unnecessary work-around
My PR got merged in the 5.7 kernels:
https://github.com/neilbrown/gnubee-tools/pull/24
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index c98a9c7..2226a34 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -30,8 +30,6 @@ directory](https://github.com/neilbrown/gnubee-tools/issues/23) and so I
 tightened the security of some of the default mount points by putting the following
 in `/etc/rc.local`:
 
-    mount -o remount,nodev,nosuid /etc/network
-    mount -o remount,nodev,nosuid /lib/modules
     chmod 755 /etc/network
     exit 0
 

Fix typo
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index 53ff816..c98a9c7 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -147,7 +147,7 @@ On each machine, I added the following to `/root/.ssh/config`:
 
 The reason for setting the ssh cipher and disabling compression is to [speed
 up the ssh connection](https://gist.github.com/KartikTalwar/4393116) as much
-as possible given that the [GnuBee has avery small RAM
+as possible given that the [GnuBee has a very small RAM
 bandwidth](https://groups.google.com/d/msg/gnubee/5_nKjgmKSoY/a0ER5fEcBAAJ).
 
 Another performance-related change I made on the GnuBee was switching to the [internal sftp

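The cipher and compression tuning described above lives in `/root/.ssh/config`. A hedged sketch of such an entry (the host name and cipher choice are assumptions, and the snippet writes to a temp file instead of the real config so it can be tried safely):

```shell
# Example ssh client tuning for a low-powered backup host, in the spirit of
# the post: pick a cheap cipher and disable compression. Host and cipher
# are placeholders; CONF defaults to a temp file for safe experimentation.
CONF=${CONF:-$(mktemp)}
cat >> "$CONF" <<'EOF'
Host gnubee.local
    Ciphers aes128-ctr
    Compression no
EOF
```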
Fix mistake in hosts file
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index 43a42bb..53ff816 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -20,7 +20,7 @@ I changed the default hostname:
 
 - `/etc/hostname`: `foobar`
 - `/etc/mailname`: `foobar.example.com`
-- `/etc/hosts`: `127.0.0.1  foobar.example.com vogar localhost`
+- `/etc/hosts`: `127.0.0.1  foobar.example.com foobar localhost`
 
 and then installed the `avahi-daemon` package to be able to reach this box
 using `foobar.local`.

Fix last CSS change
diff --git a/local.css b/local.css
index 25f2108..91e9d6f 100644
--- a/local.css
+++ b/local.css
@@ -4,7 +4,7 @@ img {
     height: auto;
 }
 
-.blogform, .trail, .inlinefooter .pagelicense, .inlinefooter .tags, .inlinefooter .actions, .comments .comment .actions {
+.blogform, .trail, .inlinefooter .pagelicense, .inlinefooter .tags, .inlinefooter .actions, #comments .comment .actions {
     display: none;
 }
 

Hide unnecessary "Remove comment" buttons
diff --git a/local.css b/local.css
index 69f70db..25f2108 100644
--- a/local.css
+++ b/local.css
@@ -4,7 +4,7 @@ img {
     height: auto;
 }
 
-.blogform, .trail, .inlinefooter .pagelicense, .inlinefooter .tags, .inlinefooter .actions {
+.blogform, .trail, .inlinefooter .pagelicense, .inlinefooter .tags, .inlinefooter .actions, .comments .comment .actions {
     display: none;
 }
 

Comment moderation
diff --git a/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_6_702c0ae4816e6cac81929847b9148cf1._comment b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_6_702c0ae4816e6cac81929847b9148cf1._comment
new file mode 100644
index 0000000..ba1d87f
--- /dev/null
+++ b/posts/programming-anytone-d878uv-on-linux-using-windows10-and-virtualbox/comment_6_702c0ae4816e6cac81929847b9148cf1._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ ip="77.98.107.200"
+ claimedauthor="Eric W"
+ subject="Using Linux"
+ date="2020-06-10T14:54:39Z"
+ content="""
+Well, the good news is that the CPS for the 868, 878 & 578 will run under Wine. There are a couple of caveats: Wine should be the latest version, as this now auto-creates virtual com ports up to com 33. Plugging in the programming lead will automatically create an extra com port - in my case com34, but check your own settings.
+
+All functions are available, including firmware updates etc.
+
+** NOTE FOR 878 ONLY: Although the CPS works well enough in itself, it will NOT read the com port, so you can't read/write to the radio. I've attempted to find ways around this problem, but I'm not a coder and as yet not found a resolution.
+
+Overall it works well in Wine, though you do need to be careful when editing any files you export from the radio. Apparently Linux uses a different 'end of line' format that isn't compatible with Windows and can result in file import errors. Once edited, I suggest re-importing your edited CSV into a text editor such as Xed, which has the facility to save files with a Linux or Windows line ending - choose the Windows version and all is well.
+
+It really is a pity that the 878 won't find the working com port, otherwise I would 100% recommend using Linux/Wine for these radios.
+
+Eric - G6FGY (UK)
+"""]]

Update security configuration options for ejabberd 18
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 2ca0678..95184cc 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -55,13 +55,12 @@ to solve the [Pidgin](http://pidgin.im) "Not authorized" connection problems.
 
       apt install ejabberd
 
-2. Set the following in `/etc/ejabberd/ejabberd.yml` (don't forget the
-trailing dots!):
+2. Set the following in `/etc/ejabberd/ejabberd.yml`:
 
       acl:
         admin:
            user:
-               - "admin": "fmarier.org"
+               - "admin@fmarier.org"
       hosts:
         - "fmarier.org"
       auth_password_format: scram
@@ -73,41 +72,27 @@ permissions correctly:
       chown root:ejabberd /etc/ejabberd/ejabberd.pem
       chmod 640 /etc/ejabberd/ejabberd.pem
 
-4. [Improve](https://bettercrypto.org/) the client-to-server TLS configuration
-by adding `starttls_required` to this block:
+4. Improve the client-to-server and server-to-server TLS configuration:
 
+      define_macro:
+        # ...
+        'DH_FILE': "/etc/ejabberd/dhparams.pem"
+      
+      c2s_dhfile: 'DH_FILE'
+      s2s_dhfile: 'DH_FILE'
+      
       listen:
         -
           port: 5222
           ip: "::"
           module: ejabberd_c2s
-          certfile: "/etc/ejabberd/ejabberd.pem"
-          starttls: true
           starttls_required: true
-          protocol_options:
-            - "no_sslv3"
-            - "no_tlsv1"
-            - "no_tlsv1_1"
-            - "cipher_server_preference"
-          ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
-          tls_compression: false
-          dhfile: "/etc/ejabberd/dh2048.pem"
-          max_stanza_size: 65536
-          shaper: c2s_shaper
-          access: c2s
       
-      s2s_use_starttls: required_trusted
-      s2s_protocol_options:
-        - "no_sslv3"
-        - "no_tlsv1"
-        - "no_tlsv1_1"
-        - "cipher_server_preference"
-      s2s_dhfile: "/etc/ejabberd/dh2048.pem"
-      s2s_ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
+      s2s_use_starttls: required
 
-5. Create the required dh2048.pem file:
+5. Create the required `dhparams.pem` file:
 
-       openssl dhparam -out /etc/ssl/ejabberd/dh2048.pem 2048
+       openssl dhparam -out /etc/ejabberd/dhparams.pem 2048
 
 6. Restart the ejabberd daemon:
 
@@ -178,8 +163,9 @@ federate with yours by putting the following in
 
     access:
       s2s:
-        trusted_servers: allow
-        all: deny
+        - allow: trusted_servers
+        - deny
+    
     s2s_access: s2s
 
 The above was all I needed in order to be able to use the

Update for buster
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index bf36d02..2ca0678 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -4,7 +4,7 @@
 
 In order to get closer to my goal of reducing my dependence on centralized
 services, I decided to setup my own XMPP / Jabber server on a server
-running [Debian wheezy](http://www.debian.org/releases/wheezy/). I chose
+running [Debian buster](http://www.debian.org/releases/buster/). I chose
 [ejabberd](http://www.ejabberd.im/) since it was recommended by the
 [RTC Quick Start](http://www.rtcquickstart.org/) website and here's how I
 put everything together.
@@ -22,15 +22,9 @@ Then I went to get a free TLS certificate for `jabber-gw.fmarier.org` and `fmari
 
 ## Let's Encrypt
 
-The easiest way to get a certificate is to install [certbot](https://certbot.eff.org/) from
-[debian-backports](https://backports.debian.org/) by adding the following to
-your `/etc/apt/sources.list`:
+The easiest way to get a certificate is to install [certbot](https://certbot.eff.org/):
 
-    deb http://httpredir.debian.org/debian jessie-backports main contrib non-free
-
-and then installing the package:
-
-    apt update && apt install certbot
+    apt install certbot
 
 Then, shutdown your existing webserver if you have one running and request
 a cert like this:
@@ -56,10 +50,10 @@ with an
 [additional customization](http://www.die-welt.net/2013/05/wheezy-ejabberd-pidgin-and-srv-records/)
 to solve the [Pidgin](http://pidgin.im) "Not authorized" connection problems.
 
-1. Install the [package](http://packages.debian.org/wheezy/ejabberd), using
+1. Install the [package](http://packages.debian.org/stable/ejabberd), using
 "admin" as the username for the administrative user:
 
-      apt-get install ejabberd
+      apt install ejabberd
 
 2. Set the following in `/etc/ejabberd/ejabberd.yml` (don't forget the
 trailing dots!):
@@ -117,7 +111,7 @@ by adding `starttls_required` to this block:
 
 6. Restart the ejabberd daemon:
 
-       /etc/init.d/ejabberd restart
+       systemctl restart ejabberd.service
 
 7. Create a new user account for yourself:
 
@@ -134,7 +128,7 @@ to requests after running for a while:
 
       0 4 * * *      root    /bin/systemctl restart ejabberd.service
 
-Note that if you'd like to be able to talk to contact via the GMail XMPP
+Note that if you'd like to be able to talk to contacts via the GMail XMPP
 server, you will unfortunately need to change the `s2s_use_starttls`
 setting in step 3 to the following:
 

Mention what's needed to whitelist JMP.chat
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 4601649..bf36d02 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -180,7 +180,7 @@ federate with yours by putting the following in
         server:
           - "cheogram.com"
           - "conference.soprani.ca"
-          - "conversations.im"
+          - "jmp.chat"
 
     access:
       s2s:
@@ -188,4 +188,7 @@ federate with yours by putting the following in
         all: deny
     s2s_access: s2s
 
+The above was all I needed in order to be able to use the
+[JMP](https://jmp.chat/) SMS-to-XMPP service.
+
 [[!tag debian]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag sysadmin]] [[!tag xmpp]] [[!tag letsencrypt]] [[!tag ejabberd]]

Remove defunct free TLS certificate provider
diff --git a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
index 75177d6..4601649 100644
--- a/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
+++ b/posts/running-your-own-xmpp-server-debian-ubuntu.mdwn
@@ -48,20 +48,6 @@ and then restart the service:
 
 I wrote a [cronjob to renew this certificate automatically using certbot](https://feeding.cloud.geek.nz/posts/automatically-renewing-letsencrypt-certs-on-debian-using-certbot/).
 
-## StartSSL
-
-I have also used [StartSSL](https://startssl.com) successfully. This is how I generated the CSR
-(Certificate Signing Request) on a high-entropy machine:
-
-    openssl req -new -newkey rsa:2048 -sha256 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"
-
-I downloaded the signed certificate as well as the
-[StartSSL intermediate certificate](https://startssl.com/certs/) and
-[combined them](http://hyperstruct.net/2007/06/20/installing-the-startcom-ssl-certificate-in-ejabberd/)
-this way:
-
-    cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem
-
 # ejabberd installation
 
 Installing ejabberd on Debian is pretty simple and I mostly followed the

Add post about MythTV locale setting
diff --git a/posts/fixing-locale-problem-mythtv-30.mdwn b/posts/fixing-locale-problem-mythtv-30.mdwn
new file mode 100644
index 0000000..e9c4644
--- /dev/null
+++ b/posts/fixing-locale-problem-mythtv-30.mdwn
@@ -0,0 +1,65 @@
+[[!meta title="Fixing locale problem in MythTV 30"]]
+[[!meta date="2020-05-28T15:30:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+After upgrading to MythTV 30, I noticed that the interface of mythfrontend
+switched from the French language to English, despite having the following
+in my `~/.xsession` for the `mythtv` user:
+
+    export LANG=fr_CA.UTF-8
+    exec ~/bin/start_mythtv
+
+I noticed a few related error messages in `/var/log/syslog`:
+
+    mythbackend[6606]: I CoreContext mythcorecontext.cpp:272 (Init) Assumed character encoding: fr_CA.UTF-8
+    mythbackend[6606]: N CoreContext mythcorecontext.cpp:1780 (InitLocale) Setting QT default locale to FR_US
+    mythbackend[6606]: I CoreContext mythcorecontext.cpp:1813 (SaveLocaleDefaults) Current locale FR_US
+    mythbackend[6606]: E CoreContext mythlocale.cpp:110 (LoadDefaultsFromXML) No locale defaults file for FR_US, skipping
+    mythpreviewgen[9371]: N CoreContext mythcorecontext.cpp:1780 (InitLocale) Setting QT default locale to FR_US
+    mythpreviewgen[9371]: I CoreContext mythcorecontext.cpp:1813 (SaveLocaleDefaults) Current locale FR_US
+    mythpreviewgen[9371]: E CoreContext mythlocale.cpp:110 (LoadDefaultsFromXML) No locale defaults file for FR_US, skipping
+
+Searching for that non-existent `FR_US` locale, I found that [others have
+this in their logs](https://mythtv-fr.org/forums/viewtopic.php?id=2202) 
+and that it's [apparently set by
+QT](https://bugreports.qt.io/browse/QTBUG-8452?focusedCommentId=149446&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel)
+as a combination of the language and country codes.
+
+I therefore looked in the database and found the following:
+
+    MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Language';
+    +----------+------+
+    | value    | data |
+    +----------+------+
+    | Language | FR   |
+    +----------+------+
+    1 row in set (0.000 sec)
+
+    MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Country';
+    +---------+------+
+    | value   | data |
+    +---------+------+
+    | Country | US   |
+    +---------+------+
+    1 row in set (0.000 sec)
+
+which explains the nonsensical `FR_US` locale.
+
+I fixed the country setting like this:
+
+    MariaDB [mythconverg]> UPDATE settings SET data = 'CA' WHERE value = 'Country';
+    Query OK, 1 row affected (0.093 sec)
+    Rows matched: 1  Changed: 1  Warnings: 0
+
+After logging out and logging back in, the user interface of the frontend is now
+using the `fr_CA` locale again and the database setting looks good:
+
+    MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Country';
+    +---------+------+
+    | value   | data |
+    +---------+------+
+    | Country | CA   |
+    +---------+------+
+    1 row in set (0.000 sec)
+
+[[!tag mythtv]]

Comment moderation
diff --git a/posts/backing-up-to-s3-with-duplicity/comment_1_0471bf8b0d6af376a11b2f03bdafdd27._comment b/posts/backing-up-to-s3-with-duplicity/comment_1_0471bf8b0d6af376a11b2f03bdafdd27._comment
new file mode 100644
index 0000000..4446e05
--- /dev/null
+++ b/posts/backing-up-to-s3-with-duplicity/comment_1_0471bf8b0d6af376a11b2f03bdafdd27._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="2403:5800:3100:142::2494"
+ claimedauthor="Hamish"
+ subject="ListAllMyBuckets"
+ date="2020-05-28T03:37:17Z"
+ content="""
+What error do you get if you don't grant ListAllMyBuckets?
+
+I've been running duplicity to S3 without that permission for years and never encountered an issue. Though I see lots of other web sites saying the same thing. I'm actually using the duply frontend to duplicity but I think it's unlikely that makes a difference.
+"""]]

Hide mailq-check user from the GDM list
diff --git a/posts/simple-remote-mail-queue-monitoring.mdwn b/posts/simple-remote-mail-queue-monitoring.mdwn
index 91c0a2a..03b4de7 100644
--- a/posts/simple-remote-mail-queue-monitoring.mdwn
+++ b/posts/simple-remote-mail-queue-monitoring.mdwn
@@ -38,6 +38,14 @@ and then authorized my new ssh key (see next section):
     mkdir ~/.ssh/
     cat - > ~/.ssh/authorized_keys
 
+If your server also allows users to login using [GDM](https://wiki.gnome.org/Projects/GDM),
+then you'll probably want to hide that user from the list of available
+users by putting the following in `/var/lib/AccountsService/users/mailq-check`:
+
+    [User]
+    XSession=
+    SystemAccount=true
+
 # Laptop setup
 
 On my laptop, the machine from where I monitor the server's mail queue, I

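Hiding the account amounts to dropping a small ini file into AccountsService's users directory. A sketch (`DEST` defaults to a temp dir so this can be tried without root; on a real system the path is `/var/lib/AccountsService/users`):

```shell
# Hide the mailq-check account from GDM's greeter by marking it as a system
# account in AccountsService. DEST stands in for
# /var/lib/AccountsService/users so this can run unprivileged.
DEST=${DEST:-$(mktemp -d)}
cat > "$DEST/mailq-check" <<'EOF'
[User]
XSession=
SystemAccount=true
EOF
```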
Optimize image
diff --git a/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png b/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png
index 1ea243e..328da55 100644
Binary files a/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png and b/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png differ

creating tag page tags/printing
diff --git a/tags/printing.mdwn b/tags/printing.mdwn
new file mode 100644
index 0000000..d401bac
--- /dev/null
+++ b/tags/printing.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged printing"]]
+
+[[!inline pages="tagged(printing)" actions="no" archive="yes"
+feedshow=10]]

Add blog post about unprintable PDFs
diff --git a/posts/printing-hard-to-print-pdfs-on-linux.mdwn b/posts/printing-hard-to-print-pdfs-on-linux.mdwn
new file mode 100644
index 0000000..27c2a7d
--- /dev/null
+++ b/posts/printing-hard-to-print-pdfs-on-linux.mdwn
@@ -0,0 +1,58 @@
+[[!meta title="Printing hard-to-print PDFs on Linux"]]
+[[!meta date="2020-05-23T20:05:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I recently found a few PDFs which I was unable to print due to
+those files causing [insufficient printer memory
+errors](https://support.hp.com/us-en/product/model/9365402/document/c05049204):
+
+![](/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png)
+
+I found a [detailed
+explanation](https://tex.stackexchange.com/questions/71001/why-do-some-vector-graphics-included-into-a-document-force-rasterization-of-the#71050)
+of what might be causing this which pointed the finger at transparent
+images, a PDF 1.4 feature which apparently requires a more recent version of
+[PostScript](https://en.wikipedia.org/wiki/PostScript) than what my printer
+supports.
+
+Using [Okular](https://okular.kde.org/)'s *Force rasterization* option
+(accessible via the print dialog) does work by essentially rendering
+everything ahead of time and outputting a big image to be sent to the
+printer. The quality is not very good, however.
+
+# Converting a PDF to DjVu
+
+The [best solution I found](https://superuser.com/a/1489923) makes use of a
+different file format: [.djvu](https://en.wikipedia.org/wiki/DjVu)
+
+Such files are not PDFs, but can still be opened in [Evince](https://wiki.gnome.org/Apps/Evince) and
+[Okular](https://okular.kde.org/), as well as in the dedicated
+[DjVuLibre](http://djvu.sourceforge.net/) application.
+
+As an example, I was unable to print page 11 of [this
+paper](https://arxiv.org/pdf/2002.04049.pdf). Using `pdfinfo`, I found that
+it is in PDF 1.5 format and so the transparency effects could be the cause
+of the out-of-memory printer error.
+
+Here's how I converted it to a high-quality DjVu file I could print without
+problems using Evince:
+
+    pdf2djvu -d 1200 2002.04049.pdf > 2002.04049-1200dpi.djvu
+
+# Converting a PDF to PDF 1.3
+
+I also tried the DjVu trick on a [different unprintable
+PDF](https://www.boardgamegeek.com/filepage/113639/dead-winter-official-faq-v11),
+but it failed to print, even after lowering the resolution to 600dpi:
+
+    pdf2djvu -d 600 dow-faq_v1.1.pdf > dow-faq_v1.1-600dpi.djvu
+
+In this case, I used a different technique and simply converted the PDF to
+version 1.3 (from version 1.6 according to `pdfinfo`):
+
+    ps2pdf13 -r1200x1200 dow-faq_v1.1.pdf dow-faq_v1.1-1200dpi.pdf
+
+This eliminates the problematic transparency and rasterizes the elements
+that version 1.3 doesn't support.
+
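The version check described above can be sketched as a small shell helper. The `pick_strategy` function and the "convert anything PDF 1.4 or newer" rule are my own framing of the approach in this post (PDF 1.4 is where transparency appeared), not part of any existing tool:

```shell
#!/bin/sh
# Pick a conversion strategy based on the version string from the
# "PDF version" line that `pdfinfo` prints (e.g. "PDF version: 1.6").
pick_strategy() {
    version="$1"
    major="${version%%.*}"
    minor="${version#*.}"
    # PDF 1.4 introduced transparency, which older PostScript printers
    # may not handle, so anything 1.4 or newer is a conversion candidate.
    if [ "$major" -gt 1 ] || [ "$minor" -ge 4 ]; then
        echo "convert"       # e.g. pdf2djvu -d 1200 in.pdf > out.djvu
    else
        echo "print-as-is"
    fi
}

pick_strategy "1.6"
pick_strategy "1.3"
```

For a candidate file, either the DjVu route or the `ps2pdf13` downgrade shown above can then be applied.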
+[[!tag debian]] [[!tag printing]]
diff --git a/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png b/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png
new file mode 100644
index 0000000..1ea243e
Binary files /dev/null and b/posts/printing-hard-to-print-pdfs-on-linux/insufficient-printer-memory.png differ

Add Apache SSI post
diff --git a/posts/displaying-ip-address-apache-server-side-includes.mdwn b/posts/displaying-ip-address-apache-server-side-includes.mdwn
new file mode 100644
index 0000000..bc95e85
--- /dev/null
+++ b/posts/displaying-ip-address-apache-server-side-includes.mdwn
@@ -0,0 +1,97 @@
+[[!meta title="Displaying client IP address using Apache Server-Side Includes"]]
+[[!meta date="2020-05-18T14:50:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+If you use a [Dynamic DNS
+setup](https://feeding.cloud.geek.nz/posts/dynamic-dns-on-own-domain/) to
+reach machines which are not behind a stable IP address, you will likely
+have a need to probe these machines' public IP addresses. One option is to
+use an insecure service like Oracle's <http://checkip.dyndns.com/> which
+echoes back your client IP, but you can also do this on your own server if
+you have one.
+
+There are multiple options to do this, like writing a CGI or PHP script, but
+those are fairly heavyweight if that's all you need [mod_cgi](https://httpd.apache.org/docs/current/mod/mod_cgi.html) or
+[PHP](https://cwiki.apache.org/confluence/display/HTTPD/PHP) for. Instead, I
+decided to use Apache's built-in [Server-Side
+Includes](https://httpd.apache.org/docs/current/howto/ssi.html).
+
+## Apache configuration
+
+Start by turning on the [include
+filter](https://httpd.apache.org/docs/current/mod/mod_include.html) by
+adding the following in `/etc/apache2/conf-available/ssi.conf`:
+
+    AddType text/html .shtml
+    AddOutputFilter INCLUDES .shtml
+
+and making that configuration file active:
+
+    a2enconf ssi
+
+Then, find the vhost file where you want to enable SSI and add the following
+options to a `Location` or `Directory` section:
+
+    <Location /ssi_files>
+        Options +IncludesNOEXEC
+        SSLRequireSSL
+        Header set Content-Security-Policy "default-src 'none'"
+        Header set X-Content-Type-Options "nosniff"
+    </Location>
+
+before adding the necessary modules:
+
+    a2enmod headers
+    a2enmod include
+
+and restarting Apache:
+
+    apache2ctl configtest && systemctl restart apache2.service
+
+## Create an `shtml` page
+
+With the web server ready to process SSI instructions, the following HTML
+blurb can be used to display the client IP address:
+
+    <!--#echo var="REMOTE_ADDR" -->
+
+or any other [built-in
+variable](https://httpd.apache.org/docs/current/expr.html#vars).
+
+Note that you don't need to write a valid HTML document for the variable to be
+substituted and so the above one-liner is all I use on my server.
+
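If you do want the page to be a valid HTML document, a hypothetical `whatsmyip.shtml` could look like this instead:

```html
<!DOCTYPE html>
<html>
  <head><title>What is my IP?</title></head>
  <body>
    <!--#echo var="REMOTE_ADDR" -->
  </body>
</html>
```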
+## Security concerns
+
+The first thing to note is that the configuration section uses the
+`IncludesNOEXEC` option in order to disable [arbitrary command
+execution](https://httpd.apache.org/docs/current/howto/ssi.html#exec) via
+SSI. In addition, you can also make sure that the `cgi` module is disabled
+since that's a dependency of the more dangerous side of SSI:
+
+    a2dismod cgi
+
+Of course, if you rely on this IP address to be accurate, for example
+because you'll be putting it in your DNS, then you should make sure that you
+**only serve this page over HTTPS**, which can be enforced via the
+[`SSLRequireSSL`
+directive](https://httpd.apache.org/docs/current/mod/mod_ssl.html#sslrequiressl).
+
+I included two other headers in the above vhost config
+([`Content-Security-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP)
+and
+[`X-Content-Type-Options`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options))
+in order to limit the damage that could be done in case a malicious file was
+accidentally dropped in that directory.
+
+Finally, I suggest making sure that **only the `root` user has write
+access to the directory** which has server-side includes enabled:
+
+    $ ls -la /var/www/ssi_includes/
+    total 12
+    drwxr-xr-x  2 root     root     4096 May 18 15:58 .
+    drwxr-xr-x 16 root     root     4096 May 18 15:40 ..
+    -rw-r--r--  1 root     root        0 May 18 15:46 index.html
+    -rw-r--r--  1 root     root       32 May 18 15:58 whatsmyip.shtml
+
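Setting up such a directory from scratch takes only a few commands. This sketch uses a temporary directory in place of the real path, and skips the `chown root:root` step since that requires running as root:

```shell
#!/bin/sh
# Create a directory for server-side includes that is writable only by
# its owner and world-readable, then drop in the .shtml one-liner.
set -e
dir=$(mktemp -d)          # stand-in for the real SSI directory
chmod 755 "$dir"
printf '<!--#echo var="REMOTE_ADDR" -->\n' > "$dir/whatsmyip.shtml"
chmod 644 "$dir/whatsmyip.shtml"
ls -l "$dir"
```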
+[[!tag apache]] [[!tag dns]] [[!tag debian]]
diff --git a/posts/dynamic-dns-on-own-domain.mdwn b/posts/dynamic-dns-on-own-domain.mdwn
index 6fed580..fd95874 100644
--- a/posts/dynamic-dns-on-own-domain.mdwn
+++ b/posts/dynamic-dns-on-own-domain.mdwn
@@ -76,6 +76,10 @@ Note that you do need to change the default update interval or the
 `checkip.dyndns.com` server [will ban your IP
 address](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489997).
 
+Alternatively, just [set up your own lightweight IP address echoing
+service](https://feeding.cloud.geek.nz/posts/displaying-ip-address-apache-server-side-includes/)
+and avoid the problem entirely.
+
 # Testing
 
 To test that the client software is working, wait 6 minutes (there is an

Remove Brave and VoIP.ms referral links
As described in d612135d6e5cb20b475c055cbbf27b0dfdae5fee, using referral links
might involve following some jurisdiction-specific rules. This is too much
hassle for the amount of (minimal or non-existent) money.
This reverts commits 71fe2a2c12ce7d6bf3761b339337a249d6f15130 and
07aa6e5862282a2aebfb1ffa95320582fab9aed0.
diff --git a/posts/how-to-get-direct-webrtc-connection-between-computers.mdwn b/posts/how-to-get-direct-webrtc-connection-between-computers.mdwn
index 6608027..da4f9e7 100644
--- a/posts/how-to-get-direct-webrtc-connection-between-computers.mdwn
+++ b/posts/how-to-get-direct-webrtc-connection-between-computers.mdwn
@@ -32,7 +32,7 @@ Note that this test page makes use of a Google TURN server which is locked
 to particular HTTP referrers and so you'll need to disable privacy features
 that might interfere with this:
 
-- [Brave](https://brave.com/clo187): Disable Shields entirely for that page
+- [Brave](https://brave.com/): Disable Shields entirely for that page
   (Simple view) or *allow all cookies* for that page (Advanced view).
 
 ![](/posts/how-to-get-direct-webrtc-connection-between-computers/brave-shields-cookies.png)
diff --git a/posts/making-sip-calls-voipms-without-pstn.mdwn b/posts/making-sip-calls-voipms-without-pstn.mdwn
index 09fbc0d..ee96aee 100644
--- a/posts/making-sip-calls-voipms-without-pstn.mdwn
+++ b/posts/making-sip-calls-voipms-without-pstn.mdwn
@@ -2,7 +2,7 @@
 [[!meta date="2020-03-05T19:00:00.000-08:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
-If you want to reach a [VoIP.ms](https://voip.ms/en/invite/MjE0NTI2) subscriber from
+If you want to reach a [VoIP.ms](https://voip.ms/) subscriber from
 [Asterisk](https://www.asterisk.org/) without using the
 [PSTN](https://en.wikipedia.org/wiki/Public_switched_telephone_network),
 there is a way to do so via [SIP
diff --git a/posts/passwordless-restricted-guest-account-ubuntu.mdwn b/posts/passwordless-restricted-guest-account-ubuntu.mdwn
index 5ad5b4a..d1bcfe4 100644
--- a/posts/passwordless-restricted-guest-account-ubuntu.mdwn
+++ b/posts/passwordless-restricted-guest-account-ubuntu.mdwn
@@ -36,7 +36,7 @@ gnome-control-center. I set the following in the privacy section:
 
 ![](/posts/passwordless-restricted-guest-account-ubuntu/privacy-settings.png)
 
-Then I replaced Firefox with [Brave](https://brave.com/clo187) in the sidebar,
+Then I replaced Firefox with [Brave](https://brave.com) in the sidebar,
 set it as the default browser in gnome-control-center:
 
 ![](/posts/passwordless-restricted-guest-account-ubuntu/default-applications.png)
diff --git a/posts/sip-encryption-on-voip-ms.mdwn b/posts/sip-encryption-on-voip-ms.mdwn
index 7a13599..0d99118 100644
--- a/posts/sip-encryption-on-voip-ms.mdwn
+++ b/posts/sip-encryption-on-voip-ms.mdwn
@@ -2,7 +2,7 @@
 [[!meta date="2019-07-06T16:00:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
-My [VoIP provider](https://voip.ms/en/invite/MjE0NTI2) recently added [support for
+My [VoIP provider](https://voip.ms) recently added [support for
 TLS/SRTP-based call
 encryption](https://wiki.voip.ms/article/Call_Encryption_-_TLS/SRTP). Here's
 what I did to enable this feature on my
diff --git a/posts/using-gogo-wifi-linux.mdwn b/posts/using-gogo-wifi-linux.mdwn
index 961d4ba..8dfe92f 100644
--- a/posts/using-gogo-wifi-linux.mdwn
+++ b/posts/using-gogo-wifi-linux.mdwn
@@ -12,7 +12,7 @@ however possible to work-around this restriction by faking your browser
 
 I tried the [User-Agent Switcher for
 Chrome](https://chrome.google.com/webstore/detail/user-agent-switcher-for-c/djflhoibgkdhkhhcedjiklpkjnoahfmg)
-extension on Chrome and [Brave](https://brave.com/clo187) but it didn't work
+extension on Chrome and [Brave](https://brave.com/) but it didn't work
 for some reason.
 
 What did work was using Firefox and adding the following prefs in

Use a referral URL when linking to VoIP.ms
diff --git a/posts/making-sip-calls-voipms-without-pstn.mdwn b/posts/making-sip-calls-voipms-without-pstn.mdwn
index ee96aee..09fbc0d 100644
--- a/posts/making-sip-calls-voipms-without-pstn.mdwn
+++ b/posts/making-sip-calls-voipms-without-pstn.mdwn
@@ -2,7 +2,7 @@
 [[!meta date="2020-03-05T19:00:00.000-08:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
-If you want to reach a [VoIP.ms](https://voip.ms/) subscriber from
+If you want to reach a [VoIP.ms](https://voip.ms/en/invite/MjE0NTI2) subscriber from
 [Asterisk](https://www.asterisk.org/) without using the
 [PSTN](https://en.wikipedia.org/wiki/Public_switched_telephone_network),
 there is a way to do so via [SIP
diff --git a/posts/sip-encryption-on-voip-ms.mdwn b/posts/sip-encryption-on-voip-ms.mdwn
index 0d99118..7a13599 100644
--- a/posts/sip-encryption-on-voip-ms.mdwn
+++ b/posts/sip-encryption-on-voip-ms.mdwn
@@ -2,7 +2,7 @@
 [[!meta date="2019-07-06T16:00:00.000-07:00"]]
 [[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
 
-My [VoIP provider](https://voip.ms) recently added [support for
+My [VoIP provider](https://voip.ms/en/invite/MjE0NTI2) recently added [support for
 TLS/SRTP-based call
 encryption](https://wiki.voip.ms/article/Call_Encryption_-_TLS/SRTP). Here's
 what I did to enable this feature on my

Add snapshot listing command and cleanup local cache during backup
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index e251d69..43a42bb 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -181,6 +181,11 @@ to reuse on all of my computers:
     	RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL ls latest
     	exit 0
     
+    # Show list of available snapshots
+    elif [ "$1" = "--list-snapshots" ]; then
+	    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL snapshots
+	    exit 0
+    
     # Restore the given file
     elif [ "$1" = "--file-to-restore" ]; then
     	if [ "$2" = "" ]; then
@@ -217,8 +222,8 @@ to reuse on all of my computers:
     /sbin/fdisk -l /dev/sda > $PARTITION_FILE
     /sbin/fdisk -l /dev/sdb >> $PARTITION_FILE
     
-    # Do the actual backup using Duplicity
-    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE
+    # Do the actual backup
+    RESTIC_PASSWORD=$PASSWORD restic --quiet --cleanup-cache -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE
 
 I run it with the following cronjob in `/etc/cron.d/backups`:
 

Use hdparm to spin down idle disks
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index 5f1136b..e251d69 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -64,6 +64,30 @@ and added the following to `/etc/fstab`:
 
     /dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2
 
+To reduce unnecessary noise and reduce power consumption, I also installed
+[hdparm](https://sourceforge.net/projects/hdparm/):
+
+    apt install hdparm
+
+and configured all spinning drives to spin down after being idle for 10
+minutes by putting the following in `/etc/hdparm.conf`:
+
+    /dev/sdb {
+           spindown_time = 120
+    }
+    
+    /dev/sdc {
+           spindown_time = 120
+    }
+    
+    /dev/sdd {
+           spindown_time = 120
+    }
+
+and then reloaded the configuration:
+
+     /usr/lib/pm-utils/power.d/95hdparm-apm resume
+
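As an aside, the `spindown_time` value of 120 corresponds to 10 minutes because, for values from 1 to 240, `hdparm -S` counts in units of 5 seconds (values above 240 use a different encoding). A quick sketch of the conversion, with a function name of my own choosing:

```shell
#!/bin/sh
# Convert a desired idle time in minutes into an hdparm spindown_time
# value, valid for the 1-240 range where each unit means 5 seconds.
minutes_to_spindown() {
    echo $(( $1 * 60 / 5 ))
}

minutes_to_spindown 10   # 120, the value used in the config above
```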
 Finally I setup [smartmontools](https://www.smartmontools.org/) by putting
 the following in `/etc/smartd.conf`:
 

Add a note that a user shell is not needed anymore
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index adfcba8..5f1136b 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -96,6 +96,7 @@ the GnuBee:
     adduser machine1
     adduser machine1 sshuser
     adduser machine1 sftponly
+    chsh machine1 -s /bin/false
 
 and then matching directories under `/mnt/data/home/`:
 
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index 436ae94..be2ac3f 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -126,6 +126,7 @@ sftp:
 Then for each user, we need to do the following:
 
     adduser user1 sftponly
+    chsh user1 -s /bin/false
     mkdir -p /mnt/data/home/user1
     chown user1:user1 /mnt/data/home/user1
     chmod 700 /mnt/data/home/user1

Update my restricted shell ssh instructions and use them on the GnuBee
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index e70567a..adfcba8 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -88,15 +88,20 @@ same backup finished in about half the time.
 
 ### User and ssh setup
 
-I created a user account for each machine needing to backup onto the GnuBee:
+After [hardening the ssh
+setup](https://feeding.cloud.geek.nz/posts/hardening-ssh-servers/) as I
+usually do, I created a user account for each machine needing to backup onto
+the GnuBee:
 
     adduser machine1
     adduser machine1 sshuser
+    adduser machine1 sftponly
 
-and then a matching directory under `/mnt/data/`:
+and then matching directories under `/mnt/data/home/`:
 
-    mkdir /mnt/data/machine1
-    chown machine1:machine1 /mnt/data/machine1    
+    mkdir /mnt/data/home/machine1
+    chown machine1:machine1 /mnt/data/home/machine1
+    chmod 700 /mnt/data/home/machine1
 
 Then I created a custom ssh key for each machine:
 
@@ -120,14 +125,12 @@ up the ssh connection](https://gist.github.com/KartikTalwar/4393116) as much
 as possible given that the [GnuBee has a very small RAM
 bandwidth](https://groups.google.com/d/msg/gnubee/5_nKjgmKSoY/a0ER5fEcBAAJ).
 
-On the GnuBee, I switched to the [internal sftp
+Another performance-related change I made on the GnuBee was switching to the [internal sftp
 server](https://serverfault.com/questions/660160/openssh-difference-between-internal-sftp-and-sftp-server#660325)
 by putting the following in `/etc/ssh/sshd_config`:
 
     Subsystem      sftp    internal-sftp
 
-to hopefully improve the performance.
-
 ### Restic script
 
 After reading through the excellent [restic
@@ -139,7 +142,7 @@ to reuse on all of my computers:
     # Configure for each host
     PASSWORD="XXXX"  # use `pwgen -s 64` to generate a good random password
     BACKUP_HOME="/root/backup"
-    REMOTE_URL="sftp:foobar.local:/mnt/data/machine1"
+    REMOTE_URL="sftp:foobar.local:"
     RETENTION_POLICY="--keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 2"
     
     # Internal variables
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index 17f136a..436ae94 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -89,22 +89,57 @@ servers and use small
 [scripts](https://github.com/fmarier/user-scripts/blob/master/spascp)
 to connect to them.
 
-# Using restricted shells
+# Restricting shell access
 
 For those users who only need an ssh account on the server in order to
-transfer files (using `scp` or `rsync`), it's a good idea to set their shell
-(via [chsh](http://linux.die.net/man/1/chsh)) to a restricted one like
-[rssh](http://www.pizzashack.org/rssh/).
-
-Should they attempt to log into the server, these users will be greeted with
-the following error message:
-
-    This account is restricted by rssh.
-    Allowed commands: rsync 
-    
-    If you believe this is in error, please contact your system administrator.
-    
-    Connection to server.example.com closed.
+transfer files (using `scp` or `rsync`), it's a good idea to restrict their
+access further. I used to switch these users' shell (via [chsh](http://linux.die.net/man/1/chsh)) to a restricted one like
+[rssh](http://www.pizzashack.org/rssh/), but that project has been
+[abandoned](https://tracker.debian.org/news/1033905/removed-234-12-from-unstable/).
+
+I now use a [different
+approach](https://www.allthingsdigital.nl/2013/05/12/setting-up-an-sftp-only-account-with-openssh/)
+which consists of using an essentially empty chroot for these users and
+limiting them to `internal-sftp` by putting the following in
+`/etc/ssh/sshd_config`:
+
+    Match Group sftponly
+      ForceCommand internal-sftp
+      ChrootDirectory /mnt/data
+
+creating a group:
+
+    addgroup sftponly
+
+and a base chroot directory:
+
+    mkdir -p /mnt/data/home
+
+Note that the base directory, and each parent directory all the way to the
+root directory, must be owned by `root:root` (user **and** group) otherwise
+you'll see an unhelpful error message like this when you try to connect via
+sftp:
+
+    $ sftp user1@server.example
+    client_loop: send disconnect: Broken pipe
+
+Then for each user, we need to do the following:
+
+    adduser user1 sftponly
+    mkdir -p /mnt/data/home/user1
+    chown user1:user1 /mnt/data/home/user1
+    chmod 700 /mnt/data/home/user1
+
+before restarting the ssh daemon:
+
+    systemctl restart sshd.service
+
+Should one of these users attempt to connect via ssh instead of sftp, they
+will see the following:
+
+    $ ssh user1@server.example
+    This service allows sftp connections only.
+    Connection to server.example closed.
 
 # Restricting authorized keys to certain IP addresses
 

Switch to internal-sftp server for ssh
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index fd28a06..e70567a 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -120,6 +120,14 @@ up the ssh connection](https://gist.github.com/KartikTalwar/4393116) as much
 as possible given that the [GnuBee has a very small RAM
 bandwidth](https://groups.google.com/d/msg/gnubee/5_nKjgmKSoY/a0ER5fEcBAAJ).
 
+On the GnuBee, I switched to the [internal sftp
+server](https://serverfault.com/questions/660160/openssh-difference-between-internal-sftp-and-sftp-server#660325)
+by putting the following in `/etc/ssh/sshd_config`:
+
+    Subsystem      sftp    internal-sftp
+
+to hopefully improve the performance.
+
 ### Restic script
 
 After reading through the excellent [restic
diff --git a/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment b/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment
deleted file mode 100644
index c4da712..0000000
--- a/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment
+++ /dev/null
@@ -1,7 +0,0 @@
-[[!comment format=mdwn
- ip="72.239.48.49"
- subject="further hardening"
- date="2020-05-05T11:25:39Z"
- content="""
-consider using/mentioning the sshd internal-sftp option for further security from backup accounts
-"""]]
diff --git a/posts/hardening-ssh-servers.mdwn b/posts/hardening-ssh-servers.mdwn
index 7680ac3..17f136a 100644
--- a/posts/hardening-ssh-servers.mdwn
+++ b/posts/hardening-ssh-servers.mdwn
@@ -47,6 +47,12 @@ which can be done by commenting out this line:
 
     #Subsystem     sftp    /usr/lib/openssh/sftp-server
 
+On the other hand, if you do need it, it's generally better to replace it
+with the [internal sftp
+server](https://serverfault.com/questions/660160/openssh-difference-between-internal-sftp-and-sftp-server#660325):
+
+    Subsystem     sftp    internal-sftp
+
 # Whitelist approach to giving users ssh access
 
 To ensure that only a few users have ssh access to the server and that newly

Comment moderation
diff --git a/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment b/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment
new file mode 100644
index 0000000..c4da712
--- /dev/null
+++ b/posts/backing-up-to-gnubee2/comment_1_fc4bfe71d22f6d6f3682674e1a839fbf._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="72.239.48.49"
+ subject="further hardening"
+ date="2020-05-05T11:25:39Z"
+ content="""
+consider using/mentioning the sshd internal-sftp option for further security from backup accounts
+"""]]

Comment moderation
diff --git a/posts/lxc-setup-on-debian-stretch/comment_2_ba09c54f7093eda0e92cb3c0859e81cc._comment b/posts/lxc-setup-on-debian-stretch/comment_2_ba09c54f7093eda0e92cb3c0859e81cc._comment
new file mode 100644
index 0000000..9e5ae05
--- /dev/null
+++ b/posts/lxc-setup-on-debian-stretch/comment_2_ba09c54f7093eda0e92cb3c0859e81cc._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="167.62.217.58"
+ claimedauthor="Sergio_L"
+ subject="How about bridging"
+ date="2020-04-24T19:09:30Z"
+ content="""
+Thank you very much for the article, following it was the first time I got my containers to have internet access, I was trying to expose them to my DHCP before with no success (using debian stretch) so If you feel like adding that option to your article I'll be very grateful! 
+"""]]
diff --git a/posts/using-gogo-wifi-linux/comment_1_bc941207d2f8703bf0e0c20c2d5f988c._comment b/posts/using-gogo-wifi-linux/comment_1_bc941207d2f8703bf0e0c20c2d5f988c._comment
new file mode 100644
index 0000000..f6bdb21
--- /dev/null
+++ b/posts/using-gogo-wifi-linux/comment_1_bc941207d2f8703bf0e0c20c2d5f988c._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ ip="2601:646:202:a820:2ceb:9c42:581:3448"
+ claimedauthor="Stefano Rivera"
+ url="https://stefanorivera.com/"
+ subject="comment 1"
+ date="2020-04-28T21:19:43Z"
+ content="""
+Hrm, I've never found that necessary on Gogo on Delta.
+
+I just browse directly to http://airborne.gogoinflight.com/
+Or, more recently, https://airbornesecure.gogoinflight.com/
+"""]]

Add link to ionice, nice, and nocache blog post.
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
index c316cb3..fd28a06 100644
--- a/posts/backing-up-to-gnubee2.mdwn
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -189,6 +189,8 @@ I run it with the following cronjob in `/etc/cron.d/backups`:
     30 8 * * *    root  ionice nice nocache /root/backup/backup-machine1-to-foobar
     30 2 * * Sun  root  ionice nice nocache /root/backup/backup-machine1-to-foobar --prune
 
+in a way that [doesn't impact the rest of the system too much](https://feeding.cloud.geek.nz/posts/three-wrappers-to-run-commands-without-impacting-the-rest-of-the-system/).
+
 Finally, I printed a copy of each of my backup scripts, using
 [enscript](https://www.gnu.org/software/enscript/), to stash in a safe place:
 

creating tag page tags/restic
diff --git a/tags/restic.mdwn b/tags/restic.mdwn
new file mode 100644
index 0000000..0d2f649
--- /dev/null
+++ b/tags/restic.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged restic"]]
+
+[[!inline pages="tagged(restic)" actions="no" archive="yes"
+feedshow=10]]

Add GnuBee backup post
diff --git a/posts/backing-up-to-gnubee2.mdwn b/posts/backing-up-to-gnubee2.mdwn
new file mode 100644
index 0000000..c316cb3
--- /dev/null
+++ b/posts/backing-up-to-gnubee2.mdwn
@@ -0,0 +1,200 @@
+[[!meta title="Backing up to a GnuBee PC 2"]]
+[[!meta date="2020-05-02T18:05:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+After [installing Debian buster on my
+GnuBee](https://feeding.cloud.geek.nz/posts/installing-debian-buster-on-gnubee2/), 
+I set it up for receiving backups from my other computers.
+
+## Software setup
+
+I started by configuring it [like a typical
+server](https://feeding.cloud.geek.nz/posts/usual-server-setup/) but without
+a few packages that either take a lot of memory or CPU:
+
+- [fail2ban](https://packages.debian.org/buster/fail2ban)
+- [rkhunter](https://packages.debian.org/buster/rkhunter)
+- [sysstat](https://packages.debian.org/buster/sysstat)
+
+I changed the default hostname:
+
+- `/etc/hostname`: `foobar`
+- `/etc/mailname`: `foobar.example.com`
+- `/etc/hosts`: `127.0.0.1  foobar.example.com foobar localhost`
+
+and then installed the `avahi-daemon` package to be able to reach this box
+using `foobar.local`.
+
+I noticed the presence of a [world-writable
+directory](https://github.com/neilbrown/gnubee-tools/issues/23) and so I
+tightened the security of some of the default mount points by putting the following
+in `/etc/rc.local`:
+
+    mount -o remount,nodev,nosuid /etc/network
+    mount -o remount,nodev,nosuid /lib/modules
+    chmod 755 /etc/network
+    exit 0
+
+## Hardware setup
+
+My OS drive (`/dev/sda`) is a small SSD so that the GnuBee can run silently when the
+spinning disks aren't needed. To hold the backup data on the other hand, I
+got three 4-TB drives drives which I setup in a
+[RAID-5](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5) array.
+If the data were valuable, I'd use
+[RAID-6](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_6) instead
+since it can survive two drives failing at the same time, but in this case
+since it's only holding backups, I'd have to lose the original machine at
+the same time as two of the three drives, a very unlikely scenario.
+
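The capacity trade-off behind that choice is simple arithmetic: RAID-5 spends one drive's worth of space on parity and survives a single failure, while RAID-6 spends two and survives a double failure. A sketch, with a function name of my own:

```shell
#!/bin/sh
# Usable capacity for RAID-5 vs RAID-6 with n equally-sized drives:
# RAID-5 yields (n-1) drives of space, RAID-6 yields (n-2).
raid_usable_tb() {
    level="$1"; n="$2"; size_tb="$3"
    case "$level" in
        5) echo $(( (n - 1) * size_tb )) ;;
        6) echo $(( (n - 2) * size_tb )) ;;
    esac
}

raid_usable_tb 5 3 4   # three 4-TB drives in RAID-5: 8 TB usable
raid_usable_tb 6 3 4   # the same drives in RAID-6: 4 TB usable
```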
+I created new GPT partition tables on `/dev/sdb`, `/dev/sdc`, `/dev/sdd`
+and used `fdisk` to create a single partition of `type 29` (Linux RAID) on
+each of them.
+
+Then I created the RAID array:
+
+    mdadm /dev/md127 --create -n 3 --level=raid5 -a /dev/sdb1 /dev/sdc1 /dev/sdd1
+
+and waited more than 24 hours for that operation to finish. Next, I
+formatted the array:
+
+    mkfs.ext4 -m 0 /dev/md127
+
+and added the following to `/etc/fstab`:
+
+    /dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2
+
+Finally I setup [smartmontools](https://www.smartmontools.org/) by putting
+the following in `/etc/smartd.conf`:
+
+    /dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
+    /dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03)
+    /dev/sdc -a -o on -S on -s (S/../.././02|L/../../6/03)
+    /dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03)
+
+and restarting the daemon:
+
+    systemctl restart smartd.service
+
+## Backup setup
+
+I started by using [duplicity](http://duplicity.nongnu.org/) since I have
+been using that tool for many years, but a 190GB backup took around 15 hours
+on the GnuBee with gigabit ethernet.
+
+After a [friend](https://stumbles.id.au/) suggested it, I took a look at
+[restic](https://restic.net) and I have to say that I am impressed. The
+same backup finished in about half the time.
+
+### User and ssh setup
+
+I created a user account for each machine needing to backup onto the GnuBee:
+
+    adduser machine1
+    adduser machine1 sshuser
+
+and then a matching directory under `/mnt/data/`:
+
+    mkdir /mnt/data/machine1
+    chown machine1:machine1 /mnt/data/machine1    
+
+Then I created a custom ssh key for each machine:
+
+    ssh-keygen -f /root/.ssh/foobar_backups -t ed25519
+
+and placed it in `/home/machine1/.ssh/authorized_keys` on the GnuBee.
+
+On each machine, I added the following to `/root/.ssh/config`:
+
+    Host foobar.local
+        User machine1
+        Compression no
+        Ciphers aes128-ctr
+        IdentityFile /root/backup/foobar_backups
+        IdentitiesOnly yes
+        ServerAliveInterval 60
+        ServerAliveCountMax 240
+
+The reason for setting the ssh cipher and disabling compression is to [speed
+up the ssh connection](https://gist.github.com/KartikTalwar/4393116) as much
+as possible given that the [GnuBee has a very small RAM
+bandwidth](https://groups.google.com/d/msg/gnubee/5_nKjgmKSoY/a0ER5fEcBAAJ).
+
+### Restic script
+
+After reading through the excellent [restic
+documentation](https://restic.readthedocs.io/en/stable/), I wrote the
+following backup script, based on my [old duplicity
+script](https://sources.debian.org/src/duplicity/0.8.11.1612-1/debian/examples/system-backup/),
+to reuse on all of my computers:
+
+    # Configure for each host
+    PASSWORD="XXXX"  # use `pwgen -s 64` to generate a good random password
+    BACKUP_HOME="/root/backup"
+    REMOTE_URL="sftp:foobar.local:/mnt/data/machine1"
+    RETENTION_POLICY="--keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 2"
+    
+    # Internal variables
+    SSH_IDENTITY="IdentityFile=$BACKUP_HOME/foobar_backups"
+    EXCLUDE_FILE="$BACKUP_HOME/exclude"
+    PKG_FILE="$BACKUP_HOME/dpkg-selections"
+    PARTITION_FILE="$BACKUP_HOME/partitions"
+    
+    # If the list of files has been requested, only do that
+    if [ "$1" = "--list-current-files" ]; then
+    	RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL ls latest
+    	exit 0
+    
+    # Restore the given file
+    elif [ "$1" = "--file-to-restore" ]; then
+    	if [ "$2" = "" ]; then
+    		echo "You must specify a file to restore"
+    		exit 2
+    	fi
+    	RESTORE_DIR="$(mktemp -d ./restored_XXXXXXXX)"
+    	RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL restore latest --target "$RESTORE_DIR" --include "$2" || exit 1
+    	echo "$2 was restored to $RESTORE_DIR"
+    	exit 0
+    
+    # Delete old backups
+    elif [ "$1" = "--prune" ]; then
+        # Expire old backups
+        RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL forget $RETENTION_POLICY
+    
+        # Delete files which are no longer necessary (slow)
+        RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL prune
+        exit 0
+    
+    # Catch invalid arguments
+    elif [ "$1" != "" ]; then
+    	echo "Invalid argument: $1"
+    	exit 1
+    fi
+    
+    # Check the integrity of existing backups
+    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL check || exit 1
+    
+    # Dump list of Debian packages
+    dpkg --get-selections > $PKG_FILE
+    
+    # Dump partition tables from harddrives
+    /sbin/fdisk -l /dev/sda > $PARTITION_FILE
+    /sbin/fdisk -l /dev/sdb >> $PARTITION_FILE
+    
+    # Do the actual backup using Duplicity
+    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE
+
+I run it with the following cronjob in `/etc/cron.d/backups`:
+
+    30 8 * * *    root  ionice nice nocache /root/backup/backup-machine1-to-foobar
+    30 2 * * Sun  root  ionice nice nocache /root/backup/backup-machine1-to-foobar --prune
+
+Finally, I printed a copy of each of my backup scripts, using
+[enscript](https://www.gnu.org/software/enscript/), to stash in a safe place:
+

(Diff truncated)
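The `RETENTION_POLICY` flags in the backup script above roughly mean "keep the newest snapshot in each of the last 7 days, 4 weeks, 12 months and 2 years". A toy model of that bucketing logic (a sketch only, not restic's actual implementation):

```python
from datetime import date

def keep(snapshots, daily=7, weekly=4, monthly=12, yearly=2):
    """Toy model of restic's --keep-daily/--keep-weekly/... flags:
    keep the newest snapshot in each of the N most recent buckets."""
    kept = set()
    buckets = [
        (daily, lambda d: (d.year, d.month, d.day)),
        (weekly, lambda d: tuple(d.isocalendar()[:2])),  # (year, ISO week)
        (monthly, lambda d: (d.year, d.month)),
        (yearly, lambda d: (d.year,)),
    ]
    for count, bucket_of in buckets:
        seen = []
        for snap in sorted(snapshots, reverse=True):  # newest first
            bucket = bucket_of(snap)
            if bucket not in seen:
                seen.append(bucket)
                if len(seen) <= count:
                    kept.add(snap)
    return kept

# 23 consecutive daily snapshots
snaps = [date(2020, 4, day) for day in range(1, 24)]
print(sorted(keep(snaps)))
```

With daily snapshots, this keeps the last 7 days plus the newest snapshot of each of the 4 most recent ISO weeks; everything else becomes eligible for `restic prune`.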
Something else is needed to turn off etckeeper auto-commits
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index ec41847..a29424f 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -48,6 +48,11 @@ and this in `/etc/.git/config`:
     [commit]
         gpgsign = false
 
+Note that in order to fully turn off auto-commits, it's also necessary
+to run the following:
+
+    systemctl disable etckeeper.timer
+
 To get more control over the various packages I install, I change the
 default debconf level to medium:
 

Add instructions for fixing the serial console output
diff --git a/posts/installing-debian-buster-on-gnubee2.mdwn b/posts/installing-debian-buster-on-gnubee2.mdwn
index 7a22444..04185d0 100644
--- a/posts/installing-debian-buster-on-gnubee2.mdwn
+++ b/posts/installing-debian-buster-on-gnubee2.mdwn
@@ -230,4 +230,21 @@ Finally, I cleaned up a deprecated and no-longer-needed package:
 
 and removed its invocation from `/etc/rc.local` and `/etc/cron.d/ntp`.
 
+## Fixing the serial console
+
+The serial console, [automatically started by
+systemd](https://github.com/systemd/systemd/issues/15611), seems to get
+corrupted every now and then. If you see garbled output (i.e. binary
+characters instead of text), then you are running into this problem.
+
+The fix, [suggested by Jernej
+Jakob](https://groups.google.com/d/msg/gnubee/N4fxGgwOyiQ/pntsYccgBAAJ), is
+to override the default systemd unit file by creating a
+`/etc/systemd/system/serial-getty@ttyS0.service.d/override.conf` with the
+following contents:
+
+    [Service]
+    ExecStart=
+    ExecStart=-/sbin/agetty -o '-p -- \\u' 57600 %I $TERM
+
 [[!tag debian]] [[!tag gnubee]]
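The empty `ExecStart=` line in the override above is not a typo: systemd treats an empty assignment in a drop-in as a reset of the list inherited from the original unit, so without it the drop-in would add a second `ExecStart` instead of replacing the first. A toy model of that merge behaviour (a sketch, not systemd's actual parser; the base command below is a hypothetical original):

```python
def merge_list_setting(base, override):
    """Toy model of how systemd merges a list setting (e.g. ExecStart)
    from a drop-in: an empty assignment resets the inherited values."""
    values = list(base)
    for value in override:
        if value == "":
            values = []  # "ExecStart=" with no value clears the list
        else:
            values.append(value)
    return values

base = ["/sbin/agetty --keep-baud 115200,38400,9600 %I $TERM"]  # hypothetical
override = ["", "-/sbin/agetty -o '-p -- \\\\u' 57600 %I $TERM"]
print(merge_list_setting(base, override))
```

After creating the override, `systemctl daemon-reload` followed by restarting `serial-getty@ttyS0` picks up the merged unit.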

Mention how to exit from `screen`
diff --git a/posts/installing-debian-buster-on-gnubee2.mdwn b/posts/installing-debian-buster-on-gnubee2.mdwn
index 32a97db..7a22444 100644
--- a/posts/installing-debian-buster-on-gnubee2.mdwn
+++ b/posts/installing-debian-buster-on-gnubee2.mdwn
@@ -33,6 +33,9 @@ you can use it to monitor the flashing process:
 
 otherwise keep an eye on the [LEDs and wait until they are fully done
 flashing](https://github.com/gnubee-git/GnuBee_Docs/wiki/Install-firmware#via-usb-stick).
+When you want to [exit
+screen](https://stackoverflow.com/questions/4847691/how-do-i-get-out-of-a-screen-without-typing-exit#),
+use `Ctrl-a` then `k`.
 
 ## Getting ssh access to LibreCMC
 

Add a note about using mosh and pagekite together
diff --git a/posts/letting-someone-ssh-into-your-laptop-using-pagekite.mdwn b/posts/letting-someone-ssh-into-your-laptop-using-pagekite.mdwn
index 90562b2..1e475df 100644
--- a/posts/letting-someone-ssh-into-your-laptop-using-pagekite.mdwn
+++ b/posts/letting-someone-ssh-into-your-laptop-using-pagekite.mdwn
@@ -88,4 +88,33 @@ before restarting the pagekite daemon using:
 
     systemctl restart pagekite
 
-[[!tag mozilla]] [[!tag debian]] [[!tag sysadmin]] [[!tag ssh]] [[!tag nzoss]] [[!tag pagekite]]
+# Using mosh and pagekite
+
+[Mosh](https://mosh.org/) is a nice way to interface with ssh over
+high-latency networks. However, it's not possible to tunnel mosh directly
+through pagekite since [pagekite only supports
+TCP](https://groups.google.com/d/topic/pagekite-discuss/YUfhVfWyYsU/discussion).
+
+I ended up with a hybrid setup where I don't have to expose the ssh service
+to the local network (and therefore remember to disable it when I'm done)
+but I do have to open a UDP port on my firewall for mosh.
+
+First, I assigned a stable IP to my laptop on my router, based on its MAC
+address. I also had to disable [MAC address spoofing in Network Manager](https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/) (setting `cloned-mac-address` to `preserve`).
+
+This is what my `/etc/NetworkManager/system-connections/Ethernet automatique` config looks like:
+
+    [ethernet]
+    cloned-mac-address=preserve
+    
+    [ipv4]
+    method=auto
+    
+    [ipv6]
+    addr-gen-mode=stable-privacy
+    ip6-privacy=2
+    method=auto
+
+Then I forwarded port 9000 (UDP) traffic to the static IP address above.
+
+[[!tag mozilla]] [[!tag debian]] [[!tag sysadmin]] [[!tag ssh]] [[!tag pagekite]]

Limit Planet Sysadmin feed further, just in case
diff --git a/tags/sysadmin.mdwn b/tags/sysadmin.mdwn
index 04240f9..56369d8 100644
--- a/tags/sysadmin.mdwn
+++ b/tags/sysadmin.mdwn
@@ -1,4 +1,4 @@
 [[!meta title="pages tagged sysadmin"]]
 
 [[!inline pages="tagged(sysadmin)" actions="no" archive="yes"
-feedshow=10 feedpages=created_after(posts/debugging-openwrt-routers-by-shipping)]]
+feedshow=10 feedpages=created_after(posts/secure-ssh-agent-usage)]]

creating tag page tags/email
diff --git a/tags/email.mdwn b/tags/email.mdwn
new file mode 100644
index 0000000..c7295f6
--- /dev/null
+++ b/tags/email.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged email"]]
+
+[[!inline pages="tagged(email)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tags/gmail
diff --git a/tags/gmail.mdwn b/tags/gmail.mdwn
new file mode 100644
index 0000000..44af0cd
--- /dev/null
+++ b/tags/gmail.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged gmail"]]
+
+[[!inline pages="tagged(gmail)" actions="no" archive="yes"
+feedshow=10]]

Add a DMARC post.
diff --git a/posts/disabling-mail-sending-from-domain.mdwn b/posts/disabling-mail-sending-from-domain.mdwn
new file mode 100644
index 0000000..e097240
--- /dev/null
+++ b/posts/disabling-mail-sending-from-domain.mdwn
@@ -0,0 +1,33 @@
+[[!meta title="Disabling mail sending from your domain"]]
+[[!meta date="2020-04-23T22:10:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+I noticed that I was receiving some bounced email notifications for a
+domain I own (`cloud.geek.nz`) and use to host my blog. These notifications
+were all for spam messages spoofing the `From` address, since I do not use
+that domain for email.
+
+I decided to try setting a strict [DMARC
+policy](https://dmarcly.com/blog/how-to-implement-dmarc-dkim-spf-to-stop-email-spoofing-phishing-the-definitive-guide)
+to see if DMARC-using mail servers (e.g. GMail) would then drop these
+spoofed emails without notifying me about it.
+
+I started by setting this initial DMARC policy in DNS in order to monitor the change:
+
+    @ TXT v=spf1 -all
+    _dmarc TXT v=DMARC1; p=none; ruf=mailto:dmarc@fmarier.org; sp=none; aspf=s; fo=0:1:d:s;
+
+Then I waited three weeks without receiving anything before updating the
+relevant DNS records to this final DMARC policy:
+
+    @ TXT v=spf1 -all
+    _dmarc TXT v=DMARC1; p=reject; sp=reject; aspf=s;
+
+This policy states that nobody is allowed to send emails for this domain and
+that any incoming email claiming to be from this domain should be silently
+rejected.
+
+I haven't noticed any bounce notifications for messages spoofing this domain
+in a while, so maybe it's working?
+
+[[!tag sysadmin]] [[!tag debian]] [[!tag dns]] [[!tag email]]
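The `_dmarc` records above are just semicolon-separated `tag=value` pairs. A quick sketch of pulling one apart, handy for sanity-checking a record before publishing it in DNS:

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=reject; sp=reject; aspf=s;")
print(policy)
```

Here `p` is the policy for the domain itself, `sp` the policy for subdomains, and `aspf=s` requires strict SPF alignment.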

Create "email" and "gmail" tags for existing posts
diff --git a/posts/disabling-gmail-spam-filter-and.mdwn b/posts/disabling-gmail-spam-filter-and.mdwn
index 3e61973..e7fb5c0 100644
--- a/posts/disabling-gmail-spam-filter-and.mdwn
+++ b/posts/disabling-gmail-spam-filter-and.mdwn
@@ -35,4 +35,4 @@ This is done using [procmail](http://www.procmail.org/) with the following bit i
     * ^X-Spam-Status: Yes
     /home/francois/mail/spam
 
-[[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] 
+[[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag email]] [[!tag gmail]]
diff --git a/posts/handling-multiple-identitiesaccounts-in.mdwn b/posts/handling-multiple-identitiesaccounts-in.mdwn
index fe5b521..6a7a1ae 100644
--- a/posts/handling-multiple-identitiesaccounts-in.mdwn
+++ b/posts/handling-multiple-identitiesaccounts-in.mdwn
@@ -44,4 +44,4 @@ Finally, I've got this convenient shortcut which allows me to switch to my inbox
 Next up: [[indexing your emails using mairix|posts/searching-through-contents-of-emails-in/]].
 
 
-[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag ubuntu]] 
+[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag ubuntu]] [[!tag email]]
diff --git a/posts/keeping-gmail-in-separate-browser.mdwn b/posts/keeping-gmail-in-separate-browser.mdwn
index cfa141b..2d24f4a 100644
--- a/posts/keeping-gmail-in-separate-browser.mdwn
+++ b/posts/keeping-gmail-in-separate-browser.mdwn
@@ -48,4 +48,4 @@ Then log into GMail and tick the "Trust this computer" checkbox at the 2-factor
 With these settings, your browsing history will be cleared and you will be logged out of GMail every time you close your browser but will still be able to skip the 2-factor step on that device.
 
 
-[[!tag firefox]] [[!tag debian]] [[!tag ubuntu]] [[!tag privacy]] [[!tag nzoss]] [[!tag mozilla]] 
+[[!tag firefox]] [[!tag debian]] [[!tag ubuntu]] [[!tag privacy]] [[!tag nzoss]] [[!tag mozilla]] [[!tag gmail]]
diff --git a/posts/mutts-openpgp-support-and-firegpg.mdwn b/posts/mutts-openpgp-support-and-firegpg.mdwn
index b7b2818..7174089 100644
--- a/posts/mutts-openpgp-support-and-firegpg.mdwn
+++ b/posts/mutts-openpgp-support-and-firegpg.mdwn
@@ -32,4 +32,4 @@ However, this didn't actually work with FireGPG and the way that it puts encrypt
 
 
 
-[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] 
+[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag sysadmin]] [[!tag ubuntu]] [[!tag email]]
diff --git a/posts/preventing-man-in-middle-attacks-on.mdwn b/posts/preventing-man-in-middle-attacks-on.mdwn
index 6d2c76e..c2b6c55 100644
--- a/posts/preventing-man-in-middle-attacks-on.mdwn
+++ b/posts/preventing-man-in-middle-attacks-on.mdwn
@@ -60,4 +60,4 @@ smtp_tls_fingerprint_cert_match =
    <i>12:34:AB:CD:56:78:EF:90:12:AB:CD:34:56:EF:78:90:AB:CD:12:34:AB:DD:44:66:DA:77:CF:DB:E4:A7:02:E1</i>
 </pre>
 
-[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag security]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag fetchmail]] [[!tag postfix]]
+[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag security]] [[!tag ubuntu]] [[!tag nzoss]] [[!tag fetchmail]] [[!tag postfix]] [[!tag email]] [[!tag gmail]]
diff --git a/posts/searching-through-contents-of-emails-in.mdwn b/posts/searching-through-contents-of-emails-in.mdwn
index 88317e7..19d2d68 100644
--- a/posts/searching-through-contents-of-emails-in.mdwn
+++ b/posts/searching-through-contents-of-emails-in.mdwn
@@ -40,4 +40,4 @@ If you use GPG, you should also add this to your `~/.muttrc` to make sure that m
     bind pager s decrypt-save
 
 
-[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag ubuntu]] 
+[[!tag mutt]] [[!tag catalyst]] [[!tag debian]] [[!tag ubuntu]] [[!tag email]]
diff --git a/posts/test-mail-server-ubuntu-debian.mdwn b/posts/test-mail-server-ubuntu-debian.mdwn
index 3fe9f26..f444c0f 100644
--- a/posts/test-mail-server-ubuntu-debian.mdwn
+++ b/posts/test-mail-server-ubuntu-debian.mdwn
@@ -34,4 +34,4 @@ and then view the mailbox like this:
 
     mutt -f /var/mail/root
 
-[[!tag debian]] [[!tag nzoss]] [[!tag postfix]]
+[[!tag debian]] [[!tag nzoss]] [[!tag postfix]] [[!tag email]]
diff --git a/posts/things-that-work-well-with-tor.mdwn b/posts/things-that-work-well-with-tor.mdwn
index 1885807..110a615 100644
--- a/posts/things-that-work-well-with-tor.mdwn
+++ b/posts/things-that-work-well-with-tor.mdwn
@@ -139,4 +139,4 @@ I can take advantage of GMail's excellent caching and preloading and run the
 whole thing over Tor by setting that entire browser profile to run its
 traffic through the Tor SOCKS proxy on port `9050`.
 
-[[!tag debian]] [[!tag privacy]] [[!tag tor]] [[!tag nzoss]] [[!tag mozilla]] [[!tag xmpp]]
+[[!tag debian]] [[!tag privacy]] [[!tag tor]] [[!tag nzoss]] [[!tag mozilla]] [[!tag xmpp]] [[!tag gmail]]

Comment moderation
diff --git a/posts/making-sip-calls-voipms-without-pstn/comment_1_5b51ef3a4c00b6f1182718fa75c08b49._comment b/posts/making-sip-calls-voipms-without-pstn/comment_1_5b51ef3a4c00b6f1182718fa75c08b49._comment
new file mode 100644
index 0000000..70496e7
--- /dev/null
+++ b/posts/making-sip-calls-voipms-without-pstn/comment_1_5b51ef3a4c00b6f1182718fa75c08b49._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="185.242.5.35"
+ claimedauthor="seth black wider"
+ subject="two flavors"
+ date="2020-04-18T01:35:51Z"
+ content="""
+there are two flavors of sip uri there.
+
+[DID]@sip.voip.ms
+[subaccount]@[POP].voip.ms
+
+and they are not interchangeable.
+"""]]

Remove deprecated option
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index fa4f750..ec41847 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -95,8 +95,6 @@ and end up with the following settings in `/etc/ssh/sshd_config` (jessie):
     Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
  
-    UsePrivilegeSeparation sandbox
-
     AuthenticationMethods publickey
     PasswordAuthentication no
     PermitRootLogin no

Add missing postfix-related package for SASL
https://www.howtoforge.com/community/threads/solved-problem-with-outgoing-mail-from-server.53920/
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 1387dbf..fa4f750 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -315,7 +315,7 @@ and then run:
 
 # Mail
 
-    apt install postfix
+    apt install postfix libsasl2-modules
     apt purge exim4-base exim4-daemon-light exim4-config
 
 Configuring mail properly is tricky but the following has worked for me.

Add missing perl package for mon
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index a1bb7bc..1387dbf 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -368,7 +368,7 @@ To monitor that mail never stops flowing, add this machine to a free
 
 # Monitoring
 
-    apt install --no-install-recommends mon libfilesys-diskspace-perl
+    apt install --no-install-recommends mon libfilesys-diskspace-perl libfilesys-df-perl
 
 In order to ensure that the root partition never has less than 1G of free
 space, I put the following in `/etc/mon/mon.cf`:

Add another user which sometimes receives mail
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 5230bd3..a1bb7bc 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -341,7 +341,7 @@ Set the following aliases in `/etc/aliases`:
 
 - set `francois` as the destination of `root` emails
 - set an external email address for `francois`
-- set `root` as the destination for `www-data` emails
+- set `root` as the destination for `mon` and `www-data` emails
 
 before running `newaliases` to update the aliases database.
 

Fix checkmail execute bit and healthchecks.io domain
diff --git a/posts/usual-server-setup.mdwn b/posts/usual-server-setup.mdwn
index 5f27fc7..5230bd3 100644
--- a/posts/usual-server-setup.mdwn
+++ b/posts/usual-server-setup.mdwn
@@ -350,7 +350,10 @@ Create a new cronjob (`/etc/cron.hourly/checkmail`):
     #!/bin/sh
     ls /var/mail
 
-to ensure that email doesn't accumulate unmonitored on this box.
+to ensure that email doesn't accumulate unmonitored on this box. Don't
+forget to make the script executable:
+
+    chmod +x /etc/cron.hourly/checkmail
 
 Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then
 test the whole setup using `mail root`. You should also use
@@ -361,7 +364,7 @@ To monitor that mail never stops flowing, add this machine to a free
 [healthchecks.io](https://healthchecks.io) account and create a
 `/etc/cron.d/healthchecks-io` cronjob:
 
-    0 1 * * * root echo "ping" | mail xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@hchk.io
+    0 1 * * * root echo "ping" | mail xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@hc-ping.com
 
 # Monitoring
 

Add additional ntpdate caller.
diff --git a/posts/installing-debian-buster-on-gnubee2.mdwn b/posts/installing-debian-buster-on-gnubee2.mdwn
index 139a67f..32a97db 100644
--- a/posts/installing-debian-buster-on-gnubee2.mdwn
+++ b/posts/installing-debian-buster-on-gnubee2.mdwn
@@ -225,6 +225,6 @@ Finally, I cleaned up a deprecated and no-longer-needed package:
 
     apt purge ntpdate
 
-and removed its invocation from `/etc/rc.local`.
+and removed its invocation from `/etc/rc.local` and `/etc/cron.d/ntp`.
 
 [[!tag debian]] [[!tag gnubee]]

Add Gogo on Linux post
diff --git a/posts/using-gogo-wifi-linux.mdwn b/posts/using-gogo-wifi-linux.mdwn
new file mode 100644
index 0000000..961d4ba
--- /dev/null
+++ b/posts/using-gogo-wifi-linux.mdwn
@@ -0,0 +1,43 @@
+[[!meta title="Using Gogo WiFi on Linux"]]
+[[!meta date="2020-04-11T16:30:00.000-07:00"]]
+[[!meta license="[Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/)"]]
+
+[Gogo](https://www.gogoair.com/for-passengers/), the WiFi provider for
+[airlines like Air Canada](https://www.gogoair.com/participating-airlines/),
+is not available to Linux users even though it advertises ["access using any
+Wi-Fi enabled laptop, tablet or
+smartphone"](https://www.gogoair.com/ac-bbyf/one-way-pass/detail/). It is
+however possible to work around this restriction by faking your browser
+[user agent](https://en.wikipedia.org/wiki/User_agent).
+
+I tried the [User-Agent Switcher for
+Chrome](https://chrome.google.com/webstore/detail/user-agent-switcher-for-c/djflhoibgkdhkhhcedjiklpkjnoahfmg)
+extension on Chrome and [Brave](https://brave.com/clo187) but it didn't work
+for some reason.
+
+What did work was using Firefox and adding the following prefs in
+`about:config` to spoof its user agent to Chrome for Windows:
+
+    general.useragent.override=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36
+    general.useragent.updates.enabled=false
+    privacy.resistFingerprinting=false
+
+The last two prefs are necessary in order for the hidden
+`general.useragent.override` pref to [not be
+ignored](https://searchfox.org/mozilla-central/rev/8ed108064bf1c83e508208e069a90cffb4045977/dom/base/Navigator.cpp#1892-1904).
+
+# Opt out of mandatory arbitration
+
+As an aside, the Gogo [terms of
+service](https://content.gogoair.com/terms/aca/?lang=en_US) automatically
+enroll you into [mandatory
+arbitration](https://www.hotcoffeethemovie.com/default.asp?pg=mandatory_arbitration)
+unless you opt out by sending an email to
+[customercare@gogoair.com](mailto:customercare@gogoair.com) within 30 days
+of using their service.
+
+You may want to create an email template for this so that you can fire off a
+quick email to them as soon as you connect. I will probably write a script
+for it next time I use this service.
+
+[[!tag debian]] [[!tag firefox]]
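Such a template could be as simple as the following sketch (the recipient address comes from the terms of service above; the subject, wording and sender address are hypothetical placeholders):

```python
from email.message import EmailMessage

def arbitration_optout(sender):
    """Build a minimal opt-out email; the body text is a placeholder."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = "customercare@gogoair.com"
    msg["Subject"] = "Opting out of mandatory arbitration"
    msg.set_content(
        "I am writing to opt out of the mandatory arbitration clause "
        "in the Gogo terms of service, within 30 days of using the service."
    )
    return msg

msg = arbitration_optout("me@example.com")  # sender is hypothetical
print(msg["Subject"])
```

The resulting message can then be handed to any local MTA or `smtplib` session as soon as you're off the plane.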

Remove obsolete comments from GnuBee post
diff --git a/posts/installing-debian-buster-on-gnubee2/comment_1_2a3c537445b6d27e05446e43471ddc81._comment b/posts/installing-debian-buster-on-gnubee2/comment_1_2a3c537445b6d27e05446e43471ddc81._comment
deleted file mode 100644
index 83f1c74..0000000
--- a/posts/installing-debian-buster-on-gnubee2/comment_1_2a3c537445b6d27e05446e43471ddc81._comment
+++ /dev/null
@@ -1,9 +0,0 @@
-[[!comment format=mdwn
- ip="188.192.119.43"
- claimedauthor="Hein Osenberg"
- subject="Works for me"
- date="2019-09-02T19:21:34Z"
- content="""
-With Neil Browns new kernel (5.2.8 - see <http://neil.brown.name/gnubee/>), installed (see README) upgrading to Debian Buster worked perfectly for me. SSH access works
-out of the box without problems.
-"""]]
diff --git a/posts/installing-debian-buster-on-gnubee2/comment_2_bc2dba3b221d2d92ae9c306d1be4fc0d._comment b/posts/installing-debian-buster-on-gnubee2/comment_2_bc2dba3b221d2d92ae9c306d1be4fc0d._comment
deleted file mode 100644
index 1ea2359..0000000
--- a/posts/installing-debian-buster-on-gnubee2/comment_2_bc2dba3b221d2d92ae9c306d1be4fc0d._comment
+++ /dev/null
@@ -1,9 +0,0 @@
-[[!comment format=mdwn
- ip="165.225.114.98"
- claimedauthor="Antoine"
- subject="Same issue with SSH"
- date="2019-09-09T08:49:57Z"
- content="""
-Hi, 
-Just stumbled onto this post. I have the same issue with SSH, on a GnuBee PC1. I believe openssh is running (according to OpenMediaVault GUI), but it rejects all connections attempts. Please keep updating this post if you find a solution.
-"""]]
diff --git a/posts/installing-debian-buster-on-gnubee2/comment_3_5d038744e65cf79c39b9fd03b937c812._comment b/posts/installing-debian-buster-on-gnubee2/comment_3_5d038744e65cf79c39b9fd03b937c812._comment
deleted file mode 100644
index 7d29cdc..0000000
--- a/posts/installing-debian-buster-on-gnubee2/comment_3_5d038744e65cf79c39b9fd03b937c812._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- ip="188.192.119.43"
- claimedauthor="Hein Osenberg"
- subject="Works for me"
- date="2019-09-02T19:21:34Z"
- content="""
-With Neil Browns new kernel (5.2.8 - see http://neil.brown.name/gnubee/), installed (see README) upgrading to Debian Buster worked perfectly for me. SSH access works out of the box without problems. 
-"""]]