Formatting an SD card for a Garmin device on Linux

Some Garmin devices claim to be able to format an SD card into the layout they expect but, in my experience, you can instead get stuck in a loop in their user interface without the SD card ever being recognized.

Here's what worked for me:

  1. Plug the SD card into the Linux computer using a USB adapter.
  2. Find out the device name (e.g. /dev/sdc on my computer) using dmesg.
  3. Start fdisk /dev/sdc as root.
  4. Delete any partitions using the d command.
  5. Create a new DOS partition table using the o command.
  6. Create a new primary partition using the n command and accept all of the defaults.
  7. Set the type of that partition to W95 FAT32 (0b).
  8. Save everything using the w command.
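
If you'd rather script steps 3 to 8, sfdisk can do the same thing non-interactively. Here's a minimal sketch, assuming the card really is /dev/sdc and that its contents are disposable:

# Create a new DOS partition table with a single primary
# partition of type W95 FAT32 (0b) spanning the whole card.
echo 'type=b' | sfdisk /dev/sdc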

Now if I run fdisk -l /dev/sdc, I see the following:

Disk /dev/sdc: 14.84 GiB, 15931539456 bytes, 31116288 sectors
Disk model: Mass-Storage    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7f2ef0ad

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdc1        2048 31116287 31114240 14.8G  b W95 FAT32

and that appears to be recognized directly by my Garmin DriveSmart 61.

Setting up a JMP SIP account on Asterisk

JMP offers VoIP calling via XMPP, but it's also possible to use the underlying VoIP service directly via SIP.

The underlying VoIP calling functionality in JMP is provided by Bandwidth, but their old Asterisk instructions didn't quite work for me. Here's how I set it up on my Asterisk server.

Get your SIP credentials

After signing up for JMP and setting it up in your favourite XMPP client, send the following message to the cheogram.com gateway contact:

reset sip account

In response, you will receive a message containing:

  • a numerical username
  • a password (e.g. three lowercase words separated by spaces)

Add SIP account to your Asterisk config

First of all, I added the following near the top of my /etc/asterisk/sip.conf:

[general]
register => username:three secret words@jmp.cbcbc7.auth.bandwidth.com:5008

The other non-default options I have set in [general] are:

context=public
allowoverlap=no
udpbindaddr=0.0.0.0
tcpenable=yes
tcpbindaddr=0.0.0.0
tlsenable=yes
transport=udp
srvlookup=no
vmexten=voicemail
relaxdtmf=yes
useragent=Asterisk PBX
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
tlscapath=/etc/ssl/certs/
externhost=machinename.dyndns.org
localnet=192.168.0.0/255.255.0.0

Note that you can have more than one register line in your config if you use more than one SIP provider, but you must register with the server even if you don't want to receive incoming calls.

Then I added a new blurb to the bottom of the same file:

[jmp]
type=peer
host=jmp.cbcbc7.auth.bandwidth.com
port=5008
secret=three secret words
defaultuser=username
context=from-jmp
disallow=all
allow=ulaw
allow=g729
insecure=port,invite
canreinvite=no
dtmfmode=rfc2833

and for reference, here's the blurb for my Snom 300 SIP phone:

[1001]
; Snom 300
type=friend
qualify=yes
secret=password
encryption=no
context=full
host=dynamic
nat=no
directmedia=no
mailbox=1000@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

I checked that the registration was successful by running asterisk -r and then typing:

sip set debug on

before reloading the configuration using:

reload
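
To check the registration state directly, the chan_sip CLI also offers a summary view:

sip show registry

which should list jmp.cbcbc7.auth.bandwidth.com with a state of Registered.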

Create Asterisk extensions to send and receive calls

Once I got registration to work, I hooked this up with my other extensions so that I could send and receive calls using my JMP number.

In /etc/asterisk/extensions.conf, I added the following:

[from-jmp]
include => home
exten => s,1,Goto(1000,1)

where home is the context which includes my local SIP devices and 1000 is the extension I want to ring.

Then I added the following to enable calls to any destination within the North American Numbering Plan:

[pstn-jmp]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231234>)
exten => _1NXXNXXXXXX,n,Dial(SIP/jmp/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231234>)
exten => _NXXNXXXXXX,n,Dial(SIP/jmp/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()

Here 5551231234 is my JMP phone number, not my bwsip numerical username.

For reference, here's the rest of my dialplan in /etc/asterisk/extensions.conf:

[general]
static=yes
writeprotect=no
clearglobalvars=no

[public]
exten => _X.,1,Hangup(3)

[sipdefault]
exten => _X.,1,Hangup(3)

[default]
exten => _X.,1,Hangup(3)

[internal]
include => home

[full]
include => internal
include => pstn-jmp
exten => 707,1,VoiceMailMain(1000@internal)

[home]
; Internal extensions
exten => 1000,1,Dial(SIP/1001,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@internal,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@internal,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

Firewall

Finally, I opened a few ports in my firewall by putting the following in /etc/network/iptables.up.rules:

# SIP and RTP on UDP (jmp.cbcbc7.auth.bandwidth.com)
-A INPUT -s 67.231.2.13/32 -p udp --dport 5008 -j ACCEPT
-A INPUT -s 216.82.238.135/32 -p udp --dport 5008 -j ACCEPT
-A INPUT -s 67.231.2.13/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT
-A INPUT -s 216.82.238.135/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT
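
Assuming that file gets loaded at boot (e.g. via an if-pre-up.d script), the new rules can also be applied immediately, provided the file contains a complete ruleset:

iptables-restore < /etc/network/iptables.up.rules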

Outbound calls not working

While the above setup works for me for inbound calls, it doesn't currently work for outbound calls.

The hostname currently resolves to one of two IP addresses:

$ dig +short jmp.cbcbc7.auth.bandwidth.com
67.231.2.13
216.82.238.135

If I pin it to the first one by putting the following in my /etc/hosts file:

67.231.2.13 jmp.cbcbc7.auth.bandwidth.com

then I get a 486 error back from the server when I dial 1-555-456-4567:

<--- SIP read from UDP:67.231.2.13:5008 --->
SIP/2.0 486 Busy Here
Via: SIP/2.0/UDP 127.0.0.1:5060;branch=z9hG4bK03210a30
From: "Francois Marier" <sip:5551231234@127.0.0.1>
To: <sip:15554564567@jmp.cbcbc7.auth.bandwidth.com:5008>
Call-ID: 575f21f36f57951638c1a8062f3a5201@127.0.0.1:5060
CSeq: 103 INVITE
Content-Length: 0

On the other hand, if I pin it to 216.82.238.135, then I get a 600 error:

<--- SIP read from UDP:216.82.238.135:5008 --->
SIP/2.0 600 Busy Everywhere
Via: SIP/2.0/UDP 127.0.0.1:5060;branch=z9hG4bK7b7f7ed9
From: "Francois Marier" <sip:5551231234@127.0.0.1>
To: <sip:15554564567@jmp.cbcbc7.auth.bandwidth.com:5008>
Call-ID: 5bebb8d05902c1732c6b9f4776844c66@127.0.0.1:5060
CSeq: 103 INVITE
Content-Length: 0

If you have any idea what might be wrong here, or if you got outbound calls to work on Bandwidth.com, please leave a comment!

Using implicit TLS in Postfix

In order to mitigate the NO STARTTLS vulnerabilities, I recently switched my local SMTP smarthosts from STARTTLS (port 587) to implicit TLS (port 465).

Here are the key configuration parameters for Postfix (i.e. /etc/postfix/main.cf):

relayhost = [smtp.kolabnow.com]:465
smtp_tls_wrappermode = yes
smtp_tls_security_level = secure

Note that this is for KolabNow, but the same works for Gmail and Novus.

The square brackets around the hostname tell Postfix not to look up the MX name using DNS and instead to use the SMTP server name as-is.

Setting the smtp_tls_security_level parameter to secure ensures that the server is using a valid TLS certificate.
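
One way to confirm that a smarthost really speaks implicit TLS (i.e. the connection is encrypted from the very first byte, with no STARTTLS negotiation) is to connect with openssl and look for the SMTP banner inside the TLS session:

openssl s_client -connect smtp.kolabnow.com:465 -quiet

If the handshake fails right away, the port most likely expects STARTTLS instead.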

Time-stretch in Kodi

VLC has a really neat feature which consists of time-stretching audio to allow users to speed up or slow down video playback with the [ and ] keys without affecting the pitch of the sound. I recently switched to Kodi as my video player of choice and I was looking for the equivalent feature.

Kodi equivalent

To enable this feature in Kodi, you first need to enable Sync playback to display in Settings | Player | Videos.

Then map the tempoup and tempodown commands to the same keyboard shortcuts as VLC.
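
For example, here is a sketch of a keymap fragment (which would go in ~/.kodi/userdata/keymaps/keyboard.xml) mirroring VLC's default [ and ] bindings, using the opensquarebracket and closesquarebracket key names that Kodi assigns to those keys:

  <FullscreenVideo>
    <keyboard>
      <opensquarebracket>PlayerControl(tempodown)</opensquarebracket>
      <closesquarebracket>PlayerControl(tempoup)</closesquarebracket>
    </keyboard>
  </FullscreenVideo>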

In my case however, I wanted to map these functions to buttons on my Streamzap remote and so I put the following in my ~/.kodi/userdata/keymaps/remote.xml:

  <FullscreenVideo>
    <remote>
      <pageminus>PlayerControl(tempodown)</pageminus>
      <pageplus>PlayerControl(tempoup)</pageplus>
    </remote>
  </FullscreenVideo>

which allows me to press the Ch + and Ch - buttons on the remote to adjust the speed while the video is playing (in full-screen mode only, not with the menu displayed).

Examples

Here are three ways I use this functionality:

  • I set it to 0.9x for movies in languages I'm not totally proficient in.
  • I set it to 1.1x for almost everything since the difference is not especially perceptible, but it still allows me to watch 10% more movies in the same amount of time :)
  • I set it to 1.2x for Rick & Morty because it makes Rick even more hilariously reckless and impatient.

Unfortunately, I haven't found a way to set the default tempo value. The closest setting I could find is maxtempo, which sets the maximum tempo value. If you know of a way, please leave a comment!

Zoom WebRTC links

Most people connect to Zoom via a proprietary client which has been on the receiving end of a number of security and privacy issues over the past year, with some experts even describing it as malware.

It's not widely known however that Zoom offers a half-decent WebRTC client which means cross-platform one-click access to a Zoom room or webinar without needing to install any software.

Given a Zoom link such as https://companyname.zoom.us/j/123456789?pwd=letmein, you can use https://zoom.us/wc/join/123456789?pwd=letmein to connect in your browser.

Notice that the pool of Zoom room IDs is global and you can just drop the companyname from the URL.
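
Since the rewrite is purely mechanical, it's easy to automate. Here's a sketch using sed:

echo 'https://companyname.zoom.us/j/123456789?pwd=letmein' | sed -E 's|https://[^/]+/j/|https://zoom.us/wc/join/|'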

In my experience however, Jitsi has much better performance than Zoom's WebRTC client. For instance, I've never been able to use Zoom successfully on a Raspberry Pi 4 (8GB), but Jitsi works quite well. If you have a say in the choice of conference platform, go with Jitsi instead.

Removing unsafe-inline from Ikiwiki's style-src directive

After moving my Ikiwiki blog to my own server and enabling a basic CSP policy, I decided to see if I could tighten up the policy some more and stop relying on style-src 'unsafe-inline'.

This does require that OpenID logins be disabled, but as a bonus, it also removes the need for jQuery to be present on the server.

Revised CSP policy

First of all, I visited all of my pages in a Chromium browser and took note of the missing hashes listed in the developer tools console (Firefox doesn't show the missing hashes):

  • 'sha256-4Su6mBWzEIFnH4pAGMOuaeBrstwJN4Z3pq/s1Kn4/KQ='
  • 'sha256-j0bVhc2Wj58RJgvcJPevapx5zlVLw6ns6eYzK/hcA04='
  • 'sha256-j6Tt8qv7z2kSc7fUs0YHbrxawwsQcS05fVaX1r2qrbk='
  • 'sha256-p4cncjf0hAIeTSS5tXecf7qTUanDC27KdlKhT9eOsZU='
  • 'sha256-Y6v8OCtFfMmI5mbpwqCreLofmGZQfXYK7jJHCoHvn7A='
  • 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='

which took care of all of the inline styles.

Note that I kept unsafe-inline in the directive since it will be automatically ignored by browsers that understand hashes, but will still be honored by older browsers, keeping the site working for them.

Next I added the new unsafe-hashes source expression along with the hash of the CSS fragment (clear: both) that is present on all pages related to comments in Ikiwiki:

$ echo -n "clear: both" | openssl dgst -sha256 -binary | openssl base64 -A
matwEc6givhWX0+jiSfM1+E5UMk8/UGLdl902bjFBmY=

My final style-src directive is therefore the following:

style-src 'self' 'unsafe-inline' 'unsafe-hashes' 'sha256-4Su6mBWzEIFnH4pAGMOuaeBrstwJN4Z3pq/s1Kn4/KQ=' 'sha256-j0bVhc2Wj58RJgvcJPevapx5zlVLw6ns6eYzK/hcA04=' 'sha256-j6Tt8qv7z2kSc7fUs0YHbrxawwsQcS05fVaX1r2qrbk=' 'sha256-p4cncjf0hAIeTSS5tXecf7qTUanDC27KdlKhT9eOsZU=' 'sha256-Y6v8OCtFfMmI5mbpwqCreLofmGZQfXYK7jJHCoHvn7A=' 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=' 'sha256-matwEc6givhWX0+jiSfM1+E5UMk8/UGLdl902bjFBmY='

Browser compatibility

While unsafe-hashes is not yet implemented in Firefox, it happens to work just fine due to a bug (i.e. unsafe-hashes is always enabled whether or not the policy contains it).

It's possible that my new CSP policy won't work in Safari, but these CSS clears don't appear to be needed anyways and so it's just going to mean extra CSP reporting noise.

Removing jQuery

Since jQuery appears to only be used to provide the authentication system selector UI, I decided to get rid of it.

I couldn't find a way to get Ikiwiki to stop pulling it in and so I put the following hack in my Apache config file:

# Disable jQuery.
Redirect 204 /ikiwiki/jquery.fileupload.js
Redirect 204 /ikiwiki/jquery.fileupload-ui.js
Redirect 204 /ikiwiki/jquery.iframe-transport.js
Redirect 204 /ikiwiki/jquery.min.js
Redirect 204 /ikiwiki/jquery.tmpl.min.js
Redirect 204 /ikiwiki/jquery-ui.min.css
Redirect 204 /ikiwiki/jquery-ui.min.js
Redirect 204 /ikiwiki/login-selector/login-selector.js

Replacing these files with an empty response seems to work very well and removes a whole lot of code that would otherwise be allowed by the script-src directive of my CSP policy. While there is a slight cosmetic change to the login page, I think the reduction in the attack surface is well worth it.

Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

disabled gitweb (which is pulled in by ikiwiki-hosting-web):

a2disconf gitweb

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git://feedingthecloud.branchable.com/ source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as a deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common
</VirtualHost>

<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

and the common config I put in /etc/fmarier-org/blog-common:

ServerAdmin webmaster@fmarier.org

DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:

<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>

and replacing /etc/apache2/mods-available/deflate.conf with the following:

# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632

before enabling this new config:

a2enconf compression
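
After reloading Apache, one quick way to confirm that Brotli responses are actually being served is to request a page with the right Accept-Encoding header:

curl -sI -H 'Accept-Encoding: br' https://feeding.cloud.geek.nz/ | grep -i content-encoding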

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo, along with the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/blog-error.log 

Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Dealing with spam

In my Ikiwiki setup file, I locked all pages except for comments and then made all changes moderated:

emailauth_sender: login@fmarier.org
anonok_pagespec: postcomment(*)
locked_pages: '* and !postcomment(*)'
moderate_pagespec: '*'

However, some bots appear to be trying to log in anyways with invalid email addresses:

From: Mail Delivery Subsystem <mailer-daemon@googlemail.com>
Subject: Delivery Status Notification (Failure)

** Address not found **

Your message wasn't delivered to rakletbot@secmail.pro because the domain secmail.pro
couldn't be found. Check for typos or unnecessary spaces and try again.

The response was:

DNS Error: 13082484 DNS type 'mx' lookup of secmail.pro responded with code NXDOMAIN
Domain name not found: secmail.pro

In order to avoid receiving these bounce emails, I added the following to my /etc/postfix/main.cf:

transport_maps = hash:/etc/postfix/transport

and the following to /etc/postfix/transport:

rakletbot@secmail.pro discard

before running the following:

postmap /etc/postfix/transport
systemctl restart postfix.service

Now emails sent to that abusive email address from Ikiwiki are silently dropped.
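
You can verify that the lookup behaves as expected using postmap's query mode, which should print discard:

postmap -q rakletbot@secmail.pro hash:/etc/postfix/transport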

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for.
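
Here's a minimal sketch of what such a script could look like, assuming a github remote has been added to source.git and keeping in mind that ikiwiki's git_wrapper already occupies the post-update hook, so the two would need to be chained:

#!/bin/sh
# Mirror every ref to the read-only GitHub mirror (sketch).
git push --mirror github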

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.
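
For reference, here's a sketch of how unsigned pushes could be rejected: when a push carries a certificate, git exposes its verification status to the pre-receive hook through the GIT_PUSH_CERT_STATUS environment variable (G means a good signature):

#!/bin/sh
# /home/blog/source.git/hooks/pre-receive (sketch)
if [ "$GIT_PUSH_CERT_STATUS" != "G" ]; then
    echo "rejected: please use git push --signed" >&2
    exit 1
fi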

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

Upgrading an ext4 filesystem for the year 2038

If you see a message like this in your logs:

ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)

it's an indication that your filesystem is not Y2k38-safe.

You can also check this manually using:

$ tune2fs -l /dev/sda1 | grep "Inode size:"
Inode size:           128

where an inode size of 128 is insufficient beyond 2038 and an inode size of 256 is what you want.

The safest way to change this is to copy the contents of your partition to another ext4 partition:

cp -a /boot /mnt/backup/

and then reformat with the correct inode size:

umount /boot
mkfs.ext4 -I 256 /dev/sda1

before copying everything back:

mount /boot
cp -a /mnt/backup/boot/* /boot/
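
After copying everything back, re-running the tune2fs check from earlier should now report an inode size of 256:

tune2fs -l /dev/sda1 | grep "Inode size:"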

Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

Using a Streamzap remote control with Kodi

After installing Kodi on a Raspberry Pi 4, I found that my Streamzap remote control worked for everything except the Ok and Exit buttons (which are supposed to get mapped to Enter and Back respectively).

A very old set of instructions for this is archived on the Kodi wiki but here's a more modern version of it.

Root cause

I finally tracked down the problem by enabling debug logging in Kodi settings. I saw the following in ~/.kodi/temp/kodi.log when pressing the OK button:

DEBUG: Keyboard: scancode: 0x00, sym: 0x0000, unicode: 0x0000, modifier: 0x0
DEBUG: GetActionCode: Trying Hardy keycode for 0xf200
DEBUG: Previous line repeats 3 times.
DEBUG: HandleKey: long-0 (0x100f200, obc-16838913) pressed, action is
DEBUG: Keyboard: scancode: 0x00, sym: 0x0000, unicode: 0x0000, modifier: 0x0

and this when pressing the Down button:

DEBUG: CLibInputKeyboard::ProcessKey - using delay: 500ms repeat: 125ms
DEBUG: Thread Timer start, auto delete: false
DEBUG: Keyboard: scancode: 0x6c, sym: 0x0112, unicode: 0x0000, modifier: 0x0
DEBUG: HandleKey: down (0xf081) pressed, action is Down
DEBUG: Thread Timer 2502349008 terminating
DEBUG: Keyboard: scancode: 0x6c, sym: 0x0112, unicode: 0x0000, modifier: 0x0

This suggests that my Streamzap remote is recognized as a keyboard, which I can confirm using:

$ cat /proc/bus/input/devices 
I: Bus=0003 Vendor=0e9c Product=0000 Version=0100
N: Name="Streamzap PC Remote Infrared Receiver (0e9c:0000)"
P: Phys=usb-0000:01:00.0-1.2/input0
S: Sysfs=/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.2/1-1.2:1.0/rc/rc0/input4
U: Uniq=
H: Handlers=kbd event0 
B: PROP=20
B: EV=100017
B: KEY=3ff 0 0 0 fc000 1 0 0 0 0 18000 4180 c0000801 9e1680 0 0 0
B: REL=3
B: MSC=10

Installing LIRC

The fix I found is to put the following in /etc/X11/xorg.conf.d/90-streamzap-disable.conf:

Section "InputClass"
    Identifier "Ignore Streamzap IR"
    MatchProduct "Streamzap"
    MatchIsKeyboard "true"
    Option "Ignore" "true"
EndSection

to prevent the remote from being used as a keyboard and to instead use it via LIRC, which can be installed like this:

apt install lirc

Put the following in /etc/lirc/lirc_options.conf:

driver=default
device=/dev/lirc0

and install this remote configuration as /etc/lirc/lircd.conf.d/streamzap.conf:

cd /etc/lirc/lircd.conf.d/
curl https://raw.githubusercontent.com/graysky2/streamzap/master/00-Streamzap_PC_Remote.conf > streamzap.conf

Make sure you don't use the config file that comes with the lirc-compat-remotes package or you will likely end up with an over-sensitive remote which tends to double key presses (e.g. pressing the down arrow will go down more than once).
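
After changing these config files, restart the LIRC daemon so that they get picked up:

systemctl restart lircd.service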

Testing

Now you should be able to test the remote using:

mode2

to see the undecoded infra-red signal, and:

irw

to display the decoded key presses.

Kodi configuration

Finally, as the pi user, put the following config in ~/.kodi/userdata/Lircmap.xml:

<lircmap>
  <remote device="Streamzap_PC_Remote">
    <power>KEY_POWER</power>
    <play>KEY_PLAY</play>
    <pause>KEY_PAUSE</pause>
    <stop>KEY_STOP</stop>
    <forward>KEY_FORWARD</forward>
    <reverse>KEY_REWIND</reverse>
    <left>KEY_LEFT</left>
    <right>KEY_RIGHT</right>
    <up>KEY_UP</up>
    <down>KEY_DOWN</down>
    <pageplus>KEY_CHANNELUP</pageplus>
    <pageminus>KEY_CHANNELDOWN</pageminus>
    <select>KEY_OK</select>
    <back>KEY_EXIT</back>
    <menu>KEY_MENU</menu>
    <red>KEY_RED</red>
    <green>KEY_GREEN</green>
    <yellow>KEY_YELLOW</yellow>
    <blue>KEY_BLUE</blue>
    <skipplus>KEY_NEXT</skipplus>
    <skipminus>KEY_PREVIOUS</skipminus>
    <record>KEY_RECORD</record>
    <volumeplus>KEY_VOLUMEUP</volumeplus>
    <volumeminus>KEY_VOLUMEDOWN</volumeminus>
    <mute>KEY_MUTE</mute>
    <one>KEY_1</one>
    <two>KEY_2</two>
    <three>KEY_3</three>
    <four>KEY_4</four>
    <five>KEY_5</five>
    <six>KEY_6</six>
    <seven>KEY_7</seven>
    <eight>KEY_8</eight>
    <nine>KEY_9</nine>
    <zero>KEY_0</zero>
  </remote>
</lircmap>

In order for all of this to take effect, I simply rebooted the Pi:

sudo systemctl reboot