Time-stretch in Kodi

VLC has a really neat feature: it time-stretches audio to let users speed up or slow down video playback with the [ and ] keys without affecting the pitch of the sound. I recently switched to Kodi as my video player of choice and was looking for the equivalent feature.

Kodi equivalent

To enable this feature in Kodi, you first need to enable Sync playback to display in Settings | Player | Videos.

Then map the tempoup and tempodown commands to the same keyboard shortcuts as VLC.
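
For example, something along these lines in ~/.kodi/userdata/keymaps/keyboard.xml should do it (a sketch only: the bracket key element names are my assumption, so double-check them against Kodi's keymap documentation):

<keymap>
  <FullscreenVideo>
    <keyboard>
      <!-- Assumed element names for the "[" and "]" keys. -->
      <opensquarebracket>PlayerControl(tempodown)</opensquarebracket>
      <closesquarebracket>PlayerControl(tempoup)</closesquarebracket>
    </keyboard>
  </FullscreenVideo>
</keymap>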

In my case however, I wanted to map these functions to buttons on my Streamzap remote and so I put the following in my ~/.kodi/userdata/keymaps/remote.xml:

<keymap>
  <FullscreenVideo>
    <remote>
      <pageminus>PlayerControl(tempodown)</pageminus>
      <pageplus>PlayerControl(tempoup)</pageplus>
    </remote>
  </FullscreenVideo>
</keymap>

which allows me to press the Ch + and Ch - buttons on the remote to adjust the speed while the video is playing (in full-screen mode only, not with the menu displayed).

Examples

Here are three ways I use this functionality:

  • I set it to 0.9x for movies in languages I'm not totally proficient in.
  • I set it to 1.1x for almost everything since the difference is not especially perceptible, but it still allows me to watch 10% more movies in the same amount of time :)
  • I set it to 1.2x for Rick & Morty because it makes Rick even more hilariously reckless and impatient.

Unfortunately, I haven't found a way to set the default tempo value. The closest setting I could find is maxtempo, which controls the maximum tempo value. If you know of a way, please leave a comment!

Zoom WebRTC links

Most people connect to Zoom via a proprietary client which has been on the receiving end of a number of security and privacy issues over the past year, with some experts even describing it as malware.

It's not widely known, however, that Zoom offers a half-decent WebRTC client, which means cross-platform, one-click access to a Zoom room or webinar without needing to install any software.

Given a Zoom link such as https://companyname.zoom.us/j/123456789?pwd=letmein, you can use https://zoom.us/wc/join/123456789?pwd=letmein to connect in your browser.

Notice that the pool of Zoom room IDs is global and you can just drop the companyname from the URL.
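
If you find yourself rewriting these URLs often, the transformation is easy to script (a minimal sketch, assuming links of the /j/<meeting ID>?pwd=<password> form shown above):

$ echo "https://companyname.zoom.us/j/123456789?pwd=letmein" | sed -E 's#https://[^/]+/j/#https://zoom.us/wc/join/#'
https://zoom.us/wc/join/123456789?pwd=letmein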

In my experience however, Jitsi has much better performance than Zoom's WebRTC client. For instance, I've never been able to use Zoom successfully on a Raspberry Pi 4 (8GB), but Jitsi works quite well. If you have a say in the choice of conference platform, go with Jitsi instead.

Removing unsafe-inline from Ikiwiki's style-src directive

After moving my Ikiwiki blog to my own server and enabling a basic CSP policy, I decided to see if I could tighten up the policy some more and stop relying on style-src 'unsafe-inline'.

This does require that OpenID logins be disabled, but as a bonus, it also removes the need for jQuery to be present on the server.

Revised CSP policy

First of all, I visited all of my pages in a Chromium browser and took note of the missing hashes listed in the developer tools console (Firefox doesn't show the missing hashes):

  • 'sha256-4Su6mBWzEIFnH4pAGMOuaeBrstwJN4Z3pq/s1Kn4/KQ='
  • 'sha256-j0bVhc2Wj58RJgvcJPevapx5zlVLw6ns6eYzK/hcA04='
  • 'sha256-j6Tt8qv7z2kSc7fUs0YHbrxawwsQcS05fVaX1r2qrbk='
  • 'sha256-p4cncjf0hAIeTSS5tXecf7qTUanDC27KdlKhT9eOsZU='
  • 'sha256-Y6v8OCtFfMmI5mbpwqCreLofmGZQfXYK7jJHCoHvn7A='
  • 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='

which took care of all of the inline styles.

Note that I kept unsafe-inline in the directive since it is automatically ignored by browsers that understand hashes, but will still be honored by older browsers and keep the site working for them.

Next I added the new unsafe-hashes source expression along with the hash of the CSS fragment (clear: both) that is present on all pages related to comments in Ikiwiki:

$ echo -n "clear: both" | openssl dgst -sha256 -binary | openssl base64 -A
matwEc6givhWX0+jiSfM1+E5UMk8/UGLdl902bjFBmY=

My final style-src directive is therefore the following:

style-src 'self' 'unsafe-inline' 'unsafe-hashes' 'sha256-4Su6mBWzEIFnH4pAGMOuaeBrstwJN4Z3pq/s1Kn4/KQ=' 'sha256-j0bVhc2Wj58RJgvcJPevapx5zlVLw6ns6eYzK/hcA04=' 'sha256-j6Tt8qv7z2kSc7fUs0YHbrxawwsQcS05fVaX1r2qrbk=' 'sha256-p4cncjf0hAIeTSS5tXecf7qTUanDC27KdlKhT9eOsZU=' 'sha256-Y6v8OCtFfMmI5mbpwqCreLofmGZQfXYK7jJHCoHvn7A=' 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=' 'sha256-matwEc6givhWX0+jiSfM1+E5UMk8/UGLdl902bjFBmY='

Browser compatibility

While unsafe-hashes is not yet implemented in Firefox, it happens to work just fine due to a bug (i.e. unsafe-hashes is always enabled whether or not the policy contains it).

It's possible that my new CSP policy won't work in Safari, but these CSS clears don't appear to be needed anyway, so it's just going to mean extra CSP reporting noise.

Removing jQuery

Since jQuery appears to only be used to provide the authentication system selector UI, I decided to get rid of it.

I couldn't find a way to get Ikiwiki to stop pulling it in and so I put the following hack in my Apache config file:

# Disable jQuery.
Redirect 204 /ikiwiki/jquery.fileupload.js
Redirect 204 /ikiwiki/jquery.fileupload-ui.js
Redirect 204 /ikiwiki/jquery.iframe-transport.js
Redirect 204 /ikiwiki/jquery.min.js
Redirect 204 /ikiwiki/jquery.tmpl.min.js
Redirect 204 /ikiwiki/jquery-ui.min.css
Redirect 204 /ikiwiki/jquery-ui.min.js
Redirect 204 /ikiwiki/login-selector/login-selector.js

Serving an empty response in place of these files seems to work very well and removes a whole lot of code that would otherwise be allowed by the script-src directive of my CSP policy. While there is a slight cosmetic change to the login page, I think the reduction in attack surface is well worth it.
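
A quick way to confirm that the hack is working is to request one of these files and look for the empty 204 response (the first line of output will vary with the HTTP version negotiated):

$ curl -sI https://feeding.cloud.geek.nz/ikiwiki/jquery.min.js | head -n1
HTTP/2 204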

Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

disabled gitweb (which is pulled in by ikiwiki-hosting-web):

a2disconf gitweb

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git://feedingthecloud.branchable.com/ source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common
</VirtualHost>

<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

and the common config I put in /etc/fmarier-org/blog-common:

ServerAdmin webmaster@fmarier.org

DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:

<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>

and replacing /etc/apache2/mods-available/deflate.conf with the following:

# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632

before enabling this new config:

a2enconf compression
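
To check that Brotli compression is actually being applied, you can request a page while advertising support for it:

$ curl -s -H "Accept-Encoding: br" -o /dev/null -D - https://feeding.cloud.geek.nz/ | grep -i content-encoding
content-encoding: br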

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo and added the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/blog-error.log 

Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for.

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.
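
One approach might be a pre-receive hook like this (a sketch only, relying on the GIT_PUSH_CERT_STATUS variable that git exposes to hooks when a signed push certificate is received):

#!/bin/sh
# Sketch for /home/blog/source.git/hooks/pre-receive: reject pushes
# that don't carry a valid signed push certificate. Git sets
# GIT_PUSH_CERT_STATUS to "G" for a good signature; the variable is
# unset when no certificate was sent.
if [ "$GIT_PUSH_CERT_STATUS" != "G" ]; then
    echo "Rejecting unsigned push" >&2
    exit 1
fi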

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

Upgrading an ext4 filesystem for the year 2038

If you see a message like this in your logs:

ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)

it's an indication that your filesystem is not Y2K38-safe.

You can also check this manually using:

$ tune2fs -l /dev/sda1 | grep "Inode size:"
Inode size:           128

where an inode size of 128 is insufficient beyond 2038 and an inode size of 256 is what you want.

The safest way to change this is to copy the contents of your partition to another ext4 partition:

cp -a /boot /mnt/backup/

and then reformat with the correct inode size:

umount /boot
mkfs.ext4 -I 256 /dev/sda1

before copying everything back:

mount /boot
cp -a /mnt/backup/boot/* /boot/
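
Once everything is copied back, the same tune2fs check should confirm the larger inode size:

$ tune2fs -l /dev/sda1 | grep "Inode size:"
Inode size:           256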

Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

Using a Streamzap remote control with Kodi

After installing Kodi on a Raspberry Pi 4, I found that my Streamzap remote control worked for everything except the Ok and Exit buttons (which are supposed to get mapped to Enter and Back respectively).

A very old set of instructions for this is archived on the Kodi wiki but here's a more modern version of it.

Root cause

I finally tracked down the problem by enabling debug logging in the Kodi settings. I saw the following in ~/.kodi/temp/kodi.log when pressing the OK button:

DEBUG: Keyboard: scancode: 0x00, sym: 0x0000, unicode: 0x0000, modifier: 0x0
DEBUG: GetActionCode: Trying Hardy keycode for 0xf200
DEBUG: Previous line repeats 3 times.
DEBUG: HandleKey: long-0 (0x100f200, obc-16838913) pressed, action is
DEBUG: Keyboard: scancode: 0x00, sym: 0x0000, unicode: 0x0000, modifier: 0x0

and this when pressing the Down button:

DEBUG: CLibInputKeyboard::ProcessKey - using delay: 500ms repeat: 125ms
DEBUG: Thread Timer start, auto delete: false
DEBUG: Keyboard: scancode: 0x6c, sym: 0x0112, unicode: 0x0000, modifier: 0x0
DEBUG: HandleKey: down (0xf081) pressed, action is Down
DEBUG: Thread Timer 2502349008 terminating
DEBUG: Keyboard: scancode: 0x6c, sym: 0x0112, unicode: 0x0000, modifier: 0x0

This suggests that my Streamzap remote is recognized as a keyboard, which I can confirm using:

$ cat /proc/bus/input/devices 
I: Bus=0003 Vendor=0e9c Product=0000 Version=0100
N: Name="Streamzap PC Remote Infrared Receiver (0e9c:0000)"
P: Phys=usb-0000:01:00.0-1.2/input0
S: Sysfs=/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.2/1-1.2:1.0/rc/rc0/input4
U: Uniq=
H: Handlers=kbd event0 
B: PROP=20
B: EV=100017
B: KEY=3ff 0 0 0 fc000 1 0 0 0 0 18000 4180 c0000801 9e1680 0 0 0
B: REL=3
B: MSC=10

Installing LIRC

The fix I found is to put the following in /etc/X11/xorg.conf.d/90-streamzap-disable.conf:

Section "InputClass"
    Identifier "Ignore Streamzap IR"
    MatchProduct "Streamzap"
    MatchIsKeyboard "true"
    Option "Ignore" "true"
EndSection

to prevent the remote from being used as a keyboard and to instead use it via LIRC, which can be installed like this:

apt install lirc

Put the following in /etc/lirc/lirc_options.conf:

driver=default
device=/dev/lirc0

and install this remote configuration as /etc/lirc/lircd.conf.d/streamzap.conf:

cd /etc/lirc/lircd.conf.d/
curl https://raw.githubusercontent.com/graysky2/streamzap/master/00-Streamzap_PC_Remote.conf > streamzap.conf

Make sure you don't use the config file that comes with the lirc-compat-remotes package or you will likely end up with an over-sensitive remote which tends to double key presses (e.g. pressing the down arrow will go down more than once).

Testing

Now you should be able to test the remote using:

mode2

to see the undecoded infra-red signal, and:

irw

to display the decoded key presses.
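
Pressing a button should print something like this (illustrative output; the code column will differ):

$ irw
0000000000003011 00 KEY_OK Streamzap_PC_Remote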

Kodi configuration

Finally, as the pi user, put the following config in ~/.kodi/userdata/Lircmap.xml:

<lircmap>
  <remote device="Streamzap_PC_Remote">
    <power>KEY_POWER</power>
    <play>KEY_PLAY</play>
    <pause>KEY_PAUSE</pause>
    <stop>KEY_STOP</stop>
    <forward>KEY_FORWARD</forward>
    <reverse>KEY_REWIND</reverse>
    <left>KEY_LEFT</left>
    <right>KEY_RIGHT</right>
    <up>KEY_UP</up>
    <down>KEY_DOWN</down>
    <pageplus>KEY_CHANNELUP</pageplus>
    <pageminus>KEY_CHANNELDOWN</pageminus>
    <select>KEY_OK</select>
    <back>KEY_EXIT</back>
    <menu>KEY_MENU</menu>
    <red>KEY_RED</red>
    <green>KEY_GREEN</green>
    <yellow>KEY_YELLOW</yellow>
    <blue>KEY_BLUE</blue>
    <skipplus>KEY_NEXT</skipplus>
    <skipminus>KEY_PREVIOUS</skipminus>
    <record>KEY_RECORD</record>
    <volumeplus>KEY_VOLUMEUP</volumeplus>
    <volumeminus>KEY_VOLUMEDOWN</volumeminus>
    <mute>KEY_MUTE</mute>
    <one>KEY_1</one>
    <two>KEY_2</two>
    <three>KEY_3</three>
    <four>KEY_4</four>
    <five>KEY_5</five>
    <six>KEY_6</six>
    <seven>KEY_7</seven>
    <eight>KEY_8</eight>
    <nine>KEY_9</nine>
    <zero>KEY_0</zero>
  </remote>
</lircmap>

In order for all of this to take effect, I simply rebooted the Pi:

sudo systemctl reboot

Creating a Kodi media PC using a Raspberry Pi 4

Here's how I set up a media PC using Kodi (formerly XBMC) and a Raspberry Pi 4.

Hardware

The hardware is fairly straightforward, but here's what I ended up getting:

You'll probably want to add a remote control to that setup. I used an old Streamzap I had lying around.

Installing the OS on the SD-card

Plug the SD card into a computer using a USB adapter.

Download the imager and use it to install Raspbian on the SD card.

Then you can simply plug the SD card into the Pi and boot.

System configuration

Using sudo raspi-config, I changed the following:

  • Set hostname (System Options)
  • Wait for network at boot (System Options): needed for NFS
  • Disable screen blanking (Display Options)
  • Enable ssh (Interface Options)
  • Configure locale, timezone and keyboard (Localisation Options)
  • Set WiFi country (Localisation Options)

Then I enabled automatic updates:

apt install unattended-upgrades anacron

echo 'Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Raspbian,codename=${distro_codename},label=Raspbian";
        "origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-raspbian
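
You can confirm that the new configuration is picked up with a dry run:

sudo unattended-upgrade --dry-run --debug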

Headless setup

Should you need to do the setup without a monitor, you can enable ssh by inserting the SD card into a computer and then creating an empty file called ssh in the boot partition.

Plug it into your router and boot it up. Check the IP that it received by looking at the active DHCP leases in your router's admin panel.

Then login:

ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no pi@192.168.1.xxx

using the default password of raspberry.

Hardening

In order to secure the Pi, I followed most of the steps I usually take when setting up a new Linux server.

I created a new user account for admin and ssh access:

adduser francois
addgroup sshuser
adduser francois sshuser
adduser francois sudo

and changed the pi user password to a random one:

pwgen -sy 32
sudo passwd pi

before removing its admin permissions:

deluser pi adm
deluser pi sudo
deluser pi dialout
deluser pi cdrom
deluser pi lpadmin

Finally, I enabled the Uncomplicated Firewall by installing its package:

apt install ufw

and only allowing ssh connections.
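
In ufw terms, that amounts to a single rule:

ufw allow ssh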

After starting ufw using systemctl start ufw.service, you can check that it's configured as expected using ufw status. It should display the following:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)

Installing Kodi

Kodi is very straightforward to install since it's now part of the Raspbian repositories:

apt install kodi

To make it start at boot/login, while still being able to exit and use other apps if needed:

mkdir -p ~/.config/lxsession/LXDE-pi
cp /etc/xdg/lxsession/LXDE-pi/autostart ~/.config/lxsession/LXDE-pi/
echo "@kodi" >> ~/.config/lxsession/LXDE-pi/autostart

In order to improve privacy while fetching metadata, I also installed Tor:

apt install tor

and then set a proxy in the Kodi System | Internet access settings:

  • Proxy type: SOCKS5 with remote DNS resolving
  • Server: localhost
  • Port: 9050

Network File System

To avoid having all media storage connected directly to the Pi via USB, I set up an NFS share over my local network.

First, give static IP allocations to the server and the Pi in your DHCP server, then add it to the /etc/hosts file on your NFS server:

192.168.1.3    pi

Install the NFS server package:

apt install nfs-kernel-server

Setup the directories to share in /etc/exports:

/pub/movies    pi(ro,insecure,all_squash,subtree_check)
/pub/tv_shows  pi(ro,insecure,all_squash,subtree_check)

Open the right ports on your firewall by putting this in /etc/network/iptables.up.rules:

-A INPUT -s 192.168.1.3 -p udp -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT

Finally, apply all of these changes:

iptables-apply
systemctl restart nfs-kernel-server.service

On the Pi, put the server's static IP in /etc/hosts:

192.168.1.2    fileserver

and this in /etc/fstab:

fileserver:/data/movies  /kodi/movies  nfs  ro,bg,hard,noatime,async,nolock  0  0
fileserver:/data/tv      /kodi/tv      nfs  ro,bg,hard,noatime,async,nolock  0  0

Then create the mount points and mount everything:

mkdir -p /kodi/movies
mkdir /kodi/tv
mount /kodi/movies
mount /kodi/tv
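
You can sanity-check the exports from the Pi using showmount (part of the nfs-common package); the output should look something like this:

$ showmount -e fileserver
Export list for fileserver:
/pub/tv_shows pi
/pub/movies   pi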

Programming a DMR radio with its CPS

Here are some notes I took around programming my AnyTone AT-D878UV radio to operate on DMR using the CPS software that comes with it.

Note that you can always tune in to a VFO channel by hand if you haven't had time to add it to your codeplug yet.

DMR terminology

First of all, the terminology of DMR is quite different from that of the regular analog FM world.

Here are the basic terms:

  • Frequency: same meaning as in the analog world
  • Repeater: same meaning as in the analog world
  • Timeslot: Each frequency is split into two timeslots (1 and 2), which means that there can be two simultaneous transmissions on each frequency.
  • Color code: This is the digital equivalent of a CTCSS tone (sometimes called privacy tone) in that using the incorrect code means that you will tie up one of the timeslots on the frequency, but nobody else will hear you. These are not actually named after colors, but are instead just numerical IDs from 0 to 15.

There are two different identification mechanisms (both are required):

  • Callsign: This is the same identifier issued to you by your country's amateur radio authority. Mine is VA7GPL.
  • Radio ID: This is a unique numerical ID tied to your callsign which you must register for ahead of time and program into your radio. Mine is 3027260.

The following is where this digital mode becomes most interesting:

  • Talkgroup: a "chat room" where everything you say will be heard by anybody listening to that talkgroup
  • Network: a group of repeaters connected together over the Internet (typically) and sharing a common list of talkgroups
  • Hotspot: a personal simplex device which allows you to connect to a network with your handheld and access all of the talkgroups available on that network

The most active network these days is Brandmeister, but there are several others.

  • Access: This can either be Always on which means that a talkgroup will be permanently broadcasting on a timeslot and frequency, or PTT which means a talkgroup will not be broadcast until it is first "woken up" by pressing the push-to-talk button and then will broadcast for a certain amount of time before going to sleep again.
  • Channel: As in the analog world, this is what you select on your radio when you want to talk to a group of people. In the digital world however, it is tied not only to a frequency (and timeslot) and tone (color code), but also to a specific talkgroup.

Ultimately what you want to do when you program your radio is to find the talkgroups you are interested in (from the list offered by your local repeater) and then assign them to specific channel numbers on your radio. More on that later.

Callsign and Radio IDs

Before we get to talkgroups, let's set your callsign and Radio ID.

Then you need to download the latest list of Radio IDs so that your radio can display people's names and callsigns instead of just their numerical IDs.

One approach is to only download the list of users who recently talked on talkgroups you are interested in. For example, I used to download the contacts for the following talkgroups: 91,93,95,913,937,3026,3027,302,30271,30272,530,5301,5302,5303,5304,3100,3153,31330 but these days, what I normally do is to just download the entire worldwide database (user.csv) since my radio still has enough storage (200k entries) for it.

In order for the user.csv file to work with the AnyTone CPS, it needs to have particular columns and use the DOS end-of-line characters (apt install dos2unix if you want to do it manually). I wrote a script to do all of the work for me.
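
If you'd rather do the conversion by hand, the line-ending part is simple (a sketch only; the download URL on radioid.net is an assumption and the column reshuffling expected by the CPS is not shown):

# Download the worldwide DMR user database (assumed URL) and convert
# it to DOS line endings for the AnyTone CPS.
curl -sO https://radioid.net/static/user.csv
unix2dos user.csv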

If you use dmrconfig to program this radio instead, then the conversion is unnecessary. The user.csv file can be used directly, however it will be truncated due to an incorrect limit hard-coded in the software.

Talkgroups

Next, you need to pick the talkgroups you would like to allocate to specific channels on your radio.

Start by looking at the documentation for your local repeaters (e.g. VE7RAG and VE7NWR in the Vancouver area).

In addition to telling you the listen and transmit frequencies of the repeater (again, this works the same way as with analog FM), these will tell you which talkgroups are available and what timeslots and color codes they have been set to. It will also tell you the type of access for each of these talkgroups.

This is how I programmed a channel and a talkgroup on the VE7RAG repeater in my radio.

If you don't have a local repeater with DMR capability, or if you want to access talkgroups available on a different network, then you will need to get a DMR hotspot such as one that's compatible with the Pi-Star software.

This is an excerpt from the programming I created for the talkgroups I made available through my hotspot.

One of the unfortunate limitations of the CPS software for the AnyTone 878 is that talkgroup numbers are globally unique identifiers. This means that if TG1234 (hypothetical example) is Ragchew 3000 on DMR-MARC but Iceland-wide chat on Brandmeister, then you can't have two copies of it with different names. The solution I found for this was to give that talkgroup the name "TG1234" instead of "Ragchew3k" or "Iceland". I use a more memorable name for non-conflicting talkgroups, but for the problematic ones, I simply repeat the talkgroup number.

Simplex

Talkgroups are not required to operate on DMR. Just like analog FM, you can talk to another person point-to-point using a simplex channel.

The convention for all simplex channels is the following:

  • Talkgroup: 99
  • Color code: 1
  • Timeslot: 1
  • Admit criteria: Always
  • In Call Criteria: TX or Always

After talking to the British Columbia Amateur Radio Coordination Council, I found that the following frequency ranges are most suitable for DMR simplex:

  • 145.710-145.790 MHz (simplex digital transmissions)
  • 446.000-446.975 MHz (all simplex modes)

The VECTOR list identifies two frequencies in particular:

  • 446.075 MHz
  • 446.500 MHz

Learn more

If you'd like to learn more about DMR, I would suggest you start with this excellent guide (also mirrored here).

List of Planet Linux Australia blogs

I've been following Planet Linux Australia for many years and discovered many interesting FOSS blogs through it. I was sad to see that it got shut down a few weeks ago and so I decided to manually add all of the feeds to my RSS reader to avoid missing posts from people I have been indirectly following for years.

Since all feeds have been removed from the site, I recovered the list of blogs available from an old copy of the site preserved by the Internet Archive.

Here is the resulting .opml file if you'd like to subscribe.

Changes

Once I had the full list, I removed all blogs that are gone, empty or broken (e.g. domain not resolving, returning a 404, various database or server errors).

I updated the URLs of a few blogs which had moved but hadn't updated their feeds on the planet. I also updated the name of a blogger who was still listed under a previous last name.

Finally, I removed LA-specific tags from feeds since these are unlikely to be used again.

Work-arounds

The following LiveJournal feeds didn't work in my RSS reader but opened fine in a browser:

However, since none of them have been updated in the last 7 years, I just left them out.

A couple appear to be impossible to fetch over Tor, presumably due to a Cloudflare setting:

Since only the last two have been updated in the last 9 years, I added these to Feedburner and added the following "proxied" URLs to my reader:

Similarly, I couldn't fetch the following over Tor for some other reasons:

I excluded the first two, which haven't been updated in 6 years, and proxied the other ones: