Using a Streamzap remote control with VLC on a Raspberry Pi

First of all, I am starting from a working Streamzap remote in Kodi. If VLC is the first application you are setting up with the Streamzap remote then you will probably need to read the above blog post first.

Once you know you have a working remote, put the following lircrc config into /home/pi/.lircrc:

  begin
    prog = vlc
    button = KEY_PLAY
    config = key-play
  end

  begin
    prog = vlc
    button = KEY_PAUSE
    config = key-pause
  end

  begin
    prog = vlc
    button = KEY_STOP
    config = key-stop
  end

  begin
    prog = vlc
    button = KEY_POWER
    config = key-quit
  end

  begin
    prog = vlc
    button = KEY_NEXT
    config = key-next
  end

  begin
    prog = vlc
    button = KEY_PREVIOUS
    config = key-prev
  end

  begin
    prog = vlc
    button = KEY_RED
    config = key-toggle-fullscreen
  end

  begin
    prog = vlc
    button = KEY_REWIND
    config = key-slower
  end

  begin
    prog = vlc
    button = KEY_FORWARD
    config = key-faster
  end

  begin
    prog = vlc
    button = KEY_VOLUMEDOWN
    config = key-vol-down
  end

  begin
    prog = vlc
    button = KEY_VOLUMEUP
    config = key-vol-up
  end

  begin
    prog = vlc
    button = KEY_BLUE
    config = key-audio-track
  end

  begin
    prog = vlc
    button = KEY_MUTE
    config = key-vol-mute
  end

  begin
    prog = vlc
    button = KEY_LEFT
    config = key-nav-left
  end

  begin
    prog = vlc
    button = KEY_DOWN
    config = key-nav-down
  end

  begin
    prog = vlc
    button = KEY_UP
    config = key-nav-up
  end

  begin
    prog = vlc
    button = KEY_RIGHT
    config = key-nav-right
  end

  begin
    prog = vlc
    button = KEY_MENU
    config = key-nav-activate
  end

  begin
    prog = vlc
    button = KEY_GREEN
    config = key-subtitle-track
  end

and then after starting VLC:

  1. Open Tools | Preferences.
  2. Select All under Show Settings in the bottom left corner.
  3. Open Interface | Control Interfaces in the left side-bar.
  4. Enable Infrared remote control interface.

Now you should see lirc in the text box at the bottom of Control Interfaces and a line like this in your ~/.config/vlc/vlcrc:

  control=lirc

If you're looking to customize the above key mapping, you can find the VLC key codes in the output of vlc -H --extended | grep -- --key-.
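
Since the lircrc entries are so repetitive, they can also be generated with a short script. Here's a sketch (the button-to-action mapping is a subset of the config above; each lircrc entry lives in a begin/end block):

```python
# Sketch: generate ~/.lircrc entries for VLC from a button/action mapping.
# The mapping below is a subset of the config shown above.
MAPPING = {
    "KEY_PLAY": "key-play",
    "KEY_PAUSE": "key-pause",
    "KEY_STOP": "key-stop",
    "KEY_VOLUMEUP": "key-vol-up",
    "KEY_VOLUMEDOWN": "key-vol-down",
}

def lircrc_entries(mapping, prog="vlc"):
    """Return the text of one begin/end block per button."""
    blocks = []
    for button, action in mapping.items():
        blocks.append(
            "begin\n"
            f"  prog = {prog}\n"
            f"  button = {button}\n"
            f"  config = {action}\n"
            "end\n"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    print(lircrc_entries(MAPPING))
```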

Remote logging of Turris Omnia log messages using syslog-ng and rsyslog

As part of debugging an upstream connection problem I've been seeing recently, I wanted to be able to monitor the logs from my Turris Omnia router. Here's how I configured it to send its logs to a server I already had on the local network.

Server setup

The first thing I did was to open up my server's rsyslog (Debian's default syslog server) to remote connections since it's going to be the destination host for the router's log messages.

I added the following to /etc/rsyslog.d/router.conf:

module(load="imtcp")
input(type="imtcp" port="514")

if $fromhost-ip == '' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}

This is using the newer rsyslog configuration method: a handy scripting language called RainerScript. Severity level 5 maps to "notice", which consists of unusual non-error conditions, and the address in the $fromhost-ip comparison is of course the IP address of the router on the LAN side. With this, I'm directing all router log messages to a separate file, filtering out anything less important than severity 5.
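
For reference, here is how the numeric severities line up with their names; a quick sketch of the effect of the `$syslogseverity <= 5` filter (severity values as defined in RFC 5424, where lower numbers are more important):

```python
# Syslog severity levels (RFC 5424): lower number = more important.
SEVERITIES = {
    0: "emergency", 1: "alert", 2: "critical", 3: "error",
    4: "warning", 5: "notice", 6: "informational", 7: "debug",
}

def keep(severity, threshold=5):
    """Mimic rsyslog's `$syslogseverity <= 5` test."""
    return severity <= threshold

# Everything from emergency down to notice is kept;
# informational and debug messages are filtered out.
kept = [name for sev, name in SEVERITIES.items() if keep(sev)]
```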

In order for rsyslog to pick up this new configuration file, I restarted it:

systemctl restart rsyslog.service

and checked that it was running correctly (e.g. no syntax errors in the new config file) using:

systemctl status rsyslog.service

Since I added a new log file, I also set up log rotation for it by putting the following in /etc/logrotate.d/router:

    /var/log/router.log {
        rotate 4
    }

In addition, since I use logcheck to monitor my server logs and email me errors, I had to add /var/log/router.log to /etc/logcheck/logcheck.logfiles.

Finally I opened the rsyslog port to the router in my server's firewall by adding the following to /etc/network/iptables.up.rules:

# Allow logs from the router
-A INPUT -s -p tcp --dport 514 -j ACCEPT

and ran iptables-apply.
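
Before touching the router, the server-side setup can be checked from any other machine on the LAN by hand-crafting a syslog message over TCP. Here is a sketch of what a tool like logger does under the hood; the hostname and message contents are placeholders:

```python
import socket

def send_syslog_tcp(host, port=514, message="Test", facility=1, severity=5):
    """Send a single BSD-syslog-style message over TCP, newline-delimited,
    which is the framing that rsyslog's imtcp input accepts by default."""
    priority = facility * 8 + severity  # e.g. user.notice = 1*8 + 5 = 13
    line = f"<{priority}>test-host root: {message}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))

# Example (hypothetical server address):
# send_syslog_tcp("", 514, "hello from the LAN")
```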

With all of this in place, it was time to get the router to send messages.

Router setup

As suggested on the Turris forum, I ssh'ed into my router and added this in /etc/syslog-ng.d/remote.conf:

destination d_loghost {
        network("" time-zone("America/Vancouver"));
};

source dns {
        # (driver elided)
};

log {
        source(src);
        source(dns);
        destination(d_loghost);
};
Setting the timezone to the same as my server was needed because the router messages were otherwise sent with UTC timestamps.

To ensure that the destination host always gets the same IP address, I went to the advanced DHCP configuration page and added a static lease for the server's MAC address so that it always gets assigned the same one. If that wasn't already the server's IP address, you'll have to restart it for this to take effect.

Finally, I restarted the syslog-ng daemon on the router to pick up the new config file:

/etc/init.d/syslog-ng restart


In order to test this configuration, I opened three terminal windows:

  1. tail -f /var/log/syslog on the server
  2. tail -f /var/log/router.log on the server
  3. tail -f /var/log/messages on the router

I immediately started to see messages from the router in the third window and some of these (not all, because of my severity-5 filter) were flowing to the second window as well. Just as important, none of the messages made it to the first window; otherwise log messages from the router would be mixed in with the server's own logs. That's the purpose of the stop command in /etc/rsyslog.d/router.conf.

To force a log message to be emitted by the router, simply ssh into it and issue the following command:

logger Test

It should show up in the second and third windows immediately if you've got everything set up correctly.

Timezone problems

If I do the following on my router:

/etc/init.d/syslog-ng restart
logger TestA

I see the following in /var/log/messages:

Aug 14 20:39:35 hostname syslog-ng[9860]: syslog-ng shutting down; version='3.37.1'
Aug 14 20:39:36 hostname syslog-ng[10024]: syslog-ng starting up; version='3.37.1'
Aug 15 03:39:49 hostname root: TestA

The correct timezone is the one in the first two lines. Messages from other programs, like the logger test above, are displayed using an incorrect timezone.

Thanks to a very helpful syslog-ng mailing list thread, I found that this is actually an upstream OpenWRT bug.

My favourite work-around is to tell syslog-ng to simply ignore the timestamp provided by the application and to use the time of reception (of the log message) instead. To do this, simply change the following in /etc/syslog-ng.conf:

source src {
    unix-dgram("/dev/log");
};

to:

source src {
    unix-dgram("/dev/log", keep-timestamp(no));
};
Unfortunately, I wasn't able to fix it in a way that would survive a syslog-ng package update, but since this is supposedly fixed in Turris 6.0, it shouldn't be a problem for much longer.

Using Gandi DNS for Let's Encrypt certbot verification

I had some problems getting the Gandi certbot plugin to work in Debian bullseye since the documentation appears to be outdated.

When running certbot renew --dry-run, I saw the following error message:

Plugin legacy name certbot-plugin-gandi:dns may be removed in a future version. Please use dns instead.

Thanks to an issue in another DNS plugin, I was able to easily update my configuration to the new naming convention.


Get an API key from Gandi and then put it in /etc/letsencrypt/gandi.ini:

# live dns v5 api key

before making it only readable by root:

chown root:root /etc/letsencrypt/gandi.ini
chmod 600 /etc/letsencrypt/gandi.ini

Then install the required package:

apt install python3-certbot-dns-gandi

Getting an initial certificate

To get an initial certificate using the Gandi plugin, simply use the following command:

certbot certonly -a dns --dns-credentials /etc/letsencrypt/gandi.ini -d

Setting up automatic renewal

If you have automatic renewals enabled, you'll want to ensure your /etc/letsencrypt/renewal/ file looks like this:

# renew_before_expiry = 30 days
version = 1.12.0
archive_dir = /etc/letsencrypt/archive/
cert = /etc/letsencrypt/live/
privkey = /etc/letsencrypt/live/
chain = /etc/letsencrypt/live/
fullchain = /etc/letsencrypt/live/

account = abcdef
authenticator = dns
server =
dns_credentials = /etc/letsencrypt/gandi.ini

Crashplan 10 won't start on Ubuntu derivatives

CrashPlan recently updated itself to version 10 on my Pop!_OS laptop and stopped backing anything up.

When trying to start the client, I was faced with this error message:

Code42 cannot connect to its backend service.

Digging through log files

In /usr/local/crashplan/log/service.log.0, I found the reason why the service didn't start:

[05.18.22 07:40:05.756 ERROR main           com.backup42.service.CPService] Error starting up, java.lang.IllegalStateException: Failed to start authorized services.
STACKTRACE:: java.lang.IllegalStateException: Failed to start authorized services.
        at com.backup42.service.ClientServiceManager.authorize(
        at com.backup42.service.CPService.startServices(
        at com.backup42.service.CPService.start(
        at com.backup42.service.CPService.main(
Caused by: Unable to provision, see the following errors:

1) Error injecting constructor, java.lang.UnsatisfiedLinkError: Unable to load library 'uaw': cannot open shared object file: No such file or directory cannot open shared object file: No such file or directory
Native library (linux-x86-64/ not found in resource path (lib/com.backup42.desktop.jar:lang)
  at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(
  at com.code42.service.useractivity.UserActivityWatcherServiceImpl.class(
  while locating com.code42.service.useractivity.UserActivityWatcherServiceImpl
  at com.code42.service.AbstractAuthorizedModule.addServiceWithoutBinding(
  while locating com.code42.service.IAuthorizedService annotated with,uniqueId=34, type=MULTIBINDER, keyType=)
  while locating java.util.Set<com.code42.service.IAuthorizedService>

1 error
        at com.backup42.service.ClientServiceManager.getServices(
        at com.backup42.service.ClientServiceManager.authorize(
        ... 3 more
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'uaw': cannot open shared object file: No such file or directory cannot open shared object file: No such file or directory
Native library (linux-x86-64/ not found in resource path (lib/com.backup42.desktop.jar:lang)
        at com.sun.jna.NativeLibrary.loadLibrary(
        at com.sun.jna.NativeLibrary.getInstance(
        at com.sun.jna.Library$Handler.<init>(
        at com.sun.jna.Native.load(
        at com.sun.jna.Native.load(
        at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(
        at com.code42.service.useractivity.UserActivityWatcherServiceImpl$$FastClassByGuice$$4bcc96f8.newInstance(<generated>)
        at com.code42.service.AuthorizedScope$1.get(
        at com.code42.service.AuthorizedScope$1.get(
        ... 6 more
        Suppressed: java.lang.UnsatisfiedLinkError: cannot open shared object file: No such file or directory
                at Method)
                at com.sun.jna.NativeLibrary.loadLibrary(
                ... 28 more
        Suppressed: java.lang.UnsatisfiedLinkError: cannot open shared object file: No such file or directory
                at Method)
                at com.sun.jna.NativeLibrary.loadLibrary(
                ... 28 more
        Suppressed: Native library (linux-x86-64/ not found in resource path (lib/com.backup42.desktop.jar:lang)
                at com.sun.jna.Native.extractFromResourcePath(
                at com.sun.jna.NativeLibrary.loadLibrary(
                ... 28 more

[05.18.22 07:40:05.756 INFO  main         42.service.history.HistoryLogger] HISTORY:: Code42 stopped, version 10.0.0
[05.18.22 07:40:05.756 INFO  main           com.backup42.service.CPService] *****  STOPPING  *****
[05.18.22 07:40:05.757 INFO  Thread-0       com.backup42.service.CPService] ShutdownHook...calling cleanup
[05.18.22 07:40:05.759 INFO  STOPPING       com.backup42.service.CPService] SHUTDOWN:: Stopping service...

This suggests that a new library dependency (uaw) didn't get installed during the last upgrade.

Looking at the upgrade log (/usr/local/crashplan/log/upgrade..log), I found that it detected my operating system as "pop 20":

Fri May 13 07:39:51 PDT 2022: Info : Resolve Native Libraries for pop 20...
Fri May 13 07:39:51 PDT 2022: Info :   Keep common libs
Fri May 13 07:39:51 PDT 2022: Info :   Keep pop 20 libs
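
That "pop 20" string comes from the installer parsing /etc/os-release. Here is a sketch of the same logic in Python, run against a sample Pop!_OS os-release (the sample values are illustrative):

```python
# Replicate the installer's OS detection:
#   grep "^ID=" /etc/os-release | cut -d = -f 2 | tr -d '"' | tr '[:upper:]' '[:lower:]'
SAMPLE_OS_RELEASE = '''NAME="Pop!_OS"
VERSION="20.04 LTS"
ID=pop
ID_LIKE="ubuntu debian"
VERSION_ID="20.04"
'''

def detect(os_release_text):
    """Return (OS_NAME, major OS_VERSION) the way the install script does."""
    fields = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"').lower()
    return fields["ID"], fields["VERSION_ID"].split(".")[0]

# On Pop!_OS this yields ("pop", "20"), which is why the log says "pop 20".
```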

I unpacked the official installer (login required):

$ tar zxf CrashPlanSmb_10.0.0_15252000061000_303_Linux.tgz 
$ cd code42-install
$ gzip -dc CrashPlanSmb_10.0.0.cpi | cpio -i

and found that the uaw library is only shipped for 4 supported platforms (rhel7, rhel8, ubuntu18 and ubuntu20):

$ find nlib/

Fixing the installation script

Others have fixed this problem by copying the files manually but since Pop!_OS is based on Ubuntu, I decided to fix this by forcing the OS to be detected as "ubuntu" in the installer.

I simply edited the install script like this:

--- 2022-05-18 16:47:52.176199965 -0700
+++  2022-05-18 16:57:26.231723044 -0700
@@ -15,7 +15,7 @@
 readonly IS_ROOT=$([[ $(id -u) -eq 0 ]] && echo true || echo false)
 readonly REQ_CMDS="chmod chown cp cpio cut grep gzip hash id ls mkdir mv sed"
 readonly APP_VERSION_FILE=""
-readonly OS_NAME=$(grep "^ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | tr '[:upper:]' '[:lower:]')
+readonly OS_NAME=ubuntu
 readonly OS_VERSION=$(grep "^VERSION_ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | cut -d . -f1)

 SCRIPT_DIR="${0:0:${#0} - ${#SCRIPT_NAME}}"

and then ran that install script as root again to upgrade my existing installation.

Upgrading the Wi-Fi cards in a Turris Omnia 2020

I've been very happy with my Turris Omnia router and decided recently to take advantage of the fact that it is easily upgradable and replace the original radios with Wave 2 models.

I didn't go for a Wi-Fi 6-capable card because I don't have any devices that support it at this point. There is also an official WiFi 6 upgrade kit in the works and so I might just go with that later.

Wi-Fi card selection

After seeing a report that someone was already using these cards on the Omnia, I decided to look for the following:

Compex themselves don't appear to sell to consumers, but I found an American store that would sell them to me and ship to Canada:

Each card uses 4 antennas, which means that I would need an additional diplexer, an extra pigtail to SMA-RP connector, and two more antennas to wire everything up. Thankfully, the Omnia already comes with two extra holes drilled into the back of the router (covered by plastic caps) and so there is no need for drilling the case.

I put the two cards in the middle and right-most slots (they don't seem to go in the left-most slot because of the SIM card holder being in the way) without worrying about antennas just yet.

Driver installation

I made sure that the chipsets were supported in OpenWRT 19.07 (LTS 4.14 kernel) and found that support for the Qualcomm QCA9984 chipset was added in the ath10k driver as of kernel 4.8 but only for two cards apparently.

I installed the following proprietary firmware package via the advanced configuration interface:


and that automatically pulled in the free ath10k driver. After rebooting, I was able to see one of the two cards in the ReForis admin page and configure it.

Note that there is an alternative firmware available in OpenWRT as well (look for packages ending in -ct), but since both firmware/driver combinations gave me the same initial results, I decided to go with the defaults.


The first problem I ran into was that I could only see one of the two cards in the output of lspci (after sshing into the router). Looking for ath or wlan in the dmesg output, it didn't look like the second card was being recognized at all.

Neither the 2.4 GHz or 5 GHz Wave 2 card worked in the right-most slot, but either of them works fine when moved to the middle slot. The stock cards work just fine in the right-most slot. I have no explanation for this.

The second problem was that I realized that the antenna holes are not all the same. The two on each end are fully round and can accommodate the diplexers which come with a round SMA-RP connector.

On the other hand, the three middle ones have a notch at the top which can only accommodate the single antenna connectors, which have a flat bit on one side. I would have to file one of the holes in order to add a third diplexer to my setup.

Final working configuration

Since I didn't see a way to use both new cards at once, I settled on a different configuration that would nevertheless still upgrade both my 2.4 GHz and 5 GHz Wi-Fi.

I moved the original dual-band card to the right-most slot and switched it to the 2.4 GHz band since it's more powerful (both in dB and in throughput) than the original half-length card.

Then I put the WLE1216V5-20 into the middle slot.

The only extra things I had to buy were two extra pigtail-to-SMA-RP connectors and two antennas.

Here's what the final product looks like:

Using a Streamzap remote control with MythTV on Debian Bullseye

After upgrading my MythTV machine to Debian Bullseye and MythTV 31, my Streamzap remote control stopped working correctly: the up and down buttons were working, but the OK button wasn't.

Here's the complete solution that made it work with the built-in kernel support (i.e. without LIRC).

Button re-mapping

Since some of the buttons were working, but not others, I figured that the buttons were probably not mapped to the right keys.

Inspired by these old v4l-utils-based instructions, I made my own custom keymap by copying the original keymap:

cp /lib/udev/rc_keymaps/streamzap.toml /etc/rc_keymaps/

and then modifying it to adapt it to what MythTV needs. This is what I ended up with:

[[protocols]]
name = "streamzap"
protocol = "rc-5-sz"
[protocols.scancodes]
0x28c0 = "KEY_0"
0x28c1 = "KEY_1"
0x28c2 = "KEY_2"
0x28c3 = "KEY_3"
0x28c4 = "KEY_4"
0x28c5 = "KEY_5"
0x28c6 = "KEY_6"
0x28c7 = "KEY_7"
0x28c8 = "KEY_8"
0x28c9 = "KEY_9"
0x28ca = "KEY_ESC"
0x28cb = "KEY_MUTE"
0x28cc = "KEY_UP"
0x28ce = "KEY_DOWN"
0x28cf = "KEY_LEFTBRACE"
0x28d0 = "KEY_UP"
0x28d1 = "KEY_LEFT"
0x28d2 = "KEY_ENTER"
0x28d3 = "KEY_RIGHT"
0x28d4 = "KEY_DOWN"
0x28d5 = "KEY_M"
0x28d6 = "KEY_ESC"
0x28d7 = "KEY_L"
0x28d8 = "KEY_P"
0x28d9 = "KEY_ESC"
0x28da = "KEY_BACK"
0x28db = "KEY_FORWARD"
0x28dc = "KEY_R"
0x28dd = "KEY_PAGEUP"
0x28de = "KEY_PAGEDOWN"
0x28e0 = "KEY_D"
0x28e1 = "KEY_I"
0x28e2 = "KEY_END"
0x28e3 = "KEY_A"

Note that the keycodes can be found in the kernel source code.

With my own keymap in place at /etc/rc_keymaps/streamzap.toml, I changed /etc/rc_maps.cfg to have the kernel driver automatically use it:

--- a/rc_maps.cfg
+++ b/rc_maps.cfg
@@ -126,7 +126,7 @@
 *      rc-real-audio-220-32-keys real_audio_220_32_keys.toml
 *      rc-reddo                 reddo.toml
 *      rc-snapstream-firefly    snapstream_firefly.toml
-*      rc-streamzap             streamzap.toml
+*      rc-streamzap             /etc/rc_keymaps/streamzap.toml
 *      rc-su3000                su3000.toml
 *      rc-tango                 tango.toml
 *      rc-tanix-tx3mini         tanix_tx3mini.toml

Button repeat delay

To adjust the delay before button presses are repeated, I followed these old out-of-date instructions on the MythTV wiki and put the following in /etc/udev/rules.d/streamzap.rules:

ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -s rc0 -D 1000 -P 250"

Note that the -d option has been replaced with -s in the latest version of ir-keytable.

To check that the Streamzap is indeed detected as rc0 on your system, use this command:

$ ir-keytable 
Found /sys/class/rc/rc0/ with:
    Name: Streamzap PC Remote Infrared Receiver (0e9c:0000)
    Driver: streamzap
    Default keymap: rc-streamzap

Make sure you don't pass the -c option to ir-keytable or else it will clear the keymap set via /etc/rc_maps.cfg, removing all of the button mappings.

Ways to refer to localhost in Chromium

The filter rules preventing websites from portscanning the local machine have recently been tightened in Brave. It turns out there are a surprising number of ways to refer to the local machine in Chromium.

localhost and friends is the first address that comes to mind when thinking of the local machine. localhost is typically aliased to that address (via /etc/hosts), though that convention is not mandatory. The IPv6 equivalent is [::1]. is not a routable address, but that's what's used to tell a service to bind (listen) on all network interfaces. In Chromium, it resolves to the local machine, just like The IPv6 equivalent is [::].


Of course, another way to encode these numerical URLs is to create A / AAAA records for them under a domain you control. I've done this under my personal domain:

For these to work, you'll need to:

  • Make sure you can connect to IPv6-only hosts, for example by connecting to an appropriate VPN if needed.
  • Put nameserver in /etc/resolv.conf since you need a DNS server that will not filter these localhost domains. (For example, Unbound will do that if you use private-address: in the server config.)
  • Go into chrome://settings/security and disable Use secure DNS to make sure the OS resolver is used.
  • Turn off the chrome://flags/#block-insecure-private-network-requests flag since that security feature (CORS-RFC1918) is designed to protect against these kinds of requests. subnet

Technically, any address in the entire subnet can be used to refer to the local machine. However, it's not a reliable way to portscan a machine from a web browser because it only catches the services that listen on all interfaces (i.e.

For example, on my machine, if I nmap, I get:

22/tcp   open  ssh       OpenSSH 8.2p1
25/tcp   open  smtp      Postfix smtpd

whereas if I nmap another address in that subnet, I only get:

22/tcp open  ssh     OpenSSH 8.2p1

That's because I've got the following in /etc/postfix/

inet_interfaces = loopback-only

which I assume is explicitly binding
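
This difference between binding to all interfaces and binding to a single loopback address can be demonstrated with a couple of sockets. A sketch (ports are chosen by the OS; behaviour as on Linux):

```python
import socket

# Bind a listener to only, then try to reach it two ways.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))
server.listen(1)
port = server.getsockname()[1]

# Connecting to works...
ok = socket.create_connection(("", port), timeout=1)
ok.close()

# ...but connecting to the same port via another loopback address
# ( is refused, because nothing is bound to that address.
try:
    socket.create_connection(("", port), timeout=1)
    reachable = True
except ConnectionRefusedError:
    reachable = False

server.close()
# A service must listen on (or on all interfaces) to be
# visible at that address, so reachable ends up False here.
```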

Nevertheless, it would be good to get that fixed in Brave too.

Error loading emacs' python-mode in Ubuntu 20.04 (focal)

Ever since upgrading to Ubuntu 20.04 (focal) I started getting the following error in emacs when opening Python files (in the built-in python-mode):

File mode specification error: (wrong-type-argument stringp nil)

I used M-x toggle-debug-on-error in order to see where the error originated and saw the following immediately after opening a Python file:

Debugger entered--Lisp error: (wrong-type-argument stringp nil)
  string-match("\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" nil)
  (if (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address) (progn (add-to-list (quote tramp-default-user-alist) (list "\\`gdrive\\'" nil (match-string 1 user-mail-address))) (add-to-list (quote tramp-default-host-alist) (quote ("\\`gdrive\\'" nil (\, (match-string 2 user-mail-address)))))))
  eval-buffer(#<buffer  *load*> nil "/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" nil t)  ; Reading at buffer position 25605
  load-with-code-conversion("/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" "/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" nil t)
  byte-code("\300\301!\210\300\302!\210\300\303!\210\300\304!\210\300\305!\210\300\306!\210\300\307!\210\300\310!\210\300\311!\210\300\312!\210\300\313!\210\300\314!\207" [require auth-source advice cl-lib custom format-spec parse-time password-cache shell timer ucs-normalize trampver tramp-loaddefs] 2)
  byte-code("\300\301!\210\300\302!\210\303\304\305\306\307\310\307\311\312\313\314\315&\013\210\316\317\320\321\322DD\323\307\304\324\325&\007\210\316\326\320\321\327DD\330\307\304\324\331&\007\210\316\332\320\321\333DD\334\307\304\324\335&\007\210\316\336\320\321\337DD\340\307\304\324\341&\007\210\316\342\320\321\343DD\344\307\304\324\345&\007\210\316\346\320\321\347DD\350\307\304\324\351&\007\210\316\352\320\321\353DD\354\314\355\307\304\324\356&\011\207" [require tramp-compat cl-lib custom-declare-group tramp nil "Edit remote files with a combination of ssh, scp, etc." :group files comm :link (custom-manual "(tramp)Top") :version "22.1" custom-declare-variable tramp-mode funcall function #f(compiled-function () #<bytecode 0x13896d5>) "Whether Tramp is enabled.\nIf it is set to nil, all remote file names are used literally." :type boolean tramp-verbose #f(compiled-function () #<bytecode 0x1204c1d>) "Verbosity level for Tramp messages.\nAny level x includes messages for all levels 1 .. x-1.  The levels are\n\n 0  silent (no tramp messages at all)\n 1  errors\n 2  warnings\n 3  connection to remote hosts (default level)\n 4  activities\n 5  internal\n 6  sent and received strings\n 7  file caching\n 8  connection properties\n 9  test commands\n10  traces (huge)." integer tramp-backup-directory-alist #f(compiled-function () #<bytecode 0x10f7791>) "Alist of filename patterns and backup directory names.\nEach element looks like (REGEXP . DIRECTORY), with the same meaning like\nin `backup-directory-alist'.  If a Tramp file is backed up, and DIRECTORY\nis a local file name, the backup directory is prepended with Tramp file\nname prefix (method, user, host) of file.\n\n(setq tramp-backup-directory-alist backup-directory-alist)\n\ngives the same backup policy for Tramp files on their hosts like the\npolicy for local files." 
(repeat (cons (regexp :tag "Regexp matching filename") (directory :tag "Backup directory name"))) tramp-auto-save-directory #f(compiled-function () #<bytecode 0x1072e71>) "Put auto-save files in this directory, if set.\nThe idea is to use a local directory so that auto-saving is faster.\nThis setting has precedence over `auto-save-file-name-transforms'." (choice (const :tag "Use default" nil) (directory :tag "Auto save directory name")) tramp-encoding-shell #f(compiled-function () #<bytecode 0x1217129>) "Use this program for encoding and decoding commands on the local host.\nThis shell is used to execute the encoding and decoding command on the\nlocal host, so if you want to use `~' in those commands, you should\nchoose a shell here which groks tilde expansion.  `/bin/sh' normally\ndoes not understand tilde expansion.\n\nFor encoding and decoding, commands like the following are executed:\n\n    /bin/sh -c COMMAND < INPUT > OUTPUT\n\nThis variable can be used to change the \"/bin/sh\" part.  See the\nvariable `tramp-encoding-command-switch' for the \"-c\" part.\n\nIf the shell must be forced to be interactive, see\n`tramp-encoding-command-interactive'.\n\nNote that this variable is not used for remote commands.  There are\nmechanisms in tramp.el which automatically determine the right shell to\nuse for the remote host." (file :must-match t) tramp-encoding-command-switch #f(compiled-function () #<bytecode 0x106de75>) "Use this switch together with `tramp-encoding-shell' for local commands.\nSee the variable `tramp-encoding-shell' for more information." string tramp-encoding-command-interactive #f(compiled-function () #<bytecode 0xeaeafd>) "Use this switch together with `tramp-encoding-shell' for interactive shells.\nSee the variable `tramp-encoding-shell' for more information." "24.1" (choice (const nil) string)] 12)
  byte-code("\300\301!\210\302\303\304\305\306DD\307\310\301\311\312\313\314&\011\210\302\315\304\305\316DD\317\310\301\313\320&\007\210\302\321\304\305\322DD\323\310\301\313\324&\007\210\302\325\304\305\326DD\327\310\301\311\330\313\331&\011\207" [require tramp custom-declare-variable tramp-inline-compress-start-size funcall function #f(compiled-function () #<bytecode 0x10f4681>) "The minimum size of compressing where inline transfer.\nWhen inline transfer, compress transferred data of file\nwhose size is this value or above (up to `tramp-copy-size-limit').\nIf it is nil, no compression at all will be applied." :group :version "26.3" :type (choice (const nil) integer) tramp-copy-size-limit #f(compiled-function () #<bytecode 0x10f3a81>) "The maximum file size where inline copying is preferred over an out-of-the-band copy.\nIf it is nil, out-of-the-band copy will be used without a check." (choice (const nil) integer) tramp-terminal-type #f(compiled-function () #<bytecode 0x1097881>) "Value of TERM environment variable for logging in to remote host.\nBecause Tramp wants to parse the output of the remote shell, it is easily\nconfused by ANSI color escape sequences and suchlike.  Often, shell init\nfiles conditionalize this setup based on the TERM environment variable." string tramp-histfile-override #f(compiled-function () #<bytecode 0x1020b49>) "When invoking a shell, override the HISTFILE with this value.\nWhen setting to a string, it redirects the shell history to that\nfile.  Be careful when setting to \"/dev/null\"; this might\nresult in undesired results when using \"bash\" as shell.\n\nThe value t unsets any setting of HISTFILE, and sets both\nHISTFILESIZE and HISTSIZE to 0.  If you set this variable to nil,\nhowever, the *override* is disabled, so the history will go to\nthe default storage location, e.g. \"$HOME/.sh_history\"." "25.2" (choice (const :tag "Do not override HISTFILE" nil) (const :tag "Unset HISTFILE" t) (string :tag "Redirect to a file"))] 10)
  byte-code("\300\301!\210\300\302!\210\300\303!\210\300\304!\210\300\305!\210\306\307\310\"\210\306\311\312\"\210\313\314\315\316!\317B\"\210\313\320\315\321!\317B\"\210\322\323\324\325\326\327\330\331\332\333&\011\210\334\335!\204H\0\336\335\337\"\210\324\207" [require ansi-color cl-lib comint json tramp-sh autoload comint-mode "comint" help-function-arglist "help-fns" add-to-list auto-mode-alist purecopy "\\.py[iw]?\\'" python-mode interpreter-mode-alist "python[0-9.]*" custom-declare-group python nil "Python Language's flying circus support for Emacs." :group languages :version "24.3" :link (emacs-commentary-link "python") fboundp prog-first-column defalias #f(compiled-function () #<bytecode 0x10ffb91>)] 10)
  set-auto-mode-0(python-mode nil)
  after-find-file(nil t)
  find-file-noselect-1(#<buffer> "~/devel/brave-browser/src/brave/script/" nil nil "~/devel/brave-browser/src/brave/script/" (51777401 64769))
  find-file-noselect("/home/francois/devel/brave-browser/src/brave/script/" nil nil)
  #f(compiled-function () (interactive nil) #<bytecode 0xe59051>)()
  ad-Advice-ido-find-file(#f(compiled-function () (interactive nil) #<bytecode 0xe59051>))
  apply(ad-Advice-ido-find-file #f(compiled-function () (interactive nil) #<bytecode 0xe59051>) nil)
  call-interactively(ido-find-file nil nil)

The error comes from line 581 of /usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el:

(when (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address) (add-to-list 'tramp-default-user-alist `("\\`gdrive\\'" nil ,(match-string 1 user-mail-address))) (add-to-list 'tramp-default-host-alist '("\\`gdrive\\'" nil (\, (match-string 2 user-mail-address)))))

Commenting that line makes the problem go away.

For reference, in emacs 27.1, that blurb looks like this:

     (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address)
   (add-to-list 'tramp-default-user-alist `("\\`gdrive\\'" nil ,
                                            (match-string 1 user-mail-address)))
   (add-to-list 'tramp-default-host-alist '("\\`gdrive\\'" nil (\,
                                                                (match-string 2 user-mail-address)

with the only difference being the use of (tramp--with-startup), a function which doesn't exist in emacs 26.3 apparently.

Given that the line references user-mail-address, I took a look at my ~/.emacs and found the following:

 '(user-full-name "Francois Marier")
 '(user-mail-address (getenv "EMAIL"))

Removing the user-mail-address line also makes the problem go away.

I ended up using this latter approach in order to avoid modifying upstream emacs code, at least until I discover a need for setting my email address correctly in emacs.

Removing an alias/domain from a Let's Encrypt certificate managed by certbot

I recently got an error during a certbot renewal:

Challenge failed for domain
Failed to renew certificate with error: Some challenges have failed.
The following renewals failed:
  /etc/letsencrypt/live/ (failure)
1 renew failure(s), 0 parse failure(s)

due to the fact that I had removed the DNS entry for

I tried to find a way to remove that name from the certificate before renewing it, but it seems like the only way to do it is to create a new certificate without that alternative name.

First, I looked for the domains included in the certificate:

$ certbot certificates
  Certificate Name:
    Serial Number: 31485424904a33fb2ab43ab174b4b146512
    Key Type: RSA
    Expiry Date: 2022-01-04 05:28:57+00:00 (VALID: 29 days)
    Certificate Path: /etc/letsencrypt/live/
    Private Key Path: /etc/letsencrypt/live/

Then, I deleted the existing certificate:

$ certbot delete

and finally created a new certificate with all other names except for the obsolete one:

$ certbot certonly -d -d --duplicate

Formatting an SD card for a Garmin device on Linux

Some Garmin devices may pretend that they can format an SD card into the format they expect, but in my experience, you can instead get stuck in a loop in their user interface and never get the SD card recognized.

Here's what worked for me:

  1. Plug the SD card into the Linux computer using a USB adapter.
  2. Find out the device name (e.g. /dev/sdc on my computer) using dmesg.
  3. Start fdisk /dev/sdc as root.
  4. Delete any partitions using the d command.
  5. Create a new DOS partition table using the o command.
  6. Create a new primary partition using the n command and accept all of the defaults.
  7. Set the type of that partition to W95 FAT32 (0b).
  8. Save everything using the w command.
  9. Format the newly-created partition with mkfs.vfat /dev/sdc1.

Now if I run fdisk -l /dev/sdc, I see the following:

Disk /dev/sdc: 14.84 GiB, 15931539456 bytes, 31116288 sectors
Disk model: Mass-Storage    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7f2ef0ad

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdc1        2048 31116287 31114240 14.8G  b W95 FAT32

and that appears to be recognized directly by my Garmin DriveSmart 61.