First of all, I am starting from a working Streamzap remote in Kodi. If VLC is the first application you are setting up with the Streamzap remote then you will probably need to read the above blog post first.
Once you know you have a working remote, put the following lircrc config into /home/pi/.lircrc:
begin
prog = vlc
button = KEY_PLAY
config = key-play
end
begin
prog = vlc
button = KEY_PAUSE
config = key-pause
end
begin
prog = vlc
button = KEY_STOP
config = key-stop
end
begin
prog = vlc
button = KEY_POWER
config = key-quit
end
begin
prog = vlc
button = KEY_NEXT
config = key-next
end
begin
prog = vlc
button = KEY_PREVIOUS
config = key-prev
end
begin
prog = vlc
button = KEY_RED
config = key-toggle-fullscreen
end
begin
prog = vlc
button = KEY_REWIND
config = key-slower
end
begin
prog = vlc
button = KEY_FORWARD
config = key-faster
end
begin
prog = vlc
button = KEY_VOLUMEDOWN
config = key-vol-down
end
begin
prog = vlc
button = KEY_VOLUMEUP
config = key-vol-up
end
begin
prog = vlc
button = KEY_BLUE
config = key-audio-track
end
begin
prog = vlc
button = KEY_MUTE
config = key-vol-mute
end
begin
prog = vlc
button = KEY_LEFT
config = key-nav-left
end
begin
prog = vlc
button = KEY_DOWN
config = key-nav-down
end
begin
prog = vlc
button = KEY_UP
config = key-nav-up
end
begin
prog = vlc
button = KEY_RIGHT
config = key-nav-right
end
begin
prog = vlc
button = KEY_MENU
config = key-nav-activate
end
begin
prog = vlc
button = KEY_GREEN
config = key-subtitle-track
end
and then after starting VLC:
- Open Tools | Preferences.
- Select All under Show Settings in the bottom left corner.
- Open Interface | Control Interfaces in the left side-bar.
- Enable Infrared remote control interface.
Now you should see lirc in the text box at the bottom of Control Interfaces and the following in your ~/.config/vlc/vlcrc:
[core]
control=lirc
If you're looking to customize the above key mapping, you can find the VLC key codes in the output of vlc -H --extended | grep -- --key-.
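For example, assuming your remote also emits a KEY_YELLOW button (hypothetical; check the exact button names your remote reports with the irw tool from the lirc package), an extra stanza mapping it to VLC's key-snapshot hotkey would look like this:

```
begin
prog = vlc
button = KEY_YELLOW
config = key-snapshot
end
```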
As part of debugging an upstream connection problem I've been seeing recently, I wanted to be able to monitor the logs from my Turris Omnia router. Here's how I configured it to send its logs to a server I already had on the local network.
Server setup
The first thing I did was to open up my server's rsyslog (Debian's default syslog server) to remote connections since it's going to be the destination host for the router's log messages.
I added the following to /etc/rsyslog.d/router.conf:
module(load="imtcp")
input(type="imtcp" port="514")

if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}
This is using the latest rsyslog configuration method: a handy scripting language called RainerScript. Severity level 5 maps to "notice", which consists of unusual non-error conditions, and 192.168.1.1 is of course the IP address of the router on the LAN side.
With this, I'm directing all router log messages to a separate file,
filtering out anything less important than severity 5.
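For reference, syslog severities run from 0 (emerg) to 7 (debug). A hypothetical variant of the snippet above that also splits errors (severity 3 and below) into their own file would look like this (the errors file name is my own invention):

```
if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 3 then {
        # errors (and worse) also get their own file
        action(type="omfile" file="/var/log/router-errors.log")
    }
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}
```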
In order for rsyslog to pick up this new configuration file, I restarted it:
systemctl restart rsyslog.service
and checked that it was running correctly (e.g. no syntax errors in the new config file) using:
systemctl status rsyslog.service
Since I added a new log file, I also set up log rotation for it by putting the following in /etc/logrotate.d/router:
/var/log/router.log {
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
In addition, since I use logcheck to monitor my server logs and email me errors, I had to add /var/log/router.log to /etc/logcheck/logcheck.logfiles.
Finally I opened the rsyslog port to the router in my server's firewall by adding the following to /etc/network/iptables.up.rules:
# Allow logs from the router
-A INPUT -s 192.168.1.1 -p tcp --dport 514 -j ACCEPT
and ran iptables-apply.
With all of this in place, it was time to get the router to send messages.
Router setup
As suggested on the Turris forum, I ssh'ed into my router and added this in /etc/syslog-ng.d/remote.conf:
destination d_loghost {
    network("192.168.1.200" time-zone("America/Vancouver"));
};

source dns {
    file("/var/log/resolver");
};

log {
    source(src);
    source(net);
    source(kernel);
    source(dns);
    destination(d_loghost);
};
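Note that this relies on the network() driver's defaults to reach the rsyslog listener configured above on 514/TCP. If your syslog-ng version defaults differently, the transport and port can be pinned explicitly:

```
destination d_loghost {
    network("192.168.1.200"
        transport("tcp")
        port(514)
        time-zone("America/Vancouver"));
};
```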
Setting the timezone to the same as my server was needed because the router messages were otherwise sent with UTC timestamps.
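The offset is easy to reproduce with GNU date by rendering the same instant (the Unix epoch, in this sketch) in both timezones, assuming tzdata is installed:

```shell
# The Unix epoch rendered in UTC...
TZ=UTC date -u -d @0 '+%H:%M'             # 00:00
# ...and the same instant in America/Vancouver (UTC-8 at that date)
TZ=America/Vancouver date -d @0 '+%H:%M'  # 16:00
```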
To ensure that the destination host always gets the same IP address (192.168.1.200), I went to the advanced DHCP configuration page and added a static lease for the server's MAC address so that it always gets assigned 192.168.1.200. If that wasn't already the server's IP address, you'll have to restart the server for this to take effect.
Finally, I restarted the syslog-ng daemon on the router to pick up the new config file:
/etc/init.d/syslog-ng restart
Testing
In order to test this configuration, I opened three terminal windows:
- tail -f /var/log/syslog on the server
- tail -f /var/log/router.log on the server
- tail -f /var/log/messages on the router
I immediately started to see messages from the router in the third window, and some of these (not all, because of my severity-5 filter) were flowing to the second window as well. Also important is that none of the messages made it to the first window; otherwise log messages from the router would be mixed in with the server's own logs. That's the purpose of the stop directive in /etc/rsyslog.d/router.conf.
To force a log message to be emitted by the router, simply ssh into it and issue the following command:
logger Test
It should show up in the second and third windows immediately if you've got everything set up correctly.
Timezone problems
If I do the following on my router:
/etc/init.d/syslog-ng restart
logger TestA
I see the following in /var/log/messages:
Aug 14 20:39:35 hostname syslog-ng[9860]: syslog-ng shutting down; version='3.37.1'
Aug 14 20:39:36 hostname syslog-ng[10024]: syslog-ng starting up; version='3.37.1'
Aug 15 03:39:49 hostname root: TestA
The correct timezone is the one in the first two lines. Messages from other programs, like logger, are displayed using an incorrect timezone.
Thanks to a very helpful syslog-ng mailing list thread, I found that this is actually an upstream OpenWRT bug. My favourite work-around is to tell syslog-ng to simply ignore the timestamp provided by the application and to use the time of reception (of the log message) instead. To do this, simply change the following in /etc/syslog-ng.conf:
source src {
    internal();
    unix-dgram("/dev/log");
};
to:
source src {
    internal();
    unix-dgram("/dev/log", keep-timestamp(no));
};
Unfortunately, I wasn't able to fix it in a way that would survive a syslog-ng package update, but since this is supposedly fixed in Turris 6.0, it shouldn't be a problem for much longer.
I had some problems getting the Gandi certbot plugin to work in Debian bullseye since the documentation appears to be outdated.
When running certbot renew --dry-run, I saw the following error message:
Plugin legacy name certbot-plugin-gandi:dns may be removed in a future version. Please use dns instead.
Thanks to an issue in another DNS plugin, I was able to easily update my configuration to the new naming convention.
Setup
Get an API key from Gandi and then put it in /etc/letsencrypt/gandi.ini:
# live dns v5 api key
dns_api_key=ABCDEF
and then make it readable only by root:
chown root:root /etc/letsencrypt/gandi.ini
chmod 600 /etc/letsencrypt/gandi.ini
Then install the required package:
apt install python3-certbot-dns-gandi
Getting an initial certificate
To get an initial certificate using the Gandi plugin, simply use the following command:
certbot certonly -a dns --dns-credentials /etc/letsencrypt/gandi.ini -d example.fmarier.org
Setting up automatic renewal
If you have automatic renewals enabled, you'll want to ensure your /etc/letsencrypt/renewal/example.fmarier.org.conf file looks like this:
# renew_before_expiry = 30 days
version = 1.12.0
archive_dir = /etc/letsencrypt/archive/example.fmarier.org
cert = /etc/letsencrypt/live/example.fmarier.org/cert.pem
privkey = /etc/letsencrypt/live/example.fmarier.org/privkey.pem
chain = /etc/letsencrypt/live/example.fmarier.org/chain.pem
fullchain = /etc/letsencrypt/live/example.fmarier.org/fullchain.pem
[renewalparams]
account = abcdef
authenticator = dns
server = https://acme-v02.api.letsencrypt.org/directory
dns_credentials = /etc/letsencrypt/gandi.ini
CrashPlan recently updated itself to version 10 on my Pop!_OS laptop and stopped backing anything up.
When trying to start the client, I was faced with this error message:
Code42 cannot connect to its backend service.
Digging through log files
In /usr/local/crashplan/log/service.log.0, I found the reason why the service didn't start:
[05.18.22 07:40:05.756 ERROR main com.backup42.service.CPService] Error starting up, java.lang.IllegalStateException: Failed to start authorized services.
STACKTRACE:: java.lang.IllegalStateException: Failed to start authorized services.
at com.backup42.service.ClientServiceManager.authorize(ClientServiceManager.java:552)
at com.backup42.service.CPService.startServices(CPService.java:2467)
at com.backup42.service.CPService.start(CPService.java:562)
at com.backup42.service.CPService.main(CPService.java:1574)
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.UnsatisfiedLinkError: Unable to load library 'uaw':
libuaw.so: cannot open shared object file: No such file or directory
libuaw.so: cannot open shared object file: No such file or directory
Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(UserActivityWatcherServiceImpl.java:67)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.class(UserActivityWatcherServiceImpl.java:23)
while locating com.code42.service.useractivity.UserActivityWatcherServiceImpl
at com.code42.service.AbstractAuthorizedModule.addServiceWithoutBinding(AbstractAuthorizedModule.java:77)
while locating com.code42.service.IAuthorizedService annotated with @com.google.inject.internal.Element(setName=,uniqueId=34, type=MULTIBINDER, keyType=)
while locating java.util.Set<com.code42.service.IAuthorizedService>
1 error
at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1097)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1126)
at com.backup42.service.ClientServiceManager.getServices(ClientServiceManager.java:679)
at com.backup42.service.ClientServiceManager.authorize(ClientServiceManager.java:513)
... 3 more
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'uaw':
libuaw.so: cannot open shared object file: No such file or directory
libuaw.so: cannot open shared object file: No such file or directory
Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:301)
at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:461)
at com.sun.jna.Library$Handler.<init>(Library.java:192)
at com.sun.jna.Native.load(Native.java:596)
at com.sun.jna.Native.load(Native.java:570)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(UserActivityWatcherServiceImpl.java:72)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl$$FastClassByGuice$$4bcc96f8.newInstance(<generated>)
at com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:114)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:306)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.code42.service.AuthorizedScope$1.get(AuthorizedScope.java:38)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:62)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.code42.service.AuthorizedScope$1.get(AuthorizedScope.java:38)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:198)
at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:151)
at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094)
... 6 more
Suppressed: java.lang.UnsatisfiedLinkError: libuaw.so: cannot open shared object file: No such file or directory
at com.sun.jna.Native.open(Native Method)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:191)
... 28 more
Suppressed: java.lang.UnsatisfiedLinkError: libuaw.so: cannot open shared object file: No such file or directory
at com.sun.jna.Native.open(Native Method)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:204)
... 28 more
Suppressed: java.io.IOException: Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.sun.jna.Native.extractFromResourcePath(Native.java:1119)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:275)
... 28 more
[05.18.22 07:40:05.756 INFO main 42.service.history.HistoryLogger] HISTORY:: Code42 stopped, version 10.0.0
[05.18.22 07:40:05.756 INFO main com.backup42.service.CPService] ***** STOPPING *****
[05.18.22 07:40:05.757 INFO Thread-0 com.backup42.service.CPService] ShutdownHook...calling cleanup
[05.18.22 07:40:05.759 INFO STOPPING com.backup42.service.CPService] SHUTDOWN:: Stopping service...
This suggests that a new library dependency (uaw) didn't get installed during the last upgrade.
Looking at the upgrade log (/usr/local/crashplan/log/upgrade..log), I found that it detected my operating system as "pop 20":
Fri May 13 07:39:51 PDT 2022: Info : Resolve Native Libraries for pop 20...
Fri May 13 07:39:51 PDT 2022: Info : Keep common libs
Fri May 13 07:39:51 PDT 2022: Info : Keep pop 20 libs
I unpacked the official installer (login required):
$ tar zxf CrashPlanSmb_10.0.0_15252000061000_303_Linux.tgz
$ cd code42-install
$ gzip -dc CrashPlanSmb_10.0.0.cpi | cpio -i
and found that libuaw.so is only shipped for 4 supported platforms (rhel7, rhel8, ubuntu18 and ubuntu20):
$ find nlib/
nlib/
nlib/common
nlib/common/libfreeblpriv3.chk
nlib/common/libsoftokn3.chk
nlib/common/libsmime3.so
nlib/common/libnss3.so
nlib/common/libplc4.so
nlib/common/libssl3.so
nlib/common/libsoftokn3.so
nlib/common/libnssdbm3.so
nlib/common/libjss4.so
nlib/common/libleveldb.so
nlib/common/libfreeblpriv3.so
nlib/common/libfreebl3.chk
nlib/common/libplds4.so
nlib/common/libnssutil3.so
nlib/common/libnspr4.so
nlib/common/libfreebl3.so
nlib/common/libc42core.so
nlib/common/libc42archive64.so
nlib/common/libnssdbm3.chk
nlib/rhel7
nlib/rhel7/libuaw.so
nlib/rhel8
nlib/rhel8/libuaw.so
nlib/ubuntu18
nlib/ubuntu18/libuaw.so
nlib/ubuntu20
nlib/ubuntu20/libuaw.so
Fixing the installation script
Others have fixed this problem by copying the files manually but since Pop!_OS is based on Ubuntu, I decided to fix this by forcing the OS to be detected as "ubuntu" in the installer.
I simply edited install.sh like this:
--- install.sh.orig 2022-05-18 16:47:52.176199965 -0700
+++ install.sh 2022-05-18 16:57:26.231723044 -0700
@@ -15,7 +15,7 @@
readonly IS_ROOT=$([[ $(id -u) -eq 0 ]] && echo true || echo false)
readonly REQ_CMDS="chmod chown cp cpio cut grep gzip hash id ls mkdir mv sed"
readonly APP_VERSION_FILE="c42.version.properties"
-readonly OS_NAME=$(grep "^ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | tr '[:upper:]' '[:lower:]')
+readonly OS_NAME=ubuntu
readonly OS_VERSION=$(grep "^VERSION_ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | cut -d . -f1)
SCRIPT_DIR="${0:0:${#0} - ${#SCRIPT_NAME}}"
and then ran that install script as root again to upgrade my existing installation.
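If you'd rather not edit the file by hand, the same change can be scripted with sed. This sketch demonstrates the substitution on a temporary stand-in for install.sh rather than the real installer:

```shell
# Stand-in for the OS_NAME line in the unpacked install.sh
f=$(mktemp)
echo 'readonly OS_NAME=$(grep "^ID=" /etc/os-release | cut -d = -f 2)' > "$f"

# Force the detected OS to ubuntu, as in the patch above
sed -i 's/^readonly OS_NAME=.*/readonly OS_NAME=ubuntu/' "$f"

cat "$f"   # readonly OS_NAME=ubuntu
rm -f "$f"
```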
I've been very happy with my Turris Omnia router and decided recently to take advantage of the fact that it is easily upgradable to replace the original radios with Wave 2 models.
I didn't go for a Wi-Fi 6-capable card because I don't have any devices that support it at this point. There is also an official Wi-Fi 6 upgrade kit in the works and so I might just go with that later.
Wi-Fi card selection
After seeing a report that someone was already using these cards on the Omnia, I decided to look for the following:
Compex themselves don't appear to sell to consumers, but I found an American store that would sell them to me and ship to Canada:
Each card uses 4 antennas, which means that I would need an additional diplexer, an extra pigtail to SMA-RP connector, and two more antennas to wire everything up. Thankfully, the Omnia already comes with two extra holes drilled into the back of the router (covered by plastic caps) and so there is no need for drilling the case.
I put the two cards in the middle and right-most slots (they don't seem to go in the left-most slot because of the SIM card holder being in the way) without worrying about antennas just yet.
Driver installation
I made sure that the chipsets were supported in OpenWRT 19.07 (LTS 4.14 kernel) and found that support for the Qualcomm QCA9984 chipset was added in the ath10k driver as of kernel 4.8, but apparently only for two cards.
I installed the following proprietary firmware package via the advanced configuration interface:
ath10k-firmware-qca9984
and that automatically pulled in the free ath10k driver. After rebooting, I was able to see one of the two cards in the ReForis admin page and configure it.
Note that there is an alternative firmware available in OpenWRT as well (look for packages ending in -ct), but since both firmware/driver combinations gave me the same initial results, I decided to go with the defaults.
Problems
The first problem I ran into is that I could only see one of the two cards in the output of lspci (after sshing into the router). Looking for ath or wlan in the dmesg output, it didn't look like the second card was being recognized at all.
Neither the 2.4 GHz nor the 5 GHz Wave 2 card worked in the right-most slot, but either of them works fine when moved to the middle slot. The stock cards work just fine in the right-most slot. I have no explanation for this.
The second problem was that I realized that the antenna holes are not all the same. The two on each end are fully round and can accommodate the diplexers which come with a round SMA-RP connector.
On the other hand, the three middle ones have a notch at the top which can only accommodate the single antenna connectors, which have a flat bit on one side. I would have to file one of the holes in order to add a third diplexer to my setup.
Final working configuration
Since I didn't see a way to use both new cards at once, I ended up with a different configuration that would nevertheless still upgrade both my 2.4 GHz and 5 GHz Wi-Fi.
I moved the original dual-band card to the right-most slot and switched it to the 2.4 GHz band since it's more powerful (both in dB and in throughput) than the original half-length card.
Then I put the WLE1216V5-20 into the middle slot.
The only extra things I had to buy were two more pigtail to SMA-RP connectors and two more antennas.
Here's what the final product looks like:
After upgrading my MythTV machine to Debian Bullseye and MythTV 31, my Streamzap remote control stopped working correctly: the up and down buttons were working, but the OK button wasn't.
Here's the complete solution that made it work with the built-in kernel support (i.e. without LIRC).
Button re-mapping
Since some of the buttons were working, but not others, I figured that the buttons were probably not mapped to the right keys.
Inspired by these old v4l-utils-based instructions, I made my own custom keymap by copying the original keymap:
cp /lib/udev/rc_keymaps/streamzap.toml /etc/rc_keymaps/
and then modifying it to adapt it to what MythTV needs. This is what I ended up with:
[[protocols]]
name = "streamzap"
protocol = "rc-5-sz"
[protocols.scancodes]
0x28c0 = "KEY_0"
0x28c1 = "KEY_1"
0x28c2 = "KEY_2"
0x28c3 = "KEY_3"
0x28c4 = "KEY_4"
0x28c5 = "KEY_5"
0x28c6 = "KEY_6"
0x28c7 = "KEY_7"
0x28c8 = "KEY_8"
0x28c9 = "KEY_9"
0x28ca = "KEY_ESC"
0x28cb = "KEY_MUTE"
0x28cc = "KEY_UP"
0x28cd = "KEY_RIGHTBRACE"
0x28ce = "KEY_DOWN"
0x28cf = "KEY_LEFTBRACE"
0x28d0 = "KEY_UP"
0x28d1 = "KEY_LEFT"
0x28d2 = "KEY_ENTER"
0x28d3 = "KEY_RIGHT"
0x28d4 = "KEY_DOWN"
0x28d5 = "KEY_M"
0x28d6 = "KEY_ESC"
0x28d7 = "KEY_L"
0x28d8 = "KEY_P"
0x28d9 = "KEY_ESC"
0x28da = "KEY_BACK"
0x28db = "KEY_FORWARD"
0x28dc = "KEY_R"
0x28dd = "KEY_PAGEUP"
0x28de = "KEY_PAGEDOWN"
0x28e0 = "KEY_D"
0x28e1 = "KEY_I"
0x28e2 = "KEY_END"
0x28e3 = "KEY_A"
Note that the keycodes can be found in the kernel source code.
With my own keymap in place at /etc/rc_keymaps/streamzap.toml, I changed /etc/rc_maps.cfg to have the kernel driver automatically use it:
--- a/rc_maps.cfg
+++ b/rc_maps.cfg
@@ -126,7 +126,7 @@
* rc-real-audio-220-32-keys real_audio_220_32_keys.toml
* rc-reddo reddo.toml
* rc-snapstream-firefly snapstream_firefly.toml
-* rc-streamzap streamzap.toml
+* rc-streamzap /etc/rc_keymaps/streamzap.toml
* rc-su3000 su3000.toml
* rc-tango tango.toml
* rc-tanix-tx3mini tanix_tx3mini.toml
Button repeat delay
To adjust the delay before button presses are repeated, I followed these
old out-of-date
instructions
on the MythTV wiki and put the following in
/etc/udev/rules.d/streamzap.rules
:
ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -s rc0 -D 1000 -P 250"
Note that the -d option has been replaced with -s in the latest version of ir-keytable.
To check that the Streamzap is indeed detected as rc0 on your system, use this command:
$ ir-keytable
Found /sys/class/rc/rc0/ with:
Name: Streamzap PC Remote Infrared Receiver (0e9c:0000)
Driver: streamzap
Default keymap: rc-streamzap
...
Make sure you don't pass the -c option to ir-keytable or else it will clear the keymap set via /etc/rc_maps.cfg, removing all of the button mappings.
The filter rules preventing websites from portscanning the local machine have recently been tightened in Brave. It turns out there are a surprising number of ways to refer to the local machine in Chromium.
localhost and friends
127.0.0.1 is the first address that comes to mind when thinking of the local machine. localhost is typically aliased to that address (via /etc/hosts), though that convention is not mandatory. The IPv6 equivalent is [::1].
- http://localhost/
- http://foo.localhost/
- http://127.0.0.1/
- http://0177.0000.0000.0001/ (127.0.0.1 in octal)
- http://0x7F000001/ (127.0.0.1 in hex)
- http://2130706433/ (127.0.0.1 in decimal)
- http://[::ffff:127.0.0.1]/ (IPv4-mapped IPv6 address)
- http://[::ffff:7f00:1]/ (alternate form of IPv4-mapped IPv6 address)
- http://[0000:0000:0000:0000:0000:ffff:7f00:0001]/ (fully-expanded IPv4-mapped IPv6 address)
- http://[::1]/
- http://[0000:0000:0000:0000:0000:0000:0000:0001]/ (fully-expanded form of [::1])
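A quick way to convince yourself that the numeric encodings above are all the same address is shell arithmetic, which accepts hex and octal literals directly:

```shell
# 127.0.0.1 packed into one 32-bit integer is the decimal URL form
echo $(( (127 << 24) | 1 ))               # 2130706433
# ...which is the hex URL form
printf '0x%X\n' $(( (127 << 24) | 1 ))    # 0x7F000001
# and 0177 (the first octet of the octal form) is 127
printf '%d\n' 0177                        # 127
```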
0.0.0.0
0.0.0.0 is not a routable address, but that's what's used to tell a service to bind (listen) on all network interfaces. In Chromium, it resolves to the local machine, just like 127.0.0.1. The IPv6 equivalent is [::].
- http://0.0.0.0/
- http://0000.0000.0000.0000/ (0.0.0.0 in octal)
- http://0x00000000/ (0.0.0.0 in hex)
- http://0/ (0.0.0.0 in decimal)
- http://[::ffff:0.0.0.0]/ (IPv4-mapped IPv6 address)
- http://[::ffff:0000:0000]/ (alternate form of IPv4-mapped IPv6 address)
- http://[0000:0000:0000:0000:0000:ffff:0000:0000]/ (fully-expanded IPv4-mapped IPv6 address)
- http://[::]/
- http://[0000:0000:0000:0000:0000:0000:0000:0000]/ (fully-expanded form of [::])
DNS-based
Of course, another way to encode these numerical URLs is to create A
/
AAAA
records for them under a domain you control. I've done this under my
personal domain:
- http://t127.fmarier.org/ (127.0.0.1)
- http://t1aaaa.fmarier.org/ ([::1])
- http://t0.fmarier.org/ (0.0.0.0)
- http://t0aaaa.fmarier.org/ ([::])
- http://t127aaaam.fmarier.org/ ([::ffff:7f00:1])
- http://t0aaaam.fmarier.org/ ([::ffff:0000:0000])
For these to work, you'll need to:
- Make sure you can connect to IPv6-only hosts, for example by connecting to an appropriate VPN if needed.
- Put nameserver 8.8.8.8 in /etc/resolv.conf since you need a DNS server that will not filter these localhost domains. (For example, Unbound will do that if you use private-address: 127.0.0.0/8 in the server config.)
- Go into chrome://settings/security and disable Always use secure connections to make sure the OS resolver is used.
- Turn off the chrome://flags/#block-insecure-private-network-requests flag since that security feature (CORS-RFC1918) is designed to protect against these kinds of requests.
127.0.0.0/8 subnet
Technically, the entire 127.0.0.0/8 subnet can be used to refer to the local machine. However, it's not a reliable way to portscan a machine from a web browser because it only catches the services that listen on all interfaces (i.e. 0.0.0.0).
For example, on my machine, if I nmap 127.0.0.1, I get:
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1
25/tcp open smtp Postfix smtpd
whereas if I nmap 127.0.1.25, I only get:
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1
That's because I've got the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
which I assume is explicitly binding 127.0.0.1.
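For comparison, inet_interfaces also accepts explicit addresses instead of the loopback-only shortcut, so a service can be pinned to a single loopback address directly (a sketch; adjust to your setup):

```
# Listen only on this specific address instead of the loopback-only shortcut
inet_interfaces = 127.0.0.1
```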
Nevertheless, it would be good to get that fixed in Brave too.
Ever since upgrading to Ubuntu 20.04 (focal) I started getting the following error in emacs when opening Python files (in the built-in python-mode):
File mode specification error: (wrong-type-argument stringp nil)
I used M-x toggle-debug-on-error in order to see where the error originated and saw the following immediately after opening a Python file:
Debugger entered--Lisp error: (wrong-type-argument stringp nil)
string-match("\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" nil)
(if (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address) (progn (add-to-list (quote tramp-default-user-alist) (list "\\`gdrive\\'" nil (match-string 1 user-mail-address))) (add-to-list (quote tramp-default-host-alist) (quote ("\\`gdrive\\'" nil (\, (match-string 2 user-mail-address)))))))
eval-buffer(#<buffer *load*> nil "/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" nil t) ; Reading at buffer position 25605
load-with-code-conversion("/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" "/usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el" nil t)
require(tramp-loaddefs)
byte-code("\300\301!\210\300\302!\210\300\303!\210\300\304!\210\300\305!\210\300\306!\210\300\307!\210\300\310!\210\300\311!\210\300\312!\210\300\313!\210\300\314!\207" [require auth-source advice cl-lib custom format-spec parse-time password-cache shell timer ucs-normalize trampver tramp-loaddefs] 2)
require(tramp-compat)
byte-code("\300\301!\210\300\302!\210\303\304\305\306\307\310\307\311\312\313\314\315&\013\210\316\317\320\321\322DD\323\307\304\324\325&\007\210\316\326\320\321\327DD\330\307\304\324\331&\007\210\316\332\320\321\333DD\334\307\304\324\335&\007\210\316\336\320\321\337DD\340\307\304\324\341&\007\210\316\342\320\321\343DD\344\307\304\324\345&\007\210\316\346\320\321\347DD\350\307\304\324\351&\007\210\316\352\320\321\353DD\354\314\355\307\304\324\356&\011\207" [require tramp-compat cl-lib custom-declare-group tramp nil "Edit remote files with a combination of ssh, scp, etc." :group files comm :link (custom-manual "(tramp)Top") :version "22.1" custom-declare-variable tramp-mode funcall function #f(compiled-function () #<bytecode 0x13896d5>) "Whether Tramp is enabled.\nIf it is set to nil, all remote file names are used literally." :type boolean tramp-verbose #f(compiled-function () #<bytecode 0x1204c1d>) "Verbosity level for Tramp messages.\nAny level x includes messages for all levels 1 .. x-1. The levels are\n\n 0 silent (no tramp messages at all)\n 1 errors\n 2 warnings\n 3 connection to remote hosts (default level)\n 4 activities\n 5 internal\n 6 sent and received strings\n 7 file caching\n 8 connection properties\n 9 test commands\n10 traces (huge)." integer tramp-backup-directory-alist #f(compiled-function () #<bytecode 0x10f7791>) "Alist of filename patterns and backup directory names.\nEach element looks like (REGEXP . DIRECTORY), with the same meaning like\nin `backup-directory-alist'. If a Tramp file is backed up, and DIRECTORY\nis a local file name, the backup directory is prepended with Tramp file\nname prefix (method, user, host) of file.\n\n(setq tramp-backup-directory-alist backup-directory-alist)\n\ngives the same backup policy for Tramp files on their hosts like the\npolicy for local files." 
(repeat (cons (regexp :tag "Regexp matching filename") (directory :tag "Backup directory name"))) tramp-auto-save-directory #f(compiled-function () #<bytecode 0x1072e71>) "Put auto-save files in this directory, if set.\nThe idea is to use a local directory so that auto-saving is faster.\nThis setting has precedence over `auto-save-file-name-transforms'." (choice (const :tag "Use default" nil) (directory :tag "Auto save directory name")) tramp-encoding-shell #f(compiled-function () #<bytecode 0x1217129>) "Use this program for encoding and decoding commands on the local host.\nThis shell is used to execute the encoding and decoding command on the\nlocal host, so if you want to use `~' in those commands, you should\nchoose a shell here which groks tilde expansion. `/bin/sh' normally\ndoes not understand tilde expansion.\n\nFor encoding and decoding, commands like the following are executed:\n\n /bin/sh -c COMMAND < INPUT > OUTPUT\n\nThis variable can be used to change the \"/bin/sh\" part. See the\nvariable `tramp-encoding-command-switch' for the \"-c\" part.\n\nIf the shell must be forced to be interactive, see\n`tramp-encoding-command-interactive'.\n\nNote that this variable is not used for remote commands. There are\nmechanisms in tramp.el which automatically determine the right shell to\nuse for the remote host." (file :must-match t) tramp-encoding-command-switch #f(compiled-function () #<bytecode 0x106de75>) "Use this switch together with `tramp-encoding-shell' for local commands.\nSee the variable `tramp-encoding-shell' for more information." string tramp-encoding-command-interactive #f(compiled-function () #<bytecode 0xeaeafd>) "Use this switch together with `tramp-encoding-shell' for interactive shells.\nSee the variable `tramp-encoding-shell' for more information." "24.1" (choice (const nil) string)] 12)
require(tramp)
byte-code("\300\301!\210\302\303\304\305\306DD\307\310\301\311\312\313\314&\011\210\302\315\304\305\316DD\317\310\301\313\320&\007\210\302\321\304\305\322DD\323\310\301\313\324&\007\210\302\325\304\305\326DD\327\310\301\311\330\313\331&\011\207" [require tramp custom-declare-variable tramp-inline-compress-start-size funcall function #f(compiled-function () #<bytecode 0x10f4681>) "The minimum size of compressing where inline transfer.\nWhen inline transfer, compress transferred data of file\nwhose size is this value or above (up to `tramp-copy-size-limit').\nIf it is nil, no compression at all will be applied." :group :version "26.3" :type (choice (const nil) integer) tramp-copy-size-limit #f(compiled-function () #<bytecode 0x10f3a81>) "The maximum file size where inline copying is preferred over an out-of-the-band copy.\nIf it is nil, out-of-the-band copy will be used without a check." (choice (const nil) integer) tramp-terminal-type #f(compiled-function () #<bytecode 0x1097881>) "Value of TERM environment variable for logging in to remote host.\nBecause Tramp wants to parse the output of the remote shell, it is easily\nconfused by ANSI color escape sequences and suchlike. Often, shell init\nfiles conditionalize this setup based on the TERM environment variable." string tramp-histfile-override #f(compiled-function () #<bytecode 0x1020b49>) "When invoking a shell, override the HISTFILE with this value.\nWhen setting to a string, it redirects the shell history to that\nfile. Be careful when setting to \"/dev/null\"; this might\nresult in undesired results when using \"bash\" as shell.\n\nThe value t unsets any setting of HISTFILE, and sets both\nHISTFILESIZE and HISTSIZE to 0. If you set this variable to nil,\nhowever, the *override* is disabled, so the history will go to\nthe default storage location, e.g. \"$HOME/.sh_history\"." "25.2" (choice (const :tag "Do not override HISTFILE" nil) (const :tag "Unset HISTFILE" t) (string :tag "Redirect to a file"))] 10)
require(tramp-sh)
byte-code("\300\301!\210\300\302!\210\300\303!\210\300\304!\210\300\305!\210\306\307\310\"\210\306\311\312\"\210\313\314\315\316!\317B\"\210\313\320\315\321!\317B\"\210\322\323\324\325\326\327\330\331\332\333&\011\210\334\335!\204H\0\336\335\337\"\210\324\207" [require ansi-color cl-lib comint json tramp-sh autoload comint-mode "comint" help-function-arglist "help-fns" add-to-list auto-mode-alist purecopy "\\.py[iw]?\\'" python-mode interpreter-mode-alist "python[0-9.]*" custom-declare-group python nil "Python Language's flying circus support for Emacs." :group languages :version "24.3" :link (emacs-commentary-link "python") fboundp prog-first-column defalias #f(compiled-function () #<bytecode 0x10ffb91>)] 10)
python-mode()
set-auto-mode-0(python-mode nil)
set-auto-mode()
normal-mode(t)
after-find-file(nil t)
find-file-noselect-1(#<buffer generate_licenses.py> "~/devel/brave-browser/src/brave/script/generate_licenses.py" nil nil "~/devel/brave-browser/src/brave/script/generate_licenses.py" (51777401 64769))
find-file-noselect("/home/francois/devel/brave-browser/src/brave/script/generate_licenses.py" nil nil)
ido-file-internal(raise-frame)
#f(compiled-function () (interactive nil) #<bytecode 0xe59051>)()
ad-Advice-ido-find-file(#f(compiled-function () (interactive nil) #<bytecode 0xe59051>))
apply(ad-Advice-ido-find-file #f(compiled-function () (interactive nil) #<bytecode 0xe59051>) nil)
ido-find-file()
funcall-interactively(ido-find-file)
call-interactively(ido-find-file nil nil)
command-execute(ido-find-file)
The error comes from line 581 of /usr/share/emacs/26.3/lisp/net/tramp-loaddefs.el:
(when (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address) (add-to-list 'tramp-default-user-alist `("\\`gdrive\\'" nil ,(match-string 1 user-mail-address))) (add-to-list 'tramp-default-host-alist '("\\`gdrive\\'" nil (\, (match-string 2 user-mail-address)))))
Commenting that line makes the problem go away.
For reference, in emacs 27.1, that blurb looks like this:
(tramp--with-startup
 (when (string-match "\\(.+\\)@\\(\\(?:gmail\\|googlemail\\)\\.com\\)" user-mail-address)
   (add-to-list 'tramp-default-user-alist
                `("\\`gdrive\\'" nil ,(match-string 1 user-mail-address)))
   (add-to-list 'tramp-default-host-alist
                '("\\`gdrive\\'" nil (\, (match-string 2 user-mail-address))))))
with the only difference being the use of (tramp--with-startup), a macro which apparently doesn't exist in emacs 26.3.
Given that the line references user-mail-address, I took a look at my ~/.emacs and found the following:
(custom-set-variables
'(user-full-name "Francois Marier")
'(user-mail-address (getenv "EMAIL"))
)
Removing the (user-mail-address) line also makes the problem go away.
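My hypothesis for the underlying failure (an assumption, not something I verified in the tramp source) is that string-match signals an error because user-mail-address ends up nil whenever the EMAIL environment variable is unset. That's easy to check from the same shell that launches emacs:

```shell
# If EMAIL is unset, (getenv "EMAIL") returns nil inside emacs, and
# string-match on a nil user-mail-address then fails with a
# wrong-type-argument error.
if [ -z "${EMAIL:-}" ]; then
    echo "EMAIL is unset: (getenv \"EMAIL\") will return nil"
fi
```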
I ended up using this latter approach in order to avoid modifying upstream emacs code, at least until I discover a need for setting my email address correctly in emacs.
I recently got an error during a certbot renewal:
Challenge failed for domain echo.fmarier.org
Failed to renew certificate jabber-gw.fmarier.org with error: Some challenges have failed.
The following renewals failed:
/etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem (failure)
1 renew failure(s), 0 parse failure(s)
due to the fact that I had removed the DNS entry for echo.fmarier.org.
I tried to find a way to remove that name from the certificate before renewing it, but it seems like the only way to do it is to create a new certificate without that alternative name.
First, I looked for the domains included in the certificate:
$ certbot certificates
...
Certificate Name: jabber-gw.fmarier.org
Serial Number: 31485424904a33fb2ab43ab174b4b146512
Key Type: RSA
Domains: jabber-gw.fmarier.org echo.fmarier.org fmarier.org
Expiry Date: 2022-01-04 05:28:57+00:00 (VALID: 29 days)
Certificate Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem
Private Key Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem
Then I deleted the existing certificate:
$ certbot delete jabber-gw.fmarier.org
and finally created a new certificate with all other names except for the obsolete one:
$ certbot certonly -d jabber-gw.fmarier.org -d fmarier.org --duplicate
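To double-check the result, the names actually included in the new certificate can be listed with openssl (the path is the one reported by certbot certificates above; the -ext option needs openssl 1.1.1 or later):

```shell
# Print the subjectAltName entries of the renewed certificate.
CERT=/etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem
openssl x509 -noout -ext subjectAltName -in "$CERT"
```

The obsolete echo.fmarier.org entry should no longer appear in the output.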
Some Garmin devices may pretend that they can format an SD card into the format they expect, but in my experience, you can instead get stuck in a loop in their user interface and never get the SD card recognized.
Here's what worked for me:
- Plug the SD card into the Linux computer using a USB adapter.
- Find out the device name (e.g. /dev/sdc on my computer) using dmesg.
- Start fdisk /dev/sdc as root.
- Delete any partitions using the d command.
- Create a new DOS partition table using the o command.
- Create a new primary partition using the n command and accept all of the defaults.
- Set the type of that partition to W95 FAT32 (0b).
- Save everything using the w command.
- Format the newly-created partition with mkfs.vfat /dev/sdc1.
Now if I run fdisk -l /dev/sdc, I see the following:
Disk /dev/sdc: 14.84 GiB, 15931539456 bytes, 31116288 sectors
Disk model: Mass-Storage
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7f2ef0ad
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 31116287 31114240 14.8G b W95 FAT32
and that appears to be recognized directly by my Garmin DriveSmart 61.