I was recently hoping to replace an aging proprietary router (upgraded to a Gargoyle FOSS firmware). After rejecting a popular brand with a disturbing GPL violation habit, I settled on the Turris Omnia router, built on free software. Overall, I was pretty satisfied with the fact that it is free and comes with automatic updates, but I noticed a problem with the WiFi. Specifically, the 5 GHz access point was okay but the 2.4 GHz was awful.
False lead
I initially thought that the 2.4 GHz radio wasn't working, but then I realized that putting my phone next to the router would allow it to connect and exchange data at a slow-but-steady rate. If I moved the phone more than 3-4 meters away though, it would disconnect for lack of signal. To be frank, the wireless performance was much worse than my original router, even though the wired performance was, as expected, amazing:
I looked on the official support forums and found this intriguing thread about interference between USB3 and 2.4 GHz radios. This sounded a lot like what I was experiencing (working radio but terrible signal/interference) and so I decided to see if I could move the radios around inside the unit, as suggested by the poster.
After opening the case however, I noticed that the radios were already laid out in the optimal way:
and that USB3 interference wasn't going to be the reason for my troubles.
Real problem
So I took a good look at the wiring and found that while the larger radio (2.4 / 5 GHz dual-bander) was connected to all three antennas, the smaller radio (2.4 GHz only) was connected to only 2 of the 3 antennas:
To make it possible for antennas 1 and 3 to carry the signal from both radios, a duplexer was inserted between the radios and the antenna:
On one side is the 2.4 GHz antenna port and on the other side is the 5 GHz port.
Looking at the wiring though, it became clear that my 2.4 GHz radio was connected to the 5 GHz ports of the two duplexers and the 5 GHz radio was connected to the 2.4 GHz ports of the duplexers. This makes sense considering that I had okay 5 GHz performance (with one of the three chains connected to the right filter) and abysmal 2.4 GHz performance (with neither of the two chains connected to the right filter).
Solution
Swapping the antenna connectors around completely fixed the problem. With the 2.4 GHz radio connected to the 2.4 side of the duplexer and the dual-bander connected to the 5 GHz side, I was able to get the performance I would expect from such a high-quality router.
Interestingly enough, I found the solution to this problem the same weekend as I passed my advanced amateur radio license exam. I guess that was a good way to put the course material into practice!
Adding third-party embedded widgets on a website is a common but potentially dangerous practice. Thankfully, the web platform offers a few controls that can help mitigate the risks. While this post uses the example of an embedded SurveyMonkey survey, the principles can be used for all kinds of other widgets.
Note that this is by no means an endorsement of SurveyMonkey's proprietary service. If you are looking for a survey product, you should consider a free and open source alternative like LimeSurvey.
SurveyMonkey's snippet
In order to embed a survey on your website, the SurveyMonkey interface will tell you to install the following website collector script:
<script>(function(t,e,s,n){var o,a,c;t.SMCX=t.SMCX||[],e.getElementById(n)||(o=e.getElementsByTagName(s),a=o[o.length-1],c=e.createElement(s),c.type="text/javascript",c.async=!0,c.id=n,c.src=["https:"===location.protocol?"https://":"http://","widget.surveymonkey.com/collect/website/js/tRaiETqnLgj758hTBazgd9NxKf_2BhnTfDFrN34n_2BjT1Kk0sqrObugJL8ZXdb_2BaREa.js"].join(""),a.parentNode.insertBefore(c,a))})(window,document,"script","smcx-sdk");</script>
<a style="font: 12px Helvetica, sans-serif; color: #999; text-decoration: none;" href="https://www.surveymonkey.com">Create your own user feedback survey</a>
which can be rewritten in a more understandable form as:
(
  function (s) {
    var scripts, last_script, new_script;
    window.SMCX = window.SMCX || [],
    document.getElementById("smcx-sdk") ||
      (
        scripts = document.getElementsByTagName("script"),
        last_script = scripts[scripts.length - 1],
        new_script = document.createElement("script"),
        new_script.type = "text/javascript",
        new_script.async = true,
        new_script.id = "smcx-sdk",
        new_script.src =
          [
            "https:" === location.protocol ? "https://" : "http://",
            "widget.surveymonkey.com/collect/website/js/tRaiETqnLgj758hTBazgd9NxKf_2BhnTfDFrN34n_2BjT1Kk0sqrObugJL8ZXdb_2BaREa.js"
          ].join(""),
        last_script.parentNode.insertBefore(new_script, last_script)
      )
  }
)();
The fact that this adds a third-party script dependency to your website is problematic because it means that a security vulnerability in their infrastructure could lead to a complete compromise of your site, thanks to third-party scripts having full control over your website. Security issues aside though, this could also enable this third-party to violate your users' privacy expectations and extract any information displayed on your site for marketing purposes.
However, if you embed the snippet on a test page and inspect it with the developer tools, you will find that it actually creates an iframe:
<iframe
width="500"
height="500"
frameborder="0"
allowtransparency="true"
src="https://www.surveymonkey.com/r/D3KDY6R?embedded=1"
></iframe>
and you can use that directly on your site without having to load their script.
Mixed content anti-pattern
As an aside, the script snippet they propose makes use of a common front-end anti-pattern:
"https:"===location.protocol?"https://":"http://"
This is presumably meant to avoid inserting an HTTP script element into an HTTPS page, since that would be considered mixed content and get blocked by browsers. However, it is entirely unnecessary: one should only ever use the HTTPS version of such scripts anyway, since an HTTP page never prohibits embedding HTTPS content.
In other words, the above code snippet can be simplified to:
"https://"
Restricting iframes
Thanks to defenses which have been added to the web platform recently, there are a few things that can be done to constrain iframes.
Firstly, you can choose to hide your full page URL from SurveyMonkey using the referrer policy:
referrerpolicy="strict-origin"
This may seem harmless, but page URLs sometimes include sensitive information in the URL path or query string, for example, search terms that a user might have typed. The strict-origin policy will limit the referrer to your site's hostname, port and protocol.
Secondly, you can prevent the iframe from being able to access anything about its embedding page or to trigger popups and unwanted downloads using the sandbox attribute:
sandbox="allow-scripts allow-forms"
Ideally, the contents of this attribute would be empty so that all restrictions would be active, but SurveyMonkey is a JavaScript application and it of course needs to submit a form since that's the purpose of the widget.
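For comparison, a purely static widget that needed neither scripts nor form submission could use an empty sandbox attribute and get every restriction (the URL here is a hypothetical example, not part of the SurveyMonkey setup):

```html
<iframe sandbox src="https://example.com/static-widget"></iframe>
```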
Finally, a new experimental capability is making its way into browsers: feature policy. In the context of untrusted iframes, it enables developers to explicitly disable certain powerful features:
allow="accelerometer 'none';
ambient-light-sensor 'none';
camera 'none';
display-capture 'none';
document-domain 'none';
fullscreen 'none';
geolocation 'none';
gyroscope 'none';
magnetometer 'none';
microphone 'none';
midi 'none';
payment 'none';
usb 'none';
vibrate 'none';
vr 'none';
webauthn 'none'"
Putting it all together, we end up with the following HTML snippet:
<iframe
width="500"
height="500"
frameborder="0"
allowtransparency="true"
allow="accelerometer 'none'; ambient-light-sensor 'none';
camera 'none'; display-capture 'none';
document-domain 'none'; fullscreen 'none';
geolocation 'none'; gyroscope 'none'; magnetometer 'none';
microphone 'none'; midi 'none'; payment 'none'; usb 'none';
vibrate 'none'; vr 'none'; webauthn 'none'"
sandbox="allow-scripts allow-forms"
referrerpolicy="strict-origin"
src="https://www.surveymonkey.com/r/D3KDY6R?embedded=1"
></iframe>
Content Security Policy
Another advantage of using the iframe directly is that instead of loosening your site's Content Security Policy by adding all of the following:
script-src https://www.surveymonkey.com
img-src https://www.surveymonkey.com
frame-src https://www.surveymonkey.com
you can limit the extra directives to just the frame controls:
frame-src https://www.surveymonkey.com
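For example, a site that previously served a simple policy would only need to grow by that one directive (an illustrative policy, not taken from a real deployment; adjust the other directives to match your site):

```
Content-Security-Policy: default-src 'self'; frame-src https://www.surveymonkey.com
```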
CSP Embedded Enforcement would be another nice mechanism to make use of, but looking at SurveyMonkey's CSP policy:
Content-Security-Policy:
default-src https: data: blob: 'unsafe-eval' 'unsafe-inline'
wss://*.hotjar.com 'self';
img-src https: http: data: blob: 'self';
script-src https: 'unsafe-eval' 'unsafe-inline' http://www.google-analytics.com http://ajax.googleapis.com
http://bat.bing.com http://static.hotjar.com http://www.googleadservices.com
'self';
style-src https: 'unsafe-inline' http://secure.surveymonkey.com 'self';
report-uri https://csp.surveymonkey.com/report?e=true&c=prod&a=responseweb
it allows the injection of arbitrary Flash files, inline scripts, evals and any other scripts hosted on an HTTPS URL, which means that it doesn't really provide any meaningful security benefits.
Embedded enforcement is therefore not a usable security control in this particular example until SurveyMonkey adopts a stricter CSP.
Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.
Create a user that can login without a password
First of all, I created a new user with a random password (using pwgen -s 64):
adduser guest
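The password can also be generated and applied non-interactively. This is a sketch rather than the exact commands I used: it avoids pwgen by reading from /dev/urandom, and the chpasswd step is left commented out since it must run as root against the freshly created account:

```shell
# Draw a 64-character random alphanumeric password, similar to `pwgen -s 64`
PASS="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64)"
echo "password length: ${#PASS}"

# Then, as root, assign it to the new account:
# echo "guest:${PASS}" | chpasswd
```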
Then following these instructions, I created a new group and added the user to it:
addgroup nopasswdlogin
adduser guest nopasswdlogin
In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:
auth sufficient pam_succeed_if.so user ingroup nopasswdlogin
Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.
Privacy settings
In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session and then opened gnome-control-center. I set the following in the privacy section:
Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center:
and configured it to clear everything on exit:
Create a password-less system keyring
In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.
Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.
I also made sure to disable saving of passwords, payment methods and addresses in the browser too.
Restrict user account further
Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:
[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no
If you know of any other restrictions that could be added, please leave a comment!
Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.
Flashing the LibreCMC firmware with Debian support
Before we can install Debian, we need a firmware that includes all of the necessary tools.
On another machine, do the following:
- Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
- Mount a vfat-formatted USB stick.
- Copy the file onto it and rename it to gnubee.bin.
- Unmount the USB stick.
Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:
ssh 192.168.10.0
reboot
If you have a USB serial cable, you can use it to monitor the flashing process:
screen /dev/ttyUSB0 57600
otherwise keep an eye on the LEDs and wait until they are fully done flashing.
Getting ssh access to LibreCMC
Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.
Now enable SSH access via the built-in LibreCMC firmware:
- Plug a network cable between your laptop and the black network port.
- Open web-based admin panel at http://192.168.10.0.
- Go to System | Administration.
- Set a root password.
- Disable ssh password auth and root password logins.
- Paste in your RSA ssh public key.
- Click Save & Apply.
- Go to Network | Firewall.
- Select "accept" for WAN Input.
- Click Save & Apply.
Finally, go to Network | Interfaces and note the IPv4 address of the WAN port since that will be needed in the next step.
Installing Debian
The first step is to install Debian jessie on the GnuBee.
Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:
ssh root@192.168.1.xxx
and the root password you set in the previous section.
Then use fdisk /dev/sda to create the following partition layout on the first drive:
Device Start End Sectors Size Type
/dev/sda1 2048 8390655 8388608 4G Linux swap
/dev/sda2 8390656 234441614 226050959 107.8G Linux filesystem
Note that I used a 120 GB solid-state drive as the system drive in order to minimize noise levels.
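As a quick sanity check on the fdisk output above (this drive uses 512-byte sectors), the swap partition works out to exactly 4 GiB:

```shell
# /dev/sda1 spans sectors 2048 through 8390655 inclusive
swap_sectors=$((8390655 - 2048 + 1))
swap_bytes=$((swap_sectors * 512))
echo "${swap_sectors} sectors = $((swap_bytes / 1024 / 1024 / 1024)) GiB"
# prints: 8388608 sectors = 4 GiB
```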
Then format the swap partition:
mkswap /dev/sda1
and download the latest version of the jessie installer:
wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install
(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)
The stock installer fails to bring up the correct networking configuration on my network and so I modified the install script by changing the eth0.1 blurb to:
auto eth0.1
iface eth0.1 inet static
address 192.168.10.1
netmask 255.255.255.0
Then you should be able to run the installer successfully:
sh ./debian-jessie-install
and reboot:
reboot
Restore ssh access in Debian jessie
Once the GnuBee has finished booting, login using the serial console:
- username: root
- password: GnuBee

and change the root password using passwd.
Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:
ssh root@192.168.1.xxx # type password set above
mkdir .ssh
chmod 700 .ssh
vim .ssh/authorized_keys # paste your ed25519 ssh pubkey
chmod 600 .ssh/authorized_keys
Finish the jessie installation
With this in place, you should be able to ssh into the GnuBee using your public key:
ssh root@192.168.1.172
and then finish the jessie installation:
wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot
After rebooting, I made a few tweaks to make the system more pleasant to use:
update-alternatives --config editor # choose vim.basic
dpkg-reconfigure locales # enable the locale that your desktop is using
Upgrade to stretch and then buster
To upgrade to stretch, put this in /etc/apt/sources.list:
deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main
Then upgrade the packages:
apt update
apt full-upgrade
apt autoremove
reboot
To upgrade to buster, put this in /etc/apt/sources.list:
deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main
and upgrade the packages:
apt update
apt full-upgrade
apt autoremove
reboot
Next steps
At this point, my GnuBee is running the latest version of Debian stable, however there are two remaining issues to fix:
- openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.
- The firmware is running an outdated version of the Linux kernel, though this is being worked on by community members.
I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.
My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.
First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:
[general]
register => tls://mydid:mypassword@servername.voip.ms
then I enabled incoming TCP connections:
tcpenable=yes
and TLS:
tlsenable=yes
tlscapath=/etc/ssl/certs/
Finally, I changed my provider entry in the same file to:
[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes
(Note the last two lines.)
The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:
[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)
Server certificate
The only thing I still need to fix is to make this error message go away in my logs:
asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>
It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.
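Pointing tlscertfile at a real certificate would presumably make the error go away. A sketch of what that could look like in the [general] section of sip.conf (the path is a placeholder for wherever your certificate and private key live):

```
[general]
tlscertfile=/etc/asterisk/keys/asterisk.pem
```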
Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.
Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.
Setting up LXC on Ubuntu
First of all, install lxc:
apt install lxc
echo "veth" >> /etc/modules
modprobe veth
Then turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:
net.ipv4.ip_forward=1
and applying it using:
sysctl -p /etc/sysctl.d/local.conf
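To confirm that the setting took effect (a quick check of my own, not part of the original instructions), the live value can be read back from /proc; it should print 1 once forwarding is on:

```shell
# Read the live value of the forwarding sysctl (1 = enabled, 0 = disabled)
cat /proc/sys/net/ipv4/ip_forward
```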
Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):
# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
and apply these changes:
iptables-apply
before restarting the lxc networking:
systemctl restart lxc-net.service
Creating the container
Once that's in place, you can finally create the OpenSUSE 15 container:
lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64
To see a list of all distros available with the download template:
lxc-create -n foo --template=download -- --list
Logging in as root
Start up the container and get a login console:
lxc-start -n opensuse15 -F
In another terminal, set a password for the root user:
lxc-attach -n opensuse15 passwd
You can now use this password to log into the console you started earlier.
Logging in as an unprivileged user via ssh
As root, install a few packages:
zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd
and then create an unprivileged user:
useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/
and give that user sudo access:
visudo # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois
Now login as that user from the console and add an ssh public key:
mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys
You can now login via ssh. The IP address to use can be seen in the output of:
lxc-ls --fancy
I recently setup a desktop computer with two SSDs using a software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in Ubuntu desktop, I had to use the server installation medium.
This is my version of these excellent instructions.
Server installer
Start by downloading the alternate server installer and verifying its signature:
Download the required files:
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
Verify the signature on the hash file:
$ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
$ gpg --verify SHA256SUMS.gpg SHA256SUMS
gpg: Signature made Fri Feb 15 08:32:38 2019 PST
gpg:                using RSA key D94AA3F0EFE21092
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
Verify the hash of the ISO file:
$ sha256sum --ignore-missing -c SHA256SUMS
ubuntu-18.04.2-server-amd64.iso: OK
Then copy it to a USB drive:
dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX
and boot with it.
Manual partitioning
Inside the installer, use manual partitioning to:
- Configure the physical partitions first.
- Configure the RAID arrays second.
- Configure the encrypted partitions last.
Here's the exact configuration I used:
- /dev/sda1 is 512 MB and used as the EFI partition
- /dev/sdb1 is 512 MB but not used for anything
- /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
- /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
- /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)
I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.
I created the following RAID1 arrays:

- /dev/sda2 and /dev/sdb2 for /dev/md2
- /dev/sda3 and /dev/sdb3 for /dev/md0
- /dev/sda4 and /dev/sdb4 for /dev/md1
I used /dev/md0 as my unencrypted /boot partition.
Then I created the following LUKS partitions:

- md1_crypt as the / partition using /dev/md1
- md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2
Post-installation configuration
Once your new system is up, sync the EFI partitions using dd:
dd if=/dev/sda1 of=/dev/sdb1
and create a second EFI boot entry:
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
Ensure that the RAID drives are fully sync'ed by keeping an eye on /proc/mdstat and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
Once you have rebooted, remove the following package to speed up future boots:
apt purge btrfs-progs
To switch to the desktop variant of Ubuntu, install these meta-packages:
apt install ubuntu-desktop gnome
then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).
Fixing booting with degraded RAID arrays
Since I have run into RAID startup problems in the past, I expected having to fix up a few things to make degraded RAID arrays boot correctly.
I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:
apt install lvm2
with use_lvmetad = 0 in /etc/lvm/lvm.conf.
Then in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:
#!/bin/sh
PREREQ="mdadm"
prereqs()
{
echo "$PREREQ"
}
case $1 in
prereqs)
prereqs
exit 0
;;
esac
mdadm --run /dev/md0
mdadm --run /dev/md1
mdadm --run /dev/md2
before making that script executable:
chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid
and refreshing the initramfs:
update-initramfs -u -k all
Disable suspend-to-disk
Since I use a random encryption key for the swap partition (to avoid having a second password prompt at boot time), suspend-to-disk is not going to work, and so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:
RESUME=none
and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:
update-grub
update-initramfs -u -k all
Test your configuration
With all of this in place, you should be able to do a final test of your setup:
1. Shutdown the computer and unplug the second drive.
2. Boot with only the first drive.
3. Shutdown the computer and plug the second drive back in.
4. Boot with both drives and re-add the second drive to the RAID array:

   mdadm /dev/md0 -a /dev/sdb3
   mdadm /dev/md1 -a /dev/sdb4
   mdadm /dev/md2 -a /dev/sdb2

5. Wait until the RAID is done re-syncing and shutdown the computer.
6. Repeat steps 2-5 with the first drive unplugged instead of the second.
7. Reboot with both drives plugged in.
At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.
I recently acquired an AnyTone AT-D878UV DMR radio which is unfortunately not supported by chirp, my usual go-to free software package for programming amateur radios.
Instead, I had to setup a Windows 10 virtual machine so that I could setup the radio using the manufacturer's computer programming software (CPS).
Install VirtualBox
Install VirtualBox:
apt install virtualbox virtualbox-guest-additions-iso
and add your user account to the vboxusers group:
adduser francois vboxusers
to make file sharing between the host and the guest work.
Finally, reboot to ensure that group membership and kernel modules are all set.
Create a Windows 10 virtual machine
Create a new Windows 10 virtual machine within VirtualBox. Then download Windows 10 from Microsoft and start the virtual machine with the .iso file mounted as an optical drive.
Follow the instructions to install Windows 10, paying attention to the various privacy options you will be offered.
Once Windows is installed, mount the host's /usr/share/virtualbox/VBoxGuestAdditions.iso as a virtual optical drive and install the VirtualBox guest additions.
Installing the CPS
With Windows fully setup, it's time to download the latest version of the computer programming software.
Unpack the downloaded file and then install it as Admin (right-click on the .exe).
Do NOT install the GD driver update or the USB driver; they do not appear to be necessary.
Program the radio
First, you'll want to download from the radio to get a starting configuration that you can change.
To do this:
- Turn the radio on and wait until it has finished booting.
- Plug the USB programming cable into both the computer and the radio.
- From the CPS menu choose "Set COM port".
- From the CPS menu choose "Read from radio".
Save this original codeplug to a file as a backup in case you need to easily reset back to the factory settings.
To program the radio, follow this handy third-party guide since it's much better than the official manual.
You should be able to use the "Write to radio" menu option without any problems once you're done creating your codeplug.
ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality when ProxyCommand can do the job, and consider multi-factor authentication on the server-side, for example using libpam-google-authenticator or libpam-yubico.
That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.
Prompt before each use of a key
The first option is -c, which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.
Simply install an ssh-askpass frontend like ssh-askpass-gnome:
apt install ssh-askpass-gnome
and then use this option when adding your key to the agent:
ssh-add -c ~/.ssh/key
Automatically removing keys after a timeout
ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.
That's where the second option comes in. Specifying -t when adding a key will automatically remove that key from the agent after a while.
For example, I have found that this setting works well at work:
ssh-add -t 10h ~/.ssh/key
where I don't want to have to type my ssh password every time I push a git branch.
At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:
ssh-add -t 4h ~/.ssh/key
Making these options the default
I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:
alias ssh-add='ssh-add -c -t 4h'
so that I can continue to use ssh-add as normal and not have to remember to include these extra options.
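Since aliases only take effect in interactive shells, a shell function in the same file would be an alternative (my own sketch, not from the original setup); unlike an alias, a function is also expanded inside scripts that source the file:

```shell
# A function instead of an alias: `command` bypasses the function itself
# so the real ssh-add binary is invoked with the extra options
ssh-add() {
    command ssh-add -c -t 4h "$@"
}
```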
Compiling the Brave Browser (based on Chromium) on Linux can take a really long time and so most developers use sccache to cache objects files and speed up future re-compilations.
Here's the cronjob I wrote to seed my local cache every work day with pre-compiled versions of the latest builds:
30 23 * * 0-4 francois /usr/bin/chronic /home/francois/bin/seed-brave-browser-cache
and here are the contents of that script:
#!/bin/bash
set -e
# Set the path and sccache environment variables correctly
source ${HOME}/.bashrc-brave
export LANG=en_CA.UTF-8
cd ${HOME}/devel/brave-browser-cache
echo "Environment:"
echo "- HOME = ${HOME}"
echo "- PATH = ${PATH}"
echo "- PWD = ${PWD}"
echo "- SHELL = ${SHELL}"
echo "- BASH_ENV = ${BASH_ENV}"
echo
echo $(date)
echo "=> Clean up repo and delete old build output"
rm -rf src/out node_modules src/brave/node_modules
git clean -f -d
git checkout HEAD package-lock.json
find -name "*.pyc" -delete
echo $(date)
echo "=> Update repo"
git fetch --prune origin
git pull
npm install
rm -rf src/brave/*
gclient sync -D
npm run init
echo $(date)
echo "=> Debug build"
killall sccache || true
ionice nice timeout 4h npm run build || ionice nice timeout 4h npm run build
ionice nice ninja -C src/out/Debug brave_unit_tests
ionice nice ninja -C src/out/Debug brave_browser_tests
echo
echo $(date)
echo "=> Release build"
killall sccache || true
ionice nice timeout 5h npm run build Release || ionice nice timeout 5h npm run build Release
ionice nice ninja -C src/out/Release brave_unit_tests
ionice nice ninja -C src/out/Release brave_browser_tests
echo
echo $(date)
echo "=> Delete build output"
rm -rf src/out
It references a ~/.bashrc-brave file which contains:
#!/bin/sh
export PATH="${PATH}:${HOME}/bin:${HOME}/devel/brave-browser/vendor/depot_tools:${HOME}/.cargo/bin"
export SCCACHE_DIR="${HOME}/.cache/sccache"
export SCCACHE_CACHE_SIZE=200G
export NO_AUTH_BOTO_CONFIG="${HOME}/.boto"
ccache instead of sccache
While I started using sccache for compiling Brave, I recently switched to ccache as sccache turned out to be fairly unreliable at compiling Chromium.
Switching to ccache is easy: simply install the package:
apt install ccache
and then set the environment variable in .npmrc:
sccache = ccache
Finally, you'll probably want to increase the maximum cache size:
ccache --max-size=200G
in order to fit all of Chromium/Brave in the cache.