Programming an AnyTone AT-D878UV on Linux using Windows 10 and VirtualBox

I recently acquired an AnyTone AT-D878UV DMR radio which is unfortunately not supported by chirp, my usual go-to free software package for programming amateur radios.

Instead, I had to set up a Windows 10 virtual machine so that I could program the radio using the manufacturer's computer programming software (CPS).

Install VirtualBox

Install VirtualBox:

apt install virtualbox virtualbox-guest-additions-iso

and add your user account to the vboxusers group:

adduser francois vboxusers

to make file sharing between the host and the guest work.

Finally, reboot to ensure that group membership and kernel modules are all set.
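
After the reboot, a couple of quick sanity checks (not strictly required) will confirm that the group membership and kernel modules are in place:

groups | grep vboxusers
lsmod | grep -i vbox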

Create a Windows 10 virtual machine

Create a new Windows 10 virtual machine within VirtualBox. Then download Windows 10 from Microsoft and start the virtual machine with the .iso file mounted as an optical drive.

Follow the instructions to install Windows 10, paying attention to the various privacy options you will be offered.

Once Windows is installed, mount the host's /usr/share/virtualbox/VBoxGuestAdditions.iso as a virtual optical drive and install the VirtualBox guest additions.

Installing the CPS

With Windows fully set up, it's time to download the latest version of the computer programming software.

Unpack the downloaded file and then install it as Admin (right-click on the .exe).

Do NOT install the GD driver update or the USB driver; they do not appear to be necessary.

Program the radio

First, you'll want to download from the radio to get a starting configuration that you can change.

To do this:

  1. Turn the radio on and wait until it has finished booting.
  2. Plug the USB programming cable into the computer and the radio.
  3. From the CPS menu choose "Set COM port".
  4. From the CPS menu choose "Read from radio".

Save this original codeplug to a file as a backup in case you need to easily reset back to the factory settings.

To program the radio, follow this handy third-party guide since it's much better than the official manual.

You should be able to use the "Write to radio" menu option without any problems once you're done creating your codeplug.

Secure ssh-agent usage

ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality when ProxyCommand will do, and consider multi-factor authentication on the server side, for example using libpam-google-authenticator or libpam-yubico.
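
For example, instead of forwarding the agent, a hop through a bastion host can be expressed directly in ~/.ssh/config. Here is a minimal sketch; bastion.example.com and internal.example.com are placeholder hostnames:

Host internal.example.com
    # Hop through the bastion without exposing the agent to it
    ProxyJump bastion.example.com
    # On older OpenSSH versions, the equivalent is:
    # ProxyCommand ssh -W %h:%p bastion.example.com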

That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.

Prompt before each use of a key

The first option is -c which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.

Simply install an ssh-askpass frontend like ssh-askpass-gnome:

apt install ssh-askpass-gnome

and then use this option when adding your key to the agent:

ssh-add -c ~/.ssh/key

Automatically removing keys after a timeout

ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.

That's where the second option comes in. Specifying -t when adding a key will automatically remove that key from the agent after the specified amount of time.

For example, I have found that this setting works well at work:

ssh-add -t 10h ~/.ssh/key

where I don't want to have to type my ssh password every time I push a git branch.

At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:

ssh-add -t 4h ~/.ssh/key

Making these options the default

I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:

alias ssh-add='ssh-add -c -t 4h'

so that I can continue to use ssh-add as normal and not have to remember to include these extra options.

Seeding sccache for Faster Brave Browser Builds

Compiling the Brave Browser (based on Chromium) on Linux can take a really long time and so most developers use sccache to cache object files and speed up future re-compilations.

Here's the cronjob I wrote to seed my local cache every work day to pre-compile the latest builds:

30 23 * * 0-4   francois  /usr/bin/chronic /home/francois/bin/seed-brave-browser-cache

and here are the contents of that script:

#!/bin/bash
set -e

# Set the path and sccache environment variables correctly
source ${HOME}/.bashrc-brave
export LANG=en_CA.UTF-8

cd ${HOME}/devel/brave-browser-cache

echo "Environment:"
echo "- HOME = ${HOME}"
echo "- PATH = ${PATH}"
echo "- PWD = ${PWD}"
echo "- SHELL = ${SHELL}"
echo "- BASH_ENV = ${BASH_ENV}"
echo

echo $(date)
echo "=> Update repo"
git pull
npm install
npm run init

echo $(date)
echo "=> Delete any old build output"
rm -rf src/out

echo $(date)
echo "=> Debug build"
ionice nice timeout 4h npm run build
echo

echo $(date)
echo "=>Release build"
ionice nice timeout 5h npm run build Release
echo

echo $(date)
echo "=> Delete build output"
rm -rf src/out
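
To get an idea of how well the seeded cache is being used by later builds, sccache can report its hit rate (assuming sccache is on the PATH set up in ~/.bashrc-brave):

sccache --show-stats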

Connecting a VoIP phone directly to an Asterisk server

On my Asterisk server, I happen to have two on-board Ethernet interfaces. Since I was only using one of them, I decided to move my VoIP phone from the local network switch to a direct connection to the Asterisk server.

The main advantage is that this phone, running proprietary software of unknown quality, is no longer available on my general home network. Most importantly though, it no longer has access to the Internet, without my having to firewall it manually.

Here's how I configured everything.

Private network configuration

On the server, I started by giving the second network interface a static IP address in /etc/network/interfaces:

auto eth1
iface eth1 inet static
    address 192.168.2.2
    netmask 255.255.255.0

On the VoIP phone itself, I set the static IP address to 192.168.2.3 and the DNS server to 192.168.2.2. I then updated the SIP registrar IP address to 192.168.2.2.
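
At this point, a quick check from the server confirms that the phone is reachable on the new private network:

ping -c 3 192.168.2.3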

The DNS server actually refers to an unbound daemon running on the Asterisk server. The only configuration change I had to make was to listen on the second interface and allow the VoIP phone in:

server:
    interface: 127.0.0.1
    interface: 192.168.2.2
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.1/32 allow
    access-control: 192.168.2.3/32 allow
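
After making that change, it's worth validating the configuration and restarting the daemon (assuming the standard unbound systemd service):

unbound-checkconf
systemctl restart unbound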

Finally, I opened the right ports on the server's firewall in /etc/network/iptables.up.rules:

-A INPUT -s 192.168.2.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.2.3/32 -p udp --dport 10000:20000 -j ACCEPT
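
If these rules aren't already loaded at boot, they can be applied safely with iptables-apply, which rolls the change back automatically if you don't confirm it in time:

iptables-apply /etc/network/iptables.up.rules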

Accessing the admin page

Now that the VoIP phone is no longer available on the local network, it's not possible to access its admin page. That's a good thing from a security point of view, but it's somewhat inconvenient.

Therefore I put the following in my ~/.ssh/config to make the admin page available on http://localhost:8081 after I connect to the Asterisk server via ssh:

Host asterisk
    LocalForward 8081 192.168.2.3:80

Encrypted connection between SIP phones using Asterisk

Here is the setup I put together to have two SIP phones connect together over an encrypted channel. Since the two phones do not support encryption, I used Asterisk to provide the encrypted channel over the Internet.

Installing Asterisk

First of all, each VoIP phone is in a different physical location and so I installed an Asterisk server in each house.

One of the servers is a Debian stretch machine and the other runs Ubuntu 18.04 (bionic). Regardless, I used a fairly standard configuration and simply installed the asterisk package on both machines:

apt install asterisk

SIP phones

The two phones, both Snom 300s, connect to their local Asterisk server using its local IP address and the same account details, which I put in /etc/asterisk/sip.conf:

[1000]
type=friend
qualify=yes
secret=password1
encryption=no
context=internal
host=dynamic
nat=no
canreinvite=yes
mailbox=1000@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw
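
Once a phone has registered, its status can be checked from the Asterisk CLI (a quick sanity check, not part of the configuration itself):

asterisk -rx "sip show peers"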

Dialplan and voicemail

The extension number above (1000) maps to the following configuration blurb in /etc/asterisk/extensions.conf:

[home]
exten => 1000,1,Dial(SIP/1000,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@mailboxes,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@mailboxes,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

The internal context maps to the following blurb in /etc/asterisk/extensions.conf:

[internal]
include => home
include => iax2users
exten => 707,1,VoiceMailMain(1000@mailboxes)

and 1000@mailboxes maps to the following entry in /etc/asterisk/voicemail.conf:

[mailboxes]
1000 => 1234,home,person@email.com

(with 1234 being the voicemail PIN).
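
After editing these files, the new dialplan and voicemail settings can be loaded without restarting Asterisk (a full restart works too):

asterisk -rx "core reload"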

Encrypted IAX links

In order to create a virtual link between the two servers using the IAX protocol, I created user credentials on each server in /etc/asterisk/iax.conf:

[iaxuser]
type=user
auth=md5
secret=password2
context=iax2users
allow=g722
allow=speex
encryption=aes128
trunk=no

then I created an entry for the other server in the same file:

[server2]
type=peer
host=server2.dyn.fmarier.org
auth=md5
secret=password2
username=iaxuser
allow=g722
allow=speex
encryption=yes
forceencrypt=yes
trunk=no
qualify=yes

The second machine contains the same configuration with the exception of the server name (server1 instead of server2) and hostname (server1.dyn.fmarier.org instead of server2.dyn.fmarier.org).
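
Once both sides are configured, the status of the encrypted link can be verified from either server:

asterisk -rx "iax2 show peers"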

Speed dial for the other phone

Finally, to allow each phone to ring one another by dialing 2000, I put the following in /etc/asterisk/extensions.conf:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Francois Marier <2000>)
exten => 2000,2,Dial(IAX2/server1/1000)

and of course a similar blurb on the other machine:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Other Person <2000>)
exten => 2000,2,Dial(IAX2/server2/1000)

Firewall rules

Since we are using the IAX protocol instead of SIP, there is only one port to open in /etc/network/iptables.up.rules for the remote server:

# IAX2 protocol
-A INPUT -s x.x.x.x/y -p udp --dport 4569 -j ACCEPT

where x.x.x.x/y is the IP range allocated to the ISP that the other machine is behind.

If you want to restrict traffic on the local network as well, then these ports need to be open for the SIP phone to be able to connect to its local server:

# VoIP phones (internal)
-A INPUT -s 192.168.1.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.1.3/32 -p udp --dport 10000:20000 -j ACCEPT

where 192.168.1.3 is the static IP address allocated to the SIP phone.

Fedora 29 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Debian stretch and jessie, here is how I was able to create a Fedora 29 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

enable IP forwarding (needed for the containers' bridged networking) by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Create the container

Once that's in place, you can finally create the Fedora 29 container:

lxc-create -n fedora29 -t download -- -d fedora -r 29 -a amd64

To see a list of all distros available with the download template:

lxc-create --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n fedora29 -F

In another terminal, set a password for the root user:

lxc-attach -n fedora29 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

dnf install openssh-server vim sudo man

and then create an unprivileged user with sudo access:

adduser francois -G wheel
passwd francois

Now log in as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys
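
If sshd isn't already enabled inside the container, start it as root (or via sudo) before trying to connect:

sudo systemctl enable --now sshd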

You can now log in via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

Enabling all necessary locales

To ensure that you have all available locales and don't see ugly perl warnings such as:

perl: warning: Setting locale failed.
perl: warning: Falling back to the standard locale ("C").

install the appropriate language packs:

dnf install langpacks-en.noarch
dnf reinstall dnf

Erasing Persistent Storage Securely on Linux

Here are some notes on how to securely delete computer data in a way that makes it impractical for anybody to recover that data. This is an important thing to do before giving away (or throwing away) old disks.

Ideally though, it's better not to have to rely on secure erasure at all and to use full-disk encryption right from the start, for example using LUKS. That way, if the secure deletion fails for whatever reason, or can't be performed (e.g. the drive is dead), it's not a big deal.

Rotating hard drives

With ATA or SCSI hard drives, DBAN seems to be the ideal solution.

  1. Burn it to a CD,
  2. boot from it,
  3. and follow the instructions.

Note that you should disconnect any drives you don't want to erase before booting with that CD.

This is probably the most trustworthy method of wiping since it uses free and open source software to write to each sector of the drive several times. The methods that follow rely on proprietary software built into the firmware of the devices, and so you have to trust that it is implemented properly and not backdoored.

ATA / SATA solid-state drives

Due to the nature of solid-state storage (i.e. the lifetime number of writes is limited), it's not a good idea to use DBAN for those. Instead, we must rely on the vendor's implementation of ATA Secure Erase.
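
Before starting, it may be worth confirming that the drive supports the enhanced erase feature and is not in a "frozen" state; the relevant details appear in the Security section of the hdparm output:

hdparm -I /dev/sdX | grep -A8 Security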

First, set a password on the drive:

hdparm --user-master u --security-set-pass p /dev/sdX

and then issue a Secure Erase command:

hdparm --user-master u --security-erase-enhanced p /dev/sdX

NVMe solid-state drives

For SSDs using an NVMe connector, simply request a User Data Erase:

nvme format -s1 /dev/nvme0n1
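
Since this command is destructive, it's worth double-checking the target device name beforehand by listing the NVMe devices on the system:

nvme list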

Restricting outgoing HTTP traffic in a web application using a squid proxy

I recently had to fix a Server-Side Request Forgery bug in Libravatar's OpenID support. In addition to enabling authentication on internal services whenever possible, I also forced all outgoing network requests from the Django web-application to go through a restrictive egress proxy.

OpenID logins are prone to SSRF

Server-Side Request Forgeries are vulnerabilities which allow attackers to issue arbitrary GET requests on the server side. Unlike a Cross-Site Request Forgery, SSRF requests do not include user credentials (e.g. cookies). On the other hand, since these requests are done by the server, they typically originate from inside the firewall.

This allows attackers to target internal resources and issue arbitrary GET requests to them. One could use this to leak information (especially when error reports include the request payload), tamper with the state of internal services, or portscan an internal network.

OpenID 1.x logins are prone to these vulnerabilities because of the way they are initiated:

  1. Users visit a site's login page.
  2. They enter their OpenID URL in a text field.
  3. The server fetches the given URL to discover the OpenID endpoints.
  4. The server redirects the user to their OpenID provider to continue the rest of the login flow.

The third step is the potentially problematic one since it requires a server-side fetch.

Filtering URLs in the application is not enough

At first, I thought I would filter out undesirable URLs inside the application:

  • hostnames like localhost, 127.0.0.1 or ::1
  • non-HTTP schemes like file or gopher
  • non-standard ports like 5432 or 11211

However this filtering is going to be very easy to bypass:

  1. Add a hostname in your DNS zone which resolves to 127.0.0.1.
  2. Set up a redirect to a blacklisted URL such as file:///etc/passwd.

Applying the filter on the original URL is clearly not enough.

Install and configure a Squid proxy

In order to fully restrict outgoing OpenID requests from the web application, I used a Squid HTTP proxy.

First, install the package:

apt install squid3

and set the following in /etc/squid3/squid.conf:

acl to_localnet dst 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl to_localnet dst 10.0.0.0/8            # RFC 1918 local private network (LAN)
acl to_localnet dst 100.64.0.0/10         # RFC 6598 shared address space (CGN)
acl to_localnet dst 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
acl to_localnet dst 172.16.0.0/12         # RFC 1918 local private network (LAN)
acl to_localnet dst 192.168.0.0/16        # RFC 1918 local private network (LAN)
acl to_localnet dst fc00::/7              # RFC 4193 local private network range
acl to_localnet dst fe80::/10             # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny manager
http_access deny to_localhost
http_access deny to_localnet
http_access allow localhost
http_access deny all

http_port 127.0.0.1:3128

Ideally, I would like to use a whitelist approach to restrict requests to a small set of valid URLs, but in the case of OpenID, the set of valid URLs is not fixed. Therefore the only workable approach is a blacklist. The above snippet whitelists port numbers (80 and 443) and blacklists requests to localhost (a built-in squid acl variable which resolves to 127.0.0.1 and ::1) as well as known local IP ranges.
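
A quick way to test the proxy's behaviour is to fetch an internal and an external URL through it (the URLs below are only examples; any external HTTPS site will do):

# should be denied by the to_localnet ACL
curl --proxy http://127.0.0.1:3128 http://169.254.169.254/
# should succeed
curl --proxy http://127.0.0.1:3128 https://www.example.com/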

Expose the proxy to Django in the WSGI configuration

In order to force all outgoing requests from Django to go through the proxy, I put the following in my WSGI application (/etc/libravatar/django.wsgi):

os.environ['ftp_proxy'] = "http://127.0.0.1:3128"
os.environ['http_proxy'] = "http://127.0.0.1:3128"
os.environ['https_proxy'] = "http://127.0.0.1:3128"

The whole thing seemed to work well in my limited testing. There is however a bug in urllib2 with proxying HTTPS URLs that include a port number, and there is an open issue in python-openid around proxies and OpenID.

Lean data in practice

Mozilla has been promoting the idea of lean data for a while. It's about recognizing both that data is valuable and that it is a dangerous thing to hold on to. Following these lean data principles forces you to clarify the questions you want to answer and think hard about the minimal set of information you need to answer these questions.

Out of these general principles came the Firefox data collection guidelines. These are the guidelines that every team must follow when they want to collect data about our users and that are enforced through the data stewardship program.

As one of the data stewards for Firefox, I have reviewed hundreds of data collection requests and can attest to the fact that Mozilla does follow the lean data principles it promotes. Mozillians are already aware of the problems with collecting large amounts of data, but the Firefox data review process provides an additional opportunity for an outsider to question the necessity of each piece of data. In my experience, this system is quite effective at reducing the data footprint of Firefox.

What does lean data look like in practice? Here are a few examples of changes that were made to restrict the data collected by Firefox to what is truly needed:

  • Collecting a user's country is not particularly identifying in the case of large countries like the USA, but it can be when it comes to very small island nations. How many Firefox users are there in Niue? Hard to know, but it's definitely less than the number of Firefox users in Germany. After I raised that issue, the team decided to put all of the small countries into a single "other" bucket.

  • Similarly, cities generally have enough users to be non-identifying. However, some municipalities are quite small and can lead to the same problems. There are lots of Firefox users in Portland, Oregon for example, but probably not that many in Portland, Arkansas or Portland, Pennsylvania. If you want to tell the Oregonian Portlanders apart, it might be sufficient to bucket Portland users into "Oregon" and "not Oregon", instead of recording both the city and the state.

  • When collecting window sizes and other pixel-based measurements, it's easier to collect the exact value. However, that exact value could be stable for a while and create a temporary fingerprint for a user. In most cases, teams wanting to collect this kind of data have agreed to round the value in order to increase the number of users in each "bucket" without affecting their ability to answer their underlying questions.

  • Firefox occasionally runs studies which involve collecting specific URLs that users have consented to share with us (e.g. "this site crashes my Firefox"). In most cases though, the full URL is not needed and so I have often been able to get teams to restrict the collection to the hostname, or to at least remove the query string, which could include usernames and passwords on badly-designed websites.

  • When making use of Google Analytics, it may not be necessary to collect everything it supports by default. For example, my suggestion to trim the referrers was implemented by one of the teams using Google Analytics since while it would have been an interesting data point, it wasn't necessary to answer the questions they had in mind.

Some of these might sound like small wins, but to me they are a sign that the process is working. In most cases, requests are very easy to approve because developers have already done the hard work of data minimization. In a few cases, by asking questions and getting familiar with the problem, the data steward can point out opportunities for further reductions in data collection that the team may have missed.

Installing Vidyo on Ubuntu 18.04

Following these instructions as well as the comments in there, I was able to get Vidyo, the proprietary videoconferencing system that Mozilla uses internally, to work on Ubuntu 18.04 (Bionic Beaver). The same instructions should work on recent versions of Debian too.

Installing dependencies

First of all, install all of the package dependencies:

sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module

Then, ensure you have a system tray application running. This should be the case for most desktop environments.

Building a custom Vidyo package

Download version 3.6.3 from the CERN Vidyo Portal but don't expect to be able to install it right away.

You need to first hack the package in order to remove obsolete dependencies.
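
One way to do this, sketched below, is to unpack the .deb with dpkg-deb, edit the Depends line in DEBIAN/control to drop whichever package dpkg complains about, and repack it (vidyodesktop.deb and the temporary directory name are placeholders):

dpkg-deb -R vidyodesktop.deb vidyodesktop-tmp
# edit vidyodesktop-tmp/DEBIAN/control and remove the obsolete dependency
dpkg-deb -b vidyodesktop-tmp vidyodesktop-custom.deb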

Once that's done, install the resulting package:

sudo dpkg -i vidyodesktop-custom.deb

Packaging fixes and configuration

There are a few more things to fix before it's ready to be used.

First, fix the ownership on the main executable:

sudo chown root:root /usr/bin/VidyoDesktop

Then disable autostart, since you probably don't want to keep the client running all the time (and listening on the network) given that it hasn't received any updates in a long time and has apparently been abandoned by Vidyo:

sudo rm /etc/xdg/autostart/VidyoDesktop.desktop

Remove any old configs in your home directory that could interfere with this version:

rm -rf ~/.vidyo ~/.config/Vidyo

Finally, launch VidyoDesktop and go into the settings to check "Always use VidyoProxy".