OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service
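
If anything misbehaves at this stage, lxc-checkconfig (shipped with the lxc packages) can confirm that the required kernel features are enabled:

lxc-checkconfig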

Creating the container

Once that's in place, you can finally create the OpenSUSE 15 container:

lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n opensuse15 -F

In another terminal, set a password for the root user:

lxc-attach -n opensuse15 passwd

You can now use this password to log into the console you started earlier.
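
If you'd like the container to come up automatically at boot, LXC supports this via two settings in the container's config file. A minimal sketch, assuming a privileged container in the default location (the delay value is just an example):

# /var/lib/lxc/opensuse15/config
lxc.start.auto = 1
lxc.start.delay = 5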

Logging in as an unprivileged user via ssh

As root, install a few packages:

zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd

and then create an unprivileged user:

useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/

and give that user sudo access:

visudo  # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy
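
For example, if the container came up as 10.0.3.65 (a made-up address), you would connect with:

ssh francois@10.0.3.65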

Installing Ubuntu 18.04 using both full-disk encryption and RAID1

I recently set up a desktop computer with two SSDs using software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in Ubuntu desktop, I had to use the server installation medium.

This is my version of these excellent instructions.

Server installer

Start by downloading the alternate server installer and verifying its signature:

  1. Download the required files:

     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
    
  2. Verify the signature on the hash file:

     $ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
     $ gpg --verify SHA256SUMS.gpg SHA256SUMS
     gpg: Signature made Fri Feb 15 08:32:38 2019 PST
     gpg:                using RSA key D94AA3F0EFE21092
     gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
     gpg: WARNING: This key is not certified with a trusted signature!
     gpg:          There is no indication that the signature belongs to the owner.
     Primary key fingerprint: 8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
    
  3. Verify the hash of the ISO file:

     $ sha256sum --ignore-missing -c SHA256SUMS
     ubuntu-18.04.2-server-amd64.iso: OK
    

Then copy it to a USB drive:

dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX

and boot with it.
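
Before running the dd command above, it's worth double-checking which device the USB drive actually is, since dd will happily overwrite the wrong disk:

lsblk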

Manual partitioning

Inside the installer, use manual partitioning to:

  1. Configure the physical partitions first.
  2. Configure the RAID arrays second.
  3. Configure the encrypted partitions last.

Here's the exact configuration I used:

  • /dev/sda1 is 512 MB and used as the EFI partition
  • /dev/sdb1 is 512 MB but not used for anything
  • /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
  • /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
  • /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)

I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.

I created the following RAID1 arrays:

  • /dev/sda2 and /dev/sdb2 for /dev/md2
  • /dev/sda3 and /dev/sdb3 for /dev/md0
  • /dev/sda4 and /dev/sdb4 for /dev/md1

I used /dev/md0 as my unencrypted /boot partition.

Then I created the following LUKS partitions:

  • md1_crypt as the / partition using /dev/md1
  • md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2
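
For reference, the resulting /etc/crypttab should end up looking roughly like this (only a sketch; the installer generates the real UUID and options):

md1_crypt UUID=<uuid-of-md1> none luks
md2_crypt /dev/md2 /dev/urandom swap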

Post-installation configuration

Once your new system is up, sync the EFI partitions using dd:

dd if=/dev/sda1 of=/dev/sdb1

and create a second EFI boot entry:

efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \\EFI\\ubuntu\\shimx64.efi

Ensure that the RAID drives are fully synced by keeping an eye on /proc/mdstat and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
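
One convenient way to keep an eye on the re-sync progress is:

watch -n5 cat /proc/mdstat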

Once you have rebooted, remove the following package to speed up future boots:

apt purge btrfs-progs

To switch to the desktop variant of Ubuntu, install these meta-packages:

apt install ubuntu-desktop gnome

then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).

Fixing booting with degraded RAID arrays

Since I have run into RAID startup problems in the past, I expected having to fix up a few things to make degraded RAID arrays boot correctly.

I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:

apt install lvm2

with use_lvmetad = 0 in /etc/lvm/lvm.conf.

Then in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:

 #!/bin/sh
 PREREQ="mdadm"
 prereqs()
 {
      echo "$PREREQ"
 }
 case $1 in
 prereqs)
      prereqs
      exit 0
      ;;
 esac

 mdadm --run /dev/md0
 mdadm --run /dev/md1
 mdadm --run /dev/md2

before making that script executable:

chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid

and refreshing the initramfs:

update-initramfs -u -k all

Disable suspend-to-disk

Since I use a random encryption key for the swap partition (to avoid having a second password prompt at boot time), suspend-to-disk is not going to work, so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:

RESUME=none

and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:

update-grub
update-initramfs -u -k all

Test your configuration

With all of this in place, you should be able to do a final test of your setup:

  1. Shut down the computer and unplug the second drive.
  2. Boot with only the first drive.
  3. Shut down the computer and plug the second drive back in.
  4. Boot with both drives and re-add the second drive to the RAID array:

     mdadm /dev/md0 -a /dev/sdb3
     mdadm /dev/md1 -a /dev/sdb4
     mdadm /dev/md2 -a /dev/sdb2
    
  5. Wait until the RAID is done re-syncing (see the check below) and shut down the computer.

  6. Repeat steps 2-5 with the first drive unplugged instead of the second.
  7. Reboot with both drives plugged in.
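
To check whether an array has finished re-syncing, look for [UU] in /proc/mdstat or at the state reported by mdadm:

cat /proc/mdstat
mdadm --detail /dev/md1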

At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.

Programming an AnyTone AT-D878UV on Linux using Windows 10 and VirtualBox

I recently acquired an AnyTone AT-D878UV DMR radio which is unfortunately not supported by chirp, my usual go-to free software package for programming amateur radios.

Instead, I had to set up a Windows 10 virtual machine so that I could program the radio using the manufacturer's computer programming software (CPS).

Install VirtualBox

Install VirtualBox:

apt install virtualbox virtualbox-guest-additions-iso

and add your user account to the vboxusers group:

adduser francois vboxusers

to make filesharing between the host and the guest work.

Finally, reboot to ensure that group membership and kernel modules are all set.

Create a Windows 10 virtual machine

Create a new Windows 10 virtual machine within VirtualBox. Then download Windows 10 from Microsoft and start the virtual machine with the .iso file mounted as a virtual optical drive.
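
For those who prefer the command line, a rough VBoxManage equivalent would look like the following sketch (the VM name, memory and disk sizes are made-up values to adjust):

VBoxManage createvm --name win10 --ostype Windows10_64 --register
VBoxManage modifyvm win10 --memory 4096 --vram 128
VBoxManage createmedium disk --filename ~/win10.vdi --size 40960
VBoxManage storagectl win10 --name SATA --add sata
VBoxManage storageattach win10 --storagectl SATA --port 0 --type hdd --medium ~/win10.vdi
VBoxManage storageattach win10 --storagectl SATA --port 1 --type dvddrive --medium Win10.iso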

Follow the instructions to install Windows 10, paying attention to the various privacy options you will be offered.

Once Windows is installed, mount the host's /usr/share/virtualbox/VBoxGuestAdditions.iso as a virtual optical drive and install the VirtualBox guest additions.

Installing the CPS

With Windows fully set up, it's time to download the latest version of the computer programming software.

Unpack the downloaded file and then install it as Admin (right-click on the .exe).

Do NOT install the GD driver update or the USB driver; they do not appear to be necessary.

Program the radio

First, you'll want to download from the radio to get a starting configuration that you can change.

To do this:

  1. Turn the radio on and wait until it has finished booting.
  2. Plug the USB programming cable into the computer and the radio.
  3. From the CPS menu choose "Set COM port".
  4. From the CPS menu choose "Read from radio".

Save this original codeplug to a file as a backup in case you need to easily reset back to the factory settings.

To program the radio, follow this handy third-party guide since it's much better than the official manual.

You should be able to use the "Write to radio" menu option without any problems once you're done creating your codeplug.

Secure ssh-agent usage

ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality when ProxyCommand can do the job, and consider multi-factor authentication on the server side, for example using libpam-google-authenticator or libpam-yubico.

That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.

Prompt before each use of a key

The first option is -c, which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.

Simply install an ssh-askpass frontend like ssh-askpass-gnome:

apt install ssh-askpass-gnome

and then use this option when adding your key to the agent:

ssh-add -c ~/.ssh/key

Automatically removing keys after a timeout

ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.

That's where the second option comes in. Specifying a lifetime with -t when adding a key will automatically remove that key from the agent once that time has elapsed.

For example, I have found that this setting works well at work, where I don't want to have to type my ssh password every time I push a git branch:

ssh-add -t 10h ~/.ssh/key

At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:

ssh-add -t 4h ~/.ssh/key

Making these options the default

I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:

alias ssh-add='ssh-add -c -t 4h'

so that I can continue to use ssh-add as normal and don't have to remember to include these extra options.
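
If your OpenSSH is recent enough (7.2 or later), another option worth considering is AddKeysToAgent in ~/.ssh/config, which makes keys that ssh loads into the agent implicitly require the same confirmation as ssh-add -c (though it doesn't set a lifetime):

Host *
    AddKeysToAgent confirm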

Seeding sccache for Faster Brave Browser Builds

Compiling the Brave Browser (based on Chromium) on Linux can take a really long time and so most developers use sccache to cache object files and speed up future re-compilations.

Here's the cronjob I wrote to seed my local cache every work day to pre-compile the latest builds:

30 23 * * 0-4   francois  /usr/bin/chronic /home/francois/bin/seed-brave-browser-cache
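
The chronic wrapper, which stays silent unless the command fails, comes from the moreutils package:

apt install moreutils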

and here are the contents of that script:

#!/bin/bash
set -e

# Set the path and sccache environment variables correctly
source ${HOME}/.bashrc-brave
export LANG=en_CA.UTF-8

cd ${HOME}/devel/brave-browser-cache

echo "Environment:"
echo "- HOME = ${HOME}"
echo "- PATH = ${PATH}"
echo "- PWD = ${PWD}"
echo "- SHELL = ${SHELL}"
echo "- BASH_ENV = ${BASH_ENV}"
echo

echo $(date)
echo "=> Clean up repo and delete old build output"
rm -rf src/out node_modules src/brave/node_modules
git clean -f -d
git checkout HEAD package-lock.json

echo $(date)
echo "=> Update repo"
git pull
npm install
npm run init

echo $(date)
echo "=> Debug build"
killall sccache || true
ionice nice timeout 4h npm run build || ionice nice timeout 4h npm run build
ionice nice ninja -C src/out/Debug brave_unit_tests
ionice nice ninja -C src/out/Debug brave_browser_tests
echo

echo $(date)
echo "=>Release build"
killall sccache || true
ionice nice timeout 5h npm run build Release || ionice nice timeout 5h npm run build Release
ionice nice ninja -C src/out/Release brave_unit_tests
ionice nice ninja -C src/out/Release brave_browser_tests
echo

echo $(date)
echo "=> Delete build output"
rm -rf src/out

Connecting a VoIP phone directly to an Asterisk server

On my Asterisk server, I happen to have two on-board ethernet ports. Since I only used one of them, I decided to move my VoIP phone from the local network switch to being connected directly to the Asterisk server.

The main advantage is that this phone, running proprietary software of unknown quality, is no longer available on my general home network. Most importantly though, it no longer has access to the Internet, without my having to firewall it manually.

Here's how I configured everything.

Private network configuration

On the server, I started by giving the second network interface a static IP address in /etc/network/interfaces:

auto eth1
iface eth1 inet static
    address 192.168.2.2
    netmask 255.255.255.0
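
The interface can then be brought up without a reboot:

ifup eth1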

On the VoIP phone itself, I set the static IP address to 192.168.2.3 and the DNS server to 192.168.2.2. I then updated the SIP registrar IP address to 192.168.2.2.

The DNS server actually refers to an unbound daemon running on the Asterisk server. The only configuration change I had to make was to listen on the second interface and allow the VoIP phone in:

server:
    interface: 127.0.0.1
    interface: 192.168.2.2
    access-control: 0.0.0.0/0 refuse
    access-control: 127.0.0.1/32 allow
    access-control: 192.168.2.3/32 allow
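
After changing the unbound configuration, it's a good idea to validate it and restart the daemon:

unbound-checkconf
systemctl restart unbound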

Finally, I opened the right ports on the server's firewall in /etc/network/iptables.up.rules:

-A INPUT -s 192.168.2.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.2.3/32 -p udp --dport 10000:20000 -j ACCEPT

Accessing the admin page

Now that the VoIP phone is no longer available on the local network, it's not possible to access its admin page. That's a good thing from a security point of view, but it's somewhat inconvenient.

Therefore I put the following in my ~/.ssh/config to make the admin page available on http://localhost:8081 after I connect to the Asterisk server via ssh:

Host asterisk
    LocalForward 8081 192.168.2.3:80
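
The equivalent one-off command, without any configuration, would be the following (replace asterisk-server with your server's actual hostname):

ssh -L 8081:192.168.2.3:80 asterisk-server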

Encrypted connection between SIP phones using Asterisk

Here is the setup I put together to have two SIP phones connect together over an encrypted channel. Since the two phones do not support encryption, I used Asterisk to provide the encrypted channel over the Internet.

Installing Asterisk

First of all, each VoIP phone is in a different physical location and so I installed an Asterisk server in each house.

One of the servers is a Debian stretch machine and the other runs Ubuntu 18.04 (bionic). Regardless, I used a fairly standard configuration and simply installed the asterisk package on both machines:

apt install asterisk

SIP phones

The two phones, both Snom 300s, connect to their local Asterisk server on its local IP address and use the same details that I have put in /etc/asterisk/sip.conf:

[1000]
type=friend
qualify=yes
secret=password1
encryption=no
context=internal
host=dynamic
nat=no
canreinvite=yes
mailbox=1000@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

Dialplan and voicemail

The extension number above (1000) maps to the following configuration blurb in /etc/asterisk/extensions.conf:

[home]
exten => 1000,1,Dial(SIP/1000,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@mailboxes,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@mailboxes,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

The internal context maps to the following blurb in /etc/asterisk/extensions.conf:

[internal]
include => home
include => iax2users
exten => 707,1,VoiceMailMain(1000@mailboxes)

and 1000@mailboxes maps to the following entry in /etc/asterisk/voicemail.conf:

[mailboxes]
1000 => 1234,home,person@email.com

(with 1234 being the voicemail PIN).
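
After editing these files, the new configuration can be loaded from the Asterisk CLI without a full restart (a sketch; exact module names can vary between Asterisk versions):

asterisk -rx "sip reload"
asterisk -rx "dialplan reload"
asterisk -rx "module reload app_voicemail.so"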

Encrypted IAX links

In order to create a virtual link between the two servers using the IAX protocol, I created user credentials on each server in /etc/asterisk/iax.conf:

[iaxuser]
type=user
auth=md5
secret=password2
context=iax2users
allow=g722
allow=speex
encryption=aes128
trunk=no

Then I created an entry for the other server in the same file:

[server2]
type=peer
host=server2.dyn.fmarier.org
auth=md5
secret=password2
username=iaxuser
allow=g722
allow=speex
encryption=yes
forceencrypt=yes
trunk=no
qualify=yes

The second machine contains the same configuration with the exception of the server name (server1 instead of server2) and hostname (server1.dyn.fmarier.org instead of server2.dyn.fmarier.org).
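
Once both sides are configured, you can reload chan_iax2 and verify that each server sees the other peer as reachable:

asterisk -rx "iax2 reload"
asterisk -rx "iax2 show peers"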

Speed dial for the other phone

Finally, to allow each phone to ring one another by dialing 2000, I put the following in /etc/asterisk/extensions.conf:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Francois Marier <2000>)
exten => 2000,2,Dial(IAX2/server1/1000)

and of course a similar blurb on the other machine:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Other Person <2000>)
exten => 2000,2,Dial(IAX2/server2/1000)

Firewall rules

Since we are using the IAX protocol instead of SIP, there is only one port to open in /etc/network/iptables.up.rules for the remote server:

# IAX2 protocol
-A INPUT -s x.x.x.x/y -p udp --dport 4569 -j ACCEPT

where x.x.x.x/y is the IP range allocated to the ISP that the other machine is behind.

If you want to restrict traffic on the local network as well, then these ports need to be open for the SIP phone to be able to connect to its local server:

# VoIP phones (internal)
-A INPUT -s 192.168.1.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.1.3/32 -p udp --dport 10000:20000 -j ACCEPT

where 192.168.1.3 is the static IP address allocated to the SIP phone.

Fedora 29 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Debian stretch and jessie, here is how I was able to create a Fedora 29 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Create the container

Once that's in place, you can finally create the Fedora 29 container:

lxc-create -n fedora29 -t download -- -d fedora -r 29 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n fedora29 -F

In another terminal, set a password for the root user:

lxc-attach -n fedora29 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

dnf install openssh-server vim sudo man

and then create an unprivileged user with sudo access:

adduser francois -G wheel
passwd francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

Enabling all necessary locales

To ensure that you have all available locales and don't see ugly perl warnings such as:

perl: warning: Setting locale failed.
perl: warning: Falling back to the standard locale ("C").

install the appropriate language packs:

dnf install langpacks-en.noarch
dnf reinstall dnf
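
You can confirm that the locales are now available with:

locale -a | grep -i en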

Erasing Persistent Storage Securely on Linux

Here are some notes on how to securely delete computer data in a way that makes it impractical for anybody to recover that data. This is an important thing to do before giving away (or throwing away) old disks.

Ideally though, it's better not to have to rely on secure erasure, and to use full-disk encryption right from the start, for example using LUKS. That way, if the secure deletion fails for whatever reason or can't be performed (e.g. the drive is dead), it's not a big deal.

Rotating hard drives

With ATA or SCSI hard drives, DBAN seems to be the ideal solution.

  1. Burn it to a CD,
  2. boot with it,
  3. and follow the instructions.

Note that you should disconnect any drives you don't want to erase before booting with that CD.

This is probably the most trustworthy method of wiping since it uses free and open source software to write to each sector of the drive several times. The methods that follow rely on proprietary software built into the firmware of the devices and so you have to trust that it is implemented properly and not backdoored.

ATA / SATA solid-state drives

Due to the nature of solid-state storage (i.e. the lifetime number of writes is limited), it's not a good idea to use DBAN for those. Instead, we must rely on the vendor's implementation of ATA Secure Erase.
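
Before issuing the commands below, check that the drive actually supports Secure Erase and is not in a "frozen" state (suspending and resuming the machine sometimes helps with the latter):

hdparm -I /dev/sdX | grep -A8 Security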

First, set a password on the drive:

hdparm --user-master u --security-set-pass p /dev/sdX

and then issue a Secure Erase command:

hdparm --user-master u --security-erase-enhanced p /dev/sdX

NVMe solid-state drives

For SSDs using an NVMe connector, simply request a User Data Erase:

nvme format -s1 /dev/nvme0n1
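
The nvme tool comes from the nvme-cli package:

apt install nvme-cli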

Restricting outgoing HTTP traffic in a web application using a squid proxy

I recently had to fix a Server-Side Request Forgery bug in Libravatar's OpenID support. In addition to enabling authentication on internal services whenever possible, I also forced all outgoing network requests from the Django web-application to go through a restrictive egress proxy.

OpenID logins are prone to SSRF

Server-Side Request Forgeries are vulnerabilities which allow attackers to issue arbitrary GET requests on the server side. Unlike a Cross-Site Request Forgery, SSRF requests do not include user credentials (e.g. cookies). On the other hand, since these requests are done by the server, they typically originate from inside the firewall.

This allows attackers to target internal resources and issue arbitrary GET requests to them. One could use this to leak information, especially when error reports include the request payload, tamper with the state of internal services or portscan an internal network.

OpenID 1.x logins are prone to these vulnerabilities because of the way they are initiated:

  1. Users visit a site's login page.
  2. They enter their OpenID URL in a text field.
  3. The server fetches the given URL to discover the OpenID endpoints.
  4. The server redirects the user to their OpenID provider to continue the rest of the login flow.

The third step is the potentially problematic one since it requires a server-side fetch.

Filtering URLs in the application is not enough

At first, I thought I would filter out undesirable URLs inside the application:

  • hostnames like localhost, 127.0.0.1 or ::1
  • non-HTTP schemes like file or gopher
  • non-standard ports like 5432 or 11211

However, this filtering is very easy to bypass:

  1. Add a hostname in your DNS zone which resolves to 127.0.0.1.
  2. Set up a redirect to a blacklisted URL such as file:///etc/passwd.

Applying the filter on the original URL is clearly not enough.

Install and configure a Squid proxy

In order to fully restrict outgoing OpenID requests from the web application, I used a Squid HTTP proxy.

First, install the package:

apt install squid3

and set the following in /etc/squid3/squid.conf:

acl to_localnet dst 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl to_localnet dst 10.0.0.0/8            # RFC 1918 local private network (LAN)
acl to_localnet dst 100.64.0.0/10         # RFC 6598 shared address space (CGN)
acl to_localnet dst 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
acl to_localnet dst 172.16.0.0/12         # RFC 1918 local private network (LAN)
acl to_localnet dst 192.168.0.0/16        # RFC 1918 local private network (LAN)
acl to_localnet dst fc00::/7              # RFC 4193 local private network range
acl to_localnet dst fe80::/10             # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny manager
http_access deny to_localhost
http_access deny to_localnet
http_access allow localhost
http_access deny all

http_port 127.0.0.1:3128

Ideally, I would like to use a whitelist approach to restrict requests to a small set of valid URLs, but in the case of OpenID, the set of valid URLs is not fixed. Therefore the only workable approach is a blacklist. The above snippet whitelists port numbers (80 and 443) and blacklists requests to localhost (a built-in squid acl variable which resolves to 127.0.0.1 and ::1) as well as known local IP ranges.
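
After editing the configuration, check it for syntax errors and restart the daemon (depending on your distro release, the binary and service may be called squid instead of squid3):

squid3 -k parse
systemctl restart squid3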

Expose the proxy to Django in the WSGI configuration

In order to force all outgoing requests from Django to go through the proxy, I put the following in my WSGI application (/etc/libravatar/django.wsgi):

os.environ['ftp_proxy'] = "http://127.0.0.1:3128"
os.environ['http_proxy'] = "http://127.0.0.1:3128"
os.environ['https_proxy'] = "http://127.0.0.1:3128"

The whole thing seemed to work well in my limited testing. There is however a bug in urllib2 with proxying HTTPS URLs that include a port number, and there is an open issue in python-openid around proxies and OpenID.
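
One way to sanity-check the egress filtering is to send a few requests through the proxy manually (the internal address here is just an example):

# should be denied by squid with an HTTP 403
http_proxy=http://127.0.0.1:3128 curl -sI http://192.168.1.1/

# should succeed since port 80 is whitelisted
http_proxy=http://127.0.0.1:3128 curl -sI http://example.com/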