Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests

apt-get install memtest86+ smartmontools e2fsprogs

Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.

To check memory, I boot into memtest86+ from the grub menu and let it run overnight.

Then I check the hard drives (keeping in mind that the -w option of badblocks is a destructive write test, so it must be run before putting any data on the drive) using:

smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration

apt-get install etckeeper git sudo vim

To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:

  • turn off daily auto-commits
  • turn off auto-commits before package installs
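
For reference, these two changes correspond to the following settings in that file (variable names as in recent versions of etckeeper):

AVOID_DAILY_AUTOCOMMITS=1
AVOID_COMMIT_BEFORE_INSTALL=1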

To get more control over the various packages I install, I change the default debconf level to medium:

dpkg-reconfigure debconf

Since I use vim for all of my configuration file editing, I make it the default editor:

update-alternatives --config editor

ssh

apt-get install openssh-server mosh fail2ban

Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.

I also ensure that the locale I use is available on the server by adding it to the list of generated locales:

dpkg-reconfigure locales

Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

UsePrivilegeSeparation sandbox

AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no

AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser

or the following for wheezy servers:

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256

On those servers where I need duplicity/paramiko to work, I also add the following:

KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1

Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.
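
Assuming the "Accepted" rule is indeed still the first line of that file, this is a quick way to do it (check the file before running it):

sed -i '1d' /etc/logcheck/ignore.d.server/ssh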

I also create a new group and add the users that need ssh access to it:

addgroup sshuser
adduser francois sshuser

and add a timeout for root sessions by putting this in /root/.bash_profile:

TMOUT=600

Security checks

apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper mcelog
apt-get remove john john-data rpcbind tripwire

Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log

while ensuring that the apache logfiles are readable by logcheck:

chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*

and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:

create 644 root adm

I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):

INTRO=0
FQDN=0

Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:

Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'
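
As for debsums, enabling daily checks comes down to one variable in /etc/default/debsums:

CRON_CHECK=daily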

General hardening

apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra

While the harden packages are configuration-free, AppArmor must be manually enabled:

perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub
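
After the next reboot, you can check that AppArmor is active and see which profiles are loaded:

aa-status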

Entropy and timekeeping

apt-get install haveged rng-tools ntp

To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.
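
Adding the module is a one-liner (assuming of course that the machine has a TPM chip):

echo tpm_rng >> /etc/modules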

Preventing mistakes

apt-get install molly-guard safe-rm sl

The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates

apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest

These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.

In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

to send me a notification whenever a kernel update requires a reboot to take effect.
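
Note that cron will only run the script if it's marked as executable:

chmod +x /etc/cron.daily/reboot-required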

Handy utilities

apt-get install renameutils atool iotop sysstat lsof mtr-tiny mc

Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.
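
That amounts to flipping a single variable in /etc/default/sysstat:

ENABLED="true"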

Apache configuration

apt-get install apache2-mpm-event

While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.

I enable these in /etc/apache2/conf.d/security:

<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off

and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.

I also create a new /etc/apache2/conf.d/servername which contains:

ServerName machine_hostname

Mail

apt-get install postfix

Configuring mail properly is tricky but the following has worked for me.

In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.

Change the following in /etc/postfix/main.cf:

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
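
then reload postfix so the changes take effect:

postfix reload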

Set the following aliases in /etc/aliases:

  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails

before running newaliases to update the aliases database.
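
A minimal /etc/aliases matching the above therefore looks something like this (the external address is of course a placeholder):

root: francois
francois: francois@example.com
www-data: root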

Create a new cronjob (/etc/cron.hourly/checkmail):

#!/bin/sh
ls /var/mail

to ensure that email doesn't accumulate unmonitored on this box.

Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.
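
A quick non-interactive test (assuming a mail command from bsd-mailx or similar is installed) looks like this:

echo "This is a test" | mail -s "Test" root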

Network tuning

To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:

net.core.default_qdisc=fq_codel
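
To apply the new setting without waiting for a reboot:

sysctl -p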

Using OpenVPN on Android Lollipop

I use my Linode VPS as a VPN endpoint for my laptop when I'm using untrusted networks and I wanted to do the same on my Android 5 (Lollipop) phone.

It turns out that it's quite easy to do (doesn't require rooting your phone) and that it works very well.

Install OpenVPN

Once you have installed and configured OpenVPN on the server, you need to install the OpenVPN app for Android (available both on F-Droid and Google Play).

From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:

./build-key nexus6        # "nexus6" as Name, no password

and then copy the following files onto your phone:

  • ca.crt
  • nexus6.crt
  • nexus6.key
  • ta.key

Create a new VPN config

If you configured your server as per my instructions, these are the settings you'll need to use on your phone:

Basic:

  • LZO Compression: YES
  • Type: Certificates
  • CA Certificate: ca.crt
  • Client Certificate: nexus6.crt
  • Client Certificate Key: nexus6.key

Server list:

  • Server address: hafnarfjordur.fmarier.org
  • Port: 1194
  • Protocol: UDP
  • Custom Options: NO

Authentication/Encryption:

  • Expect TLS server certificate: YES
  • Certificate hostname check: YES
  • Remote certificate subject: server
  • Use TLS Authentication: YES
  • TLS Auth File: ta.key
  • TLS Direction: 1
  • Encryption cipher: AES-256-CBC
  • Packet authentication: SHA384 (not SHA-384)

That's it. Everything else should work with the defaults.

Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options

In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.

It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.

A better option that other people have suggested is to skip the planet feed entirely and instead subscribe to each author's feed separately, pruning them as you go. Unfortunately, this whitelist approach is high-maintenance since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter

PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.

If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/.

You can either:

  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.

The software will fetch new posts every hour and overwrite the local copy of each feed.

A basic configuration file looks like this:

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]

Filters

There are currently two ways of filtering posts out. The main one is by author name:

[blacklist]
authors =
  Alice Jones
  John Doe

and the other one is by title:

[blacklist]
titles =
  This week in review
  Wednesday meeting for

In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.

Tor support

Since feeds are fetched asynchronously in the background, the added latency of going through Tor is not a problem and it works very well.

In order to set that up in the Debian version of planetfilter:

  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:

     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions

The source code is available on repo.or.cz.

I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!

I'm also interested in any suggestions you may have.

Error while running "git gc"

If you see errors like these while trying to do garbage collection on a git repository:

$ git gc
warning: reflog of 'refs/heads/synced/master' references pruned commits
warning: reflog of 'refs/heads/annex/direct/master' references pruned commits
warning: reflog of 'refs/heads/git-annex' references pruned commits
warning: reflog of 'refs/heads/master' references pruned commits
warning: reflog of 'HEAD' references pruned commits
error: Could not read a4909371f8d5a38316e140c11a2d127d554373c7
fatal: Failed to traverse parents of commit 334b7d05087ed036c1a3979bc09bcbe9e3897226
error: failed to run repack

then the reflog may be pointing to corrupt entries.

They can be purged by running this:

$ git reflog expire --all --stale-fix
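
after which garbage collection should complete cleanly:

$ git gc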

Thanks to Joey Hess for pointing me in the right direction while debugging a git-annex problem.

Upgrading Lenovo ThinkPad BIOS under Linux

The Lenovo support site offers downloadable BIOS updates that can be run either from Windows or from a bootable CD.

Here's how to convert the bootable CD ISO images under Linux in order to update the BIOS from a USB stick.

Checking the BIOS version

Before upgrading your BIOS, you may want to look up which version of the BIOS you are currently running. To do this, install the dmidecode package:

apt-get install dmidecode

then run:

dmidecode

or alternatively, look at the following file:

cat /sys/devices/virtual/dmi/id/bios_version

Updating the BIOS using a USB stick

To update without using a bootable CD, install the genisoimage package:

apt-get install genisoimage

then use geteltorito to convert the ISO you got from Lenovo:

geteltorito -o bios.img gluj19us.iso

Insert a USB stick you're willing to erase entirely and then copy the image onto it (replacing sdX with the correct device name, not partition name, for the USB stick):

dd if=bios.img of=/dev/sdX
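
and flush the write cache so that all blocks actually make it onto the stick before unplugging it:

sync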

then restart and boot from the USB stick by pressing Enter, then F12 when you see the Lenovo logo.

Using unattended-upgrades on Rackspace's Debian and Ubuntu servers

I install the unattended-upgrades package on almost all of my Debian and Ubuntu servers in order to ensure that security updates are automatically applied. It works quite well except that I still need to login manually to upgrade my Rackspace servers whenever a new rackspace-monitoring-agent is released because it comes from a separate repository that's not covered by unattended-upgrades.

It turns out that unattended-upgrades can be configured to automatically upgrade packages outside of the standard security repositories but it's not very well documented and the few relevant answers you can find online are still using the old whitelist syntax.

Initial setup

The first thing to do is to install the package if it's not already done:

apt-get install unattended-upgrades

and to answer yes to the automatic stable update question.

If you don't see the question (because your debconf threshold is too high -- change it with dpkg-reconfigure debconf), you can always trigger the question manually:

dpkg-reconfigure -plow unattended-upgrades

Once you've got that installed, the configuration file you need to look at is /etc/apt/apt.conf.d/50unattended-upgrades.

Whitelist matching criteria

Looking at the unattended-upgrades source code, I found the list of things that can be used to match on in the whitelist:

  • origin (shortcut: o)
  • label (shortcut: l)
  • archive (shortcut: a)
  • suite (which is the same as archive)
  • component (shortcut: c)
  • site (no shortcut)

You can find the value for each of these fields in the appropriate _Release file under /var/lib/apt/lists/.

Note that the value of site is the hostname of the package repository, also present in the first part of these *_Release filenames (stable.packages.cloudmonitoring.rackspace.com in the example below).

In my case, I was looking at the following inside /var/lib/apt/lists/stable.packages.cloudmonitoring.rackspace.com_debian-wheezy-x86%5f64_dists_cloudmonitoring_Release:

Origin: Rackspace
Codename: cloudmonitoring
Date: Fri, 23 Jan 2015 18:58:49 UTC
Architectures: i386 amd64
Components: main
...

which means that, in addition to site, the only things I could match on were origin and component since there are no Suite or Label fields in the Release file.

This is the line I ended up adding to my /etc/apt/apt.conf.d/50unattended-upgrades:

 Unattended-Upgrade::Origins-Pattern {
         // Archive or Suite based matching:
         // Note that this will silently match a different release after
         // migration to the specified archive (e.g. testing becomes the
         // new stable).
 //      "o=Debian,a=stable";
 //      "o=Debian,a=stable-updates";
 //      "o=Debian,a=proposed-updates";
         "origin=Debian,archive=stable,label=Debian-Security";
         "origin=Debian,archive=oldstable,label=Debian-Security";
+        "origin=Rackspace,component=main";
 };

Testing

To ensure that the config is right and that unattended-upgrades will pick up rackspace-monitoring-agent the next time it runs, I used:

unattended-upgrade --dry-run --debug

which should output something like this:

Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ['origin=Debian,archive=stable,label=Debian-Security', 'origin=Debian,archive=oldstable,label=Debian-Security', 'origin=Rackspace,component=main']
Checking: rackspace-monitoring-agent (["<Origin component:'main' archive:'' origin:'Rackspace' label:'' site:'stable.packages.cloudmonitoring.rackspace.com' isTrusted:True>"])
pkgs that look like they should be upgraded: rackspace-monitoring-agent
...
Option --dry-run given, *not* performing real actions
Packages that are upgraded: rackspace-monitoring-agent

Making sure that automatic updates are happening

In order to make sure that all of this is working and that updates are actually happening, I always install apticron on all of the servers I maintain. It runs once a day and emails me a list of packages that need to be updated and it keeps doing that until the system is fully up-to-date.

The only thing missing from this is getting a reminder whenever a package update (usually the kernel) requires a reboot to take effect. That's where the update-notifier-common package comes in.

Because that package will add a hook that will create the /var/run/reboot-required file whenever a kernel update has been installed, all you need to do is create a cronjob like this in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

assuming of course that you are already receiving emails sent to the root user (if not, add the appropriate alias in /etc/aliases and run newaliases).

Making Firefox Hello work with NoScript and RequestPolicy

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology).

If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check

There are a few preferences to check in about:config:

  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false

NoScript

If you use the popular NoScript add-on, you will need to whitelist the following hosts:

  • about:loopconversation
  • hello.firefox.com
  • loop.services.mozilla.com
  • opentok.com
  • tokbox.com

RequestPolicy

If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following origin to destination mappings:

  • about:loopconversation -> opentok.com
  • about:loopconversation -> tokbox.com
  • firefox.com -> mozilla.com
  • firefox.com -> opentok.com
  • firefox.com -> tokbox.com

If you find a more restrictive policy that works, please leave a comment!

Mercurial and Bitbucket workflow for Gecko development

While it sounds like I should really switch to a bookmark-based Mercurial workflow for my Gecko development, I figured that before I do that, I should document how I currently use patch queues and Bitbucket.

Starting work on a new bug

After creating a new bug in Bugzilla, I do the following:

  1. Create a new mozilla-central-mq-BUGNUMBER repo on Bitbucket using the web interface and put https://bugzilla.mozilla.org/show_bug.cgi?id=BUGNUMBER as the Website in the repository settings.
  2. Create a new patch queue: hg qqueue -c BUGNUMBER
  3. Initialize the patch queue: hg init --mq
  4. Make some changes.
  5. Create a new patch: hg qnew -Ue bugBUGNUMBER.patch
  6. Commit the patch to the mq repo: hg commit --mq -m "Initial version"
  7. Push the mq repo to Bitbucket: hg push --mq ssh://hg@bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
  8. Make the above URL the default for pull/push by putting this in .hg/patches-BUGNUMBER/.hg/hgrc:

    [paths]
    default = https://bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
    default-push = ssh://hg@bitbucket.org/fmarier/mozilla-central-mq-BUGNUMBER
    

Working on a bug

I like to preserve the history of the work I did on a patch. So once I've got some meaningful changes to commit to my patch queue repo, I do the following:

  1. Add the changes to the current patch: hg qref
  2. Check that everything looks fine: hg diff --mq
  3. Commit the changes to the mq repo: hg commit --mq
  4. Push the changes to Bitbucket: hg push --mq

Switching between bugs

Since I have one patch queue per bug, I can easily work on more than one bug at a time without having to clone the repository again and work from a different directory.

Here's how I switch between patch queues:

  1. Unapply the current queue's patches: hg qpop -a
  2. Switch to the new queue: hg qqueue BUGNUMBER
  3. Apply all of the new queue's patches: hg qpush -a

Rebasing a patch queue

To rebase my patch onto the latest mozilla-central tip, I do the following:

  1. Unapply patches using hg qpop -a
  2. Update the branch: hg pull -u
  3. Reapply the first patch: hg qpush and resolve any conflicts
  4. Update the patch file in the queue: hg qref
  5. Repeat steps 3 and 4 for each patch.
  6. Commit the changes: hg commit --mq -m "Rebase patch"

Credits

Thanks to Thinker Lee for telling me about qqueue and Chris Pearce for explaining to me how he uses mq repos on Bitbucket.

Of course, feel free to leave a comment if I missed anything useful or if there's an easier way to do any of the above.

Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name, since that is the hostname you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to start znc before launching irssi and then prompt to shut it down after exiting:

#!/bin/bash
ssh irc.example.com "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.
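
That simply means saving the wrapper script somewhere in your PATH and making it executable (the exact location here is an assumption):

chmod +x ~/bin/irc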

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console first (starting the container without the -d option attaches it to your terminal):

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.
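
If that directory doesn't exist yet, something like this (run inside the container, with your real public key in place of the placeholder) will do:

mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "ssh-ed25519 AAAA... user@host" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys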

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.