In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.
The only requirement is the ThinkPad ACPI kernel module, packaged in Debian, which generates the ibm/hotkey events we will listen for.
Hooking into the events
To run the monitor script on dock and undock, add the following two event handlers:
event=ibm/hotkey LEN0068:00 00000080 00004010 action=su francois -c "/home/francois/bin/external-monitor dock"
event=ibm/hotkey LEN0068:00 00000080 00004011 action=su francois -c "/home/francois/bin/external-monitor undock"
then restart udev:
sudo service udev restart
Finding the right events
To make sure the events are the right ones, lift them off of:
and ensure that your script is actually running by adding:
logger "ACPI event: $*"
at the beginning of it and then looking in /var/log/syslog for lines like these:
logger: external-monitor undock
logger: external-monitor dock
If that doesn't work for some reason, try using an ACPI event script like this:
event=ibm/hotkey action=logger %e
to see which event you should hook into.
Using xrandr inside an ACPI event script
Because the script will be running outside of your user session, the xrandr calls must explicitly select the display (-d). This is what my script looks like:

#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1
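Since the same script runs for both dock and undock events, one refinement is to check whether the external output is actually connected before enabling it. Here is a sketch (output names DP2/eDP1 as above; the parsing is factored into a function so it can be exercised without a running X session):

```shell
# is_connected: read xrandr output on stdin and report whether the
# named output is listed as connected.
is_connected() {
    grep -q "^$1 connected"
}

# Intended use inside the ACPI script (assumes display :0.0 as above):
#   if xrandr -d :0.0 | is_connected DP2; then
#       xrandr -d :0.0 --output DP2 --auto --left-of eDP1
#   else
#       xrandr -d :0.0 --output DP2 --off
#   fi
```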
Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).
The packages you need on both the client and the server are:
Test the scanner locally
Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:

scanimage -L
This should give you output similar to this:
device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
If that doesn't work, make sure that the scanner is actually detected by the USB stack:
$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.
and that its USB ID shows up in the SANE backend it needs:
$ grep 190f /etc/sane.d/genesys.conf
usb 0x04a9 0x190f
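The USB-ID check can be wrapped in a small helper, which is handy when trying several backends. This is a sketch (the function name is mine, not part of SANE):

```shell
# has_usb_id: check whether a SANE backend config file lists a given
# USB vendor/product pair (hex digits without the 0x prefix).
has_usb_id() {
    conf=$1; vendor=$2; product=$3
    grep -qi "^usb[[:space:]]*0x${vendor}[[:space:]]*0x${product}" "$conf"
}

# e.g.: has_usb_id /etc/sane.d/genesys.conf 04a9 190f && echo "listed"
```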
To do a test scan, simply run:
scanimage > test.ppm
and then take a look at the (greyscale) image it produced (test.ppm).
Configure the server
With the scanner working locally, it's time to expose it to network clients
by adding the client IP addresses to saned's access list:

## Access list
192.168.1.3
and then opening the appropriate port on your firewall (/etc/network/iptables in Debian):
-A INPUT -s 192.168.1.3 -p tcp --dport 6566 -j ACCEPT
Then you need to ensure that the SANE server is running, either by setting the appropriate option if you're using the sysv init system, or by running this command if using systemd:

systemctl enable saned.socket
I actually had to reboot to make saned visible to systemd, so if you still run into these errors:
$ service saned start
Failed to start saned.service: Unit saned.service is masked.
you're probably just one reboot away from getting it to work.
Configure the client
On the client, all you need to do is add the following to SANE's network configuration:

connect_timeout = 60
myserver

where myserver is the hostname or IP address of the server running saned.
Test the scanner remotely
With everything in place, you should be able to see the scanner from the client computer:
$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
and successfully perform a test scan using this command:
scanimage > test.ppm
In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.
Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.
First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following:
-A INPUT -p tcp --dport 10022 -j ACCEPT
Then I created a new CNAME for my server in DNS:
pagekite.fmarier.org. 3600 IN CNAME fmarier.org.
With that in place, I started the pagekite frontend using this command:
pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1
I used this command to connect my laptop to the pagekite frontend:
pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1
Finally, my colleague needed to add the following entry to his ssh client configuration:
Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p
He was then able to ssh into my laptop via that hostname.
Making settings permanent
I was initially quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and make both the pagekite frontend and backend start automatically at boot.
I ended up putting the following in /etc/pagekite.d/20_frontends.rc on my server:

#defaults
isfrontend
rawports=virtual
ports=10022
domain=raw:pagekite.fmarier.org:Password1
as well as removing the following lines from
# Delete this line!
abort_not_configured
before restarting the pagekite daemon using:
service pagekite restart
While the Bluray digital restrictions management system is a lot more crippling than the one preventing users from watching their legally purchased DVDs, it is possible to decode some Bluray discs on Linux using vlc.
First of all, install the required packages as root:
apt install vlc libaacs0 libbluray-bdj libbluray1
mkdir /usr/share/libbluray/
ln -s /usr/share/java/libbluray-0.5.0.jar /usr/share/libbluray/libbluray.jar
The last two lines are there to fix an error you might see on the console when opening a Bluray disc with vlc:
libbluray/bdj/bdj.c:249: libbluray.jar not found.
libbluray/bdj/bdj.c:349: BD-J check: Failed to load libbluray.jar
and is apparently due to a bug in libbluray.
To decode encrypted discs, you also need to download a set of AACS keys:

mkdir ~/.config/aacs
cd ~/.config/aacs
wget http://www.labdv.com/aacs/KEYDB.cfg

This key database makes it possible to decode some discs, but it is still limited in the range of discs it can decode.
The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.
For most Linux-based computers, you can look and change the current regulatory domain using these commands:
iw reg get
iw reg set CA
where CA is the two-letter country code for where the device is located.
On Debian and Ubuntu, you can make this setting permanent by putting the country code in the wireless regulatory configuration.
Finally, to see the list of channels that are available in the current config, use:
iwlist wlan0 frequency
On OpenWrt routers, though, in order to persist your changes you need to use the uci command:
uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless
where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:
uci show wireless
To test that it worked, simply reboot the router and then look at the selected regulatory domain:
iw reg get
Scanning the local wifi environment
Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).
apt-get install memtest86+ smartmontools e2fsprogs
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.
To check memory, I boot into memtest86+ from the grub menu and let it run overnight.
Then I check the hard drives using:
smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX
apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in
/etc/, I use etckeeper
to keep that directory in a git repository and make the following changes to its default configuration:
- turn off daily auto-commits
- turn off auto-commits before package installs
To get more control over the various packages I install, I change the default debconf level to medium:
Since I use vim for all of my configuration file editing, I make it the default editor:
update-alternatives --config editor
and I turn on syntax highlighting and visual beeping globally by adding the following to the global vim configuration:

syntax on
set background=dark
set visualbell
apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.
I also ensure that the locale I use is available on the server by adding it to the list of generated locales:
Other than that, I harden the ssh configuration
and end up with the following settings in my sshd configuration:
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:
KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in logcheck's ssh rules (first line) to get a notification whenever anybody successfully logs into the server.
I also create a new group and add the users that need ssh access to it:
addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions to the root shell's configuration.
apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper mcelog
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I
add a few additional log files to the default list:
/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:
chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to apache's logrotate configuration:
create 644 root adm
I also modify the main logcheck configuration file
Other than that, I enable daily checks and customize a few tiger settings:
Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'
apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub
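Since that perl one-liner rewrites /etc/default/grub in place, it can be rehearsed on a scratch copy first. A sketch:

```shell
# Rehearse the substitution on a scratch file before touching the
# real /etc/default/grub.
tmp=$(mktemp)
echo 'GRUB_CMDLINE_LINUX=""' > "$tmp"
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' "$tmp"
cat "$tmp"   # GRUB_CMDLINE_LINUX=" apparmor=1 security=apparmor"
rm -f "$tmp"
```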
Entropy and timekeeping
apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages.
apt-get install molly-guard safe-rm sl
apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest
These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.
In addition to this, I use the update-notifier-common package along with the following cronjob:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
to send me a notification whenever a kernel update requires a reboot to take effect.
apt-get install renameutils atool iotop sysstat lsof mtr-tiny mc
Most of these tools are configuration-free, except for sysstat, which requires
enabling data collection in
/etc/default/sysstat to be useful.
apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.
I enable these settings:
<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from the default configuration.
I also create a new /etc/apache2/conf.d/servername which contains the ServerName directive.
apt-get install postfix
Configuring mail properly is tricky but the following has worked for me.
In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.
Change the following in the postfix configuration:

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
Set the following aliases:

- set francois as the destination of root emails
- set an external email address for francois

Then run newaliases to update the aliases database.
Create a new cronjob with the following contents:

#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box.
Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup.
To reduce the server's contribution to bufferbloat, I change the default kernel queueing discipline (jessie or later) with a sysctl setting.
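The usual jessie-era approach (an assumption — the exact file isn't shown in the post) is a sysctl entry selecting fq_codel, e.g. in /etc/sysctl.conf:

```
net.core.default_qdisc=fq_codel
```

After adding it, sysctl -p (or a reboot) applies the change.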
I use my Linode VPS as a VPN endpoint for my laptop when I'm using untrusted networks and I wanted to do the same on my Android 5 (Lollipop) phone.
It turns out that it's quite easy to do (doesn't require rooting your phone) and that it works very well.
In the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:
./build-key nexus6 # "nexus6" as Name, no password
and then copy the following files onto your phone:
Create a new VPN config
If you configured your server as per my instructions, these are the settings you'll need to use on your phone:
- LZO Compression:
- CA Certificate:
- Client Certificate:
- Client Certificate Key:
- Server address:
- Custom Options:
- Expect TLS server certificate:
- Certificate hostname check:
- Remote certificate subject:
- Use TLS Authentication:
- TLS Auth File:
- TLS Direction:
- Encryption cipher:
- Packet authentication:
That's it. Everything else should work with the defaults.
I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.
In my opinion, the first step in starting a new free software project should be to look for a reason not to do it. So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.
It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.
A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.
PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.
You can either:
- add file:///var/cache/planetfilter/planetname.xml to your local feed reader
- serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
- host it on a server somewhere on the Internet.
The software will fetch new posts every hour and overwrite the local copy of each feed.
A basic configuration file looks like this:
[feed]
url = http://planet.debian.org/atom.xml

[blacklist]
There are currently two ways of filtering posts out. The main one is by author name:
[blacklist]
authors =
  Alice Jones
  John Doe
and the other one is by title:
[blacklist]
titles =
  This week in review
  Wednesday meeting for
In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.
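To illustrate the idea (this is not PlanetFilter's actual code), the author filter can be sketched as a small awk program that drops any entry whose author name matches a blacklisted string:

```shell
# filter_author: read an Atom feed on stdin and drop every <entry>
# whose <name> element matches the blacklisted author in $1.
filter_author() {
    awk -v bad="$1" '
        /<entry>/   { buf = $0; inentry = 1; next }
        inentry     { buf = buf "\n" $0 }
        /<\/entry>/ { inentry = 0
                      if (buf !~ "<name>" bad "</name>") print buf
                      next }
        !inentry    { print }
    '
}
```

Usage: filter_author "Alice Jones" < planet.xml > filtered.xml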
Since blog updates happen asynchronously in the background, they can work very well over Tor.
In order to set that up in the Debian version of planetfilter:
- Install the tor and polipo packages.
- Set the following in the polipo configuration:
proxyAddress = "127.0.0.1"
proxyPort = 8008
allowedClients = 127.0.0.1
allowedPorts = 1-65535
proxyName = "localhost"
cacheIsShared = false
socksParentProxy = "localhost:9050"
socksProxyType = socks5
chunkHighMark = 67108864
diskCacheRoot = ""
localDocumentRoot = ""
disableLocalInterface = true
disableConfiguration = true
dnsQueryIPv6 = no
dnsUseGethostbyname = yes
disableVia = true
censoredHeaders = from,accept-language,x-pad,link
censorReferer = maybe
- Tell planetfilter to use the polipo proxy by adding the following to its environment:
export http_proxy="localhost:8008" export https_proxy="localhost:8008"
Bugs and suggestions
The source code is available on repo.or.cz.
I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!
I'm also interested in any suggestions you may have.
If you see errors like these while trying to do garbage collection on a git repository:
$ git gc
warning: reflog of 'refs/heads/synced/master' references pruned commits
warning: reflog of 'refs/heads/annex/direct/master' references pruned commits
warning: reflog of 'refs/heads/git-annex' references pruned commits
warning: reflog of 'refs/heads/master' references pruned commits
warning: reflog of 'HEAD' references pruned commits
error: Could not read a4909371f8d5a38316e140c11a2d127d554373c7
fatal: Failed to traverse parents of commit 334b7d05087ed036c1a3979bc09bcbe9e3897226
error: failed to run repack
then the reflog may be pointing to corrupt entries.
They can be purged by running this:
$ git reflog expire --all --stale-fix
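The expire-then-gc sequence can be exercised safely in a scratch repository first (a demonstration only; the user name and email are placeholders):

```shell
# Demonstrate the reflog expire + gc sequence in a throwaway repo.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C "$repo" reflog expire --all --stale-fix
git -C "$repo" gc --quiet
rm -rf "$repo"
```

After the expire, git gc should run without the reflog warnings.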
The Lenovo support site offers downloadable BIOS updates that can be run either from Windows or from a bootable CD.
Here's how to convert the bootable CD ISO images under Linux in order to update the BIOS from a USB stick.
Checking the BIOS version
Before upgrading your BIOS, you may want to look up which version of the BIOS you are currently running. To do this, install the dmidecode package:
apt-get install dmidecode
or alternatively, look at the following file:
Updating the BIOS using a USB stick
apt-get install genisoimage
and use geteltorito to convert the ISO you got from Lenovo:
geteltorito -o bios.img gluj19us.iso
Insert a USB stick you're willing to erase entirely and then copy the image onto it (replacing sdX with the correct device name, not partition name, for the USB stick):
dd if=bios.img of=/dev/sdX
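Before rebooting, you can double-check the write by comparing the device's first bytes back against the image. A sketch (verify_image is my helper name; it works on any target, device or regular file):

```shell
# verify_image: compare the first <size-of-image> bytes of the
# target against the image itself.
verify_image() {
    img=$1; target=$2
    size=$(stat -c%s "$img")
    cmp -n "$size" "$img" "$target" && echo "write verified"
}

# e.g.: verify_image bios.img /dev/sdX
```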
then restart and boot from the USB stick by pressing Enter, then F12 when you see the Lenovo logo.