<h1>Feeding the Cloud: pages tagged "backup"</h1>
<h1><a href="https://feeding.cloud.geek.nz/posts/crashplan-10-wont-start-on-ubuntu-derivatives/">CrashPlan 10 won't start on Ubuntu derivatives</a></h1>
<p><em>2022-05-19</em></p>
<p>CrashPlan recently updated itself to version 10 on my
<a href="https://pop.system76.com/">Pop!_OS</a> laptop and stopped backing anything up.</p>
<p><img alt="" src="https://feeding.cloud.geek.nz/posts/crashplan-10-wont-start-on-ubuntu-derivatives/crashplan-backup-alert-email.png" /></p>
<p>When I tried to start the client, I was greeted with this error message:</p>
<blockquote><p>Code42 cannot connect to its backend service.</p></blockquote>
<p><img alt="" src="https://feeding.cloud.geek.nz/posts/crashplan-10-wont-start-on-ubuntu-derivatives/crashplan-error-message.png" /></p>
<h2 id="Digging_through_log_files">Digging through log files</h2>
<p>In <code>/usr/local/crashplan/log/service.log.0</code>, I found the reason why the
service didn't start:</p>
<pre><code>[05.18.22 07:40:05.756 ERROR main com.backup42.service.CPService] Error starting up, java.lang.IllegalStateException: Failed to start authorized services.
STACKTRACE:: java.lang.IllegalStateException: Failed to start authorized services.
at com.backup42.service.ClientServiceManager.authorize(ClientServiceManager.java:552)
at com.backup42.service.CPService.startServices(CPService.java:2467)
at com.backup42.service.CPService.start(CPService.java:562)
at com.backup42.service.CPService.main(CPService.java:1574)
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.UnsatisfiedLinkError: Unable to load library 'uaw':
libuaw.so: cannot open shared object file: No such file or directory
libuaw.so: cannot open shared object file: No such file or directory
Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(UserActivityWatcherServiceImpl.java:67)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.class(UserActivityWatcherServiceImpl.java:23)
while locating com.code42.service.useractivity.UserActivityWatcherServiceImpl
at com.code42.service.AbstractAuthorizedModule.addServiceWithoutBinding(AbstractAuthorizedModule.java:77)
while locating com.code42.service.IAuthorizedService annotated with @com.google.inject.internal.Element(setName=,uniqueId=34, type=MULTIBINDER, keyType=)
while locating java.util.Set<com.code42.service.IAuthorizedService>
1 error
at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1097)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1126)
at com.backup42.service.ClientServiceManager.getServices(ClientServiceManager.java:679)
at com.backup42.service.ClientServiceManager.authorize(ClientServiceManager.java:513)
... 3 more
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'uaw':
libuaw.so: cannot open shared object file: No such file or directory
libuaw.so: cannot open shared object file: No such file or directory
Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:301)
at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:461)
at com.sun.jna.Library$Handler.<init>(Library.java:192)
at com.sun.jna.Native.load(Native.java:596)
at com.sun.jna.Native.load(Native.java:570)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl.<init>(UserActivityWatcherServiceImpl.java:72)
at com.code42.service.useractivity.UserActivityWatcherServiceImpl$$FastClassByGuice$$4bcc96f8.newInstance(<generated>)
at com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:114)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:306)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.code42.service.AuthorizedScope$1.get(AuthorizedScope.java:38)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:62)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.code42.service.AuthorizedScope$1.get(AuthorizedScope.java:38)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:198)
at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:151)
at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094)
... 6 more
Suppressed: java.lang.UnsatisfiedLinkError: libuaw.so: cannot open shared object file: No such file or directory
at com.sun.jna.Native.open(Native Method)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:191)
... 28 more
Suppressed: java.lang.UnsatisfiedLinkError: libuaw.so: cannot open shared object file: No such file or directory
at com.sun.jna.Native.open(Native Method)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:204)
... 28 more
Suppressed: java.io.IOException: Native library (linux-x86-64/libuaw.so) not found in resource path (lib/com.backup42.desktop.jar:lang)
at com.sun.jna.Native.extractFromResourcePath(Native.java:1119)
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:275)
... 28 more
[05.18.22 07:40:05.756 INFO main 42.service.history.HistoryLogger] HISTORY:: Code42 stopped, version 10.0.0
[05.18.22 07:40:05.756 INFO main com.backup42.service.CPService] ***** STOPPING *****
[05.18.22 07:40:05.757 INFO Thread-0 com.backup42.service.CPService] ShutdownHook...calling cleanup
[05.18.22 07:40:05.759 INFO STOPPING com.backup42.service.CPService] SHUTDOWN:: Stopping service...
</code></pre>
<p>This suggests that a new library dependency (<code>uaw</code>) didn't get installed during the
last upgrade.</p>
<p>Looking at the upgrade log (<code>/usr/local/crashplan/log/upgrade..log</code>), I
found that it detected my operating system as "pop 20":</p>
<pre><code>Fri May 13 07:39:51 PDT 2022: Info : Resolve Native Libraries for pop 20...
Fri May 13 07:39:51 PDT 2022: Info : Keep common libs
Fri May 13 07:39:51 PDT 2022: Info : Keep pop 20 libs
</code></pre>
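<p>The installer derives both of these values from <code>/etc/os-release</code>, which on a
Pop!_OS 20.04 system contains something like this:</p>
<pre><code>$ grep -E "^(ID|VERSION_ID)=" /etc/os-release
ID=pop
VERSION_ID="20.04"
</code></pre>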
<p>I unpacked the <a href="https://console.us2.crashplanpro.com/app/#/console/app-downloads">official installer</a> (login required):</p>
<pre><code>$ tar zxf CrashPlanSmb_10.0.0_15252000061000_303_Linux.tgz
$ cd code42-install
$ gzip -dc CrashPlanSmb_10.0.0.cpi | cpio -i
</code></pre>
<p>and found that <code>libuaw.so</code> is only shipped for 4 supported platforms
(<code>rhel7</code>, <code>rhel8</code>, <code>ubuntu18</code> and <code>ubuntu20</code>):</p>
<pre><code>$ find nlib/
nlib/
nlib/common
nlib/common/libfreeblpriv3.chk
nlib/common/libsoftokn3.chk
nlib/common/libsmime3.so
nlib/common/libnss3.so
nlib/common/libplc4.so
nlib/common/libssl3.so
nlib/common/libsoftokn3.so
nlib/common/libnssdbm3.so
nlib/common/libjss4.so
nlib/common/libleveldb.so
nlib/common/libfreeblpriv3.so
nlib/common/libfreebl3.chk
nlib/common/libplds4.so
nlib/common/libnssutil3.so
nlib/common/libnspr4.so
nlib/common/libfreebl3.so
nlib/common/libc42core.so
nlib/common/libc42archive64.so
nlib/common/libnssdbm3.chk
nlib/rhel7
nlib/rhel7/libuaw.so
nlib/rhel8
nlib/rhel8/libuaw.so
nlib/ubuntu18
nlib/ubuntu18/libuaw.so
nlib/ubuntu20
nlib/ubuntu20/libuaw.so
</code></pre>
<h2 id="Fixing_the_installation_script">Fixing the installation script</h2>
<p>Others have fixed this problem by <a href="https://old.reddit.com/r/Crashplan/comments/upjjk3/fix_v10_fix_login_issue_missing_libuawso/">copying the files
manually</a>,
but since Pop!_OS is based on Ubuntu, I decided to fix it by forcing the
OS to be detected as "ubuntu" in the installer.</p>
<p>I simply edited <code>install.sh</code> like this:</p>
<pre><code>--- install.sh.orig 2022-05-18 16:47:52.176199965 -0700
+++ install.sh 2022-05-18 16:57:26.231723044 -0700
@@ -15,7 +15,7 @@
readonly IS_ROOT=$([[ $(id -u) -eq 0 ]] && echo true || echo false)
readonly REQ_CMDS="chmod chown cp cpio cut grep gzip hash id ls mkdir mv sed"
readonly APP_VERSION_FILE="c42.version.properties"
-readonly OS_NAME=$(grep "^ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | tr '[:upper:]' '[:lower:]')
+readonly OS_NAME=ubuntu
readonly OS_VERSION=$(grep "^VERSION_ID=" /etc/os-release | cut -d = -f 2 | tr -d \" | cut -d . -f1)
SCRIPT_DIR="${0:0:${#0} - ${#SCRIPT_NAME}}"
</code></pre>
<p>and then ran that install script as root again to <em>upgrade</em> my existing
installation.</p>
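<p>If you'd rather not edit the file by hand, the same change can be made with a single
<code>sed</code> command before running the installer (equivalent to the diff above):</p>
<pre><code>sed -i 's/^readonly OS_NAME=.*/readonly OS_NAME=ubuntu/' install.sh
</code></pre>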
<h1><a href="https://feeding.cloud.geek.nz/posts/backing-up-to-gnubee2/">Backing up to a GnuBee PC 2</a></h1>
<p><em>2020-05-03 (updated 2024-01-14)</em></p>
<p>After <a href="https://feeding.cloud.geek.nz/posts/installing-debian-buster-on-gnubee2/">installing Debian buster on my
GnuBee</a>,
I set it up for receiving backups from my other computers.</p>
<h2 id="Software_setup">Software setup</h2>
<p>I started by configuring it <a href="https://feeding.cloud.geek.nz/posts/usual-server-setup/">like a typical
server</a> but without
a few packages that either take a lot of memory or CPU:</p>
<ul>
<li><a href="https://packages.debian.org/buster/fail2ban">fail2ban</a></li>
<li><a href="https://packages.debian.org/buster/rkhunter">rkhunter</a></li>
<li><a href="https://packages.debian.org/buster/sysstat">sysstat</a></li>
</ul>
<p>I changed the default hostname:</p>
<ul>
<li><code>/etc/hostname</code>: <code>foobar</code></li>
<li><code>/etc/mailname</code>: <code>foobar.example.com</code></li>
<li><code>/etc/hosts</code>: <code>127.0.0.1 foobar.example.com foobar localhost</code></li>
</ul>
<p>and then installed the <code>avahi-daemon</code> package to be able to reach this box
using <code>foobar.local</code>.</p>
<p>I noticed the presence of a <a href="https://github.com/neilbrown/gnubee-tools/issues/23">world-writable
directory</a> and so I
tightened the security of some of the default mount points by putting the following
in <code>/etc/rc.local</code>:</p>
<pre><code>chmod 755 /etc/network
exit 0
</code></pre>
<h2 id="Hardware_setup">Hardware setup</h2>
<p>My OS drive (<code>/dev/sda</code>) is a small SSD so that the GnuBee can run silently when the
spinning disks aren't needed. To hold the backup data on the other hand, I
got three 4-TB drives which I set up in a
<a href="https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5">RAID-5</a> array.
If the data were valuable, I'd use
<a href="https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_6">RAID-6</a> instead
since it can survive two drives failing at the same time, but since this array
only holds backups, I'd have to lose the original machine at the same
time as two of the three drives, a very unlikely scenario.</p>
<p>I created new GPT partition tables on <code>/dev/sdb</code>, <code>/dev/sdc</code> and <code>/dev/sdd</code>,
and used <code>fdisk</code> to create a single partition of <code>type 29</code> (Linux RAID) on
each of them.</p>
<p>Then I created the RAID array:</p>
<pre><code>mdadm /dev/md127 --create -n 3 --level=raid5 /dev/sdb1 /dev/sdc1 /dev/sdd1
</code></pre>
<p>and waited more than 24 hours for that operation to finish. Next, I
formatted the array:</p>
<pre><code>mkfs.ext4 -m 0 /dev/md127
</code></pre>
<p>and added the following to <code>/etc/fstab</code>:</p>
<pre><code>/dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2
</code></pre>
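<p>Although arrays are normally auto-assembled from their on-disk metadata, recording the
array definition keeps the device name stable across reboots; a typical way to do this on
Debian is:</p>
<pre><code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
</code></pre>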
<h3 id="Keeping_a_copy_of_the_root_partition">Keeping a copy of the root partition</h3>
<p>In order to survive a failing SSD drive, I could have bought a second SSD
and gone for a
<a href="https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1">RAID-1</a> setup.
Instead, I went for a cheaper option, a <a href="https://feeding.cloud.geek.nz/posts/poor-mans-raid1-between-ssd-and-hard-drive/">poor man's
RAID-1</a>:
if the SSD dies, I will have to reinstall the machine, but the reinstall will be
very quick and I won't lose any of my configuration.</p>
<p>The way that it works is that I periodically sync the contents of the root
partition onto the RAID-5 array using a cronjob in <code>/etc/cron.d/hdd-sync</code>:</p>
<pre><code>0 10 * * * root /usr/local/sbin/ssd_root_backup
</code></pre>
<p>which runs the <code>/usr/local/sbin/ssd_root_backup</code> script:</p>
<pre><code>#!/bin/sh
nocache nice ionice -c3 rsync -aHx --delete --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/media/* --exclude=/var/tmp/* /* /mnt/data/root/
</code></pre>
<h3 id="Drive_spin_down">Drive spin down</h3>
<p>To reduce unnecessary noise and power consumption, I also installed
<a href="https://sourceforge.net/projects/hdparm/">hdparm</a>:</p>
<pre><code>apt install hdparm
</code></pre>
<p>and configured all of the spinning drives for maximum power saving and to spin down
after 2 minutes of idle time (the <code>spindown_time</code> value is expressed in units of
5 seconds, so 24 means 120 seconds) by putting the following in <code>/etc/hdparm.conf</code>:</p>
<pre><code>/dev/sdb {
apm = 1
spindown_time = 24
}
/dev/sdc {
apm = 1
spindown_time = 24
}
/dev/sdd {
apm = 1
spindown_time = 24
}
</code></pre>
<p>and then reloaded the configuration:</p>
<pre><code>/usr/lib/pm-utils/power.d/95hdparm-apm resume
</code></pre>
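<p>These settings can also be applied to a drive immediately, without waiting for the
configuration to be reloaded, by calling <code>hdparm</code> directly:</p>
<pre><code>hdparm -B 1 -S 24 /dev/sdb
</code></pre>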
<h3 id="Monitoring_drive_health">Monitoring drive health</h3>
<p>Finally I set up <a href="https://www.smartmontools.org/">smartmontools</a> to monitor all
four drives and to schedule a short self-test every day at 2am as well as a long self-test
every Saturday at 3am, by putting the following in <code>/etc/smartd.conf</code>:</p>
<pre><code>/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdc -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03)
</code></pre>
<p>and restarting the daemon:</p>
<pre><code>systemctl restart smartd.service
</code></pre>
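<p>The same package ships <code>smartctl</code>, which can be used to query a drive's SMART
attributes and self-test results on demand:</p>
<pre><code>smartctl -a /dev/sda
</code></pre>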
<p><a href="https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/">Some of the errors</a> reported by these tools are good predictors of imminent drive failure.</p>
<h2 id="Backup_setup">Backup setup</h2>
<p>I started by using <a href="http://duplicity.nongnu.org/">duplicity</a> since I have
been using that tool for many years, but a 190GB backup took around 15 hours
on the GnuBee with gigabit ethernet.</p>
<p>After a <a href="https://stumbles.id.au/">friend</a> suggested it, I took a look at
<a href="https://restic.net">restic</a> and I have to say that I am impressed. The
same backup finished in about half the time.</p>
<h3 id="User_and_ssh_setup">User and ssh setup</h3>
<p>After <a href="https://feeding.cloud.geek.nz/posts/hardening-ssh-servers/">hardening the ssh
setup</a> as I
usually do, I created a user account for each machine that needs to back up onto
the GnuBee:</p>
<pre><code>adduser machine1
adduser machine1 sshuser
adduser machine1 sftponly
chsh machine1 -s /bin/false
</code></pre>
<p>and then matching directories under <code>/mnt/data/home/</code>:</p>
<pre><code>mkdir /mnt/data/home/machine1
chown machine1:machine1 /mnt/data/home/machine1
chmod 700 /mnt/data/home/machine1
</code></pre>
<p>Then I created a custom <strong>passwordless</strong> ssh key for each machine:</p>
<pre><code>ssh-keygen -f /root/.ssh/foobar_backups -t ed25519
</code></pre>
<p>and placed it in <code>/home/machine1/.ssh/authorized_keys</code> on the GnuBee.</p>
<p>Then I added the <code>restrict</code> option in front of that key, which disables port
forwarding, agent forwarding, X11 forwarding and PTY allocation, so that it looked like:</p>
<pre><code>restrict ssh-ed25519 AAAAC3N... root@machine1
</code></pre>
<p>On each machine, I added the following to <code>/root/.ssh/config</code>:</p>
<pre><code>Host foobar.local
User machine1
Compression no
Ciphers aes128-ctr
IdentityFile /root/backup/foobar_backups
IdentitiesOnly yes
ServerAliveInterval 60
ServerAliveCountMax 240
</code></pre>
<p>The reason for setting the ssh cipher and disabling compression is to <a href="https://gist.github.com/KartikTalwar/4393116">speed
up the ssh connection</a> as much
as possible given that the <a href="https://groups.google.com/d/msg/gnubee/5_nKjgmKSoY/a0ER5fEcBAAJ">GnuBee has very limited RAM
bandwidth</a>.</p>
<p>Another performance-related change I made on the GnuBee was switching to the <a href="https://serverfault.com/questions/660160/openssh-difference-between-internal-sftp-and-sftp-server#660325">internal sftp
server</a>, which runs inside the sshd process instead of spawning a separate
<code>sftp-server</code> binary, by putting the following in <code>/etc/ssh/sshd_config</code>:</p>
<pre><code>Subsystem sftp internal-sftp
</code></pre>
<h3 id="Restic_script">Restic script</h3>
<p>After reading through the excellent <a href="https://restic.readthedocs.io/en/stable/">restic
documentation</a>, I wrote the
following backup script, based on my <a href="https://sources.debian.org/src/duplicity/0.8.11.1612-1/debian/examples/system-backup/">old duplicity
script</a>,
to reuse on all of my computers:</p>
<pre><code>#!/bin/sh

# Configure for each host
PASSWORD="XXXX"  # use `pwgen -s 64` to generate a good random password
BACKUP_HOME="/root/backup"
REMOTE_URL="sftp:foobar.local:"
RETENTION_POLICY="--keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 2"

# Internal variables
SSH_IDENTITY="IdentityFile=$BACKUP_HOME/foobar_backups"
EXCLUDE_FILE="$BACKUP_HOME/exclude"
PKG_FILE="$BACKUP_HOME/dpkg-selections"
PARTITION_FILE="$BACKUP_HOME/partitions"

# If the list of files has been requested, only do that
if [ "$1" = "--list-current-files" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL ls latest
    exit 0

# Show the list of available snapshots
elif [ "$1" = "--list-snapshots" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL snapshots
    exit 0

# Restore the given file
elif [ "$1" = "--file-to-restore" ]; then
    if [ "$2" = "" ]; then
        echo "You must specify a file to restore"
        exit 2
    fi
    RESTORE_DIR="$(mktemp -d ./restored_XXXXXXXX)"
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL restore latest --target "$RESTORE_DIR" --include "$2" || exit 1
    echo "$2 was restored to $RESTORE_DIR"
    exit 0

# Delete old backups
elif [ "$1" = "--prune" ]; then
    # Expire old backups
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL forget $RETENTION_POLICY
    # Delete files which are no longer necessary (slow)
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL prune
    exit 0

# Unlock the repository
elif [ "$1" = "--unlock" ]; then
    RESTIC_PASSWORD=$PASSWORD restic -r $REMOTE_URL unlock
    exit 0

# Catch invalid arguments
elif [ "$1" != "" ]; then
    echo "Invalid argument: $1"
    exit 1
fi

# Check the integrity of existing backups
CHECK_CACHE_DIR="$(mktemp -d /var/tmp/restic-check-XXXXXXXX)"
RESTIC_PASSWORD=$PASSWORD restic --quiet --cache-dir=$CHECK_CACHE_DIR -r $REMOTE_URL check || exit 1
rmdir "$CHECK_CACHE_DIR"

# Dump the list of installed Debian packages
dpkg --get-selections > $PKG_FILE

# Dump the partition tables from the hard drives
/sbin/fdisk -l /dev/sda > $PARTITION_FILE
/sbin/fdisk -l /dev/sdb >> $PARTITION_FILE

# Do the actual backup
RESTIC_PASSWORD=$PASSWORD restic --quiet --cleanup-cache -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE
</code></pre>
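<p>Once the password and remote URL are filled in, the script can be exercised by hand
before wiring it into cron, for example (using the script name from the cron entries
below):</p>
<pre><code># Show the available snapshots
/root/backup/backup-machine1-to-foobar --list-snapshots

# Restore a single file into a temporary directory
/root/backup/backup-machine1-to-foobar --file-to-restore /etc/fstab
</code></pre>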
<p>I run it with the following cronjob in <code>/etc/cron.d/backups</code>:</p>
<pre><code>30 8 * * * root ionice nice nocache /root/backup/backup-machine1-to-foobar
30 2 * * Sun root ionice nice nocache /root/backup/backup-machine1-to-foobar --prune
</code></pre>
<p>in a way that <a href="https://feeding.cloud.geek.nz/posts/three-wrappers-to-run-commands-without-impacting-the-rest-of-the-system/">doesn't impact the rest of the system too much</a>.</p>
<p>I also put the following in my <code>/etc/rc.local</code> to clean up any leftover temp
directories from aborted backups:</p>
<pre><code>rmdir --ignore-fail-on-non-empty /var/tmp/restic-check-*
</code></pre>
<p>Finally, I printed a copy of each of my backup scripts, using
<a href="https://www.gnu.org/software/enscript/">enscript</a>, to stash in a safe place:</p>
<pre><code>enscript --highlight=bash --style=emacs --output=- backup-machine1-to-foobar | ps2pdf - > foobar.pdf
</code></pre>
<p>This is actually a pretty important step since <strong>without the password, you
won't be able to decrypt and restore what's on the GnuBee</strong>.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/backing-up-to-s3-with-duplicity/">Backing up to S3 with Duplicity</a></h1>
<p><em>2019-12-22 (updated 2021-10-09)</em></p>
<p>Here is how I set up <a href="http://duplicity.nongnu.org/">duplicity</a> to use S3 as a
backend while giving duplicity the minimum set of permissions to my Amazon
Web Services account.</p>
<h1 id="AWS_Security_Settings">AWS Security Settings</h1>
<p>First of all, I enabled the following <a href="https://console.aws.amazon.com/iam/home#/security_credentials">general security
settings</a> in
my AWS account:</p>
<ul>
<li>MFA with a U2F device</li>
<li>no root user access keys</li>
</ul>
<p>Then I set a <strong>password policy</strong> in the <a href="https://console.aws.amazon.com/iam/home#/account_settings">IAM Account
Settings</a> and
<strong>turned off all public access</strong> in the <a href="https://s3.console.aws.amazon.com/s3/settings">S3 Account
Settings</a>.</p>
<h1 id="Creating_an_S3_bucket">Creating an S3 bucket</h1>
<p>As a destination for the backups, I created a new <code>backup-foobar</code> <a href="https://s3.console.aws.amazon.com/s3/home">S3
bucket</a> keeping all of the
default options except for the
<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions">region</a>
which I set to <code>ca-central-1</code> to ensure that my data would stay in
Canada.</p>
<p>The bucket name can be anything you want as long as:</p>
<ul>
<li>it's not already taken by another AWS user</li>
<li>it's a valid hostname (i.e. alphanumeric characters or dashes)</li>
</ul>
<p>Note that I did <em>not</em> enable S3 server-side encryption since I will be
encrypting the backups client-side using the support built into duplicity
instead.</p>
<h1 id="Creating_a_restricted_user_account">Creating a restricted user account</h1>
<p>Then I went back into the <a href="https://console.aws.amazon.com/iam/home">Identity and Access Management
console</a> and created a new
<code>DuplicityBackup</code> <strong>policy</strong>:</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:CreateBucket",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::backup-foobar",
                "arn:aws:s3:::backup-foobar/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
</code></pre>
<p>It's unfortunate that the unrestricted <code>s3:ListAllMyBuckets</code> permission has
to be granted, but in my testing, duplicity would error out without it. No
other permissions were needed.</p>
<p>The next step was to create a new <code>DuplicityBackupHosts</code> IAM <strong>group</strong> to which
I attached the <code>DuplicityBackup</code> policy.</p>
<p>Finally, I created a new <code>machinename</code> IAM <strong>user</strong>:</p>
<ul>
<li>Access: <strong>programmatic only</strong></li>
<li>Group: <code>DuplicityBackupHosts</code></li>
<li>Tags: <code>duplicity=1</code></li>
</ul>
<p>and wrote down the <em>access key</em> and the <em>access key secret</em>.</p>
<h1 id="Duplicity_settings">Duplicity settings</h1>
<p>I installed duplicity like this:</p>
<pre><code>apt install duplicity python3-boto3
</code></pre>
<p>then used the following options:</p>
<ul>
<li><code>--s3-use-new-style</code>: apparently required on non-US regions</li>
<li><code>--s3-use-ia</code>: recommended pricing structure for backups</li>
<li><code>--s3-use-multiprocessing</code>: speeds up uploading of backup chunks</li>
</ul>
<p>and the following remote URL:</p>
<pre><code>boto3+s3://backup-foobar/machinename
</code></pre>
<p>I ended up with the following command:</p>
<pre><code>http_proxy= AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<access_key_secret> PASSPHRASE=<password> duplicity --s3-use-new-style --s3-use-ia --s3-use-multiprocessing --no-print-statistics --verbosity 1 --exclude-device-files --exclude-filelist <exclude_file> --include-filelist <include_file> --exclude '**' / <remote_url>
</code></pre>
<p>where <code><exclude_file></code> is a file which contains the list of paths to keep
out of my backup:</p>
<pre><code>/etc/.git
/home/francois/.cache
</code></pre>
<p><code><include_file></code> is a file which contains the list of paths to include
in the backup:</p>
<pre><code>/etc
/home/francois
/usr/local/bin
/usr/local/sbin
/var/log/apache2
/var/www
</code></pre>
<p>and <code><password></code> is a long random string (<code>pwgen -s 64</code>) used to encrypt the backups.</p>
<h1 id="Backup_script">Backup script</h1>
<p>Here are two other things I included in my backup script, to run before the actual
backup command shown in the previous section.</p>
<p>The first one deletes files related to failed backups:</p>
<pre><code>http_proxy= AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<access_key_secret> PASSPHRASE=<password> duplicity cleanup --verbosity 1 --force <remote_url>
</code></pre>
<p>and the second deletes old backups (older than 12 days in this example):</p>
<pre><code>http_proxy= AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<access_key_secret> PASSPHRASE=<password> duplicity remove-older-than 12D --verbosity 1 --force <remote_url>
</code></pre>
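<p>For reference, the three commands combine into a script of roughly this shape (a sketch
using the same placeholders as above):</p>
<pre><code>#!/bin/sh
export http_proxy=
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<access_key_secret>
export PASSPHRASE=<password>
REMOTE_URL=boto3+s3://backup-foobar/machinename

# Remove leftovers from any previously failed backup runs
duplicity cleanup --verbosity 1 --force $REMOTE_URL

# Expire backups older than 12 days
duplicity remove-older-than 12D --verbosity 1 --force $REMOTE_URL

# Do the actual backup
duplicity --s3-use-new-style --s3-use-ia --s3-use-multiprocessing \
    --no-print-statistics --verbosity 1 --exclude-device-files \
    --exclude-filelist <exclude_file> --include-filelist <include_file> \
    --exclude '**' / $REMOTE_URL
</code></pre>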
<p>Feel free to leave a comment if I forgot anything that might be useful!</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/crashplan-and-non-executable-tmp-directories/">CrashPlan and non-executable /tmp directories</a></h1>
<p><em>2014-06-10 (updated 2022-05-19)</em></p>
<p>If your computer's <code>/tmp</code> is non-executable, you will run into problems with
<a href="http://www.code42.com/crashplan/">CrashPlan</a>.</p>
<p>For example, the temp directory on my laptop is mounted using this line in
<code>/etc/fstab</code>:</p>
<pre><code>tmpfs /tmp tmpfs size=1024M,noexec,nosuid,nodev 0 0
</code></pre>
<p>This configuration leads to two serious problems with CrashPlan.</p>
<h1 id="CrashPlan_client_not_starting_up">CrashPlan client not starting up</h1>
<p>The first one is that while the daemon is running, the client doesn't start
up and doesn't print anything out to the console.</p>
<p>You have to look in <code>/usr/local/crashplan/log/ui_error.log</code> to find the
following error message:</p>
<pre><code>Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
Can't load library: /tmp/.cpswt/libswt-gtk-4234.so
Can't load library: /tmp/.cpswt/libswt-gtk.so
no swt-gtk-4234 in java.library.path
no swt-gtk in java.library.path
/tmp/.cpswt/libswt-gtk-4234.so: /tmp/.cpswt/libswt-gtk-4234.so: failed to map segment from shared object: Operation not permitted
at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
at org.eclipse.swt.internal.C.<clinit>(Unknown Source)
at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
at com.backup42.desktop.CPDesktop.<init>(CPDesktop.java:266)
at com.backup42.desktop.CPDesktop.main(CPDesktop.java:200)
</code></pre>
<p>To fix this, you must tell the client to use a different directory, one that
is executable and writable by users who need to use the GUI, by adding
something like this to the <code>GUI_JAVA_OPTS</code> variable of
<code>/usr/local/crashplan/bin/run.conf</code>:</p>
<pre><code>-Djava.io.tmpdir=/home/username/.crashplan-tmp
</code></pre>
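<p>Since the <code>run.conf</code> variables are single quoted strings of JVM options, the new
flag goes inside the existing quotes, for example (with the pre-existing options trimmed
for brevity):</p>
<pre><code>GUI_JAVA_OPTS="-Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/username/.crashplan-tmp"
</code></pre>
<p>Make sure that directory exists and is writable by your user before restarting the client.</p>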
<h1 id="Backup_waiting_forever">Backup waiting forever</h1>
<p>The second problem is that once you're able to start the client, backups are
<a href="http://randomwindowstips.wordpress.com/2013/02/25/crashplan-pro-for-linux-stuck-at-waiting-for-backup-or-connecting-to-backup-destination/">stuck at "waiting for backup"</a>
and you can see the following in <code>/usr/local/crashplan/log/engine_error.log</code>:</p>
<pre><code>Exception in thread "W87903837_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
at com.code42.utils.AWorker.run(AWorker.java:158)
at java.lang.Thread.run(Thread.java:744)
</code></pre>
<p>This time, you must tell the server to use a different directory, one that
is executable and writable by the CrashPlan engine user (<code>root</code> on my
machine), by adding something like this to the <code>SRV_JAVA_OPTS</code> variable of
<code>/usr/local/crashplan/bin/run.conf</code>:</p>
<pre><code>-Djava.io.tmpdir=/var/tmp/crashplan
</code></pre>
<p>To ensure that the directory exists, you can put the following in <code>/etc/rc.local</code>:</p>
<pre><code>#!/bin/sh -e
mkdir -p /var/tmp/crashplan
exit 0
</code></pre>
<p>Finally, it seems like you <strong>need to restart the machine</strong> before this
starts working. I'm not sure why restarting CrashPlan isn't enough.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/encrypted-system-backup-to-dvd/">Encrypted system backup to DVD</a></h1>
<p><em>2011-04-02 (updated 2023-09-10)</em></p>
<p>Inspired by <a href="http://worldbackupday.net/">World Backup Day</a>, I decided to take a backup of my laptop. Thanks to using a <a href="http://www.debian.org/">free operating system</a>, I don't have to back up any of my software, just the configuration and data files, which fit on a single DVD.</p>
<p>In order to avoid worrying too much about secure storage and disposal of these backups, I have decided to encrypt them using a standard <a href="https://feeding.cloud.geek.nz/2008/04/two-tier-encryption-strategy-archiving.html">encrypted loopback filesystem</a>.</p>
<p>(Feel free to leave a comment if you can suggest an easier way of doing this.)</p>
<h3 id="Cryptmount_setup">Cryptmount setup</h3>
<p>Install <a href="http://cryptmount.sourceforge.net/">cryptmount</a>:</p>
<pre><code>apt-get install cryptmount
</code></pre>
<p>and set up two encrypted mount points in <code>/etc/cryptmount/cmtab</code>:</p>
<pre><code>backup {
    dev=/backup.dat
    dir=/backup
    fstype=ext4
    mountoptions=defaults,noatime
    keyfile=/backup.key
    keyhash=sha512
    keycipher=aes-xts-plain64
    keyformat=builtin
    cipher=aes-xts-plain64
}

testbackup {
    dev=/media/cdrom/backup.dat
    dir=/backup
    fstype=ext4
    mountoptions=defaults,noatime,ro,noload
    keyfile=/media/cdrom/backup.key
    keyhash=sha512
    keycipher=aes-xts-plain64
    keyformat=builtin
    cipher=aes-xts-plain64
}
</code></pre>
<h3 id="Initialize_the_encrypted_filesystem">Initialize the encrypted filesystem</h3>
<p>Make sure you have at least 4.3 GB of free disk space on <code>/</code> and then run:</p>
<pre><code># Create the mount point and a 4 GB container file
mkdir /backup
dd if=/dev/zero of=/backup.dat bs=1M count=4096

# Generate a random 32-byte key, then format the encrypted container
cryptmount --generate-key 32 backup
cryptmount --prepare backup
mkfs.ext4 -m 0 /dev/mapper/backup
cryptmount --release backup
</code></pre>
<p>Alternatively, if you're using a double-layer DVD then use this <code>dd</code> line:</p>
<pre><code>dd if=/dev/zero of=/backup.dat bs=1M count=8000
</code></pre>
<h3 id="Burn_the_data_to_a_DVD">Burn the data to a DVD</h3>
<p>Mount the newly created encrypted filesystem:</p>
<pre><code>cryptmount backup
</code></pre>
<p>and then copy the files you want to <code>/backup/</code> before unmounting it:</p>
<pre><code>cryptmount -u backup
</code></pre>
<p>Finally, use your favourite DVD-burning program to burn these files:</p>
<ul>
<li><code>/backup.dat</code></li>
<li><code>/backup.key</code></li>
<li><code>/etc/cryptmount/cmtab</code></li>
</ul>
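<p>For example, <code>growisofs</code> can master and burn the disc in a single step (any other
DVD-burning tool will do just as well):</p>
<pre><code>growisofs -Z /dev/dvd -R -J /backup.dat /backup.key /etc/cryptmount/cmtab
</code></pre>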
<h3 id="Test_your_backup">Test your backup</h3>
<p>Before deleting these two files, test the DVD you've just burned by mounting it:</p>
<pre><code>mount /cdrom
cryptmount testbackup
</code></pre>
<p>and looking at a random sampling of the files contained in <code>/backup</code>.</p>
<p>Once you are satisfied that your backup is fine, unmount the DVD:</p>
<pre><code>cryptmount -u testbackup
umount /cdrom
</code></pre>
<p>and remove the temporary files:</p>
<pre><code>rm /backup.dat /backup.key
</code></pre>