I've been using disk encryption (via LUKS and cryptsetup) on Debian and Ubuntu for quite some time and it has worked well for me. However, while setting up full disk encryption for a new computer on a RAID1 partition, I discovered that there are a few major problems with RAID on Ubuntu.
My Setup: RAID and LUKS
Since I was setting up a new machine on Ubuntu 12.04 LTS (Precise Pangolin), I used the alternate CD (I burned ubuntu-12.04.3-alternate-amd64+mac.iso to a blank DVD) to get access to the full disk encryption options.
First, I created a RAID1 array to mirror the data on the two hard disks. Then, I used the partition manager built into the installer to set up an unencrypted boot partition (/dev/md0 mounted as /boot) and an encrypted root partition (/dev/md1 mounted as /) on the RAID1 array.
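For reference, here is roughly what the installer does behind the scenes with mdadm (the disk and partition names are my assumption; adjust them to your own layout):

# Assumption: sda1/sdb1 are the small boot partitions, sda2/sdb2 the root ones
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2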
While I had done full disk encryption and mirrored drives before, I had never done them at the same time on Ubuntu or Debian.
The problem: cannot boot an encrypted degraded RAID
After setting up the RAID, I decided to test it by booting from each drive with the other one unplugged.
The first step was to ensure that the system is configured (via dpkg-reconfigure mdadm) to boot in "degraded mode".
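As far as I can tell, accepting that prompt simply writes a one-line setting into the initramfs configuration, so the same thing can be done by hand (file path as used by Ubuntu's mdadm package):

echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
update-initramfs -u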
When I rebooted with a single disk though, I received an "evms_activate is not available" error message instead of the usual cryptsetup password prompt. The exact problem I ran into is best described in this comment (see this bug for context).
It turns out that booting degraded RAID arrays has been plagued with several problems.
My solution: an extra initramfs boot script to start the RAID array
The underlying problem is that the RAID1 array is not started automatically when it's missing a disk, and so cryptsetup cannot find the UUID of the drive to decrypt (as configured in /etc/crypttab).
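For illustration, the relevant /etc/crypttab entry looks something like this (the mapping name and the UUID placeholder are made up):

# <target name>  <source device>            <key file>  <options>
md1_crypt        UUID=<uuid-of-/dev/md1>    none        luks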
My fix, based on a script I was lucky enough to stumble on, lives in /etc/initramfs-tools/scripts/local-top/cryptraid:
#!/bin/sh
# Start the degraded RAID1 array holding the root partition before
# cryptsetup runs, so that the UUID listed in /etc/crypttab can be found.
PREREQ="mdadm"
prereqs()
{
    echo "$PREREQ"
}
case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

cat /proc/mdstat      # array state before: inactive when a disk is missing
mdadm --run /dev/md1  # force the array to start even though it's degraded
cat /proc/mdstat      # array state after: active (degraded)
After creating that file, remember to:
- make the script executable (using chmod a+x), and
- regenerate the initramfs (using dpkg-reconfigure linux-image-KERNELVERSION).
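Concretely, for the kernel that is currently running, that comes down to something like:

chmod a+x /etc/initramfs-tools/scripts/local-top/cryptraid
dpkg-reconfigure linux-image-$(uname -r)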
To make sure that the script is doing the right thing:
- press "Shift" while booting to bring up the Grub menu
- then press "e" to edit the default boot line
- remove the "quiet" and "splash" options from the kernel arguments
- press F10 to boot with maximum console output
You should see the RAID array stopped (look for the output of the first cat /proc/mdstat call) and then you should see output from a running degraded RAID array.
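For reference, a degraded but running array shows up in /proc/mdstat roughly like this (the device name and block count are illustrative); the [2/1] [U_] part indicates that only one of the two mirrors is present:

md1 : active raid1 sda2[0]
      975218688 blocks [2/1] [U_]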
Backing up the old initramfs
If you want to be extra safe while testing this new initramfs, make sure you only reconfigure one kernel at a time (no update-initramfs -u -k all) and make a copy of the initramfs before you reconfigure the kernel:
cp /boot/initrd.img-KERNELVERSION-generic /boot/initrd.img-KERNELVERSION-generic.original
Then if you run into problems, you can go into the Grub menu, edit the default boot option, and make it load the .original initramfs.
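Since /boot is a separate partition (/dev/md0), the initrd path in the Grub edit screen is relative to that partition, so the edited line would look something like this:

initrd /initrd.img-KERNELVERSION-generic.original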
Comments
I have tried this patch with both versions of Ubuntu Server 16.04 (32- and 64-bit) in a VirtualBox VM. The 64-bit edition works fine, but the 32-bit edition does not boot: it stops with the message "cryptsetup: lvm is not available". Perhaps there is a way to solve this, but it is beyond my knowledge. Thanks for the patch.
Kind regards
Thanks, this works in Ubuntu Server 18.04.1! But I needed to set the md device --readwrite, too.
(This "feature" has gone unfixed for over 9 years.)