I recently set up a desktop computer with two SSDs using software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in Ubuntu desktop, I had to use the server installation medium.
This is my version of these excellent instructions.
Server installer
Start by downloading the alternate server installer and verifying its signature:
Download the required files:
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
Verify the signature on the hash file:
$ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
$ gpg --verify SHA256SUMS.gpg SHA256SUMS
gpg: Signature made Fri Feb 15 08:32:38 2019 PST
gpg:                using RSA key D94AA3F0EFE21092
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
Verify the hash of the ISO file:
$ sha256sum --ignore-missing -c SHA256SUMS
ubuntu-18.04.2-server-amd64.iso: OK
Then copy it to a USB drive:
dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX
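Optionally, before booting from it, you can verify the copy. This sketch assumes GNU cmp and stat, and compares only the first ISO-length bytes of the stick:

cmp --bytes="$(stat -c '%s' ubuntu-18.04.2-server-amd64.iso)" ubuntu-18.04.2-server-amd64.iso /dev/sdX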
Then boot from it.
Manual partitioning
Inside the installer, use manual partitioning to:
- Configure the physical partitions first.
- Configure the RAID arrays second.
- Configure the encrypted partitions last.
Here's the exact configuration I used:
- /dev/sda1 is 512 MB and used as the EFI partition
- /dev/sdb1 is 512 MB but not used for anything
- /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
- /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
- /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)
I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.
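The partitioning itself happens in the installer UI, but for reference, a roughly equivalent layout could be created from a shell with sgdisk (a sketch; adjust device names and sizes to your hardware):

sgdisk -n 1:0:+512M -t 1:ef00 /dev/sda   # EFI system partition
sgdisk -n 2:0:+4G -t 2:fd00 /dev/sda     # future swap RAID member
sgdisk -n 3:0:+512M -t 3:fd00 /dev/sda   # future /boot RAID member
sgdisk -n 4:0:0 -t 4:fd00 /dev/sda       # future / RAID member
sgdisk -R /dev/sdb /dev/sda              # replicate the table onto the second drive
sgdisk -G /dev/sdb                       # give the copy new unique GUIDs
sgdisk -t 1:8300 /dev/sdb                # avoid creating a second EFI-typed partition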
I created the following RAID1 arrays:

- /dev/sda2 and /dev/sdb2 for /dev/md2
- /dev/sda3 and /dev/sdb3 for /dev/md0
- /dev/sda4 and /dev/sdb4 for /dev/md1
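The installer assembles these arrays itself; for reference, the equivalent mdadm commands from a shell would look something like this (a sketch, not what the installer literally runs):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2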
I used /dev/md0 as my unencrypted /boot partition.
Then I created the following LUKS partitions:

- md1_crypt as the / partition using /dev/md1
- md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2
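The installer writes the matching /etc/crypttab entries for you; for reference, a random-key swap entry typically looks something like this (the cipher and key size shown are illustrative, not copied from my system):

md2_crypt /dev/md2 /dev/urandom swap,cipher=aes-xts-plain64,size=256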
Post-installation configuration
Once your new system is up, sync the EFI partitions using dd:
dd if=/dev/sda1 of=/dev/sdb1
and create a second EFI boot entry:
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \\EFI\\ubuntu\\shimx64.efi
Ensure that the RAID drives are fully synced by keeping an eye on /proc/mdstat (see below), and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
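To watch the resync progress:

watch -n 5 cat /proc/mdstat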
Once you have rebooted, remove the following package to speed up future boots:
apt purge btrfs-progs
To switch to the desktop variant of Ubuntu, install these meta-packages:
apt install ubuntu-desktop gnome
then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).
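debfoster is not installed by default; once installed, running it without arguments walks through the installed packages interactively and offers to remove the ones you don't want to keep:

apt install debfoster
debfoster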
Fixing booting with degraded RAID arrays
Since I have run into RAID startup problems in the past, I expected having to fix up a few things to make degraded RAID arrays boot correctly.
I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:

apt install lvm2

with use_lvmetad = 0 in /etc/lvm/lvm.conf.
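One way to make that change, assuming the stock lvm.conf still carries the default use_lvmetad = 1:

sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf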
Then, in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:
#!/bin/sh
# Bring up the RAID arrays even when they are degraded, so that the
# LUKS passphrase prompt can still appear with only one drive present.

PREREQ="mdadm"
prereqs()
{
    echo "$PREREQ"
}

case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

# mdadm --run starts an array that has been assembled but not started
# because some of its members are missing.
mdadm --run /dev/md0
mdadm --run /dev/md1
mdadm --run /dev/md2
before making that script executable:
chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid
and refreshing the initramfs:
update-initramfs -u -k all
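To double-check that the script made it into the new image, lsinitramfs (part of initramfs-tools) can list its contents:

lsinitramfs /boot/initrd.img-$(uname -r) | grep cryptraid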
Disable suspend-to-disk
Since I use a random encryption key for the swap partition (to avoid having a second password prompt at boot time), suspend-to-disk is not going to work, so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:
RESUME=none
and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:
update-grub
update-initramfs -u -k all
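For reference, assuming noresume is the only kernel parameter you set, the resulting line in /etc/default/grub looks like this:

GRUB_CMDLINE_LINUX="noresume"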
Test your configuration
With all of this in place, you should be able to do a final test of your setup:
1. Shut down the computer and unplug the second drive.
2. Boot with only the first drive.
3. Shut down the computer and plug the second drive back in.
4. Boot with both drives and re-add the second drive to the RAID arrays:

   mdadm /dev/md0 -a /dev/sdb3
   mdadm /dev/md1 -a /dev/sdb4
   mdadm /dev/md2 -a /dev/sdb2

5. Wait until the RAID is done re-syncing and shut down the computer.
6. Repeat steps 2-5 with the first drive unplugged instead of the second.
7. Reboot with both drives plugged in.
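While one of the drives is unplugged, you can confirm that the arrays are running degraded with mdadm (the "State" line will show something like "clean, degraded"):

mdadm --detail /dev/md1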
At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.
Keep the EFI partitions in sync
Since the EFI partition is not RAIDed, I decided to set up a cron job to keep it in sync on the two drives.
First of all, I made sure that both partitions were mounted by explicitly using their PARTUUID (use blkid /dev/sda1 to look it up) in /etc/fstab:
PARTUUID=9a923b8a-6d41-473d-b4f3-b7488eedeace /boot/efi vfat umask=0077,rw,x-gvfs-hide 0 1
PARTUUID=ef386bc2-e184-4397-86b4-88ecd4469a9b /mnt/efi vfat umask=0077,rw,x-gvfs-hide 0 0
and then creating the /mnt/efi directory and mounting everything:
mkdir -p /mnt/efi
mount -a
Then I put the following script in /etc/cron.daily/efi-sync:
#!/bin/sh
# Mirror the contents of the main EFI partition (/boot/efi) onto the
# backup one (/mnt/efi). The guard files ensure that both partitions
# are actually mounted before rsync --delete is allowed to run.

if [ ! -e /mnt/efi/backup.mnt ] ; then
    echo "The backup drive is not mounted in /mnt/efi."
    exit 1
fi
if [ ! -e /boot/efi/orig.mnt ] ; then
    echo "The original drive is not mounted as the EFI partition."
    exit 1
fi

# Exclude the guard files so they are neither copied nor deleted.
rsync -aHx --delete --exclude=/efi/orig.mnt --exclude=/efi/backup.mnt /boot/efi /mnt/
and adding the guard files:
touch /mnt/efi/backup.mnt
touch /boot/efi/orig.mnt
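Since run-parts only executes cron.daily scripts that are executable, make the script executable and, if you like, run it once by hand to confirm that both guard files are found:

chmod +x /etc/cron.daily/efi-sync
/etc/cron.daily/efi-sync && echo "EFI sync OK"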