<p><em>Feeding the Cloud: <a href="https://feeding.cloud.geek.nz/tags/ext4/">pages tagged ext4</a></em></p>
<h1><a href="https://feeding.cloud.geek.nz/posts/upgrading-ext4-filesystem-for-y2k38/">Upgrading an ext4 filesystem for the year 2038</a></h1>
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
<p>Published 2021-05-08</p>
<p>If you see a message like this in your logs:</p>
<pre><code>ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
</code></pre>
<p>it's an indication that your filesystem is not <a href="https://en.wikipedia.org/wiki/Year_2038_problem">Y2K38</a>-safe.</p>
<p>You can also check this manually using:</p>
<pre><code>$ tune2fs -l /dev/sda1 | grep "Inode size:"
Inode size: 128
</code></pre>
<p>where an inode size of <code>128</code> is insufficient beyond 2038 and an inode size
of <code>256</code> is what you want.</p>
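<p>The <code>0x7fffffff</code> in the kernel message is simply the largest 32-bit signed epoch timestamp; GNU <code>date</code> will show you exactly when it runs out (output shown for the C locale):</p>
<pre><code>$ LC_ALL=C date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038
</code></pre>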
<p>The safest way to change this is to copy the contents of your partition to another <code>ext4</code>
partition:</p>
<pre><code>cp -a /boot /mnt/backup/
</code></pre>
<p>and then reformat with the correct inode size:</p>
<pre><code>umount /boot
mkfs.ext4 -I 256 /dev/sda1
</code></pre>
<p>before copying everything back:</p>
<pre><code>mount /boot
cp -a /mnt/backup/boot/* /boot/
</code></pre>
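<p>If you want to try the reformat-and-verify steps safely before touching a real device, the same commands work on a throwaway image file (a sketch; it assumes <code>e2fsprogs</code> is installed, and formatting a plain file needs no root):</p>
<pre><code># format a scratch image with 256-byte inodes and confirm the setting
truncate -s 64M scratch.img
mkfs.ext4 -q -F -I 256 scratch.img
tune2fs -l scratch.img | grep "Inode size:"
</code></pre>
<p>which should report an inode size of <code>256</code>.</p>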
<h1><a href="https://feeding.cloud.geek.nz/posts/repairing-corrupt-ext4-root-partition/">Repairing a corrupt ext4 root partition</a></h1>
<p>Published 2020-09-26</p>
<p>I ran into filesystem corruption
(<a href="https://en.wikipedia.org/wiki/Ext4">ext4</a>) on the root partition of my
<a href="https://feeding.cloud.geek.nz/posts/backing-up-to-gnubee2/">backup server</a>
which caused it to go into read-only mode. Since it's the root partition,
it's not possible to unmount it and repair it while it's running. Normally I
would boot from an <a href="https://ubuntu.com/download/alternative-downloads">Ubuntu live CD / USB
stick</a>, but in this case
the machine is using the
<a href="https://en.wikipedia.org/wiki/MIPS_architecture"><code>mipsel</code></a> architecture and
so that's not an option.</p>
<h1 id="Repair_using_a_USB_enclosure">Repair using a USB enclosure</h1>
<p>I had to shut down the server and then pull the SSD drive out. I
then moved it to an external USB enclosure and connected it to my laptop.</p>
<p>I started with an automatic filesystem repair:</p>
<pre><code>fsck.ext4 -pf /dev/sde2
</code></pre>
<p>which failed for some reason and so I moved to an interactive repair:</p>
<pre><code>fsck.ext4 -f /dev/sde2
</code></pre>
<p>Once all of the errors were fixed, I ran a full surface scan to update the
list of bad blocks:</p>
<pre><code>fsck.ext4 -c /dev/sde2
</code></pre>
<p>Finally, I forced another check to make sure that everything was fixed at
the filesystem level:</p>
<pre><code>fsck.ext4 -f /dev/sde2
</code></pre>
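<p>Note that <code>fsck.ext4</code> reports its result through a bitmask exit status (0 means clean, 1 means errors were corrected, 4 and above means errors remain), which is handy when scripting repairs. A harmless way to see this in action is on a scratch image file rather than a real device (assuming <code>e2fsprogs</code> is available):</p>
<pre><code># make a small clean filesystem in a file and check it
truncate -s 32M scratch.img
mkfs.ext4 -q -F scratch.img
fsck.ext4 -f -p scratch.img
echo "exit status: $?"   # 0 = clean
</code></pre>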
<h1 id="Fix_invalid_alternate_GPT">Fix invalid alternate GPT</h1>
<p>The other thing I noticed is this message in my <code>dmesg</code> log:</p>
<pre><code>scsi 8:0:0:0: Direct-Access KINGSTON SA400S37120 SBFK PQ: 0 ANSI: 6
sd 8:0:0:0: Attached scsi generic sg4 type 0
sd 8:0:0:0: [sde] 234441644 512-byte logical blocks: (120 GB/112 GiB)
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: 31 00 00 00
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Optimal transfer size 33553920 bytes
Alternate GPT is invalid, using primary GPT.
sde: sde1 sde2
</code></pre>
<p>I therefore checked to see if the partition table looked fine and got the
following:</p>
<pre><code>$ fdisk -l /dev/sde
GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 799CD830-526B-42CE-8EE7-8C94EF098D46
Device Start End Sectors Size Type
/dev/sde1 2048 8390655 8388608 4G Linux swap
/dev/sde2 8390656 234441614 226050959 107.8G Linux filesystem
</code></pre>
<p>It turns out that all I had to do, since only the backup / alternate GPT
partition table was corrupt and the primary one was fine, was to re-write
the partition table:</p>
<pre><code>$ fdisk /dev/sde
Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Command (m for help): w
The partition table has been altered.
Syncing disks.
</code></pre>
<h1 id="Run_SMART_checks">Run SMART checks</h1>
<p>Since I still didn't know what caused the filesystem corruption in the first
place, I decided to do one last check:
<a href="https://en.wikipedia.org/wiki/S.M.A.R.T.">SMART</a> errors.</p>
<p>I couldn't do this via the USB enclosure since the SMART commands aren't
forwarded to the drive and so I popped the drive back into the backup
server and booted it up.</p>
<p>First, I checked whether any SMART errors had been reported using
<a href="https://www.smartmontools.org/">smartmontools</a>:</p>
<pre><code>smartctl -a /dev/sda
</code></pre>
<p>That didn't show any errors and so I kicked off an extended test:</p>
<pre><code>smartctl -t long /dev/sda
</code></pre>
<p>which ran for 30 minutes and then passed without any errors.</p>
<p>The mystery remains unsolved.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/manually-expanding-raid1-array-ubuntu/">Manually expanding a RAID1 array on Ubuntu</a></h1>
<p>Published 2017-04-01</p>
<p>Here are the notes I took while manually expanding a non-LVM encrypted
RAID1 array on an Ubuntu machine.</p>
<p>My original setup consisted of a 1 TB drive along with a 2 TB drive, which
meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of
unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB
drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3
TB drive).</p>
<h1 id="Partition_the_new_drive">Partition the new drive</h1>
<p>In order to partition the new 3 TB drive, I started by creating a
<strong>temporary partition</strong> on the old 2 TB drive (<code>/dev/sdc</code>) to use up all of
the capacity on that drive:</p>
<pre><code>$ parted /dev/sdc
unit s
print
mkpart
print
</code></pre>
<p>Then I initialized the partition table and created the EFI partition
on the new drive (<code>/dev/sdd</code>):</p>
<pre><code>$ parted /dev/sdd
unit s
mktable gpt
mkpart
</code></pre>
<p>Since I want to have the RAID1 array be as large as the smaller of the two
drives, I made sure that the second partition (<code>/home</code>) on the
new 3 TB drive had:</p>
<ul>
<li>the same <strong>start position</strong> as the second partition on the old drive</li>
<li>the <strong>end position</strong> of the third partition (the temporary one I just
created) on the old drive</li>
</ul>
<p>I then created the partition and turned on its RAID flag:</p>
<pre><code>mkpart
toggle 2 raid
</code></pre>
<p>and then deleted the temporary partition on the old 2 TB drive:</p>
<pre><code>$ parted /dev/sdc
print
rm 3
print
</code></pre>
<h1 id="Create_a_temporary_RAID1_array_on_the_new_drive">Create a temporary RAID1 array on the new drive</h1>
<p>With the new drive properly partitioned, I created a new RAID array for it:</p>
<pre><code>mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing
</code></pre>
<p>and added it to <code>/etc/mdadm/mdadm.conf</code>:</p>
<pre><code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf
</code></pre>
<p>which required manual editing of that file to remove duplicate entries.</p>
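<p>That deduplication can also be scripted with the classic <code>awk</code> one-liner that keeps only the first copy of each line (a sketch; review the output before moving it into place):</p>
<pre><code>awk '!seen[$0]++' /etc/mdadm/mdadm.conf > /tmp/mdadm.conf.dedup
# inspect it, then: mv /tmp/mdadm.conf.dedup /etc/mdadm/mdadm.conf
</code></pre>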
<h1 id="Create_the_encrypted_partition">Create the encrypted partition</h1>
<p>With the new RAID device in place, I created the encrypted LUKS partition:</p>
<pre><code>cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2
</code></pre>
<p>I took the UUID for the temporary RAID partition:</p>
<pre><code>blkid /dev/md10
</code></pre>
<p>and put it in <code>/etc/crypttab</code> as <code>chome2</code>.</p>
<p>Then, I formatted the new LUKS partition and mounted it:</p>
<pre><code>mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2
</code></pre>
<h1 id="Copy_the_data_from_the_old_drive">Copy the data from the old drive</h1>
<p>With the home partitions of both drives mounted, I copied the files over to
the new drive:</p>
<pre><code>eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/
</code></pre>
<p>making use of
<a href="https://feeding.cloud.geek.nz/posts/three-wrappers-to-run-commands-without-impacting-the-rest-of-the-system/">wrappers that preserve system responsiveness</a>
during I/O-intensive operations.</p>
<h1 id="Switch_over_to_the_new_drive">Switch over to the new drive</h1>
<p>After the copy, I switched over to the new drive in a step-by-step way:</p>
<ol>
<li>Changed the UUID of <code>chome</code> in <code>/etc/crypttab</code>.</li>
<li>Changed the UUID and name of <code>/dev/md1</code> in <code>/etc/mdadm/mdadm.conf</code>.</li>
<li>Rebooted with both drives.</li>
<li>Checked that the new drive was the one used in the encrypted <code>/home</code> mount using: <code>df -h</code>.</li>
</ol>
<h1 id="Add_the_old_drive_to_the_new_RAID_array">Add the old drive to the new RAID array</h1>
<p>With all of this working, it was time to clear the mdadm superblock from the
old drive:</p>
<pre><code>mdadm --zero-superblock /dev/sdc1
</code></pre>
<p>and then change the second partition of the old drive to make it the same
size as the one on the new drive:</p>
<pre><code>$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print
</code></pre>
<p>before adding it to the new array:</p>
<pre><code>mdadm /dev/md1 -a /dev/sdc1
</code></pre>
<h1 id="Rename_the_new_array">Rename the new array</h1>
<p>To
<a href="https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array#64356">change the name of the new RAID array</a>
back to what it was on the old drive, I first had to stop both the old and
the new RAID arrays:</p>
<pre><code>umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1
</code></pre>
<p>before running this command:</p>
<pre><code>mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2
</code></pre>
<p>and updating the name in <code>/etc/mdadm/mdadm.conf</code>.</p>
<p>The last step was to regenerate the initramfs:</p>
<pre><code>update-initramfs -u
</code></pre>
<p>before rebooting into something that looks exactly like the original RAID1
array but with twice the size.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/setting-up-raid-on-existing/">Setting up RAID on an existing Debian/Ubuntu installation</a></h1>
<p>Published 2011-03-13</p>
<p>I run <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Standard_RAID_levels#RAID_1">RAID1</a> on all of the machines I support. While such hard disk mirroring is not a replacement for having good working backups, it means that a single drive failure is not going to force me to have to spend lots of time rebuilding a machine.</p>
<p>The best possible time to set this up is of course when you first install the operating system. The <a href="http://www.debian.org/">Debian</a> installer will set everything up for you if you choose that option and Ubuntu has <a href="http://www.ubuntu.com/desktop/get-ubuntu/alternative-download#alternate">alternate installation CDs</a> which allow you to do the same.</p>
<p>This post documents the steps I followed to retrofit RAID1 into an existing Debian squeeze installation, getting a mirrored setup after the fact.</p>
<h3 id="Overview">Overview</h3>
<p>Before you start, make sure the following packages are installed:</p>
<pre><code>apt-get install mdadm rsync initramfs-tools
</code></pre>
<p>Then go through these steps:</p>
<ol>
<li>Partition the new drive.</li>
<li>Create new degraded RAID arrays.</li>
<li>Install GRUB2 on both drives.</li>
<li>Copy existing data onto the new drive.</li>
<li>Reboot using the RAIDed drive and test system.</li>
<li>Wipe the original drive by adding it to the RAID array.</li>
<li>Test booting off of the original drive.</li>
<li>Resync drives.</li>
<li>Test booting off of the new drive.</li>
<li>Reboot with the two drives and resync the array.</li>
</ol>
<p>(My instructions are mostly based on this <a href="http://wiki.xtronics.com/index.php/Raid">old tutorial</a> but also on this <a href="http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-ubuntu-10.04">more recent one</a>.)</p>
<h3 id="z--_Partition_the_new_drive">1- Partition the new drive</h3>
<p>Once you have connected the new drive (<strong><code>/dev/sdb</code></strong>), boot into your system and use one of <code>cfdisk</code> or <code>fdisk</code> to display the partition information for the existing drive (<strong><code>/dev/sda</code></strong> on my system).</p>
<p>The idea is to create partitions of the same size on the new drive. (If the new drive is bigger, leave the rest of the drive unpartitioned.)</p>
<p>Partition types should all be: <strong><code>fd</code></strong> (or "Linux raid autodetect").</p>
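<p>Rather than recreating each partition by hand, the whole layout (including the <code>fd</code> type codes) can be copied over with <code>sfdisk</code>'s dump format, assuming the new drive is at least as large as the old one. Double-check the device names before writing:</p>
<pre><code># dump the old drive's partition table and apply it to the new drive
sfdisk -d /dev/sda | sfdisk /dev/sdb
</code></pre>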
<h3 id="z--_Create_new_degraded_RAID_arrays">2- Create new degraded RAID arrays</h3>
<p>The newly partitioned drive, consisting of a root and a swap partition, can be added to new RAID1 arrays using <code>mdadm</code>:</p>
<pre><code>mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
</code></pre>
<p>and formatted like this:</p>
<pre><code>mkswap /dev/md1
mkfs.ext4 /dev/md0
</code></pre>
<p>Specify these devices explicitly in <code>/etc/mdadm/mdadm.conf</code>:</p>
<pre><code>DEVICE /dev/sda* /dev/sdb*
</code></pre>
<p>and append the RAID arrays to the end of that file:</p>
<pre><code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf
dpkg-reconfigure mdadm
</code></pre>
<p>You can check the status of your RAID arrays at any time by running this command:</p>
<pre><code>cat /proc/mdstat
</code></pre>
<h3 id="z--_Install_GRUB2_on_both_drives">3- Install GRUB2 on both drives</h3>
<p>The best way to ensure that <a href="https://help.ubuntu.com/community/Grub2">GRUB2</a>, the default bootloader in Debian and Ubuntu, is installed on both drives is to reconfigure its package:</p>
<pre><code>dpkg-reconfigure grub-pc
</code></pre>
<p>and select both <code>/dev/sda</code> and <code>/dev/sdb</code> (but not <code>/dev/md0</code>) as installation targets.</p>
<p>This should cause the init ramdisk (<code>/boot/initrd.img-2.6.32-5-amd64</code>) and the grub menu (<code>/boot/grub/grub.cfg</code>) to be rebuilt with RAID support.</p>
<h3 id="z--_Copy_existing_data_onto_the_new_drive">4- Copy existing data onto the new drive</h3>
<p>Copy everything that's on the existing drive onto the new one using <code>rsync</code>:</p>
<pre><code>mkdir /tmp/mntroot
mount /dev/md0 /tmp/mntroot
rsync -auHxv --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* /* /tmp/mntroot/
</code></pre>
<h3 id="z--_Reboot_using_the_RAIDed_drive_and_test_system">5- Reboot using the RAIDed drive and test system</h3>
<p>Before rebooting, open <code>/tmp/mntroot/etc/fstab</code> and change <code>/dev/sda1</code> and <code>/dev/sda2</code> to <code>/dev/md0</code> and <code>/dev/md1</code> respectively.</p>
<p>Then reboot and from within the GRUB menu, hit "e" to enter edit mode and make sure that you will be booting off of the new disk:</p>
<pre>
set root='(<b>md/0</b>)'
linux /boot/vmlinuz-2.6.32-5-amd64 root=<b>/dev/md0</b> ro quiet
</pre>
<p>Once the system is up, you can check that the root partition is indeed using the RAID array by running <code>mount</code> and looking for something like:</p>
<pre>
<b>/dev/md0 on /</b> type ext4 (rw,noatime,errors=remount-ro)
</pre>
<h3 id="z--_Wipe_the_original_drive_by_adding_it_to_the_RAID_array">6- Wipe the original drive by adding it to the RAID array</h3>
<p>Once you have verified that everything is working on <code>/dev/sdb</code>, it's time to change the partition types on <code>/dev/sda</code> to <code>fd</code> and to add the original drive to the degraded RAID array:</p>
<pre><code>mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda2
</code></pre>
<p>You'll have to wait until the two partitions are fully synchronized but you can check the sync status using:</p>
<pre><code>watch -n1 cat /proc/mdstat
</code></pre>
<h3 id="z--_Test_booting_off_of_the_original_drive">7- Test booting off of the original drive</h3>
<p>Once the sync is finished, update the boot loader menu:</p>
<pre><code>update-grub
</code></pre>
<p>and shut the system down:</p>
<pre><code>shutdown -h now
</code></pre>
<p>before physically disconnecting <code>/dev/sdb</code> and turning the machine back on to test booting with only <code>/dev/sda</code> present.</p>
<p>After a successful boot, shut the machine down and plug the second drive back in before powering it up again.</p>
<h3 id="z--_Resync_drives">8- Resync drives</h3>
<p>If everything works, you should see the following after running <code>cat /proc/mdstat</code>:</p>
<pre><code>md0 : active raid1 sda1[1]
280567040 blocks [2/1] [_U]
</code></pre>
<p>indicating that the RAID array is incomplete and that the second drive is not part of it.</p>
<p>To add the second drive back in and start the sync again:</p>
<pre>
mdadm /dev/md0 -a <b>/dev/sdb1</b>
</pre>
<h3 id="z--_Test_booting_off_of_the_new_drive">9- Test booting off of the new drive</h3>
<p>To complete the testing, shut the machine down, pull <code>/dev/sda</code> out and try booting with <code>/dev/sdb</code> only.</p>
<h3 id="z-0-_Reboot_with_the_two_drives_and_resync_the_array">10- Reboot with the two drives and resync the array</h3>
<p>Once you are satisfied that it works, reboot with both drives plugged in and re-add the first drive to the array:</p>
<pre>
mdadm /dev/md0 -a <b>/dev/sda1</b>
</pre>
<p>Your setup is now complete and fully tested.</p>
<h3 id="Ongoing_maintenance">Ongoing maintenance</h3>
<p>I recommend making sure the two RAIDed drives stay in sync by enabling periodic RAID checks. The easiest way is to enable the checks that are built into the Debian package:</p>
<pre><code>dpkg-reconfigure mdadm
</code></pre>
<p>but you can also create a weekly or monthly cronjob which does the following:</p>
<pre>
echo "check" > /sys/block/<b>md0</b>/md/sync_action
</pre>
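<p>For example, an <code>/etc/cron.d</code> entry along these lines would kick off a monthly check (the filename and schedule here are arbitrary):</p>
<pre><code># /etc/cron.d/raid-check: scrub md0 at 02:30 on the first day of each month
30 2 1 * * root echo check > /sys/block/md0/md/sync_action
</code></pre>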
<p>Something else you should seriously consider is to install the <code>smartmontools</code> package and run weekly <a href="https://secure.wikimedia.org/wikipedia/en/wiki/S.M.A.R.T.">SMART</a> checks by putting something like this in your <code>/etc/smartd.conf</code>:</p>
<pre><code>/dev/sda -a -d ata -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdb -a -d ata -o on -S on -s (S/../.././02|L/../../6/03)
</code></pre>
<p>These checks, performed by the hard disk controllers directly, could warn you of imminent failures ahead of time. Personally, when I start seeing <a href="https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/">errors in the SMART log</a> (<code>smartctl -a /dev/sda</code>), I order a new drive straight away.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/raid1-alternative-for-ssd-drives/">RAID1 alternative for SSD drives</a></h1>
<p>Published 2010-11-02</p>
<p>I recently added a <a href="http://www.intel.com/design/flash/nand/value/overview.htm">solid-state drive</a> to my desktop computer to take advantage of the performance boost rumored to come with these drives. For reliability reasons, I've always tried to use software <a href="http://en.wikipedia.org/wiki/Raid1#RAID_1">RAID1</a> to avoid having to reinstall my machine from backups should a hard drive fail. While this strategy is fairly cheap with regular hard drives, it's not really workable with SSD drives which are still an order of magnitude more expensive.</p>
<p>The strategy I settled on is this one:</p>
<ul>
<li>continue to have all partitions (<code>/</code>, <code>/home</code> and <code>/data</code>) on my RAID1 hard drives,</li>
<li>put another copy of the root partition (<code>/</code>) on the SSD drive, and</li>
<li>leave my <code>/tmp</code> and swap partitions in <a href="http://en.wikipedia.org/wiki/RAID0#RAID_0">RAID0</a> arrays on my rotational hard drives to reduce the number of writes on the SSD.</li>
</ul>
<p>This setup has the benefit of using a very small SSD to speed up the main partition while keeping all important data on the larger mirrored drives.</p>
<h2 id="Resetting_the_SSD">Resetting the SSD</h2>
<p>The first thing I did, given that I purchased a second-hand drive, was to <strong>completely erase the drive</strong> and mark all sectors as empty using an <a href="http://en.wikipedia.org/wiki/Write_amplification#Secure_erase">ATA secure erase</a>. Because SSDs have a tendency to get slower as data is added to them, it is necessary to clear the drive in a way that will let the controller know that every byte is now free to be used again.</p>
<p>There is a lot of advice on the web on how to do this and many tutorials refer to an old piece of software called <a href="https://web.archive.org/web/20130511064320/http://cmrr.ucsd.edu:80/people/Hughes/SecureErase.shtml">Secure Erase</a>. There is a much better solution on Linux: <a href="https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase">issuing the commands directly using <strong>hdparm</strong></a>.</p>
<h2 id="Partitioning_the_SSD">Partitioning the SSD</h2>
<p>Once the drive is empty, it's time to create partitions on it. I'm not sure how important it is to <strong>align the partitions to the SSD erase block size</strong> on newer drives, but I decided to follow <a href="http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/">Ted Ts'o's instructions</a> anyways.</p>
<p>Another thing I did is leave <strong>20% of the drive unpartitioned</strong>. I've often read that SSDs are faster the more free space they have so I figured that limiting myself to 80% of the drive should help the drive maintain its peak performance over time. In fact, I've heard that extra unused unpartitionable space is one of the main differences between the <a href="http://www.intel.com/design/flash/nand/value/overview.htm">value</a> and <a href="http://www.intel.com/design/flash/nand/extreme/index.htm">extreme</a> series of Intel SSDs. I'd love to see an official confirmation of this from Intel of course!</p>
<h2 id="Keeping_the_RAID1_array_in_sync_with_the_SSD">Keeping the RAID1 array in sync with the SSD</h2>
<p>Once I added the solid-state drive to my computer and copied my root partition on it, I adjusted my <code>fstab</code> and <a href="http://en.wikipedia.org/wiki/GNU_GRUB">grub</a> settings to boot from that drive. I also setup the following cron job (running twice daily) to keep a copy of my root partition on the old RAID1 drives (mounted on <code>/mnt</code>):</p>
<pre><code>nice ionice -c3 rsync -aHx --delete --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/home/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/data/* /* /mnt/
</code></pre>
<h2 id="Tuning_the_SSD">Tuning the SSD</h2>
<p>Finally, after reading this <a href="http://lwn.net/Articles/408428/">excellent LWN article</a>, I decided to tune the SSD drive (<code>/dev/sda</code>) by adjusting three things:</p>
<ul>
<li><p>Add the <code>discard</code> mount option (also known as ATA <a href="http://en.wikipedia.org/wiki/TRIM">TRIM</a> and introduced in the 2.6.33 Linux kernel) to the root partition in <code>/etc/fstab</code>:</p>
<pre><code>/dev/<i>sda1</i> / ext4 <b>discard</b>,errors=remount-ro,noatime 0 1
</code></pre></li>
<li><p>Use the <code>noop</code> IO scheduler by adding these lines to <code>/etc/rc.local</code>:</p>
<pre><code>echo noop &gt; /sys/block/<i>sda</i>/queue/scheduler
echo 1 &gt; /sys/block/<i>sda</i>/queue/iosched/fifo_batch
</code></pre></li>
<li><p>Turn off entropy gathering (for kernels 2.6.36 or later) by adding this line to <code>/etc/rc.local</code>:</p>
<pre><code>echo 0 &gt; /sys/block/<i>sda</i>/queue/add_random
</code></pre></li>
</ul>
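<p>Before relying on the <code>discard</code> mount option, it's worth confirming that the drive actually advertises TRIM support. One way to check (device name assumed) is with <code>lsblk</code>:</p>
<pre><code># non-zero DISC-GRAN / DISC-MAX values mean the device supports discard
lsblk --discard /dev/sda
</code></pre>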
<p>Is there anything else I should be doing to make sure I get the most out of my SSD?</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/encrypting-your-home-directory-using/">Encrypting your home directory using LUKS on Debian/Ubuntu</a></h1>
<p>Published 2008-05-24</p>
<p>Laptops are easily lost or stolen and in order to protect your emails, web passwords, encryption keys, etc., you should really think about encrypting (at least) your home directory.</p>
<p>If you happen to have <code>/home</code> on a separate partition already (<code>/dev/sda5</code> in this example), then it's a really easy process.</p>
<p>Do the following as the <code>root</code> user:</p>
<ol>
<li><p>Install the <a href="https://packages.debian.org/stable/cryptsetup"><code>cryptsetup</code> package</a>:</p>
<pre><code>apt install cryptsetup
</code></pre></li>
<li><p>Copy your home directory to a temporary directory on a different partition:</p>
<pre><code>mkdir /homebackup
cp -a /home/* /homebackup
</code></pre></li>
<li><p>Encrypt your home partition:</p>
<pre><code>umount /home
cryptsetup -h sha512 -c aes-xts-plain64 -s 512 luksFormat /dev/sda5
cryptsetup luksOpen /dev/sda5 chome
mkfs.ext4 -m 0 /dev/mapper/chome
</code></pre></li>
<li><p>Add this line to <code>/etc/crypttab</code>:</p>
<pre><code>chome /dev/sda5 none luks,timeout=30
</code></pre></li>
<li><p>Set the home partition to this in <code>/etc/fstab</code> (replacing the original home partition line):</p>
<pre><code>/dev/mapper/chome /home ext4 nodev,nosuid,noatime 0 2
</code></pre></li>
<li><p>Copy your home data back into the encrypted partition:</p>
<pre><code>mount /home
cp -a /homebackup/* /home
rm -rf /homebackup
</code></pre></li>
</ol>
<p>That's it. The next time you boot your laptop, you will be prompted for the passphrase you set in Step 3.</p>
<p>Now to fully secure your laptop against theft, you should think about an <a href="http://packages.debian.org/sid/duplicity">encrypted backup strategy</a> for your data...</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/two-tier-encryption-strategy-archiving/">Two-tier encryption strategy: Archiving your files inside an encrypted loopback partition</a></h1>
<p>Published 2008-04-10</p>
<p>Even with a fully encrypted system (root and <a href="https://feeding.cloud.geek.nz/2008/03/encrypted-swap-partition-on.html">swap</a> partitions), your data is still vulnerable while your computer is on. That's why <a href="http://www.schneier.com/blog/">Bruce Schneier</a> recommends a <a href="http://www.wired.com/politics/security/commentary/securitymatters/2007/11/securitymatters_1129">two-tier encryption strategy</a>.</p>
<p>The idea is that infrequently used files are moved to a separate partition, encrypted with a different key. That way, the bulk of your data files is protected even if your laptop is <a href="http://www.schneier.com/blog/archives/2008/02/hotplug_1.html">hijacked</a> or if an intruder manages to steal some files while your main partition is decrypted.</p>
<p>On Debian and Ubuntu, a secure archive area can be created easily using an encrypted loopback partition and the <code>cryptmount</code> package.</p>
<p>Add this to <code>/etc/cryptmount/cmtab</code>:</p>
<pre><code>archives {
dev=/home/francois/.archives
dir=/home/francois/archives
fstype=ext4
fsoptions=defaults,noatime
keyfile=/home/francois/.archives.key
keyformat=builtin
keyhash=sha512
keycipher=aes-xts-plain64
cipher=aes-xts-plain64
}
</code></pre>
<p>Create the key and the 3GB loopback partition:</p>
<pre><code>sudo cryptmount --generate-key 32 archives
sudo chown francois:francois .archives.key
dd if=/dev/zero of=.archives bs=1G count=3
mkdir archives
sudo cryptmount --prepare archives
sudo mkfs.ext4 -m 0 /dev/mapper/archives
sudo cryptmount --release archives
</code></pre>
<p>Fix the permissions so that you can write to this partition with your normal user account:</p>
<pre><code>cryptmount archives
cd archives
sudo chown francois:francois .
cryptmount -u archives
</code></pre>
<p>Then you can mount and umount that partition using:</p>
<pre><code>cryptmount archives
</code></pre>
<p>and:</p>
<pre><code>cryptmount -u archives
</code></pre>