I recently added a solid-state drive to my desktop computer to take advantage of the performance boost rumored to come with these drives. For reliability reasons, I've always tried to use software RAID1 to avoid having to reinstall my machine from backups should a hard drive fail. While this strategy is fairly cheap with regular hard drives, it's not really workable with SSD drives, which are still an order of magnitude more expensive.

The strategy I settled on is this one:

  • continue to have all partitions (/, /home and /data) on my RAID1 hard drives,
  • put another copy of the root partition (/) on the SSD drive, and
  • leave my /tmp and swap partitions in RAID0 arrays on my rotational hard drives to reduce the number of writes on the SSD.

This setup has the benefit of using a very small SSD to speed up the main partition while keeping all important data on the larger mirrored drives.

Resetting the SSD

The first thing I did, given that I purchased a second-hand drive, was to completely erase the drive and mark all sectors as empty using an ATA secure erase. Because SSDs have a tendency to get slower as data is added to them, it is necessary to clear the drive in a way that will let the controller know that every byte is now free to be used again.

There is a lot of advice on the web on how to do this and many tutorials refer to an old piece of software called Secure Erase. There is a much better solution on Linux: issuing the commands directly using hdparm.
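For reference, the hdparm sequence looks roughly like this (a sketch assuming the SSD shows up as /dev/sdX and is not in the "frozen" state; these commands irreversibly destroy all data on the drive, so triple-check the device name):

```shell
# check that the drive supports the ATA security feature set
# and is reported as "not frozen"
hdparm -I /dev/sdX

# set a temporary user password ("Eins" is just a placeholder;
# a password must be set before a secure erase is allowed)
hdparm --user-master u --security-set-pass Eins /dev/sdX

# issue the secure erase; every cell is marked free when it completes
hdparm --user-master u --security-erase Eins /dev/sdX
```

The password is cleared automatically once the erase completes.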

Partitioning the SSD

Once the drive is empty, it's time to create partitions on it. I'm not sure how important it is to align the partitions to the SSD erase block size on newer drives, but I decided to follow Ted Ts'o's instructions anyway.

Another thing I did is leave 20% of the drive unpartitioned. I've often read that SSDs are faster the more free space they have, so I figured that limiting myself to 80% of the drive should help the drive maintain its peak performance over time. In fact, I've heard that extra unused unpartitionable space is one of the main differences between the value and extreme series of Intel SSDs. I'd love to see an official confirmation of this from Intel of course!
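The alignment arithmetic itself is simple enough to sketch as a shell function, assuming 512-byte sectors and a 128 KiB erase block (real erase-block sizes vary by drive and are rarely published, so the 128 KiB figure is an assumption):

```shell
# round a start sector up to the next erase-block boundary
# (128 KiB / 512 B = 256 sectors per erase block -- assumed size)
align_sector() {
    local sector=$1
    local block_sectors=$((128 * 1024 / 512))
    echo $(( (sector + block_sectors - 1) / block_sectors * block_sectors ))
}

align_sector 63    # the traditional DOS offset of 63 rounds up to 256
```

Modern partitioning tools can do this for you, but it's useful to be able to check the start sectors by hand.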

Keeping the RAID1 array in sync with the SSD

Once I added the solid-state drive to my computer and copied my root partition onto it, I adjusted my fstab and grub settings to boot from that drive. I also set up the following cron job (running twice daily) to keep a copy of my root partition on the old RAID1 drives (mounted on /mnt):

nice ionice -c3 rsync -aHx --delete --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/home/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/data/* /* /mnt/
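As a concrete example, a twice-daily schedule for that command in /etc/cron.d could look like this (the file name and times here are arbitrary; any two runs a day will do):

```
# /etc/cron.d/root-backup
0 4,16 * * *  root  nice ionice -c3 rsync -aHx --delete --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/home/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/data/* /* /mnt/
```

The ionice -c3 (idle class) and nice prefixes keep the sync from competing with interactive use of the machine.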

Tuning the SSD

Finally, after reading this excellent LWN article, I decided to tune the SSD drive (/dev/sda) by adjusting three things:

  • Add the discard mount option (also known as ATA TRIM and introduced in the 2.6.33 Linux kernel) to the root partition in /etc/fstab:

    /dev/sda1  /  ext4  discard,errors=remount-ro,noatime  0  1
    
  • Use the deadline IO scheduler (the fifo_batch tunable only exists under deadline) by adding these lines to /etc/rc.local:

    echo deadline > /sys/block/sda/queue/scheduler
    echo 1 > /sys/block/sda/queue/iosched/fifo_batch
    
  • Turn off entropy gathering (for kernels 2.6.36 or later) by adding this line to /etc/rc.local:

    echo 0 > /sys/block/sda/queue/add_random
    
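To confirm that the tweaks took effect after a reboot, the current values can be read back from sysfs and /proc (a quick sanity check, assuming the same /dev/sda device as above):

```shell
cat /sys/block/sda/queue/scheduler   # the active scheduler is shown in brackets
cat /sys/block/sda/queue/add_random  # should print 0 once the tweak is applied
grep ' / ' /proc/mounts              # the root mount should list discard,noatime
```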

Is there anything else I should be doing to make sure I get the most out of my SSD?

Regarding your RAID layout: it seems you have put some of the hottest filesystems on your slowest disks (e.g. /tmp). Why not ensure you have a robust, reliable backup solution to recover from drive failure, and run your filesystems from the SSD? If you don't already have such a solution, remember: RAID1 is NOT a backup solution.

Regarding the 80/20 partition split. How is the drive controller to know you aren't using the last 20%? It doesn't know anything about partitions. Those sectors are still addressable by the OS, and thus could be used at any time (from the controller's POV).

Finally, it would have been interesting if you had done some performance benchmarks before and after each of the various tweaks you applied, to see if they made any difference.

Comment by Jon Dowland
I don't think leaving 20 percent free helps -- you're using the wrong concept of "free". The SSD will use that space anyway through remapping every time you write to it, until it has used all erase blocks and has to start erasing for writes.
Comment by Anonymous
I would have used mdadm --write-mostly for this.
Comment by kaol

You don't want to use the deadline IO scheduler. Turns out that cfq helps for SSDs as well; it will automatically avoid some of the heuristics that don't make sense on SSDs, but it will still provide a number of benefits.

And while the 20% thing might make sense for cheap SSDs, the Intel SSDs will handle remapping just fine.

Comment by Anonymous
You could look at bcache. It just uses the SSD to cache the rotating disk. Everything ends up on the rotating storage, but you get very fast access. It's not in the kernel yet, but it's been around for a while.
Comment by Anonymous

Using a journaling filesystem?

Consider putting the journal on the SSD.

Comment by John Hughes
Swapping to SSD should also be much faster than swapping to the rotating disk.
Comment by Anonymous