If you have waded through my previous post about adding a larger drive to an LVM volume running on an mdadm RAID array, you'll have seen it's a fiddly, complex process. I have been experimenting a bit with ZFS on Linux on my old desktop machine (Core 2 Quad Q9300, 8GB RAM). I had a couple of old 1TB drives, and added the 2TB drive that I removed from the home theatre PC mentioned in the previous post. Actually, I had initially put the 3TB drive in this old box, intending it to replace the HTPC. But the old machine is a bit power hungry, drawing around 90W at idle, and I am having some niggling playback issues with it. But that's another story. So the 3TB WD Red drive gets replaced with the old Samsung 2TB.
With the three drives, I created a ZFS raidz array, basically the equivalent of RAID 5. It runs pretty well. Swapping the 3TB out for the 2TB gave me an opportunity to try the ZFS version of a drive swap.
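For context, creating the pool was a one-liner as well. It was something along these lines (tank is the pool name I used; the device paths here are just placeholders for the full by-id names):
sudo zpool create tank raidz /dev/disk/by-id/scsi-SATA...(first drive) /dev/disk/by-id/scsi-SATA...(second drive) /dev/disk/by-id/scsi-SATA...(third drive)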
It was a bit of an anticlimax, really. There was just one command; the syntax from the man page is below:
zpool replace [-f] pool device [new_device]
In my case it was something like:
sudo zpool replace tank /dev/disk/by-id/scsi-SATA...(the drive removed) /dev/disk/by-id/scsi-SATA....(the 2TB drive)
That kicked off the resilvering process (resynchronising the data across the drives). If I were then to replace each of the other drives, one at a time, using the same process, the array would be resized to match the new drive sizes.
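You can watch the resilver progress with zpool status, and as I understand it the pool will only grow into the extra space automatically once the autoexpand property is turned on; something like:
sudo zpool status tank
sudo zpool set autoexpand=on tank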
The beauty of ZFS is that it simplifies things so much: it is the RAID management and the file system all in one.
I would have liked to run it on the HTPC, but ZFS is quite resource-hungry in terms of processing power and memory. Since my HTPC has just a little Atom 330 chip and is maxed out at 4GB of memory (of which only 3GB is visible), I'll be staying with mdadm and LVM there. There is virtually no performance penalty with that setup anyway. Oh well, going through all those steps is not exactly a frequent task.