We’ve decided to go with striped mirrored vdevs (similar to RAID10) for our ZFS configuration. It gives us the best balance of performance and fault tolerance for what we use the system for. To reproduce our ZFS configuration, you would use the commands shown below (assuming your drives are named the same way ours are):
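As a rough sketch of that configuration (the pool name `tank` and device names like `c1t1d0` are examples; yours will almost certainly differ):

```shell
# Create a pool of striped mirrored vdevs (similar to RAID10)
zpool create tank mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0

# Add a mirrored ZIL (log) on a pair of SSDs
zpool add tank log mirror c2t0d0 c2t1d0

# Add L2ARC cache SSDs
zpool add tank cache c2t2d0 c2t3d0

# Add hot spares
zpool add tank spare c1t5d0 c1t6d0

# Verify the layout
zpool status tank
```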
Spare drives are a must-have in ZFS or any RAID configuration. Consider this: a winter storm hits and a snowplow piles 3 feet of snow in your driveway. Five minutes later a drive in your system fails. You’re stuck until you can plow or snowblow your way out. Without spare drives in your system, you run the risk of additional drives failing while that drive is offline, and depending on the ZFS or RAID level, a second drive failure could cause permanent data loss. If you have a spare drive (or multiple spare drives) in your system, it can automatically start rebuilding the array as soon as it detects that a drive has failed. This limits the amount of time that your ZFS or RAID subsystem runs unprotected.
To add spare drives to your system, first run the format command to find the disks that you have in your system.
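A minimal sketch, assuming a pool named `tank` and example device names (substitute the names `format` reports on your system):

```shell
# List the disks in the system; press Ctrl-C at the prompt to exit
# without changing anything
format

# Add two hot spares to the pool (device names are examples)
zpool add tank spare c2t5d0 c2t6d0

# Optionally, let a new disk inserted into the same slot as a failed
# drive be used as a replacement automatically
zpool set autoreplace=on tank

# Confirm the drives appear under the "spares" section
zpool status tank
```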
ZIL (ZFS Intent Log) drives can be added to a ZFS pool to speed up the write capabilities of any level of ZFS RAID. The ZIL commits synchronous writes to a very fast SSD first, which increases the write throughput of the system. When the physical spindles have a moment, that data is then flushed to the spinning media and the process starts over. We have observed significant performance increases by adding ZIL drives to our ZFS configuration. One thing to keep in mind is that the ZIL should be mirrored to protect the performance of the ZFS system. If the ZIL is not mirrored and the drive being used as the ZIL fails, the system will revert to writing data directly to the spinning disks, severely hampering performance.
To add a ZIL drive to your ZFS system, first run the format command to find the disks that you have available in your system.
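Sketched out, with `tank` and the SSD device names as assumptions:

```shell
# List available disks (Ctrl-C exits without changes)
format

# Add a mirrored log (ZIL) vdev on two SSDs, so a single SSD failure
# doesn't knock out the log (device names are examples)
zpool add tank log mirror c3t0d0 c3t1d0

# The log mirror should now appear under "logs"
zpool status tank
```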
The Cache drives (or L2ARC cache) are used for frequently accessed data. In our system we have configured 320GB of L2ARC cache. This cache resides on MLC SSD drives, which have significantly faster access times than traditional spinning media. This means that up to 320GB of the most frequently accessed data can be kept in an SSD cache and, when requested, does not have to be read from spinning media. This greatly speeds up access times for frequently used files.
To add caching drives to your Zpool, first run the format command to find the disks that you have in your system.
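Assuming a pool named `tank` and example SSD names, the commands would look something like:

```shell
# List available disks (Ctrl-C exits without changes)
format

# Add two SSDs as L2ARC cache devices (device names are examples)
zpool add tank cache c3t2d0 c3t3d0

# The SSDs should now appear under "cache"
zpool status tank
```

Note that cache devices can't be mirrored, and they don't need to be: if an L2ARC drive fails, the data is simply read from the pool again.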
A Striped RAIDZ Zpool is useful for a couple of reasons. It gives you additional resiliency against drive failures, since each RAIDZ vdev can tolerate one failed drive, and it performs slightly better because data is striped across more drives. A Striped RAIDZ Zpool is very similar to RAID50.
To create a Striped RAIDZ Zpool, first run the format command to find the disks that you have in your system.
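As a sketch with six example drives (pool and device names are assumptions):

```shell
# List available disks (Ctrl-C exits without changes)
format

# Two RAIDZ vdevs striped together (similar to RAID50);
# each vdev can survive one drive failure
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 \
                  raidz c1t4d0 c1t5d0 c1t6d0

zpool status tank
```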
A three-way mirror is useful if you are very concerned about data integrity. Basically, a three-way mirror is similar to RAID1, except it mirrors its data across three drives instead of two. This cuts your usable space to 1/3 of the total capacity of the drives, but it allows two drives to fail while maintaining data integrity.
To set up a three-way mirror, first run the format command to find the disks that you have in your system.
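Assuming three example drives and a pool named `tank`:

```shell
# List available disks (Ctrl-C exits without changes)
format

# One mirror vdev across three drives; any two can fail
# (device names are examples)
zpool create tank mirror c1t1d0 c1t2d0 c1t3d0

zpool status tank
```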
A Striped Mirrored Vdev Zpool is very similar to RAID10. It has the additional feature of checksumming to prevent silent data corruption, but essentially it is the same as RAID10. It gives you great random read and random write performance, but it cuts your available disk space to 50% of the physical capacity of your drives. In every RAID setup we have done for these workloads, though, we have found that the additional available IOPS for random writes far offset the penalty of losing half of your disk space.
To create a Striped Mirrored Vdev Zpool, first run the format command to find the disks that you have in your system.
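With four example drives (names are assumptions; use what `format` reports):

```shell
# List available disks (Ctrl-C exits without changes)
format

# Two mirror vdevs; ZFS stripes writes across them (similar to RAID10)
zpool create tank mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0

zpool status tank
```

More mirror pairs can be added later with `zpool add tank mirror <disk> <disk>` to grow the stripe.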
A RAIDZ2 Zpool is very similar in function to a RAID6 array. You get two parity blocks per stripe, so a RAIDZ2 Zpool can tolerate two drive failures before it becomes vulnerable to data loss.
To create a RAIDZ2 Zpool, first run the format command to find the disks that you have in your system.
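Sketched with four example drives and a pool named `tank`:

```shell
# List available disks (Ctrl-C exits without changes)
format

# Double-parity RAIDZ2 vdev (similar to RAID6); survives two failures
zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0

zpool status tank
```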
RAIDZ is ZFS’s implementation of RAID5. It uses a variable-width stripe for its parity, which allows for better performance than traditional RAID5 implementations. RAIDZ is typically used when you want the most out of your physical storage and are willing to sacrifice a bit of performance to get it. You can have a single disk failure in a RAIDZ array and still maintain all of your data.
To create a RAIDZ Zpool, first run the format command to find the disks that you have in your system.
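With three example drives (pool and device names are assumptions):

```shell
# List available disks (Ctrl-C exits without changes)
format

# Single-parity RAIDZ vdev (similar to RAID5); survives one failure
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

zpool status tank
```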
Mirrored Vdevs are equivalent to a RAID1 array, with the added bonus of checksum data to prevent silent data corruption. The performance of a Mirrored Vdev Zpool will be very similar to that of a RAID1 array.
To create a Mirrored Vdev Zpool, first run the format command to find the disks that you have in your system.
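As a sketch with two example drives:

```shell
# List available disks (Ctrl-C exits without changes)
format

# One two-way mirror vdev (similar to RAID1); device names are examples
zpool create tank mirror c1t1d0 c1t2d0

zpool status tank
```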
A ZFS Striped Vdev pool is very similar to RAID0. You get to keep all of the available storage that your drives offer, but you have no resiliency to hard drive failure. If one drive in a Striped Vdev Zpool fails, you will lose all of your data. You do still have checksum data to detect silent data corruption, but any physical failure of a drive will result in data loss. We strongly recommend never using this level of ZFS, as there is no resiliency to drive failure.
To create a Striped Vdev Zpool, first run the format command to find the disks that you have in your system.
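For completeness, a sketch with two example drives (again: one drive failure here loses the whole pool):

```shell
# List available disks (Ctrl-C exits without changes)
format

# Listing disks with no vdev keyword creates a plain stripe
# (similar to RAID0) -- no redundancy at all
zpool create tank c1t1d0 c1t2d0

zpool status tank
```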
ZFS RAID levels
When we evaluated ZFS for our storage needs, the immediate question became: what are these storage levels, and what do they do for us? ZFS uses terminology that sounds odd to someone familiar with hardware RAID, such as vdevs, Zpools, RAIDZ, and so forth. These are simply Sun’s names for forms of RAID that will be pretty familiar to most people who have used hardware RAID systems.