Disk Drive Selection

Our search for an affordable yet high-performance storage array led us to ZFS, OpenSolaris, and commodity hardware. To keep the affordable part of the build under control, we had to investigate all of our options for hard drives and available SATA technology. We finally settled on a combination of Western Digital RE3 1TB drives, Intel X25-M G2 SSDs, Intel X25-E SSDs, and Intel X25-V SSDs.

All Internal Drives

The whole point of our storage build was to give us a reasonably large amount of storage that still performed well. For the bulk of our storage we planned on using enterprise-grade SATA HDDs. We investigated several options, but finally settled on the Western Digital RE3 1TB HDDs. The WD RE3 drives perform reasonably well and give us a lot of storage for our money. They have enterprise features that make them suitable for use in a RAID subsystem, and they are backed by a 5-year warranty.

WD RE3 1TB Hard Drive

To accelerate read performance of our ZFS system, we will be employing the L2ARC caching feature of ZFS. The L2ARC stores recently accessed data and allows it to be read from a much faster medium than traditional rotating HDDs. This helps when deploying virtualized environments that all share a common base image. If your Windows/Linux/etc. installation is 5GB, is accessed a lot (think about reads from any system file), and is shared across 50 virtual machines, you could store all of that data on a really fast SSD and let all of the VMs read from that cache for their OS needs.

With that in mind, we decided to deploy our ZFS system with two 160GB Intel X25-M G2 MLC SSDs. This theoretically allows us to cache 320GB of the most frequently accessed data and drastically reduce access times. Intel specifies that the X25-M G2 can achieve up to 35,000 random 4K read IOPS and up to 8,600 random 4K write IOPS, significantly faster than any spindle-based hard drive available. The access time for those read operations is also significantly lower, reducing the time you have to wait for a read to finish. The only drawback of the X25-M G2 is that it uses MLC flash, which in theory limits the number of write operations the drive can perform before it is worn out. We will be monitoring these drives very closely to see if there is any performance degradation over time.

Intel X25M
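For reference, attaching cache devices to an existing pool is a one-line operation. This is only a sketch: the pool name `tank` and the device names are placeholders for whatever your OpenSolaris install assigns to the X25-Ms.

```shell
# Add two SSDs as L2ARC cache devices to the pool "tank".
# Cache devices are striped, not mirrored -- a cache device
# failure degrades performance but cannot lose data.
zpool add tank cache c2t0d0 c2t1d0

# Confirm the cache vdevs now appear under the pool.
zpool status tank
```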

To accelerate write performance we selected 32GB Intel X25-E drives. These will be the ZIL (log) drives for the ZFS system. ZFS records synchronous write operations in an intent log (the ZIL) before committing them to the main pool, so moving that log to SSD storage can greatly improve write performance. Since this log is touched on every synchronous write, we wanted an SSD with a significantly longer life span. The Intel X25-E drives use SLC flash, which can endure many more write cycles than MLC flash before failing. Since most of the operations on our system are writes, we had to have something with a lot of longevity. We also decided to mirror the drives, so that if one of them failed the log would not revert to a hard-drive-based log, which would severely impact performance. Intel quotes these drives at 3,300 random write IOPS and 35,000 random read IOPS. You may notice that the write figure is lower than the X25-M G2's; we were concerned enough about longevity that we decided the tradeoff in IOPS was worth it.

Intel X25E
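Adding the mirrored log pair is just as simple. Again a sketch, with placeholder pool and device names:

```shell
# Attach the two X25-E SSDs as a mirrored ZIL (slog) device.
# Mirroring the log means a single SSD failure will not force
# ZFS to fall back to the slower on-pool intent log.
zpool add tank log mirror c3t0d0 c3t1d0
```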

For our boot drives, we selected 40GB Intel X25-V SSDs. We could have gone with traditional rotating media for the boot drives, but with the cost of these drives going down every day we decided to splurge and use SSDs for the boot volume. We don't need the ultimate performance of the higher-end SSDs here, but having the boot volumes on SSDs will help reduce boot times in case of a reboot, with the added bonus of being low-power devices.

Intel X25V

Wednesday, May 19th, 2010 Hardware

12 Comments to Disk Drive Selection

  • TimeWaster10 says:

    WOW, thanks for sharing.

I’ll be interested to see how the X25-Ms fare.

    Thanks again, keep up the good work.

  • intel says:

    Can you help me understand a little better?

    base OS/opensolaris = SSD 40gb (solely for the speed/low power)

Cache drives: 2 x X25-M 160GB (striped, so 320GB) – ZFS can handle cache devices going offline with no data loss (degraded performance)

ZIL (log): 2x X25-E (these are mirror/raid1) – any data lost in the ZIL can be catastrophic for ZFS… or so I’ve read, as the log devices tell ZFS which data is where physically.

Is my information correct? How many writes are really needed for the ZIL – I mean, could you get away with 10,000 RPM Raptor drives and an SSD (if you mirror them, only the slowest-performing drive will matter in general, right)?

  • admin says:

I would not recommend using a 10,000 RPM Raptor drive for the ZIL. The ZIL needs a small, fast drive that responds quickly. For the ZIL, you should use an Intel X25-E or a pair of them in a mirror.

  • theoldlr says:

    You mentioned the SATA drives, “have enterprise features that make them suitable for use in a RAID subsystem.” Can you please elaborate on this? I understand you get 2 years longer warranty compared to a consumer level drive, but is there anything else to be gained? Even if consumer level drives fail sooner would the lower cost compensate (in your opinion)? BTW, very informative, helpful blog!

  • admin says:

The main advantage of an enterprise-grade SATA drive is TLER support in the firmware. Without it, SATA drives will often drop out of RAID configurations for seemingly no reason. Here is a wiki link on the topic:

Beyond the TLER issue, there is also vibration compensation. Enterprise-grade SATA drives have accelerometers on the PCB that measure vibration so the drive can compensate proactively when doing seeks. Without vibration compensation, a rack full of drives in a data center can actually receive enough vibration to increase seek times and degrade performance. Anyway, there is a fantastic video on YouTube that shows this happening: a guy shouts at a storage server and you can literally measure the performance drop from the added vibration:
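On drives that expose SCT Error Recovery Control, you can check (and on some models set) the recovery timeout yourself with smartmontools. This is a sketch: `/dev/sda` is a placeholder, and many consumer drives reject the set command or forget the setting across a power cycle.

```shell
# Query the current SCT ERC read/write timeouts (units of 100 ms).
smartctl -l scterc /dev/sda

# Attempt to cap error recovery at 7 seconds for reads and writes,
# similar to the TLER behavior of enterprise drives.
smartctl -l scterc,70,70 /dev/sda
```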

  • Florian says:

    Hey there, great blogging here.

    What kind of chassis do you use with the intel SSDs? Can you send any details.

    Looks to me like a 3.5″ chassis with interposers attached.

  • admin says:

    Florian: We used a product called an Icy Dock to convert the 3.5″ drive bays to 2.5″ for the SSD drives. NewEgg sells them cheap:

  • jcdmacleod says:

    What are your thoughts on 10k Velociraptors for storage? Several companies OEM them (Dell/HP) for their 10k SATA offerings.

  • If you are not using SAS expanders, I would say they should be OK. After a year or so of use, we have decided to completely abandon SATA and move to SAS for all of our storage needs. The price delta between 7200 RPM SATA and SAS is usually not a whole lot, and I would wholeheartedly recommend using SAS over SATA.

    With all that being said, the WD Velociraptors are not on the Nexenta HCL, so if you run into problems with them, they will probably request that you remove them from your configuration to continue troubleshooting.

  • jcdmacleod says:

    That brings up another point, I believe you used the SuperMicro Direct Attached version of their chassis, to avoid the expander issue?

    I’m currently looking at their SC216A model – 24 x 2.5″ – however, I am leaning towards SAS drives, as the price between higher-end SATA and SAS is nearly the same. What concerns me is the SATA connectivity for the L2ARC and ZIL SSDs. The 2.5″ SuperMicro chassis do not have space for interposers, although using a direct-attached version of the chassis would somewhat avoid this issue.

    I agree, the Velociraptors are not on the HCL, however, Nexenta seems to be slow in getting things added to it, which can be annoying to say the least. I suppose it is hard with the number of new generation products coming out.

    It would be interesting to get an updated post, for example, if you were to repeat all of this two years later, what would you choose now and why? Both from a technology and learning curve perspective.

  • hjmangalam says:

    Another request to update this info re: how well the SSDs have fared. Any failures with the SLCs or the MLCs? Have you ever tortured an MLC to death using it as a ZIL? Or do they survive surprisingly well?
    Inquiring minds…

  • So far we have had no failures with either the MLC or SLC drives. We’ve never used the MLC drives as SLOG drives, so they have seen no real abuse, but I would expect that anyone using these systems in a full production environment wouldn’t intentionally put MLC drives in as SLOG devices and abuse them if they expected them to last.
