ZFSBuild 2012

It’s been two years since we built our last ZFS-based server, and we decided it was about time to build an updated system.  The goal is to build something that exceeds the functionality of the previous system while costing approximately the same amount.  The original ZFSBuild 2010 system cost US $6,765 to build, and for what we got back then, it was a heck of a system.  The new ZFSBuild 2012 system will match the price point of the previous design, yet offer measurably better performance.

The new ZFSBuild 2012 system consists of the following:

SuperMicro SC846BE16-R920 chassis – 24 bays, single expander, 6Gbit SAS capable.  Very similar to the ZFSBuild 2010 chassis, with slightly beefier power supplies and a faster SAS interconnect.

SuperMicro X9SRI-3F-B motherboard – Single-socket, Xeon E5-compatible motherboard.  This board supports 256GB of RAM (over 10x the RAM we could support in the old system) and significantly faster, more powerful CPUs.

Intel Xeon E5-1620 – 3.6GHz latest-generation Intel Xeon CPU.  More horsepower for better compression and faster workload processing.  ZFSBuild 2010 was short on CPU, and we found it lacking in later NFS tests.  We won’t make that mistake again.

20x Toshiba MK1001TRKB 1TB SAS 6Gbit HDDs – 1TB SAS drives.  The 1TB SATA drives we used in the previous build were OK, but SAS drives report much better information about their health and performance, and for an enterprise deployment they are absolutely necessary.  These drives are only $5 more per drive than what we paid for the drives in ZFSBuild 2010.  Obviously, if you’d like to save more money, SATA drives are an option, but we strongly recommend using SAS drives whenever possible.

LSI 9211-8i SAS controller – Moving the SAS duties to a Nexenta HSL-certified SAS controller.  Newer chipset, better performance, and easy replaceability in case of failure.

Intel SSDs all around – We went with a mix of 2x Intel 313 (ZIL), 2x 520 (L2ARC), and 2x 330 (boot, in the internal cage) SSDs for this build.  We have less ZIL space than the previous build (20GB vs 32GB), but rough math says we shouldn’t ever need more than 10-12GB of ZIL: the ZIL only has to absorb the last several seconds of synchronous writes between transaction group commits, so even with a very fast front-end network pushing on the order of 1GB/s, ten seconds of writes is only about 10GB.  We will have more L2ARC (480GB vs 320GB), and the boot drives are roughly the same.  See the pool layout sketch after this parts list for how the ZIL and L2ARC devices fit in.

64GB RAM – Generic Kingston ValueRAM.  The original ZFSBuild was based on 12GB of memory, which two years ago seemed like a lot of RAM for a storage server.  Today we’re going with 64GB right off the bat, using 8GB DIMMs.  The motherboard has the capacity to go to 256GB with 32GB DIMMs.  With 64GB of RAM, we’re going to be able to cache a _lot_ of data.  My suggestion is not to go overboard on RAM to start with, as you can run into issues as noted here: http://www.zfsbuild.com/2012/03/05/when-is-enough-memory-too-much-part-2/
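
To make the drive roles concrete, here is a minimal sketch of how a pool built from these parts might be laid out from the command line on NexentaStor or OpenIndiana.  The device names (c1t0d0 and so on) are placeholders for whatever the format utility reports on your hardware, and the data layout shown (mirrored pairs, with only two pairs listed) is just one reasonable arrangement rather than the configuration we will necessarily benchmark with:

    # Data pool: mirrored pairs of the Toshiba 1TB SAS drives (only two pairs shown).
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

    # ZIL: mirror the two Intel 313s so losing one SSD cannot lose in-flight synchronous writes.
    zpool add tank log mirror c2t0d0 c2t1d0

    # L2ARC: the two Intel 520s as cache devices (cache vdevs are striped, never mirrored).
    zpool add tank cache c3t0d0 c3t1d0

    # Put the E5-1620's extra horsepower to work on the data itself.
    zfs set compression=on tank

Mirroring the log device is what makes a small ZIL safe; the cache devices, by contrast, hold nothing that cannot simply be re-read from the pool, so there is no point in mirroring them.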

For the same price as our ZFSBuild 2010 project, the ZFSBuild 2012 project will include more CPU, much more RAM, more cache, better drives, and a better chassis.  It’s amazing what a difference two years makes when building this stuff.

Expect that we’ll evaluate Nexenta Enterprise and OpenIndiana, and revisit FreeNAS’s ZFS implementation.  We probably won’t go back over the Promise units, as we’ve already discussed them, they likely haven’t changed, and we no longer have any sitting idle to test with.

We are planning to re-run the same battery of tests that we used for the original ZFSBuild 2010 benchmarks.  We still have the same test blade server available, so we can reproduce the testing environment.  We also plan to run additional tests using working sets of various sizes.  InfiniBand will be benchmarked in addition to standard gigabit Ethernet this round.

So far, we have received nearly all of the hardware.  We are still waiting on a cable for the rear fans and a few 3.5"-to-2.5" drive bay converters for the ZIL and L2ARC SSDs.  As soon as those items arrive, we will place the ZFSBuild 2012 server in our server room and begin benchmarking.  We are excited to see how it performs relative to the ZFSBuild 2010 design.

Here are a couple of pictures we have taken so far on the ZFSBuild 2012 project:

Tuesday, September 18th, 2012 Hardware

61 Comments to ZFSBuild 2012

  • jzsjr says:

    Matt,
    I mirrored your build except for the cache drives, where I bought two Samsung 840 250GB drives. Everything is working fine with Nexenta Community Edition, but I don’t get any drive lights showing on the two cache drives. I swapped stuff around and the backplane LEDs are working fine. I’m just curious whether this is the way it is with the cache drives not showing any activity, or whether there is something up with using Samsung 840 SSDs? Any ideas?
    thanks,
    Jim

  • We have also had problems with drive activity lights not working for SATA drives on our builds. ZFSBuild 2010 did not have this problem, but we were not using the MPTSAS driver for our drives back then. I believe it has something to do with mixing SAS and SATA drives while using the MPTSAS driver. Going all SATA or all SAS would, I believe, resolve the problem, though I would not recommend the all-SATA route.
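
    In the meantime, if you need to figure out which bays the Samsung cache drives are sitting in without working activity lights, you can blink a slot’s locate LED from the LSI controller instead. Here is a rough sketch, assuming LSI’s sas2ircu utility is installed and the drive happens to be at enclosure 2, slot 5 on controller 0 (the enclosure and slot numbers are placeholders; run the DISPLAY command first to find the real ones):

        sas2ircu 0 DISPLAY            # list enclosures, slots, and the drives behind controller 0
        sas2ircu 0 LOCATE 2:5 ON      # blink the locate LED on enclosure 2, slot 5
        sas2ircu 0 LOCATE 2:5 OFF     # turn the LED back off when you are done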

  • I’ve added you on Skype – feel free to ping me with questions.

  • edattoli says:

    Hi!
    A few days ago I got all the parts and I finished assembling my ZFSB with the same parts mentioned in this blog.
    I installed Nexenta on this new server, but I have a problem: the system BIOS recognizes the two SATA boot disks connected directly to the motherboard, but those disks do not appear in the BIOS boot options; only the disks attached to the backplane show up (the disks attached to the motherboard are not shown).

    Can you help me with this issue?

    Regards.-

  • What you’ll likely need to do is configure them as individual RAID0 devices. Once they are configured as RAID0 devices, Nexenta should see them and let you use them.

  • edattoli says:

    Matt,
    is the 1:46pm answer for me?
    In that case: I have no problems with Nexenta itself. At installation time Nexenta can see all of the disks (SAS and SATA, both the ones connected to the motherboard and the ones connected to the LSI), and we installed the OS on the SATA SSD boot drives. But after the installation, on reboot, the server can’t boot because I can’t select the SATA drives in the BIOS boot order.

    Regards.-
    Edgar

  • Edgar – yes, that response is for you. I assumed you were having problems with Nexentastor not seeing the drives. If you are not seeing them in the boot order, you may have too many bootable devices showing up. Try disabling the boot option ROMs for the devices you don’t boot from (CD-ROM, HBA, network, etc.). I have seen cases where a boot device will not show up because there are too many other devices competing for those boot slots.

  • edattoli says:

    Matt thanks for the reply.
    We were able to get in touch with Supermicro support, and they sent us a new BIOS that solves the problem. Now, with the new BIOS, the system boots correctly.
    If you want I can send you the BIOS files.


  • fredsherbet says:

    Hi, I was wondering what you use to connect to this server? I’m hitting the limits of gigabit Ethernet, and looking for ways to set up a ZFS server as a working drive for video editing. My ZFS server is faster than a local HDD, but to get any faster I’d need a faster-than-gigabit connection.

    Thanks

  • We use 20Gbit InfiniBand to connect these systems. InfiniBand is pretty fickle, though, and I’d probably recommend just upgrading to 10Gbit Ethernet if you’re running into throughput limits.
