ZFSBuild2012 – Mission Statement

We wanted to take a moment to discuss what to expect from the series of ZFSBuild2012 articles we are posting. These articles are not intended to be a performance shootout for hardware, software, networking, or anything else.

We have obviously run benchmarks as part of this project, but the benchmarks are not the primary goal of the project. The goal of this project is to share information about how to build a low-cost, high-performance, easy-to-manage ZFS-based SAN. We have built and heavily tested this design. The ZFSBuild2012 design can deliver over 100,000 IOPS and costs about $7k to build.

We have received a lot of requests for specific benchmarks and for comparisons between various solutions. While it might be interesting to make thousands of comparisons between every possible operating system and configuration combination, that is not the focus of this series of articles.

We will be posting benchmark results in an effort to explain the performance difference between the ZFSBuild2012 design and the ZFSBuild2010 design. It is probably not a spoiler to let everybody know that the ZFSBuild2012 design is much faster than the ZFSBuild2010 design. Over the past two years, we learned a lot about designing better ZFS-based SANs, and the underlying hardware got a lot faster. The purpose of comparing the two designs is merely to show how much performance can be gained from the new design. We used the same benchmark tools that we ran back in 2010, and we even used the same blade for running the benchmarks, so the benchmarks we will post comparing the two designs are a true apples-to-apples test.

We will also be posting benchmarks comparing InfiniBand performance across different configurations of the same hardware running Nexenta. Again, the purpose of those benchmarks will be to help people find the correct way to configure InfiniBand, not to stage a fanboy-style shootout over various driver settings.

Unfortunately, it was not practical for us to run benchmarks comparing 10GigE, FC, and InfiniBand. While we do have access to this technology, we did not have all of it installed in the same blade center, so there was no good way for us to run a true apples-to-apples comparison of the various network interconnects. Additionally, we did not want to buy more networking hardware just to install it in the test blade center for benchmarking. But as we already mentioned, this series of articles is not intended as a shootout. The purpose of these articles is to share information about how to build a reasonably low-cost SAN that can deliver over 100,000 IOPS.

Ultimately, it is up to you to choose what you do with the information shared in our series of articles on the ZFSBuild2012 design. We used Nexenta for most of the benchmarks and we deployed the unit into production using Nexenta, but we are not trying to convince anybody to give up their favorite storage appliance software in favor of Nexenta. We chose to use Nexenta Community Edition in the ZFSBuild2012 solution because it offered excellent performance and because it can automatically notify the admin of a failed drive (including flashing an LED on the drive bay when it is time to replace a failed drive). We fully understand that some people will choose to run a bare operating system (such as OpenSolaris, OpenIndiana, or FreeBSD) and others will choose to run FreeNAS or ZFSGuru. There is nothing wrong with that. You should run whatever you are comfortable with.

We hope you enjoy the ZFSBuild2012 series of articles. (We will be posting them soon.)


Friday, November 30th, 2012 | Benchmarks

10 Comments to ZFSBuild2012 – Mission Statement

  • elusion says:

    For the past 30 days I have visited this site every day just to see if there are any updates! The new ZFSBuild2012 configuration guides will definitely help many people, including me, in setting up a ZFS SAN. I would definitely love to know what VM platform you run on your blades? Xen/VirtualBox/KVM? And for the past 30 days I was wondering what version of Nexenta you were using, and I am happy to learn that even the Community edition can run InfiniBand.

  • We are working on writing and polishing all of the articles related to the 2012 build. We will be posting them as they are done.

    As far as the platform we run on, the vast majority of our infrastructure runs Windows 2008 R2 SP1 and Hyper-V, along with some Xen hosts.

  • Brendon says:

    Question: how are you getting Community Edition to give you disk failure info? I believe (unless something has changed) that CE doesn’t have the SMART module available, only Enterprise Edition. That would be VERY helpful to know, since SMART support is one of the things pushing me away from Nexenta CE. Thanks, and like the other poster, I’ve been visiting regularly to get your updates. I’m currently building my own 2nd-gen ZFS NAS. Thank you.

  • Version 3.1.3.5 appears to have all of the disk info that we need. The key part is that the JBOD we are using is on the Nexenta HSL and has a slotmap available to identify drive locations.

  • lifeismycollege says:

    You are killing us. We are all so excited we can’t stand it. 😉

  • Brendon says:

    I know this is an old article, but can you point me at or share your slotmap file? I’ve looked around but couldn’t find it; I have the very similar SC847.
    Thanks for all your efforts.

  • For ZFSBuild2012 we are using the built-in slotmap that comes with Nexenta Community Edition 3.1.3.5 – if you go to NMC and type “setup jbod” it will give you options to assign the JBOD type to your system. The SC847 is in the list (we are using SC847FRONT for our system).
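
    As a rough illustration (the hostname below is just a placeholder, and the exact menu text may vary between NMC versions), the session looks something like this:

        nmc@zfsbuild:/$ setup jbod
        # NMC then lists the detected JBODs and lets you assign a model from its supported list;
        # on our system we selected SC847FRONT.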

  • schism says:

    If you wanted to attach another JBOD to a unit like this, what would have to be done?

    I know you’d need an HBA in the head server that has an external connector that connects down to your new JBOD — but what would it connect to exactly? How would you get the backplane of the additional JBOD connected to the HBA on the head-end? And what if you wanted to attach another JBOD after that? Do you have to attach back to the head, or is there a way to attach it to the JBOD and daisy-chain?

  • You can connect it in multiple ways. You can daisy-chain from the expander on the backplane, come directly off of the HBA, or plug the HBA into a SAS switch and fan out from the SAS switches. If you look through the manuals for the SuperMicro chassis, they have fantastic diagrams showing how the daisy chaining would work.

  • […] speed.  Some ZFS users are reporting in excess of 100,000 IOPS in their installations using fairly generic hardware.  That’s […]
