ZFSBuild2012 – Benchmark Progress

We are still running benchmarks for the ZFSBuild2012 SAN building project. We completed all of the Nexenta benchmarks this past week, running every networking configuration to thoroughly test both 1Gbps Ethernet and 20Gbps InfiniBand on the Nexenta platform. We completed the ZFSGuru (FreeBSD 9.1) benchmarks this morning (both 1Gbps Ethernet and 20Gbps InfiniBand), and we are currently running the FreeNAS 8.3 benchmarks.

Saturday, October 27th, 2012 Benchmarks

21 Comments to ZFSBuild2012 – Benchmark Progress

  • JB says:

    When will you start posting those results? Or are you going to wait till the very end and do a comparison of all of them?

  • Simon Andersen says:

I am really looking forward to the results. We are building something similar, but with more disks and 2.5″ drives.

  • Simon – Would love to hear more about this. I would assume that with more spindles (and possibly higher spindle speeds), some of your write throughput would be significantly higher. Keep us posted!

  • JB says:

    What’s got a UI and now has ZFS v28??… FreeNAS!! I sincerely hope you have done tests using their latest release from 3 days ago, 8.3 🙂 … that would certainly provide a more realistic performance comparison, since FreeNAS used to run a significantly older version of ZFS, which many attributed the poor performance to.
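
    For reference, a quick way to check which ZFS pool versions a host supports and which version a given pool is running (the pool name “tank” below is just a placeholder):

        # List the ZFS pool versions this host's zpool tool supports.
        zpool upgrade -v

        # Show the on-disk version of a hypothetical pool named "tank".
        zpool get version tank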

  • JB says:

    Funny.. I totally did not read the last sentence.. doh.

  • JB says:

    How do you get FreeNAS working with InfiniBand, since they clearly state they do not support it?

  • We did not get InfiniBand working with FreeNAS. We will be comparing InfiniBand results only on platforms that support InfiniBand, and 1Gbit Ethernet results on platforms that support 1Gbit Ethernet. We will not be comparing Nexenta 20Gbit InfiniBand results against FreeNAS 1Gbit Ethernet results.

  • eegee says:

    Glad to see FreeNAS 8.3 made it to RELEASE in time.

  • andyshinn says:

    Are you planning to benchmark any Linux RAID / SAN solutions for comparison (such as Linux kernel 3.x with the linux-iscsi tools)? Obviously, the ZFS file system isn’t an apples-to-apples comparison, but I’d be interested in knowing how the same hardware performs on different platforms.

  • wroshon says:

    Great to see that you are sharing your experience with this new build. Your documentation of your first build was the best I found for an enterprise-level ZFS storage system.

    Have you considered including NAS4Free 9.1.0.1, the continuation of the FreeNAS 7 project, in your comparison? My understanding is that it is a fairly simple matter to install and use a FreeBSD 9.1 driver if the one you need is not included in the NAS4Free build.

    I’d expect the benchmarks to be nearly identical to ZFSGuru/FreeBSD 9.1, since NAS4Free is currently built on FreeBSD 9.1, but it would be nice to have your opinion of the NAS4Free management interface and its suitability for your environment.

    I’d also value your opinion of Napp-IT on an Illumos distribution.

    Capacity-based licensing and support costs for Nexenta Enterprise eliminated that option for the build I’m working on (80+TB). The first-year cost is more than twice the hardware cost for what is second/third-tier storage for us.

  • AndyShinn – While we’d love to test every permutation of every OS and every file system, it’s not feasible to test _everything_. Since the stated goal of this site is “A friendly guide to building ZFS based SAN/NAS solutions”, we will probably not be looking at Linux implementations that are not ZFS based.

  • mikolan says:

    How is the benchmark coming along?

  • Benchmarking is complete. We are now writing articles about the config, IB, and other interesting tidbits. Once we’ve got a comprehensive set of articles laid out, we’ll begin posting them!

  • Jae-Hoon.Choi says:

    I have been interested in your friendly guide to ZFS SANs since your previous system. I also run many tests with my Supermicro hardware in my personal lab.

    Your system consists of a SuperMicro chassis and an LSI 9211-8i 6G SAS HBA. I experienced a problem with the SuperMicro CSE743TQ/CSE847E16 and LSI 9211-8i combination: the SAS HDD bay order was displayed randomly on Nexenta Core, CE, and OpenIndiana every time. The solution I found was to flash the LSI 9211-8i with SuperMicro’s firmware (a rough sketch of that kind of reflash follows this comment).

    Below are my benchmark results from my SDR 10Gb InfiniBand test lab.

    http://www.nexentastor.org/boards/1/topics/7166

    I’ll be waiting for your posts about your hardware and InfiniBand configuration.
    I’d like to see screenshots of your Nexenta Enterprise dataset and disk configuration, too!
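
    As a rough sketch of the reflash described above, LSI SAS2 HBAs are typically flashed with LSI’s sas2flash utility; the firmware filename here is a placeholder for whatever image SuperMicro supplies:

        # Show the LSI SAS2 controllers that sas2flash can see.
        sas2flash -listall

        # Flash the HBA in advanced mode with the vendor-supplied image
        # (the filename 2118it.bin is hypothetical).
        sas2flash -o -f 2118it.bin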

  • Tommytis says:

    Awesome News… Can’t wait

  • Johan says:

    I hope FreeBSD comes out positive this time.

    I use it mainly at my sites.
    Even if it performs badly, I am not going to switch.
    FreeBSD has served me well over the years, since 4.x.
    But it is always nice to know it is on par with the competition :D.

    gr
    Johan

  • pedro says:

    Hi Guys,

    Really looking forward to this; I’m checking the site daily 🙂 It’s been almost a month now 🙁 Any ETA?

    Thanks 🙂

  • nOon29 says:

    Hello, this seems to be a really good configuration and I hope to see the test results, but in your ZFSBuild2012 article you don’t explain whether you use RAIDZ, RAIDZ2, or RAID10.
    The results will be very different, particularly for random writes.

    Another point: I think it’s odd that you don’t test 10Gb Ethernet.

    And a last point: why not try a ZeusRAM for the ZIL? (An SSD is good, but with the write cliff I think it’s just not the right thing for the ZIL.)

  • @nOon29 – ZFSBuild2012 is striped mirrors (RAID10), just like ZFSBuild2010. We do not use any other type of pool at ZFSBuild (a minimal pool-layout sketch follows this reply).

    We have considered 10Gbit Ethernet, but our infrastructure at this point does not support it. We would have to make major investments in our BladeCenter infrastructure to support 10Gbit Ethernet, and with 20Gbit InfiniBand and RDMA, it seems like a wasted investment to slow things down.

    ZeusRAM has been on our radar for a long time, and we do have access to some systems that have ZeusRAM disks in them. We do not use them in the ZFSBuild systems due to cost; adding two ZeusRAM devices would nearly double the cost of the ZFSBuild2012 system. We are not aiming to replace $200,000 SAN arrays with this system, and as such it is not built to those specs. At some point we may do a full review of one system we have access to that is an HA cluster with lots of RAM, ZeusRAM disks, and 10Gbit Ethernet, but at this time we are choosing to focus on SMB-sized systems.
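
    As a minimal sketch of that striped-mirror (RAID10) layout (the pool and device names are placeholders, not the actual ZFSBuild2012 devices):

        # Each "mirror" keyword starts a new two-disk mirror vdev;
        # ZFS stripes writes across all mirror vdevs, which is what
        # gives the RAID10-style behavior.
        zpool create tank \
            mirror da0 da1 \
            mirror da2 da3 \
            mirror da4 da5

        # Confirm the layout: the pool should show three mirror vdevs.
        zpool status tank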

  • schism says:

    How many IOPS can one reasonably push over a single gigabit ethernet link?

  • The number of IOPS is directly tied to the size of each IO. Typically you see people quote 4k IOPS as the benchmark, and the theoretical limit for a gigabit link works out to roughly 30,500 IOPS (125 MB/s divided by 4KiB per IO). Obviously there is TCP overhead, so you aren’t going to see that absolute maximum. If you are looking at 128k IOs, you’re going to see fewer than 1,000 IOPS.
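
    A quick back-of-the-envelope check of those ceilings (ignoring all TCP/IP and protocol overhead):

        # 1 Gbit/s = 1,000,000,000 bits/s = 125,000,000 bytes/s.
        # Theoretical IOPS ceiling = link throughput / IO size.
        echo $((125000000 / 4096))     # 4 KiB IOs   -> ~30,517 IOPS
        echo $((125000000 / 131072))   # 128 KiB IOs -> ~953 IOPS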
