ZFSBuild2012 – Nexenta vs FreeNAS vs ZFSGuru

Back in 2010, we ran benchmarks comparing the performance of FreeNAS 0.7.1 on the ZFSBuild2010 hardware with that of Nexenta and OpenSolaris on the same hardware. (http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/)  FreeNAS 0.7.1 shocked all of us by delivering the absolute worst performance of anything we had tested.

With our ZFSBuild2012 project, we definitely wanted to revisit the performance of FreeNAS, and we were fortunate that FreeNAS 8.3 was released while we were running benchmarks on the ZFSBuild2012 system.  It is obvious that the FreeNAS team has worked on the performance issues, because version 8.3 of FreeNAS is very much on par with Nexenta in terms of performance (at least in our iSCSI-over-gigabit-Ethernet benchmarks).

We believe the massive improvement in FreeNAS’s performance is the result of more than simply using a newer version of FreeBSD.  FreeNAS 8.3 is based on FreeBSD 8.3 and includes ZFS v28.  To test our theory, we benchmarked ZFSGuru 0.2.0-beta 7 installed on FreeBSD 9.1, which also includes ZFS v28.  The performance of ZFSGuru was a fraction of that of FreeNAS 8.3.  This led us to believe that the FreeNAS team did more than simply install their web GUI on top of FreeBSD.  If we had to guess, we suspect the FreeNAS team did some tweaking in the iSCSI target to boost performance.  In theory, the same tweaking could be done in ZFSGuru to produce similar gains.  Maybe a future version of ZFSGuru will include the magic tweaks that FreeNAS already includes, but for now ZFSGuru is much slower than FreeNAS.  If you want to run a ZFS based SAN using FreeBSD, we strongly recommend FreeNAS.  We do not recommend ZFSGuru at this point in time, since it is vastly slower than both Nexenta and FreeNAS.

The software versions we used during testing were Nexenta 3.1.3, ZFSGuru 0.2.0-beta 7 (FreeBSD 9.1), and FreeNAS 8.3 (FreeBSD 8.3).  Click here to read about benchmark methods.

We realize that newer versions have been released since we ran our benchmarks.  We will not be re-running any of the benchmarks at this time, because we have already placed the ZFSBuild2012 server into production.  The ZFSBuild2012 server currently runs Nexenta as a high performance InfiniBand SRP target for one of our web hosting clusters.

All tests were run on exactly the same ZFS storage pool, using exactly the same ZFSBuild2012 hardware.

We did not run any InfiniBand tests with FreeNAS 8.3, because FreeNAS 8.3 does not have any InfiniBand support.  We did run some InfiniBand tests with ZFSGuru, but those results will appear in a different article specifically dedicated to that topic.

 
IOMeter 4k Benchmarks:
[Four charts: IOMeter 4k benchmark results]

IOMeter 8k Benchmarks:
[Four charts: IOMeter 8k benchmark results]

IOMeter 16k Benchmarks:
[Four charts: IOMeter 16k benchmark results]

IOMeter 32k Benchmarks:
[Four charts: IOMeter 32k benchmark results]

Friday, January 25th, 2013 Benchmarks

23 Comments to ZFSBuild2012 – Nexenta vs FreeNAS vs ZFSGuru


  • CiPHER says:

    Oh my god, what did you do with ZFSguru to get such awful performance?

    In your article, you link to ‘benchmark methods’, but I cannot find any information on how you tested ZFSguru.

    Did you benchmark right off the LiveCD, for instance? The LiveCD limits the ZFS ARC to 64MB, which is obviously not suitable for benchmarking. And if you performed an installation, did you use any memory tuning? This can lead to drastically different performance figures and could explain such a great disparity between FreeNAS and ZFSguru.

  • admin says:

    CiPHER: We installed ZFSGuru to the internal boot drives. We definitely did NOT do any benchmarks using the LiveCD. We did not do any custom memory tuning with any of the storage application software packages. We assumed Nexenta, FreeNAS, and ZFSGuru would each be tuned out of the box with performance in mind.

    Before you nit-pick how we ran our benchmarks, please run benchmarks against the same hardware using Nexenta, FreeNAS, and ZFSGuru.

  • Pantagruel says:


    “We did run some InfiniBand tests with ZFSGuru, but those results will appear in a different article specifically dedicated to that topic.”

    Nice, eagerly awaiting that specific topic.
    Will you be elaborating on how to actually set up IB in ZFSguru (FreeBSD 9.x), things like SRP/iSCSI/etc.?

  • analog_ says:

    Any particular reason why the most native/vanilla/reference option hasn’t been tested? I’m talking about OpenIndiana. Also very interested in IB (can’t get my MHGH28 working properly).

  • admin says:

    analog_: We did run tests on OpenSolaris and OpenIndiana during our ZFSBuild2010 project. For the ZFSBuild2012 project, we decided we were only going to run tests on storage appliances (not bare operating systems). Since OpenIndiana is a bare operating system and not a storage appliance, we did not run benchmarks on it for the ZFSBuild2012 project. We touched on this in our ZFSBuild2012 Mission Statement at http://www.zfsbuild.com/2012/11/30/zfsbuild2012-mission-statement/

    As for IB, did you get an IB subnet manager running? (such as OpenSM) Can you see the IB NICs on the storage side? Have you installed OFED on your compute nodes? Can you see the IB NICs on the compute nodes? Those are the first steps to getting IB working.

    Beyond that, set up an IP on each IB NIC. Initially, unplug all of the multipathing cables, because sometimes IB has weird special case problems with multiple cables in the same IB NIC. We recommend only having one cable and one IP per physical IB NIC during initial testing and debugging. After you have things working with one cable per NIC, then you can try adding another cable and IP per NIC.
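
    A rough sketch of those first bring-up steps on a Linux compute node with OFED installed (interface names and addresses below are illustrative, not from our setup):

        # start a subnet manager somewhere on the fabric (one active per IB subnet)
        opensm -B

        # check the HCA port -- it should report State: Active once the SM is up
        ibstat

        # give the IPoIB interface on each node an address (one cable, one IP per NIC)
        ifconfig ib0 192.168.10.1 netmask 255.255.255.0 up   # storage node
        ifconfig ib0 192.168.10.2 netmask 255.255.255.0 up   # compute node

        # confirm the nodes can reach each other over IPoIB
        ping -c 3 192.168.10.2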

    Once you have IPoIB working and each compute node can talk to the storage node, then work on configuring SRP. SRP offers massively more performance than IPoIB, but it is best to debug IB in stages. Start by getting all of the IP stuff working in the initial debugging stages, and then set up SRP. Even though you won’t use IP with SRP, it is easier to debug it that way. Once you have confirmed all of the nodes can talk to each other over IB, it is actually really easy to get SRP working with Nexenta or OpenIndiana.
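
    Once IPoIB checks out, the SRP stage with COMSTAR on OpenIndiana or Nexenta can look roughly like this sketch (pool and volume names are placeholders):

        # create a zvol to export as a SCSI logical unit
        zfs create -V 100G tank/srpvol

        # enable the COMSTAR SRP target service, pulling in its dependencies
        svcadm enable -r ibsrp/target

        # register the zvol as a logical unit; create-lu prints the LU GUID
        stmfadm create-lu /dev/zvol/rdsk/tank/srpvol

        # expose the LU to initiators (use the GUID printed by create-lu)
        stmfadm add-view 600144f0...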

    Also, make sure you have the latest firmware in your IB NICs. The latest firmware is required by the latest OFED drivers. If you have an old NIC with old firmware, it will give the latest OFED some grief.

  • CiPHER says:

    Thanks for your reply, Admin. Please understand me correctly: I highly value the work and effort you put into benchmarking, and your publishing the results is much appreciated.

    However, it is very strange that you should get such drastic performance differences. The IOps scores might keep people from analyzing the results correctly, but they translate to only 10MB/s of sequential write performance for ZFSguru, while the other results show throughput in the 90MB/s range, which is where you would expect performance to be on a 1 gigabit network.

    If what you say is true, and during the ZFSguru installation you did not select any memory tuning profile (it is an option at step 4), then ZFSguru should perform the same as vanilla FreeBSD 9.1. You mentioned that FreeNAS might be ‘optimized’ for performance, but as far as I know it is the other way around: FreeNAS did not optimize loader.conf and had ‘vanilla FreeBSD performance’, while ZFSguru employs a memory tuning profile that changes performance characteristics from vanilla FreeBSD. Tuning the loader configuration may also negatively impact performance, but I am amazed and simply appalled that you only got 10MB/s out of it.
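
    For readers unfamiliar with these profiles: this kind of memory tuning is done through ZFS tunables in /boot/loader.conf on FreeBSD. A purely illustrative sketch (the values are made up, not ZFSguru’s actual profile):

        # /boot/loader.conf -- illustrative values only
        vfs.zfs.arc_max="40G"           # cap the ARC to leave headroom for other consumers
        vfs.zfs.arc_min="4G"            # minimum ARC size under memory pressure
        kern.ipc.nmbclusters="262144"   # extra mbuf clusters for heavy network I/O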

    For comparison: my low-end AMD Brazos box with two notebook disks already pushes beyond 85MB/s over gigabit (Samba). Surely, with 8 CPU cores, 48GiB RAM and 18 SAS drives, you should be able to push more than 10MB/s out of that hardware.

    It could be that the disparity in performance is due to some issue with interrupts or drivers in FreeBSD 9.1 that does not occur in the FreeBSD 8.3 employed by FreeNAS. I would very much like to find out, but I am afraid it will be very difficult to mimic your exact benchmark setup. You have very specific server hardware, a complicated test setup involving iSCSI, and the test methods are not well enough defined to allow the results to be verified with similar hardware. Since you have put the box into production, I also understand that you cannot do any more tests.

    However, this has sparked my interest in setting up community benchmarking, where people at home benchmark their own systems using a variety of NAS appliances. That would make it possible to ignore clearly deviating results and give a more balanced performance perspective on more generally available hardware.

    If in the future you want to benchmark again, I would very much like to help by looking at your test setup and making suggestions that make producing reliable benchmark data easier.

  • admin says:

    CiPHER: We also value your contributions to the ZFS community. We have seen you help other people with ZFS related issues repeatedly on the ZFSGuru forum in addition to your help developing ZFSGuru. We respect your contributions and we appreciate your feedback regarding the benchmarks.

    You mentioned Samba in your testing. We did not do any SMB/CIFS benchmarks. All of our benchmarks in this article were using iSCSI. It is very possible that ZFSGuru performs differently with iSCSI than with SMB/CIFS. We are not interested in SMB/CIFS performance. Our primary interest is in block level SAN style performance such as iSCSI and SRP.

    Back in 2010, FreeNAS 0.7.1 delivered absolutely terrible performance in the iSCSI based benchmarks. Something in FreeNAS 0.7.1 simply caused it to hold itself back from performing at its full potential in iSCSI tests. There may be some similar issue affecting ZFSGuru. If we had to guess, we would bet it is something to do with the iSCSI target on FreeBSD (maybe some settings in the iSCSI target). FreeNAS 8.3 does not suffer the same performance problem, so it is obvious that awesome performance can be delivered through iSCSI on FreeBSD. ZFSGuru developers should look closely at the differences in iSCSI settings on FreeNAS 0.7.1, FreeNAS 8.3, and ZFSGuru 0.2.0.

    We have posted all of the information you would need to reproduce our benchmarks. We have posted the IOmeter file, explained the setup, and posted all of the details about all of our hardware involved in the testing. There is nothing overly complicated about this. Our test methods are well defined. I urge you to try running some iSCSI based tests with a variety of storage appliances before you try to discredit our testing methods.

    The issue with iSCSI performance is not merely a FreeBSD issue either. At one point in time, OpenSolaris (really old version) had really bad iSCSI performance, but these days OpenSolaris, OpenIndiana, and Nexenta set the bar for awesome iSCSI performance. In the case of OpenSolaris, the iSCSI improvements were delivered by replacing the iSCSI target code with better code. Settings and tweaks were not enough in that case. My point is merely that iSCSI bottlenecks have affected other software and those bottlenecks were corrected because people took the issue seriously and fixed them. Do some iSCSI based testing to compare ZFSGuru, FreeNAS, and Nexenta, and then figure out how to improve ZFSGuru to match the performance of FreeNAS and Nexenta over iSCSI. Good luck with it.

  • nexentaderek says:

    I’d be curious how NexentaStor 3.1.3.5 or, even better, NexentaStor 4.0 would compare in your benchmarks. When is ZFSBuild2013? 😉

  • admin says:

    nexentaderek: We are curious about that as well. We would love to run benchmarks comparing Nexenta 3.1.3, 3.1.3.5, and (eventually) 4.0. We cannot do that right now because the ZFSBuild2012 hardware is already being used in production. We are talking internally about building another ZFSBuild2012-style server to use as a cold spare. If/when we do that, we might be able to use it for running more benchmarks.

    As for ZFSBuild2013 (or ZFSBuild2014), we have some hardware design ideas for it, but nothing set in stone at this point. We are using the ZFSBuild2012 hardware design in production at this point and we are extremely happy with the performance and reliability. In production, we are using Nexenta 3.1.3.5 with InfiniBand SRP.

  • palesius says:

    Inspired by your 2012 build, we’ve been setting up a very similar system to host iSCSI targets for a Windows 2012 virtual host.

    Main differences in hardware:
    Motherboard: X9SRH-7TF (we went with this one for the onboard dual 10Gb Ethernet and the built-in LSI 2308 SAS controller)
    Memory: only 32GB
    Drives: same Toshiba drives, but only 9 (4 mirrored vdevs + 1 spare)
    L2ARC: no L2ARC drives
    ZIL: a single Mushkin Chronos SSD

    OS: we needed to keep our software costs down, so Nexenta wasn’t really an option now that they have changed the licensing. We started with OpenIndiana, but ran into a weird iSCSI issue. After less than an hour of heavy iSCSI activity, performance would drop from the 400MB-800MB/sec range to the 2-4MB/sec range (and locally too, not just over iSCSI). Stopping all iSCSI activity seemed to stop the problem, but it kept happening. We then moved to FreeBSD 9.0. The only changes from the defaults were compiling an updated driver for the 10Gb Ethernet and configuring it (we had to disable LRO because it was dropping packets, but enabled TX/RX checksums and TSO). We also downloaded the most recent istgt (0.5 20121028), but this was more to be able to reload the config file without having to stop the service entirely than for any performance reason.
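
    For reference, those offload changes are the sort of thing set with ifconfig on FreeBSD; a sketch, assuming the onboard ports show up as ix0 (interface name and address are illustrative):

        # disable LRO, keep TSO and TX/RX checksum offload
        ifconfig ix0 -lro tso rxcsum txcsum

        # persist across reboots via /etc/rc.conf
        ifconfig_ix0="inet 10.0.0.10 netmask 255.255.255.0 -lro tso rxcsum txcsum"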

    The performance tuning we did in istgt.conf (everything else is pretty much the same as istgt.sample.conf) is as follows:
    MaxSessions 32
    MaxConnections 8
    MaxR2T 256
    QueueDepth 64

    I’d like it to be higher of course, but we have no trouble getting 50MB/sec (on 4k reads or 32k reads) on a single session over 1Gb Ethernet. The 1Gb connection is normally used for SSH and other management tasks, so has not really been tuned at all for performance.
    Over a single 10Gb link we see 270MB/s or so with 32k reads. Bringing in the second link bumps it up to 315 or so. Adding additional sessions increases it a bit more (480 with 4 per link, 520 with 6).
    For comparison, a simple dd done locally on a 64GB file yields about 340MB/sec write and 325MB/sec read. (The volume I used for testing was fairly small, so probably a lot of it was coming from cache on the reads.)
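
    (If you want to repeat that local check, the dd test was along these lines -- path and size are illustrative:)

        # rough local sequential test: write then read back a 64GB file
        dd if=/dev/zero of=/tank/test/ddfile bs=1M count=65536
        dd if=/tank/test/ddfile of=/dev/null bs=1M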

    It certainly seems that you can get decent iSCSI performance on a pretty standard FreeBSD install. I wonder what made the benchmarks so poor for ZFSGuru.

    I would be interested in any memory or other performance tuning that we may have missed out on. CiPHER alluded to there being some changes worth making and I would guess he might know a thing or two about the subject 🙂


  • gea says:

    Have you considered comparing OmniOS?

    OmniOS is currently the most up to date Illumos based option for storage use, stable and free.

    I moved to OmniOS as my main platform for napp-it storage appliances.

  • We have not tested OmniOS – we will add it to the list to possibly test when we have an available system.

  • Annie Zhang says:

    Can you add direct links to the different software packages for future readers? It’d be more convenient than googling each one in a separate tab.

  • rk says:

    Hi Matt,

    Do you have any good contacts at Supercom? I had some questions about the SC846 chassis used in your build. I tried to call their main line, but the person I got ahold of barely spoke English and wasn’t very familiar with the product. (Feel free to Skype or email me back; it may be a while before I loop back to these comments)

  • Tried adding you on Skype, but I cannot find you in the directory.

  • ckim44 says:

    Hi!
    I really appreciate the work that you have done, and was wondering if you have any timeframe for when the article will be posted that explains how you configured InfiniBand?

  • Trying to get some time set aside to get this knocked out. Unfortunately the blog has fallen off of my “high” priority list as of late. Right now no ETA, but I can assure you I’m working towards it.

  • ckim44 says:

    Thanks for your reply and time!
    I look forward to the article and want to really thank you for doing all this!

  • ckim44 says:

    Hi again
    I have another question, if you don’t mind.
    I see that you are using your Nexenta box to serve LUNs over SRP to your hypervisors.

    Is there a particular reason why you chose SRP instead of NFS over IPoIB for your VM stores?

    Was there a database requirement to have LUNs available?

  • The majority of our infrastructure is Hyper-V based, and we are not as comfortable using NFS as a datastore for Hyper-V volumes. The InfiniBand SRP approach is much more familiar and more easily implemented for our use case.
