FreeNAS vs OpenSolaris ZFS Benchmarks

We have received a lot of feedback from the IT community since we published our benchmarks comparing OpenSolaris and Nexenta with an off-the-shelf Promise VTrak M610i. One question that came up repeatedly was about FreeNAS: “How does FreeNAS compare to OpenSolaris on the same hardware?” That was an excellent question, so we ran some tests to answer it.

NOTE: This article was written in 2010 using FreeNAS 0.7.1. We have posted a newer article, with 2012 benchmarks of FreeNAS 8.3, at http://www.zfsbuild.com/2013/01/25/zfsbuild2012-nexenta-vs-freenas-vs-zfsguru/. We strongly encourage you to read both articles before passing judgement on FreeNAS, because recent versions of FreeNAS include significant performance improvements.

FreeNAS makes it really easy to deploy a SAN or NAS based on FreeBSD. FreeBSD has ZFS support, but it is an older version of ZFS than what is available in OpenSolaris. The ZFS version can be a big deal. For example, the version of ZFS included with FreeBSD does not include support for deduplication, so you cannot use deduplication with FreeNAS.
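
You can check which on-disk ZFS version a platform supports from its own command line. For example:

    zpool upgrade -v

prints every pool version the running system understands, along with the features each version added. Deduplication did not arrive until pool version 21, which is newer than anything FreeBSD shipped at the time.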

We like the web based GUI included with FreeNAS. It makes it relatively easy to configure a FreeNAS box. Not everything works properly in the GUI, though. We had problems creating ZFS pools using the GUI, and finally fell back to the command line to build the ZFS pool (the zpool create and zpool add commands). Once the pool was set up, we told the GUI to synchronize itself with the ZFS pool so we could see ZFS information through the GUI. That worked, but it would have been more pleasant if the GUI had been able to create the ZFS pool without sending us to the command line.
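
For reference, building a pool like ours from the command line looks roughly like this (a minimal sketch; the device names are hypothetical and will vary with your controller):

    # nine mirrored pairs striped together, RAID10 style
    zpool create tank \
      mirror da0 da1 mirror da2 da3 mirror da4 da5 \
      mirror da6 da7 mirror da8 da9 mirror da10 da11 \
      mirror da12 da13 mirror da14 da15 mirror da16 da17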

The next issue we ran into was an oversight in the GUI. With the zpool command, you can add devices to an existing ZFS pool; the GUI has no way of doing that. To add devices to an existing ZFS pool, you need to use the zpool add command and then the sync option in the FreeNAS GUI.
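
Again as a sketch with hypothetical device names, adding the log, cache, and spare devices to the pool created above looks like this:

    zpool add tank log da18 da19      # SLC SSDs for the ZIL
    zpool add tank cache da20 da21    # MLC SSDs for L2ARC
    zpool add tank spare da22 da23    # hot spares

After running these commands, use the sync option in the GUI so FreeNAS picks up the new pool layout.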

Once we had the ZFS pool completely configured, we set up an iSCSI target through the FreeNAS GUI. Unfortunately, you cannot directly share a ZFS volume via iSCSI with FreeNAS. The GUI has an option for doing so, but it does not list any ZFS volumes in the drop down box. The only way to successfully share a ZFS volume via iSCSI is to create a file extent on the ZFS volume and then share the iSCSI target using that file instead of the ZFS volume directly. This was not terribly difficult, but it was definitely not obvious. At the very least, the GUI should be smart enough to share a ZFS volume by creating the file based extent behind the scenes.
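
The file extent itself is a one-liner from the shell; a minimal sketch, with a hypothetical path and size:

    truncate -s 500G /mnt/tank/iscsi-extent0

You then point the GUI’s iSCSI file extent at that path.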

We realize that not a lot of people have an InfiniBand network, but we feel we should touch on this issue anyway, since we do. In many cases, you can deploy a 20Gbps InfiniBand network for less than a 10Gbps Ethernet network. Unfortunately, FreeBSD has no InfiniBand support, so you cannot use InfiniBand at all with FreeNAS. That is a shame, because InfiniBand offers outstanding performance for the price.

When we tested FreeNAS with a ZFS volume, we noticed that FreeNAS could see all 12GB of memory but chose to use only 4GB. This limited how much ARC caching FreeNAS could do. With OpenSolaris, all memory above the first 1GB is used for ARC caching, so OpenSolaris was able to use 11GB for caching, clearly better than the 4GB FreeNAS limited itself to. We were definitely using the 64 bit version of FreeNAS, so the issue was not a 32 bit limitation.
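
For anyone who wants to experiment, FreeBSD does expose the ARC ceiling through loader tunables. We deliberately left them alone, since this article is about out-of-the-box behavior, but a hedged sketch for a 12GB box might look like this (hypothetical values we have not tested):

    # /boot/loader.conf
    vm.kmem_size="12G"
    vfs.zfs.arc_max="10G"

You can watch the actual ARC size with sysctl kstat.zfs.misc.arcstats.size.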

Throughout all of our testing with FreeNAS, one thing kept annoying us: FreeNAS would sometimes forget its IP address on boot. When this happened, we had to use IPMI to remote KVM into the box and manually configure the network settings from the command line.
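
Recovery was only a couple of commands along these lines (hypothetical interface name and addresses):

    ifconfig em0 inet 192.168.1.50 netmask 255.255.255.0
    route add default 192.168.1.1

But it is not something you want to depend on IPMI for every time the box reboots.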

Benchmarks

We used exactly the same hardware for the FreeNAS box as we used for the OpenSolaris and Nexenta testing, and the same test blade to run Iometer, so our OpenSolaris and Nexenta results can be compared directly with the FreeNAS results. We also set up exactly the same ZFS volume on FreeNAS: 18 active drives in RAID10, two SLC SSDs for the ZIL, two MLC SSDs for L2ARC caching, and two hard drives as spares. These were the same 24 drives we used in the OpenSolaris and Nexenta benchmarks.

Here is the actual Iometer config file:
Iometer-config-file.zip

As with our previous benchmarks, we used 1Gbps Ethernet to connect the test blade to the SAN box. We also have 20Gbps InfiniBand connected to all of the hardware, but none of these benchmarks used InfiniBand. We set up an iSCSI target on FreeNAS and ran Iometer from the test blade using exactly the same list of tests we used in our previous benchmark article.

Once all of the tests completed, we tore down the iSCSI target and the ZFS pool. We then set up a FreeNAS software RAID volume and an iSCSI target pointing to it. The software RAID volume was configured as RAID10 across 18 drives, since that was the core of each ZFS volume we had constructed so far for testing. Then we ran all of our benchmarks against the software RAID based iSCSI target on the FreeNAS box. In the benchmark section below, we compare OpenSolaris with both FreeNAS/ZFS and FreeNAS/SoftwareRAID.
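
For readers curious what that software RAID10 looks like from the shell, here is a minimal sketch. We assume the FreeNAS GUI drives FreeBSD’s gmirror and gstripe geom classes underneath, and the device names are hypothetical:

    # nine two-disk mirrors...
    gmirror label -v gm0 da0 da1
    gmirror label -v gm1 da2 da3
    # ...continuing through gm8 da16 da17, then stripe the mirrors together
    gstripe label -v raid10 mirror/gm0 mirror/gm1 mirror/gm2 \
      mirror/gm3 mirror/gm4 mirror/gm5 mirror/gm6 mirror/gm7 mirror/gm8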

4k Benchmark Results:

8k Benchmark Results:

16k Benchmark Results:

32k Benchmark Results:

Conclusion

We don’t really understand why FreeNAS performed so poorly compared to OpenSolaris on exactly the same hardware and test environment. In some tests, FreeNAS with ZFS performed even worse than FreeNAS with a software RAID volume. Given the results, we cannot recommend FreeNAS for enterprise grade file storage. We expected FreeNAS to perform far better than it actually did, and we were amazed by how slow it was compared to OpenSolaris.

But that does not mean everybody should stop using FreeNAS. There are times when FreeNAS is a great solution. For a low end file server at home, it is probably fine. One thing FreeNAS does well is offer a lot of different file sharing methods; for example, it can share files directly through FTP or even BitTorrent. That may sound silly when you are shopping for enterprise SAN solutions, but these are exactly the things home users want from a file server. So even though we would never consider FreeNAS for enterprise grade SAN duty, we can understand why home users might really like it.

Friday, September 10th, 2010 Benchmarks, ZFS

32 Comments to FreeNAS vs OpenSolaris ZFS Benchmarks

  • bsdimp says:

    Which version of FreeNAS did you use?

  • Doug White says:

    (Full disclosure: While I work for iXsystems, who is the current custodian of FreeNAS, and I am a FreeBSD committer, the following is my own personal opinion.)

    First, I recommend reading this paper, especially section 3. It explains how to perform benchmarks that are repeatable and verifiable and points out numerous examples of common benchmark paper pitfalls.

    http://www.fsl.cs.sunysb.edu/docs/fsbench/fsbench-tr.html

    A colleague also suggested this paper, in which the research team ran into anomalous results while running their benchmarks, and took extra time to explore why they were getting the results they did. The resulting conclusions were far more interesting.

    http://www.usenix.org/event/usenix03/tech/freenix03/full_papers/ellard/ellard.pdf

    Second, while the graphs are pretty, you don’t explain why you ran the tests you chose. I see you ran various tests at different block sizes. Why did you use different block sizes, and how did you choose the block sizes that were used? You also chose some odd numbers for “que depth” (which in IOMeter parlance is simultaneous transactions using Windows async I/O). You started at 9 and incremented by 3 for each test point. Why did you choose this method? It is distinctly different from other I/O test papers I have read, which usually start at 1 simultaneous transaction and ramp up from there.

    Finally, I could not find a real name or contact email anywhere for you. Even comment responses are done with the ‘admin’ account, which just links to No Support Hosting. I would love to put you in touch with the FreeNAS development team, who can help you with the issues you were encountering. The memory usage issue is especially curious since it does not correspond with any of my observations. You can contact me at the email address in my profile.

    They would also love to hear feedback about the alpha version that was recently posted for testing. The files can be downloaded from http://sourceforge.net/projects/freenas/files/.

    Thank you for your consideration.

  • Andys says:

    Could you please list the versions of each operating system used.

  • admin says:

    bsdimp: We used 64bit FreeNAS 0.7.1.

    Doug White: We appreciate the feedback. Instead of debating the merits of each benchmark chosen, it may be more productive to simply say that the benchmark tests were exactly the same for each of the solutions graphed. The real question is not which benchmarks we should have used; it is “why did FreeNAS do so poorly?” One theory we are discussing on this end is that FreeNAS’s ZFS memory limit starved itself. We used 300GB of L2ARC, which means ZFS would need 3-6GB of ARC memory just to manage the L2ARC, and FreeNAS’s ZFS appeared to limit itself to 4GB of the 12GB of memory during the tests. 4GB would probably not be enough memory to successfully use the 300GB of L2ARC we supplied. Why does a 64bit build of FreeNAS limit ZFS to only 4GB of memory? FreeNAS could see the entire 12GB of system memory, but it only used 4GB during the tests. OpenSolaris will use all of the available memory during those same tests.
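
    As a rough back-of-the-envelope calculation (assuming on the order of 150-200 bytes of ARC header per cached L2ARC record): 300GB of L2ARC at an 8KB average record size is about 39 million records, and 39 million records times roughly 160 bytes is about 6GB of ARC spent purely on L2ARC bookkeeping. Larger average record sizes shrink that number, which is where the 3-6GB range comes from.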

    Andys: That question is answered in detail on several other pages within this site.

    Test Blade Specs:
    http://www.zfsbuild.com/2010/05/05/test-blade-configuration/ (running Windows 2008 R2)

    Here are some hardware articles for the ZFS system used for the OpenSolaris, Nexenta, and FreeNAS benchmarks:
    http://www.zfsbuild.com/2010/04/08/motherboard-cpu-heatsink-and-ram-selection/
    http://www.zfsbuild.com/2010/03/21/chassis-selection/

  • Doug White says:

    Thanks for the links to the hardware you used. It would be more convenient to just list that in your test report, but this is enough to attempt to reproduce your test environment.

    What version of Nexenta were you testing? I could not find the version number on the site.

    How, precisely, are you determining how much memory FreeNAS is “using”?

    Did you use the same SSD setup for L2ARC and ZIL on the Nexenta and OpenSolaris tests? You don’t indicate that you did in the description of those tests. It would not be fair to test FreeNAS with SSDs against Nexenta without them. Perhaps post the output of ‘zpool list’ on the FreeNAS setup so we can compare it to the zpool setup howto?

    Did you zap the SSDs when changing the test configuration? Testing a new SSD in one environment against a used SSD in another is similarly unfair; it is well known that SSDs perform differently as they are used. Zapping the LBA table by performing an ATA SECURITY ERASE on the Intel SSDs is enough to restore their performance to new.

  • admin says:

    We literally used the exact same hardware for the OpenSolaris, Nexenta, and FreeNAS testing. We also used exactly the same zpool configuration.

    The FreeNAS GUI showed both the total amount of memory it detected and how much of it was in use.

  • mrhyd3 says:

    Thank you for sharing the info here. I’m curious as to why Nexenta was slower than OpenSolaris, considering they’re based on the same build.

  • admin says:

    We talked with Nexenta about the performance difference between OpenSolaris and Nexenta. They did not have an explanation for the gap. Our theory is that Nexenta uses more RAM for the web GUI and for tracking information, which leaves less memory for the ARC (main system memory used as the primary cache by ZFS).

  • danbi says:

    As you mentioned, the ZFS version is important; I wonder why you did not test with FreeBSD itself. The zpool version in 8-current is already 15.

    With regard to the ARC: have you done any tuning in OpenSolaris, or is your install generic?

    There are many ZFS related tunables, for example the device queue depths, that can produce entirely different performance results.

  • admin says:

    danbi: We did test FreeNAS (based on FreeBSD) with ZFS and without ZFS. Both are in the graphs.

    We used OpenSolaris b134 in the tests, which ships with pretty good tuning by default. We intentionally did not do any custom tuning to OpenSolaris, Nexenta (based on OS b134), or FreeNAS. We wanted to get an idea how these SAN/NAS solutions performed out of the box using a given set of server grade hardware.

    We obviously realize that there are a lot of tuning options with ZFS. The intent of this article was to show how these solutions performed out of the box. If one of the solutions gives itself a huge disadvantage out of the box due to improper or incomplete default tuning, that is not our problem.

    With respect to FreeNAS with ZFS, the solution is distributed as a SAN/NAS solution and should have default tuning that helps it perform well at its intended task. If the FreeNAS team wants to release a version of FreeNAS that is actually tuned better for ZFS performance, we will be happy to benchmark that new build and compare it with the numbers generated during these benchmarks. But we will not do a bunch of custom tuning within FreeNAS simply to try to make FreeNAS look better. The FreeNAS team should be properly tuning FreeNAS’s tunables before releasing each build, so FreeNAS can perform well out of the box.

  • danbi says:

    Current production versions of FreeNAS are based on FreeBSD 7, which has a more or less unstable ZFS implementation. FreeNAS should have patched/tuned it, of course, and hopefully someone will explain the figures you observed.

    FreeBSD 8.1 (the current release) has much better ZFS. But it seems the performance hit you observed is the result of using iSCSI exports (which does not exercise much of ZFS anyway, mostly the zpool/zvolume layer); there is no native iSCSI target support in FreeBSD, and this includes FreeNAS.

    Therefore, you may wish to do a filesystem level comparison instead of an iSCSI only comparison, although that might not match your use case directly.

    In any case, ZFS tuning may have a very serious impact on performance, even on OpenSolaris. One area of tuning you may want to look at is the device queue depth, as you are using port expanders and may therefore bottleneck your I/O with unnecessary transfers.

  • admin says:

    danbi: The sole reason we are doing this site is to investigate a variety of ways to build low cost, relatively high performance SAN/NAS solutions. If we only look at file system performance and ignore iSCSI, then we will see much higher performance numbers but the numbers won’t be useful for us.

    The reason we chose to test FreeNAS is that other IT peers asked us to. After we initially published performance results for OpenSolaris, several people asked us to test FreeNAS on the same hardware to see how it performed relative to OpenSolaris and Nexenta.

    This page is merely our answer to that specific question. It is not designed as an “everything about FreeBSD and ZFS” type of post. There is no doubt that with enough tuning FreeNAS with ZFS should perform better than it does out of the box, but that is completely beyond the scope of what we set out to do with this page. We may do a follow up benchmark with a newer build of FreeNAS at some point if the FreeNAS team releases a build that significantly addresses the performance issues outlined on this page.

    As for local benchmarking, that was never the focus of this website. This site is about building SAN/NAS solutions, which means the tests need to be run over iSCSI or NFS. We did run a few quick numbers several months ago on this same hardware. Here is a link to that:
    http://www.zfsbuild.com/2010/05/24/initial-zfs-performance-stats/

  • mrhyd3 says:

    “admin”, I completely get this blog. I did some reading on other articles posted here and understand your viewpoint: “The sole reason we are doing this site is to investigate a variety of ways to build low cost, relatively high performance SAN/NAS solutions”.

    I can say we appreciate all you have posted; it has helped my team immensely.

    Unfortunately, other people may not have read this post with the same understanding I have.

  • rens says:

    Might be a good idea to also try FreeBSD 8.1? OpenSolaris will soon be dead ;< Nexenta costs too much, so a lot of people are looking for a replacement. Might be FreeBSD.

  • danbi says:

    admin: I fully agree with you. FreeBSD is far from competitive when it comes to iSCSI. I just wanted to point out why others might consider the comparison ‘unfair’.

    However, I still believe your setup might (or might not) benefit from ZFS optimization, although this might be suitable for a separate post.

    One topic of particular interest is whether you have tested performance with and without dedup enabled.

  • admin says:

    rens: While it is true that the future of OpenSolaris is uncertain (thanks to Oracle), I am not sure FreeBSD is a viable alternative yet. For now, I would still use OpenSolaris b134 or the free Nexenta Core Platform. There is also the Illumos project (http://www.illumos.org/), which is a fork of the OpenSolaris project.

    danbi: If FreeBSD does not have a competitive iSCSI target, then FreeBSD is not going to do well as a SAN. To be completely fair on the issue, OpenSolaris had terrible iSCSI performance in the 2009.06 build, but fixed that in the b134 build. At some point, the FreeBSD team will likely fix their iSCSI shortcomings as well.

    We have not done any performance testing with dedupe so far, but we are curious about what impact it might have. Dedupe will use additional CPU resources, but it will also increase the chance of a cache hit. Dedupe might help or hurt performance, depending on which factor is limiting performance.

  • dave99 says:

    You mentioned Nexenta might be slower because of the GUI, so I’m assuming you were using the free/demo version of NexentaStor. Any chance of testing with the Nexenta Core version? With no GUI by default, it would be interesting to see if it closes the gap with OpenSolaris.

  • ep says:

    I would like to point out that OpenIndiana (http://www.openindiana.org), even today, may be a viable alternative to OpenSolaris to build a SAN/NAS. The current version is based on build 147 of onnv and supports the latest ZFS pool version.

  • admin says:

    dave99: The version used during tests was the trial version of NexentaStor Enterprise Edition. We are planning to test Nexenta Core Platform.

    ep: We are definitely excited about OpenIndiana and plan to test it in the near future.

  • Olivier says:

    Thanks for this bench: This will motivate us to improve FreeNAS.

  • rschultz101 says:

    want:
    benchmark / filetransfer:
    – freenas – ZFS – AFP – mac
    – freenas – HFS+ – iSCSI – xserve
    – freenas – HFS+ – AFP
    – freenas – NFS – samba

  • admin says:

    rschultz101: Feel free to run some of your own benchmarks. With this site, we are primarily focused on building low cost, high performance ZFS based SAN/NAS solutions for hosting iSCSI targets and NFS shares for our private computing clouds. We are not running any benchmarks unrelated to those specific goals.

  • aka101 says:

    Hi, and thanks for an excellent site; I have really missed something like this. I have to flag myself as a BSD fan, not hard core, as I use W7 and Ubuntu for personal stuff, but I also design and run virtualized systems for a living, based on ESX, HDS AMS, and Brocade SAN for the record. ZFS is, however, a new paradigm in storage that will change a lot when it becomes more robust and feature complete. The potential is no doubt far greater than other block based storage controllers.

    I hope you get your IB network up and running for testing. I would sure like to see the same tests with NFS/RDMA, which in my opinion is the strongest tech nowadays for shared storage.

  • reefburnaby says:

    Hi, thanks for the great info on building ZFS SANs. I was wondering if you had any plans for high availability OpenSolaris filers. I am always worried about single points of failure, especially when it comes to SAN.

  • Turn11Shawn says:

    Thanks for the great information; it has been helpful as we benchmark ZFS on FreeBSD (currently 8.1) as an NFS storage server for VMware ESXi. We find sending snapshots to secondary storage servers is an excellent way to provide both onsite and offsite data replication, and performance is very good with SAS disks and 12-16GB of RAM. Many small companies cannot afford HDS or similar SAN offerings, so having a robust contender in the lower price market is very important to us. We chose NFS due to iSCSI target fears, and that seems to have been a good choice.

    Keep up the good work, I will share any info we have that might help others.

  • […] to a lot of debate about the setup, hardware used, default settings etc. This test is no different: FreeNAS vs OpenSolaris ZFS benchmarks. Hopefully we will see a massive improvement in FreeNAS 0.8 which is currently available as alpha […]

  • chuch says:

    How did you get Iometer installed on the Solaris machine? I can’t for the life of me get dynamo installed on this system. Any help would be appreciated.

  • Woet says:

    This benchmark is over a year old now; when are you going to redo it with the latest releases of the OSes? (Nexenta 3.1.0, FreeNAS 8.0.2, etc.)

  • admin says:

    We have the current platform in production, so unfortunately we can no longer do performance benchmarking on this system. We have learned a lot about the platform as a whole through using it for the better part of a year. We’re planning some new blog posts to document our experiences, and we’ve got a new build in the hopper that we’ll do benchmarking, performance analysis, and other goodies on.

  • […] Benchmark ZFS Build posted an interesting benchmark between FreeNAS and OpenSolaris, the result was a landslide victory for OpenSolaris. I’m going to […]

  • […] hardware with the performance of Nexenta and OpenSolaris on the same ZFSBuild2010 hardware. (http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/)  FreeNAS 0.7.1 shocked all of us by delivering the absolute worst performance of anything we had […]
