Benchmarks

ZFSBuild2012 – Nexenta vs FreeNAS vs ZFSGuru

Back in 2010, we ran some benchmarks to compare the performance of FreeNAS 0.7.1 on the ZFSBuild2010 hardware with the performance of Nexenta and OpenSolaris on the same ZFSBuild2010 hardware. (http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/)  FreeNAS 0.7.1 shocked all of us by delivering the absolute worst performance of anything we had tested.

With our ZFSBuild2012 project, we definitely wanted to revisit the performance of FreeNAS, and we were fortunate that FreeNAS 8.3 was released while we were running benchmarks on the ZFSBuild2012 system.  It is obvious that the FreeNAS team worked on the performance issues, because FreeNAS 8.3 is very much on par with Nexenta in terms of performance (at least in our iSCSI over gigabit Ethernet benchmarks).

We believe the massive improvement in FreeNAS’s performance is the result of more than simply using a newer version of FreeBSD.  FreeNAS 8.3 is based on FreeBSD 8.3 and includes ZFS v28.  To test this theory, we benchmarked ZFSGuru 0.2.0-beta 7 installed on FreeBSD 9.1, which also includes ZFS v28.  The performance of ZFSGuru was a fraction of the performance of FreeNAS 8.3.  This led us to believe that the FreeNAS team did more than simply install their web GUI on top of FreeBSD.  If we had to guess, we suspect the FreeNAS team did some tweaking in the iSCSI target to boost performance.  In theory, the same tweaking could be done in ZFSGuru to produce similar gains.  Maybe a future version of ZFSGuru will include the magic tweaks that FreeNAS already includes, but for now ZFSGuru is much slower than FreeNAS.  If you want to run a ZFS based SAN using FreeBSD, we strongly recommend FreeNAS.  We do not recommend ZFSGuru at this point in time, since it is vastly slower than both Nexenta and FreeNAS.

The software versions we used during testing were Nexenta 3.1.3, ZFSGuru 0.2.0-beta 7 (FreeBSD 9.1), and FreeNAS 8.3 (FreeBSD 8.3).  Click here to read about benchmark methods.

We realize that newer versions have been released since we ran our benchmarks.  We will not be re-running any of the benchmarks at this time, because we have already placed the ZFSBuild2012 server into production.  The ZFSBuild2012 server currently runs Nexenta as a high performance InfiniBand SRP target for one of our web hosting clusters.

All tests were run on exactly the same ZFS storage pool using exactly the same hardware.  The hardware used in these tests is the ZFSBuild2012 hardware.

We did not run any InfiniBand tests with FreeNAS 8.3, because FreeNAS 8.3 does not have any InfiniBand support.  We did run some InfiniBand tests with ZFSGuru, but those results will appear in a different article specifically dedicated to that topic.

 
IOMeter 4k Benchmarks:
[IOMeter 4k benchmark charts]
› Continue reading

Friday, January 25th, 2013 Benchmarks 23 Comments

ZFSBuild2012 – Write Back Cache Performance

Nexenta includes an option to enable or disable the Write Back Cache on shared ZVols. To manage this setting, first create the ZVol and set it to Shared; you can then select the ZVol and edit its Write Back Caching setting. The purpose of this article is to measure how much this setting affects performance. All benchmarks were run on the ZFSBuild2012 hardware using Nexenta 3.1.3. All tests were run on the same ZFS storage pool. Click here to read about benchmark methods.
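On Nexenta all of this is done through the management GUI, but for readers curious what the equivalent might look like at the underlying illumos/COMSTAR layer, here is a minimal sketch. The pool and ZVol names are hypothetical and the exact steps Nexenta performs behind the scenes may differ, so treat this as an illustration of the idea (wcd is COMSTAR's "write cache disabled" property) rather than the documented Nexenta procedure.

```python
import subprocess

def run(cmd):
    """Run a command and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical pool/volume name -- adjust for your system.
zvol = "tank/iscsi-vol1"

# 1. Create a 100GB ZVol to share over iSCSI.
run(["zfs", "create", "-V", "100G", zvol])

# 2. Register it as a COMSTAR logical unit with the write back cache
#    enabled (wcd = "write cache disabled", so wcd=false enables the cache).
run(["stmfadm", "create-lu", "-p", "wcd=false",
     "/dev/zvol/rdsk/" + zvol])

# To disable the write back cache on an existing LU instead:
# run(["stmfadm", "modify-lu", "-p", "wcd=true", "<LU name>"])
```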

iSCSI-WC-E is iSCSI using IPoIB with connected mode disabled and the Write Back Cache enabled.

iSCSI-WC-D is iSCSI using IPoIB with connected mode disabled and the Write Back Cache disabled.

IB-SRP-WC-E is InfiniBand SRP with the Write Back Cache enabled.

IB-SRP-WC-D is InfiniBand SRP with the Write Back Cache disabled.

Generally speaking, enabling the Write Back Cache has no significant impact on read performance but yields a huge improvement in write performance.

› Continue reading

Monday, December 17th, 2012 Benchmarks 13 Comments

ZFSBuild2012 – InfiniBand Performance

We love InfiniBand.  But it is not enough to simply install InfiniBand.  We decided to test three popular connection options with InfiniBand so we could better understand which method offers the best performance.  We tested IPoIB with connected mode enabled (IPoIB-CM), IPoIB with connected mode disabled (IPoIB-UD), and SRP.
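For reference, on a Linux client the IPoIB mode can usually be toggled per interface through sysfs. The sketch below assumes a hypothetical interface named ib0 and is only an illustration of the concept; our own tests used a Nexenta target and Windows initiators, where the mechanism differs, and SRP bypasses IPoIB entirely.

```python
from pathlib import Path

def set_ipoib_mode(iface: str, mode: str) -> None:
    """Switch a Linux IPoIB interface between 'datagram' (UD) and 'connected' (CM)."""
    if mode not in ("datagram", "connected"):
        raise ValueError("mode must be 'datagram' or 'connected'")
    mode_file = Path(f"/sys/class/net/{iface}/mode")
    mode_file.write_text(mode + "\n")   # requires root
    print(f"{iface} is now in {mode_file.read_text().strip()} mode")

# Example (hypothetical interface name):
# set_ipoib_mode("ib0", "connected")   # IPoIB-CM
# set_ipoib_mode("ib0", "datagram")    # IPoIB-UD
```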
› Continue reading

Saturday, December 15th, 2012 Benchmarks 12 Comments

ZFSBuild2012 – Performance compared to ZFSBuild2010

ZFSBuild2012 is faster than ZFSBuild2010 in every way possible.  This page compares iSCSI performance over 1Gbps Ethernet between ZFSBuild2012 and ZFSBuild2010.  Both hardware designs are running Nexenta with write back caching enabled for the iSCSI shared ZVol.  Click here to read about benchmark methods.  We will be posting InfiniBand benchmarks with ZFSBuild2012 soon, and those show an even larger performance gain.

IOMeter 4k Benchmarks:
[IOMeter 4k benchmark chart]
› Continue reading

Friday, December 14th, 2012 Benchmarks 5 Comments

ZFSBuild2012 – Benchmark Methods

We took great care when setting up and running benchmarks to be sure we were gathering data that could be used to make useful comparisons.  We wanted to be able to compare the ZFSBuild2012 design with the ZFSBuild2010 design.  We also wanted to be able to compare various configuration options within the ZFSBuild2012 design, so we could make educated decisions about how to configure a variety of options to get the most performance out of the design.  The purpose of this page is to share all of our benchmarking methods.
› Continue reading

Friday, December 14th, 2012 Benchmarks 9 Comments

ZFSBuild2012 – Mission Statement

We wanted to take a moment to discuss what to expect from the series of ZFSBuild2012 articles we are posting. These articles are not intended to be a performance shootout for hardware, software, networking, or anything else.

We have obviously run benchmarks as part of this project, but the benchmarks are not the primary goal of the project. The goal of this project is to share information about how to build a low cost, high performance, easy to manage ZFS based SAN. We have built and heavily tested this design. The ZFSBuild2012 design can deliver over 100,000 IOPS and costs about $7k to build.

We have received a lot of requests for benchmarks that people would like to see and comparisons that people would like to see us make between various solutions. While it might be interesting to make thousands of comparisons between every possible operating system and configuration combination, it is not the focus of this series of articles.

We will be posting benchmark results in an effort to explain the performance difference between the ZFSBuild2012 design and the ZFSBuild2010 design. It is probably not a spoiler to let everybody know that the ZFSBuild2012 design is much faster than the ZFSBuild2010 design. Over the past two years, we learned a lot of things about designing better ZFS based SANs and the underlying hardware got a lot faster. The purpose of comparing the two designs is merely to show how much performance can be gained from the new design. We used the same benchmark tools that we ran back in 2010, and we even used the same blade for running the benchmarks, so the benchmarks we will post comparing the two designs are a true apples to apples test.

We will also be posting benchmarks comparing the performance of InfiniBand using different configurations of the same hardware with Nexenta. Again, the purpose of those benchmarks will be to help people find the correct way to configure InfiniBand. It is not meant to be a fanboy style shootout about various driver settings.

Unfortunately, it was not practical for us to run benchmarks comparing 10GigE, FC, and InfiniBand. While we do have access to this technology, we did not have all of it installed in the same blade center, so there was no good way for us to run a true apples to apples comparison of the various network interconnects. Additionally, we did not want to buy more networking hardware just to install in the test blade center for running benchmarks. But as we already mentioned, this series of articles is not intended as a shootout. The purpose of these articles is to share information about how to build a reasonably low cost SAN that can deliver over 100,000 IOPS.

Ultimately, it is up to you to choose what you do with the information shared in our series of articles on the ZFSBuild2012 design. We used Nexenta for most of the benchmarks and we deployed the unit into production using Nexenta, but we are not trying to convince anybody to give up their favorite storage appliance software in favor of Nexenta. We chose to use Nexenta Community Edition in the ZFSBuild2012 solution because it offered excellent performance and because it can automatically notify the admin of a failed drive (including flashing an LED on the drive bay when it is time to replace the failed drive). We fully understand that some people will choose to run a bare operating system (such as OpenSolaris, OpenIndiana, or FreeBSD) and others will choose to run FreeNAS or ZFSGuru. There is nothing wrong with that. You should run whatever you are comfortable with.

We hope you enjoy the ZFSBuild2012 series of articles.  (We will be posting the ZFSBuild2012 articles soon)

 

Friday, November 30th, 2012 Benchmarks 10 Comments

ZFSBuild2012 – Benchmark Progress

We are still running benchmarks for the ZFSBuild2012 SAN building project.  We completed all of the Nexenta benchmarks this past week.  We ran every combination of networking configurations to deeply test both 1Gbps Ethernet and 20Gbps InfiniBand on the Nexenta platform.  We completed ZFSGuru (with FreeBSD 9.1) benchmarks this morning (both 1Gbps Ethernet and 20Gbps InfiniBand).  We are currently running FreeNAS 8.3 benchmarks.

Saturday, October 27th, 2012 Benchmarks 21 Comments

ZFSBuild2012 – Still running InfiniBand SRP Benchmarks 32k Random Read

We are still running the IOMeter benchmarks using the new ZFSBuild2012 server and InfiniBand SRP.  The numbers we are seeing with SRP are absolutely awesome.  For example, right now it is running a 32k random read benchmark and it is getting nearly 60,000 IOPS and moving 1854MB/second.  This is amazing because when we ran raw IB performance tests on the network, the max performance of the IB network was 1890MB/second.  The ib_read_bw and ib_write_bw tools showed our IB network moving an average of 1869MB/second and a peak of 1890MB/second.  It is really exciting to see the ZFSBuild2012 box delivering 1854MB/second through IOMeter, which is about 98% of the wirespeed for our IB network.
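As a sanity check on those numbers, the throughput follows directly from the IOPS figure and the 32k block size. The short calculation below reproduces the figures quoted above; the IOPS value is approximate, back-derived from the 1854MB/second reading.

```python
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_size_kb / 1024.0

iops = 59_300           # "nearly 60,000 IOPS" observed in IOMeter (approximate)
block_kb = 32           # 32k random read workload
wire_speed = 1890       # MB/s peak measured with ib_read_bw / ib_write_bw

mb_s = throughput_mb_s(iops, block_kb)
print(f"{mb_s:.0f} MB/s, {mb_s / wire_speed:.1%} of wire speed")
# -> roughly 1853 MB/s, about 98% of wire speed
```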

Thursday, October 11th, 2012 Benchmarks, InfiniBand 7 Comments

ZFSBuild2012 – More testing tidbits

Just a quick update: we got InfiniBand SRP (SCSI RDMA Protocol) working from our Windows system to our Nexenta system.  Here’s a screengrab from IOMeter beating the crap out of our 20Gbit InfiniBand network.  This is a 16k, random read workload.  Obviously it’s all fitting into RAM, but compared to iSCSI with IPoIB, there’s no contest: iSCSI/IPoIB manages about 400MB/sec and 27,000 IOPS.  It is very exciting to see this as a possibility moving forward.

[IOMeter screengrab]

Thursday, October 11th, 2012 Benchmarks, InfiniBand 2 Comments

ZFSBuild2012 – Testing has begun

We are re-running the original ZFSBuild2010 tests, and initial results show that this system is significantly faster than the old one.  4k random reads are peaking at over 50,000 IOPS, delivering over 200MB/sec over InfiniBand.  8k random reads are delivering 40,000 IOPS and over 300MB/sec over InfiniBand.  These numbers are AWESOME!

Keep in mind, though, that this is with a 25GB working set.  ZFSBuild2010 only had 12GB of RAM, not nearly enough to cache 25GB of data.  ZFSBuild2012 has 64GB of RAM, which allows all of this data to fit in RAM.  We’ll be tailoring the benchmarks after this run to more accurately reflect real world workloads, since we know the 25GB working set size is giving us artificially high results.

Look for more info in the next week or so!

Tuesday, October 9th, 2012 Benchmarks 1 Comment

OpenIndiana Benchmarks

After Oracle decided to change the course of OpenSolaris (forum thread), the open source community reacted by forking the code base through a new project called Illumos. The first downloadable ISO from the Illumos project is OpenIndiana.

OpenIndiana is based on OpenSolaris b147. It is important to take a minute and look at build numbers of popular milestones within the OpenSolaris development process. Here are some major ones.
OpenSolaris 2008.11: b101
OpenSolaris 2009.06: b111
OpenSolaris 2010.03: b134
OpenSolaris b147 forks to create OpenIndiana b147

The b134 (2010.03) build was held back and never shipped as an official OpenSolaris release. If you go to the OpenSolaris site, the most recent official ISO is 2009.06. We have been using b134 in all of our tests anyway, because b134 has measurably better iSCSI performance than 2009.06 (b111). Additionally, the NexentaStor and Nexenta Core Platform distributions we benchmarked were both originally based on OpenSolaris b134.

OpenIndiana is built on OpenSolaris b147, which means it has a number of bug fixes since b134 and even more since b111 (2009.06). At this point, you can think of OpenIndiana as the latest and greatest OpenSolaris code with OpenIndiana logos added to it; it is not yet significantly different from OpenSolaris b147 in any specific technical way.

› Continue reading

Monday, October 11th, 2010 Benchmarks 14 Comments

Nexenta Core Platform Benchmarks

In our benchmarks between OpenSolaris b134, NexentaStor Enterprise, and a Promise VTrak M610i box, we found that OpenSolaris consistently outperformed NexentaStor. We were never quite sure why, since NexentaStor is based on Nexenta Core Platform and Nexenta Core Platform is based on OpenSolaris b134. We expected NexentaStor to match the performance of OpenSolaris, but it simply did not.

One theory we had for the performance difference was that the web GUI in NexentaStor used enough system memory that NexentaStor had significantly less ARC cache available and was therefore at a performance disadvantage to OpenSolaris. This got us curious about how Nexenta Core Platform would perform relative to OpenSolaris and NexentaStor.
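For anyone who wants to poke at that theory on their own system, the current ARC size can be read directly from kstat on an OpenSolaris/illumos based install. Below is a minimal sketch, assuming kstat is available in the PATH; it simply reports the standard zfs:0:arcstats:size counter.

```python
import subprocess

def arc_size_bytes() -> int:
    """Read the current ZFS ARC size from kstat on a Solaris/illumos system."""
    out = subprocess.run(
        ["kstat", "-p", "zfs:0:arcstats:size"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Output looks like: "zfs:0:arcstats:size<TAB>51539607552"
    return int(out.split()[-1])

if __name__ == "__main__":
    size = arc_size_bytes()
    print(f"ARC size: {size / 2**30:.1f} GiB")
```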

We decided to benchmark Nexenta Core Platform using the same hardware and benchmarks that we have used for all of the previous benchmark runs. The results exceeded our expectations.
› Continue reading

Saturday, October 9th, 2010 Benchmarks 6 Comments