Benchmarks Comparing OpenSolaris, Nexenta, and a Promise VTrak

We ran some benchmarks using IOmeter (running on Windows 2008 R2 on our test blade) to compare OpenSolaris on our ZFSBuild project hardware, Nexenta on exactly the same hardware, and a Promise VTrak 610i box. All of the benchmarks were run over Gigabit Ethernet.

Here are screenshots of the IOmeter config:



Here is the actual IOmeter config file:
Iometer-config-file.zip

Here are the benchmarks in the order they were performed in IOmeter. Keep in mind that OpenSolaris and Nexenta are able to receive a performance benefit thanks to L2ARC caching.
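
For anyone who wants to experiment with L2ARC on their own pool, adding a cache device is a one-line operation. This is only a generic sketch; the pool and device names below are placeholders, not the devices in our ZFSBuild box:

    # add an SSD as an L2ARC (read cache) device to an existing pool
    zpool add tank cache c1t5d0

    # confirm the cache vdev shows up and watch it warm up during a benchmark run
    zpool status tank
    zpool iostat -v tank 5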

4k Benchmark Results:


8k Benchmark Results:


16k Benchmark Results:


32k Benchmark Results:


The results are very interesting. OpenSolaris consistently outperforms Nexenta, even though Nexenta runs the same build of OpenSolaris at its core and the tests were run on exactly the same hardware. We are not really sure why stock OpenSolaris was able to outperform Nexenta’s build, and we are definitely going to contact Nexenta about this.

In nearly every test, both OpenSolaris and Nexenta outperformed the Promise VTrak 610i, which was pretty much expected. The only exceptions were the 32k tests, where Nexenta fell behind the Promise box.

Tuesday, August 3rd, 2010 Benchmarks

15 Comments to Benchmarks Comparing OpenSolaris, Nexenta, and a Promise VTrak

  • zfsjay says:

    Maybe I’m confused, but the IOPS in your previous benchmarks were in the six digits and these are in the four and five digits. Am I missing something?

  • admin says:

    The IOPS in our previous benchmarks were on small datasets that fit completely into the ARC cache. Basically, once the ARC cache was populated, it was reading directly from RAM. The tests that we performed were on a 25GB dataset, which was well outside the capacity of the ARC cache. Another thing to note is that these tests were performed over Gigabit Ethernet, not on the local machine.
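
    If you want to see how much of a benchmark run is being absorbed by the ARC on your own box, the arcstats kstat is a quick, generic check (nothing specific to our setup; zfs:0:arcstats is the standard kstat path on OpenSolaris):

    # current ARC size and target size, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

    # hit/miss counters; sampling them before and after a run gives a rough hit rate
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses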

  • jdye says:

    Have you gotten the storage running over IB yet? Will you be looking to do it with iSER + iSCSI or NFS?

  • admin says:

    We have done some testing with IB. These tests were all run from a Windows 2008 R2 box, and we tested that machine with both Gigabit Ethernet and IB. The IB stack was really easy to install on Windows, but it was not completely stable: Windows would drop iSCSI connections every few minutes when running over IB. We have run other tests from CentOS boxes using IB and those were stable, so we suspect the instability in Windows is driver related. We want to do a lot more testing and debugging before we post a detailed article about Windows and IB.
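
    For anyone who wants to watch for the same drops on their own target, the COMSTAR tools will show active sessions from the Solaris side; this is just the generic way to check, not a fix:

    # list COMSTAR targets and their active sessions; re-running this while a
    # benchmark runs shows when the Windows initiator drops its session
    stmfadm list-target -v

    # iSCSI-specific view of the same targets (name, state, session count)
    itadm list-target -v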


  • RoyK says:

    which version of Nexenta did you test here?

  • admin says:

    It was a trial version of NexentaStor Enterprise Edition 3.0.

  • Niklas says:

    What was the hardware config for the Nexenta and OpenSolaris benchmarks? Amount of RAM, CPU, ZIL and L2ARC, etc.?

    Thanks in advance.

  • admin says:

    Niklas: We posted the complete article including all of the hardware specs at:
    http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking

  • Niklas says:

    Thanks for the specs :-).
    Are there any latency stats for the benchmarks above?
    And once again thanks for all the excellent information on this site.

  • rulezmore says:

    Hi.
    After months of study I have finally reached some results and gained some knowledge using Solaris-derived distros + ZFS + COMSTAR.
    Yesterday I used your IOMeter config file to test my ZFS server over iSCSI, and I was wondering if you could take a quick look at the results and tell me what is good and what is wrong with them, based on your experience.
    These are the specs of the storage machine:

    – SuperMicro X8SIL-F with 2 x onboard Intel Gigabit Ethernet ports (iSCSI Network)
    – 16 GB Registered ECC memory (4 x 4GB)
    – 2 x 160GB SATA HDDs for the operating system, connected to the mobo
    – 6 x 2TB SATA WD RE hard drives connected to an IBM unbranded ServeRAID BR10i (LSI 1068) 3Gbit SAS/SATA controller with IT firmware
    – 2 x 60GB SATA OCZ Agility 3 SSD drives connected to an IBM unbranded ServeRAID M1015 (LSI 2008) 6Gbit SAS/SATA controller with IT firmware
    – 1 x Broadcom 5709c dual-port Gigabit Ethernet adapter (iSCSI Network)
    – 1 x Realtek Gigabit Ethernet adapter (Management Network)

    The OS I used for my tests is Oracle Solaris 11 11/11 because of its better CIFS support with Active Directory, but I’ll soon be switching to OpenIndiana because I won’t need CIFS support anymore if the iSCSI tests are good enough.
    The array was configured as 5 x 2TB WD disks + 1 spare in RAIDZ2. The 2 SSD drives were used as a mirrored log device (ZIL) for write caching.
    The target iSCSI ports used were 1 Intel Gigabit (onboard) + 1 Broadcom Gigabit, both using 9K jumbo frames.
    The server used as the iSCSI initiator was an IBM x3550 with dual 2GHz Xeons, 3GB of RAM, and 4 x Broadcom Gigabit adapters, two of which (5709c) were used for the tests with 9K jumbo frames.
    The switch used for the tests was a cheap TRENDnet TEG-240WS (24 Gigabit ports) split into 5 untagged VLANs (only 2 were used) so that Multipath I/O could be used with two connections. Jumbo frames were enabled, of course.

    If you have a bit of time to evaluate my tests, you can download them at this url: http://zfstestresults.zapto.org/

    Thank you.

    Regards

    P.S.: your web site was an inspiration to me….

  • admin says:

    I’ll look through the results – but my initial impression is that they are probably a little low. I’ll have some additional responses to post soon.
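
    For reference, a pool laid out the way you describe (five-disk RAIDZ2 plus a spare, with a mirrored SSD log) would typically be created along the lines below. The device and dataset names are only placeholders, not your actual devices:

    # five data disks in RAIDZ2, one hot spare, two SSDs as a mirrored log (ZIL)
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        spare c2t5d0 \
        log mirror c3t0d0 c3t1d0

    # a zvol to export over iSCSI via COMSTAR
    zfs create -V 100G tank/iscsivol
    sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol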

  • r3N0oV4 says:

    Those are the results from a local dd based benchmark:

    DD Bench

    write 10.24 GB via dd, please wait…
    time dd if=/dev/zero of=/tank/dd.tst bs=1024000 count=10000

    10000+0 records in
    10000+0 records out

    real 26.7
    user 0.0
    sys 4.6

    10.24 GB in 26.7s = 383.52 MB/s Write

    read 10.24 GB via dd, please wait…
    time dd if=/tank/dd.tst of=/dev/null bs=1024000

    10000+0 records in
    10000+0 records out

    real 2.4
    user 0.0
    sys 2.1

    10.24 GB in 2.4s = 4266.67 MB/s Read

    There must be something related to iSCSI, don’t you think? Network usage on the initiator seems to be very low during heavy file transfers.

  • admin says:

    What does the pool configuration look like? Also, what does the iSCSI vdev configuration look like? Block size? Write back/write through? Etc.?
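
    On the Solaris side, these are the generic commands I would use to pull that information (the pool and zvol names below are placeholders):

    # pool layout, spares, and log devices
    zpool status tank

    # block size of the zvol backing the iSCSI LU
    zfs get volblocksize tank/iscsivol

    # COMSTAR logical units: block size, write-back cache setting, etc.
    stmfadm list-lu -v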

  • r3N0oV4 says:

    Surprisingly, after reinstalling my server with OpenIndiana 151a2 and correctly setting up the iSCSI initiators on the Windows box (4 x Broadcom 5709c with iSCSI offload support), the results seem to have changed.
    I’m going to post them as soon as IOMeter finishes.
    My pool is 1 x RAIDZ2 with 1 hot spare and an SSD mirror for the ZIL (I’m going to add a third SSD for the L2ARC cache). Write back is disabled because of the ZIL SSDs, while the block size was 128K during the first test and is now 64K for the running tests.
    Stay tuned.
