Initial InfiniBand Performance Testing

We’ve gotten the InfiniBand network mostly up and running, and have been doing some performance tests with the included WinOF InfiniBand performance tools. The results are nothing less than stunning. This is the fastest transport we’ve ever had available to us in the DataCenter.

The InfiniBand write performance averaged 1869.51MB/sec, which equals roughly 14.60Gbit/sec. The maximum theoretical throughput on a 4x DDR InfiniBand link is 16Gbit/sec (the 20Gbit/sec signaling rate less the overhead of 8b/10b encoding), so I would say that the performance there is stellar.

InfiniBand Write Performance
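For reference, here is a small sketch of the unit conversion behind those figures. It assumes the benchmark tools report binary megabytes (MiB/sec) and that Gbit/sec here means 2^30 bits/sec, which is what makes 1869.51MB/sec line up with roughly 14.60Gbit/sec:

```python
# Assumption: the tools report MiB/sec, and "Gbit/sec" is computed
# with a divisor of 1024 rather than 1000. Under that convention the
# numbers in this post line up.

def mb_to_gbit(mb_per_sec: float) -> float:
    """Convert MiB/sec to Gbit/sec: multiply by 8 bits, divide by 1024."""
    return mb_per_sec * 8 / 1024

write_gbit = mb_to_gbit(1869.51)
print(f"{write_gbit:.2f} Gbit/sec")  # prints 14.61 Gbit/sec
print(f"{write_gbit / 16:.0%} of the 16Gbit/sec theoretical max")
```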

The InfiniBand read performance averaged 1869.46MB/sec, which also works out to roughly 14.60Gbit/sec. Again, we can see that we are loading the InfiniBand network very heavily and that plenty of bandwidth is available.

InfiniBand Read Performance

As you can see, 20Gbit InfiniBand provides more bandwidth than any other DataCenter interconnect available today. With 14.60Gbit/sec of usable bandwidth per port, the InfiniBand network will not be a bottleneck for our storage system. It is excellent to know that even if we scaled the system to hundreds of drives, we would have an interconnect that can handle more data than our wildest imaginations could dream up.

Friday, May 21st, 2010 Benchmarks, InfiniBand

7 Comments to Initial InfiniBand Performance Testing

  • rens says:

    Impressive! Looking forward to seeing the performance from within a VPS and how it handles the load of hundreds of VMs. Especially interesting will be how much more load it can handle thanks to the SSDs and the ZFS filesystem (compared to a regular hardware RAID 10 system).

  • intel says:

    Thanks for sharing. Would you mind sharing some iostat output for your zpool directly from the SAN/OpenSolaris box?

  • redr0bin says:

    Outstanding results! I’m trying to implement a rather similar setup, but with a Topspin IB switch and Mellanox DDR dual-port cards: OpenIndiana as a storage array exporting block devices to Linux machines via SRP.

    It seems that I’m not getting speeds like yours. Local writes to the array are about 2.5-3.0 GB/s, but exported to Linux it’s only 300-500 MB/s. What do you think the bottleneck could be?

    Thanks in advance!

  • admin says:

    redr0bin: There are lots of things that could limit the performance. One quick thing to check is your multipathing config. For example, if you are doing round robin balancing in your multipathing and have 1Gbps links in the mix, the IB link will be underutilized.

    One test we ran limited itself to 3Gbps because there were two 1Gbps links and one 20Gbps link. Rather than using the 20Gbps link as much as possible, round-robin multipathing split the load evenly across the 3 links. It saturated the two 1Gbps links and underutilized the 20Gbps link, so it yielded 3Gbps instead of 22Gbps. To make sure something like that is not the problem, run a performance test with the slower links disabled.
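    The arithmetic is easy to sketch. With even round-robin striping, every path carries the same share of I/O, so the slowest link caps the per-link rate and the aggregate is just (number of links) × (slowest link speed):

    ```python
    # Illustration of the round-robin bottleneck described above.
    # Assumes perfectly even striping across all active paths.

    def round_robin_throughput(link_speeds_gbps):
        """Aggregate throughput when I/O is split evenly across links."""
        return len(link_speeds_gbps) * min(link_speeds_gbps)

    # Two 1Gbps links plus one 20Gbps link:
    print(round_robin_throughput([1, 1, 20]))  # 3 Gbps, not 22
    # With the slow links disabled, the IB link runs unconstrained:
    print(round_robin_throughput([20]))        # 20 Gbps
    ```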

  • redr0bin says:

    Yes, I do have multiple gigabit links in the Solaris box as well as in the Linux machines. But on the Solaris box I have only one ZFS block device exported via SRP; iSCSI and NFS are disabled via svcadm, and there is no multipathing in the test setup. Do you think I’m still utilizing the gigabit network in the mix with IB?

  • admin says:

    Probably not, but it might be worth disabling the slower links and retesting just to be sure.

  • redr0bin says:

    Two more possible sources of the slowdown are the Mellanox ConnectX DDR card on Solaris or maybe the Topspin IB switch. Testing from Linux to Linux over IB, I got 500MB/s and more on writes 🙁
