We are still running the IOMeter benchmarks on the new ZFSBuild2012 server over InfiniBand SRP, and the numbers we are seeing with SRP are absolutely awesome. For example, right now it is running a 32k random read benchmark and getting nearly 60,000 IOPS while moving 1854MB/second. For comparison, when we ran raw performance tests on the IB network itself, the ib_read_bw and ib_write_bw tools showed the network moving an average of 1869MB/second and a peak of 1890MB/second. It is really exciting to see the ZFSBuild2012 box delivering 1854MB/second through IOMeter, which is about 98% of the wire speed of our IB network.
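As a quick sanity check on that 98% figure, the efficiency falls straight out of the numbers quoted above (a minimal sketch; 1854 and 1890 are the IOMeter and ib_read_bw/ib_write_bw peaks from this post):

```python
# Wire-speed efficiency of the SRP benchmark, using the figures quoted above.
iometer_throughput_mb = 1854   # MB/s measured by IOMeter over SRP
ib_peak_mb = 1890              # MB/s peak reported by ib_read_bw / ib_write_bw

efficiency = iometer_throughput_mb / ib_peak_mb * 100
print(f"{efficiency:.1f}% of IB wire speed")  # -> 98.1% of IB wire speed
```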
So – just a quick update – we got InfiniBand SRP (SCSI RDMA Protocol) working from our Windows system to our Nexenta system. Here’s a screengrab from IOMeter beating the crap out of our 20Gbit InfiniBand network. This is a 16k random read workload. Obviously it’s all fitting into RAM, but compared to iSCSI with IPoIB, there’s no contest: iSCSI/IPoIB manages about 400MB/sec and 27,000 IOPS. It is very exciting to see this as a possibility moving forward.
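The iSCSI/IPoIB figures are internally consistent, too: IOPS times block size should roughly equal throughput. A back-of-the-envelope check (assuming IOMeter's 16k means 16 KiB):

```python
# Back-of-the-envelope: IOPS x block size = throughput for the iSCSI/IPoIB run.
iops = 27_000          # IOPS quoted for iSCSI over IPoIB
block_kib = 16         # 16k random read block size

throughput_mib = iops * block_kib / 1024   # MiB/s
print(f"{throughput_mib:.0f} MiB/s")       # -> 422 MiB/s
```

That lands in the same ballpark as the ~400MB/sec observed, so the IOPS and bandwidth numbers corroborate each other.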
We’ve gotten the InfiniBand network mostly up and running, and have been doing some performance tests with the included WinOF InfiniBand performance tools. Needless to say, the results are nothing less than stunning. This is the fastest transport we’ve ever had available to use in the datacenter. › Continue reading
The title of this post says it all: how to get InfiniBand to work properly. We have had a roller coaster of fun trying to get the InfiniBand network up and running. From the headache of installing the InfiniBand switch module, to finding the correct mezzanine cards for our blades, to getting IPoIB (IP over InfiniBand) working properly on all systems, it’s been interesting to say the least.
› Continue reading
For years we have successfully connected all of our blade centers to our storage area networks using 1GigE. Each time we needed more bandwidth, we simply added more network ports. For our ZFS Build project, we decided to break from this tradition and try higher-performance networking in place of 1GigE. › Continue reading
Installing an InfiniBand switch in a SuperMicro SBE-710E
Our current infrastructure relies entirely on iSCSI for our storage solution, so we have dual gigabit switch modules in our bladecenter. While this has worked very well for us, we want to expand our bladecenter to accept the SuperMicro 4x DDR InfiniBand switch.
› Continue reading