FCoE with Intel 82599 10Gbit connections

So in my lab I’ve got a Cisco Nexus 5548 and a SuperMicro SuperServer 6026-6RFT+.  I’ve run Nexenta, Windows, and several other things on it.  One thing I hadn’t tried was FCoE.  Intel announced FCoE support for all Niantic-based (82599) chips several years ago, but I had never gotten around to testing it.

I figured this was as good a time as any to play with ESXi and FCoE, so I dug in.  ESXi 5.1 installed flawlessly.  It saw all of the NICs, all of the hard drives, everything.  The Nexus 5548 worked great; I sailed along creating new VSANs for FCoE and thought, “here we go!”.
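
For reference, the switch-side setup is roughly the following NX-OS configuration.  This is only a sketch; the VLAN, VSAN, and interface numbers are placeholders, not the exact values from my lab:

feature fcoe
! map the FCoE VLAN to the VSAN
vlan 101
  fcoe vsan 101
vsan database
  vsan 101
! trunk the FCoE VLAN to the host-facing port
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,101
  spanning-tree port type edge trunk
! virtual Fibre Channel interface bound to that port
interface vfc110
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 101 interface vfc110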

I followed Intel’s guide for enabling FCoE: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/ethernet-x520-configuring-fcoe-vmware-esxi-5-guide.html.  It all looked splendid until I actually got to the part where you activate the new FCoE storage adapter.  Every time I tried to add the Software FCoE adapter, ESXi acted as though no available adapter supported FCoE.  I knew that wasn’t the case, since Intel very clearly stated that it _was_ supported.
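
For anyone retracing the same steps, the activation boils down to ESXi’s software FCoE commands, which can also be run from the shell.  This is a rough sketch; vmnic2 is just a placeholder for whichever 82599 port you are using:

# list NICs that ESXi believes are FCoE-capable
esxcli fcoe nic list
# activate software FCoE on one of them
esxcli fcoe nic discover -n vmnic2
# the new storage adapter should then show up here
esxcli fcoe adapter list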

After several hours of poking, prodding, trying different versions of ESXi, updating the system board BIOS, tinkering with BIOS settings, and even trying Windows (thinking maybe, just maybe, ESXi wasn’t going to work), I gave up and sent an email to Intel.

Intel responded very graciously that since it was an integrated controller on the system board, there wasn’t much they could do for me, and that I would have to talk to my manufacturer.  I followed their advice, contacted SuperMicro, and got a fantastic response.

Matt,
Would you don’t mind flash the EEPROM firmware. The firmware release on 08/09/11 will allow Intel 82599EB to support FCoE.

X8DTU-6TF_EEPROM.zip

Steps to flash onboard LAN EEPROM

1. Extract the files and copy them to a bootable USB stick or to a bootable floppy disk.
(If you don’t have a bootable USB stick you can make one using:
http://www.softpedia.com/get/System/Boot-Manager-Disk/BootFlashDOS.shtml
http://www.sevenforums.com/tutorials/46707-ms-dos-bootable-flash-drive-create.html).

2. Boot up the system using the USB stick.

3. At the command prompt, type: <filename>.bat

4. Enter the 12-digit LAN1 MAC address when prompted.

5. Power cycle the system.

6. Reinstall the LAN drivers after the EEPROM is flashed.

 

Technical Support
ES

 

After flashing the new EEPROM onto the LAN controller, I was able to successfully enable FCoE for this system using ESXi (and subsequently Windows Server 2008 R2).

Tuesday, August 27th, 2013 Configuration, Hardware, Virtualization 2 Comments

ZFSBuild2012 – Nexenta vs FreeNAS vs ZFSGuru

Back in 2010, we ran some benchmarks to compare the performance of FreeNAS 0.7.1 on the ZFSBuild2010 hardware with the performance of Nexenta and OpenSolaris on the same ZFSBuild2010 hardware. (http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/)  FreeNAS 0.7.1 shocked all of us by delivering the absolute worst performance of anything we had tested.

With our ZFSBuild2012 project, we definitely wanted to revisit the performance of FreeNAS, and were fortunate enough to have FreeNAS 8.3 released while we were running benchmarks on the ZFSBuild2012 system.  It is obvious that the FreeNAS team worked on the performance issues, because version 8.3 of FreeNAS is very much on par with Nexenta in terms of performance (at least in our iSCSI over gigabit Ethernet benchmarks).

We believe the massive improvement in FreeNAS’s performance is the result of more than simply using a newer version of FreeBSD.  FreeNAS 8.3 is based on FreeBSD 8.3 and includes ZFS v28.  To test our theory, we benchmarked ZFSGuru 0.2.0-beta 7 installed onto FreeBSD 9.1, which also includes ZFS v28.  The performance of ZFSGuru was a fraction of the performance of FreeNAS 8.3.  This led us to believe that the FreeNAS team did more than simply install their web GUI on top of FreeBSD.  If we had to guess, we suspect the FreeNAS team did some tweaking in the iSCSI target to boost performance.  Theoretically, this same tweaking could be done in ZFSGuru to produce similar gains.  Maybe a future version of ZFSGuru will include the magic tweaks that FreeNAS already includes, but for now ZFSGuru is much slower than FreeNAS.  If you want to run a ZFS based SAN using FreeBSD, we strongly recommend FreeNAS.  We don’t recommend ZFSGuru at this point in time, since it is vastly slower than both Nexenta and FreeNAS.

The software versions we used during testing were Nexenta 3.1.3, ZFSGuru 0.2.0-beta 7 (FreeBSD 9.1), and FreeNAS 8.3 (FreeBSD 8.3).  Click here to read about benchmark methods.

We realize that newer versions have been released since we ran our benchmarks.  We will not be re-running any of the benchmarks at this time, because we have already placed the ZFSBuild2012 server into production.  The ZFSBuild2012 server currently runs Nexenta as a high performance InfiniBand SRP target for one of our web hosting clusters.

All tests were run on exactly the same ZFS storage pool using exactly the same hardware.  The hardware used in these tests is the ZFSBuild2012 hardware.

We did not run any InfiniBand tests with FreeNAS 8.3, because FreeNAS 8.3 does not have any InfiniBand support.  We did run some InfiniBand tests with ZFSGuru, but those results will appear in a different article specifically dedicated to that topic.

 
IOMeter 4k Benchmarks:
[IOMeter 4k benchmark charts]
› Continue reading

Friday, January 25th, 2013 Benchmarks 23 Comments

Dtrace broken with SRP targets?

Anyone who’s using InfiniBand, SRP targets, DTrace, and version 3.1.3.5 of Nexenta Community Edition, please raise your hands.

Nobody?  Not surprising. :)  I pulled some DTrace scripts off of another system to evaluate performance on the ZFSBuild2012 system, and got a very weird error:

 

# ./arcreap.d
dtrace: failed to compile script ./arcreap.d: "/usr/lib/dtrace/srp.d", line 49: translator member ci_local definition uses incompatible types: "string" = "struct hwc_parse_mt"

I’ve never seen this before, and the exact same script works flawlessly on ZFSBuild2010.  My guess is it’s something in SRP, since that’s the D library throwing the error and we aren’t using SRP on the ZFSBuild2010 system.  If anyone at Nexenta or anyone working on the Illumos project sees something here that makes sense, I’d love to hear about it.
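
One thing that might be worth trying (I haven’t tested it, so treat this as a guess): if the script itself doesn’t depend on translators from the system D libraries, DTrace can be told to skip loading them entirely, which would sidestep the broken srp.d:

# skip loading the D libraries in /usr/lib/dtrace, including srp.d
dtrace -xnolibs -s ./arcreap.d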

Thursday, December 27th, 2012 ZFS 4 Comments

ZFSBuild2012 – Write Back Cache Performance

Nexenta includes an option to enable or disable Write Back Cache on shared ZVols. To manage this setting, you must first create your ZVol and then set the ZVol to Shared. Then you can select the ZVol and edit the Write Back Caching setting. The purpose of this article is to find out how much performance is affected by the setting. All benchmarks were run on the ZFSBuild2012 hardware using Nexenta 3.1.3. All tests were run on the same ZFS storage pool. Click here to read about benchmark methods.
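
As far as I can tell, the Write Back Cache checkbox maps to COMSTAR’s write-cache-disabled (wcd) property on the shared logical unit.  On a plain illumos/OpenSolaris box, the equivalent steps look roughly like the following sketch; the pool, volume, size, and LU GUID are all placeholders:

# create a zvol and expose it as a COMSTAR logical unit
zfs create -V 200G tank/vol01
stmfadm create-lu /dev/zvol/rdsk/tank/vol01

# wcd=false means write caching is NOT disabled, i.e. write back cache is on
stmfadm modify-lu -p wcd=false <LU_GUID>
stmfadm list-lu -v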

iSCSI-WC-E is iSCSI using IPoIB with connected mode disabled and the Write Back Cache enabled.

iSCSI-WC-D is iSCSI using IPoIB with connected mode disabled and the Write Back Cache disabled.

IB-SRP-WC-E is InfiniBand SRP with the Write Back Cache enabled.

IB-SRP-WC-D is InfiniBand SRP with the Write Back Cache disabled.

Generally speaking, enabling the Write Back Cache has no significant impact on read performance, but it provides a huge improvement in write performance.

› Continue reading

Monday, December 17th, 2012 Benchmarks 13 Comments

ZFSBuild2012 – InfiniBand Performance

We love InfiniBand.  But it is not enough to simply install InfiniBand.  We decided to test three popular connection options with InfiniBand so we could better understand which method offers the best performance.  We tested IPoIB with connected mode enabled (IPoIB-CM), IPoIB with connected mode disabled (IPoIB-UD), and SRP.
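
For context, connected mode versus datagram mode is an initiator-side IPoIB setting.  On a Linux host, for example, it is just a sysfs write; the interface name here is an assumption:

# check the current IPoIB mode
cat /sys/class/net/ib0/mode
# switch to connected mode (per-peer connections, allows a much larger MTU)
echo connected > /sys/class/net/ib0/mode
echo 65520 > /sys/class/net/ib0/mtu
# or switch back to datagram (UD) mode
echo datagram > /sys/class/net/ib0/mode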
› Continue reading

Saturday, December 15th, 2012 Benchmarks 12 Comments

ZFSBuild2012 – Performance compared to ZFSBuild2010

ZFSBuild2012 is faster than ZFSBuild2010 in every way possible.  This page compares the iSCSI 1Gbps Ethernet performance difference between ZFSBuild2012 and ZFSBuild2010.  Both hardware designs are running Nexenta with write back caching enabled for the iSCSI shared ZVol.  Click here to read about benchmark methods.  We will be posting InfiniBand benchmarks with ZFSBuild2012 soon, and those InfiniBand benchmarks show even higher performance.

IOMeter 4k Benchmarks:
[IOMeter 4k benchmark chart]
› Continue reading

Friday, December 14th, 2012 Benchmarks 5 Comments

ZFSBuild2012 – Benchmark Methods

We took great care when setting up and running benchmarks to be sure we were gathering data that could be used to make useful comparisons.  We wanted to be able to compare the ZFSBuild2012 design with the ZFSBuild2010 design.  We also wanted to be able to compare various configuration options within the ZFSBuild2012 design, so we could make educated decisions about how to configure a variety of options to get the most performance out of the design.  The purpose of this page is to share all of our benchmarking methods.
› Continue reading

Friday, December 14th, 2012 Benchmarks 9 Comments

ZFSBuild2012 – Building Pictures

This article includes pictures taken while we were assembling the ZFSBuild2012 SAN.

Installing the motherboard:
[photo: installing the motherboard]
› Continue reading

Thursday, December 13th, 2012 Hardware 11 Comments

ZFSBuild2012 – Specs and Parts Pictures

We took a lot of pictures while we were building and testing the ZFSBuild2012 SAN.

The ZFSBuild2012 system comprises the following:

SuperMicro SC846BE16-R920 chassis – 24 bays, single expander, 6Gbit SAS capable.

SuperMicro X9SRI-3F-B Motherboard – Single socket Xeon E5 compatible motherboard.

Intel Xeon E5-1620 – 3.6GHz latest-generation Intel Xeon CPU.

20x Toshiba MK1001TRKB HDDs – 1TB 6Gbit SAS drives.

LSI 9211-8i SAS controller – Moving the SAS duties to a Nexenta HSL certified SAS controller.

Intel SSDs all around (see the zpool sketch after this parts list for how the ZIL and L2ARC devices attach):
2x Intel 313 series 20GB SSD drives for ZIL
2x Intel 520 series 240GB SSD drives for L2ARC
2x Intel 330 series 60GB SSD drives for boot (installed into internal cage)

64GB RAM (8x 8GB) – Generic Kingston ValueRAM.

20Gbps ConnectX InfiniBand card

4x ICYDock 2.5″ to 3.5″ drive bay converters

Internal drive bay bracket for boot drives

Y-cable for the rear fans (not enough fan sockets without one Y-cable)
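
A quick note on the ZIL and L2ARC SSDs mentioned above: attaching them to the pool is just a couple of zpool commands.  The pool name and device names below are placeholders; mirroring the log devices and striping the cache devices is the usual arrangement:

# mirrored ZIL (log) devices
zpool add tank log mirror c1t10d0 c1t11d0
# L2ARC (cache) devices, which are always striped
zpool add tank cache c1t12d0 c1t13d0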

Here are some pictures of the first few parts that showed up:
[photos: stack of computer parts]
› Continue reading

Thursday, December 13th, 2012 Hardware 10 Comments

ZFSBuild2012 – Mission Statement

We wanted to take a moment to discuss what to expect from the series of ZFSBuild2012 articles we are posting. These articles are not intended to be a performance shootout for hardware, software, networking, or anything else.

We have obviously run benchmarks as part of this project, but the benchmarks are not the primary goal of the project. The goal of this project is to share information about how to build a low cost, high performance, easy to manage ZFS based SAN. We have built and heavily tested this design. The ZFSBuild2012 design can deliver over 100,000 IOPS and costs about $7k to build.

We have received a lot of requests for benchmarks that people would like to see and comparisons that people would like to see us make between various solutions. While it might be interesting to make thousands of comparisons between every possible operating system and configuration combination, it is not the focus of this series of articles.

We will be posting benchmark results in an effort to explain the performance difference between the ZFSBuild2012 design and the ZFSBuild2010 design. It is probably not a spoiler to let everybody know that the ZFSBuild2012 design is much faster than the ZFSBuild2010 design. Over the past two years, we learned a lot of things about designing better ZFS based SANs and the underlying hardware got a lot faster. The purpose of comparing the two designs is merely to show how much performance can be gained from the new design. We used the same benchmark tools that we ran back in 2010, and we even used the same blade for running the benchmarks, so the benchmarks we will post comparing the two designs are a true apples to apples test.

We will also be posting benchmarks comparing the performance of InfiniBand using different configurations of the same hardware with Nexenta. Again, the purpose of those benchmarks will be to help people find the correct way to configure InfiniBand. It is not meant to be a fanboy style shootout about various driver settings.

Unfortunately, it was not practical for us to run benchmarks comparing 10GigE, FC, and InfiniBand.  While we do have access to this tech, we did not have all of it installed into the same blade center, so there was no good way for us to run a true apples to apples comparison of the various network interconnects.  Additionally, we did not want to buy more networking hardware just to install into the test blade center for the purpose of running benchmarks.  But as we already mentioned, this series of articles is not intended as a shootout.  The purpose of these articles is to share information about how to build a reasonably low cost SAN that can deliver over 100,000 IOPS.

Ultimately, it is up to you to choose what you do with the information shared in our series of articles on the ZFSBuild2012 design.  We used Nexenta for most of the benchmarks and we deployed the unit into production using Nexenta, but we are not trying to convince anybody to give up their favorite storage appliance software in favor of Nexenta.  We chose to use Nexenta Community Edition in the ZFSBuild2012 solution because it offered excellent performance and because it can automatically notify the admin of a failed drive (including flashing an LED on the drive bay when it is time to replace a failed drive).  We fully understand that some people will choose to run a bare operating system (such as OpenSolaris, OpenIndiana, or FreeBSD) and others will choose to run FreeNAS or ZFSGuru.  There is nothing wrong with that.  You should run whatever you are comfortable with.

We hope you enjoy the ZFSBuild2012 series of articles.  (We will be posting the ZFSBuild2012 articles soon)

 

Friday, November 30th, 2012 Benchmarks 10 Comments

ZFSBuild2012 – Benchmarking complete

Benchmarking for ZFSBuild 2012 has completed.  We’ve got a bunch of articles in the pipeline about this build, and we’ll be releasing them over the next few weeks and months.  Stay tuned!

Thursday, November 29th, 2012 Hardware 1 Comment

ZFSBuild2012 – Benchmark Progress

We are still running benchmarks for the ZFSBuild2012 SAN building project.  We completed all of the Nexenta benchmarks this past week.  We ran every combination of networking configurations to deeply test both 1Gbps Ethernet and 20Gbps InfiniBand on the Nexenta platform.  We completed ZFSGuru (with FreeBSD 9.1) benchmarks this morning (both 1Gbps Ethernet and 20Gbps InfiniBand).  We are currently running FreeNAS 8.3 benchmarks.

Saturday, October 27th, 2012 Benchmarks 21 Comments