So in my lab I’ve got a Cisco Nexus 5548 and a SuperMicro SuperServer 6026-6RFT+. I’ve run Nexenta, Windows, and several other things on it. One thing I hadn’t tried was FCoE. Intel announced FCoE support for all Niantic-based (82599) chips several years ago, and I’d never gotten around to testing it.
I figured this was as good a time as any to play with ESXi and FCoE, so I dug in. ESXi 5.1 installed flawlessly. It saw all of the NICs, all of the hard drives, everything. The Nexus 5548 worked great; I sailed along creating new VSANs for FCoE and thought, “here we go!”.
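For reference, the switch-side plumbing for FCoE on the 5548 looks roughly like this. This is a sketch, not my exact config; the VLAN/VSAN numbers and the Ethernet interface are placeholders:

```
feature fcoe

! Map a VLAN to the FCoE VSAN
vlan 100
  fcoe vsan 100

vsan database
  vsan 100

! Virtual Fibre Channel interface bound to the server-facing port
interface vfc10
  bind interface Ethernet1/10
  no shutdown

vsan database
  vsan 100 interface vfc10
```

The vfc interface is what carries the FC traffic over the 10GbE port once the host-side FCoE adapter comes up.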
I followed Intel’s guide for enabling FCoE: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/ethernet-x520-configuring-fcoe-vmware-esxi-5-guide.html. It all looked splendid until I got to the part where you activate the new FCoE storage adapter. Every time I tried to add the Software FCoE adapter, ESXi acted as though no available adapter supported FCoE. I knew this wasn’t the case, as the guide very clearly stated that it _was_ supported.
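If you hit the same wall, the ESXi 5.x command line gives a quicker read on the symptom than the vSphere Client does. These are the standard `esxcli fcoe` commands (the NIC name `vmnic2` is a placeholder for whichever port you’re using):

```
# List NICs that ESXi considers FCoE-capable.
# An empty list matches the "no available adapter" symptom.
esxcli fcoe nic list

# Once a NIC appears, activate software FCoE on it:
esxcli fcoe nic discover -n vmnic2

# Verify the resulting FCoE adapter:
esxcli fcoe adapter list
```

If `esxcli fcoe nic list` comes back empty even though the silicon supports FCoE, the capability bit is being masked somewhere below the driver, which is exactly what turned out to be happening here.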
After several hours of poking and prodding, trying different versions of ESXi, updating the system board BIOS, tinkering with BIOS settings, and even trying Windows (thinking maybe, just maybe, ESXi wasn’t going to work), I gave up and sent an email to Intel.
Intel responded very graciously that since it was an integrated controller on the system board, there wasn’t much they could do for me, and that I would have to talk to my manufacturer. I followed their advice, contacted SuperMicro, and got a fantastic response.
Would you mind flashing the EEPROM firmware? The firmware released on 08/09/11 will allow the Intel 82599EB to support FCoE.
Steps to flash the onboard LAN EEPROM:

1. Extract the files and copy them to a bootable USB stick or a bootable floppy disk.
(If you don’t have a bootable USB stick you can make one using:
2. Boot the system from the USB stick.
3. At the command prompt, type <filename>.bat
4. Enter the 12-digit LAN1 MAC address when prompted.
5. Power cycle the system.
6. Reinstall the LAN drivers after the EEPROM is flashed.
After flashing the new EEPROM onto the LAN controller, I was able to successfully enable FCoE for this system using ESXi (and subsequently Windows Server 2008 R2).
So let’s start talking about some of the things we’ve learned over the last year, shall we? The number one thing we have learned is that your working set size dramatically impacts your performance on ZFS. When you can keep that working set inside of RAM, your performance numbers are outrageous: hundreds of thousands of IOPS, latency next to nothing, and the world is wonderful. Step over that line, though, and you had better have architected your pool to absorb it.
Fast forward a year, and we’ve found that the IOPS we deliver are much higher than we expected, and response times are much lower. Why is that, you may ask? With large datasets and very random access patterns, you would expect something close to worst-case behavior. What we found is that a lot of blocks were never accessed at all. We have thick-provisioned most of our VMs, which results in 100+ GB of empty space in most of them (nearly 50% of all allocated capacity). With Nexenta and ZFS, all of the really active data was being moved into the ARC/L2ARC cache. While we still had some reads and writes going to the disks, it was a much smaller percentage. We quickly figured out that the caching algorithms and tuning Nexenta employs in ZFS are very intelligent, and our working set was much smaller than we ever imagined.
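A toy model makes the point. The sketch below simulates a plain LRU cache with a skewed access pattern where most reads land on a small “hot” fraction of blocks. The real ARC is adaptive and smarter than LRU, so if anything this understates the effect, and every number here is made up for illustration:

```python
import random
from collections import OrderedDict

def simulate(num_blocks=100_000, cache_blocks=12_000, accesses=200_000,
             hot_fraction=0.1, hot_prob=0.9, seed=42):
    """Hit rate of a plain LRU cache under a skewed access pattern.

    A rough stand-in for ARC/L2ARC behavior; the real ARC is adaptive,
    not plain LRU. All parameters are illustrative, not measured values.
    """
    rng = random.Random(seed)
    cache = OrderedDict()                 # keys kept in LRU -> MRU order
    hot_cutoff = int(num_blocks * hot_fraction)
    hits = 0
    for _ in range(accesses):
        # Most reads go to the hot minority of blocks (made-up 90/10 skew)
        if rng.random() < hot_prob:
            block = rng.randrange(hot_cutoff)
        else:
            block = rng.randrange(hot_cutoff, num_blocks)
        if block in cache:
            hits += 1
            cache.move_to_end(block)      # refresh recency on a hit
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False) # evict the least-recently-used block
    return hits / accesses

# Cache big enough for the hot set: high hit rate despite a far larger dataset
print(f"hit rate, hot set fits in cache: {simulate():.2%}")
# Cache much smaller than the hot set: the hit rate collapses
print(f"hit rate, undersized cache:      {simulate(cache_blocks=1_000):.2%}")
```

Even though the cache covers only about a tenth of the dataset, the hit rate stays high as long as it covers the hot blocks, which is essentially what we saw once the thick-provisioned empty space and never-touched data were excluded from the real working set.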