FCoE and Intel X520s

We’ve got a new lab to play with, and it’s a monster. Suffice it to say there is going to be a lot of information coming out about this system in the next few weeks. One of the more interesting things we’re trying with it is software FCoE. For the best performance you’ll obviously want a hardware FCoE CNA (Converged Network Adapter), but for testing purposes, Intel X520 10GbE network cards seem to work just fine. Just wanted to throw this out there in case anyone wants or needs to play with FCoE.

Here are the steps to get FCoE up and running on Intel X520 NICs (this is being done on NexentaStor Enterprise 3.1.3 – officially unsupported). This should work on OpenIndiana as well.

1. Install the 10GbE cards and connect them.
2. Configure the interfaces – set MTU to 9000 (this is important) – then test traffic with ping -s 9000 and iperf (see the sketch after step 3).
3. Unconfigure the network interfaces

setup network interface ixgbe0 unconfigure
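
For reference, step 2 can also be done from the raw shell instead of NMC. This is only a sketch: the peer address 192.168.10.2 is made up, and the ping/iperf lines below are Linux-style.

dladm set-linkprop -p mtu=9000 ixgbe0 (set jumbo frames on the link)
ping -M do -s 8972 192.168.10.2 (full-size payload with fragmentation disabled)
iperf -s on one box, then iperf -c 192.168.10.2 -t 30 on the other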

4. Create the FCoE target – on the target machine
fcadm create-fcoe-port -t -f ixgbe0 (-f puts the interface into promiscuous mode)

5. Create the FCoE initiator – on the initiator machine
fcadm create-fcoe-port -i -f ixgbe0
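
To confirm the FCoE ports actually got created on both sides, fcadm and fcinfo can list them (this check isn’t part of the original steps, just a sanity test):

fcadm list-fcoe-ports
fcinfo hba-port (the new FCoE port should show up with its WWN)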

6. Check state
stmfadm list-state

7. List targets
stmfadm list-target

8. Online your targets (online-target takes the target name shown by list-target)
stmfadm online-target <target-name>
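
For example, with the target name copied from the list-target output (the WWN below is just a placeholder):

stmfadm online-target wwn.2100001B97AB0001
stmfadm list-target -v (should now report the target as online)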

9. Create the volume on the target side
setup volume create data

10. Create zvol on target side
setup zvol create data/test
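
If you’re doing this on plain OpenIndiana rather than through NMC, the raw equivalent is a zfs volume; the 100G size here is only an example:

zfs create -V 100G data/test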

11. Share the zvol over iSCSI
setup zvol data/test share
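
On a stock OpenIndiana/illumos box you would export the zvol through COMSTAR by hand instead. A rough sketch (the GUID is a placeholder – copy the real one from the sbdadm list-lu output):

sbdadm create-lu /dev/zvol/rdsk/data/test
sbdadm list-lu
stmfadm add-view 600144f0c8e300000000500f12ab0001 (with no host/target groups this exposes the LU to all initiators)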

12. Make sure the LUN shows up
show lun
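
A couple of extra checks that aren’t in the original list, in case the LUN doesn’t show up right away (run these on the initiator box):

fcinfo hba-port (note the FCoE initiator port WWN and that it is online)
fcinfo remote-port -s -p <initiator port WWN> (lists remote ports and their LUNs)
format (the new LUN should appear as an additional disk)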

 

Friday, July 27th, 2012 Hardware

8 Comments to FCoE and Intel X520s

  • cnagele says:

    Have you tested network latency and throughput on the X520 between Nexenta and Linux? We have a test environment and noticed that iperf tests from Illumos to Debian are about 9.9 Gbps, but the other way is much less. We see the same with pings, about 0.21 ms one way and 0.08 ms the other.

    Wondering if you had to do any tuning for 10GbE.

  • I have not tested this yet directly. All of the stuff that we have 10GbE on has been used for VMware environments, and we have not directly tested Linux clients connecting via 10GbE. If I get a chance in the lab I will do some testing with this.

  • cnagele says:

    We finally got it to work. I had to set

    intr_throttling=1;

    in /kernel/drv/ixgbe.conf. After that I get line speed both ways (9.89 Gb/s) and near local disk performance for reads and writes (with the ZIL disabled). Pings are still a bit slower from OmniOS to Debian though.
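
    For anyone who wants to try the same thing, it is a one-line addition to the driver config; update_drv is just one way to make the driver re-read it, and a reboot is the surest way to apply it:

    intr_throttling=1; (added to /kernel/drv/ixgbe.conf)
    update_drv ixgbe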

  • That’s excellent news. I’ll be monkeying with our setup here in the next week or so, and I’ll try flipping that bit to see if it makes any difference in our deployment.

  • johnny says:

    What FCoE switch are you using to connect to your X520 as the target mode interface on your Solaris box?
    What version of Solaris are you using? What version of ixgbe are you using? Thanks

  • We are running Cisco Nexus 5548 switches and Nexenta 3.1.3, and I’m not sure exactly what version of ixgbe we are using. I’ll jump back into the lab in the next few days and get that info.

  • cryptz says:

    Just curious, have you done any comparison of 10G iSCSI and 10G FCoE with this hardware?

  • […] FCoE on NexentaStor HowTo by ZFSBuild […]
