Test Blade Configuration

Our bladecenters are full of high-performance blades that currently run our virtualized hosting environment. Since those blades are in production, we couldn't very well use them to test the performance of our ZFS system, so we had to build another blade. We wanted it to be similar in spec to our existing blades, but we also wanted to take advantage of some of the new technology that has come out since many of them went into production. Our current environment is a mix of blades running dual Xeon 5420 processors with 32GB RAM and dual 250GB SATA hard drives, and blades running dual Xeon 5520 processors with 48GB RAM and dual 32GB SAS HDDs.
We use the RAID1 volumes in each blade as boot volumes; all of our content is stored on RAID10 SANs.

Following that tradition, we decided to use the SuperMicro SBI-7126T-S6 as our base blade. We populated it with dual Xeon 5620 processors (quad-core, Westmere-based), 48GB of Registered ECC DDR3 memory, dual Intel X25-V SSDs (boot drives in a RAID1 mirror), and a SuperMicro AOC-IBH-XDD InfiniBand Mezzanine card.

[Photo: Front panel of the SBI-7126T-S6 Blade Module]

[Photo: Intel X25-V SSD boot drives installed in the drive bays]

[Photo: Dual Xeon 5620 processors, 48GB Registered ECC DDR3 memory, and InfiniBand DDR Mezzanine card installed]

Our initial tests will run Windows Server 2008 R2 and IOMeter. We will be testing iSCSI, NFS, and hopefully SRP (SCSI RDMA Protocol) connections. Once we have baseline numbers for different read/write loads, we will do additional testing of different virtualization environments.
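
As a rough sanity check on IOMeter numbers, Little's Law ties outstanding I/Os, average latency, and IOPS together. Here is a minimal Python sketch of that arithmetic; every number in it is hypothetical, not a measurement from our hardware:

    # Sanity-check IOMeter results with Little's Law:
    # IOPS = outstanding I/Os / average latency.
    # Every number below is hypothetical, not a measurement.

    def expected_iops(outstanding_ios: int, avg_latency_ms: float) -> float:
        """Little's Law: throughput = concurrency / latency."""
        return outstanding_ios / (avg_latency_ms / 1000.0)

    def throughput_mb_s(iops: float, block_size_kb: float) -> float:
        """Convert IOPS at a fixed block size into MB/s."""
        return iops * block_size_kb / 1024.0

    # Example: 32 outstanding 4KB I/Os completing in 0.5 ms on average.
    iops = expected_iops(32, 0.5)
    print(f"{iops:,.0f} IOPS")                     # 64,000 IOPS
    print(f"{throughput_mb_s(iops, 4):.0f} MB/s")  # 250 MB/s

If a benchmark result is wildly out of line with this relationship, that usually points at a configuration problem rather than the storage itself.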

We plan on deploying VMware, Xen, and Hyper-V 2.0 hosts on this blade, and we will then run a battery of IOMeter tests from inside a Windows Server 2008 R2 guest. We will build the guest OS fresh for each virtualization platform to prevent any driver corruption issues, and we will run tests using thin provisioning, normal (thick) provisioning, and each of the connection protocols above. We hope to give a very good idea of what our raw system performance can be, and what kind of performance a guest OS can see under different hypervisors.
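
To give a sense of how large that battery of tests gets, here is a quick sketch of the kind of matrix involved. The access patterns and values below are placeholders, not our final methodology:

    # Sketch of the planned test matrix. The access specs and values
    # here are placeholders, not the final methodology.
    from itertools import product

    hypervisors  = ["VMware", "Xen", "Hyper-V 2.0"]
    protocols    = ["iSCSI", "NFS", "SRP"]
    provisioning = ["thin", "thick"]
    access_specs = [          # (block size KB, % read, % random)
        (4, 100, 100),        # 4KB random read
        (4, 0, 100),          # 4KB random write
        (64, 100, 0),         # 64KB sequential read
        (64, 0, 0),           # 64KB sequential write
    ]

    runs = list(product(hypervisors, protocols, provisioning, access_specs))
    print(f"{len(runs)} test runs")  # 3 * 3 * 2 * 4 = 72
    for hv, proto, prov, (bs, rd, rnd) in runs[:2]:
        print(f"{hv} / {proto} / {prov}: {bs}KB, {rd}% read, {rnd}% random")

Even a modest set of access patterns multiplies out quickly once hypervisors, protocols, and provisioning modes are factored in, which is why each guest gets a fresh build.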

Exact testing and methodology have not been planned yet, but we will go into great detail about the exact tests we perform and why we chose them. We hope to give our readers enough information to replicate these tests on their own.

Wednesday, May 5th, 2010 Hardware

4 Comments to Test Blade Configuration

  • rens says:

    May I ask what your primary reasons are for using blades instead of regular Supermicro servers? Power/space efficiency? What about costs, are they about the same?

    Looking forward to your next posts, thanks!

  • admin says:

    The blade style solution is really nice. It is easy to manage and very reliable, and the efficiency of blade solutions is excellent.

    If you only need one server, a blade solution is much more expensive. However, a blade solution can actually cost less for large deployments (many servers). For example, price out a full blade center and then price out the same amount of processing power using non-blade servers; the blade solution is generally less expensive.
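
    A quick back-of-the-envelope sketch of that math (every price below is made up for illustration, not a quote):

        # Blade vs. rack-server cost comparison. Every price here is a
        # made-up placeholder for illustration -- get real quotes.
        BLADES_PER_CHASSIS = 10

        def blade_cost(n, chassis=4000, blade=2500):
            chassis_needed = -(-n // BLADES_PER_CHASSIS)  # ceiling division
            return chassis_needed * chassis + n * blade

        def rack_cost(n, server=3200):
            return n * server

        for n in (1, 10, 40):
            print(f"{n:>2} servers: blades ${blade_cost(n):,} vs rack ${rack_cost(n):,}")
        #  1 server:  blades $6,500   vs rack $3,200   (blade costs more)
        # 40 servers: blades $116,000 vs rack $128,000 (blade costs less)

    The chassis is a fixed cost that gets amortized across the blades it holds, which is why the crossover only happens at scale.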

  • […] blade that we had built for testing of the InfiniBand network and then as a virtualization system. Test Blade Configuration It’s overkill to say the least. We have ordered another blade outfitted with a Xeon 5506, 2GB […]

  • […] that we used for the ZFSBuild2010 testing back in 2010. The specs for that blade are available at http://www.zfsbuild.com/2010/05/05/test-blade-configuration/. Note, we did not merely use a blade with the same specs. We literally removed that exact […]
