Why go with a BladeCenter instead of 1U systems?

The pros and cons of using a BladeCenter

Many people looking through this site may be asking themselves: why not just build 1U systems and save yourself the cost of running a BladeCenter? Surely 1U systems would be more cost effective than a BladeCenter, right? We thought so too, until we really dug into the numbers and found that the BladeCenter was actually less expensive once you are building more than a few servers. This post explores in detail how that worked out.

Our journey into BladeCenter land started about 3 years ago, when we decided virtualization was on the horizon and would be the next "killer app". We wanted to start consolidating workloads and management.

At the time, we were running 1U rack mount systems, all connected to a central KVM panel and several different gigabit Ethernet switches. To make the move to virtualization, we would need to move to SAN storage and a network to support the SAN environment. Being a medium-sized shop with a budget, we decided that iSCSI was the way to go. Fibre Channel was too expensive, and we had zero experience working with a fibre network. It seemed simpler to go with the topology we were already familiar with.

After we decided to go with iSCSI, we chose a vendor. Promise Technology produces iSCSI units that are affordable and have so far worked well for us. We have finally outgrown the Promise units though, and need a more robust and expandable solution. We’re currently running multiple Promise M610i systems on each bladecenter.

Now that we had settled on our SAN hardware, it was time to look at building our virtualization boxes. We kicked around the idea of 1U systems vs using a BladeCenter, and built a pros/cons list for each. That list is below:

Pros and cons of a BladeCenter vs 1U servers:

Pros for 1U systems

Inexpensive
Easily replaceable

Cons for 1U systems

Extra network cabling
Little expandability
Extra power cabling for dual power supplies
Extra costs for KVM over IP
No remote PDC without KVM over IP
No drive redirection without KVM over IP
Lower rack density

We then built a pros and cons list for the BladeCenter:

Pros for BladeCenter

Consolidated Management
Significantly less cable clutter
Redundant high efficiency power supplies
Remote KVM built in
Remote drive redirection/ISO mounts
Higher rack density

Cons for BladeCenter

Limited expandability
Cost (we initially assumed a BladeCenter might cost more)
Vendor lock-in

So the question becomes: do you go with a fully populated BladeCenter, or a bunch of 1U servers?

When we looked at the lists, we decided the pros of the BladeCenter outweighed the pros of the 1U systems.

After doing a thorough cost analysis, though, we found that it is actually less expensive to go with the BladeCenter, by just under US$2,000 (about $1,665 by the figures below), if you are actually filling the BladeCenter versus building ten 1U systems. This is shown in the breakdown below, using pricing for the systems we are currently building (blades) and what we would build if we were using 1U systems.

Parts pricing breakdown: 1U vs BladeCenter

Per-blade parts (one blade)

Part        URL   Qty  Price  Total
Blade       link  1    $728   $728
Processor   link  2    $399   $798
RAM         link  12   $143   $1716
InfiniBand  link  1    $411   $411
Heatsink    link  2    $28    $56
HDD         link  2    $120   $240

Total per blade: $3949

Shared BladeCenter infrastructure

Part                       URL   Qty  Price  Total
BladeCenter chassis        link  1    $3248  $3248
InfiniBand switch          link  1    $3957  $3957
Secondary gigabit network  link  1    $440   $440

Shared costs, BladeCenter: $7645

Per-server parts (one 1U server)

Part        URL   Qty  Price  Total
Barebones   link  1    $1219  $1219
Processor   link  2    $399   $798
RAM         link  12   $143   $1716
InfiniBand  link  1    $493   $493
Heatsink    link  2    $27    $54
HDD         link  2    $120   $240

Total per 1U server: $4520

Shared 1U infrastructure

Part               URL   Qty  Price  Total
InfiniBand switch  link  1    $3182  $3182
Gigabit switches   link  2    $209   $418

Shared costs, 1U: $3600

Totals

BladeCenter, one blade:                    $11594
BladeCenter, fully populated (10 blades):  $47135
1U, one server:                            $8120
1U, ten servers:                           $48800
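
If it helps to see where the crossover happens, here is a minimal sketch of the comparison in Python. It uses only the figures from the breakdown above; the per-blade/per-server groupings and the loop are our own illustration of how the chassis overhead gets amortized, not part of the original spreadsheet.

    # Minimal sketch of the cost comparison above; prices are the figures
    # from the parts breakdown, and the loop below is our own illustration.

    PER_BLADE = 728 + 798 + 1716 + 411 + 56 + 240   # $3949 per blade
    BLADE_SHARED = 3248 + 3957 + 440                # $7645 chassis + switches

    PER_1U = 1219 + 798 + 1716 + 493 + 54 + 240     # $4520 per 1U server
    ONEU_SHARED = 3182 + 418                        # $3600 switches


    def blade_total(n: int) -> int:
        """Cost of one BladeCenter chassis populated with n blades."""
        return BLADE_SHARED + n * PER_BLADE


    def oneu_total(n: int) -> int:
        """Cost of n 1U servers plus their shared switches."""
        return ONEU_SHARED + n * PER_1U


    for n in range(1, 11):
        diff = blade_total(n) - oneu_total(n)
        print(f"{n:2d} servers: blade ${blade_total(n):6d}  "
              f"1U ${oneu_total(n):6d}  blade minus 1U: {diff:+d}")

By these figures the shared chassis and switch costs are paid back at roughly the eighth blade, which is why the comparison only favors the BladeCenter once you are actually filling it.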


Cabling costs breakdown

Now, we've left off things like UPSs, Ethernet cables, and power strips, but those costs can add up quite quickly as well. And who wants to deal with a huge bundle of Ethernet cables?


Ethernet cables: $5 per cable, 3 per server, 30 total = $150
InfiniBand cables: $50 per cable, 1 per server, 10 total = $500
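
As a quick sanity check, the same arithmetic in a short sketch; the three-Ethernet-runs and one-InfiniBand-run per server counts are taken from the figures above and apply to the ten-server 1U build.

    # Cabling for ten 1U servers, using the per-cable prices quoted above.
    SERVERS = 10
    ethernet = 5 * 3 * SERVERS     # $5 per cable, 3 Ethernet runs per server -> $150
    infiniband = 50 * 1 * SERVERS  # $50 per cable, 1 InfiniBand run per server -> $500
    print(f"Ethernet ${ethernet}, InfiniBand ${infiniband}, total ${ethernet + infiniband}")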

As the lists above demonstrate, if you're actually going to use the full capacity of a BladeCenter, it's cheaper than buying 1U systems, it's denser than 1U systems, and it creates significantly less cable mess than 1U systems. And this doesn't even factor in UPS cost and redundancy.

UPS and power distribution costs

UPS and power distribution could become another cost hurdle, as well as a reliability hurdle. You want each server connected to two separate UPSs in case one UPS fails, which means each UPS has to be able to carry the full load of the servers connected to it on its own. For ten 1U systems, that works out to a minimum of 4x 3500 watt UPSs at 110V. With the BladeCenter, the power supplies are N+1, so the chassis will never draw more than 6000 watts (3 x 2000). As such, you could run 4x 2000 watt UPSs and still be protected from any single UPS fault. If you wanted to use 2000 watt UPSs for the 1U systems, you would need a minimum of 10 UPSs to get the same protection. Four 2000 watt UPSs are significantly cheaper than ten, and even if you went with 3500 watt UPSs for the 1U systems, those are significantly more expensive than the 2000 watt units.
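
Here is a rough sketch of that UPS arithmetic. The pairing model (each group of servers fed by two UPSs, either of which must carry the whole group alone) is our reading of the reasoning above, and the ~700 W per-server draw is an assumption chosen because it reproduces the counts in the text; it is not a measured figure.

    import math

    def ups_count_paired(servers, per_server_watts, ups_watts):
        """1U case: servers are split into groups, each group fed by a pair of
        UPSs, and either UPS in the pair must carry the whole group by itself."""
        servers_per_pair = ups_watts // per_server_watts
        pairs = math.ceil(servers / servers_per_pair)
        return pairs * 2


    def ups_count_n_plus_1(load_watts, ups_watts):
        """BladeCenter case: enough UPSs to carry the load with any one failed."""
        return math.ceil(load_watts / ups_watts) + 1


    PER_SERVER_WATTS = 700  # hypothetical peak draw per dual-PSU 1U server

    print("1U, 3500 W UPSs:", ups_count_paired(10, PER_SERVER_WATTS, 3500))  # 4
    print("1U, 2000 W UPSs:", ups_count_paired(10, PER_SERVER_WATTS, 2000))  # 10
    print("Blade, 2000 W UPSs:", ups_count_n_plus_1(3 * 2000, 2000))         # 4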

Space Considerations

Another consideration is space. Say you have 100 servers to deploy. If you deploy using 1U servers and supporting equipment, you will end up using 340U of space, or just over 8 racks (9 racks in practice). If you deploy using BladeCenters and supporting equipment, you will use a total of 150U, or a little less than 4 racks.

A single rack has 42U of space. In 42U you can fit 6 BladeCenters, for a total of 60 servers, so 2 racks will get you 100 servers plus 14U of spare space for UPSs. With 10 BladeCenters you need 40 UPSs, and a typical 2000 watt UPS takes up 2U, so those fill another 80U. Total space required for 10 BladeCenters and 40 UPSs is 150U, or just shy of 4 racks of equipment. To house the same 100 servers as 1U systems, you need 100U for the servers, 200U for UPSs, 20U for Ethernet switches, and 20U for InfiniBand switches, for a total of 340U, or just over 8 racks. If you're filling a datacenter and paying for rack space, the extra 5 racks can add up to a significant monthly cost, not to mention the lost data processing density.
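
The rack math above reduces to a few lines. In this sketch the 7U chassis height comes from the "6 BladeCenters per 42U rack" figure, and everything else (2U UPSs, one UPS per 1U server, 20U each for Ethernet and InfiniBand switches) is taken from the text.

    import math

    RACK_U = 42
    CHASSIS_U = 42 // 6        # 7U per BladeCenter (6 chassis fit in a 42U rack)
    UPS_U = 2                  # a typical 2000 W UPS takes 2U

    servers = 100

    # BladeCenter deployment: 10 servers per chassis, 4 UPSs per chassis.
    chassis = math.ceil(servers / 10)
    blade_u = chassis * CHASSIS_U + chassis * 4 * UPS_U
    print(f"BladeCenter: {blade_u}U, {blade_u / RACK_U:.1f} racks")  # 150U, ~3.6 racks

    # 1U deployment: one rack unit per server, one 2U UPS per server,
    # plus 20U of Ethernet switches and 20U of InfiniBand switches.
    oneu_u = servers * 1 + servers * UPS_U + 20 + 20
    print(f"1U servers:  {oneu_u}U, {oneu_u / RACK_U:.1f} racks")    # 340U, ~8.1 racks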

Overall, we feel that the decision to go with a BladeCenter has benefited us greatly. We've saved space, saved money, and gotten rid of the ever-terrible datacenter cable jungle. When we implement our ZFS system as our storage subsystem, we will be able to remove even more cabling, as there will only be two InfiniBand cables from the SAN system to the BladeCenter. Compare this to 2x gigabit Ethernet cables per Promise iSCSI device plus a single Ethernet connection for management. That consolidates multiple Ethernet cables down to two InfiniBand cables per head-end unit and a single cable from the head unit to each additional shelf of drives. A significant improvement from our standpoint.

Monday, May 10th, 2010 Hardware

4 Comments to Why go with a BladeCenter instead of 1U systems?

  • rens says:

    Thanks a lot for the detailed answer on my question! I’m checking in daily for updates here 😉

  • jdye says:

    nowadays the idataplex (and 2u twin^2 supermicro clones) might be a better choice. it runs about $4000 barebones with integral QDR infiniband (with sockets for opteron 61xx), and $2700, i think, without IB. that’s 800 for the chassis, and 800 for the motherboard with IB, or about 450 for the motherboard without. that will get you 4 machines in 2u. it might come down to the cost of cooling, though. i think the supermicro idpx clones have 80mm fans. and i’ve not seen the blowers on the supermicro blade chassis.

    just a thought.

  • admin says:

We’ve looked at some of the solutions where you can place multiple nodes in the same case. It is kind of like having a mini-bladecenter. Depending on the specific needs, it may be a viable option. For our needs, we prefer using blade centers. The cooling on blade centers is really impressive.

    If you are looking for maximum consolidation, a really neat option is the Supermicro TwinBlade. The TwinBlade stuff is two server nodes per blade. Here is a link about TwinBlade:
    http://www.supermicro.com/products/SuperBlade/TwinBlade/
