Chassis Selection

SuperMicro SC846E1-R900B
SuperMicro SC846E1-R900B Chassis Photo

We host a lot of websites and need a large amount of fast storage for those websites and our Cloud Infrastructure. We currently run many individual iSCSI devices over Gigabit Ethernet. We want to consolidate those individual iSCSI devices into a centralized unit built on hybrid storage (SSD + HDD) that is expandable to support a large number of drives, with redundant connections to our Cloud Infrastructure. We are also moving from our current Gigabit networking to InfiniBand for increased bandwidth and decreased latency. Eventually this solution will encompass real-time data replication to a remote datacenter for global failover.

Our search for our ZFS SAN build starts with the chassis.  We looked at several systems from SuperMicro, Norco, and Habey.  Those systems can be found here:

SuperMicro: SuperMicro SC846E1-R900B

Norco: NORCO RPC-4020

Norco: NORCO RPC-4220

Habey: Habey ESC-4201C

The Norco and Habey systems were all relatively inexpensive, but none came with power supplies, nor did they support daisy-chaining to additional enclosures.  You also needed multiple connections to the backplane to access all of the drive bays.

The SuperMicro system was by far the most expensive of the lot, but it came with redundant hot-swap power supplies, the ability to daisy-chain to additional enclosures, 4 extra hot-swap drive bays, and a single SFF-8087 connection to the backplane, which simplifies internal cabling significantly.
SuperMicro Backplane connections

The SuperMicro backplane also gives us the ability to daisy-chain additional external chassis using the built-in expander, without any additional controllers.  This simplifies expansion significantly: adding another enclosure is as simple as running a cable to the back of the chassis.

Given the cost-benefit analysis, we decided to go with the SuperMicro chassis.  While it was $800 more than most of the other systems, having a single connector to the backplane meant we did not have to spend a lot of money on the SAS HBA card (more on this later).  To reach all of the drive bays in the other systems, you either needed a 24-port RAID card or a SAS controller card with five SFF-8087 connectors.  Those cards can each run into the $500-600 range.

We also found that the power supplies we would want for this build would have significantly increased the cost of the other chassis.  Having the redundant hot-swap power supplies included with the SuperMicro saved us that expense.  The only power supply we found that came even close to fulfilling our needs for the Norco and Habey units was a $370 Athena Power hot-swap redundant unit.  Factoring that into our purchasing decision makes the SuperMicro chassis even more of a no-brainer.

SuperMicro SC846E1-R900B Chassis Photo
We moved the SuperMicro chassis into one of the racks in the datacenter for testing, as running it in the office was akin to sitting next to a jet waiting for takeoff. After a few days of it sitting in the office we were all threatening OSHA complaints over the noise! Seriously, it was that loud. It is not well suited for home or office use unless you can isolate it.

SuperMicro SC846E1-R900B Chassis Photo
Rear of the SuperMicro chassis. You can see one power supply disconnected, but the system is actually running. You can also see two network cables running to the system. The one on the left is the connection to the IPMI management interface for remote management. The one on the right is one of the gigabit ports. Those ports can be used for internal SAN communications or external WAN communication.

Power Supply Replacement
Removing the power supply is as simple as pulling the plug, flipping a lever, and pulling out the PSU. The system stays online as long as one power supply remains in the chassis and active.

Power Supply
The power supply appears to be built by Ablecom, which, judging by the logo, could be a subsidiary of SuperMicro. In any event, it appears to be a well-built unit. Only time will tell.

Power Distribution Backplane
This is the power distribution backplane. It allows both PSUs to be active and hot-swappable. If it should ever fail, it is field-replaceable, but the system does have to go offline.

A final thought on the chassis selection: SuperMicro also offers versions of this chassis with 1200W power supplies.  We considered this, but when we looked at the decisions we were making on hard drive selection, we decided 900W would be plenty.  We are building a hybrid storage solution using 7,200 RPM SATA HDDs and ultra-fast Intel SSD caching drives, and we do not need the extra power for those drives.  If our plan was to populate this chassis with nothing but 15,000 RPM SAS HDDs, we would definitely want the 1200W chassis.
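
To put rough numbers on that decision (the per-drive figures below are typical published values, not measurements from this build):

24 drives x ~25-30 W peak spin-up   =  roughly 600-720 W worst case
motherboard, CPU, fans, and SSDs    =  roughly 150-200 W

Staggered spin-up brings the real peak lower still, which is why 900W of redundant capacity leaves comfortable headroom for this drive mix.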

Another consideration is whether you plan to build a highly available system.  If that is your goal, you would want the E2 version of the chassis we selected, as its backplane supports dual SAS controllers.  Since we are using SATA drives, and SATA drives only support a single controller, we decided to go with the single-controller (E1) backplane.

Additional Photos:

Internal Photo
This is the interior of the chassis, looking from the back toward the front. We had already installed the SuperMicro X8ST3-F motherboard, Intel Xeon E5504 processor, Intel heatsink, Intel X25-V SSDs (for the mirrored boot volume), and cabling when this photo was taken.

Internal Photo
This is the interior of the chassis, looking at the memory, air shroud, and internal disk drives. The disks are currently mounted so that the data and power connectors are on the bottom.

Fixed Hard Drives
Another photo of the interior of the chassis looking at the hard drives. 2.5″ hard drives make this installation simple. Some of our initial testing with 3.5″ hard drives left us a little more cramped for space.

Hot-Swap Drive Caddies
The hot-swap drive caddies are somewhat lightweight, but that is likely due to the high density of the drive system. Once you mount a hard drive in them, though, they are sufficiently rigid for anyone’s needs. Just don’t plan on dropping one on the floor and having it save your drive. You can also see how simple it is to change out an SSD. We used the IcyDocks for our SSD locations because they are tool-less: if an SSD were to go bad, we simply pull the drive out, flip the lid open, and drop in a new drive. The whole process would take 30 seconds. Very handy if the need ever arises.

Hot Swap Fan
The hot-swap fans are another nice feature. The fan on the right is partially removed, showing how simple it is to remove and install fans. Being able to simply slide the chassis out, open the cover, and drop in new fans without powering the system down is a must-have feature for a storage system such as this. We will be using this in a production environment where taking a system offline just to change a fan is not acceptable.

Front Panel Power Control
The front panel isn’t complicated, but it provides what we need: power on/off, reset, and indicator lights for power, hard drive activity, LAN1 and LAN2, overheat, and power fail (for a failed power supply).

Sunday, March 21st, 2010 Hardware

19 Comments to Chassis Selection

  • Benji says:

    Do you know if this chassis can be vertically mounted?

  • admin says:

    This chassis is designed to be mounted horizontally in a rack. It is pretty sturdy, so you might be able to get away with standing it on the floor vertically. I would not recommend doing that, though.

  • Benji says:

    It wouldn’t be standing on the floor, but rather mounted vertically in a vertical rack, such as this one:

    http://bhlpower.com/images/miniraqstandard2.jpg

    So the chassis would be fastened by its “ears” rather than being on rails horizontally.

    Do you think it would work with this chassis?

    Thanks

  • admin says:

    I think it is too heavy to mount it like that with all of the weight on the “ears”.


  • Warhammer says:

    Saw the article over at AnandTech. Really an interesting build, as I am presently trying to piece a storage server together. Looks like you got great performance through your parts selections without spending big $$$$ on a controller card.
    Did you try using mechanical drives instead of SSDs to see what the performance hit would be?
    Do you know of any good primers on SAS/SATA backplane technology?
    I have downloaded the SuperMicro manuals for the X8ST3-F motherboard and the 826TQ R800 chassis, but I am getting lost trying to figure out if that would all work together without a controller as in your build.
    Thanks for a most informative site.

    Warhammer

  • admin says:

    Warhammer: We did use mechanical drives for the primary storage. We only used SSDs for the boot volume, ZIL drives, and cache drives. The idea behind using ZFS this way is that we can build a hybrid storage solution that gives excellent performance without the cost of a purely SSD-based storage solution.

    I think your other question regarding a controller is actually asking how we built this solution without needing a hardware RAID card. Was that what you were asking? Anyway, ZFS directly manages the drives; a hardware RAID controller would actually get in the way when using ZFS for SAN- or NAS-based solutions. You can think of ZFS as extremely high-end, software-style RAID.
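
    As a rough sketch of how that looks with ZFS (the pool layout and device names below are placeholders, not our actual configuration):

    # mirrored pairs of 7,200 RPM SATA drives for the data vdevs
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    # SSDs added as a mirrored ZIL (log) and as L2ARC read cache
    zpool add tank log mirror c0t4d0 c0t5d0
    zpool add tank cache c0t6d0

    ZFS handles the redundancy, caching, and drive management itself, which is why a plain HBA is all the hardware that is needed.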

  • Warhammer says:

    I manage desktop support for a school district and administer a Promise array that hangs off a Mac server we have. I don’t get to do too much other “networking” on the job. The performance of your solution is an eye-opener given the price point.
    On the SSDs, I was curious if you had tried using mechanical drives for those to see what the performance hit was.
    I am trying to find out what type of backplane is needed in the chassis to work with the X8ST3-F motherboard without needing another controller, and that will run 12 or so drives.
    I am looking at the 2U 12-drive chassis like the 826TQ. Those are sold with a few different backplanes: no expander, one expander, and two expanders being the differences that are readily apparent.
    From what I have read, it looks like the no-expander version will only run the number of drives that you have ports for; in the case of the X8ST3-F that would be 8 ports.
    A single-expander board would allow access to all drives with the ports on the mainboard and attachment of additional chassis via cabling and some kind of power control board.
    A dual expander would allow single-expander capabilities plus failover for higher availability?
    If that is right, then it looks like the E1 would be the way I would go.

    “I think your other question regarding a controller is actually asking how we built this solution without needing a hardware RAID card. Was that what you were asking?”

  • admin says:

    Warhammer: you are correct. Without a backplane with a built-in expander, you will be limited to the number of ports available on your motherboard or controller. You also have to run an individual SAS/SATA cable for each individual drive. Cabling becomes a huge headache in a situation like that, which is why we recommend going the expander route.

    If you are not looking to have a dual-controller setup with SAS HDDs, then the E1 is the backplane to get. If you’re using SATA drives, there is no benefit to the E2 backplane, as SATA drives do not support dual-controller connections.
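
    If you want to sanity-check that the expander is presenting every bay to the OS, two quick commands on OpenSolaris will show you (nothing specific to this backplane, just a general suggestion):

    format        # lists every disk the OS can see; the count should match the populated bays
    cfgadm -al    # shows the SAS/SATA attachment points and their occupant state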

  • datobin1 says:

    Having expanders on the backplane will allow more drives to be connected using a single SAS port, but it creates a bottleneck. With 24 drives connected via one SAS connection you are limited to 12 Gbit/s, or you could say that six drives are sharing one 3 Gbit/s connection.

    For ultimate performance you would want to use a backplane with six iPass/SAS connections, which allows 3 Gbit/s per drive. This will, however, require a SAS card with six connectors or multiple cards.

    Using six connections eliminates the controller-to-drive bottleneck but creates a new one: the SAS controller itself. Currently I don’t know of a SAS card that can handle 72 Gbit/s of throughput. If anyone has tested one that can, please add a comment.

    The other option is using multiple SAS controller cards, but then you will not be able to have all the disks in one array.

    I recently set up a backup-to-disk solution utilizing a 900-series chassis, an Adaptec RAID 52445 giving full bandwidth to all drives, and 24 WD RE3 2TB disks. In this setup the SAS card became the weak point, limiting me to just a little over 1 GB/s read/write speed. I could use 12 disks or 24 and all the performance tests were the same. I ran some tests with just a six-disk array and saw the performance drop off.

    Researching the controller card, I found that it was my weak point. The card has a fast processor and can calculate RAID parity effectively, but it has bandwidth limitations. I was further able to test this with RAID 0, 5, and 6 comparison tests. One would think that RAID 0 would outperform RAID 5 and 6, but with this card all the tests were the same, showing just how well the card does parity but also showing its bandwidth limit.

    At the end of the day the server can do a little over 1 GB/s read/write over the entire 40TB RAID 6 array. Enough for my needs, but not the full potential of the drives (which should be over 2 GB/s).

  • admin says:

    You are absolutely correct: we are currently limited to 12 Gbit/s of throughput to the SATA drives in our enclosure. In testing, we were never able to see anywhere close to that with random I/O, so I believe at this point it is somewhat moot. If we were to switch to SSD drives I believe it could become an issue, although it would be a problem I would love to have (SAS-to-expander bandwidth being our bottleneck).
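
    For anyone following along, the arithmetic behind that limit (assuming the 3 Gbit/s SAS-1 lanes this backplane uses):

    4 lanes x 3 Gbit/s    = 12 Gbit/s shared uplink over the single SFF-8087 cable
    12 Gbit/s / 24 drives = 0.5 Gbit/s (roughly 60 MB/s) per drive if every drive streams at once

    Random I/O on 7,200 RPM drives rarely approaches that per-drive figure, which is why the limit is largely theoretical for this build.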

  • hlnoiku says:

    Did you install the OS through IPMI, a USB CD-ROM drive, or a network install?

  • admin says:

    We used IPMI to install the OS. We placed the ISO for the OS on one of our file servers and then mounted it using IPMI.

  • We are _extremely_ interested in the Storage Bridge Bay product that SuperMicro manufactures. There are several reasons that we have not deployed SuperMicro’s Storage Bridge Bay for our Nexenta platform. I’m going to run those down quick and dirty, but I think this probably deserves a full blog post.

    1 – We don’t _need_ HA.
    2 – It’s expensive (hardware)
    3 – It’s expensive (software)
    4 – (relatively) limited expansion
    Like I said, quick and dirty. Making a note right now to put up a blog post about those four points.

  • schism says:

    If this system were to blow up (but the drives are still intact) and you didn’t have an exact replica of the hardware, so you had to install OpenSolaris on a different chassis entirely and move all the drives over: how do you get all your disks, volumes, etc. back online?

    I’m just planning for the “worst case” of what could happen and how hard it would be to migrate all your disks to a different system.

  • What I would do to test this is to set it up in VMware (or Xen, or Hyper-V, or whatever) and test the migration of virtual disks from one VM to another.

  • schism says:

    Matt,

    I did this just now to test it. I had a pool of four drives in mirrored pairs. I then installed a fresh copy of OpenSolaris 11 11/11 and moved the four drives over. I then had to run:

    zpool import -f mypool

    After doing so, everything imported just fine. It even imported the NFS settings, so it was sharing my volumes over NFS automatically as well.
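
    A couple of follow-up checks, in case it helps anyone repeating this (the pool name is just the one from above):

    zpool status mypool      # confirm every vdev came back ONLINE after the move
    zfs get sharenfs mypool  # sharenfs is stored as a dataset property inside the pool

    That property traveling with the pool is why the NFS shares came back automatically.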

  • I would assume that iSCSI configs may be lost though? Is that correct?
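
    My guess (assuming COMSTAR on OpenSolaris) is that the target and view configuration lives in the SMF repository rather than in the pool itself, so it would have to be carried over separately. A rough, untested sketch, with placeholder paths and volume names:

    # old host: save the COMSTAR (stmf) service configuration out of SMF
    svccfg export -a stmf > /backup/stmf.cfg
    # new host, after the zpool import: load it back and restart the service
    svccfg import /backup/stmf.cfg
    svcadm restart stmf
    # zvol-backed LUs can also be re-registered individually from the imported pool
    stmfadm import-lu /dev/zvol/rdsk/mypool/somevolume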
