SuperMicro 6036ST-6LR

We’ve been asked about the SuperMicro 6036ST-6LR, affectionately known as the SuperMicro Storage Bridge Bay, and why we did not use that platform.  I threw out a few quick reasons last night, but wanted to elaborate on those points a little and add another one.  First and foremost, when we started our build, the Storage Bridge Bay wasn’t available.  If it had been, we probably would have gotten it just for the new/neat factor.  Now, on to the other reasons I posted last night.

1 – We don’t _need_ HA.  This sounds a bit silly, but for all intents and purposes, we don’t _need_ HA on a day-to-day basis.  Our hardware has been stable enough to allow us to get by without having a full HA system for storage.  Yes, for maintenance windows, it would be nice to be able to fail over to another head, do maintenance, then fail back.  We are a service provider, though, and our SLAs allow us to open maintenance windows.  Everyone has them, everyone needs them, and sometimes downtime is an unavoidable consequence.  Our customers are well aware of this and tolerate these maintenance windows quite well.  While we’d love to have HA available, it’s not a requirement in our environment at this time.

2 – It’s expensive (Hardware) – For about the same price, you can get two 2U SuperMicro systems and cable them up to an external JBOD.  If you don’t need or want HA, your costs drop dramatically, since a single head node plus the JBOD will do.

3 – It’s expensive (Software) – To get into HA with Nexenta, you have to run at least Gold-level licensing, plus the HA Cluster plugin.  For example, if you’ve only got 8TB of raw storage, the difference is between $1,725 for an 8TB Silver license and $10,480 for two 8TB Gold licenses plus the HA Cluster plugin.  Obviously going with more storage makes the cost differential smaller, but there is definitely a premium associated with going HA.  Our budget simply doesn’t allow us to spend that much money on the storage platform.  Our original build clocked in well under $10,000 ($6,700 to be exact, checking the records).  Our next build has a budget of under $20,000.  Spending half of the budget on HA software just breaks the bank.

4 – (Relatively) limited expansion – Our new build will likely be focused on a dedicated 2U server as a head node.  That node has multiple expansion slots, integrated 10GbE, and support for up to 288GB of RAM (576GB if 32GB DIMMs get certified).  It’s a much beefier system, allowing for much more power in the head for compression and caching.  Not that the Storage Bridge Bay doesn’t have expansion, but it’s nowhere near as expandable as a dedicated 2U head node.

Now, after all of this, don’t go throwing in the towel on building an HA system, or even building one with the Storage Bridge Bay.  For many use cases, it’s the perfect solution.  If you don’t need a ton of storage but still need high availability and are constrained on space, it fits the bill nicely: in 3U you can have a few dozen TB of storage, plus read and write caches.  It’d also be a great fit for a VDI deployment requiring HA – slap a bunch of SSDs in it, and it’s rocking.  After using the Nexenta HA plugin, I can say that it’s definitely a great feature to have, and if you’ve got a requirement for HA I’d give it a look.

Tuesday, November 22nd, 2011 Hardware

7 Comments to SuperMicro 6036ST-6LR

  • d3mon187 says:

    I’d really like to hear what you are doing for your new build. I’ve just recently been researching building a Solaris ZFS SAN box, and it’s a bit overwhelming at times for a non-Linux guru like myself. Really, I’m just not that impressed with the big SANs like Compellent and EMC, and the prices are pretty crazy. We have an EMC AX4-5i right now that I’m really not impressed with, and I’m sick of sinking money into it. If we go the ZFS route, we’d also at least be able to use the old AX4 as storage for an off-site backup with a Simple HA license. Reading about Nexenta’s Simple HA, it seems like a much cheaper alternative to their active/active HA, and I figure it’d be entirely acceptable if our main unit failed and then we had to use the off-site backup for a little while. On the hardware front, I’m having some trouble though. I want a system that can support 24-36 drives in the main unit with options for expansion, and I really want it to support 6Gb/s. With the money we would be saving from not going with Compellent or EMC, I think we could get a pretty crazy fast system. Any recommendations?

  • There are a few really good options out there. For around $15k you can put together a pretty solid build based on a SuperMicro SC847 chassis. Check this one out here – http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400U.cfm

    My main suggestion would be to look over the HSL (hardware supported list) and follow it pretty closely. Pay extra attention to drives, SAS cards, enclosures, motherboards, and network cards. http://www.nexenta.com/corp/images/stories/pdfs/hardware-supported.pdf

    So far we’ve had really good luck with SuperMicro equipment, and will be following that route. Keep an eye on this site, and we’ll be posting a new build that would probably be a great starter for your build.

  • StamfordRob says:

    I have been using SuperMicro units for some time now with the Open-E OS, and now I am looking to test the Nexenta install. I have not seen anything on your controller cards. Are you using the base ports on these SuperMicro units and not a controller? I ask because I have a 52445 card in my 24U SM server and I think that it has truly been a bottleneck, and most recently not as dependable as I thought Adaptec was. Do you have the full specs you used listed anywhere here? I may be missing it. I have seen your drive selections and talk about expansion boards (which I never used before). From the way all this reads, with ZFS or Nexenta you trust a software RAID over hardware RAID, is that true? I am a bit old school and we never trusted that before. Can someone speak more to that trust point too? Thanks for all the great knowledge on this blog.

  • On the ZFSBuild 2010 build we used the on-board SAS controllers. They were LSI 1068-based controllers and were sufficient for what we planned with that system. If you look at the “Hardware” tags, those posts go through, step by step, all of the hardware that we selected for our Nexenta build.

    As far as trusting ZFS over hardware RAID, that’s something that is becoming more commonplace with all vendors (EMC, NetApp, and the like). The thing that sold us on ZFS was the end-to-end checksumming. For every block of data that is written, a checksum is calculated and stored. Up to that point it’s similar to a hardware RAID controller. Where ZFS differs is that every time that block is read back out, the checksum is recalculated and compared, and if it doesn’t match, the data is reconstructed from the pool’s redundancy.

    You can also schedule background checks (scrubs) of that data to verify integrity. These run at a low priority, and in my experience have had a low impact on production performance.
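
    As a rough sketch of what that looks like from the command line (the pool name "tank" below is just a placeholder, not one of our actual pools):

    # start a background integrity check (scrub) of the pool
    zpool scrub tank

    # watch scrub progress and the per-device checksum (CKSUM) error counters
    zpool status -v tank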

    If you want more convincing, check out CERN’s research on silent data corruption, and their opinion of ZFS. It’ll wow you.

    Based on all of that, ZFS is now my filesystem of choice. Nexenta has wrapped that all into a package that is not only easy to administer, but performs great, enhances data integrity, and costs less than the enterprise competition.

  • nerdmagic says:

    FWIW I have also resisted HA because of the cost of Nexenta or Solaris Cluster. I previously implemented HA when Solaris Cluster was available free for OpenSolaris 2009.06.

    However it turns out you can buy the HA software Nexenta uses, without buying Nexenta. It’s called RSF-1 and is available from a small British firm at http://high-availability.com.

    I got a 30-day demo license and tested it on a Solaris 11 ZFS cluster, which they assured me would work even though they only list Solaris 10 on the site.

    So far, for me, it just works. It’s simple, robust, and portable (supported on every major Unix/Linux platform). It looks like the kind of HA system all of us would like to have time to write for ourselves, except it’s had a decade to mature.

    I expect they would even support it on OpenIndiana (after all it already runs on Illumos via Nexenta, as well as Solaris 10 and 11).

    I was quoted $4800 for a two-node storage cluster. For us it looks much better and much cheaper than Sun Cluster, and we have already planned to buy at least two licenses.

  • schism says:

    So if one chooses to go with a single head to avoid HA costs, how hard is it to get a spare, identical head server up and going in the event the primary fails? I’m assuming there are pool configs and such that would need to be set up on the spare?

    I’m looking at building an OpenSolaris/ZFS SAN/NAS and I don’t necessarily NEED HA, but I definitely would want a way to bring up a different head within 30-45 minutes.

  • Pool configs are not very difficult – simply bringing the system online will allow you to import the entire pool by running “zpool import” or doing it from the Nexenta GUI. I have not specifically tested bringing a new system online after a hardware failure in that circumstance.
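
    As a rough sketch of the recovery steps on a fresh head (the pool name "tank" is just a placeholder):

    # list pools visible on the attached drives/JBOD that are not yet imported
    zpool import

    # import the pool on the new head; -f forces the import if the failed head never exported it
    zpool import -f tank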

    I would say that you would want to run mirrored boot volumes so that you could take the entire config with you after a node failure.
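
    As a sketch, attaching a second boot disk to the root pool is a one-liner; the device names below are just examples, and on OpenSolaris/Nexenta-era systems you also need to put the boot loader on the new disk:

    # attach a second disk to the existing root pool, turning it into a mirror (device names are examples)
    zpool attach rpool c1t0d0s0 c1t1d0s0

    # install GRUB on the new mirror half so the system can boot from it
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0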

    Also, the “Simple HA” plugin is significantly less expensive than the standard HA plugin, and would give you much better resiliency in a failure scenario than a single head alone.
