ZFSBuild 2012

It’s been two years since we built our last ZFS-based server, and we decided it was about time to build an updated system.  The goal is to build something that exceeds the functionality of the previous system while costing approximately the same amount.  The original ZFSBuild 2010 system cost US $6765 to build, and for what we got back then, it was a heck of a system.  The new ZFSBuild 2012 system will match the price point of the previous design yet offer measurably better performance.

The new ZFSBuild 2012 system comprises the following:

SuperMicro SC846BE16-R920 chassis – 24 bays, single expander, 6Gbit SAS capable.  Very similar to the ZFSBuild 2010 server, with a little more power and a faster SAS interconnect.

SuperMicro X9SRI-3F-B motherboard – Single-socket Xeon E5 compatible motherboard.  This board supports 256GB of RAM (over 10x the RAM we could support in the old system) and significantly faster, more powerful CPUs.

Intel Xeon E5-1620 – 3.6GHz latest-generation Intel Xeon CPU.  More horsepower for better compression and faster workload processing.  ZFSBuild 2010 was short on CPU, and we found it lacking in later NFS tests.  We won’t make that mistake again.

20x Toshiba MK1001TRKB 1TB SAS 6Gbit HDDs – 1TB SAS drives.  The 1TB SATA drives that we used in the previous build were fine, but SAS drives give much better information about their health and performance, and for an enterprise deployment they are absolutely necessary.  These drives are only $5 more per drive than what we paid for the drives in ZFSBuild 2010.  Obviously, if you’d like to save more money, SATA drives are an option, but we strongly recommend using SAS drives whenever possible.

LSI 9211-8i SAS controller – Moving the SAS duties to a Nexenta HSL-certified SAS controller.  Newer chipset, better performance, and easy replaceability in case of failure.

Intel SSDs all around – We went with a mix of 2x Intel 313 (ZIL), 2x Intel 520 (L2ARC) and 2x Intel 330 (boot – internal cage) SSDs for this build.  We have less ZIL space than the previous build (20GB vs 32GB), but rough math says we shouldn’t ever need more than 10-12GB of ZIL (a quick sketch of that math follows the parts list).  We will have more L2ARC (480GB vs 320GB), and the boot drives are roughly the same.

64GB RAM – Generic Kingston ValueRAM.  The original ZFSBuild was based on 12GB of memory, which two years ago seemed like a lot of RAM for a storage server.  Today we’re going with 64GB right off the bat using 8GB DIMMs.  The motherboard has the capacity to go to 256GB with 32GB DIMMs.  With 64GB of RAM, we’re going to be able to cache a _lot_ of data.  My suggestion is to not go super-overboard on RAM to start with, as you can run into issues as noted here: http://www.zfsbuild.com/2012/03/05/when-is-enough-memory-too-much-part-2/
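
On the ZIL sizing mentioned above: one common rule of thumb is that the log only has to absorb the sync writes that can arrive between transaction group commits, which is bounded by the speed of the incoming link.  Here is a quick sketch of that rough math in Python; the ~5 second transaction group interval and the 20Gbit front-end link are assumptions, not measurements.

    # Back-of-envelope ZIL/SLOG sizing (a rule of thumb, not an official formula).
    def zil_size_gb(link_gbit_per_s, txg_interval_s=5, txgs_in_flight=1):
        """Worst-case data the log device must hold, in GB.

        link_gbit_per_s -- fastest rate sync writes can arrive (front-end link speed)
        txg_interval_s  -- assumed seconds between ZFS transaction group commits
        txgs_in_flight  -- headroom for more than one open transaction group
        """
        bytes_per_s = link_gbit_per_s / 8.0 * 1e9   # Gbit/s -> bytes/s
        return bytes_per_s * txg_interval_s * txgs_in_flight / 1e9

    # A 20Gbit InfiniBand front end and one ~5 second transaction group:
    print(zil_size_gb(20))   # ~12.5 GB, in line with the 10-12GB estimate above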

For the same price as our ZFSBuild 2010 project, the ZFSBuild 2012 project will include more CPU, much more RAM, more cache, better drives, and a better chassis.  It’s amazing what a difference two years makes when building this stuff.

Expect that we’ll evaluate Nexenta Enterprise and OpenIndiana, and revisit FreeNAS’s ZFS implementation.  We probably won’t go back over the Promise units, as we’ve already discussed them and they likely haven’t changed (and we no longer have any sitting idle).

We are planning to re-run the same battery of tests that we used for the original ZFSBuild 2010 benchmarks.  We still have the same test blade server available to reproduce the testing environment.  We also plan to run additional tests using working sets of various sizes.  InfiniBand will be benchmarked in addition to standard gigabit Ethernet this round.

So far, we have received nearly all of the hardware.  We are still waiting on a cable for the rear fans and a few 3.5 to 2.5 drive bay converters for the ZIL and L2ARC SSD drives.  As soon as those items arrive, we will place the ZFSBuild 2012 server in our server room and begin the benchmarking.  We are excited to see how it performs relative to the ZFSBuild 2010 server design.

Here are a couple of pictures we have taken so far on the ZFSBuild 2012 project:

Tuesday, September 18th, 2012 Hardware

61 Comments to ZFSBuild 2012

  • mketech says:

    Very cool! Looking forward to seeing the comparison.

  • JB says:

    Very excellent.. I am looking forward to this. Curious, will you do a test of FreeBSD 9 as well? FreeNAS is still based on 8.2 and an older version of ZFS. It might be worth looking at FreeBSD as well.

    Are you still using an InfiniBand network in this test? What has been your experience with Nexenta’s IB support?

    Will you be testing VMs as well? I have been more recently interested in combining my VM/storage units by putting storage into a virtual server and then serving up the other guests from it via iSCSI. Being within the same system, there should be no latency and ‘like local’ access speeds. Of course you pass through the disks and PCI devices and pin your vCPUs for the ZFS storage VM to ensure performance. Interested? 🙂

  • Pablo says:

    The free version of VMware would cap his memory usage. And I’ve come across some strange behavior with passthrough hardware, like having to re-add the devices after updating the host.

    I would be curious to see OmniOS brought out as the main OS and see how it fares compared to the usual OpenIndiana install..

    We have a few Nexenta boxes here, but I am wondering what Windows Server 2012 brings with regard to storage now: copy-on-write and data deduplication. If it handles caching properly, I would be worried about Microsoft slowing down some of the traction ZFS has gained.

  • JB says:

    @Pablo — I was thinking XenServer or Xen when I suggested it.. no limitations on memory there. If I saw the kind of behaviour you described, I would be very wary about trying this as well.

    Win2012 might be interesting, I agree.

    What quantities of drives are you getting? You mentioned the types but not the quantities. Is each of the SSDs in a pair? Will you fill the chassis with 16x 1TB and 2 as hot spares?

  • Sorry – totally missed the quantity – will update the article.

    20x 1TB SAS drives (18+2HS)
    2x L2ARC
    2x ZIL
    2x Boot (in internal non-hot swap cage)

  • Also – Windows 2012 will be tested in a lab we have access to. The new Storage Spaces looks really interesting, especially if you look at LSI’s HA-DAS initiative.

  • JB says:

    If you were to build this array using only SSDs.. do you think there would be any need at all for a ZIL or L2ARC?

    For the price of a 2tb drive, I can get a 256GB SSD. Yes it is 1/10 the capacity, but if you don’t need 10TB of usable storage, and you want to have more heads to ensure IOPS, going all SSD is becoming compelling.

    Sorry if this comment is a little off topic, but looking at your setup got me thinking. 🙂

  • If you were doing all SSD, I would still recommend a ZIL device. I would assume that if you were doing an all-SSD array, price would be less of a concern, and you could go with a very high-end ZIL device (ZeusRAM) that has IO response times that are nearly non-existent. I would also still recommend a dedicated ZIL device because of the wear that ZIL traffic would otherwise induce on the pool devices.

    L2ARC would be less of a concern, as the latency accessing L2ARC devices is going to be the same as accessing pool devices, so I probably would not segregate those.

  • JB says:

    Price is still a concern. It’s just that if the need for storage space is not that big (like the 10TB in your setup), then SSDs start to look attractive at the sub-$1-per-GB prices they are at now.

    Those ZeusRAMs start at around $2400 for 8GB.. that one drive could buy me over ten 256GB SSDs, for 2.5TB of storage. If you really only needed something like 1.5TB of usable storage, that would be 12 SSDs. If you bought SAS drives for anywhere near the same capacity, you are maxed out at 4x 1TB drives and you are done; the rest of the space is just wasted energy. Performance on 4 SAS drives would be horrible vs. 12 SSDs that, combined, would deliver great IOPS.

    I digress. I am wondering if you are concerned at all that the new Intel SSDs you purchased use SandForce controllers now, instead of the trusted old Marvell ones they have always used.

    Have you ever explored or considered using a PCI SSD for the ZIL? There is one that has gotten many good reviews and the price is right: the OWC Mercury Accelsior. Without the SAS bottleneck, they claim speeds of up to 820MB/s read and 763MB/s write, with IOPS in the 100k range. Ironically, I ask whether you are concerned about SandForce being used in your Intel SSDs, and the exact same controller is used on these. 🙂 Priced at $3 per GB, though, they seem quite attractive. I imagine these would make L2ARC screaming fast.

  • If price was a concern, but you did want to go all SSD, I would still recommend a dedicated SLC ZIL device for wear leveling concerns. If you do not use a dedicated ZIL device all synchronous writes go to the ZIL on the pool devices, effectively reducing their lifespan.

    The only catch on that is that SATA SLC drives do not have the response times to enable high IO. You’ll probably be limited to 2-3k write IOPS by using consumer SLC drives for the dedicated ZIL device. By moving to a ZeusRAM device, you are looking at an order of magnitude more write IOPS (datasheet claims 80,000 4k random).
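
    As a rough sketch of why per-write latency, rather than bandwidth, sets that ceiling when sync writes are effectively serialized (the latency figures below are ballpark assumptions, not measurements):

        # For serialized sync writes, IOPS is roughly queue_depth / latency.
        def sync_write_iops(latency_s, queue_depth=1):
            return queue_depth / latency_s

        print(sync_write_iops(0.4e-3))                  # ~0.4 ms SATA SSD sync write  -> ~2,500 IOPS
        print(sync_write_iops(25e-6))                   # ~25 us DRAM-based log device -> ~40,000 IOPS
        print(sync_write_iops(25e-6, queue_depth=2))    # modest queuing               -> ~80,000 IOPS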

    As for concerns about using the SandForce-controller-based SSDs – not at this time. They were inexpensive enough that they can be replaced if need be. We looked long and hard at the OCZ Talos drives but could not fit them into the budget while hitting the same cost numbers we hit with ZFSBuild 2010.

    If we were going to go with a PCI ZIL device, we would likely look at the DDRDrive X1 (http://ddrdrive.com/). It’s on the Nexenta HSL, and has had pretty good success in the ZFS market (and was specifically built for ZIL duties). The OWC Mercury devices look interesting, but not interesting enough to head down that path yet.

  • JB says:

    I know SLC is preferred, but I think more recent SSD controllers make this point a little less sticky with all their new management capabilities.

    If high IO is what you want on the cheap, and reliable, I think the OCZ Vertex 4 VTX4-25SAT3-64G is the SSD of choice now. I am ordering 4 of these to test. If you go through the specs of how it was designed, it integrates 512MB of DRAM (1GB in higher-capacity models), which I think is what accounts for its great performance gains and its consistency in performance. In effect, it appears the controller uses that DRAM as a sort of ZIL and ARC of its own to manage writes to the flash more efficiently. I have found nothing but awesome reviews everywhere and editors’ choice picks for this unit. The price is right too: $60 for a 64GB, and a 512GB goes for $400.

    Looking at the Nexenta compatibility list, they list the Vertex 3 there, so I believe the Vertex 4 would be just as fine, similar to your purchase of the 313 over the 311 that is actually on the compatibility list.

    Something to consider.. and would be happy to share my results when I get them.

    The only downside to using PCI SSDs for these duties is no hot-swap capability.. storage goes down if you want to replace them, and without HA in place that’s just not good enough if your environment is hosting-related. Great for office storage, though.

    I like the DDRDrive too, but it’s priced right up there with the ZeusRAM. After so many years on the market, I am not sure why a competitor with the ability to add your own RAM capacity has not come along to at least serve the gamer and media markets.. their price point is way out of reach for most enthusiasts, but I think that would change with a competitor overnight. I need to come up with a device that costs $50 to manufacture and sell it for $2500… I’m in the wrong business. 🙂

  • eegee says:

    Can I kindly request that when you try FreeNAS, you also give the latest FreeNAS 8.3.0-BETA* a try?

    “This update brings with it Version 28 of the ZFS filesystem, as well as a number of updates to the drivers and utilities in the base system.”
    http://www.freenas.org/about/news/item/freenas-830-beta1-released
    http://www.freenas.org/about/news/item/freenas-830-beta2-is-available

  • Pablo says:

    I was curious if anyone here knows of any sites/articles where they ran a test of something like adding a 16GB (or any size) DRAM-based ZIL device to a system and measuring the performance gain.

    And then, for the next test, taking out the ZIL device and adding the same capacity of system memory instead.

    Kinda silly, but I’m still wondering what the differences would be like.

    In systems where you can have excessive amounts of memory, and where limiting the ARC might be of some benefit, it would be nice to use that excess memory as a ZIL device itself.

  • While this could be tested, I’m not sure what the point is. The ZIL is there to ensure POSIX compliance for sync writes. It resides either in the pool or on a dedicated SLOG device. Putting it in RAM defeats the purpose of the ZIL (which is to ensure sync writes land on stable storage). If you are going to put it in RAM, which is not stable, you may as well disable it altogether, and you’ll see even less of a performance impact.

    I think (and I’m guilty of not being clear when I write) that you are confusing the ZIL and dedicated SLOG devices. By moving the ZIL off of the pool disks, you increase performance of the pool by not writing a ton of small writes to the disk all the time. By increasing the speed of the SLOG device, you further increase performance. By moving it to RAM, you remove the “persistent, stable” requirement of the ZIL, and as such defeat the purpose of leaving it enabled.
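
    To make the distinction concrete, here is a minimal sketch of what moving the ZIL onto a dedicated mirrored SLOG (and adding an L2ARC device) looks like on an existing pool. The pool name and device IDs below are placeholders; substitute your own and try it on a scratch pool first.

        import subprocess

        POOL = "tank"                      # placeholder pool name
        SLOG_DEVS = ["c4t0d0", "c4t1d0"]   # placeholder SSDs for the mirrored SLOG
        L2ARC_DEV = "c4t2d0"               # placeholder SSD for the cache device

        def run(args):
            print("+", " ".join(args))
            subprocess.run(args, check=True)

        # Sync writes now land on the mirrored log devices instead of the pool disks.
        run(["zpool", "add", POOL, "log", "mirror"] + SLOG_DEVS)
        # Reads that fall out of ARC can be served from the cache (L2ARC) device.
        run(["zpool", "add", POOL, "cache", L2ARC_DEV])
        # The pool should now show separate 'logs' and 'cache' sections.
        run(["zpool", "status", POOL])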

  • As for FreeNAS, it’s also on our list of things to test. We hope that the newer versions fare better than they did in the original tests.

  • rk says:

    Matt, you mention “SAS drives give much better information about their health and performance”. Could you elaborate on specific features you’re talking about here? Perhaps share some screenshots?

    No question that monitoring is critical, and not just in an enterprise environment! However SAS drives are substantially more expensive than SATA as you scale up your storage, and ZFS is supposed to “love cheap disks”. Some of the additional features you pay for with SAS, e.g. bit error rates that are order(s) of magnitude lower (due to “fat sectors” and the like) are completely wasted with ZFS (which implements its own additional parity).

    By way of example, on Windows I’ve been using Hard Disk Sentinel for years to monitor extensive SMART stats on SATA disks, and it’s proven quite adept at predicting failure (mainly by watching the Reallocated Sector Count) and has even helped to troubleshoot intermittent issues (e.g. large Command Timeout counts which turned out to be due to a faulty PSU). Provided you have compatible hardware (and its compatibility is getting fairly extensive these days) it can actually see individual SATA disks behind an HBA / array controller and report every single SMART stat available on each drive plugged in. Here are some screenshots to give you an idea of what it looks like:

    http://4.bp.blogspot.com/_gp2sybtGu-0/TCf8OL1uR9I/AAAAAAAAAPg/cNfnApbYJ00/s1600/HDD+sentinel.PNG

    http://images.snapfiles.com/screenfiles/hdsentinel2.gif

    Smartmontools does something similar for a wider range of OSes, although it takes some more work to make the results meaningful (and it isn’t always as good at “seeing through” a RAID controller / HBA).

    I couldn’t imagine managing any large collection of disks “blindly” without such tooling. However I’ve found that provided you can get access to detailed SMART information, that tends to be more than sufficient to catch misbehaving drives before they affect the rest of the storage system. I’m currently evaluating NexentaStor, and such a level of transparency into disk health and performance stats is a critical item on my shopping list. Have even looked into patching additional SMART functionality into the community edition.

    I’d like to hear more about specific advantages SAS gives you in Nexenta, out of the box.

  • Check out this link – it gives a good rundown of commands in the SCSI command set that aren’t implemented at all in the ATA command set. ftp://ftp.t10.org/t10/document.04/04-136r0.pdf

    Whether or not this is a huge functional difference, I don’t know. I’m going to trust my gut and go with SAS all the way. Nexenta support recommends SAS, and only has a handful of SATA drives on the HSL at all. That in and of itself has me looking at SAS for everything.

    The other benefit of SAS is that you get multipathing. With two paths to every drive, implementing HA becomes a lot easier, since you have two distinct paths without having to add a $40 interposer board to each disk drive.
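
    If you want to pull that health data yourself, smartmontools is the usual starting point on most platforms, SAS or SATA. A minimal sketch (the device path is a placeholder, and depending on your OS and HBA you may need an explicit -d device type):

        import subprocess

        DEVICE = "/dev/sda"   # placeholder; on illumos-based systems this will be a /dev/rdsk/... path

        # List the devices smartctl can see, then dump everything it knows about one of them.
        subprocess.run(["smartctl", "--scan"], check=True)
        subprocess.run(["smartctl", "-a", DEVICE], check=False)   # non-zero exit bits can just be warnings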

  • schism says:

    Why not the BPN-SAS-846A backplane which has 6 SAS connectors (4 drives per connector)?

  • del1798 says:

    Matt – ever try one of these for a home rig? http://www.amazon.com/Gigabyte-GC-RAMDISK-i-RAM-Hard-Drive/dp/B000EPM9NC

    SATA, and 1.5Gbps at that, but latency is key for SLOG. I have a friend who uses this with Nexenta community edition and is very happy with it.

  • rk says:

    Out of curiosity, will you be doing any testing/review of Napp-It? Or do you consider that base covered by the OpenIndiana testing you’ll perform?

  • del1798 – have never tried one of those, but I would assume that it would perform well. The fact that it has a battery backup is key, but I would be concerned about how long it lasted. As long as the battery backup was good for 24 hours I would not have terrible concerns about it. I have no idea however how Nexenta would see it, and whether it’d behave properly, have proper drivers, or if you could get any type of support on it.

  • RK – We’ve discussed a full-fledged review of Napp-It for a while. We played with it quite a long time ago and the interface was not terribly friendly, but performance was comparable to Nexenta. It still wasn’t as fast as OpenSolaris (at the time). If the results are similar to OI, we will not break them out on the graphs, but if they are significantly different, we will leave them as separate data points.

  • If I was doing something that had very high throughput requirements, and I had drives that could saturate a 4x wide SAS bus, then I would consider it. If your workload is even remotely random, there is no reason to use the 6x backplane connectors. The only drives that can really drive that kind of throughput consistently are SSD drives doing purely sequential reads. I don’t know of many workloads that are purely sequential reads. If you are concerned about flooding a 4x wide 6Gbit connection (24Gbit total, so you’d need at least 3x 10Gbit uplinks to get rid of all of that data) then the BPN-SAS-846A is a better solution. Personally, I don’t see the need for it.
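
    For reference, the arithmetic on the wide-port bandwidth (raw line rates, ignoring SAS and Ethernet protocol overhead):

        SAS_LANE_GBIT = 6
        LANES = 4
        connector_gbit = SAS_LANE_GBIT * LANES               # 24 Gbit/s per 4-lane wide port

        UPLINK_GBIT = 10
        uplinks_needed = -(-connector_gbit // UPLINK_GBIT)   # ceiling division

        print(connector_gbit, "Gbit/s per wide port")        # 24
        print(uplinks_needed, "x 10GbE uplinks to drain it") # 3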

  • schism says:

    How do you feel about OpenIndiana now that the project leader stepped down? I’m still trying to decide what platform to go with for my ZFS based NAS/SAN.

    Nexenta and Solaris are expensive. FreeBSD seems like an option, but I question its performance.

  • I had not heard that the OI project lead had stepped down. As for the performance of ZFS on FreeBSD, we hope to have some info for you on that in the near future.

  • wroshon says:

    A non-volatile RAM disk along the lines of the one del1798 mentions could be a better choice than an SSD for the ZIL, which is write-heavy. I discovered one that is designed to dump to CompactFlash on power loss, which takes care of the worry about long-term power outages. The price is much higher than mirrored SSDs, so I’m not sure if it’s practical. Interesting, though.

    http://www.amazon.com/Acard-ANS-9010BA-5-25-inch-Dynamic/dp/B007HNH93O/ref=pd_sim_sbs_e_4

  • Johannesa says:

    I would love to see a native ZFS on Linux benchmark compared to the illumos OSes. TY

  • barbz127 says:

    Is there any chance you could compare NAS4Free, which is the community fork of FreeNAS? It’s on FreeBSD 9.1-RELEASE.
    I would like to know if there are any benefits to choosing the community version over the commercial one.

    Thanks
    Paul

  • cnagele says:

    Hi Matt,

    What about including OmniOS in the test results? OpenIndiana is heading in a bunch of directions, but OmniOS is pretty minimal, which makes it nice for a server.

    Chris

  • We’ll look into it, but right now the list of OSes that we want to test is getting a little long. We’d like to have this system in production in the next month, and that will probably preclude us from testing _every_ possible OS out there.

  • cnagele says:

    Makes sense. OI should be close enough anyway. Looking forward to it!

  • lifeismycollege says:

    I first want to thank you for all the info that you guys have provided here. It has been very helpful.

    I do have one question related to the L2ARC SSDs that you used for the 2010 build. I had considered the Intel 320 drives as well, but opted for a couple of Intel 710 eMLC drives, as the drive bays housing these two drives are not hot-swappable.

    Can you tell me if either of those two 320 drives you striped for L2ARC ever died? I am wondering if I am being a bit paranoid. 🙂

    My build consists of only 8 x 1.5TB drives so I know I am being rather aggressive.

    Thank you…

  • admin says:

    lifeismycollege: We have yet to see an L2ARC drive fail.

  • edattoli says:

    I’ve two questions:

    1. Are you connecting only one LSI HBA port to the backplane? Is that enough for the 24 disks?

    2. In the motherboard/chassis picture I can see the LSI HBA and one more card. What is that PCI card?

  • 1 – We are only connecting to one LSI HBA port. It’s 6Gbit/sec per lane. That’s a total of 24Gbit/sec (4 lanes). We will bottleneck the network before we bottleneck the disks.

    2 – The second card is the 20Gbit Infiniband card. This is the special sauce for this setup, and has allowed us to see benchmark results that we could have never imagined.

  • edattoli says:

    TKS MATT !!
    One more thing: at the end of the “ZFSBuild 2012” post it says “We are still waiting on a cable for the rear fans and a few 3.5 to 2.5 drive bay converters for the ZIL and L2ARC SSD drives”.

    Then, I’ve some questions:

    1. Can you let me know the part number or details for the rear fan cable?
    2. If you can buy custom bays like the SuperMicro MCP-220-00043-0N, why are you using drive bay converters (I think you are using Icy Docks)?

    Best regards.-

  • jcdmacleod says:

    Matt,

    I am about to set off on a similar project, Nexenta based. I have a couple of questions… If this were greenfield for you, would you still go InfiniBand, or 10G, considering 10G is much more affordable now?

    Secondly, we are looking at a 2.5″ version, and I am stuck on head + JBOD vs a single chassis. I suppose the benefit of head + JBOD would be the option of HA, however that gets expensive. How stable have you found a single chassis to be?

    I’m toying with a couple of single chassis deployments vs a larger HA deployment.

    Thanks,

    John

  • We have found ZFSBuild 2010 to be quite stable over the last two years. Purchasing good hardware is a must for these builds, though, with special attention paid to high-quality power supplies. SuperMicro puts some great PSUs in their equipment, and we’ve been very fortunate that we haven’t had any issues. If you have any doubts about hardware selection, go to a vendor like RackTop Solutions or Pogo Linux and they’ll get you set up with something solid.

    If you _do_ have the budget to go HA, definitely do it. The knowledge that an entire head can go offline and you can still be in production makes for a lot better sleep at night.

    If you’re going to go with standalone systems, distribute the load across them so that if one system goes down, you don’t take down your entire infrastructure.

    As far as the 10Gbit vs InfiniBand conversation, yes, 10Gbit infrastructure has come down in price, and it’s probably competitive (on the low end) with the prices that we paid for our InfiniBand setup. That being said, InfiniBand prices have come down too. 40Gbit InfiniBand hardware can be had for a pretty low price today. QDR switches can be had for under $3000, and adapter cards for $200-$300. With the advantages of RDMA protocols (SRP, iSER, etc), running InfiniBand has some serious advantages from a technology standpoint and a price standpoint.

    In summation, if we were starting over today and were looking at the hurdle of learning InfiniBand (there’s a steep learning curve) or deploying ethernet, it’d be a pretty simple decision to just deploy ethernet. There is a _lot_ to learn about InfiniBand that goes well beyond the operation of ethernet. If you don’t mind paying a little more, and don’t need the additional performance that InfiniBand offers, there is absolutely nothing wrong with using 10Gbit ethernet for the link-layer. It’s much easier to learn, much easier to troubleshoot, and not nearly as ‘niche’ as Infiniband.

  • Brian Matsik says:

    We’re following many of your hardware recommendations to build a new storage server, and I am wondering why you went with the E16 instead of the E26 chassis. We’re going through a redundancy discussion now, and I am wondering just how much of a SPOF we will have going with the E16 over the E26. We are looking to use some SATA drives to provide a large amount of space for file servers and some web content, so the E16 makes more sense. Are you worried about a backplane failure in this configuration? I’m afraid we might be overly cautious in our build and may be limiting ourselves in drive selection.

  • We went with the E16 because we are not dual-porting this system. If we were going to use dual controllers and multipathing to the drives, we would have gone with the E26. If you are just going to use SATA drives, the E26 is not going to give you any advantage, as SATA drives are not dual-ported and cannot be multipathed without an interposer. If you want to use the E26 backplane, you will want to use SAS or NL-SAS drives only.

  • sauce214 says:

    Maybe I missed it, but which InfiniBand card are you using for this build?

  • We ordered a Mellanox MHRH29-XTC. It may not have been mentioned previously.

  • nOon29 says:

    @JB: on why you still need a ZIL even with all SSDs.

    The reasons are quite simple: SSDs are great, but they have one really big issue.
    They can’t do two things at the same time (they can’t read and write simultaneously, for example), and while they have really good access times for reads and writes (less than 1ms), they really suffer when they have to erase data (~10ms).
    So when you build an all-SSD box and expect a lot of writes, you have to plan on always keeping x% of the array free (beyond that, write performance drops off; this is the “write cliff”).

    Another point is that without a separate ZIL, ZFS will create one on the pool, so you will get a lot of data fragmentation. If you have Oracle databases with a heavy read load, for example, that can be really troublesome.
    That’s why a separate ZIL is a must-have today.
    Plus, if you want to build a scale-up NAS, the ZIL must be placed in the shelf (not on the controller), so PCI SSDs don’t fit in those shelves.
    Right now, about the only ZIL worth having is the ZeusRAM.

  • nOon29 says:

    @Matt
    I agree with you; for me, InfiniBand is the best technology today. But you’re missing something: most infrastructure today is virtualized, and most of it with VMware.
    And in vSphere 5, VMware just doesn’t support the SRP protocol.

  • ptman says:

    I’m also very interested in ZFS on Linux. Could you consider testing it or giving your reasons not to test it?

  • ZFS on Linux is simply not on our radar at this time as a viable solution. The OpenSolaris/Illumos/FreeBSD implementations are arguably better across the board, and we are putting our time towards reviewing those at this time.

  • I have heard a bit of talk about ESXi 5.x getting SRP support in the near future. I can only assume that this is helped along by the push for more RDMA in ESXi (http://cto.vmware.com/summer-of-vrdma/). I would expect that sometime in the future we’ll start seeing SRP introduced back into ESXi. Our environment, however, is largely Microsoft Hyper-V based, and the IB drivers for Hyper-V do include SRP support, so it is a moot point for us.

  • tim.averill@averillconsulting.com says:

    So I am curious why you are using a storage server rather than a separate head and JBOD as Nexenta recommends. We are a new Nexenta partner and wondered if you have a build sheet you can share?

  • The “build sheet” is pretty much contained in the above post.

    As for the reason that we’ve gone with the single system – it just doesn’t seem to make sense to go head/JBOD if we are never going to do HA with this system, and we’re limited to 18TB of raw space w/ community edition licensing. If we were going to do HA or we were going to grow the system with enterprise licensing, we would absolutely go head/JBOD.

  • tim.averill@averillconsulting.com says:

    We are building out a datacenter with a similar purpose to yours: we are doing cloud servers, virtual desktop hosting, and cloud storage and backup. NOT WEBSITES! 😉 We were running up against high storage costs with EMC etc. and struggling to maintain any kind of margins and competitiveness because of storage. This seems like the solution.

    I have pinged you on Skype and have a ton of questions; if you would add me, that would be great.
