Motherboard, CPU, Heatsink, and RAM selection

Motherboard Selection – SuperMicro X8ST3-F

SuperMicro X8ST3-F
Supermicro Packaging

SuperMicro X8ST3-F
Motherboard Top Photo

We are planning to deploy this server with OpenSolaris 2009.06, so we had to be very careful about our component selection. OpenSolaris does not support every piece of hardware sitting on your shelf; several of the servers we tested would not boot into OpenSolaris at all. Granted, some of these were older systems with somewhat odd configurations. In any event, components had to be chosen carefully to make sure that OpenSolaris would install and work properly.

In the spirit of staying with one vendor, we decided to start looking with SuperMicro. Having one point of contact for all of the major components in a system sounded like a great idea.

Our requirements started with support for the latest Xeon Nehalem architecture, which is very efficient and boasts great performance even at modest clock speeds. We do not anticipate unusually high loads with this system, though, as we will not be doing any type of RAID that requires parity calculations; all of our RAID volumes will be mirrored VDEVs. Since we will not need large amounts of CPU time, we decided that the system should be single-processor based.

Single CPU Socket
Single CPU Socket for LGA 1366 Processor
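For illustration, here is roughly what that mirrored, striped layout looks like from the command line (a sketch only; the pool name and c#t#d# device names below are hypothetical placeholders, and real device names come from the `format` utility on the target system):

```shell
# Create a pool striped across three 2-way mirrors (mirrored, striped VDEVs).
# Device names are placeholders; substitute the ones reported by `format`.
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0

# Confirm the vdev layout and pool health.
zpool status tank
```

With this layout, ZFS stripes writes across the mirrors, so there is no parity calculation at all, which is why CPU demands stay modest.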

Next on the list is RAM sizing. Taking into consideration how the ARC cache works in ZFS, we wanted our system board to support a reasonable amount of RAM. The single-processor boards we looked at all support a minimum of 24GB of RAM. That is far ahead of most entry-level RAID subsystems, most of which ship with 512MB-2GB of cache (our 16-drive Promise RAID boxes have 512MB, upgradeable to a maximum of 2GB).

Ram Slots
6 RAM slots supporting a max of 24GB of DDR3 RAM.
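As a point of reference, the ARC's current size can be watched at runtime, and it can be capped if it ever needs reining in (a sketch; the 20GB cap shown is an arbitrary example value, not a recommendation):

```shell
# Report the current ZFS ARC size, in bytes, on OpenSolaris.
kstat -p zfs:0:arcstats:size

# To cap the ARC at e.g. 20GB, add this line to /etc/system and reboot:
#   set zfs:zfs_arc_max = 21474836480
```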

For expansion, we required a minimum of two PCI-E x8 slots: one for Infiniband support and one for additional SAS HBA cards, should we need to expand to more disk drives than the system board supports. We found plenty of system boards that had too few slots or the wrong mix, but none that had just the right number while supporting all of our other features, until we came across the X8ST3-F. The X8ST3-F has three x8 PCI-E slots (one in a physical x16 slot), one x4 PCI-E slot (in a physical x8 slot), and two 32-bit PCI slots. We believe this should more than adequately handle anything we need to put into this system.

Expansion Slots

PCI Express and PCI slots for Expansion

We also need dual gigabit Ethernet. This allows us to maintain one connection to the outside world, plus one connection into our current iSCSI infrastructure. We have a significant iSCSI setup deployed, and we will need to migrate that data from the old iSCSI SAN to the new system. We also have some servers without Infiniband capability that will need to continue connecting to the storage array via iSCSI. As such, we need a minimum of two gigabit ports. We could have used an add-on card, but we prefer integrated NICs to keep case clutter down. You can see the dual gigabit ports to the right of the video connection.

Rear panel connections on Supermicro X8ST3-F

Lastly, we required remote KVM capabilities. This is one of the most important factors in our system. Supermicro provides excellent remote KVM capabilities via their IPMI interface: we are able to monitor system temperatures, power cycle the system, redirect CD/DVD drives for OS installation, and connect via KVM over IP. This allows us to manage the system from anywhere in the world without having to send someone into the datacenter for any reason short of a hardware failure. There is nothing worse than waking up to a 3AM page and having to drive down to a datacenter just to press “enter” or something similar because a system is hung. It also allows us to debug issues remotely with a vendor without having to stand in our datacenter next to racks full of screaming chassis fans. You can see the KVM connection in the previous photo on the left-hand side, next to the PS/2 mouse connection.
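The day-to-day remote-management tasks described above map onto a handful of ipmitool invocations (a sketch; the BMC address and ADMIN credentials below are placeholders for your own):

```shell
# Hypothetical BMC address and credentials; substitute your own.
BMC=192.168.1.50

# List sensor readings: temperatures, fan speeds, voltages.
ipmitool -I lanplus -H "$BMC" -U ADMIN -P ADMIN sdr list

# Power-cycle a hung box without a trip to the datacenter.
ipmitool -I lanplus -H "$BMC" -U ADMIN -P ADMIN chassis power cycle
```

The web interface on the IPMI controller covers the same ground (plus the KVM-over-IP console and virtual media), but the CLI is handy for scripting and monitoring.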

Our search (and phone calls to SuperMicro) led us to the SuperMicro X8ST3-F. It supported all of our requirements, plus it had an integrated SAS controller. The integrated controller is LSI 1068e based, and a definite bonus, as it allowed us to skip a SAS HBA card initially. It is capable of delivering 3Gbit/sec per connector; with 8 SAS ports onboard, using 4 for internal drives and 4 for external enclosures, we have a combined bandwidth of 24Gbit/sec from a single SAS controller. The 1068e is good for up to 144 drives in I/T (Initiator/Target) mode, or 8 drives in RAID mode. Since we will not be using the onboard RAID, and will instead let ZFS control our replication, we can use Initiator/Target mode. This little bonus turned out to be a big deal, allowing us to run nearly 5 additional chassis without adding a controller! The LSI 1068e is also rated to handle 144,000 IOPS, and if we manage to put 144 drives behind this controller, we could very well need that many. One caveat: to change from SW/RAID mode to I/T mode, you have to move a jumper on the motherboard, as the default is SW/RAID mode. The jumper sits between the two banks of 4 SAS ports; simply remove it, and the controller switches to I/T mode.

Initiator/Target jumper
Jumper to switch from RAID to I/T mode and 8 SAS ports.
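The back-of-envelope math behind those numbers is easy to sanity-check (the port count, per-port rate, and IOPS figures are the board and LSI 1068e specs quoted above):

```shell
# 8 onboard SAS ports at 3Gbit/sec each.
ports=8
gbit_per_port=3
echo "aggregate bandwidth: $((ports * gbit_per_port)) Gbit/sec"   # 24

# 144,000 controller IOPS spread across a maximum of 144 drives.
controller_iops=144000
max_drives=144
echo "IOPS budget per drive: $((controller_iops / max_drives))"   # 1000
```

1,000 IOPS per drive is far more than any single 7,200rpm disk can deliver, so the controller should never be the bottleneck at full fan-out.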

After speaking with SuperMicro and searching various forums, we found that several people had successfully used the X8ST3-F with OpenSolaris 2009.06. With that out of the way, we ordered the motherboard.

Processor Selection – Intel Xeon 5504

Intel Xeon 5504 Processor

With the motherboard selection made, we could decide what processor to put in this system. We initially looked at the Xeon E5520, as that is what we use in our BladeCenter. The E5520 is a great processor for our virtualization environment thanks to its extra cache and Hyper-Threading, which lets it work on 8 threads at once. Since our initial design plans dictated mirrored, striped VDEVs with no parity, we decided we would not need that much processing power. In keeping with that idea, we selected a Xeon 5504, a 2.0GHz processor with 4 cores. It should easily handle the load presented to it; if it does not, the system can be upgraded to a Xeon E5520, or even a 3.2GHz W5580, if the workload warrants it. Testing will be done to make sure the system can handle the IO load we need to handle.

Cooling Selection – Intel BXSTS100A Active Heatsink with fan

Intel Retail Active Heatsink

We selected the Intel stock heatsink for this build. It is rated for an 80 watt TDP, which exactly matches our processor's rating.

Memory Selection – Kingston ValueRAM 1333MHz ECC Unbuffered DDR3

12GB Kingston ValueRAM

We decided to initially populate this system with 12GB of RAM, which helps keep costs in check. We are unsure whether this will be enough, but our system board accepts up to 24GB, so if need be we can remove the 12GB of RAM and upgrade to 24GB when necessary. We selected Kingston ValueRAM ECC Unbuffered DDR3 for this project; we have had great luck with Kingston ValueRAM in the past, and so far it has not let us down. We chose 1333MHz RAM so that if we upgrade our processor speed in the future, we are not bottlenecked by main memory speed.

Thursday, April 8th, 2010 Hardware

3 Comments to Motherboard, CPU, Heatsink, and RAM selection

  • Flash1204 says:

    What Chassis was used for this build? It looks like the
    SC825TQ-R720LPB

  • admin says:

    We talked about chassis selection for the ZFSBuild 2010 project on the following URL:
    http://www.zfsbuild.com/2010/03/21/chassis-selection/

  • tsmooth3 says:

    “…We have 8 SAS ports onboard, so using 4 for internal drives and 4 for external enclosures…”

    What type of cable is used to connect the “4 for external enclosures” onboard SAS ports to the chassis used in this build? Would it be one of the sff-8087 to 4 SAS/SATA breakout cables in reverse? Would that connect to the HBA port of the supermicro chassis backplane? Would it be a different cable for connecting those onboard ports to an external enclosure?
