OpenIndiana Benchmarks

After Oracle decided to change the course of OpenSolaris (forum thread), the open source community reacted by forking the code base through a new project called Illumos. The first downloadable ISO from the Illumos project is OpenIndiana.

OpenIndiana is based on OpenSolaris b147. It is important to take a minute and look at build numbers of popular milestones within the OpenSolaris development process. Here are some major ones.
OpenSolaris 2008.11: b101
OpenSolaris 2009.06: b111
OpenSolaris 2010.03: b134
OpenSolaris b147 forks to create OpenIndiana b147

The b134 (2010.03) release was held back and never released as an official OpenSolaris release. If you go to the OpenSolaris site, the most recent official ISO is 2009.06. We have been using b134 in all of our tests anyway, because b134 has measurably better iSCSI performance than 2009.06 (b111). Additionally, the NexentaStor and Nexenta Core Platform distributions we benchmarked were both originally based on OpenSolaris b134.

OpenIndiana is built on OpenSolaris b147, which means it includes a number of bug fixes since b134 and even more since b111 (2009.06). For now, you can think of OpenIndiana as the latest and greatest OpenSolaris code with OpenIndiana logos added to it; it is not yet significantly different from OpenSolaris b147 in any specific technical way.


OpenIndiana b147 includes several bug fixes, and two of them were obvious right away. The first was screen resolution control. With OpenSolaris b134, we could not change the screen resolution. With OpenIndiana b147, we can.

The second was the GUI for managing network connections. In OpenSolaris b134, the GUI tool was not a reliable way to modify the network ports. Sometimes it would work; other times it would rip up the existing network configuration for no apparent reason, forcing us to fall back to configuring the network port from the command line. With OpenIndiana, the GUI tool for managing the network ports seems to be reliable.

Still Lacking

Both OpenSolaris and OpenIndiana could benefit from including code to manage administrator notifications and LED lights. At the present time, the only way to notify an admin about a failed drive is through custom scripting. The FMA framework is there, but there should also be a code module that ties it all together neatly, so the operating system can be used to build SAN and NAS solutions without a bunch of custom coding.
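Until such a module exists, the custom scripting usually amounts to a cron-driven health check along these lines (a sketch only; the mail address and schedule are placeholders, and it assumes a working mail(1) setup):

```shell
#!/bin/sh
# Hypothetical cron job: mail the admin when any pool is not healthy.
# zpool status -x prints "all pools are healthy" when nothing is wrong,
# and a detailed fault report otherwise.
STATUS=`zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
    echo "$STATUS" | mail -s "ZFS pool fault on `hostname`" admin@example.com
fi
```

The FMA framework records the underlying faults (`fmadm faulty` lists them), but wiring that output to an actual notification is still left to the administrator.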

Likewise, both operating systems would benefit from including LED management code that worked with all of the popular SES backplanes. When a drive fails, there is no way to determine quickly which drive failed without custom 3rd party code. With off the shelf SAN/NAS solutions, a red LED lights up on the drive caddy of the failed drive. OpenSolaris and OpenIndiana do not currently have such a feature.
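Until LED management exists, the manual workaround is to match the faulted device name to a physical disk by its serial number, for example:

```shell
# Find which device is FAULTED, then look up its serial number so the
# physical disk can be identified by the label on the drive itself.
# (c0t3d0 is an example device name, not a real one from our pool.)
zpool status -x
iostat -En c0t3d0
```

`iostat -En` prints the vendor, model, and serial number for the device, which you can then match against the drive caddies by hand.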

4k Benchmark Results:

8k Benchmark Results:

16k Benchmark Results:

32k Benchmark Results:


The performance of OpenIndiana b147 is impressive. OpenIndiana is faster than OpenSolaris in some benchmarks and slower in others. If you are excited about OpenSolaris and would prefer a truly open source solution, take a serious look at OpenIndiana.

Monday, October 11th, 2010 Benchmarks

14 Comments to OpenIndiana Benchmarks

  • RoyK says:

    Thanks for this benchmark, but could you post a zpool layout / hardware config too?

  • admin says:

    We have used the same hardware for every benchmark on this website. Details can be found here:

    We use the same zpool for every benchmark as well. It is a RAID10 style pool.
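A RAID10-style pool in ZFS is a stripe of mirrored pairs. A minimal sketch of how such a pool is created (the pool name and device names here are examples, not our actual configuration):

```shell
# Stripe of three mirrored pairs: each "mirror" pair is a vdev,
# and ZFS stripes writes across all of the vdevs.
zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0

# Verify the layout.
zpool status tank
```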

  • Patrick says:


    You mentioned in a previous post that mirrored/striped vdevs gives you the best performance for your workload.

    Did you do any testing of raidz/raidz2 configurations to bring you to this conclusion?

    It would be brilliant if you still had your testing results and could post them.



  • admin says:

    Patrick: We are still planning to post some benchmarks comparing RAID10, RAID5, and RAID50. So far, we have been focusing on RAID10.

  • mrhyd3 says:

    What do you guys think about StormOS? Another derivative of NexentaCore? I assume it's similar, but with all these derivatives coming out, I want something server-centric, not desktop.

  • admin says:

    We have not played with StormOS, yet. StormOS is a desktop distribution based on the Nexenta Core Platform, and our focus is on the server side not the desktop side.

  • minchin says:

    Love this blog. I am also a ZFS NAS fanatic, with two servers myself. Of course the build you have is 10x more advanced, but I am learning a lot from your L2ARC and ARC articles.
    Thanks for taking the time to test the various builds of OpenSolaris. It saves me the time of trying them out myself. Looks like I should give OpenIndiana serious consideration. I was thinking of going over to the dark side of Oracle Solaris 10 9/10 because it supports port multipliers. I have several 8-bay port-multiplier enclosures that I want to use, but am currently accessing them through an eSATA to USB2 converter (yeah, surprisingly, it works very well).
    Anyway, keep up the great work.

  • admin says:

    minchin: Thanks for the kind words. We are glad you enjoy the site.

  • gostan says:

    thanks for the awesome guide NSLH!

    may I know which build do you deploy on your production server?


  • chuch says:

    I've been given the task to benchmark our ZFS system. How did you get IOMETER to run on Solaris? I can't for the life of me get dynamo installed. Got any tips? Any help would be greatly appreciated.

  • rakrav says:

    Thank you for such a fantastic blog.
    Can you please share some insights about how your ZFS Storage is working in Production ? I am also planning a similar kind of setup…

  • 2g33k4u says:

    Was wondering what people think would be a good arrangement for my system.

    7 2TB WD drives (5xGreen, 2xBlacks)
    2 60gb SSD’s
    16gb DDR3
    1155 socket with a CPU
    2 x dual-port Broadcom NICs in an aggregation.

    I am looking for all-around use, with decent speed and capacity. I figure at least one drive will be a hot spare.

    I am open to suggestions. Thank you in advance.
    P.S. I am currently using OpenIndiana 147. I love it and will most likely stay with it.

    I am starting over from scratch due to all the hardware upgrades I just ordered, and figured a clean slate would let me apply what I have learned so far to the new build.

    1TB of this will be used as an iSCSI LUN for my VMserver 5.0.

    if this is wrong place for this post please move me to proper location. I truly love these articles.

  • For drives that large, there really isn't any way that I'd consider going other than RAIDZ2. With 2TB drives, the resilver time is going to be _very_ long. I would look at a 4+2 RAIDZ2 with one hot spare, for an available capacity of ~8TB. I would also use the two 60GB drives as L2ARC, and find a 20GB Intel SLC (311 or 313 series) drive for the SLOG device. The SLOG device won't matter nearly as much using iSCSI with VMware, but if you ever use NFS, you will destroy your performance without a dedicated SLOG device.
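The layout described above could be created along these lines (a sketch only; the pool name and device names are placeholders for whatever your controller enumerates):

```shell
# One 4+2 RAIDZ2 vdev of the seven 2TB drives (one held back as a hot
# spare), both 60GB SSDs as L2ARC, and a dedicated SLC SSD as the SLOG.
zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    spare c0t6d0 \
    cache c1t0d0 c1t1d0 \
    log c2t0d0
```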
