Storage Server 2015 was a roller coaster of a build. Plenty of things tried to derail the upgrade along the way, but it all worked out in the end!

The idea of updating the storage server had been floating around in my head since around September 2014. This was around the time I switched to using ZFS and FreeNAS on the server I built in 2013. The old server specs are as follows:

  • Intel Xeon L5520 (quad core with HT at 2.26GHz)
  • Supermicro X8STI
  • 24GB of Crucial RAM
  • 3x 2TB Toshiba DT01ACA200
  • Corsair Force LS 60GB (playing with ARC and ZIL caching)
  • Some generic 600W PSU
  • Xenta 4U server chassis (cheap and cheerful)

All round, the machine was pretty good at what it was doing. However, it was limited and not that safe.

Firstly, it was NOT using ECC memory; running ZFS without ECC is not recommended. Although I did not experience any data corruption through this… your mileage may vary. Secondly, those four drives were using the onboard SATA ports. Although the board has an additional 8 SAS ports, that controller would not work with ZFS. Plus the chassis limits drive expansion, as it only has internal mounting for 8 drives.

So some time later, around April 2015, I came across a Supermicro SC846-R900B: a pretty neat 24-disk 4U server chassis with dual redundant 900W power supplies. The only thing missing was ALL of the drive caddies. The package was so neat, however, that I could not pass up the opportunity to get my hands on it.

Inside Supermicro Chassis

Inside the chassis was a small fan control board (Supermicro JBPWR2) and an external SAS to internal SAS connector, which was connected to the centrepiece… the SAS expander backplane. The Supermicro SAS846EL1 is a Gen1 SAS (3Gbps) expander backplane, so it takes a single SAS connection (normally only 4 drives) and allows connecting up to 24 drives. Although 3Gbps wasn't exactly amazingly speedy, it was still 3Gbps x4 connections (one SAS 8087 connector carries four links) and would be suitable for my own storage and test environment.
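To put that uplink into perspective, here's a back-of-envelope sketch of the shared bandwidth (assuming SAS Gen1's 3Gbps line rate with 8b/10b encoding; rough approximations, not benchmarks):

```python
# Back-of-envelope: shared uplink bandwidth through a Gen1 SAS expander.
LANE_GBPS = 3.0      # SAS Gen1 line rate per lane
ENCODING = 8 / 10    # 8b/10b encoding overhead
LANES = 4            # one SFF-8087 cable carries 4 lanes
DRIVES = 24          # drives hanging off the expander

lane_mbs = LANE_GBPS * 1000 / 8 * ENCODING  # ~300 MB/s per lane
total = lane_mbs * LANES                    # ~1200 MB/s uplink
per_drive = total / DRIVES                  # ~50 MB/s if all drives are busy
print(f"uplink ≈ {total:.0f} MB/s, ≈ {per_drive:.0f} MB/s per drive under full load")
```

So even in the worst case of all 24 drives streaming at once, each would still see roughly 50MB/s — fine for a home storage and test box.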

However, this did not turn out as planned. With my HBA connected, the system would detect any drives connected to the expander and allow viewing of the drive information. Within an operating system, the drives could be read from, but none could be written to… uh oh. At first I thought it might be a drive issue, but the drives operated perfectly fine over SATA and directly attached to the HBA. The HBA was then suspect, but regardless of firmware version or type, the same problem remained. As the expander is not designed to be user serviceable, I was stuck. An interesting note on this: Linux- and BSD-based OSes couldn't write to the drives, whilst Windows could, but only at <1MB/s.

This is where the plans for the system changed drastically. The idea of using the expander was out the window, which left me with three options.

  • Get a replacement backplane.
  • Use an alternative SAS expander.
  • Get more HBAs.

Getting a replacement backplane was out of the question. Any replacements I could find were few and far between, as well as far too expensive, which pretty much ruled this option out.

I was a bit sceptical of getting another SAS expander due to the issues I had been experiencing with the current one.

The final option… getting more HBAs was looking pretty good at this point. I could either source a single HBA that could connect all 24 drives or get additional HBAs. A 24-drive SAS card that could be flashed with IT mode firmware was a bit darn expensive. So… more HBAs it would be. Using more HBAs would allow each drive to be directly connected, giving the maximum amount of throughput to each drive… awesome.

The only issue with using more HBAs would be that a different motherboard was required. The Supermicro X8STI only has one PCI-e slot that can be used without a riser, which would prevent multiple HBAs. After some cost calculating (and realising the X8STI did NOT support ECC memory), I sprang for an X8DTN+.

Supermicro Motherboard

So the X8DTN+ has dual LGA 1366 sockets but also has 18 DIMM slots. Typically an LGA 1366 socket only supports 6 DIMMs per CPU, whilst this particular board supports 9 per CPU. Basically, this board would allow for a massive memory expansion in the future.

So overall, the new system specs are as follows:

  • Dual Intel Xeon L5520 (quad core with HT at 2.26GHz) (8 cores / 16 threads total)
  • Supermicro X8DTN+
  • 24GB of ECC RAM

With the new board, some passive heatsinks were acquired for the processors, along with an additional HBA. This time a Dell PERC H310 was used, which takes some additional steps to flash to IT mode compared with normal LSI cards.

During all of this, though, the old system decided that I should suffer a disk failure. Thankfully, using RAID-Z1 I had disk redundancy and did not lose any data. Ordering a replacement drive plus 3 additional drives, I configured the 3 new drives into an additional RAID-Z1 and copied the data across from the damaged RAID-Z before replacing the damaged disk. After the array had resilvered (rebuilt the data), I investigated the “damaged” drive. After formatting, it seemed perfectly fine and actually works to this day in my desktop. Somewhere along the line, the array must have tripped up, but it was better not to take the risk, so I replaced the drive.
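For anyone curious, the recovery went roughly like this in ZFS terms. This is a sketch with placeholder names — `tank`, `tank2` and the `ada*` device names are hypothetical, not my actual pool layout — and the exact steps depend on your setup:

```shell
# Build a temporary RAID-Z1 pool from the 3 new drives (placeholder devices).
zpool create tank2 raidz1 ada4 ada5 ada6

# Snapshot the degraded pool and replicate everything across.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F tank2/backup

# Swap the failed disk for the replacement and let ZFS resilver.
zpool replace tank ada2 ada3

# Watch the resilver progress.
zpool status tank
```

The nice part of doing it this way is that the data existed in two places before the degraded pool was touched at all.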

After all of this, the new system has the 6x 2TB drives configured into a RAID-Z2, giving me a total of 8TB of usable space and two-disk redundancy. A new offsite backup for this is in the making.
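The capacity maths is simple enough to sketch — usable space is just the data disks times the disk size (ignoring ZFS metadata and slop-space overhead, which shave a little off in practice):

```python
def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Rough usable capacity of a RAID-Z vdev: (disks - parity) * disk size."""
    return (disks - parity) * disk_tb

print(raidz_usable_tb(6, 2.0, 2))  # new pool: 6x 2TB RAID-Z2 -> 8.0 TB
print(raidz_usable_tb(3, 2.0, 1))  # old pool: 3x 2TB RAID-Z1 -> 4.0 TB
```

So the move from RAID-Z1 to RAID-Z2 doubled both the usable space and the number of disks that can fail without data loss.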