Archive: Server Build 2013 Pt1
Server Build - Part 1
Back in the summer of 2013 when I originally built this machine, I had a lot of pre-built OEM machines, mainly Dell PowerEdge servers. However, their main limitation was their age: all the servers I had were either LGA 771 or older, and only had an average of 8-16GB of DDR2 RAM.
So I decided (after a quick browse of eBay) to build my own.

This is what I picked up. All the parts came from eBay except the SSD, as I wanted a new one (and it was cheap enough anyway). Specs:
- Intel Xeon L5520
- SuperMicro X8STI-3F
- 6x 4GB Crucial DDR3 (Non-ECC)
- Corsair Force LS

The CPU is a cheap LGA 1366 Xeon. The L5520 is a Hyper-Threaded quad-core clocked at 2.26GHz with a nicely balanced 8MB of L3 cache. The “L” tag denotes that the Xeon is a “low power” version of the chip. This wasn’t a feature I was particularly after when building the machine; it just so happened to be slightly cheaper than its normal “E”-series brethren.

The motherboard is a Supermicro X8STI-3F, all-round a very nice motherboard. Since building this machine I have become a very big fan of Supermicro boards, due to their build quality and feature set. I was actually amazed at how cheaply I got this board, and now you can pick these up new on eBay for around £40.
I was always a fan of the triple-channel memory design of the LGA 1366 socket, and actually owning one made me really appreciate the engineering marvel that is modern computing.
Anyway, back to the board. Its most notable features for me, using it as a server, are the dedicated IPMI interface and the dual Gigabit LAN ports built onto the board.

Oh and did I mention it comes in a stylish whitebox! HARDCORE!!!!!!!111!!!

These boards come very well packaged and contain all the regular stuff you might need to get going. Note the thickness of the manual: it’s only in English and super detailed.

Now this board has a whole host of features, such as the 6x SATA (3Gbps) and 8x SAS (6Gbps) ports providing ample disk connectivity. However, one of the immediately noticeable things about this board is the lack of expansion I/O. Out of the box it comes with one PCIe 2.0 x16 slot, one PCIe 2.0 x8 slot and one 32-bit PCI slot. The motherboard is designed to fit into a 1U or 2U case, so the PCIe slots are stacked in an odd way that makes the x8 inaccessible in a standard 4U or ATX tower case. But this was not an issue, as there were no plans to use any expansion cards.

Rear I/O is simple, consisting of good old PS/2 Mouse and Keyboard, the 10/100M IPMI interface, 2x USB2.0, an RS-232 Serial connector, VGA and the Dual Gigabit LAN.

The LGA 1366 socket up close.

So to get testing (as the case hadn’t arrived) I gathered up some old parts to use: a 600W OCZ PSU, a Zalman CNPS9900 (green) and two random 250GB 2.5" SATA drives. The SATA drives were so I could test the RAID 0, 1 and 10 capabilities of the board (RAID 5 requires an additional hardware key).

Using the box as a test bench, assembly was quick and painless. After a quick look in the manual to see where the power-button header was, the machine booted first time, immediately detecting all of the memory with no problem.

First step (after setting up the BIOS) was to test the memory. Although the memory was new, I thought it would be better to test it just in case. Interestingly, Memtest86 detected the Xeon as an i7, which seemed a little strange, but at least ECC support was still being detected (even though ECC wasn’t in use).

So after hooking all the drives up to the SAS controller, it was time to start playing.

The cheap Corsair Force LS would be the heart of the storage, acting as the OS drive.

Going into a controller’s BIOS is always strange; they’re all different, but at least LSI keeps theirs fairly similar.

Configuring the two HDDs into a nice RAID 1. For redundancy, of course.

After initialising the array, it was time to install something… Windows in this case. The spec read-outs are a little easier to understand with a nice GUI than they are in Linux (although these days I would do it in Linux).
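For what it’s worth, the equivalent read-outs under Linux are only a few commands away. A quick sketch, assuming a typical distro with util-linux and coreutils installed:

```shell
# Quick spec read-out under Linux: CPU, memory, no GUI required.
lscpu | grep -E 'Model name|Socket|Core|Thread'   # CPU model, cores, threads
grep MemTotal /proc/meminfo                       # total installed memory
nproc                                             # logical CPU count (8 on an L5520: 4 cores + HT)
```

Not as pretty as Task Manager, but it tells you the same things.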

Whilst waiting, I started to browse the manual. Told you it was super detailed!

A standard Windows Server install; thankfully it saw the RAIDed drives with no issue. But interestingly… not the SSD… Oh wait. That’s because I had it hooked up to the SAS controller and not configured! Herp a derp!

Whoop!
Nice to see Task Manager reporting 24GB of RAM :P
Pi Time, a simple benchmark that calculates Pi.
You know how I said this was Windows… well, here’s a quick test of prime-number calculation in Linux.
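Nothing fancy is needed for that sort of test; a throwaway trial-division prime count in awk (a sketch, not what I ran at the time) is enough to load a core and get a rough timing:

```shell
# Throwaway CPU test: count primes below a limit by trial division in awk.
# Not a rigorous benchmark -- just enough work to time a single core.
count_primes() {
  awk -v n="$1" 'BEGIN {
    c = 0
    for (i = 2; i <= n; i++) {
      isp = 1
      for (j = 2; j * j <= i; j++)
        if (i % j == 0) { isp = 0; break }
      if (isp) c++
    }
    print c
  }'
}
time count_primes 100000   # prints 9592 (the number of primes below 100,000)
```

Run it with a bigger limit if it finishes too fast to be worth timing.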
So this is the CrystalDiskMark result for the RAID 1… In short: shit. But then, that is two laptop drives.
Switching to RAID 0 yielded nearly double the performance (about the same as a standard 3.5-inch drive), but, most importantly, no redundancy.
So after initialising the SSD on the SAS controller… something was up.
Switching it to the SATA ports got more realistic scores… however, the read speeds were supposed to be much higher.
Taking the drive and stuffing it into my primary desktop (an AMD FX system with SATA 6Gbps) gave better results.

So yep, SSDs really are held back on SATA II compared to SATA III.
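The numbers make sense once you account for 8b/10b line encoding: each SATA generation’s usable ceiling is roughly its raw gigabit rate divided by ten, which is why a SATA III SSD tops out around 300 MB/s on a SATA II port:

```shell
# SATA link ceilings: raw Gbit/s divided by 10 (8b/10b -> 10 line bits per byte).
for gbps in 1.5 3 6; do
  awk -v g="$gbps" 'BEGIN { printf "SATA %g Gbps -> ~%d MB/s usable\n", g, g * 1000 / 10 }'
done
```

Real-world speeds land a little under those ceilings once protocol overhead is counted, but the ~300 vs ~600 MB/s gap matches what the benchmarks showed.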
Anyway, all that was left was to wait for a case… Part 2