Home Server Build – Choosing Hardware

I’m going to show you the home server that I recently built and how I set it up. It’s got 16 CPU cores and 128GB of RAM, so it should be able to serve all the things! The motherboard, CPUs, and RAM were all bought second-hand to cut down costs.
The rest of the parts were purchased brand new. The server will primarily be running multiple virtual machines at once, so a high core count was important to me. This is why I selected two Intel Xeon E5-2670 CPUs: they go for around $150 AUD each and have 8 cores and 16 threads, so combined with a dual-socket board I’ve got a total of 16 cores and 32 threads. I’ve seen these go for as little as $70 USD each, so they’re definitely worth a look for a cheap server build, especially as recent Intel CPU generations have offered only small performance gains over their predecessors.
The 2670s are based on the Sandy Bridge architecture and came out in early 2012. They are clocked at 2.6GHz, can turbo up to 3.3GHz, and each have 20MB of cache. Although the CPUs are cheap, keep in mind that a dual-socket board may not be, so I suggest checking all pricing before you commit to buy. To keep all these cores cool I’ve got two Noctua NH-U12DX i4 CPU coolers, which support socket LGA2011, and I’ve found that they perform quite well; I’ve done a separate post on them with more details if you’re interested. For the board I’m using an Intel S2600CP2J.
While the board came out in 2012 it’s still got the features I need, such as two 1Gbit NICs, SATA 3 support, and of course the most important factor: dual LGA2011 sockets. Sure, there’s no support for DDR4 memory, NVMe drives, or other newer features, but for my server this will get the job done. The board also has 16 RAM slots, so of course I had to fill all of those with 8GB sticks for a total of 128GB of memory.
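The headline numbers above are just the per-part figures multiplied out; here’s a quick sketch (Python used purely as a calculator, with all numbers taken from the parts list above):

```python
# Totals for the build: dual E5-2670s plus 16 slots of 8GB DDR3.
cpus = 2                # dual-socket Intel S2600CP2J
cores_per_cpu = 8       # each E5-2670 has 8 cores / 16 threads
threads_per_core = 2    # Hyper-Threading
ram_slots = 16
gb_per_stick = 8

total_cores = cpus * cores_per_cpu
total_threads = total_cores * threads_per_core
total_ram_gb = ram_slots * gb_per_stick

print(f"{total_cores} cores / {total_threads} threads, {total_ram_gb}GB RAM")
# → 16 cores / 32 threads, 128GB RAM
```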
I found 8GB sticks to be a pretty good sweet spot for size and price; once you start looking at 16GB sticks or higher the price climbs quickly, and 128GB of RAM should be more than enough for me anyway. I admit that this will probably be overkill, but I just had to do it! I was actually looking at buying all these parts separately on eBay, but then found that I could buy them together as a bundle for $500 AUD less, so I went with that. I got the bundle from natex.us; I’ll leave a link below.
I also got a 1TB Samsung 850 EVO SSD. As my main PC is always low on SSD space, which prevents me from running virtual machines there, I figured I’d just get something big with decent performance that should last me for quite a while. I’m running the hypervisor operating system on the same disk as the virtual machines; I did consider buying a smaller, cheaper SSD to dedicate to the hypervisor, but in the end decided it wasn’t worth the extra cost for my lab environment. If I hit any major IO issues I’ll revisit that.
Speaking of the hypervisor, I still haven’t locked down what I’m going to end up using. I’ve used Xen, Hyper-V, and VMware ESXi in the past, so I’ve got a bit of experience with how those all work. At the moment I’m testing out Microsoft Hyper-V Server 2016, as it’s free. Although the hypervisor operating system is free, you still need to license any Windows virtual machines that you run on top of it, but of course Windows does have a trial, so you could keep rebuilding your VMs. I’ll either stick with this or the free version of ESXi that VMware offers; it’ll take me some time to evaluate them in my environment. So far I’ve only set up some Linux virtual machines that I regularly use, and I’ve had no problems at all. After getting the server set up and configured, I basically just power it on and connect to it from my Windows desktop with Hyper-V Manager.
From there I can create, manage, start, and stop virtual machines on the server over the network. The other hypervisors all offer similar functionality, and that’s how they generally work. For the power supply I went with a Corsair HX850i. The only real requirement here is that the power supply needs to have two EPS connectors, as there are two CPUs to power, so keep that in mind: many desktop power supplies only have one. I’ve put all these parts into a Phanteks Enthoo Luxe case with tempered glass.
As the motherboard is a server board, its form factor is SSI EEB rather than standard ATX, so I was a little limited in the cases I could use. While it’s possible to use an ATX case and drill your own holes, I wanted something that would just work out of the box without any case modding. I could have paid less and got the version with the plastic window, but, well… DAT GLASS. I’ve done a full review post of the case if you’re interested. Now let’s take a look at some benchmarks! I’m only going to be performing CPU-based benchmarks here, as I’m not using a dedicated GPU; this is a server, and I’m not going to be using it to play games or do any graphical work. I’ll mostly be connecting to virtual machines remotely over the network from my desktop, so CPU power was definitely the priority. I’ll throw a 7700K into the results too, just for scale. In the Cinebench benchmark the Xeons got a score of 2,030, which is pretty nice.
In the PassMark CPU benchmark I got a CPU score of 19,622, which was in the 99th percentile of all CPU tests; not bad at all for some old Xeons. In Geekbench 4 I got a single-core score of 2,627 and a multi-core score of 23,742. As expected, the 7700K has much better single-core performance, but it’s no match for 16 cores. I let the 7-Zip benchmark run for 10 passes with a dictionary size of 32MB, which resulted in a score of 58,301 MIPS.
I then used HandBrake to encode a 500MB MP4 video file that I recorded, from 1080p down to 720p. The dual Xeons completed the task averaging 75 frames per second. Testing was done at an ambient room temperature of 21 degrees Celsius, and at idle the 16 CPU cores sat anywhere between 29 and 37 degrees, which makes me wonder whether my thermal paste is evenly spread, as that’s quite a bit of variance.
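To put that 75 fps average in perspective: if the source recording was 30 fps (an assumption; the post only gives the 75 fps figure), the encode runs about 2.5x faster than real time. A quick sketch:

```python
# Encode-time estimate from the 75 fps HandBrake average.
# The 30 fps source frame rate and 10-minute clip length are assumptions.
source_fps = 30
clip_seconds = 10 * 60

total_frames = source_fps * clip_seconds      # 18,000 frames
encode_fps = 75                               # measured average from the post
encode_seconds = total_frames / encode_fps    # 240 seconds

speedup = encode_fps / source_fps
print(f"10 min clip encodes in {encode_seconds:.0f}s ({speedup:.1f}x real time)")
# → 10 min clip encodes in 240s (2.5x real time)
```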
During benchmarking with all 16 cores maxed out, the core temperatures ranged from 58 to 63 degrees Celsius; not bad at all! I’m really impressed with the Noctua coolers. Honestly, I could probably throw in a graphics card and just use this server as a desktop PC or workstation, as I’m still using the aging PC I built in late 2010. The Xeon CPUs easily outperform my Intel i7 950 with its 12GB of RAM, but I like the idea of having a server to keep the workloads separate. I can do something cooler when I replace my desktop, probably a higher-clocked CPU with fewer cores, as I should hopefully have all the cores I need on the server.

Something important to note if you’re looking to do a build with the E5-2670 is that the SR0KX revision, which is what I have here, has proper support for Intel VT-d. If you instead get the earlier SR0H8 revision, you won’t have VT-d support; this is why the SR0H8 chips go for a little less, as they are missing this feature. If you don’t need VT-d you can save some money and get those; otherwise, pay the extra and make sure you get the SR0KX revision.

Overall I’m happy with how the server build turned out; despite being limited in case selection, I think the end result looks pretty nice.
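That stepping caveat boils down to a simple lookup. A minimal sketch (the two stepping codes and their VT-d status come from the paragraph above; the helper function itself is just illustrative):

```python
# VT-d support by E5-2670 stepping code, as described above.
VTD_BY_STEPPING = {
    "SR0KX": True,   # later revision: VT-d supported
    "SR0H8": False,  # earlier revision: no VT-d, usually a bit cheaper
}

def supports_vt_d(stepping: str) -> bool:
    """Return True if the given E5-2670 stepping supports Intel VT-d."""
    code = stepping.upper()
    if code not in VTD_BY_STEPPING:
        raise ValueError(f"unknown E5-2670 stepping: {stepping}")
    return VTD_BY_STEPPING[code]

print(supports_vt_d("SR0KX"))  # → True
```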
I can run multiple virtual machines with plenty of resources assigned to each, and everything works perfectly; no complaints with regard to performance at all, especially at this price. I spent just under $2,000 AUD all up, so around $1,500 USD, and about half of that was on the second-hand motherboard, memory, and dual CPU bundle. Comparing it against much more expensive, newer Intel Xeon CPUs makes me think it was a good deal, but maybe that will change with the upcoming Intel Core i9 and AMD Threadripper launches.
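For anyone sanity-checking the pricing, the conversion works out like this (the 0.75 AUD→USD exchange rate is my assumption; it just matches the rough figures quoted above):

```python
# Rough build-cost breakdown; the exchange rate is an assumption.
aud_to_usd = 0.75

total_aud = 2000              # just under $2,000 AUD all up
bundle_aud = total_aud / 2    # about half went on the CPU/board/RAM bundle

total_usd = total_aud * aud_to_usd
print(f"total ≈ ${total_usd:.0f} USD, bundle ≈ ${bundle_aud:.0f} AUD")
# → total ≈ $1500 USD, bundle ≈ $1000 AUD
```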
So what did you think of my server build? Be sure to let me know your thoughts down in the comments, and I’d be interested to hear if you’re running any servers at home. Give the post a share if you found it useful.