This component list is for the 2022 version of the lab. I basically doubled everything on the server side so I could have two servers instead of one.
Component List for Servers
I pretty much followed this guide, except for a few small changes, since some of the parts were discontinued or had their model numbers changed.
| Quantity | Name | Details |
|---|---|---|
| 2 | SAMSUNG 870 EVO Series 2.5″ 2TB SATA III V-NAND Internal Solid State Drive (SSD) MZ-77E2T0B/AM | 2TB SSD. In hindsight, I should have bought an 8TB; I'll have to install a new one later. |
| 2 | Supermicro SSD-DM032-SMCMVN1 32GB SATA DOM | The SATADOM installs right on the motherboard, and you will install the ESXi image onto this drive. 32GB is plenty. |
| 2 | SUPERMICRO MBD-M11SDV-8C+-LN4F-O Mini ITX Server Motherboard | 8 cores, pretty beefy, and you can overclock it. I have quite a bit running and I'm only around 60-70% utilization. Up to 512GB of memory should be plenty. Also, this model comes with an active fan on the CPU: MBD-M11SDV-8C+-LN4F-O ships with an active CPU fan, while MBD-M11SDV-8C+-LN4F ships with a CPU heatsink. |
| 4 | Noctua NF-A6x25 PWM, Premium Quiet Fan, 4-Pin (60mm, Brown) | Honestly, these don't fit great; it is a snug fit. I would probably have gone with just one, or maybe a single larger fan. But they sure are quiet! |
| 2 | Supermicro CSE-721TQ-350B 350W Mini-Tower Chassis | The tower. |
| 1 | 256GB (4x64GB) DDR4-2666 PC4-21300 2Rx4 RDIMM ECC Registered Memory by NEMIX RAM | I bought 256GB of memory, enough for 128GB per server. It should be plenty, and it leaves room for expansion later. |
| 1 | 128GB (2x64GB) DDR4-2666 PC4-21300 2Rx4 RDIMM ECC Registered Memory by NEMIX RAM | I later expanded by adding an additional 64GB per Supermicro. This is not required, but if you run everything I'm running, you'll hover right around 100-110GB of consumed memory per server. Too close for my comfort. |
| 2 | Intel X550-T 10Gb Network Cards | If you're using multiple servers and a vSAN, I found it is definitely worth linking the two servers via a 10Gb connection and using it for vMotion and vSAN. |
| Multiple | Ethernet cables, SATA cables, a monitor/keyboard for configuring and installing ESXi on the Supermicro, a surge protector or battery pack | You'll obviously need some extra components that you might already have lying around. |
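If you do add the 10Gb cards, the dedicated link gets tagged for vMotion and vSAN traffic on each ESXi host. Here is a rough sketch using the ESXi shell; the port group name `10Gb-Storage`, the interface `vmk1`, and the IPs are all placeholders for your own setup, and this assumes the vSwitch and port group on the 10Gb uplink already exist (you can do the same thing in the host client UI):

```shell
# Minimal sketch: dedicate a vmkernel interface to vMotion/vSAN traffic.
# "10Gb-Storage", "vmk1", and the addresses below are hypothetical examples.

# Create the vmkernel interface on the 10Gb port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=10Gb-Storage

# Give it a static IP on the point-to-point link between the two hosts
# (use 192.168.100.2 on the second host)
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static

# Tag the interface so ESXi uses it for vMotion and vSAN traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VSAN
```

Keeping vMotion and vSAN on their own back-to-back 10Gb link means storage sync traffic never competes with the lab's management network.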
Optional Component For Rack Enclosure
The Build
I built this a month or so ago, so I'll have to go back and get some pictures of the internal components. For now, follow the guide here: https://jorgedelacruz.uk/2020/10/05/supermicro-analysis-of-the-best-home-lab-server-2020-supermicro-m11sdv-8c-ln4f/
I don't have exact steps for the build, but if you've built a computer before, it's basically the same thing: RAM goes in the RAM slots, SATA cables to your SSDs. All pretty easy.
Awesome setup, mate! I appreciate that you took the time to list all the new components, and also to link to the blog!
I have updated my kit: exactly the same hardware, same rack, same everything, but I moved to a 1U rackmount to use the space better, and of course kept it far from my desk, as heat on a 1U is a bit trickier: https://jorgedelacruz.uk/2022/02/23/supermicro-my-preference-homelab-choice-for-2022-supermicro-a-server-5019d-ftn4/
But AMD FTW! Its speed/temperature ratio has no match yet.
I love this setup! A single server powered so much for me; now I need to add NSX-T and a few other components, and thus need more juice 🙂
I appreciate your original blog, it set me down the AMD path!
Hi Matt,
2 questions –
1 – Would you still go with this config in 2024? The form factor/power draw/noise level is obviously nice, but the tech is getting pretty long in the tooth (VGA, 8c/16t, DDR4-2666, etc.). I'm looking at this setup or a stack of NUCs currently, though.
2 – Will this mobo boot from the M.2 slot? I'd much rather use an M.2 than the SATADOM.
Thanks!
TD
1. It depends. If you want to run most of the vSphere stack (outside of VCF), then two of these servers connected via a 10Gb link will be more than enough. I never hit CPU/memory thresholds on my servers; it was always slow vSAN transfer speed that broke things.
But if you’re trying to migrate your lab to VCF, then that has yet to be determined. I am very close on my final testing, and when I’m done (if successful) then I will write a blog about it.
I’ve heard good things about the NUCs though.
2. I looked at better storage options at one point, and as I'm typing this on my phone I won't remember what those were. SSDs are slower compared to a disk option that attaches directly to the motherboard; I forget the name of it. I don't believe these boards had that option, so an SSD was my best option. I only used the SATADOM because I was following another blog and they used it. You'll be fine with pretty much anything else.
Hi Matt,
Awesome, thanks for the input. I'm mostly interested in a lab for NSX-T and Avi right now, and it sounds like this mini-server build would still work well for that. I've had very mixed experiences with SM support in the past, but overall they make good "prosumer" gear, IME. I'll probably pull the trigger on this setup. Thanks very much for documenting this so well and linking to Jorge's blog as well.
NUCs (and NUC offshoots) are a lot of fun if you like small-form-factor computing, but they have a lot of inherent limitations. They can be made to work with ESXi, but there are usually a number of gotchas to work around first, with flings, boot options, etc. For modest workloads they can be great, but you can also spend a lot of time going down troubleshooting/workaround rat-holes with them if you're not careful.
I'm guessing it was the M.2 you were looking at: the "gumstick" SSD. SM lists an M.2 2280 on the board, so I guess I'll see if I can get it working. I have a bunch of smaller 2280s.
Anyhow, thanks again!
TD
Well, I lied. I grabbed the SATADOM. It was less than $60 and frees up the M.2 for a datastore. It saves me the hassle of running power and SATA to the top of the case, if nothing else.
I'm definitely glad you posted your comment. I read into the M.2 back in the day but just didn't see it on the motherboard. I was thinking it looked like a PCIe slot 🙂
And I wanted to keep that PCIe slot free in case I needed it (I ended up adding a 10Gb NIC using the PCIe slot).
But the board does actually have an M.2 slot, so I would definitely get a 4TB SSD and use that in lieu of regular SSDs. Way faster read/write speeds.
Yes, I would definitely save the M.2 for compute, and leave another SSD or the SATADOM for the OS.
Good luck!
Hi Matt – just wanted to say thanks again for all your work documenting this stuff. It has been very helpful.
FWIW, the M.2 is PCIe 3.0 on this board, though it appears to *only* support 2280-length cards. But given how stale some of the tech on the board is (VGA???), the M.2 is very welcome. I got all the parts over the last week or so and assembled everything today. The SATADOM worked great for the ESXi install, and the M.2/NVMe drive was detected and available as a datastore. I'm definitely glad I decided to grab that SATADOM after all; it made for cheap and easy OS/data segregation. I grabbed a cheap $12 VGA-to-HDMI converter that "just worked" too. Couldn't have been much easier.
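For anyone doing the same SATADOM-for-OS, M.2-for-datastore split, a quick way to sanity-check that ESXi actually sees the NVMe drive is from the host shell. A minimal sketch; the device naming will differ on your hardware, and the datastore itself is easiest to create from the host client UI (Storage > New datastore):

```shell
# Confirm ESXi detected the NVMe controller/namespace on the M.2 slot
esxcli nvme device list

# List all storage devices and pick out the NVMe drive
esxcli storage core device list | grep -i nvme

# After creating the datastore, confirm the mounted VMFS volume
esxcli storage filesystem list
```

If the drive shows up in the first two commands but no datastore exists yet, the disk is detected and just needs a VMFS volume created on it.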
I can’t say I’m a fan of this case, though – it has a number of design problems that bother me, especially considering the price. I’ll probably go with a different mini-itx option for lab box #2.