Background
I’ve had my home lab for a couple of years at this point, and overall I’m impressed. I’ve run a ton of workloads on these things: vSphere, vSAN, vCenter, NSX-T, NSX ALB (Avi Networks), TKGS (Tanzu via vCenter), and TKGm (standalone Tanzu). I’ve also installed various firewall and DNS tools like pfSense, BIND, Pi-hole, etc., and of course other Kubernetes distros like K3s, OpenShift, and more. I’ve had a ton of it running at the same time, and I don’t seem to hit any CPU or memory bottlenecks. Pretty impressive for a nested environment running on just 2 physical CPUs.
The only real issue I’ve found is that when a VM vMotions from one physical Supermicro to the other, it seems slow. Slow enough that I think it can back up various processes and, in some cases, cause crashes. I can’t prove it, but that’s how it seemed. I had been running a standard 1Gb link through my unmanaged switch to the other Supermicro, so all traffic, ESXi to ESXi and to the internet and back, shared that same 1Gb link. That could well have been the issue.
Solution
So, I decided to purchase 2 PCIe 10Gb adapters to get 10Gb connectivity between the physical ESXi hosts.
Specifically, I purchased an Intel X550-T2. There are quite a few on eBay and some on Amazon. I’m sure other cards will work, but this is what I purchased and it’s been working great! The card provides two 10Gb ports; I’m currently only using one.
Setup and Installation
If you’ve followed my guide so far to build out the SuperMicros, you should be familiar with the Motherboard. The SUPERMICRO MBD-M11SDV-8C comes with only 1 PCIe slot.
My purchase included 2 brackets for the card; I had to remove the larger one and install the smaller bracket to fit my mini-tower chassis. Install whichever works for your case.
Another image of the back
Disconnect the power cable, ethernet, and other components and get ready to open up the server.
Remove the tray to access the motherboard. You can see I’m focused on a particular screw at the top; this needs to be removed to fit the 10Gb card. The area highlighted in yellow is where you’ll place the card.
The installed card and replaced screw.
Everything reinstalled. You can see the ethernet connections here: one cable on the 1Gb link goes to my home network and the internet, and the two 10Gb NICs are directly connected to the other Supermicro. Again, I’m only using one link right now, but I’ve cabled both in case I need the second in the future.
Validate the Install
After closing up the server and hooking up the cables, power it on and wait for ESXi to boot up.
You should now see the new NICs available under Physical NICs, listed as 10000 Mbps, full duplex. If they don’t show up at that speed, double-check that the card is seated properly and that your cabling supports 10GBASE-T; failing that, you might have a bad card.
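If you’d rather check from the ESXi shell (this assumes you have SSH enabled on the host), `esxcli` can report link speed and duplex directly; `vmnic4` below is just an example name, so substitute your own:

```shell
# List all physical NICs with link status, speed, and duplex
esxcli network nic list

# Show detailed driver and link info for one of the new ports
# (vmnic4 is an example; yours may be numbered differently)
esxcli network nic get -n vmnic4
```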
To finish validating the install, I would recommend doing a speed test between the ESXi hosts. You can apparently do this using iperf3, which ships with ESXi, but I was not able to get that working with vSphere 8. So instead, I created 2 VMs (Ubuntu or CentOS), 1 on each physical host, and ran a speed test between them using iperf3.
Speed Test Results
VM on Physical Host 1 to VM on Physical Host 2
Before: 939 Mbps
After: 9.31 Gbps (I’m assuming the slight loss from line rate was due to undersized VMs or Ubuntu/CentOS overhead; I used 2 CPUs each)
IPERF Testing
So again, you’ll need 2 VMs running to do this; CentOS, Ubuntu, or whatever distro you like is fine. Create the appropriate networking on the physical hosts so that the vSwitch connects to the physical NIC on the 10Gb link.
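As a sketch, that vSwitch and port group setup can also be done from the ESXi shell. The switch and uplink names below match my setup (vSanvMotionSwitch, vmnic4); the port group name is just an example, so adjust everything for your environment:

```shell
# Create a new standard vSwitch for the 10Gb link
esxcli network vswitch standard add -v vSanvMotionSwitch

# Attach the 10Gb NIC as the uplink (vmnic4 in my case)
esxcli network vswitch standard uplink add -v vSanvMotionSwitch -u vmnic4

# Add a port group for the test VMs (port group name is an example)
esxcli network vswitch standard portgroup add -v vSanvMotionSwitch -p VM10G
```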
You can see in my example above that I created vSanvMotionSwitch with 2 port groups. My Ubuntu VM is called ubuntutest, and the uplink is vmnic4 at 10000 Mbps.
Host 1:
# Install iperf3
sudo apt -y install iperf3
# Set up the server to listen
iperf3 -s -B 10.10.10.9
Host 2:
# Install iperf3
sudo apt -y install iperf3
# Set up the client to connect and send packets
iperf3 -i 2 -c 10.10.10.9
Output:
root@matt-virtual-machine:/home/matt# iperf3 -s -B 10.10.10.9
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.10.10, port 48142
[ 5] local 10.10.10.9 port 5201 connected to 10.10.10.10 port 48144
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 998 MBytes 8.37 Gbits/sec
[ 5] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 2.00-3.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 4.00-5.00 sec 1.07 GBytes 9.22 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.35 Gbits/sec
[ 5] 6.00-7.00 sec 1.09 GBytes 9.32 Gbits/sec
[ 5] 7.00-8.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 8.00-9.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 9.00-10.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 10.00-10.04 sec 44.5 MBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.04 sec 10.8 GBytes 9.27 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
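If you want to push the test a bit further, iperf3 has a few standard flags worth knowing; these are all run from the client VM against the same server address:

```shell
# Multiple parallel streams (can help saturate a 10Gb link)
iperf3 -c 10.10.10.9 -P 4

# Reverse mode: the server sends, so you test the other direction
iperf3 -c 10.10.10.9 -R

# Longer run with machine-readable JSON output
iperf3 -c 10.10.10.9 -t 30 --json
```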
There ya go, hope that helps.