This is not a required step for building your home lab. It’s an optional tweak that can give you a slight performance boost if you want it. Obviously, modifying any of these BIOS settings can break your system, and neither I nor Supermicro is responsible if it does. I’m providing this as a reference for how I did it on my setup.
Reboot the device either by SSHing into ESXi and typing “reboot” or by using the reset button on the front of the Supermicro. During the reboot, press F11 to enter the Aptio Setup Utility BIOS screen. (I don’t have any screenshots of this, but it’s pretty straightforward.)
Modify CPU Settings
There are two settings I adjusted to increase my clock speed:
cTDP Control setting from default of 55W to 75W
Determinism Slider from default of Auto to Power
To modify these, navigate to the Advanced tab and select CPU Configuration. You will see the two options, cTDP Control and Determinism Slider; change them to the values listed above.
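Once you save the changes and boot back into ESXi, you can optionally confirm the host reports the expected core speeds. These are standard read-only esxcli commands run from an SSH session on the host:

esxcli hardware cpu list | grep -i "core speed"   # per-CPU details, including reported core speed
esxcli hardware cpu global get                     # overall package/core/thread summary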
Use nano/vi/vim or your favorite editor and create a file named blue-deployment-lb.yaml. It defines a Deployment named blue that runs the app on containerPort 5000 with an app_color environment variable, plus a LoadBalancer Service exposing a port named http.
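Here is a minimal sketch of what that manifest can look like. The container image and label names are placeholders I’m assuming for illustration (any web app listening on port 5000 works), so swap in your own:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: your-registry/simple-webapp-color:latest   # assumption: any app listening on 5000
        env:
        - name: app_color
          value: "blue"
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer
  selector:
    app: blue
  ports:
  - name: http
    port: 80
    targetPort: 5000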
Apply the blue-deployment-lb.yaml file
kubectl apply -f blue-deployment-lb.yaml
Run “kubectl get pods” to check the status. If everything worked, you will see something like the following:
kubectl get pods
NAME READY STATUS RESTARTS AGE
blue-c967796c6-p24kc 1/1 Running 0 76s
blue-c967796c6-sfk7s 1/1 Running 0 76s
Check the services to confirm the LoadBalancer endpoint was created successfully. The IP 10.10.4.18 should now be reachable, and you should be able to test it (a quick test is shown after the output below).
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blue LoadBalancer 10.109.206.160 10.10.4.18 80:32242/TCP 4m4s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h47m
supervisor ClusterIP None <none> 6443/TCP 4h47m
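Assuming the blue app returns a simple page, either of these from the jumpbox should confirm the VIP is serving traffic on port 80:

curl -i http://10.10.4.18/
curl -s -o /dev/null -w "%{http_code}\n" http://10.10.4.18/   # just print the HTTP status code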
Validate the Avi LB VirtualService
Here is the newly created VirtualService. It was created automatically by the built-in AKO (Avi Kubernetes Operator) that ships with TKGs. Note the IP address 10.10.4.18.
Click edit on the Virtual Service and you can see that the Application Profile is set to “System-L4-Application”, indicating this is an L4 VIP. Also note that there is no Pool set at the bottom; the pool is attached through an L4 Policy Set instead, as shown below.
Now that we have the supervisor cluster up and running and our namespace created, we can deploy a guest cluster via the CLI. I installed an Ubuntu 20 VM in vCenter to use as my jumpbox and installed kubectl and the vSphere plugin there. Plugins are also available for Windows and the other major Linux distributions.
Install kubectl and vsphere plugin on jump server
You can download and install kubectl very easily on Linux with these commands:
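One common way to do this, following the upstream Kubernetes install docs, is shown below. The vSphere plugin is downloaded from the supervisor cluster VIP (10.10.4.50 in this lab); treat that URL as an assumption and adjust it to your own control plane address:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# vSphere plugin: pulled from the supervisor VIP (self-signed cert, hence --no-check-certificate)
wget https://10.10.4.50/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate
unzip vsphere-plugin.zip
sudo install bin/kubectl-vsphere /usr/local/bin/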
Log into supervisor cluster and verify cluster is healthy
kubectl vsphere login --vsphere-username firstname.lastname@example.org --server=https://10.10.4.50 --insecure-skip-tls-verify
kubectl config use-context dev
kubectl get pods --all-namespaces ### Should see a list of all the pods running
kubectl get nodes ### Everything should show Ready
kubectl get tanzukubernetesreleases ### Check out the available releases
Create yaml file to build guest cluster
Create a file called guest_cluster.yaml with the following content
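A minimal sketch of what this file can look like is below. The cluster name, VM class, storage class, and Kubernetes release shown here are assumptions; substitute the values available in your namespace and the releases returned by kubectl get tanzukubernetesreleases.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: guest-cluster-1          # assumption: pick your own cluster name
  namespace: dev                 # the vSphere namespace created earlier
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small                   # assumption: a VM class assigned to the namespace
      storageClass: vsan-default-storage-policy  # assumption: your namespace's storage policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
  distribution:
    version: v1.20               # assumption: use a release listed by tanzukubernetesreleases

Apply it and watch the cluster come up:

kubectl apply -f guest_cluster.yaml
kubectl get tanzukubernetescluster -n dev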
In this guide we will configure Workload Management for vCenter 7. We’ll be using the vCenter Server Network option (vSphere Distributed Switches) instead of NSX-T, along with the Avi Load Balancer (NSX Advanced Load Balancer).
If you’ve followed the guide this far, you’ve deployed three ESXi hosts nested on your bare-metal ESXi install. This guide takes it a step further by deploying vCenter and creating a vSAN cluster on those ESXi hosts.
Mount the ISO and use the install wizard to configure vCenter 7
I’m using Windows 10, and it was relatively easy to mount the ISO. In Windows Explorer, I navigated to the Downloads directory where the ISO was and double-clicked it, then opened vcsa-ui-installer -> win32 and ran installer.exe.
So now that we have our Supermicro server set up and ESXi 7 installed, the next step is some basic configuration on our bare-metal ESXi 7 host. We will set up the network, review the storage, and prepare the host for the nested ESXi environment.
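If you prefer to check this from the command line rather than the host client, a few read-only esxcli commands (run over SSH, assuming SSH is enabled on the host) show the current network and storage state before making any changes:

esxcli network ip interface ipv4 get     # VMkernel interfaces and their IPs
esxcli network vswitch standard list     # standard vSwitches, uplinks, and port groups
esxcli storage filesystem list           # datastores visible to the host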
That covers the whole setup for the bare-metal host. The next step will be deploying the nested ESXi VMs.