Log in to your account at https://my.vmware.com/ and go to Products and Accounts -> Products -> All Products.
Select VMware NSX Advanced Load Balancer and click View Download Components, then Go to Downloads, then Download Now. Under Software you will see the latest versions; at the time of writing we are on 21.1.1. Select the version and the VMware file type (OVA), and click Download on the right side.
Deploy the OVA template in vCenter
This step is pretty easy, but I'll include a few pictures.
Right click on your VM folder and select Deploy OVF Template. Select the Avi controller OVA file and click Next a few times until you get to Customize Template. It's not required, but I suggest adding a static IP address here for the Avi controller; I'm using 10.10.4.5. Add the mask and gateway; no other settings are required. Click Next and deploy the OVA.
Configure Avi Controller
There are only a few steps required to configure the controller; then we can move into the Tanzu side of the house.
Access the Avi controller by FQDN and set a new password. Set some additional System Settings: backup passphrase, DNS resolver, and DNS search domain. Leave everything else default and click Submit. This is the screen you should be seeing now.
Configure Cloud
Navigate to Infrastructure -> Clouds and select the pencil ("Edit") on Default-Cloud. Select VMware Cloud. Add the credentials and IP address for vCenter, then click Next. Select the vSAN Datacenter and click Next. Lastly, configure the management network and static IP ranges. Wait a few moments and your cloud should turn green.
Configure PodNetwork
Navigate to Infrastructure -> Networks and select PodNetwork. (If this does not exist, go back to vCenter and, under the Networks tab, add a Distributed Port Group called "PodNetwork" under DSwitch.) Click Edit on PodNetwork and add the subnet and static range as listed above.
Configure Default route for SEs
Navigate to Infrastructure -> Routing and click Create. Add 0.0.0.0/0 with a next hop of 10.10.4.1.
Create IPAM profile and DNS profile and add them to the Cloud
IPAM allows Avi to automatically allocate IP addresses to newly created virtual services.
Navigate to Templates -> IPAM/DNS Profiles and click Create IPAM Profile. Modify the settings to match the above and click Save. Modify the DNS profile settings to match the above and click Save. Lastly, navigate back to Infrastructure -> Clouds and click Edit on the Default-Cloud. Add the IPAM profile and DNS profile to the cloud and click Save.
Create a controller certificate
Navigate to Templates -> Security -> SSL/TLS Certificates and click Create Controller Certificate. Create a new certificate called ControllerCert with the FQDN as the common name and the IP address as a SAN. Everything else is default. Then navigate to Administration -> Settings -> Access Settings and click the pencil on the right to edit. Change the SSL/TLS Certificate (for the controller) to the newly created certificate and save.
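Once the new certificate is applied, you can confirm the controller is actually serving it with a quick openssl check from any machine on the network. This is just a sanity check I'm adding here; the FQDN below is a placeholder for your own controller.

```shell
# Print the subject and SAN list of the certificate the Avi controller serves on 443.
# Replace avi-controller.lab.local with your controller's FQDN.
echo | openssl s_client -connect avi-controller.lab.local:443 \
  -servername avi-controller.lab.local 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```

The subject CN should be the FQDN you set, and the SAN list should include the controller's IP address.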
Create a test virtual service
Add the name "test-vs" and set the Network by selecting VM Network and the available subnet 10.10.4.0/24. Change the Application Profile to "System-L4-Application" and the port to 443. Then, on the bottom right, select Pool, and in the drop-down click Create Pool. The pool name will prefill; change the port to 443 and select the System-TCP health monitor. Click Next. If you already have a server in mind, add it here. I always add the Avi controller (by FQDN) because it's fast and I know it will have connectivity to itself 🙂 Click Next and save the pool. You will now see the pool in the drop-down. Click Next through the remaining screens and save the virtual service. The virtual service will be marked down for a few minutes while Avi spins up service engines to handle the traffic; check back in 5 minutes or so. After a few minutes the virtual service will show green, and Avi is configured and ready to go.
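Once the virtual service goes green, a quick way to confirm traffic is really flowing through the service engines is to curl the VIP. The IP below is a placeholder for whatever address Avi allocated to test-vs in your lab:

```shell
# 10.10.4.10 is a placeholder for the VIP Avi assigned to test-vs.
# -k skips certificate validation, since the pool member is the controller itself.
curl -vk https://10.10.4.10/ --max-time 10
```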
This is not a required step for building your home lab; it's just an extra step that will give you a slight performance boost if you want it. Be aware that modifying any of these BIOS settings can break your system, and neither I nor Supermicro is responsible if it does. I'm providing this as a reference for how I did it on my setup.
Reboot the device, either by SSHing into ESXi and typing "reboot" or by pressing the reset button on the front of the Supermicro. Upon reboot, press F11 to enter the Aptio Setup Utility BIOS screen. (I don't have any screenshots of this, but it's pretty straightforward.)
Modify CPU Settings
There are 2 settings that I adjusted to increase my clock speed.
cTDP Control setting from default of 55W to 75W
Determinism Slider from default of Auto to Power
To modify these, navigate to the Advanced Tab, then select CPU Configuration. You will see the 2 options for cTDP and Determinism Slider, modify them to the settings listed above.
That is it. Save the settings and exit the BIOS.
The recommended and default power setting for this board is 55W. Here are the settings under NB Configuration.
This is really the final step in setting up TKGS: testing the deployment. We will create a simple two-pod deployment and expose it through the Avi load balancer.
Deploy the Blue Application in Tanzu Guest Cluster
Log in to the guest cluster and enable privileges
Run the following commands to log in to the vSphere Tanzu cluster and switch context to the newly created guest cluster. By default Tanzu applies a fair amount of pod security, and we will be restricted in what we can create unless we open up access. Since this is a lab environment, that should not be an issue. The last command essentially provides full access for creating services, deployments, pods, etc. More info: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-4CCDBB85-2770-4FB8-BF0E-5146B45C9543.html
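The commands I ran look roughly like this. The server IP and namespace match my lab, and the guest cluster name (tkgs-cluster-1) is a placeholder for whatever you named yours; the final clusterrolebinding is the one VMware documents for relaxing pod security in lab environments:

```shell
# Log in to the supervisor cluster and request a context for the guest cluster.
kubectl vsphere login --server=https://10.10.4.50 \
  --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-namespace dev \
  --tanzu-kubernetes-cluster-name tkgs-cluster-1   # placeholder cluster name

# Switch to the guest cluster context.
kubectl config use-context tkgs-cluster-1

# Lab only: allow all authenticated users to run privileged pods.
kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
  --clusterrole=psp:vmware-system-privileged --group=system:authenticated
```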
Use nano/vi/vim or your favorite editor and create this file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: mattadam07/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue
Apply the blue-deployment-lb.yaml file
kubectl apply -f blue-deployment-lb.yaml
deployment.apps/blue created
service/blue created
Run "kubectl get pods" to see the status. If everything worked, you will see the following:
kubectl get pods
NAME READY STATUS RESTARTS AGE
blue-c967796c6-p24kc 1/1 Running 0 76s
blue-c967796c6-sfk7s 1/1 Running 0 76s
Check the services to see whether the LoadBalancer endpoint was created successfully. The IP 10.10.4.18 should now be accessible, and you should be able to test it.
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blue LoadBalancer 10.109.206.160 10.10.4.18 80:32242/TCP 4m4s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h47m
supervisor ClusterIP None <none> 6443/TCP 4h47m
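A quick curl against the external IP confirms traffic is flowing through Avi to the pods. The IP is the one Avi's IPAM allocated in my lab; I'm not asserting the exact response body here, just that the service answers:

```shell
# Hit the LoadBalancer VIP allocated by Avi IPAM (10.10.4.18 in my lab)
# and print only the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" http://10.10.4.18/ --max-time 10
```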
Validate the Avi LB VirtualService
Here is the newly created VirtualService, auto-created by the AKO built into TKGS. Note the IP address 10.10.4.18.
Click Edit on the virtual service and we can see that the Application Profile is set to "System-L4-Application", indicating this is an L4 VIP. Additionally, note that there is no pool set at the bottom; this is actually done through an L4 Policy Set, as shown below.
Now that we have the supervisor cluster up and running and our namespace created, we can deploy a guest cluster via the CLI. I installed an Ubuntu 20 VM in vCenter for use as my jumpbox, and installed kubectl and the vSphere plugin in this environment. There are Windows plugins as well as plugins for all the major Linux distros.
Install kubectl and vsphere plugin on jump server
Kubectl
You can download and install kubectl very easily (on Linux) with these commands:
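For reference, the standard install from the Kubernetes documentation looks like this:

```shell
# Download the latest stable kubectl binary for Linux (amd64).
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it into the PATH with the correct ownership and mode.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the client installed correctly.
kubectl version --client
```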
Log in to the supervisor cluster and verify it is healthy
kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://10.10.4.50 --insecure-skip-tls-verify
kubectl config use-context dev
kubectl get pods --all-namespaces ### Should see a list of all the pods running
kubectl get nodes ### Everything should show Ready
kubectl get tanzukubernetesreleases ### Check out the latest releases
Create yaml file to build guest cluster
Create a file called guest_cluster_tkgs.yaml with the following content.
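The original file contents aren't shown above, so here is a minimal TanzuKubernetesCluster spec that would fit this lab. The cluster name, version, VM class, and storage class are my assumptions; pick the version from `kubectl get tanzukubernetesreleases` and use the storage policy and VM class you assigned to the namespace:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-1              # hypothetical cluster name
  namespace: dev                    # the namespace created earlier
spec:
  distribution:
    version: v1.20                  # pick from: kubectl get tanzukubernetesreleases
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # VM class added to the namespace
      storageClass: vsan-default-storage-policy   # assumed storage class name
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```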
kubectl apply -f guest_cluster_tkgs.yaml
kubectl get cluster ### View the cluster status
kubectl get tanzukubernetescluster ### View the cluster status
The guest cluster is deploying. While it is still provisioning, you should be able to see the new VMs spinning up in vCenter.
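You can also watch the provisioning from the supervisor context. These are generic checks I'm adding for convenience; the `dev` namespace matches my lab:

```shell
# Watch the guest cluster move from "creating" to "running" (supervisor context).
kubectl get tanzukubernetescluster -n dev

# The underlying Cluster API machine objects show each node VM being built.
kubectl get machines -n dev
```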
After deploying the supervisor cluster, the next step is to set up the namespace where we will deploy our guest cluster.
Create Namespace
In the Menu, click Workload Management, then navigate to the Namespaces tab and click Create Namespace. Select the vSAN cluster and choose a name (I'm using "dev"), then select the workload network and finally add a description. You should see the config status "Running" and Kubernetes status "Active". We now need to configure permissions, storage, capacity and usage, and the associated VM classes and content libraries for this namespace.
Configure Dev Namespace
Click Permissions and configure as shown above. If you're using a different user, you can configure that here; I'm simply using the administrator for all access. Click OK. Select Storage and choose the vSAN Default Storage Policy. Click OK. Under Capacity and Usage, configure as shown above; I'm setting limits on memory and storage, but not CPU. In the VM Service section, click Add VM Class and select the "best-effort-small" class. This provides enough CPU and memory for the VMs to handle a few deployments; if you need more, "best-effort-medium" would be a good fit as well. Lastly, select Add Content Library under VM Service and add the Kubernetes library. The finished setup will look something like this.
Before you can set up Workload Management in vCenter 7, you need to create a content library and configure a subscription pointing to VMware's library.
Add Content Library
Select the Menu and navigate to Content Libraries. Add a name for the content library and click Next. Select Subscribed content library and add this subscription URL: https://wp-content.vmware.com/v2/latest/lib.json Additionally, if you want to save space, select Download content when needed. Click Yes to bypass the certificate warning. Select the storage location; I'm using the vSAN datastore. Review the summary page and click Finish.
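If the subscription fails to sync, a quick reachability check of the feed URL from any machine on the same network can rule out DNS or proxy issues. This is just a sanity check I'm adding, not part of the original steps:

```shell
# Fetch the first few hundred bytes of the library manifest to confirm
# the subscription URL is reachable from your network.
curl -sf https://wp-content.vmware.com/v2/latest/lib.json | head -c 300
```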
In this guide we will configure Workload Management for vCenter 7. We’ll be using vCenter Server Network (DSwitches) instead of NSX-T. Additionally we’ll be using the Avi Load Balancer (NSX Advanced Load Balancer).
Licensing for Supervisor Cluster
Right click on your vSAN cluster and navigate to Licensing. Select Assign Supervisor Cluster License and select the appropriate license. If you need to add a new license select Menu at the top -> Administration -> Licenses -> Add
Configuring Workload Management
Click the Menu and navigate to Workload Management; you should see this page (assuming you licensed correctly). Click Get Started. This alert is just informing you that Avi must already be preconfigured; if you haven't done so yet, please do so now. Additionally, we are not running NSX-T in this lab, so vCenter Server Network is selected. Click Next. Select the vSAN cluster and click Next. Pick the control plane size; I found that Tiny was more than enough for my needs. Select the default storage policy for the control plane (I am using the vSAN Default Storage Policy) and click Next. Add in the details for the Avi load balancer. The name must be DNS compliant, so avi-controller-1 is simple and works. Type: Avi Controller. IP: use the controller's IP and port here. Then add your username and password, and add your Avi controller certificate as well. If you haven't generated it yet, please do so now. Again, as with everything VMware, make sure DNS works!
I'm using the 10.10.4.0/24 network for my management network. Select your starting range in that network and add your gateway. Add the DNS server, search domain, and NTP server, then click Next. Add in the pod network (workload network); 10.10.5.0/24 is the network I'm using. Add the DNS server, then click Add for the workload network. In the popup, add a name for the network and select the PodNetwork portgroup. Lastly, add the gateway, subnet, and IP ranges, and click Save. Everything should look like this; click Next. Select the Kubernetes content library we created and click Next. All set! Click Finish. You should see this screen. At this point, go grab some coffee, because this step takes quite a while, especially if your content library is set to "Download library content only when needed," as mine is. It will download all the required OVAs and start spinning up the supervisor cluster. After a while (~45 min for me) you should see your supervisor cluster up and running! You can click the Menu and navigate to VMs and Templates, and there should be 3 supervisor control plane VMs running.
If you've followed the guide this far, you've deployed 3 ESXi hosts nested on your baremetal ESXi install. This guide takes it a step further by deploying vCenter and creating a vSAN cluster on the ESXi hosts.
Download vCenter Server
Log in to your account at https://my.vmware.com/ and go to Products and Accounts -> Products -> All Products.
Select VMware vSphere and click View Download Components. Select your version and download vCenter Server; I'm using VMware vCenter Server 7.0U2b with Enterprise Plus. Download the VMware vCenter Server Appliance (7.5GB): VMware-VCSA-all-7.0.2-17958471.iso
Mount the ISO and use the install wizard to configure vCenter 7
I'm using Windows 10, and it was relatively easy to mount the ISO: in Windows Explorer, I just navigated to the Downloads directory where the ISO was and double-clicked it. Open the directory vcsa-ui-installer -> win32 -> installer.exe.
Stage 1
You should see a popup like this; go ahead and click Install. Click Next to deploy vCenter Server. Accept the EULA. Put in the IP/FQDN of the first ESXi host, and the credentials. Select Yes to accept the warning. Specify a name for the VM and set the root password. For deployment size I chose Tiny, since it more than met my needs; if you need more, select Small. Select "Install on a new vSAN cluster containing the target host." Feel free to modify the names. We're going to claim all the 200GB disks as capacity tier and the 20GB disk as cache tier; the other disk we will not use. Additionally, I selected "Enable Thin Disk Mode" and "Enable Deduplication and compression." Since it's a lab, I'm not too worried about a vSAN failure; worst case, I'll just rebuild the entire lab and get more practice. Set the FQDN for vCenter, the IP address and mask, the default gateway, and the DNS server. vCenter is very picky about DNS: make sure that the FQDN resolves and that the reverse lookup of the IP address resolves as well. Here's the summary page. Go ahead and hit Finish, then grab some coffee; this step takes a while. Congratulations! It's installed. Now on to stage 2 for some additional configuration. Click Continue.
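Since vCenter is so picky about DNS, it's worth verifying both lookups from the jumpbox before kicking off the install. The hostname and IP below are placeholders for your own:

```shell
# Forward lookup: the vCenter FQDN must resolve to the IP you plan to assign.
nslookup vcenter1.lab.local    # hypothetical FQDN

# Reverse lookup: that IP must resolve back to the FQDN.
nslookup 10.10.4.4             # hypothetical vCenter IP
```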
Stage 2
Into the setup wizard for stage 2; click Next. You're welcome to sync with a public (or private) NTP server; I just selected the host for mine. Additionally, enable SSH access; it's a lab, and SSH access to vCenter is very handy when troubleshooting issues later. Set the SSO domain (I chose the default "vsphere.local") and enter your password. Choose whether you want to join the CEIP. On the summary page, if all looks right, click Finish. Stage 2 completed: vCenter is all set up, and you can now access the UI.
vSAN Initial Setup
Launch vCenter in the browser and log in with the administrator@vsphere.local account and password. You're going to see lots of alarms and warnings; don't worry, we're going to fix them all in the next few steps.
Step 1: Cluster Basics
Navigate to the vSAN cluster and select Configure, then under Configuration click Quickstart. This provides an easy-to-use wizard for deploying HA and vSAN. Step 1: Click Edit under Cluster Basics and make sure that all the options are turned on: vSphere DRS, vSphere HA, and vSAN.
Step 2: Add Hosts
Step 2: Under Add hosts, click ADD. Add the IP or FQDN of each of the other 2 ESXi hosts, along with the user and password for each. Select the hosts and click OK to accept the certificate security warning. Review the summary of the hosts and click Next. Ready to add them? Click Finish. After you click Finish, this will take some time; just be patient. Once the hosts are added, move on to step 3.
Step 3: Configure Hosts
Step 3: Click Configure under Configure cluster. I left all these settings default. Set the vmnics as shown above; we will use this to set up vSAN and vMotion. Click Next. I am using vlan20 (10.10.2.0/24) for my vMotion traffic, so I configured 3 interfaces for this traffic, 1 per ESXi host in the cluster. Also, I'm not using VLAN tagging, so I have unchecked that box. Click Next. Similarly, the vSAN network is vlan30 (10.10.3.0/24), and I configured 3 IP addresses on this network. Uncheck VLAN if not in use, and click Next. I left all of these settings default; you can turn on "Virtual Machine Monitoring" if you want, but everything else is fine as default. Click Next. For the disks, set "Group by:" to Host and expand the hosts. You will see all the volumes that we created during the ESXi setup. Go through and claim the following: the 200GB disks as capacity tier, the 20GB disk as cache tier, and leave the 4GB disk unclaimed. Click Next. Skip the next step, since we have already configured internet access. Review everything on the summary page and click Finish. This step takes a while; be patient. Eventually everything will normalize and look like this. You can ignore those yellow alerts; as long as nothing is red, you will be fine.
Licensing vCenter
Click the Menu and navigate to Hosts and Clusters. Right click the vcenter1 instance and select Assign License. Select the appropriate vCenter license and click OK.
Licensing the ESXi hosts in vSAN cluster
Enter your licenses, separated by new lines, and click Next. You then have the option to name your licenses; click Next. On the summary page, click Save. After adding the licenses, you will see them available here. Select the Menu at the top and navigate to Hosts and Clusters. Right click on one of the ESXi hosts in the vSAN cluster and select Assign License, then in the popup that appears, select the appropriate license. Repeat these steps for the other 2 ESXi hosts.
Set vSAN as default storage policy
Right click on the vCenter VM, navigate to VM Policies, and select Edit VM Storage Policies. At the top, select vSAN Default Storage Policy from the drop-down and click OK.
So now that we have our Supermicro server set up and we've installed ESXi 7, the next step is to do some basic configuration on our baremetal ESXi 7 install. We will set up the network, view the storage, and prepare the host for the nested ESXi environment.
Log in to the UI of the baremetal ESXi device, and you'll see a screen like this.
Network Configuration
Navigate to the Networking tab and select Physical NICs. As you can see, I have 4 physical NICs on my Supermicro, and I have my ethernet plugged into vmnic1. Navigate to the Virtual Switches tab; we will need to create 1 more virtual switch. Click "Add standard virtual switch" and configure it using the details above. Make sure to open the Security tab and set all the options to Accept: Promiscuous mode, MAC address changes, and Forged transmits. Click Add and you will see a screen like this. Navigate to the Port groups tab; it's time to create our networks. Create the first network with the options listed above, then repeat these steps to create 4 more networks. Here's a list of the networks that you should have. Note that the vSwitch is vSwitch1 for vlan10-50. I'm not doing VLAN tagging in my setup; feel free to do so on your end if you prefer.
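If you prefer the CLI, the UI steps above can also be done with esxcli over SSH on the baremetal host. This is a rough equivalent I'm sketching, not the exact method from the post; the portgroup names (vlan10 through vlan50) are assumptions based on the network list above:

```shell
# Create the second standard vSwitch.
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Accept promiscuous mode, MAC changes, and forged transmits
# (required for the nested ESXi hosts to pass traffic).
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 \
  --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true

# Create the five portgroups; names are assumed from the network list above.
for pg in vlan10 vlan20 vlan30 vlan40 vlan50; do
  esxcli network vswitch standard portgroup add \
    --portgroup-name="$pg" --vswitch-name=vSwitch1
done
```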
Storage Configuration
You should see your SSD here. I have a 2GB SSD, but you should see your physical disk listed here.
This is basically the whole setup for the baremetal. The next step will be deploying the nested ESXi VMs.