Tanzu Kubernetes on vCenter 7 – Deploy Avi Controller and Service Engines

In order for TKGs to function, we need the Avi load balancer deployed. This is an easy step, and if you prefer to follow the official documentation it can be found here: https://avinetworks.com/docs/latest/installing-avi-vantage-for-vmware-vcenter/

Download the Avi OVA from the VMware portal.

Log in to your account at https://my.vmware.com/ and go to Products and Accounts -> Products -> All Products

Select VMware NSX Advanced Load Balancer, and click View Download Components
Then click Go to Downloads
Then Download Now.
Under Software you will see the latest versions; at the time of writing we are on 21.1.1.
Select the version and the VMware file type (ova), and click Download on the right side.

Deploy the OVA template in vCenter

This step is pretty easy, but I'll include a few pictures.

Right-click on your VM folder and select Deploy OVF Template
Select the Avi controller OVA file
Click Next a few times until you get to Customize Template. It's not required, but I would suggest adding a static IP address here for the Avi controller. I'm using 10.10.4.5. Add the mask and gateway; no other settings are required. Click Next and deploy the OVA.
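If you prefer the CLI, the same deployment can be scripted with govc. This is only a sketch under assumptions: the OVA property keys (avi.mgmt-ip.CONTROLLER, etc.), the network name, and the VM name below are placeholders you should verify against the output of `govc import.spec controller.ova` for your OVA version.

```shell
# Sketch: deploy the Avi controller OVA with govc instead of the vCenter UI.
# Property keys and names are assumptions -- verify with "govc import.spec".
cat > avi-options.json <<'EOF'
{
  "DiskProvisioning": "thin",
  "Name": "avi-controller-1",
  "NetworkMapping": [
    { "Name": "Management", "Network": "VM Network" }
  ],
  "PropertyMapping": [
    { "Key": "avi.mgmt-ip.CONTROLLER",    "Value": "10.10.4.5" },
    { "Key": "avi.mgmt-mask.CONTROLLER",  "Value": "255.255.255.0" },
    { "Key": "avi.default-gw.CONTROLLER", "Value": "10.10.4.1" }
  ]
}
EOF

# Then, with GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD exported:
# govc import.ova -options=avi-options.json controller.ova
```

The options file carries the same static IP, mask, and gateway you would otherwise type into the Customize Template screen.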

Configure Avi Controller

There are only a few steps required to configure the controller; then we can move into the Tanzu side of the house.

Access the Avi controller by FQDN, and set a new password.
Set some additional System Settings: backup passphrase, DNS resolver, DNS search domain. Leave everything else default and click Submit.
This is the screen you should see now.

Configure Cloud

Navigate to Infrastructure -> Clouds and select the pencil "Edit" icon on Default-Cloud
Select VMware Cloud
Add the vCenter credentials and IP address, then select Next.
Select the vSAN Datacenter and click Next
Lastly, configure the management network and static IP ranges.
Wait a few moments and your cloud status should turn green.

Configure PodNetwork

Navigate to Infrastructure -> Networks and select PodNetwork. (If this does not exist, go back to vCenter and, under the Networks tab, add a Distributed Port Group called "PodNetwork" under DSwitch.)
Click edit on the PodNetwork and add the subnet and static range as listed above.

Configure Default route for SEs

Navigate to Infrastructure -> Routing and click Create.
Add 0.0.0.0/0 with a next hop of 10.10.4.1

Create IPAM profile and DNS profile and add them to the Cloud

IPAM allows Avi to auto-allocate IP addresses to newly created virtual services.

Navigate to Templates -> IPAM/DNS Profiles and click Create IPAM Profile.
Modify the settings as shown above and click Save.
Modify the DNS profile settings as shown above and click Save.
Lastly, navigate back to Infrastructure -> Clouds and click edit on the Default-Cloud
Add the IPAM profile and DNS profile to the cloud and click Save.
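For automation, the same kind of IPAM profile can also be created through the Avi REST API (POST /api/ipamdnsproviderprofile). The sketch below is illustrative only: the profile name "tkgs-ipam" is ours, the auth handling is simplified (a real session also needs the CSRF token header), and the payload omits the usable-network references you would add in practice.

```shell
# Sketch: create an internal IPAM profile via the Avi REST API.
# Auth is simplified; a real session also sends the X-CSRFToken header.
create_ipam_profile() {
  local controller="$1" session="$2"
  curl -sk -X POST "https://${controller}/api/ipamdnsproviderprofile" \
    -H "Content-Type: application/json" \
    -H "X-Avi-Version: 21.1.1" \
    -b "sessionid=${session}" \
    -d '{"name":"tkgs-ipam","type":"IPAMDNS_TYPE_INTERNAL"}'
}

# Usage: create_ipam_profile avi-controller-1.lab.local "$SESSION_ID"
```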

Create a controller certificate

Navigate to Templates -> Security -> SSL/TLS Certificates and click Create Controller Certificate
Create a new certificate called ControllerCert with the FQDN as the Common Name and the IP address as a SAN. Everything else is default.
Then navigate to Administration -> Settings -> Access Settings and click the pencil on the right to edit.
Change the SSL/TLS Certificate (for the controller) to the newly created certificate and save.
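If you'd rather mint the certificate outside the Avi UI and import it, an equivalent self-signed certificate can be generated with openssl. A sketch, assuming OpenSSL 1.1.1+ (for -addext); the FQDN avi-controller-1.lab.local is a stand-in for your own:

```shell
# Sketch: self-signed controller cert with the FQDN as CN and the IP as a SAN.
# avi-controller-1.lab.local is a placeholder -- substitute your FQDN and IP.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout controller.key -out controller.crt \
  -subj "/CN=avi-controller-1.lab.local" \
  -addext "subjectAltName=DNS:avi-controller-1.lab.local,IP:10.10.4.5"

# Inspect the SANs before importing:
openssl x509 -in controller.crt -noout -ext subjectAltName
```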

Create a test VS

Navigate to Applications -> Virtual Services and click Create Virtual Service. Add the name "test-vs" and set the Network by selecting VM Network, with the available subnet 10.10.4.0/24. Change the Application Profile to "System-L4-Application" and the port to 443. Then, on the bottom right, select Pool and in the drop-down click Create Pool.
The pool name will prefill. Change the port to 443 and select the System-TCP health monitor. Click Next.
If you already have a server in mind, add it here. I always add the Avi controller (by FQDN) because it's fast and I know it will have connectivity to itself. Click Next and save the pool.
You will now see the pool in the drop-down. Click Next through the remaining screens and save and create the virtual service.
The virtual service will be marked down for a few minutes while Avi spins up some service engines to handle the traffic. Check back in 5 minutes or so.
After a few minutes the virtual service will show green, and Avi is configured and ready to go.

Tanzu Kubernetes on vCenter 7 – Deploy an Application (Blue)

This is really the final step in setting up TKGs: testing the deployment. We will create a simple two-pod deployment and expose it through the Avi load balancer.

Deploy the Blue Application in Tanzu Guest Cluster

Login to the guest cluster and enable privileges

Run the following commands to log in to the vSphere Tanzu cluster and switch context to the new guest cluster that was created. By default Tanzu applies a fair amount of Pod security, and we will be restricted in what we can create unless we open up access. Since this is a lab environment, that should not be an issue. The last command essentially provides full access for creating services, deployments, pods, etc. More info: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-4CCDBB85-2770-4FB8-BF0E-5146B45C9543.html

kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://10.10.4.50 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=dev --tanzu-kubernetes-cluster-name=tkg-cluster-01
kubectl config use-context tkg-cluster-01
kubectl create clusterrolebinding psp:authenticated --clusterrole=psp:vmware-system-privileged --group=system:authenticated

Create file blue-deployment-l4.yaml

Use nano/vi/vim or your favorite editor and create this file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: mattadam07/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue

Apply the blue-deployment-l4.yaml file

kubectl apply -f blue-deployment-l4.yaml
deployment.apps/blue created
service/blue created

Run "kubectl get pods" to see the status. You will see the following if done correctly:

kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
blue-c967796c6-p24kc   1/1     Running   0          76s
blue-c967796c6-sfk7s   1/1     Running   0          76s

Check the services to confirm the LoadBalancer endpoint was created successfully. The external IP 10.10.4.18 should now be accessible, and you should be able to test it.

kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
blue         LoadBalancer   10.109.206.160   10.10.4.18    80:32242/TCP   4m4s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        4h47m
supervisor   ClusterIP      None             <none>        6443/TCP       4h47m
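If you're scripting this check, a small helper can poll the Service until AKO populates the external IP. A sketch; the function name is ours, and it assumes your kubectl context already points at the guest cluster:

```shell
# Sketch: poll a LoadBalancer Service until Avi/AKO assigns an external IP,
# then print it. Gives up after ~5 minutes (60 x 5s).
wait_for_lb_ip() {
  local svc="$1" ip=""
  for _ in $(seq 1 60); do
    ip=$(kubectl get svc "$svc" -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for external IP on service ${svc}" >&2
  return 1
}

# Usage: curl -s "http://$(wait_for_lb_ip blue)"
```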

Validate the Avi LB VirtualService

Here is the newly created VirtualService. This was auto-created through the built-in AKO (Avi Kubernetes Operator) from TKGs. Note the IP address 10.10.4.18.

Click edit on the Virtual Service and we can see that the Application Profile is set to "System-L4-Application", indicating this is an L4 VIP. Additionally, note that there is no Pool set at the bottom; this is actually done through an L4 Policy Set, as shown below.

And lastly let’s test the URL: http://10.10.4.18

Tanzu Kubernetes on vCenter 7 – Deploy Guest Cluster

Now that we have the supervisor cluster up and running and our namespace created, we can deploy a guest cluster via the CLI. I installed an Ubuntu 20 VM in vCenter for use as my jumpbox, and installed kubectl and the vSphere plugin on it. There are Windows plugins, and plugins for all the major Linux distros as well.

Install kubectl and vsphere plugin on jump server

Kubectl

You can download and install kubectl very easily (on Linux) with these commands:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo mv kubectl /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl
kubectl version
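Optionally, you can verify the download before installing it, mirroring the checksum step from the kubernetes.io install docs. A sketch; the helper name is ours:

```shell
# Sketch: verify a downloaded binary against an expected SHA-256 checksum.
# Returns non-zero (and prints nothing) on mismatch.
verify_sha256() {
  local file="$1" expected="$2"
  echo "${expected}  ${file}" | sha256sum --check --status
}

# e.g. fetch the published checksum for the same release, then:
# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
# verify_sha256 kubectl "$(cat kubectl.sha256)" && echo "kubectl checksum OK"
```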

vSphere Plugin

The easiest way to download this is to navigate to the supervisor cluster’s floating IP address. In my case it is 10.10.4.13

Select your OS and download the CLI plugin. Upload it to your jump box and run the following commands.
sudo mv kubectl-vsphere /usr/local/bin/kubectl-vsphere
sudo chmod +x /usr/local/bin/kubectl-vsphere
kubectl vsphere

Login to Supervisor Cluster

OPTIONAL: Set an environment variable for the vSphere password. Writing to /etc/environment requires root, so pipe through sudo tee (log out and back in for it to take effect):

echo "KUBECTL_VSPHERE_PASSWORD='supersecretpassword123'" | sudo tee -a /etc/environment

Log into the supervisor cluster and verify the cluster is healthy

kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://10.10.4.50 --insecure-skip-tls-verify
kubectl config use-context dev
kubectl get pods --all-namespaces ### Should see a list of all the pods running
kubectl get nodes ### Everything should show Ready
kubectl get tanzukubernetesreleases ### Check out the latest releases

Create yaml file to build guest cluster

Create a file called guest_cluster.yaml with the following content

---
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
  distribution:
    version: v1.20.7

Deploy TKGs guest cluster

kubectl apply -f guest_cluster.yaml
kubectl get cluster ### View the cluster status
kubectl get tanzukubernetescluster ### View the cluster status

The guest cluster will sit in a provisioning state for several minutes; you should be able to see the new VMs spinning up in vCenter.
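If you want to block until provisioning completes, a polling helper like the one below works. A sketch; the function name is ours, and the phase value ("running") is what the v1alpha1 TanzuKubernetesCluster status reports, worth confirming on your release:

```shell
# Sketch: poll a TanzuKubernetesCluster until its status phase is "running".
# Gives up after ~1 hour (120 x 30s).
wait_for_cluster() {
  local name="$1" phase=""
  for _ in $(seq 1 120); do
    phase=$(kubectl get tanzukubernetescluster "$name" -o jsonpath='{.status.phase}')
    if [ "$phase" = "running" ]; then
      echo "cluster ${name} is running"
      return 0
    fi
    sleep 30
  done
  echo "timed out waiting for cluster ${name} (last phase: ${phase})" >&2
  return 1
}

# Usage: wait_for_cluster tkg-cluster-01
```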

Tanzu Kubernetes on vCenter 7 – Namespace Setup

After deploying the supervisor cluster, the next step is to set up the namespace where we will deploy our guest cluster.

Create Namespace

In the Menu click Workload Management. Then navigate to the Namespaces tab. Click Create Namespace.
Select the vSAN cluster and choose a name; I'm using "dev". Then select the workload network and finally add a description.
You should see the config status "Running" and Kubernetes status "Active". We need to configure permissions, storage, capacity and usage, and the associated VM classes and content libraries for this namespace.

Configure Dev Namespace

Click Permissions and configure as shown above. If you’re using a different user, you can configure that here. I’m simply using the administrator for all access. Click OK.
Select Storage and choose the vSAN Default Storage Policy. Click OK.
Under Capacity and Usage, configure as shown above. I’m setting limits on memory and storage, but not CPU.
In the VM Service section, click Add VM Class and select the "best-effort-small" class. This will provide enough CPU and memory for the VMs to handle a few deployments. If you need more, "best-effort-medium" would be a good fit as well.
Lastly, select the Add Content Library under VM Service, and add the kubernetes library.
Finished setup will look something like this.

Tanzu Kubernetes Content Library in vCenter 7

Before you can set up workload management in vCenter 7, you need to create a content library and point its subscription at VMware's library.

Add Content Library

Select the Menu and navigate to Content Libraries
Add a name for the content library and click Next.
Select Subscribed content library and add this Subscription URL: https://wp-content.vmware.com/v2/latest/lib.json
Additionally, if you want to save space, select Download content when needed.
Click Yes to bypass the certificate warning.
Select the storage location; I'm using the vSAN datastore.
Review the summary page and click Finish.

Tanzu Kubernetes on vCenter 7 – Deploy Supervisor Cluster (WCP)

In this guide we will configure Workload Management for vCenter 7. We’ll be using vCenter Server Network (DSwitches) instead of NSX-T. Additionally we’ll be using the Avi Load Balancer (NSX Advanced Load Balancer).

Licensing for Supervisor Cluster

Right click on your vSAN cluster and navigate to Licensing. Select Assign Supervisor Cluster License and select the appropriate license. If you need to add a new license select Menu at the top -> Administration -> Licenses -> Add

Configuring Workload Management

Click the Menu and navigate to Workload Management, and you should see this page. (Assuming you licensed correctly). Click Get Started.
This alert is just informing you that Avi must already be preconfigured. If you haven’t done so yet, please do so now. Additionally we do not have NSX-T running in this lab, so vCenter Server Network is selected. Click Next.
Select the vSAN Cluster and click Next.
Pick the control plane size. I have found that Tiny is more than enough for my needs.
Select the default storage policy for control plane. I am using the vSAN Default Storage Policy. Click Next
Add in the details for the Avi load balancer. The name must be DNS compliant, so avi-controller-1 is simple and works.
Type: Avi
Controller IP: Use the IP and port here
Then add your username and password.
Add your Avi Controller Cert here as well. If you haven’t generated this yet, please do so now.
Again, as with everything VMware, make sure DNS works!

I’m using the 10.10.4.0/24 network for my management network. Select your starting range in that network and add your gateway. Add the dns server, search domain, and ntp server. Click Next.
Add in the Pod network (Workload Network); 10.10.5.0/24 is the network I'm using. Add the DNS server, then click Add for the workload network.
In the popup add a name for the network and select the PodNetwork portgroup. Lastly add the gateway, subnet, and ip ranges. Click Save.
Everything should look like this. Click Next.
Select the kubernetes content library we created. Click Next.
All set! Click Finish.
You should see this screen. At this point, go grab some coffee, because this step takes quite a while, especially if your content library is set to "Download library content only when needed," as mine is. It will download all the required OVAs and start spinning up the supervisor cluster.
After a while (~45 min for me) you should see your supervisor cluster up and running!
You can click the Menu and navigate to VMs and Templates, and there should be three supervisor control plane VMs running.