Use nano, vi/vim, or your favorite editor to create a file named blue-deployment-lb.yaml. The manifest defines a Deployment named blue whose container listens on containerPort 5000, sets an app_color environment variable, and names its port http, along with a Service of type LoadBalancer.
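A minimal sketch of what blue-deployment-lb.yaml might look like, matching the fields described above. The container image, labels, and replica count are assumptions (two replicas matches the pod output later in this guide); substitute your own application image.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  labels:
    app: blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: your-registry/your-app:latest   # placeholder image; replace with your own
        ports:
        - containerPort: 5000
          name: http
        env:
        - name: app_color
          value: "blue"
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer
  selector:
    app: blue
  ports:
  - name: http
    port: 80           # external port on the LoadBalancer VIP
    targetPort: 5000   # the pod's containerPort
```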
Apply the blue-deployment-lb.yaml file:
kubectl apply -f blue-deployment-lb.yaml
Run “kubectl get pods” to check the status. If everything worked, you will see output like the following:
kubectl get pods
NAME READY STATUS RESTARTS AGE
blue-c967796c6-p24kc 1/1 Running 0 76s
blue-c967796c6-sfk7s 1/1 Running 0 76s
Check the services to confirm the LoadBalancer endpoint was created successfully. The external IP 10.10.4.18 should now be accessible, and you should be able to test it.
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blue LoadBalancer 10.109.206.160 10.10.4.18 80:32242/TCP 4m4s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h47m
supervisor ClusterIP None <none> 6443/TCP 4h47m
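To confirm the VIP is serving traffic, you can curl the EXTERNAL-IP from the services output above (the exact response depends on the application you deployed):

```shell
# Port 80 on the Avi-assigned VIP forwards to the pod's containerPort 5000
curl -v http://10.10.4.18
```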
Validate the Avi LB VirtualService
Here is the newly created VirtualService. It was created automatically by the built-in AKO (Avi Kubernetes Operator) that ships with TKGs. Note the IP address 10.10.4.18.
Click edit on the Virtual Service and note that the Application Profile is set to “System-L4-Application”, indicating this is an L4 VIP. Also note that there is no Pool set at the bottom; pool selection is instead handled through an L4 Policy Set, as shown below.
Now that we have the Supervisor cluster up and running and our namespace created, we can deploy a guest cluster via the CLI. I installed an Ubuntu 20.04 VM in vCenter to use as my jumpbox, and installed kubectl and the vSphere plugin there. Plugins are also available for Windows and all the major Linux distros.
Install kubectl and the vSphere plugin on the jump server
You can download and install kubectl easily on Linux with these commands:
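A sketch of the install steps on a Linux jumpbox. The kubectl commands follow the standard upstream install procedure; the vSphere plugin is downloaded from the Supervisor cluster endpoint (shown here as 10.10.4.50, the address used in the login command below, with -k because the endpoint uses a self-signed certificate by default):

```shell
# Download the latest stable kubectl binary and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# Download the vSphere plugin bundle from the Supervisor cluster and install it
curl -kLO https://10.10.4.50/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
sudo install -o root -g root -m 0755 bin/kubectl-vsphere /usr/local/bin/kubectl-vsphere
```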
Log in to the Supervisor cluster and verify the cluster is healthy
kubectl vsphere login --vsphere-username firstname.lastname@example.org --server=https://10.10.4.50 --insecure-skip-tls-verify
kubectl config use-context dev
kubectl get pods --all-namespaces ### Should see a list of all the pods running
kubectl get nodes ### Everything should show Ready
kubectl get tanzukubernetesreleases   ### Check out the available releases
Create a YAML file to build the guest cluster
Create a file called guest_cluster.yaml with the following content.
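A sketch of what guest_cluster.yaml might look like. The cluster name, VM class, storage class, and Kubernetes version are placeholders; the namespace matches the dev context from the login step above. Substitute a version from `kubectl get tanzukubernetesreleases` and the VM classes and storage policies assigned to your namespace:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: guest-cluster-1          # placeholder cluster name
  namespace: dev                 # the vSphere namespace from the login step
spec:
  distribution:
    version: v1.20               # pick a version from kubectl get tanzukubernetesreleases
  topology:
    controlPlane:
      count: 1
      class: best-effort-small   # a VM class assigned to your namespace
      storageClass: vsan-default-storage-policy   # a storage policy assigned to your namespace
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Apply it with `kubectl apply -f guest_cluster.yaml` against the dev context.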
In this guide we will configure Workload Management for vCenter 7. We'll use vCenter Server networking (vSphere Distributed Switches) instead of NSX-T, along with the Avi load balancer (NSX Advanced Load Balancer).