Long title, but basically this is a guide to deploying L7 ingress on top of your WCP and NSX-T setup. If you’ve followed my previous guides, you should already have NSX and WCP configured, plus a Supervisor cluster and a guest cluster. We’ll add a little more configuration to NSX-T, then spin up an Avi Controller and Service Engines. Finally, we’ll deploy AKO and configure L7 ingress.
Create Avi Tier-1 in NSX
By this point we’ve created a few T1 gateways, so the process here is the same. Create two T1s: one for management and one for the VS (VIP) network. I named them T1-Avi-Mgmt and T1-Avi-VIP, and under Route Advertisement make sure to check all of the options.
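If you’d rather script this step than click through the UI, the same gateway can be created with the NSX Policy API. This is just a sketch for my lab: the manager hostname is an assumption, and a PATCH will create the object if it doesn’t already exist. Repeat with an ID of T1-Avi-Mgmt for the management gateway.

```shell
# Hypothetical NSX Manager hostname -- substitute your own.
NSX_MGR="nsx-manager.home.lab"

# Enable every route advertisement type, matching the UI checkboxes.
cat > t1-avi-vip.json <<'EOF'
{
  "display_name": "T1-Avi-VIP",
  "route_advertisement_types": [
    "TIER1_CONNECTED",
    "TIER1_STATIC_ROUTES",
    "TIER1_NAT",
    "TIER1_LB_VIP",
    "TIER1_LB_SNAT",
    "TIER1_DNS_FORWARDER_IP",
    "TIER1_IPSEC_LOCAL_ENDPOINT"
  ]
}
EOF

# PATCH creates the gateway if it does not already exist.
curl -k -u admin -X PATCH \
  "https://${NSX_MGR}/policy/api/v1/infra/tier-1s/T1-Avi-VIP" \
  -H 'Content-Type: application/json' -d @t1-avi-vip.json
```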
Create Avi Segments under the T1 gateways
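The segments can also be scripted against the Policy API. A rough sketch: the segment ID and gateway address below are assumptions from my lab (172.16.40.1/24 matches the VIP next-hop used later in the static routes), and depending on your setup you may also need a transport_zone_path in the body.

```shell
# Hypothetical manager hostname -- substitute your own.
NSX_MGR="nsx-manager.home.lab"

# VIP segment attached to the T1-Avi-VIP gateway.
cat > avi-vip-segment.json <<'EOF'
{
  "display_name": "Avi-Vip",
  "connectivity_path": "/infra/tier-1s/T1-Avi-VIP",
  "subnets": [
    { "gateway_address": "172.16.40.1/24" }
  ]
}
EOF

curl -k -u admin -X PATCH \
  "https://${NSX_MGR}/policy/api/v1/infra/segments/Avi-Vip" \
  -H 'Content-Type: application/json' -d @avi-vip-segment.json
```

Repeat for the management segment under T1-Avi-Mgmt, using 172.16.30.1/24 as the gateway address.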
Deploy and Configure Avi (NSX Advanced Load Balancer)
In the interest of not duplicating work, see this guide: https://mattadam.com/2021/10/12/tanzu-kubernetes-on-vcenter-7-deploy-avi-controller-and-service-engines/ Stop at the Configure Cloud section; that is where this guide picks up.
We need to create the NSX Manager and vCenter credentials for the cloud to use.
Configure NSX-T Cloud
After the Avi controller is deployed, log in and navigate to Infrastructure -> Clouds. Then select Create NSX-T Cloud.
Set static IP Ranges (Optional)
If you have a DHCP range configured in your networks, you can likely skip this step. Just verify that the networks have been discovered by Avi: navigate to Infrastructure -> Cloud Resources -> Networks and select the “nsxt” cloud.
Set static routes for VIP and management subnets
This is done in the Avi CLI.
```shell
#Exact commands used:
shell                               #login to the avi shell with credentials
switchto cloud nsxt
configure vrfcontext T1-Avi-VIP     #Enter submode
static_routes                       #Enter submode
next_hop 172.16.40.1
route_id 2
prefix 0.0.0.0/0
save
save
configure vrfcontext management     #Enter submode
static_routes                       #Enter submode
next_hop 172.16.30.1
route_id 3
prefix 0.0.0.0/0
save
save
```
Create a DNS and IPAM profile.
Navigate to Templates -> Profiles -> IPAM/DNS Profiles
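The same profiles can be created over the Avi REST API. A hedged sketch only: the profile names and the home.lab domain are placeholders from my lab, and the internal_profile field layout is from the 21.x API, so check your controller’s version. After creating both profiles, attach them to the “nsxt” cloud under Infrastructure -> Clouds.

```shell
# Hypothetical controller host and credentials -- substitute your own.
AVI="https://avi-controller.home.lab"

# IPAM profile: allocate VIPs from the Avi-Vip network.
curl -k -u admin:password123 -X POST "${AVI}/api/ipamdnsproviderprofile" \
  -H 'Content-Type: application/json' -d '{
    "name": "ipam-profile",
    "type": "IPAMDNS_TYPE_INTERNAL",
    "internal_profile": {
      "usable_networks": [
        { "nw_ref": "/api/network/?name=Avi-Vip" }
      ]
    }
  }'

# DNS profile: Avi serves records for this subdomain.
curl -k -u admin:password123 -X POST "${AVI}/api/ipamdnsproviderprofile" \
  -H 'Content-Type: application/json' -d '{
    "name": "dns-profile",
    "type": "IPAMDNS_TYPE_INTERNAL_DNS",
    "internal_profile": {
      "dns_service_domain": [
        { "domain_name": "home.lab" }
      ]
    }
  }'
```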
Install AKO on K8s Cluster
In this step we’ll deploy AKO and a test deployment with an ingress to ensure everything is working.
```shell
#login to cluster
kubectl vsphere login --vsphere-username firstname.lastname@example.org --server=https://10.10.1.2 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=dev --tanzu-kubernetes-cluster-name=tkg-cluster-01
kubectl config use-context tkg-cluster-01

#Helm Setup
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm show values ako/ako --version 1.7.1 > values.yaml
nano values.yaml
```
```yaml
#Example values.yaml file edits
clusterName: my-cluster              #Change the name here
layer7Only: true                     #Set this to true. NSX will handle L4. Avi will handle L7.
nsxtT1LR: '/infra/tier-1s/Avi-T1'    #Set this to the Avi T1 from NSX.
#Comment out the empty vipNetworkList line and replace it with the following:
vipNetworkList:
  - networkName: Avi-Vip
    cidr: 172.16.40.0/24
serviceType: NodePort                #I am using NodePort for my lab.
shardVSSize: SMALL                   #SMALL will create a small number of VS for sharding.
ControllerSettings:
  serviceEngineGroupName: Default-Group
  controllerVersion: '21.1.4'
  cloudName: nsxt
  controllerHost: 'avi-controller.home.lab'
  tenantName: admin
avicredentials:
  username: "admin"
  password: "password123"
```
After saving the values.yaml file, the following command installs AKO.
```shell
helm install ako/ako --generate-name --version 1.7.1 -f /ako/values.yaml --namespace=avi-system
```
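To confirm everything is wired up, here is a minimal test deployment, service, and ingress. The names, image, and hostname are placeholders for my lab, so adjust to taste; once AKO syncs, it should program a virtual service on the controller and the Ingress should get a VIP from the 172.16.40.0/24 network.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort       # matches serviceType: NodePort in values.yaml
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - host: hello.home.lab     # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
```

Apply it with kubectl apply -f, then watch kubectl get ingress hello: when the ADDRESS column shows a VIP, L7 ingress is working end to end.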