AKO – L7 Ingress on vSphere with Kubernetes (WCP – TKGs) with NSX-T

Posted on July 19, 2022 (updated September 28, 2022) by Matt Adam

Long title, but this is basically a guide to deploying L7 ingress on top of your WCP and NSX-T setup. If you’ve followed my previous guides, you should already have NSX and WCP configured, along with a supervisor cluster and a guest cluster. We will add a little more configuration to NSX-T, then spin up an Avi Controller and service engines. Finally, we will deploy AKO and configure L7 ingress.

Table of Contents

  • NSX Networking
    • Create Avi Tier-1 in NSX
    • Create Avi Segments under the T1 gateways
  • Deploy and Configure Avi (NSX Advanced Load Balancer)
    • Create Credentials
    • Configure NSX-T Cloud
    • Set static IP Ranges (Optional)
    • Set static routes for VIP and management subnets
    • Create a DNS and IPAM profile.
  • Install AKO on K8s Cluster

NSX Networking

Create Avi Tier-1 in NSX

By this point we’ve created a few T1 gateways, so the process is the same for these. Create two T1s: one for management and one for the virtual service (VIP) network. I named them T1-Avi-Mgmt and T1-Avi-VIP. Under Route Advertisements, make sure to check all of the options:

[All Static Routes, All NAT IP’s, All DNS Forwarder Routes, All LB VIP Routes, All Connected Segments & Service Ports, All LB SNAT IP Routes, All IPSec Local Endpoints]
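If you prefer to script this step, the same two gateways can be created through the NSX-T Policy API. This is only a rough sketch, not the exact calls used in this lab: the manager address, the admin password, and the Tier-0 path (T0-GW) are placeholders you would swap for your own, and the route_advertisement_types simply mirror the checkboxes listed above.

#Hypothetical sketch - replace the NSX Manager address, credentials, and Tier-0 path with your own
NSX_MGR=https://nsx-manager.home.lab

for T1 in T1-Avi-Mgmt T1-Avi-VIP; do
  curl -k -u admin:'VMware1!' -X PATCH "$NSX_MGR/policy/api/v1/infra/tier-1s/$T1" \
    -H 'Content-Type: application/json' \
    -d '{
          "display_name": "'"$T1"'",
          "tier0_path": "/infra/tier-0s/T0-GW",
          "route_advertisement_types": ["TIER1_STATIC_ROUTES", "TIER1_NAT",
            "TIER1_DNS_FORWARDER_IP", "TIER1_LB_VIP", "TIER1_CONNECTED",
            "TIER1_LB_SNAT", "TIER1_IPSEC_LOCAL_ENDPOINT"]
        }'
done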

Create Avi Segments under the T1 gateways

Create two segments: one for the VIP/data traffic (Avi-Vip) and another for management (Avi-Mgmt).
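As with the gateways, the segments can be pushed through the Policy API as well. A minimal sketch, reusing the NSX_MGR variable and credentials from the previous example; the overlay transport zone path is a placeholder, and the gateway addresses assume 172.16.30.0/24 for management and 172.16.40.0/24 for VIP/data, matching the next hops and CIDR used later in this guide.

#Hypothetical sketch - the transport zone path placeholder must be replaced with your overlay TZ
curl -k -u admin:'VMware1!' -X PATCH "$NSX_MGR/policy/api/v1/infra/segments/Avi-Mgmt" \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Avi-Mgmt",
        "connectivity_path": "/infra/tier-1s/T1-Avi-Mgmt",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>",
        "subnets": [{"gateway_address": "172.16.30.1/24"}]
      }'

#Repeat for the Avi-Vip segment with connectivity_path /infra/tier-1s/T1-Avi-VIP and gateway_address 172.16.40.1/24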

Deploy and Configure Avi (NSX Advanced Load Balancer)

In the interest of not duplicating work, see this guide: https://mattadam.com/2021/10/12/tanzu-kubernetes-on-vcenter-7-deploy-avi-controller-and-service-engines/ Stop at the Configure Cloud section, and that is where this guide will start.

Create Credentials

We need to create the NSX Manager and vCenter credentials for use in the NSX-T cloud.

Navigate to Administration -> User Credentials and click Create.
Create the first set of credentials for NSX-T.
And the second set for vCenter.

Configure NSX-T Cloud

After the Avi controller is deployed, log in and navigate to Infrastructure -> Clouds. Then select Create NSX-T Cloud.

Set the cloud name and the type (NSX-T Cloud), then set the object prefix to avi.
In the NSX-T section, set the NSX Manager address (IP or FQDN) and select the pre-created credentials, then click Connect.
For the management network, set the Transport Zone to nsx-overlay-transportzone (Overlay), then select the Avi-T1 router and the Avi-Mgmt segment.
For the data network, similarly select the Avi-T1 router; the overlay segment is Avi-Vip.
For the vCenter server, set the name, the address, and the credentials created in the earlier step. If you haven’t already created a new content library in vCenter, do so now and refresh the page. Avi uses the content library to store the SE images.
We will set the IPAM and DNS profiles later. Set the DNS resolver and click Save.

Set static IP Ranges (Optional)

If you have a DHCP range configured on these networks, you can likely skip this step. Just verify that the networks have been discovered by Avi. Navigate to Infrastructure -> Cloud Resources -> Networks and select the “nsxt” cloud.

Select the Avi-Mgmt network and click Edit on the right side.
Click Add Subnet
Set the subnet and an IP Range that can be used for the SEs.
Do the same thing for the Avi-Vip network.

Set static routes for VIP and management subnets

This is done in the Avi CLI.

#Exact commands used:

shell #login to the avi shell with credentials
switchto cloud nsxt

configure vrfcontext T1-Avi-VIP #Enter submode
static_routes #Enter submode
next_hop 172.16.40.1
route_id 2
prefix 0.0.0.0/0
save
save

configure vrfcontext management #Enter submode
static_routes #Enter submode
next_hop 172.16.30.1
route_id 3
prefix 0.0.0.0/0
save
save
SSH into the Avi controller, type “shell”, and enter your credentials.
Type switchto cloud nsxt
Configure the T1-Avi-VIP vrfcontext as shown above.
Configure the management vrfcontext as shown above.
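To confirm the routes took effect, you can check the VRF contexts from the same Avi shell session. The names below are the ones from my lab; yours must match the VRF contexts the NSX-T cloud discovered for your T1s.

switchto cloud nsxt
show vrfcontext T1-Avi-VIP
show vrfcontext management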

Create a DNS and IPAM profile.

Navigate to Templates -> Profiles -> IPAM/DNS Profiles

Select Create in the top right.
Create the IPAM profile as shown.
Create the DNS Profile as shown.
Lastly, navigate back to the nsxt cloud we created earlier and add the IPAM and DNS profiles.

Install AKO on K8s Cluster

In this step we’ll deploy AKO, then create a test deployment with an ingress to make sure everything is working (see the example at the end of this section).

Exact Commands:

#login to cluster
kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://10.10.1.2 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=dev --tanzu-kubernetes-cluster-name=tkg-cluster-01
kubectl config use-context tkg-cluster-01

#Helm Setup
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm show values ako/ako --version 1.7.1 > values.yaml
nano values.yaml
#Example values.yaml file edits

clusterName: my-cluster #Change the name here
layer7Only: true #Set this to true. NSX will handle L4. Avi will handle L7.
nsxtT1LR: '/infra/tier-1s/Avi-T1' #Set this to the Avi T1 from NSX.

#Comment out the existing vipNetworkList: [] line and replace it with the following:
vipNetworkList:
  - networkName: Avi-Vip
    cidr: 172.16.40.0/24

serviceType: NodePort #I am using NodePort for my lab.
shardVSSize: SMALL #Small will create a small number of VS for sharding.

ControllerSettings:
  serviceEngineGroupName: Default-Group
  controllerVersion: '21.1.4'
  cloudName: nsxt
  controllerHost: 'avi-controller.home.lab'
  tenantName: admin   

avicredentials:
  username: "admin"
  password: "password123"

After saving the values.yaml file, the following command installs AKO.

helm install ako/ako --generate-name --version 1.7.1 -f values.yaml --namespace=avi-system
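To confirm everything is wired up, check that the AKO pod is running and then push a small test application with an ingress. The following is only a sketch with placeholder names: the nginx image, the nginx.home.lab hostname, and the avi-lb ingress class (AKO’s default) should be adjusted for your environment, and the service is exposed as NodePort because serviceType is NodePort in values.yaml.

#Verify the AKO pod came up
kubectl get pods -n avi-system

#Hypothetical test app (on TKGs guest clusters you may first need a pod security policy rolebinding for the pods to schedule)
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort

#Hypothetical ingress - hostname and ingress class are assumptions
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  ingressClassName: avi-lb
  rules:
  - host: nginx.home.lab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-test
            port:
              number: 80
EOF

#The ingress should pick up a VIP from the Avi-Vip network, and a new virtual service should appear on the controller
kubectl get ingress nginx-test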

8 thoughts on “AKO – L7 Ingress on vSphere with Kubernetes (WCP – TKGs) with NSX-T”

  1. Michel AL Daher says:
    August 9, 2022 at 8:38 am

    Hello Matt,

    First, thank you for the blog, it’s very helpful. I have a small question: why do we need to add the static routes on the VRF? Shouldn’t that be automated?

    1. Matt Adam says:
      August 9, 2022 at 8:52 am

      Without a default gateway, the nodes wouldn’t know how to reach any other network, so you must set the default gateway in the management VRF as well as the VIP/data VRF.

  2. Francisco Menezes says:
    September 28, 2022 at 4:56 am

    Hi Matt,

    I guess we can deploy different AKO instances on different Guest Clusters and by using different values.yaml file differentiate VIP networks according to PODs. Is this correct?
    The idea would be to have different VIPs for different Clusters.

    If the answer is “yes” (as I believe) my ask is for L4 services. Can something similar be done for L4 LB (different VIPs for different Clusters) – I see an issue here, since there would be no AKO yaml file where to define the VIP network.

    1. Matt Adam says:
      September 28, 2022 at 11:35 am

      Yes, for each guest cluster you can deploy a new AKO and associate a different network for L7.

      Unfortunately you cannot with L4 in TKGs. The AKO for L4 in TKGs actually runs in the supervisor cluster, and cannot be changed. It’s a design limitation, and may or may not be addressed in the future. You could consider using TKGm instead. TKGm provides a great deal more flexibility for L4 and L7, and its AKO is deployed in the guest cluster instead of the supervisor cluster. You can check out my install guide for TKGm here: https://mattadam.com/tanzu-tkg-tkgm/

  3. Manuel Pagani says:
    September 28, 2022 at 4:57 pm

    Hi Matt,

    I have followed your guide to install AKO on vSphere with Tanzu with NSX-T. The deployment of AKO is successful (I installed it with ServiceType NodePort), but when I expose an application (e.g. NGINX), the Service Engines in ALB fail to contact the TKGs node network (10.244.0.81/28).

    The segment for the Service Engine data network is on the same Tier-1 as the TKGs nodes, but it cannot contact them, because I noticed there is a rule in the DFW for the TKGs node IPs that denies incoming traffic from any network.

    I can’t figure out how I can get the Service Engine data network to talk to the TKGs node network. Do you have any ideas?

    Thank you very much.

    1. Matt Adam says:
      September 28, 2022 at 7:34 pm

      Well, I would set a DFW rule allowing everything, just to remove that variable from the equation. If it still fails, try moving the TKGs workload to a different T1.

      1. Manuel Pagani says:
        September 29, 2022 at 2:28 am

        Hi Matt,

        Thanks for the reply. I tried adding a rule that allows communication between the data network segment and the TKGs node segment, but nothing changed. What do you mean by changing the T1? Can I change the T1 that vSphere with Tanzu created when the namespace was created in Workload Management, and would that still be supported?

        thank you very much

        1. Matt Adam says:
          September 29, 2022 at 11:44 am

          I’ll send you a DM, and we can troubleshoot this.

