This guide walks you through giving each K8s cluster its own Avi tenant (and, optionally, its own VIP subnet). Multitenancy is especially important if you want to limit the blast radius of a specific K8s cluster, or if you want to set up RBAC controls in Avi so that users only have access to specific tenants/K8s clusters. Either way, this guide shows you how to separate any Kubernetes cluster into its own Avi tenant using the AKO values.yaml file.
Software Versions used in this demo
| Software | Version |
|---|---|
| Avi Controller & Service Engines | 21.1.4 |
| AKO | 1.7.2 |
| TKGm | 1.5.4 |
Deploy your Kubernetes clusters
In my example below, I'm using two TKGm clusters, simply to illustrate that each will get its own tenant and subnet. I've got some guides on how to deploy these here: Deploy TKGm clusters. It doesn't have to be TKGm, though; TKGs, other Tanzu deployments, OpenShift, etc. will all work.
In my deployment I'm also running the control plane through Avi as a VS; again, this is not required for the multitenancy part below.
Requirements for this step:
- Avi controller deployed
- 1 or more K8s clusters
- Able to reach the internet (for the helm install)
- Able to reach the Avi controller
Set up Tenants in Avi
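In my lab I created one tenant per cluster (guest1 and guest2) from the Avi UI. If you'd rather script it, below is a minimal sketch using the Avi REST API's tenant endpoint; the controller hostname and admin credentials are assumed to match the ones in the values.yaml further down.

```bash
# Sketch: create one Avi tenant per K8s cluster via the REST API.
# Assumes basic auth and the controller details used elsewhere in this guide.
for tenant in guest1 guest2; do
  curl -k -X POST "https://avi-controller.home.lab/api/tenant" \
    --user admin:PASSWORD \
    -H "Content-Type: application/json" \
    -H "X-Avi-Version: 21.1.4" \
    -d "{\"name\": \"${tenant}\"}"
done
```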

Setting up AKO Multitenancy
With Avi up and running, let's set up AKO. I've posted some install guides for AKO, so let's use those.
```bash
### Install AKO using the following commands
kubectl config use-context guest-cluster-1-admin@guest-cluster-1
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.7.2 -f ako_values.yaml -n avi-system
```
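Before moving on, it's worth confirming that the chart installed and the AKO pod is running:

```bash
# Verify the helm release and the AKO pod in the avi-system namespace
helm list -n avi-system
kubectl get pods -n avi-system
```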
Example values.yaml file below.
The main pieces here are:
- ControllerSettings.tenantName – specifies the Avi tenant used for this K8s cluster.
- NetworkSettings.vipNetworkList – optionally, specifies the VIP subnet(s) to be used by the K8s cluster. Make sure this subnet/network is also set in the Avi IPAM.
```yaml
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO
  logLevel: WARN # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod. default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1 # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains the label key and value used for namespace migration.
  # The same label has to be present on the namespace/s which need migration/sync to AKO.
  namespaceSelector:
    labelKey: ''
    labelValue: ''
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible with the advancedL4 APIs which use a fork and a version of v1alpha1pre1.
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create a Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices.
NetworkSettings:
  ## This list of networks and cidrs is used in the pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  # vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
    - networkName: Data-vlan7
      cidr: 192.168.7.0/24

### This section outlines all the knobs used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true will get rid of poolgroups from SNI VSes. Do not use this flag if you don't want http caching. This will be deprecated once the controller supports caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), disabled. If the value is disabled then FQDN generation is disabled.

### This section outlines settings on the Avi controller that affect AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: guest1 # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false

avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:

persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
```
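Once AKO starts, its logs should show it syncing objects into the tenant you configured. AKO runs as a StatefulSet, so the pod name is ako-0:

```bash
# Tail the AKO logs to confirm it connected to the controller and tenant
kubectl logs -n avi-system ako-0 --tail=20
```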

Deploy an L4 LB or L7 Ingress
Now that AKO is set up, we can deploy a quick test Ingress to verify that it is working as expected.

Example ingress.yaml file below.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
        - name: blue
          image: alexfeig/bluegreen:latest
          ports:
            - containerPort: 5000
          env:
            - name: app_color
              value: "blue"
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: blue
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  labels:
    app: gslb
spec:
  rules:
    - host: blue.avi.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue
                port:
                  number: 80
```
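Apply the manifest (saved here as ingress.yaml) and check that AKO assigned a VIP to the Ingress; the ADDRESS column should show an IP from the vipNetworkList subnet:

```bash
kubectl apply -f ingress.yaml
kubectl get ingress blue
```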


I created a second cluster, “guest-cluster-2”, and set AKO to write to the Avi tenant “guest2”.
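The only values.yaml changes for the second cluster are the cluster name, the tenant, and (optionally) a different VIP network. Here's a sketch of just the keys that differ; the Data-vlan8 network and CIDR are hypothetical placeholders for whatever the second cluster should use:

```yaml
AKOSettings:
  clusterName: guest-cluster-2   # unique identifier for the second cluster
NetworkSettings:
  vipNetworkList:
    - networkName: Data-vlan8    # hypothetical second VIP network; use your own
      cidr: 192.168.8.0/24
ControllerSettings:
  tenantName: guest2             # AKO objects for this cluster land in this Avi tenant
```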


And finally, a quick view of all the Applications and their tenants:
