This guide walks you through configuring different namespaces in a K8s cluster to use their own Avi tenant (and, optionally, their own VIP subnet). Multitenancy is especially useful if you want to limit the blast radius of a specific K8s namespace, or if you want to apply RBAC controls in Avi so that users only have access to specific tenants/K8s namespaces. Either way, this guide shows you how to separate any Kubernetes namespace into its own Avi tenant using the AKO values.yaml file.
Software Versions used in this demo
Software | Version |
---|---|
Avi Controller & Service Engines | 21.1.4 |
AKO | 1.8.1 |
Kubernetes Cluster – TKGm | 1.5.4 |
Deploy your Kubernetes cluster
Deploy any Kubernetes cluster. In my demo I'm using a TKGm cluster on 1.5 with no advanced configuration, except that I did not deploy AKO yet. More on AKO auto-deployment in workload clusters.
Requirements for this step (a quick way to sanity-check them follows the list):
- Avi controller deployed
- 1 K8s cluster
- Able to reach the internet (for the helm install)
- Able to reach the Avi controller
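A minimal sanity check of those requirements might look like this (the controller hostname is the one used later in this guide; adjust for your environment):

```bash
kubectl get nodes                          # the K8s cluster is reachable with the current kubeconfig
curl -kI https://avi-controller.home.lab   # the Avi controller answers on HTTPS
helm version                               # helm is available for the AKO install
```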
Set up Tenants in Avi
Before deploying AKO, create the tenants that each namespace will map to on the Avi controller; this demo uses two tenants, tenant1 and tenant2, created in the Avi UI.
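If you prefer to script that step instead of using the UI, a minimal sketch against the Avi REST API could look like the following (assumptions: basic auth as admin, the controller hostname from the values files below, and that the tenants don't already exist):

```bash
# Sketch only: create the two tenants via the Avi REST API (the same thing can be done in the Avi UI)
for T in tenant1 tenant2; do
  curl -k -u admin:PASSWORD \
    -H "Content-Type: application/json" \
    -H "X-Avi-Version: 21.1.4" \
    -X POST "https://avi-controller.home.lab/api/tenant" \
    -d "{\"name\": \"${T}\"}"
done
```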
Set up Namespaces in the Kubernetes cluster
Now we can create two namespaces and apply a Kubernetes label to each of them.
```bash
# Create Namespace 1
kubectl create ns namespace1
kubectl label ns namespace1 avitenant=tenant1

# Create Namespace 2
kubectl create ns namespace2
kubectl label ns namespace2 avitenant=tenant2
```
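To confirm the labels were applied, list both namespaces with their labels:

```bash
kubectl get ns namespace1 namespace2 --show-labels
```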

Deploy AKO #1
With Avi up and running, let's set up AKO for namespace1. I've posted some install guides for AKO, so let's use those.
```bash
### Install AKO using the following commands
kubectl config use-context guest-cluster-1-admin@guest-cluster-1
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.7.2 -f ako_values.yaml -n avi-system
```
Example values.yaml file below.
The main pieces to look for here for namespace tenancy are listed below (a quick-reference snippet of just these keys follows the list):
- AKOSettings.primaryInstance – The first AKO deployed into the K8s cluster is the primary and set to "true"; all others are set to "false".
- AKOSettings.clusterName – I'm not entirely sure this needs to be unique per AKO instance, but change it anyway to avoid problems later.
- AKOSettings.namespaceSelector – This key/value mapping must match the key/value from the namespace label.
- NetworkSettings.vipNetworkList – Optional, but you can use a different VIP network for each namespace.
- ControllerSettings.tenantName – This is the Avi tenant where the K8s objects will reside.
- AKOSettings.vipPerNamespace – Optional, and only for EVH. If true, AKO creates a parent VS for each namespace. Default is false (and the default mode is SNI, not EVH).
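For quick reference, here are just those keys as they are set for AKO #1, pulled from the full values file that follows:

```yaml
AKOSettings:
  primaryInstance: true
  clusterName: guest-cluster-1
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant1'
  vipPerNamespace: 'false'
NetworkSettings:
  vipNetworkList:
    - networkName: Data-vlan7
      cidr: 192.168.7.0/24
ControllerSettings:
  tenantName: tenant1
```

The full file is below.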
```yaml
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO
  logLevel: WARN # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1 # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains label key and value used for namespacemigration
  # Same label has to be present on namespace/s which needs migration/sync to AKO
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant1'
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible
  # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices.
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  # vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
    - networkName: Data-vlan7
      cidr: 192.168.7.0/24

### This section outlines all the knobs used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true, will get rid of poolgroups from SNI VSes. Do not use this flag, if you don't want http caching. This will be deprecated once the controller support caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: tenant1 # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false

avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:

persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
```
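Before moving on to the second instance, it's worth confirming the first AKO pod came up cleanly (the ako-0 pod name comes from the chart's statefulset and may differ in your install):

```bash
kubectl get pods -n avi-system                # expect the AKO pod in a Running state
kubectl logs ako-0 -n avi-system --tail=20    # look for a clean sync against the Avi controller
```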

Deploy AKO #2
The primary AKO instance is now running in the avi-system namespace and is set to manage only the objects in namespace1. Now we'll deploy a second AKO instance in a namespace called avi-system2 (or whatever you like), and it will target only the objects in namespace2.
```bash
### Install AKO #2 using the following commands
kubectl create ns avi-system2
helm install ako/ako --generate-name --version 1.7.2 -f ako_values2.yaml -n avi-system2
```
Example values.yaml file below.
Again, the main things here are tenantName and namespaceSelector, but here's the full list (a summary of exactly what changes from the first values file follows the list):
- AKOSettings.primaryInstance – The first AKO deployed into the K8s cluster is the primary and set to "true"; all others are set to "false".
- AKOSettings.clusterName – I'm not entirely sure this needs to be unique per AKO instance, but change it anyway to avoid problems later.
- AKOSettings.namespaceSelector – This key/value mapping must match the key/value from the namespace label.
- NetworkSettings.vipNetworkList – Optional, but you can use a different VIP network for each namespace.
- ControllerSettings.tenantName – This is the Avi tenant where the K8s objects will reside.
- AKOSettings.vipPerNamespace – Optional, and only for EVH. If true, AKO creates a parent VS for each namespace. Default is false (and the default mode is SNI, not EVH).
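Only a handful of values actually change between ako_values.yaml and ako_values2.yaml; everything else, including the vipNetworkList in this example, stays the same:

```yaml
AKOSettings:
  primaryInstance: false         # only the first AKO instance is primary
  clusterName: guest-cluster-1b  # kept unique per AKO instance
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant2'        # matches the label on namespace2
ControllerSettings:
  tenantName: tenant2            # AKO #2 creates its objects in the tenant2 tenant
```

The full file is below.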
```yaml
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: false # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO
  logLevel: WARN # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1b # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains label key and value used for namespacemigration
  # Same label has to be present on namespace/s which needs migration/sync to AKO
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant2'
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible
  # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices.
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  # vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
    - networkName: Data-vlan7
      cidr: 192.168.7.0/24

### This section outlines all the knobs used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true, will get rid of poolgroups from SNI VSes. Do not use this flag, if you don't want http caching. This will be deprecated once the controller support caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: tenant2 # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false

avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:

persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
```
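As with the first instance, confirm the second AKO pod is running in its namespace (pod name may differ in your install):

```bash
kubectl get pods -n avi-system2
kubectl logs ako-0 -n avi-system2 --tail=20
```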

Deploy an L4 LB or L7 Ingress
Now that both AKO instances are set up, we can deploy some ingresses to test the functionality.
First, let's deploy the ingress to the default namespace; nothing should happen in Avi or AKO at this point.
Test in Default Namespace

Example of the ingress.yaml
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
        - name: blue
          image: alexfeig/bluegreen:latest
          ports:
            - containerPort: 5000
          env:
            - name: app_color
              value: "blue"
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: blue
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  labels:
    app: gslb
spec:
  rules:
    - host: blue.avi.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue
                port:
                  number: 80
```
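Assuming the manifest above is saved as ingress.yaml, apply it to the default namespace. Since default has no avitenant label, neither AKO instance should sync it and no virtual service should appear in Avi:

```bash
kubectl apply -f ingress.yaml -n default
kubectl get ingress -n default   # the ADDRESS column should stay empty
```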

Delete the ingress and continue.
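Assuming the same file name as above:

```bash
kubectl delete -f ingress.yaml -n default
```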
Test in Namespace1
Deploy that same ingress spec to the namespace1 namespace.
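For example (this time AKO #1 should pick it up, and a virtual service should appear under the tenant1 tenant in Avi with a VIP from the Data-vlan7 network):

```bash
kubectl apply -f ingress.yaml -n namespace1
kubectl get ingress -n namespace1   # ADDRESS should get a VIP from 192.168.7.0/24
```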


Test in Namespace2
Deploy that same ingress spec to the namespace2 namespace.
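For example (this one should be synced by AKO #2 into the tenant2 tenant):

```bash
kubectl apply -f ingress.yaml -n namespace2
kubectl get ingress -n namespace2
```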


