AKO – Multitenancy – Each namespace in a cluster gets its own Avi Tenant

Posted on October 12, 2022 (updated November 17, 2022) by Matt Adam

This guide will walk you through how to map different namespaces in a K8s cluster to their own Avi tenants (and, optionally, their own VIP subnets). Multitenancy is important if you want to limit the blast radius of a specific K8s namespace, or if you want to set RBAC controls in Avi so that users only have access to specific tenants/K8s namespaces. Either way, this guide will show you how to separate any Kubernetes namespace into its own Avi tenant using the AKO values.yaml file.

Table of Contents

  • Software Versions used in this demo
  • Deploy your Kubernetes cluster
  • Setup Tenants in Avi
  • Setup Namespaces in Kubernetes cluster
  • Deploy AKO #1
  • Deploy AKO #2
  • Deploy a L4 LB or L7 Ingress
    • Test in Default Namespace
    • Test in Namespace1
    • Test in Namespace2

Software Versions used in this demo

  • Avi Controller & Service Engines: 21.1.4
  • AKO: 1.8.1
  • Kubernetes Cluster (TKGm): 1.5.4

Deploy your Kubernetes cluster

Deploy any Kubernetes cluster. In my demo I'm using a TKGm cluster on 1.5.4, with no advanced configuration except that I did not deploy AKO yet. More on AKO Auto-deployment in workload clusters.

Requirements for this step:

  • Avi controller deployed
  • 1 K8s cluster
    • Able to reach internet (for helm install)
    • Able to reach Avi controller

Setup Tenants in Avi

Navigate to Administration -> Accounts -> Tenants and add the additional tenants (tenant1 and tenant2 for this demo).

More on Tenants
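
If you prefer to script this instead of clicking through the UI, tenants can also be created through the Avi REST API. A minimal sketch, assuming the controller hostname and admin credentials used elsewhere in this demo:

### Create the tenants via the Avi REST API (adjust host/credentials to your environment)
curl -k -u admin:PASSWORD -H "Content-Type: application/json" -H "X-Avi-Version: 21.1.4" \
  -d '{"name": "tenant1"}' https://avi-controller.home.lab/api/tenant
curl -k -u admin:PASSWORD -H "Content-Type: application/json" -H "X-Avi-Version: 21.1.4" \
  -d '{"name": "tenant2"}' https://avi-controller.home.lab/api/tenant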

Setup Namespaces in Kubernetes cluster

Now we can create two namespaces and apply a Kubernetes label to each.

#Create Namespace 1
kubectl create ns namespace1
kubectl label ns namespace1 avitenant=tenant1

#Create Namespace 2
kubectl create ns namespace2
kubectl label ns namespace2 avitenant=tenant2
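
A quick sanity check that the labels landed where AKO expects them:

### Verify the namespace labels
kubectl get ns namespace1 namespace2 --show-labels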

Deploy AKO #1

With Avi up and running, let's set up AKO for namespace1. I've posted some install guides for AKO, so let's use those.

  • Official Install Guide
  • MattAdam.com Guide

### Install AKO using the following commands
kubectl config use-context guest-cluster-1-admin@guest-cluster-1
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.8.1 -f ako_values.yaml -n avi-system

Example values.yaml file below. The main settings to look at for namespace tenancy are:

  • AKOSettings.primaryInstance – The first AKO deployed into the K8s cluster will be primary and set to “true”, all others will be set to “false”
  • AKOSettings.clusterName – I’m not entirely sure if this needs to be unique per AKO, but go ahead and change it to avoid problems later
  • AKOSettings.namespaceSelector – This key/value mapping must match the k/v from the namespace label
  • NetworkSettings.vipNetworkList – Optional, but you can change VIP networks for each namespace
  • ControllerSettings.tenantName – This is where you set the Avi tenant for the k8s objects to reside
  • AKOSettings.vipPerNamespace – Optional, and only for EVH. If true, it will create a parent VS for each namespace. Default is false. Also default mode is SNI, not EVH
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO 
  logLevel: WARN   # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI 
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1   # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains the label key and value used for namespace migration
  # The same label has to be present on the namespace(s) which need migration/sync to AKO
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant1'
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible 
                     # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1 
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices. 
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  #  vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
   - networkName: Data-vlan7
     cidr: 192.168.7.0/24

### This section outlines all the knobs  used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true will get rid of poolgroups from SNI VSes. Do not use this flag if you don't want HTTP caching. This will be deprecated once the controller supports caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL   # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL   # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs  used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled   # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group   # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter   # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: tenant1   # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false


avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:


persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
You can see all the commands I ran above. Tail the logs of the ako-0 pod to see that it's up and running.
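
A couple of commands to confirm this, assuming the default StatefulSet pod name ako-0:

### Confirm AKO #1 is running and tail its logs
kubectl get pods -n avi-system
kubectl logs -f ako-0 -n avi-system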

Deploy AKO #2

The primary AKO is now running in the avi-system namespace and is set to manage only the objects in namespace1. Now we'll deploy a second AKO instance in its own namespace, avi-system2 (or whatever you like), and it will target only objects in namespace2.

### Install AKO #2 using the following commands
kubectl create ns avi-system2
helm install ako/ako --generate-name --version 1.8.1 -f ako_values2.yaml -n avi-system2

Example values.yaml file below. Again, the main things here are the tenantName and the namespaceSelector. The full list of settings from AKO #1 applies here too; the values that change for this instance are:

  • AKOSettings.primaryInstance – now "false", since the first AKO instance is already the primary
  • AKOSettings.clusterName – changed to keep it unique per AKO instance
  • AKOSettings.namespaceSelector – the labelValue now matches the tenant2 namespace label
  • ControllerSettings.tenantName – now tenant2
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: false # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO 
  logLevel: WARN   # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI 
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1b   # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains the label key and value used for namespace migration
  # The same label has to be present on the namespace(s) which need migration/sync to AKO
  namespaceSelector:
    labelKey: 'avitenant'
    labelValue: 'tenant2'
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible 
                     # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1 
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices. 
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  #  vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
   - networkName: Data-vlan7
     cidr: 192.168.7.0/24

### This section outlines all the knobs  used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true will get rid of poolgroups from SNI VSes. Do not use this flag if you don't want HTTP caching. This will be deprecated once the controller supports caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL   # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL   # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs  used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled   # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group   # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter   # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: tenant2   # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false


avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:


persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
Same as before, tail the logs if needed.
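
For this instance the pod lives in avi-system2 (again assuming the default pod name ako-0):

### Confirm AKO #2 is running and tail its logs
kubectl get pods -n avi-system2
kubectl logs -f ako-0 -n avi-system2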

Deploy a L4 LB or L7 Ingress

Now that both AKOs are set up, we can deploy some ingresses to test the functionality.
First, let's deploy an ingress to the default namespace. Since the default namespace doesn't carry either avitenant label, neither AKO instance will sync it, so nothing should happen in Avi or AKO at this point.

Test in Default Namespace

Deploy the ingress into the default namespace. An example ingress.yaml is below; it includes the Deployment, the Service, and the Ingress itself.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: alexfeig/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  labels:
    app: gslb
spec:
  rules:
  - host: blue.avi.home.lab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue
            port:
              number: 80
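
To deploy it, assuming the manifest above is saved as ingress.yaml (a filename of your choosing):

### Apply the manifest to the default namespace
kubectl apply -f ingress.yaml -n default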
In Avi, under “All Tenants” we don’t see any new VSs created here. Good.

Delete the ingress and continue.
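
Using the same hypothetical filename:

### Clean up before the next test
kubectl delete -f ingress.yaml -n default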

Test in Namespace1

Deploy that same ingress spec to the namespace1 namespace.
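
Using the same manifest as before:

### Apply the manifest to namespace1
kubectl apply -f ingress.yaml -n namespace1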

Now we can see the appropriate Avi configuration has been made in tenant1. The service health is green.

Test in Namespace2

Deploy that same ingress spec to the namespace2 namespace.
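
And once more for the second namespace:

### Apply the manifest to namespace2
kubectl apply -f ingress.yaml -n namespace2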

Now we can see the appropriate Avi configuration has been made in tenant2. The service health is green.
