AKO – Multitenancy – Each cluster gets its own Avi Tenant

Posted on September 26, 2022 (updated November 17, 2022) by Matt Adam

This guide will walk you through setting each K8s cluster to use its own Avi tenant (and, optionally, its own VIP subnet). Multitenancy is important if you want to limit the blast radius of a specific K8s cluster, or if you want to apply RBAC controls in Avi so that users only have access to specific tenants/K8s clusters. Either way, this guide will show you how to separate any Kubernetes cluster into its own Avi tenant using the AKO values.yaml file.

Table of Contents

  • Software Versions used in this demo
  • Deploy your Kubernetes clusters
  • Setup Tenants in Avi
  • Setting up AKO Multitenancy
  • Deploy a L4 LB or L7 Ingress

Software Versions used in this demo

Software                             Version
Avi Controller & Service Engines     21.1.4
AKO                                  1.7.2
TKGm                                 1.5.4

Deploy your Kubernetes clusters

In my example below, I’m using two TKGm clusters, simply to illustrate that each gets its own tenant and subnet. I’ve got some guides on how to deploy these here: Deploy TKGm clusters. It doesn’t have to be TKGm though; TKGs, other Tanzu deployments, OpenShift, etc. all work.
In my deployment I’m also running the control plane through Avi as a VS, but again that is not required for the multitenancy part below.

Requirements for this step:

  • Avi controller deployed
  • 1 or more K8s clusters
    • Able to reach internet (for helm install)
    • Able to reach Avi controller
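As a quick sanity check of those last two requirements, here is a minimal sketch (the context name, controller hostname, and the throwaway curlimages/curl test pod are assumptions from my lab, not part of AKO):

### Quick sanity check: clusters are reachable and can see the Avi controller
kubectl config get-contexts
kubectl config use-context guest-cluster-1-admin@guest-cluster-1

### Run a throwaway curl pod; a 200/302 response code proves HTTPS reachability
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sk -o /dev/null -w '%{http_code}\n' https://avi-controller.home.lab/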

Setup Tenants in Avi

Navigate to Administration -> Accounts -> Tenants and add the additional tenants (in my case, guest1 and guest2).

More on Tenants
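If you’d rather script this than click through the UI, here is a minimal sketch using the Avi REST API (the controller hostname, credentials, and API version are from my lab, and this assumes basic auth is enabled on the controller):

### Create the tenants via the Avi REST API
curl -k -u admin:PASSWORD -X POST https://avi-controller.home.lab/api/tenant \
  -H 'Content-Type: application/json' -H 'X-Avi-Version: 21.1.4' \
  -d '{"name": "guest1"}'
curl -k -u admin:PASSWORD -X POST https://avi-controller.home.lab/api/tenant \
  -H 'Content-Type: application/json' -H 'X-Avi-Version: 21.1.4' \
  -d '{"name": "guest2"}'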

Setting up AKO Multitenancy

With Avi up and running, let’s set up AKO. I’ve posted some install guides for AKO, so let’s use those.

  • Official Install Guide
  • MattAdam.com Guide
### Install AKO using the following commands
kubectl config use-context guest-cluster-1-admin@guest-cluster-1
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.7.2 -f ako_values.yaml -n avi-system

An example values.yaml file is below. The main pieces here are:

  • ControllerSettings.tenantName – Specify the Avi tenant used for this K8s cluster
  • NetworkSettings.vipNetworkList – Additionally, you can specify specific VIP subnets to be used by the K8s cluster. Make sure this subnet/network is also set in the Avi IPAM.
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO 
  logLevel: WARN   # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI 
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: guest-cluster-1   # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains label key and value used for namespacemigration
  # Same label has to be present on namespace/s which needs migration/sync to AKO
  namespaceSelector:
    labelKey: ''
    labelValue: ''
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible 
                     # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1 
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices. 
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
    - networkName: Data-vlan7
      cidr: 192.168.7.0/24

### This section outlines all the knobs  used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true, will get rid of poolgroups from SNI VSes. Do not use this flag, if you don't want http caching. This will be deprecated once the controller support caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL   # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL   # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs  used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled   # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group   # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter   # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: guest1   # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false


avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:


persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
You can see all the commands I ran above. I tailed the logs of ako-0 to see that it was up and running.
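For example (ako-0 is the pod from the chart's single-replica StatefulSet in the avi-system namespace):

### Confirm the AKO pod is running and watch it sync with the Avi controller
kubectl get pods -n avi-system
kubectl logs -f ako-0 -n avi-system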

Deploy a L4 LB or L7 Ingress

Now that AKO is set up, we can deploy a quick test Ingress to verify that it is working as expected.

Deploy the Ingress. An example ingress.yaml (Deployment, Service, and Ingress) is below.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: alexfeig/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  labels:
    app: gslb
spec:
  rules:
  - host: blue.avi.home.lab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue
            port:
              number: 80
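Apply the manifest and confirm the Ingress picks up a VIP from the 192.168.7.0/24 network (the file name ingress.yaml is simply whatever you saved the example above as):

### Deploy the test app and check that AKO assigns a VIP to the Ingress
kubectl apply -f ingress.yaml
kubectl get deployment,service,ingress blue
### The Ingress ADDRESS column should show a VIP from 192.168.7.0/24 once AKO syncs it to Avi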
In the Avi UI you can see the Ingress has been created, and in the top left it indicates the “guest1” tenant. The subnet here is Data-vlan7 on 192.168.7.0/24.

I created a second cluster “guest-cluster-2” and set AKO to write to the Avi tenant “guest2” (the key values.yaml differences are sketched below).
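A sketch of the ako_values.yaml values that differ for the second cluster, assuming it also gets its own VIP network (Data-vlan8 and 192.168.8.0/24 are hypothetical placeholders; use a network that exists in your Avi IPAM):

### Relevant ako_values.yaml differences for guest-cluster-2 (sketch)
AKOSettings:
  clusterName: guest-cluster-2   # unique identifier for the second cluster
NetworkSettings:
  vipNetworkList:
    - networkName: Data-vlan8    # hypothetical second VIP network, also defined in Avi IPAM
      cidr: 192.168.8.0/24
ControllerSettings:
  tenantName: guest2             # Avi tenant dedicated to this cluster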

Here you can see the application “green.avi.home.lab” in tenant “guest2”.

And finally, a quick view of all the applications and their tenants:
