
AMKO – Deploy a multi-cluster Application

Posted on September 19, 2022 (updated September 26, 2022) by Matt Adam

Table of Contents

  • Summary
  • Setting up the environment
    • Avi Controller & SEs
    • Configure GSLB in Avi Controller
    • TKGm – Management Cluster
    • TKGm – Guest Clusters
  • Configure AKO on each cluster
    • Configure AKO on Cluster 1
    • Test AKO by creating an Ingress
    • Configure AKO on Cluster 2
  • Configure AMKO on guest-cluster-1
    • Create the gslb-members config file
    • Create and Deploy the amko_values.yaml config file
    • Deploy a GSLB Service using the AKO HostRule

Summary

This guide will help you to deploy an application over multiple Kubernetes clusters and utilize Global Site Load Balancing (GSLB) to load balance between them.

In this guide we’ll deploy the following:

  • Avi Controller & SE/s
  • TKGm
    • 1 Management cluster
    • 2 Guest Clusters
  • AKO – Avi Kubernetes Operator (Local Config)
  • AMKO – Avi Multi-Cluster Kubernetes Operator (GSLB Config)

Setting up the environment

Avi Controller & SEs

There are already some good guides on here for deploying Avi in vCenter. The main thing is to deploy the Avi controller and create at least one SE. Creating the SE up front is not required, but it will make spinning up TKGm clusters a bit faster (since you won’t have to wait for the SE creation). If you deploy a VirtualService, the SE will be created automatically.

  • Guide on MattAdam.com
  • Official Installation Documentation

Also make sure to deploy a DNS VS:

  • Set the VirtualService’s application profile to “System-DNS”.
  • Set that DNS VS as the local DNS service.

Then, in your home DNS server, delegate the proper subdomain to this DNS service. I am using BIND as my home DNS server; here is an example entry in /var/named/home.lab.db:

avi       IN  NS      avins1.home.lab.
avins1       IN  A       192.168.7.24

Note that 192.168.7.24 is the IP address of the DNS VS created in Avi.
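
If you want to sanity check the delegation, a quick dig from a client that uses your home DNS server should resolve the name server’s glue record right away; once VSes with FQDNs exist under avi.home.lab, those queries should follow the delegation too. A minimal check, assuming the names above:

### Confirm the glue A record for the delegated name server resolves
dig avins1.home.lab +short

### Once an application VS exists (e.g. blue.avi.home.lab later in this guide), queries
### under avi.home.lab should be answered by the Avi DNS VS via the delegation
# dig blue.avi.home.lab +short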

Configure GSLB in Avi Controller

We need to enable the GSLB functionality in Avi and set up the GSLB domain name.

Navigate to Infrastructure -> GSLB -> Site Configuration and click the Edit button. Then:

  • Set the GSLB name, username, and password (use the same admin credentials as the Avi controller); the IP address and port will prefill.
  • Add the GSLB subdomain: gslb.avi.home.lab
  • Switch the Client Group IP Address Type to Private.
  • Click “Save and Set DNS Virtual Services”.
  • Set the DNS Virtual Service to the “dns-local” VS created earlier and click Save.

Avi should now show the GSLB member as green.

TKGm – Management Cluster

I named the cluster:

  • Management Cluster: management-cluster-1

Again referencing an already created guide: Create a TKGm Management Cluster. You can follow that guide as-is, with one small change: I deployed the management cluster with a label to prevent AKO from installing automatically in the guest clusters. Normally, if you don’t specify any labels during management cluster creation, AKO is auto-deployed on every guest cluster. If you do specify a label, then only guest clusters carrying that same label get AKO installed automatically. So if you don’t want AKO installed automatically, label the management cluster and simply don’t label the guest clusters.

I did this only for extra practice, and because you can then helm install AKO in the guest clusters directly and use the latest AKO versions. This is not required, but if you want to do the same, edit the management cluster YAML file as follows.

# Edit the management-cluster.yaml file

# Replace:
AVI_LABELS: ""


# With:
# AVI_LABELS: ""     ### Make sure to comment out this line.
AVI_LABELS: |
    'ako': 'yes'

Here’s a full example of my management-cluster.yaml config.

AVI_CA_DATA_B64: LS0tLS...S0tCg==
AVI_CLOUD_NAME: vcenter
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROLLER: avi-controller.home.lab
AVI_DATA_NETWORK: Data-vlan7
AVI_DATA_NETWORK_CIDR: 192.168.7.0/24
AVI_ENABLE: "true"
# AVI_LABELS: ""
AVI_LABELS: |
    'ako': 'yes'
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.7.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: Data-vlan7
AVI_PASSWORD: <encoded:TT....Eh>
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_USERNAME: admin
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: management-cluster-1
CLUSTER_PLAN: dev
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: ""
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /vSAN Datacenter
VSPHERE_DATASTORE: /vSAN Datacenter/datastore/vsanDatastore
VSPHERE_FOLDER: /vSAN Datacenter/vm/tkgm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /vSAN Datacenter/network/VM Network
VSPHERE_PASSWORD: <encoded:T...Eh>
VSPHERE_RESOURCE_POOL: /vSAN Datacenter/host/vSAN Cluster/Resources
VSPHERE_SERVER: vcenter.home.lab
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB....U0uAr/T2MRsJLw== admin@home.lab
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"
DEPLOY_TKG_ON_VSPHERE7: true

TKGm – Guest Clusters

Check out this guide that I created: Create a TKGm Guest Cluster. For this walkthrough, I created 2 guest clusters with the following names:

  • Guest Cluster 1: guest-cluster-1
  • Guest Cluster 2: guest-cluster-2

The only additional thing I did for each of these guest clusters was, again, to make sure they’re not labeled, so that AKO does not auto-deploy.

Example guest-cluster-1.yaml

AVI_CA_DATA_B64: LS0tLS...S0tCg==
AVI_CLOUD_NAME: vcenter
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROLLER: avi-controller.home.lab
AVI_DATA_NETWORK: Data-vlan7
AVI_DATA_NETWORK_CIDR: 192.168.7.0/24
AVI_ENABLE: "true"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.7.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: Data-vlan7
AVI_PASSWORD: <encoded:TT....Eh>
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_USERNAME: admin
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: guest-cluster-1
CLUSTER_PLAN: dev
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: ""
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /vSAN Datacenter
VSPHERE_DATASTORE: /vSAN Datacenter/datastore/vsanDatastore
VSPHERE_FOLDER: /vSAN Datacenter/vm/tkgm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /vSAN Datacenter/network/VM Network
VSPHERE_PASSWORD: <encoded:TT....Eh>
VSPHERE_RESOURCE_POOL: /vSAN Datacenter/host/vSAN Cluster/Resources
VSPHERE_SERVER: vcenter.home.lab
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB....U0uAr/T2MRsJLw== admin@home.lab
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"

Example guest-cluster-2.yaml config:

AVI_CA_DATA_B64: LS0tLS...S0tCg==
AVI_CLOUD_NAME: vcenter
AVI_CONTROL_PLANE_HA_PROVIDER: "true"
AVI_CONTROLLER: avi-controller.home.lab
AVI_DATA_NETWORK: Data-vlan7
AVI_DATA_NETWORK_CIDR: 192.168.7.0/24
AVI_ENABLE: "true"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.7.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: Data-vlan7
AVI_PASSWORD: <encoded:TT....Eh>
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_USERNAME: admin
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: guest-cluster-2
CLUSTER_PLAN: dev
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: ""
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /vSAN Datacenter
VSPHERE_DATASTORE: /vSAN Datacenter/datastore/vsanDatastore
VSPHERE_FOLDER: /vSAN Datacenter/vm/tkgm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /vSAN Datacenter/network/VM Network
VSPHERE_PASSWORD: <encoded:TT....Eh>
VSPHERE_RESOURCE_POOL: /vSAN Datacenter/host/vSAN Cluster/Resources
VSPHERE_SERVER: vcenter.home.lab
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB....U0uAr/T2MRsJLw== admin@home.lab
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "40"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "2"

So after deploying a management TKGm cluster and 2 guest clusters, your Avi controller should look very similar to this:
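
Before moving on, it’s also worth a quick check from the tanzu CLI (run on the bootstrap machine) that the management cluster and both guest clusters are up and healthy:

### Verify the management cluster and both guest clusters
tanzu management-cluster get
tanzu cluster list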

Configure AKO on each cluster

Configure AKO on Cluster 1

OK, so before we deploy any AMKO components, let’s deploy AKO on each cluster and test each one with an Ingress to make sure that everything is working.

### Switch contexts to use the guest-cluster-1
tanzu cluster kubeconfig get guest-cluster-1 --admin
kubectl config use-context guest-cluster-1-admin@guest-cluster-1

### AKO Install
kubectl create ns avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm install ako/ako --generate-name --version 1.7.2 -f guest_cluster_1_ako_values.yaml -n avi-system

See an example guest_cluster_1_ako_values.yaml file below. The important things to change:

  • clusterName – Name this something unique, like guest-cluster-1.
  • vipNetworkList – This is the list of VIP networks; it should be the same network that is configured in the IPAM profile.
  • shardVSSize – For labs, just use SMALL. This controls how many shared VSes are created and how traffic is scaled out across them.
  • ControllerSettings – Add the details for your Avi controller here.
  • avicredentials – Add the Avi controller credentials.

# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/ako
  pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO 
  logLevel: WARN   # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI 
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: my-cluster   # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: '' # Set the string if your CNI is calico or openshift. enum: calico|canal|flannel|openshift|antrea|ncp
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains label key and value used for namespacemigration
  # Same label has to be present on namespace/s which needs migration/sync to AKO
  namespaceSelector:
    labelKey: ''
    labelValue: ''
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible 
                     # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1 
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode

### This section outlines the network settings for virtualservices. 
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  # nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '' # T1 Logical Segment mapping for backend network. Only applies to NSX-T cloud.
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  vipNetworkList: [] # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  vipNetworkList:
   - networkName: Data-vlan7
     cidr: 192.168.7.0/24

### This section outlines all the knobs  used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true, will get rid of poolgroups from SNI VSes. Do not use this flag, if you don't want http caching. This will be deprecated once the controller support caching on PGs.
  serviceType: NodePort # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL   # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL   # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.

### This section outlines all the knobs  used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes.
  autoFQDN: disabled   # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group   # Name of the ServiceEngine Group.
  controllerVersion: '21.1.4' # The controller API version
  cloudName: vcenter   # The configured cloud name on the Avi controller.
  controllerHost: 'avi-controller.home.lab' # IP address or Hostname of Avi Controller
  tenantName: admin   # Name of the tenant where all the AKO objects will be created in AVI.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false


avicredentials:
  username: 'admin'
  password: 'PASSWORD'
  authtoken:
  certificateAuthorityData:


persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log

Let’s check the status of the AKO Pod by running the following:

# Get the status of the AKO pod
kubectl get pods -n avi-system

# Tail the logs of the AKO pod:
kubectl logs ako-0 -n avi-system -f
Waiting for the ako-0 pod to be ready...
The pod is ready; now I’m checking the logs.
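
If you’d rather not watch the pod by hand, something like this works too (a minimal sketch using standard helm and kubectl commands):

### Confirm the AKO chart actually installed
helm list -n avi-system

### Block until the ako-0 pod reports Ready (5 minute timeout)
kubectl wait --for=condition=Ready pod/ako-0 -n avi-system --timeout=300s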

Test AKO by creating an Ingress

We’re going to deploy a simple ingress via kubectl, and test that AKO is correctly creating the objects in Avi.

Example ingress.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: alexfeig/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
spec:
  rules:
  - host: blue.avi.home.lab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue
            port:
              number: 80
# Apply the Ingress
kubectl apply -f ingress.yaml
Applying the ingress
Ingress config is pushed to Avi and we can see the VS created.
If your DNS is correctly configured and pointing to the Avi DNS Service, you should be able to query the FQDN blue.avi.home.lab and see this page!
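
You can also verify from the command line; a quick dig and curl (hostnames here match my lab) confirm the whole path from DNS through the Avi VS to the pods:

### Resolve the Ingress FQDN; this should return the VIP Avi allocated for the shared VS
dig blue.avi.home.lab +short

### Hit the application through the Avi VS
curl http://blue.avi.home.lab/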

Configure AKO on Cluster 2

Switch contexts to guest-cluster-2 and repeat the same steps as above. Make sure to adjust any networks required for cluster 2 (in my case, I’m using the same VIP network). Also change the clusterName to something unique, like guest-cluster-2.

You can deploy an Ingress in cluster 2 to test as well, but make sure to change some of the unique identifiers so that the same labels don’t get added to the Avi objects. If you do deploy an Ingress in cluster 2, I’d recommend deleting it before adding AMKO.

### Switch contexts to use guest-cluster-2
tanzu cluster kubeconfig get guest-cluster-2 --admin
kubectl config use-context guest-cluster-2-admin@guest-cluster-2

### AKO Install
kubectl create ns avi-system
helm install ako/ako --generate-name --version 1.7.2 -f guest_cluster_2_ako_values.yaml -n avi-system

### Wait for AKO to bootup

# Get the status of the AKO pod
kubectl get pods -n avi-system

# Tail the logs of the AKO pod:
kubectl logs ako-0 -n avi-system -f


### Apply the Ingress
kubectl apply -f ingress2.yaml
Avi with the VS shards for both of the guest clusters.

Configure AMKO on guest-cluster-1

AMKO can be run in a federated manner, in which an AMKO pod is deployed onto each of the clusters. In this example, we’re just deploying a simple AMKO pod (via helm) into one of the clusters, guest-cluster-1.

Create the gslb-members config file

We need to create a kubeconfig file with the permissions to read the service and the ingress/route objects for all the member clusters. More info
Name this file gslb-members and generate a secret from that kubeconfig file in the cluster where AMKO will run (guest-cluster-1), as shown below:

### Switch back to guest-cluster-1 context
kubectl config use-context guest-cluster-1-admin@guest-cluster-1

### Setup the context file
cp ~/.kube/config gslb-members

### Create the K8s secret
kubectl create secret generic gslb-config-secret --from-file gslb-members -n avi-system
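
Since AMKO uses this kubeconfig to reach every member cluster, it’s worth confirming that the copied file actually contains contexts for both guest clusters (a quick sanity check):

### Verify both guest cluster contexts are present in the gslb-members file
kubectl config get-contexts --kubeconfig gslb-members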

Create and Deploy the amko_values.yaml config file

The first thing we’ll do is create the values YAML file for AMKO. A couple of things to change in here:

  • currentCluster – Set this to your leader cluster’s full context, guest-cluster-1-admin@guest-cluster-1 in my case.
  • currentClusterIsLeader – Set this to true, since we’re deploying AMKO in the leader cluster.
  • memberClusters – Since we’re not federating AMKO in this example, just list the guest-cluster-1 context.
  • configs.gslbLeaderController – Avi controller FQDN/IP.
  • configs.controllerVersion – Avi controller version.
  • configs.memberClusters – Add both Kubernetes clusters here, using their full contexts (e.g. guest-cluster-1-admin@guest-cluster-1).
  • configs.useCustomGlobalFqdn – Set this to true, since our GSLB names will use the format *.gslb.avi.home.lab.
  • gslbLeaderCredentials – Set the Avi credentials.
  • globalDeploymentPolicy.appSelector – AMKO can select objects either via a namespace selector or an app selector. We’ll use the app selector and create a label (app: gslb).
  • globalDeploymentPolicy.matchClusters – Set these to both Kubernetes clusters.

Create an example amko_values.yaml config:

# Default values for amko.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: projects.registry.vmware.com/ako/amko
  pullPolicy: IfNotPresent

# Configs related to AMKO Federator
federation:
  # image repository
  image:
    repository: projects.registry.vmware.com/ako/amko-federator
    pullPolicy: IfNotPresent
  # cluster context where AMKO is going to be deployed
  currentCluster: 'guest-cluster-1-admin@guest-cluster-1'
  # Set to true if AMKO on this cluster is the leader
  currentClusterIsLeader: true
  # member clusters to federate the GSLBConfig and GDP objects on, if the
  # current cluster context is part of this list, the federator will ignore it
  memberClusters:
  - guest-cluster-1-admin@guest-cluster-1

# Configs related to AMKO Service discovery
serviceDiscovery:
  # image repository
  # image:
  #   repository: projects.registry.vmware.com/ako/amko-service-discovery
  #   pullPolicy: IfNotPresent

# Configs related to Multi-cluster ingress. Note: MultiClusterIngress is a tech preview.
multiClusterIngress:
  enable: false

configs:
  gslbLeaderController: 'avi-controller.home.lab'
  controllerVersion: 21.1.4
  memberClusters:
  - clusterContext: guest-cluster-1-admin@guest-cluster-1
  - clusterContext: guest-cluster-2-admin@guest-cluster-2
  refreshInterval: 1800
  logLevel: INFO
  # Set the below flag to true if a different GSLB Service fqdn is desired than the ingress/route's
  # local fqdns. Note that, this field will use AKO's HostRule objects' to find out the local to global
  # fqdn mapping. To configure a mapping between the local to global fqdn, configure the hostrule
  # object as:
  # [...]
  # spec:
  #  virtualhost:
  #    fqdn: foo.avi.com
  #    gslb:
  #      fqdn: gs-foo.avi.com
  useCustomGlobalFqdn: true  # Set to true since our GSLB FQDNs use a different subdomain, i.e. *.gslb.avi.home.lab

gslbLeaderCredentials:
  username: 'admin'
  password: 'PASSWORD'

globalDeploymentPolicy:
  # appSelector takes the form of:
  appSelector:
    label:
      app: gslb
  # Uncomment below and add the required ingress/route/service label
  # appSelector:

  # namespaceSelector takes the form of:
  # namespaceSelector:
  #   label:
  #     ns: gslb   <example label key-value for namespace>
  # Uncomment below and add the required namespace label
  # namespaceSelector:

  # list of all clusters that the GDP object will be applied to, can take any/all values
  # from .configs.memberClusters
  matchClusters:
  - cluster: guest-cluster-1-admin@guest-cluster-1
  - cluster: guest-cluster-2-admin@guest-cluster-2

  # list of all clusters and their traffic weights, if unspecified, default weights will be
  # given (optional). Uncomment below to add the required trafficSplit.
  # trafficSplit:
  #   - cluster: "cluster1-admin"
  #     weight: 8
  #   - cluster: "cluster2-admin"
  #     weight: 2

  # Uncomment below to specify a ttl value in seconds. By default, the value is inherited from
  # Avi's DNS VS.
  # ttl: 10

  # Uncomment below to specify custom health monitor refs. By default, HTTP/HTTPS path based health
  # monitors are applied on the GSs.
  # healthMonitorRefs:
  # - hmref1
  # - hmref2

  # Uncomment below to specify a Site Persistence profile ref. By default, Site Persistence is disabled.
  # Also, note that, Site Persistence is only applicable on secure ingresses/routes and ignored
  # for all other cases. Follow https://avinetworks.com/docs/20.1/gslb-site-cookie-persistence/ to create
  # a Site persistence profile.
  # sitePersistenceRef: gap-1

  # Uncomment below to specify gslb service pool algorithm settings for all gslb services. Applicable
  # values for lbAlgorithm:
  # 1. GSLB_ALGORITHM_CONSISTENT_HASH (needs a hashMask field to be set too)
  # 2. GSLB_ALGORITHM_GEO (needs geoFallback settings to be used for this field)
  # 3. GSLB_ALGORITHM_ROUND_ROBIN (default)
  # 4. GSLB_ALGORITHM_TOPOLOGY
  #
  # poolAlgorithmSettings:
  #   lbAlgorithm:
  #   hashMask:           # required only for lbAlgorithm == GSLB_ALGORITHM_CONSISTENT_HASH
  #   geoFallback:        # fallback settings required only for lbAlgorithm == GSLB_ALGORITHM_GEO
  #     lbAlgorithm:      # can only have either GSLB_ALGORITHM_ROUND_ROBIN or GSLB_ALGORITHM_CONSISTENT_HASH
  #     hashMask:         # required only for fallback lbAlgorithm as GSLB_ALGORITHM_CONSISTENT_HASH

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

resources:
  limits:
    cpu: 250m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 200Mi

service:
  type: ClusterIP
  port: 80

rbac:
  # creates the pod security policy if set to true
  pspEnable: false

persistentVolumeClaim: ''
mountPath: /log
logFile: amko.log

federatorLogFile: amko-federator.log

Now it’s time to deploy the amko_values.yaml file.

### Make sure you're in the correct context (guest-cluster-1)
kubectl config use-context guest-cluster-1-admin@guest-cluster-1

### Install AMKO via helm
helm install ako/amko --generate-name --version 1.7.1 -f amko_values.yaml --namespace=avi-system

### Tail the AMKO pod to see the logs
kubectl logs amko-0 -n avi-system -c amko -f
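
Once the AMKO pod is running, you can also inspect the custom resources the chart created from the values file. Assuming the standard AMKO CRDs (GSLBConfig and GlobalDeploymentPolicy), their status should reflect whether AMKO could reach the GSLB leader:

### Inspect the GSLBConfig and GlobalDeploymentPolicy objects AMKO created
kubectl get gslbconfig -n avi-system -o yaml
kubectl get globaldeploymentpolicy -n avi-system -o yaml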

Deploy a GSLB Service using the AKO HostRule

Now that AMKO is up and running in guest-cluster-1, we need to modify our Ingress slightly and add a HostRule.
We can do all of this by simply modifying the ingress.yaml file we created earlier.

### As you can see the file is basically the same, I'll annotate the additions.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  selector:
    matchLabels:
      app: blue
  replicas: 2
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: alexfeig/bluegreen:latest
        ports:
        - containerPort: 5000
        env:
        - name: app_color
          value: "blue"
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: blue

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue
  labels: ### Add the label so that we can tag the ingress.
    app: gslb ### Add the label so that we can tag the ingress.
spec:
  rules:
  - host: blue.avi.home.lab
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue
            port:
              number: 80


---
### Add the following HostRule (note the --- so it's applied as a separate YAML document)
apiVersion: ako.vmware.com/v1alpha1
kind: HostRule
metadata:
  name: specific-host-rule
spec:
  virtualhost:
    fqdn: blue.avi.home.lab
    enableVirtualHost: true
    gslb:
      fqdn: blue.gslb.avi.home.lab

And that’s basically it. Save the ingress.yaml file and run the apply command:

### Apply the ingress
kubectl apply -f ingress.yaml

### View the pods
kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
blue-8bd5b8489-mv4gq   1/1     Running   0          12m
blue-8bd5b8489-zghjv   1/1     Running   0          12m
You can see the blue Ingress VS is there. Don’t worry about the red VS; that’s because I deleted the VS from guest-cluster-2.
Here is the GSLB Service, showing green.
And finally, a quick dig against the Avi DNS Service shows that it is responding with the correct VS IP.
Hitting the blue application via GSLB.
A quick glance at Avi’s DNS logs shows all the queries coming through, and all are successful.
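
To reproduce that check yourself, query the GSLB FQDN and hit the application on its global name (names here match my lab). With both sites healthy and the default round-robin pool algorithm, repeated queries should alternate between the two member VIPs:

### Query the GSLB FQDN directly against the Avi DNS VS
dig blue.gslb.avi.home.lab @avins1.home.lab +short

### Or resolve through the home DNS server, following the delegation
dig blue.gslb.avi.home.lab +short

### And confirm the application answers on the GSLB name
curl http://blue.gslb.avi.home.lab/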
