TKGs – Use NSX-ALB (Avi) as Ingress Controller for vSphere with Tanzu

Hi!

Introduction

Lately I’ve been playing around with Tanzu Kubernetes Grid Multicloud (TKGm) and Tanzu Kubernetes Grid Service (TKGs) in combination with NSX – Advanced Load Balancer (Avi).
A common Use Case is to provide your K8s environment with an Ingress Controller. While in TKGm this seems to be straightforward and thoroughly documented by VMware, the TKGs approach remained a bit vague to me.

So let's say you enabled Workload Management in vSphere and configured it to use NSX-ALB. This means that your TKGs Supervisor Cluster uses NSX-ALB, and so (to a certain degree) do your Workload Clusters. That NSX-ALB integration can be used to create L4 Load Balancers in your TKGs Workload Clusters. Great! But what about L7 Ingress? For that, the official TKGs documentation refers you to the Avi Kubernetes Operator (AKO) documentation to install and configure AKO yourself. So let's give it a try!

What’s my Setup?

Avi Controller Version: 20.1.6
Avi Controller IP: 10.10.30.180
AKO Version: 1.6
Avi Edition / License: Avi Enterprise (full featured Avi & AKO)
TKGs: vSphere 7.0 u3
Helm Version: 3.7.2
Avi VIP Network: Avi – FrontEnd 60 – 10.10.60.0/24
TKGs Workload Network: Avi – TKGs 70 – 10.10.70.0/24
Avi Cloud: Default-Cloud (vSphere)
Avi Service Engine Group: Default-Group (N+M buffer)
Avi DNS Profile: tkg-dns-profile
Avi IPAM Profile: tkg-ipam-profile
Avi Basic Authentication Enabled: Yes
TKGs Supervisor Cluster: 10.10.70.11
TKGs Workload Namespace: dev
TKGs Workload Cluster: tkgs-v2-wl01

Prerequisites

You need to have the following in place:

  • Avi Controller configured with (see VMware Install Guide):
    • A cloud (e.g.: Default-Cloud)
    • A Service Engine Group (e.g.: Default-Group)
    • A VIP Network with a Static IP Address Pool
    • A TKGs Workload Network with a Static IP Address Pool
    • An IPAM Profile
    • A DNS Profile
    • Basic Authentication enabled
  • TKGs configured with (see VMware Install Guide):
    • Supervisor Cluster
    • The Workload Cluster where you would like to enable Ingress
  • Helm up and running

Let’s get started!

1. Make sure to log in to your TKGs Workload Cluster:

kubectl vsphere login --server=<Supervisor Cluster VIP> --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=<Your-Namespace-of-Your-Workload-Cluster> --tanzu-kubernetes-cluster-name=<Workload-Cluster-Name>
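
With the values from the setup table above, the login in my lab would look like this:

kubectl vsphere login --server=10.10.70.11 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=dev --tanzu-kubernetes-cluster-name=tkgs-v2-wl01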

2. Make sure to use the context of your Workload Cluster

kubectl config get-contexts
kubectl config use-context <Your-Workload-Cluster>
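
The context created by the vSphere plugin is typically named after the Workload Cluster, so in my setup that would be:

kubectl config use-context tkgs-v2-wl01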

3. Create the ‘avi-system‘ Namespace in your Workload Cluster:

kubectl create ns avi-system

4. Make sure that Service Accounts are allowed to run containers as root in the avi-system Namespace, because the AKO Pod runs as root. For example, you can achieve this by leveraging the default VMware PodSecurityPolicy as shown below:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-default-privileged-sa-ns_avi-system
  namespace: avi-system
roleRef:
  kind: ClusterRole
  name: psp:vmware-system-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
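
Save the manifest to a file (the file name below is just an example) and apply it:

kubectl apply -f rolebinding-ako-psp.yaml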

5. Add VMware’s public Harbor repository to Helm:

helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
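
It can’t hurt to refresh your local chart index afterwards so the latest chart versions show up:

helm repo update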

6. Search the available charts for AKO:

helm search repo ako
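
If you want to see every published chart version instead of only the latest one, add the --versions flag:

helm search repo ako --versions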

7. Export the values.yaml file from this chart so you can edit the values related to your Avi configuration:

helm show values ako/ako --version 1.6.2 > values.yaml

8. Edit the values.yaml file to reflect your Avi configuration. I modified only the following settings:

AKOSettings:
  clusterName: tkgs-v2-wl01 # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: 'antrea' # I'm using Antrea as CNI Plugin in my TKGS environment
  layer7Only: true # If this flag is switched on, AKO will only do layer 7 load balancing. The AKO on the Supervisor Control Plane remains responsible for L4 load balancing.
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  nodeNetworkList: 
    - networkName: "AVI - TKGS 70"
      cidrs:
        - 10.10.70.0/24
  vipNetworkList:  # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
    - networkName: "AVI - FrontEnd 60"
      cidr: 10.10.60.0/24

### This section outlines settings on the Avi controller that affect AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group   # Name of the ServiceEngine Group.
  controllerVersion: '20.1.6' # The controller API version
  cloudName: Default-Cloud   # The configured cloud name on the Avi controller.
  controllerHost: '10.10.30.180' # IP address or Hostname of Avi Controller
  tenantName: admin   # Name of the tenant where all the AKO objects will be created in AVI.
avicredentials:
  username: '<YOUR-AVI-USERNAME>'
  password: '<YOUR-AVI-PASSWORD>'

9. Install AKO on your Workload Cluster:

helm install ako-l7 ako/ako --version 1.6.2 -f /path/to/values.yaml --namespace=avi-system
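
If you would rather not keep the Avi credentials in values.yaml, Helm can take them on the command line instead; the keys below match the avicredentials section of the chart values shown above (a sketch, adjust paths and names to your setup):

helm install ako-l7 ako/ako --version 1.6.2 -f /path/to/values.yaml --namespace=avi-system \
  --set avicredentials.username='<YOUR-AVI-USERNAME>' \
  --set avicredentials.password='<YOUR-AVI-PASSWORD>'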

10. Verify the installation via Helm:

helm list -n avi-system

11. Verify the installation via kubectl:

kubectl get pods -n avi-system
kubectl logs <POD-Name> -n avi-system
kubectl get ingressclass

Note: in my lab environment the Info & Warning entries in the AKO logs turned out to be harmless.

If all looks fine, then you should have a working AKO setup!

Run a Test Workload with Ingress

Time to test it, right? Let’s go!

Around November 2021 I followed a very interesting NSX-ALB architecture course led by Nicolas Bayle, who shared some of his K8s YAML examples via his GitHub. The examples below leverage those files. Thanks a lot!

Basically we’ll deploy three versions of a BusyBox web app, create a ClusterIP Service for each, and create an Ingress object that uses those Services.

1. Deploy Busybox Website Pods & ClusterIP Services:

# Deploy 3 Versions of the Busybox Web App

kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sDeploymentBusyBoxFrontEndV1.yml
kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sDeploymentBusyBoxFrontEndV2.yml
kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sDeploymentBusyBoxFrontEndV3.yml

# Create for each Version a Service Type ClusterIp

kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sSvcClusterIpBusyBoxFrontEndV1.yml
kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sSvcClusterIpBusyBoxFrontEndV2.yml
kubectl apply -f https://raw.githubusercontent.com/tacobayle/k8sYaml/master/k8sSvcClusterIpBusyBoxFrontEndV3.yml

2. Create an Ingress YAML file ‘ingress-busybox-clusterip.yaml‘:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: ingress-example.avi.potus.local #Provide Your Hostname here
      http:
        paths:
          - pathType: Prefix
            path: "/v1"
            backend:
              service:
                name: web-front-1
                port:
                  number: 80
          - pathType: Prefix
            path: "/v2"
            backend:
              service:
                name: web-front-2
                port:
                  number: 80
          - pathType: Prefix
            path: "/v3"
            backend:
              service:
                name: web-front-3
                port:
                  number: 80
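
Note: if your cluster runs more than one ingress controller, you may want to pin this Ingress to the AKO IngressClass by adding ingressClassName under spec. The class name below is an assumption based on AKO defaults, so check the output of kubectl get ingressclass for the actual name in your cluster:

spec:
  ingressClassName: avi-lb # assumed default AKO IngressClass name; verify with 'kubectl get ingressclass'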

3. Deploy your Ingress:

kubectl apply -f ingress-busybox-clusterip.yaml

4. Verify that it is running:

kubectl get pods
kubectl get services
kubectl get ingress
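
kubectl get ingress should show the VIP that Avi allocated from the FrontEnd network in the ADDRESS column. A quick way to test without DNS is to send the host header manually; replace <INGRESS-VIP> with the address from the previous command:

curl -H "Host: ingress-example.avi.potus.local" http://<INGRESS-VIP>/v1
curl -H "Host: ingress-example.avi.potus.local" http://<INGRESS-VIP>/v2
curl -H "Host: ingress-example.avi.potus.local" http://<INGRESS-VIP>/v3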

5. Check the Applications on your Avi Controller

That’s it!

Delete / Uninstall AKO from your Workload Cluster

If you would like to delete AKO from your Workload Cluster, you can easily achieve this with the following steps:

1. Make sure you are working (kubectl) in the context of your Workload Cluster

2. Edit the ConfigMap used by AKO, look for the ‘deleteConfig‘ flag, and set it to ‘true‘ (this tells AKO to clean up the objects it created on the Avi Controller):

kubectl edit configmap avi-k8s-config -n avi-system
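
If you prefer a non-interactive command over opening an editor, the same change can be made with kubectl patch:

kubectl patch configmap avi-k8s-config -n avi-system --type merge -p '{"data":{"deleteConfig":"true"}}'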

3. Identify your installed Helm releases:

helm list --all-namespaces --all

4. Uninstall the AKO Helm release, using the release name reported by helm list (in this walkthrough that is ako-l7):

helm uninstall ako-l7 -n avi-system
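
If you also want to clean up the Namespace that was created for AKO earlier, you can remove it once the release is gone:

kubectl delete ns avi-system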

If you have any comments or questions, don’t hesitate to let us know!
