NSX Application Platform (NAPP), first introduced in NSX-T 3.2, is a containerized platform that hosts the following NSX features:
- NSX Intelligence
- NSX Malware Prevention
- NSX Network Detection and Response (NDR)
- NSX Metrics
To enable NSX Application Platform (NAPP), we require a Tanzu Kubernetes Cluster (TKC) or any CNCF-conformant upstream Kubernetes cluster with the required form factor to host the platform. By moving to NAPP, these features scale better, as the platform can be scaled out as and when needed, compared to the traditional OVA-based appliance model.
In Part 1 of this four-part blog series, we will discuss the prerequisites for NSX Application Platform and walk through the existing environment, where we have vSphere with Tanzu on vSphere networking with NSX Advanced Load Balancer.
In Part 2, we will provision a Tanzu Kubernetes Cluster, perform pre-checks, deploy NSX Application Platform and verify the platform components deployed on the Tanzu Kubernetes Cluster.
In Part 3, we will demonstrate a NAPP form factor upgrade from Standard to Advanced.
And finally, in Part 4, we will perform a NAPP scale-out to additional TKC worker nodes.
Let’s get started.
NSX Application Platform Prerequisites
Before deploying NAPP, we need to ensure the following prerequisites are met.
- A supported TKC or upstream Kubernetes cluster, sized to support the NAPP Standard or Advanced form factor. The supported versions are documented at https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-D54C1B87-8EF3-45B3-AB27-EFE90A154DD3.html
- The TKC or upstream Kubernetes cluster should support Services of type LoadBalancer. An ingress controller is not required, as the NAPP deployment workflow configures Contour as the ingress controller.
- The TKC or upstream Kubernetes cluster should have a CSI driver installed to support provisioning of persistent volumes. TKG clusters use the vSphere CSI driver by default.
- NSX Managers and the TKC cluster need access to https://projects.registry.vmware.com (HTTPS) to download the Docker images and Helm charts for deployment and upgrades. If internet access is not available, we need to set up a private container registry, such as Harbor, to host the Docker images and Helm charts.
- The NSX version must be compatible with the NAPP version. NAPP is versioned separately from NSX; for example, NSX 4.1 ships with NAPP 4.0.1. Please refer to the compatibility guide in the official VMware documentation at https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-D54C1B87-8EF3-45B3-AB27-EFE90A154DD3.html
- Two DNS records with static IP addresses need to be created before starting the NAPP deployment workflow. These IP addresses come from the load balancer’s VIP allocation pool.
- Interface Service: This is the HTTPS endpoint used to access NAPP.
- Messaging Service: This is the endpoint for the Kafka messaging broker cluster deployed in the TKC / Kubernetes cluster.
- A kubeconfig file needs to be downloaded from the TKC or upstream Kubernetes cluster. If using TKGS, we need to create a kubeconfig file with a non-expiring token, as per the instructions at https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F.html#GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F. We will do this in Part 2.
- A valid NSX license is required, as per https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-34EC60F7-E335-45AF-A237-727E72E0F1F6.html#GUID-34EC60F7-E335-45AF-A237-727E72E0F1F6. Thanks to the vExpert program, I am entitled to NSX Data Center Evaluation licenses in my home lab, which support NAPP.
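As a quick sanity check before starting the deployment workflow, the cluster-side prerequisites above can be verified with a few commands. This is a minimal sketch, assuming a kubeconfig already points at the target TKC; the throwaway deployment name and the CSI namespace are illustrative (TKG guest clusters typically run the CSI pods in vmware-system-csi).

```shell
# Verify a StorageClass exists (CSI-backed persistent volumes)
kubectl get storageclass

# Confirm the vSphere CSI driver pods are running (TKG default)
kubectl get pods -n vmware-system-csi 2>/dev/null || kubectl get pods -A | grep -i csi

# Check that Services of type LoadBalancer get an external IP:
# create a throwaway service and watch for an address from the NSX ALB VIP pool
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test

# Verify reachability of the public registry hosting the NAPP images and Helm charts
curl -sI https://projects.registry.vmware.com/v2/ | head -1

# Clean up the test objects
kubectl delete svc,deployment lb-test
```

If the LoadBalancer service stays in Pending, the AKO integration (covered later in this post) is the first thing to check.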
NSX Application Platform Sizing
NSX Application Platform comes in two form factors:
- Standard
- Advanced
The Standard form factor supports NSX Malware Prevention, NSX NDR and NSX Metrics, and requires a TKC cluster with a minimum of 1 master node and 3 worker nodes with 4 vCPUs / 16 GB RAM / 200 GB storage. NSX Intelligence is not supported with the Standard form factor.
The Advanced form factor has a higher footprint and requires a TKC cluster with a minimum of 1 master node and 3 worker nodes with 16 vCPUs / 64 GB RAM / 1 TB storage. NSX Intelligence is supported only in the Advanced form factor.
Note: Scale-out of NSX Application Platform is supported only in the Advanced form factor.
We also have an Evaluation form factor, which can be used for test and PoC use cases, but it is neither supported for production nor can it be upgraded to the Standard or Advanced form factor.
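Once the worker nodes are up, their capacity can be compared against the chosen form factor minimums with a quick query. A sketch using standard Node status fields:

```shell
# List per-node CPU and memory capacity to validate against the
# Standard (4 vCPU / 16 GB) or Advanced (16 vCPU / 64 GB) worker minimums
kubectl get nodes -o custom-columns=\
NAME:.metadata.name,\
CPU:.status.capacity.cpu,\
MEMORY:.status.capacity.memory
```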
Current environment walkthrough – vSphere with Tanzu
As mentioned in the prerequisites, we require a Tanzu Kubernetes Cluster or an upstream Kubernetes cluster to support the deployment of NSX Application Platform. For this article, I have built a vSphere with Tanzu environment on vSphere networking with NSX Advanced Load Balancer.
Just FYI, I wrote some blog posts in the past on vSphere with Tanzu, where we walked through the architecture, deployment, Tanzu CLI and much more, but those were on NSX-T networking. If you would like a refresher, please check them out at the links below:
- vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture – Part 1
- vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture – Part 2 – Supervisor Cluster
- vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture – Part 3 – TKG Compute Clusters
- vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture – Part 4 – TKG CLI
- NSX-T Architecture in vSphere with Tanzu – Part 1 – Per TKG Tier1 vs Per Namespace Tier1
- NSX-T Architecture in vSphere with Tanzu – Part 2 – MultiSupervisor Shared T0 vs Dedicated T0
- NSX-T Architecture in vSphere with Tanzu – Part 3 – Dedicated Tier 1 Edge Clusters
- NSX-T Architecture in vSphere with Tanzu – Part 4 – Proxy ARP Gateways
Our environment has two vSphere clusters managed by a single vCenter server “VxDC01-vCenter01.vxplanet.int”:
- Cluster “VxDC01-C01-MGMT-NAPP” is a shared management cluster enabled with vSphere with Tanzu. This hosts the management components (vCenter Server, NSX Manager, NSX ALB controllers, etc.) and also the TKC cluster used for NSX Application Platform. This vSphere cluster is not prepared with NSX.
- Cluster “VxDC01-C02-Compute” is the compute cluster prepared with NSX. This hosts the overlay workloads and the NSX edge transport nodes.
Workload management is enabled on cluster “VxDC01-C01-MGMT-NAPP” using vSphere networking with NSX Advanced Load Balancer. This is a cluster deployment (not the zone deployment introduced in vSphere 8.0), and the supervisor VMs are deployed within the boundary of cluster “VxDC01-C01-MGMT-NAPP”. The following vSphere networks are used in the deployment:
- VxDC01-C01-VDS01-MGMT-V1001 – VLAN 1001 (172.16.10.0/24) for Supervisor cluster management network
- VxDC01-C01-VDS01-GenVM-V1005 – VLAN 1005 (172.16.50.0/24) for TKG Workload network
- VxDC01-C01-VDS01-VIP-V1009 – VLAN 1009 (172.16.90.0/24) for VIP network. This is defined via IPAM profiles in NSX ALB.
Successful enablement of workload management will create a virtual service for the supervisor kube-VIP endpoint in NSX ALB.
As the supervisor cluster is integrated with NSX ALB, it runs the AKO (Avi Kubernetes Operator) pod, and the guest TKG clusters use the vSphere cloud provider plugin to relay Service requests of type LoadBalancer to the supervisor cluster, where they are provisioned as L4 virtual services in NSX ALB.
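To illustrate the flow, here is what such a request looks like from a guest TKC: a plain Service of type LoadBalancer, which is relayed to the supervisor cluster and surfaces as an L4 virtual service in NSX ALB with a VIP from the VLAN 1009 pool. The service and selector names below are illustrative.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-lb          # illustrative name
spec:
  type: LoadBalancer     # relayed to the supervisor, realized as an NSX ALB virtual service
  selector:
    app: demo            # illustrative backend pods
  ports:
  - port: 80
    targetPort: 8080
EOF

# The EXTERNAL-IP column should populate with an address
# from the 172.16.90.0/24 VIP network (VLAN 1009)
kubectl get svc demo-lb
```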
Network “VxDC01-C01-VDS01-MGMT-V1001” on VLAN 1001 is the management network for the supervisor cluster VMs, NSX Managers, NSX ALB Controllers, vCenter server and ESXi hosts.
We have a single workload network. The primary workload network is “VxDC01-C01-VDS01-GenVM-V1005” (VLAN 1005), and this is where the Tanzu Kubernetes clusters are deployed.
We have a supervisor namespace “vxdc01-napp” that hosts the TKC clusters for NSX Application Platform. We have associated storage policies and a few VM classes to define the sizing of the TKG clusters for NAPP (Standard and Advanced form factors).
As mentioned above, the TKC clusters are deployed on the primary workload network “VxDC01-C01-VDS01-GenVM-V1005” in the namespace. We will provision the TKC cluster for NAPP in Part 2.
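Before provisioning in Part 2, the resources made available to the namespace can be inspected after logging in with the kubectl vsphere plugin. A sketch, assuming the supervisor VIP placeholder is replaced with this environment's endpoint:

```shell
# Log in to the supervisor cluster (replace the placeholder with the supervisor kube-VIP)
kubectl vsphere login --server=<supervisor-vip> \
  --tanzu-kubernetes-cluster-namespace=vxdc01-napp

# Switch to the NAPP namespace context
kubectl config use-context vxdc01-napp

# VM classes and storage classes associated with the namespace
kubectl get virtualmachineclasses
kubectl get storageclass

# Tanzu Kubernetes releases available for building the TKC
kubectl get tanzukubernetesreleases
```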
Current environment walkthrough – NSX Advanced load balancer
Now let’s walk through the NSX Advanced Load Balancer configuration for vSphere with Tanzu.
vSphere with Tanzu integration with NSX ALB supports only the “Default-Cloud” account. We have configured “Default-Cloud” as a vCenter cloud with write access.
We use a vCenter content library to push the service engine OVA templates to vCenter for deployment.
As discussed previously, the VDS portgroup “VxDC01-C01-VDS01-MGMT-V1001” (VLAN 1001) is the management network for service engine communication with the NSX ALB controller cluster. We chose DHCP for this network, but static IP addressing using IP pools is also supported.
An IPAM profile must be attached to support dynamic IP addressing for the VIPs. Our VIP network is “VxDC01-C01-VDS01-VIP-V1009”.
Note: In the default configuration, only one VIP network is supported in the IPAM profile for vSphere with Tanzu integration. But there is an additional configuration where we can add multiple VIP networks to the IPAM profile and make an explicit VIP network selection for vSphere with Tanzu (however, this is not covered in this article).
Using a DNS profile is optional, but we use it to simplify name resolution for the virtual services. For this purpose, we define a sub-domain “tkg.vxplanet.int” that is authoritative on the DNS virtual service in NSX ALB.
Below is the DNS virtual service hosted in NSX ALB. A delegation / forwarder for tkg.vxplanet.int is configured on the corporate DNS servers, pointed to this DNS virtual service in NSX ALB.
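The delegation can be tested from a client by resolving a name under the sub-domain, both directly against the DNS virtual service and via the corporate DNS servers. The VIP address and record name below are illustrative:

```shell
# Query the DNS virtual service directly for a name under tkg.vxplanet.int
# (172.16.90.4 is an illustrative VIP for the DNS virtual service)
dig @172.16.90.4 demo-lb.tkg.vxplanet.int +short

# Query via the configured resolvers to confirm the delegation / forwarder works
dig demo-lb.tkg.vxplanet.int +short
```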
Finally, this DNS virtual service is added as a system service in NSX ALB.
Below are the three network profiles that we defined for vSphere with Tanzu integration:
- VxDC01-C01-VDS01-MGMT-V1001 – VLAN 1001 (172.16.10.0/24) for Management
- VxDC01-C01-VDS01-GenVM-V1005 – VLAN 1005 (172.16.50.0/24) for TKG Workload network
- VxDC01-C01-VDS01-VIP-V1009 – VLAN 1009 (172.16.90.0/24) for VIP network
Service engines will operate in two-arm mode and will have the data interfaces plumbed into both VxDC01-C01-VDS01-VIP-V1009 and VxDC01-C01-VDS01-GenVM-V1005 for VIP front end and backend pool connectivity.
NSX ALB integration with vSphere with Tanzu requires that the default self-signed portal certificate be replaced with a CA-signed certificate with valid SAN names.
Service engine configuration including sizing, placement and HA mode is defined using service engine groups. We use the “Default-Group” for vSphere with Tanzu integration.
Note: In the default configuration, only the “Default-Group” is supported for vSphere with Tanzu integration. But there is an additional configuration where we can add multiple service engine groups and make explicit service engine group selection for vSphere with Tanzu (However, this is not covered in this article).
The service engines are deployed on the management cluster “VxDC01-C01-MGMT-NAPP” and are co-located with the Tanzu Kubernetes Cluster VMs (workloads). This is defined under the placement settings in the service engine group properties.
For the deployed supervisor cluster, we see that the L4 virtual services for KubeAPI, HTTPS and CSI endpoints are provisioned successfully.
It’s time to wrap up Part 1. We covered how the current vSphere with Tanzu and NSX Advanced Load Balancer environment is set up, and we will continue in Part 2, where we provision a Tanzu Kubernetes Cluster and deploy the NSX Application Platform.
I hope this article was informative.
Thanks for reading.
Continue reading? Here are the other parts of this series:
Part 2 : https://vxplanet.com/2023/05/16/nsx-4-1-application-platform-napp-part-2-deployment/
Part 3 : https://vxplanet.com/2023/05/18/nsx-4-1-application-platform-napp-part-3-form-factor-upgrade/
Part 4 : https://vxplanet.com/2023/05/19/nsx-4-1-application-platform-napp-part-4-napp-scale-out/