Welcome back!!! We are at Part 2 of the blog series on NSX 4.1 Application Platform (NAPP).
In Part 1 we discussed the prerequisites for NAPP and walked through the current environment, which is set up with vSphere with Tanzu on VDS networking with NSX Advanced Load Balancer.
Here is the link to Part 1, in case you missed it:
In this article we will provision a Tanzu Kubernetes Cluster (TKC) on the shared vSphere management cluster enabled with Workload Management (vSphere with Tanzu), perform pre-checks, deploy NSX Application Platform (NAPP) and review the NAPP components deployed on the TKC cluster.
Let’s get started:
Provisioning Tanzu Kubernetes Cluster for NAPP
As discussed in Part 1, we have a dedicated supervisor namespace “vxdc01-napp” for NAPP on the management vSphere cluster “VxDC01-C01-MGMT-NAPP” where we will deploy the Tanzu Kubernetes Cluster.
Let’s log in to the supervisor cluster, switch the context to the supervisor namespace “vxdc01-napp” and perform some basic checks before deploying the Tanzu Kubernetes Cluster. Specifically, we will verify that the VM class, storage class and Tanzu Kubernetes Release (TKR) needed to deploy the TKC are available.
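These checks can be run with kubectl along the following lines (a sketch; the Supervisor VIP and username are placeholders for your environment):

```shell
# Log in to the Supervisor cluster (server address and username are placeholders).
kubectl vsphere login --server=<supervisor-vip> \
  --vsphere-username administrator@vsphere.local

# Switch the context to the dedicated NAPP supervisor namespace.
kubectl config use-context vxdc01-napp

# Verify the VM classes bound to the namespace (we expect "best-effort-large").
kubectl get virtualmachineclassbindings

# Verify the storage classes available in the namespace.
kubectl get storageclass

# Verify that a 1.21.x Tanzu Kubernetes Release is available.
kubectl get tanzukubernetesreleases
```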
As outlined in the agenda in Part 1, we will deploy NAPP in the Standard form factor, and in the next part (Part 3) we will demonstrate upgrading the Standard form factor to Advanced. The Standard form factor requires a VM class added to the namespace that supports a minimum of 4 vCPUs and 16 GB of memory; we have chosen “best-effort-large”.
NAPP 4.0.1 supports TKC versions 1.20 – 1.22, so we will deploy TKC v1.21.6 and make sure the corresponding TKR is available in the namespace.
Next, we will deploy a TKC cluster for NAPP with the below spec. We will start with three compute (worker) nodes to support NAPP standard form factor and will scale out to five nodes in Part 4 where we demonstrate NAPP scale-out.
The control plane (master) and compute (worker) nodes require additional mount paths for the etcd and containerd directories, each with a minimum of 64 GB of disk space.
Below is the yaml spec for the TKC cluster:
and let’s deploy the TKC cluster.
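As a reference, a TKC spec along these lines meets the requirements above. This is a sketch rather than the exact manifest from the lab: the cluster name, storage class name and control plane replica count are assumptions.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: vxdc01-napp-tkc            # hypothetical cluster name
  namespace: vxdc01-napp
spec:
  distribution:
    version: v1.21.6
  topology:
    controlPlane:
      replicas: 1                  # assumption; use 3 for an HA control plane
      vmClass: best-effort-large
      storageClass: vsan-default-storage-policy   # assumption; use your namespace storage class
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 64Gi
    nodePools:
      - name: napp-workers
        replicas: 3                # scaled out to 5 in Part 4
        vmClass: best-effort-large
        storageClass: vsan-default-storage-policy
        volumes:
          - name: containerd
            mountPath: /var/lib/containerd
            capacity:
              storage: 64Gi
```

Apply it with `kubectl apply -f tkc-napp.yaml` from the supervisor namespace context and watch progress with `kubectl get tkc -n vxdc01-napp`.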
This will take a while to complete. Let’s take a coffee break and once we come back the deployment should be completed.
Let’s connect to the TKC cluster and verify that storage class is available to provision PVCs for NAPP.
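Logging in to the TKC goes through the same kubectl vsphere plugin; something like the following (the Supervisor VIP and TKC name are placeholders):

```shell
# Log in to the TKC cluster via the Supervisor.
kubectl vsphere login --server=<supervisor-vip> \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace vxdc01-napp \
  --tanzu-kubernetes-cluster-name <tkc-name>

# Switch to the TKC context and confirm the nodes are Ready.
kubectl config use-context <tkc-name>
kubectl get nodes

# Verify the storage class synced down from the supervisor namespace.
kubectl get storageclass
```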
The TKC cluster supports services of type “LoadBalancer” using the vSphere cloud provider plugin. We don’t require a manual installation of AKO (AVI Kubernetes Operator) on the TKC to support ingress services for NAPP, because the NAPP deployment workflow installs Contour as the ingress controller.
As NAPP requires load balancer functionality for the HTTPS and messaging service endpoints, let’s create a test LoadBalancer service and confirm its accessibility. It should get an “External-IP” from the IPAM profile we defined in Part 1.
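A quick way to exercise this is a throwaway nginx deployment exposed as a LoadBalancer service (names here are illustrative):

```shell
# Create a test deployment and expose it as a LoadBalancer service.
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80

# The External-IP should be allocated from the NSX ALB IPAM range.
kubectl get svc lb-test

# Confirm accessibility, then clean up.
curl http://<external-ip>
kubectl delete svc lb-test
kubectl delete deployment lb-test
```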
At this moment, the TKC cluster deployment is successful and is ready for NAPP enablement.
Generating a non-expiring kubeconfig file for NAPP
The default kubeconfig file of the TKC cluster is token based and has a validity of 10 hours after which we need to re-authenticate. For the purpose of NAPP, we need to generate a new kubeconfig file with a non-expiring token, as per the instructions at https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F.html
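The procedure in the linked documentation boils down to creating a service account with cluster-admin rights and building a kubeconfig around its token. A sketch, with illustrative account and file names:

```shell
# Create a service account for NAPP and grant it cluster-admin.
kubectl create serviceaccount napp-admin -n kube-system
kubectl create clusterrolebinding napp-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:napp-admin

# Kubernetes 1.21 still auto-creates a token secret for the service account;
# extract the non-expiring token from it.
SECRET=$(kubectl -n kube-system get serviceaccount napp-admin \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 -d)

# Build a copy of the kubeconfig that uses this token instead of the
# 10-hour vSphere SSO token (assuming the default kubeconfig location).
cp ~/.kube/config napp-kubeconfig.yaml
KUBECONFIG=napp-kubeconfig.yaml kubectl config set-credentials napp-admin --token="$TOKEN"
KUBECONFIG=napp-kubeconfig.yaml kubectl config set-context --current --user=napp-admin
```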
And below is the new kubeconfig that is generated:
Creating DNS records for NAPP endpoints
We need to pre-create the below two DNS records with IP addresses from the NSX ALB IPAM profile which we discussed in Part 1.
- Interface service: the HTTPS endpoint to access NAPP.
- Messaging service: the endpoint to the Kafka messaging broker cluster deployed in the TKC / Kubernetes cluster.
DNS records can be created anywhere as long as they are resolvable. Let’s create them in the DNS virtual service in NSX ALB. The endpoints are:
- vxdc01-nsxmgr01-napp.tkg.vxplanet.int (NAPP HTTPS endpoint)
- vxdc01-nsxmgr01-napp-stream.tkg.vxplanet.int (Kafka messaging endpoint)
The NAPP deployment workflow will resolve these endpoints and create load balancer services with these resolved IP addresses.
Deploying NSX Application Platform
At this moment, we are all good to deploy NSX Application Platform (NAPP). Navigate to System -> NSX Application Platform and click on “Deploy NSX Application Platform”.
If NSX Manager and the TKC cluster have HTTPS access to the VMware public Harbor registry at https://projects.registry.vmware.com, the workflow can automatically pull the Helm charts and NAPP Docker images from the registry. If not, a private image registry needs to be set up as per the instructions at https://docs.vmware.com/en/VMware-NSX/4.1/nsx-application-platform/GUID-FAC9DBE3-A8EE-4891-A723-942D0AB679F6.html
Click on “Save URL” and the workflow prompts you to select the NAPP version. Since we are on NSX 4.1, the only compatible NAPP version is 4.0.1. Remember, NAPP uses a different versioning scheme than NSX.
It might be required to upload the Kubernetes client tools if the deployment workflow doesn’t have a client version matching the Kubernetes server version of the TKC cluster. Since we deployed v1.21 on the TKC cluster, let’s download the matching client version from the Customer Connect portal (https://customerconnect.vmware.com) and upload it to the deployment workflow.
Let’s upload the non-expiring kubeconfig file to authenticate with the TKC cluster. Post successful authentication, we will provide the NAPP HTTPS and messaging endpoint FQDNs for which we created the DNS records previously.
We sized the TKC cluster to support NAPP Standard form factor, so let’s choose the Standard option. In the next part (Part 3), we will upgrade the NAPP form factor to “Advanced”.
Let’s run the pre-checks and confirm that all tests have passed.
Finally, we will review the settings; clicking on Submit starts the NAPP deployment.
The deployment starts with the install of certificate manager components. We see the respective pods created under the namespace “cert-manager”.
The workflow deploys Contour as the ingress controller. We see the respective pods created under the namespace “projectcontour”.
Next, the workflow deploys the core platform components in the namespace “nsxi-platform” (Kafka broker messaging system, ZooKeeper ensemble, Fluentd log collection, PostgreSQL databases, etc.).
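The progress can be followed from the TKC side; each stage shows up as pods in its own namespace:

```shell
# Certificate manager components.
kubectl get pods -n cert-manager

# Contour ingress controller and its Envoy pods.
kubectl get pods -n projectcontour

# Core platform components: Kafka, ZooKeeper, Fluentd, PostgreSQL, etc.
kubectl get pods -n nsxi-platform
```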
Once the core platform is deployed and registered, the metrics components are installed.
The deployment process takes a while and once the deployment succeeds, the NAPP dashboard should come up and the platform status should show as “Stable”.
We see that LoadBalancer services are created for the NAPP HTTPS and messaging (Kafka) endpoints and are realized as L4 virtual services in NSX Advanced Load Balancer.
Ingress to the NAPP HTTPS endpoint is handled by Contour using the HTTPProxy CRD it registers.
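Both can be verified from the TKC cluster:

```shell
# LoadBalancer services for the HTTPS and Kafka messaging endpoints;
# the External-IPs should match the DNS records created earlier.
kubectl get svc -n nsxi-platform | grep LoadBalancer

# HTTPProxy objects handling ingress to the NAPP HTTPS endpoint.
kubectl get httpproxy -n nsxi-platform
```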
At this point, the platform is ready to be consumed by services like NSX Malware Prevention, NSX Metrics and NSX Network Detection and Response. Note that NSX Intelligence is not available in the Standard form factor.
In the next blog post, we will demonstrate a NAPP form factor upgrade, switching from Standard to Advanced to support NSX Intelligence and NAPP scale-out. Stay tuned!!!
I hope the article was informative.
Thanks for reading.
Continue reading? Here are the other parts of this series:
Part 1 : https://vxplanet.com/2023/05/03/nsx-4-1-application-platform-napp-part-1/
Part 3 : https://vxplanet.com/2023/05/18/nsx-4-1-application-platform-napp-part-3-form-factor-upgrade/
Part 4 : https://vxplanet.com/2023/05/19/nsx-4-1-application-platform-napp-part-4-napp-scale-out/