NSX Advanced Load Balancer (AVI) Integration with VMware Cloud Director 10.3

NSX Advanced Load Balancer (ALB, formerly the Avi Vantage Platform) is now natively integrated with VMware Cloud Director. VCD 10.2 was the first version to support the integration, and that was with ALB 20.1.1. In VCD 10.3 (the current version at the time of writing), NSX ALB versions 20.1.3 and 20.1.4 are supported. If you are planning the integration, please check the VMware Interoperability Matrix for the latest supported combinations of VCD and NSX ALB (as you always should).

https://interopmatrix.vmware.com/

NSX ALB is available as the LBaaS option when NSX-T is used as the network backing type for the OrgVDCs. The native NSX-T load balancer is not used in VCD.

In this blog post we will walk through the NSX ALB design with VCD, the NSX-T configuration, the vSphere configuration, and the ALB-VCD integration, along with some extra notes to be aware of.

Let’s get started.

NSX ALB Design in VMware Cloud Director

There are two design choices available depending on the resource guarantees and the level of isolation required for tenant applications:

  • Shared Service Engine Group (SEG) Design
  • Dedicated Service Engine Group (SEG) Design

In a Shared SEG design, the VCD tenants (more specifically, OrgVDC gateways) share the Service Engines from a common Service Engine Group to host their virtual services. All the tenant applications share the data plane (as they are part of the same SEG), and application isolation is achieved using VRF contexts. Each OrgVDC gateway that gets enabled with the load balancing service in a shared SEG design plumbs a data NIC on the Service Engines. This data NIC maps to a dedicated VRF context on the SEs. An SE supports a maximum of 10 data NICs, and hence up to 10 OrgVDC gateways (unless limited by the hypervisor). As the SE data interfaces attach to the respective Edge Gateways (T1) over dedicated VRF contexts, the virtual services of each tenant (in OrgVDCs) are confined to their respective VRFs on the SE.

In a shared SEG design, an OrgVDC gateway typically gets a share of the total VS capacity of the SEG as per the tenant’s requirements.

The below sketch shows a Shared SEG design:

In a Dedicated SEG design, each VCD tenant (more specifically, OrgVDC gateway) gets a dedicated Service Engine Group to host its virtual services. There is a 1:1 mapping between the OrgVDC gateway and the SEG. Each SEG is a dedicated data plane instance. This design is suitable for tenants who require guaranteed SE resources and a higher level of isolation to host their virtual services.

In this design, each Service Engine in a SEG has only one data NIC, which attaches to the OrgVDC gateway with which the SEG is associated.

The below sketch shows a Dedicated SEG design:

Choosing the SEG Import type

Service Engine Groups are created by the NSX ALB administrator in the NSX ALB Console in the default admin tenant. The SEGs are then imported by the Service Provider administrator into VMware Cloud Director with Shared or Dedicated as the import type.

NSX-T Design in ALB – VCD integration

NSX ALB and VCD integration is achieved by creating an NSX-T Cloud in ALB and importing it into VCD. In an NSX-T Cloud, the Service Engines are deployed on NSX-T logical segments, for both the management plane and the data plane. The ALB Controllers typically sit on the management VLAN network, in most cases L2 adjacent to the vCenter Server and NSX-T Managers.

  • SE Management: We create a dedicated NSX-T T0 Gateway, T1 Gateway and logical segment for the SE management traffic. This is to isolate SE management plane communication from the VCD tenant traffic. This T1 gateway and segment are added to the NSX-T Cloud configuration in the NSX ALB console as the SE management network. If gateway firewalling is enabled on the T1 gateway, make sure the necessary ports from the SEs to the ALB Controllers are open. DHCP is also enabled on the SE management segment (a minimal Policy API sketch of this management T1 and segment follows this list).
  • SE Data: Depending on the SEG mode used, data NICs attach to the respective OrgVDC gateways. Data traffic is thus routed from the SEs through the OrgVDC gateway to the Provider T0 gateway and then to the external networks. The OrgVDC gateway and the data segment are automatically added to the NSX-T Cloud configuration when the LB service is enabled on the gateway. DHCP is also automatically enabled on the service segment.
  • SEs are automatically added to the NSX-T DFW exclusion list.
  • If the tenant creates virtual services using an IP from the external allocation pool of the T0 gateway, the VS IP is advertised by the Provider T0 gateway to external networks.
  • If the tenant creates virtual services using a tenant local network, the VS is local to the tenant and will be reachable only from the tenant network.
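
Below is a minimal sketch of how the dedicated SE management T1 gateway and segment could be pre-created through the NSX-T Policy API. The names (avi-mgmt-t1, avi-mgmt-seg), the management T0 path, the transport zone UUID and the subnet/DHCP range are all placeholders for your environment; the same objects can of course be created in the NSX-T UI instead.

    import requests

    NSX = "https://nsxt-mgr.corp.local"           # NSX-T Manager FQDN (placeholder)
    AUTH = ("admin", "VMware1!VMware1!")          # NSX-T admin credentials (placeholder)

    # Dedicated Tier-1 for SE management, uplinked to the management T0
    t1 = {
        "tier0_path": "/infra/tier-0s/mgmt-t0",   # assumed management T0 id
        "route_advertisement_types": ["TIER1_CONNECTED"],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/avi-mgmt-t1",
                   auth=AUTH, json=t1, verify=False)   # verify=False for lab certs

    # Overlay segment for SE management with a gateway DHCP range.
    # Assumes a DHCP profile is already attached to the Tier-1 (dhcp_config_paths).
    seg = {
        "connectivity_path": "/infra/tier-1s/avi-mgmt-t1",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                               "/transport-zones/<overlay-tz-uuid>",
        "subnets": [{
            "gateway_address": "10.10.60.1/24",            # placeholder subnet
            "dhcp_ranges": ["10.10.60.50-10.10.60.100"],   # SE mgmt DHCP pool
        }],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/segments/avi-mgmt-seg",
                   auth=AUTH, json=seg, verify=False)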

Now let’s take a look at the NSX-T configuration for ALB – VCD integration.

NSX-T Configuration

We have two T0 gateways: the Provider T0 is where the VCD tenant gateways uplink to, and the Management T0 is for the management communication between the ALB Controllers and the SEs.

We have a couple of T1 gateways:

  • A dedicated T1 gateway for SE management
  • T1s created by the VCD tenants (OrgVDC edge gateways)

We have a dedicated segment for AVI SE management. DHCP is enabled on this management segment.

We have a single overlay transport zone. Accordingly, we have a single VCD Network Pool and a single NSX-T Cloud account in NSX ALB.

NSX ALB Configuration

In NSX ALB, we create an NSX-T Cloud account which will later be imported into VCD. DHCP is selected for IP assignment of the imported NSX-T networks (SE management and SE data).

Note that if we use the FQDN of the NSX-T Manager in the cloud account, the same FQDN (and not the IP) has to be used while registering the NSX-T Manager in VCD.

There is a 1:1 mapping between the NSX-T transport zone and the cloud account. We specify the management T1 Gateway and segment that were manually created earlier in NSX-T Manager as a prerequisite. The data networks will be auto-plumbed by VCD whenever an OrgVDC gateway is enabled with the load balancer service.

A vCenter Server account enabled with a content library needs to be defined in the cloud account. This is the VCD resource vCenter that has the resource cluster and optional edge clusters, and this is where the SEs will be deployed. Based on the design decisions, SEs can be deployed on the vSphere resource cluster in the Provider VDC or on a dedicated vSphere edge cluster (co-locating with the NSX-T edge nodes). This vCenter Server should be added as a compute manager in NSX-T and its hosts prepared on the same overlay transport zone. If we select more than one vCenter Server, the placement options for the SEs need to be defined in the respective SEGs.

SE images will be pushed to the configured content library in the resource vCenter server.
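
For reference, the same NSX-T Cloud can also be created through the ALB (Avi) REST API. The sketch below assumes basic authentication is enabled on the controller, and the nsxt_configuration field names are only illustrative of the 20.1.x schema (credential references and the management/data network sub-objects are placeholders); check your controller's API reference before relying on it.

    import requests

    ALB = "https://alb-ctrl.corp.local"           # ALB Controller FQDN (placeholder)
    AUTH = ("admin", "VMware1!")                  # assumes basic auth is enabled
    HDRS = {"X-Avi-Version": "20.1.4"}

    # Illustrative NSX-T Cloud body; verify the nested fields against the
    # /api/cloud schema of your exact ALB version.
    cloud = {
        "name": "vcd-nsxt-cloud",
        "vtype": "CLOUD_NSXT",
        "dhcp_enabled": True,                     # DHCP for SE mgmt/data networks
        "nsxt_configuration": {
            "nsxt_url": "nsxt-mgr.corp.local",    # must match the FQDN used in VCD
            "nsxt_credentials_ref": "/api/cloudconnectoruser?name=nsxt-creds",
            # management network config points at the pre-created avi-mgmt-t1/avi-mgmt-seg;
            # data networks are left to be auto-plumbed per OrgVDC gateway by VCD
        },
    }
    resp = requests.post(f"{ALB}/api/cloud", auth=AUTH, headers=HDRS,
                         json=cloud, verify=False)
    print(resp.status_code, resp.json().get("uuid"))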

Based on the requirements and design, one or more SEGs are created, which will later map to OrgVDC gateways. Make sure that the SEGs are created in the default ‘admin’ tenant in ALB; non-admin tenants are currently unsupported.

Depending on the capacity and availability requirements of tenants, the HA mode, SE sizing and VS capacity are defined for each SE group.

We also define a naming prefix for the SEs on a per-SEG basis so that a vCenter administrator can identify VCD tenant SEs or shared SEs in vCenter and group them or apply anti-affinity rules as appropriate.

If we have defined multiple vCenter Servers, or if the vCenter Server has multiple clusters (resource cluster, edge cluster, etc.), we can scope the SE placement of the SEG to the correct resource or edge vSphere cluster.
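
A dedicated SEG like the ones used later in this post could be created with a call along these lines (again assuming basic auth on the controller; the name, prefix and sizing values are placeholders, and HA_MODE_LEGACY_ACT_STBY is shown because the basic license only supports Active/Standby):

    import requests

    ALB = "https://alb-ctrl.corp.local"
    AUTH = ("admin", "VMware1!")
    HDRS = {"X-Avi-Version": "20.1.4"}

    seg = {
        "name": "cloudlabs-dedicated-seg",
        "cloud_ref": "/api/cloud?name=vcd-nsxt-cloud",   # reference the NSX-T Cloud by name
        "ha_mode": "HA_MODE_LEGACY_ACT_STBY",            # Active/Standby (basic license)
        "se_name_prefix": "cloudlabs",                   # lets vSphere admins map SEs to tenants
        "max_se": 2,
        "vcpus_per_se": 2,
        "memory_per_se": 4096,                           # MB
        "max_vs_per_se": 10,
    }
    resp = requests.post(f"{ALB}/api/serviceenginegroup", auth=AUTH,
                         headers=HDRS, json=seg, verify=False)
    print(resp.status_code)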

VMware Cloud Director Configuration

As stated earlier, the NSX-T FQDN registered in the NSX-T Cloud account in ALB should match the NSX-T FQDN in the VCD configuration. Using the FQDN in one place and the IP in the other leads to ALB integration errors.

Refer to https://kb.vmware.com/s/article/83889

The Provider T0 Gateway needs to be imported into VCD, and an external allocation IP pool needs to be defined for it. All OrgVDC edge gateways attach to the Provider T0 gateway. The SE management T0 Gateway does not need to be imported into VCD.

The external allocation IP pool is the routable external address space for the VCD tenants. Tenants request a range from this allocation pool for use with load balancing services and NAT. Only virtual services configured with an IP from this allocation pool will be advertised outside of the T0 gateway.

Now let’s register NSX ALB in VCD. This is done in the VCD Provider portal.
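
The registration can also be scripted by the provider administrator against the VCD CloudAPI. The sketch below reflects my reading of the 10.3 load balancer endpoints; the VCD and controller URLs, credentials and licenseType value are placeholders, so double-check the payload against your VCD OpenAPI documentation.

    import requests

    VCD = "https://vcd.corp.local"
    API_VER = {"Accept": "application/json;version=36.0"}    # VCD 10.3 API version

    # Provider session: the bearer token is returned in a response header
    s = requests.post(f"{VCD}/cloudapi/1.0.0/sessions/provider",
                      auth=("administrator@System", "VMware1!"),
                      headers=API_VER, verify=False)
    HDRS = {**API_VER,
            "Authorization": f"Bearer {s.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}"}

    # Register the NSX ALB controller in VCD
    controller = {
        "name": "alb-ctrl-01",
        "url": "https://alb-ctrl.corp.local",
        "username": "admin",
        "password": "VMware1!",
        "licenseType": "ENTERPRISE",          # or BASIC, per the ALB licensing in use
    }
    r = requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/controllers",
                      headers=HDRS, json=controller, verify=False)
    print(r.status_code)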

Once the ALB registration is successful, we import the NSX-T Cloud account created previously in ALB.

Also make sure that we have a Network pool created in VCD on the same transport zone as the ALB NSX-T Cloud account.

Now it’s time to import the Service Engine Groups (created in the ‘admin’ tenant in ALB). For this blog post, I have created two SEGs and will import them in Dedicated mode, which will be 1:1 mapped with the OrgVDC gateways of the tenants.
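
As a rough sketch, the cloud and SEG imports look like the calls below. The bearer token is the provider token from the previous sketch, the backing IDs are the object UUIDs reported by ALB, the URN values are placeholders, and the field names are my reading of the 10.3 OpenAPI, so verify them before use.

    import requests

    VCD = "https://vcd.corp.local"
    HDRS = {"Accept": "application/json;version=36.0",
            "Authorization": "Bearer <provider-token>"}       # token from the previous sketch

    # Import the NSX-T Cloud created in ALB, tying it to a VCD network pool
    cloud_import = {
        "name": "vcd-nsxt-cloud",
        "loadBalancerCloudBacking": {
            "backingId": "<avi-cloud-uuid>",
            "loadBalancerControllerRef": {"id": "urn:vcloud:loadBalancerController:<uuid>"},
        },
        "networkPoolRef": {"id": "urn:vcloud:networkpool:<uuid>"},   # same overlay TZ
    }
    requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/clouds",
                  headers=HDRS, json=cloud_import, verify=False)

    # Import a SEG in Dedicated mode
    seg_import = {
        "name": "cloudlabs-dedicated-seg",
        "reservationType": "DEDICATED",                              # or SHARED
        "serviceEngineGroupBacking": {
            "backingId": "<avi-seg-uuid>",
            "loadBalancerCloudRef": {"id": "urn:vcloud:loadBalancerCloud:<uuid>"},
        },
    }
    requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups",
                  headers=HDRS, json=seg_import, verify=False)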

Now that the cloud account is added and the SEGs are imported, we will move ahead and enable load balancing services for the OrgVDC edge gateways.

Enabling Load Balancing Service for VCD Organizations

We have two VCD Organizations, and we will enable the load balancing service on the CloudLabs OrgVDC Edge Gateway. This is done from the Tenant Portal of the CloudLabs Org by the Organization administrator.

Once the load balancer service is enabled under ‘General Settings’, a new NSX-T service segment gets initialized and attaches to the edge gateway. This service segment is the data network for the SEs and gets added under the NSX-T Cloud configuration in ALB.

The Org administrator then imports the SEG to the edge gateway, after which it can be consumed by the Org users for their applications.

Since this is a dedicated SEG, the whole capacity is available for this edge gateway.
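
These two steps (enabling the LB service on the gateway and assigning the SEG to it) can also be done through the CloudAPI; here is a sketch under the same assumptions as before, with the gateway URN, SEG URN and the service network CIDR as placeholders:

    import requests

    VCD = "https://vcd.corp.local"
    HDRS = {"Accept": "application/json;version=36.0",
            "Authorization": "Bearer <provider-or-tenant-token>"}
    GW = "urn:vcloud:gateway:<uuid>"                     # CloudLabs OrgVDC edge gateway

    # Enable the load balancer service on the edge gateway
    lb_config = {
        "enabled": True,
        "serviceNetworkDefinition": "192.168.255.1/25",  # SE data/service segment subnet
    }
    requests.put(f"{VCD}/cloudapi/1.0.0/edgeGateways/{GW}/loadBalancer",
                 headers=HDRS, json=lb_config, verify=False)

    # Assign the dedicated SEG to this gateway
    assignment = {
        "gatewayRef": {"id": GW},
        "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:<uuid>"},
    }
    requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups/assignments",
                  headers=HDRS, json=assignment, verify=False)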

Application owners can now define the application pool with the required pool properties, such as:

  • Pool members
  • Default listening port
  • Persistence profiles
  • Load balancing algorithm
  • SSL settings: re-encryption and client-side SSL validation

The pool members belong to the same OrgVDC edge gateway.
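
A pool with two members could be created with a sketch like the one below. The tenant token, gateway URN, member IPs and the exact field names are assumptions based on my reading of the 10.3 edge gateway load balancer API.

    import requests

    VCD = "https://vcd.corp.local"
    HDRS = {"Accept": "application/json;version=36.0",
            "Authorization": "Bearer <tenant-token>"}
    GW = "urn:vcloud:gateway:<uuid>"

    pool = {
        "name": "web-pool",
        "gatewayRef": {"id": GW},
        "algorithm": "ROUND_ROBIN",                 # load balancing algorithm
        "defaultPort": 80,                          # default listening port of the members
        "members": [
            {"ipAddress": "172.16.10.11", "port": 80, "enabled": True},
            {"ipAddress": "172.16.10.12", "port": 80, "enabled": True},
        ],
    }
    requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/pools",
                  headers=HDRS, json=pool, verify=False)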

Now, the Virtual Service can be created with the required properties:

  • VS IP Address
  • Application pool
  • Service Type
  • Port
  • Service Engine Group

The VS IP will be advertised externally only when the IP is chosen from the external allocation pool.
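
A corresponding virtual service sketch follows, with the same caveats; the VIP shown is assumed to come from the external allocation pool, and the applicationProfile/servicePorts structure is my reading of the 10.3 schema.

    import requests

    VCD = "https://vcd.corp.local"
    HDRS = {"Accept": "application/json;version=36.0",
            "Authorization": "Bearer <tenant-token>"}
    GW = "urn:vcloud:gateway:<uuid>"

    vs = {
        "name": "web-vs",
        "enabled": True,
        "gatewayRef": {"id": GW},
        "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:<uuid>"},
        "loadBalancerPoolRef": {"id": "urn:vcloud:loadBalancerPool:<uuid>"},
        "virtualIpAddress": "192.168.100.20",        # from the external allocation pool
        "applicationProfile": {"type": "HTTP"},      # service type
        "servicePorts": [{"portStart": 80}],         # listening port
    }
    requests.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/virtualServices",
                  headers=HDRS, json=vs, verify=False)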

If this is the first virtual service, it will take some time to provision as the Service Engines need to be deployed and initialized in the backend. Subsequent virtual services will be faster as they just require a route addition.

The application should now be available for access externally.

Verifying the Virtual Service deployment

Now that the VCD tenant has deployed a sample web application, let’s do a console walkthrough to verify the deployment.

From the NSX ALB UI, we should see that the virtual service is deployed on the SEG mapped to the OrgVDC gateway.

Two SEs are deployed as per the HA mode configured on the SEG, and they belong to the correct SEG.

The SEs have a data NIC attached to the OrgVDC edge gateway, and that on a dedicated VRF.

Whenever new edge gateways are enabled with the load balancer service, we should see new VRF contexts getting created in ALB under the cloud account.
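
The same checks can be done against the ALB API. Here is a small sketch (assuming basic auth, and that the cloud_ref.name filter is available in your version) that lists the Service Engines and VRF contexts under the NSX-T Cloud:

    import requests

    ALB = "https://alb-ctrl.corp.local"
    AUTH = ("admin", "VMware1!")
    HDRS = {"X-Avi-Version": "20.1.4"}

    # Service Engines deployed in the NSX-T Cloud
    ses = requests.get(f"{ALB}/api/serviceengine?cloud_ref.name=vcd-nsxt-cloud",
                       auth=AUTH, headers=HDRS, verify=False).json()
    for se in ses.get("results", []):
        print("SE:", se["name"])

    # One VRF context is expected per LB-enabled OrgVDC edge gateway
    vrfs = requests.get(f"{ALB}/api/vrfcontext?cloud_ref.name=vcd-nsxt-cloud",
                        auth=AUTH, headers=HDRS, verify=False).json()
    for vrf in vrfs.get("results", []):
        print("VRF:", vrf["name"])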

In the NSX-T UI, we should see a new service segment attached to the edge gateway, to which the SE data NICs connect.

A static route is created on the edge gateway for the VS IP address with next hop pointing to the SE data interfaces.

In the vSphere UI, we should see the SEs created with the placement options that we defined in the SEG. With the SE naming prefixes, a vSphere administrator can map SEs to VCD tenants.

We should see the vCenter folders created for SEs as defined under the SEG properties.

Extra notes to be aware of

  • Only the default ‘admin’ tenant in ALB is currently supported. All SEGs have to be created in the default admin tenant to be imported into VCD. SEGs created in custom ALB tenants can’t be imported.
  • All the virtual services created by the VCD organizations are reflected under the default admin tenant in ALB. Org users can raise a service ticket with the Org administrator to add enhancements to the deployed VS in ALB, like attaching a WAF profile, request policies, etc.
  • If using the ALB basic license, only the Active/Standby HA mode is supported for the SEs.
  • Having a dedicated T0 Gateway for SE management is recommended. However it is also possible to share the T0 Provider Gateway for SE management if needed.
  • The SEs are automatically added under the DFW exclusion list. If the SE management T1 gateway has the Gateway Firewall enabled, make sure the necessary SE-to-ALB-Controller ports are open (see the sketch after this list): https://avinetworks.com/docs/17.1/protocol-ports-used-by-avi-vantage-for-management-communication/
  • The default gateway firewall rule on the VCD tenant edge gateway is set to deny all. Make sure to create an allow rule for the virtual service so that the VS can be reached from external networks.
  • Create vSphere anti-affinity rules for the SEs within the same SEG to force the placement on separate ESXi hosts.
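
For the SE management gateway firewall point above, here is a hedged NSX-T Policy API sketch of an allow rule. The group paths, the custom 8443 entry and the exact port list are assumptions; take the authoritative port list from the Avi management communication doc linked above for your ALB version.

    import requests

    NSX = "https://nsxt-mgr.corp.local"
    AUTH = ("admin", "VMware1!VMware1!")

    # Gateway policy scoped to the SE management T1, allowing SE -> Controller traffic
    policy = {
        "category": "LocalGatewayRules",
        "rules": [{
            "id": "allow-se-to-alb-controller",
            "display_name": "allow-se-to-alb-controller",
            "source_groups": ["ANY"],   # or a group covering the SE mgmt segment
            "destination_groups": ["/infra/domains/default/groups/alb-controllers"],  # assumed group
            "services": ["/infra/services/SSH", "/infra/services/HTTPS",
                         "/infra/services/NTP"],
            "service_entries": [{                      # secure channel port
                "resource_type": "L4PortSetServiceEntry",
                "display_name": "tcp-8443",
                "l4_protocol": "TCP",
                "destination_ports": ["8443"],
            }],
            "action": "ALLOW",
            "scope": ["/infra/tier-1s/avi-mgmt-t1"],
        }],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/domains/default"
                   "/gateway-policies/avi-mgmt-policy",
                   auth=AUTH, json=policy, verify=False)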

It’s time to wrap up! This was a bit of a lengthy one, but I hope this article was informative.

Thanks for reading.
