
In the previous blog post about my community project – NSX ALB Cloud Migrator – we looked at the project overview, migration workflow, current capabilities, limitations and usage instructions. We will now work through migration scenarios for virtual services across NSX ALB Clouds and VRF Contexts.
If you missed the introductory article, you can read it here:
https://vxplanet.com/2022/03/16/nsx-alb-cloud-migrator-v1-0-my-first-python-project/
The migration tool, release notes and usage instructions are available in my GitHub repo at:
https://github.com/harikrishnant/NsxAlbCloudMigrator
This is Part 1 of the blog series where we migrate Virtual Services from one NSX ALB vCenter cloud account to another.
Throughout the blog series, I might use the term “Virtual Applications”, which refers to Virtual Services and their dependencies – pools, pool groups, HTTP Policy Sets, VSVIPs, policies and profiles.
Let’s get started:
Use Cases for NSX ALB vCenter Cloud migration
- A customer migrating their old vCenter infrastructure to a new one (e.g. from vCenter 6.5 to vCenter 7.0). Workload VMs are migrated (using HCX, for example) but the NSX ALB Service Engines aren’t, because the virtual services are on the old NSX ALB vCenter cloud account, which needs to be migrated first.
- Migrating Virtual Services as part of environment restructuring – for example, moving Virtual Services to Prod, non-Prod or DMZ vCenters.
- Migration of Virtual Services to a new vCenter Datacenter object.
Current State
In the current state, we have a few virtual applications deployed in NSX ALB on the vCenter cloud account ‘Cloud-VC01’. This cloud connector is configured in write-access mode to the vCenter on Compute Domain 1 (vCenter-Compute-1), so that NSX ALB automates the deployment and lifecycle management of service engines on vCenter-Compute-1. The NSX ALB controller cluster is deployed on a dedicated management domain (under vCenter-Management).
The below sketch shows the current state of virtual application hosting and the traffic flow to virtual services and backend pools. Note that the virtual applications are hosted on service engines in Compute Domain 1 and will be migrated to Compute Domain 2.

The below Virtual Services are currently deployed in cloud account ‘Cloud-VC01’.

The below table describes the Virtual Services, their pools / pool groups, reachability, policies and profiles.

We have three L7 Virtual Services with SSL offloading enabled, an L4 Virtual Service for FTP and another L4 Virtual Service handling DNS. Virtual Services for internal apps are hosted on the ‘global’ VRF, while a DMZ Virtual Service for external access is hosted on the DMZ VRF. Internal VIPs are reachable via native L2, the DMZ VIP via L3 static routes. Backend pool members are reachable over both L2 and L3.
Note: VIP / backend pool reachability configuration is not discussed in this article. Please refer to the below documentation for more information.
https://avinetworks.com/docs/21.1/virtual-service-placement-settings/
https://avinetworks.com/docs/21.1/static-route-support-for-vip-and-snat-ip-reachability/
Just FYI, below is the static route configuration for L3 reachability to the DMZ VIP. A route is also configured on the DMZ firewall that points to the SE data interfaces.
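For illustration only, here is a minimal sketch of how such a static route could be pushed through the NSX ALB (Avi) REST API rather than the UI. The controller URL, credentials, VRF name and next-hop address below are placeholders for this lab, not values taken from the screenshots.

```python
# Minimal sketch: add a static route to the DMZ VRF context via the Avi REST
# API. All names, addresses and credentials are placeholders - adjust to your setup.
import requests

CTRL = "https://alb-controller.lab.local"  # placeholder controller URL
s = requests.Session()
s.verify = False  # lab only; use proper certificates in production

# Authenticate and pick up the CSRF token required for write calls
s.post(f"{CTRL}/login", data={"username": "admin", "password": "changeme"})
s.headers.update({"X-CSRFToken": s.cookies.get("csrftoken", ""), "Referer": CTRL})

# Fetch the DMZ VRF context and append a route towards the DMZ firewall
vrf = s.get(f"{CTRL}/api/vrfcontext", params={"name": "DMZ-VC01"}).json()["results"][0]
vrf.setdefault("static_routes", []).append({
    "route_id": str(len(vrf["static_routes"]) + 1),
    "prefix": {"ip_addr": {"addr": "0.0.0.0", "type": "V4"}, "mask": 0},
    "next_hop": {"addr": "192.168.100.1", "type": "V4"},  # DMZ gateway (placeholder)
})
s.put(f"{CTRL}/api/vrfcontext/{vrf['uuid']}", json=vrf).raise_for_status()
```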

We have a new vCenter cloud connector configured, called ‘Cloud-VC02’. This cloud connector is also configured in write-access mode to the vCenter on Compute Domain 2 (vCenter-Compute-2). The necessary cloud connector routing configuration, including VIP / backend pool networks, VRFs and static routes, is already in place.

In this migration, it is assumed that the VIP network is stretched and available as DVS port groups on the Compute Domain 2 vCenter (vCenter-Compute-2). If the VIP network is not available at L2, the alternative is to route the VIPs at L3, either:
- Statically using static routes
- Dynamically using BGP (as /32 prefixes)
Note: VIP and backend pool reachability configuration needs to be in place on the target cloud account before running NSX ALB Cloud Migrator. The migrator only performs the virtual application migration; all Layer 3 routing configuration is manual.
Now let’s use the NSX ALB Cloud Migrator tool to migrate our applications to the new cloud account ‘Cloud-VC02’. We will do two runs of the tool:
- The first run migrates all the internal applications to cloud account ‘Cloud-VC02’ on VRF ‘global’.
- The second run migrates the DMZ applications to cloud account ‘Cloud-VC02’ on VRF ‘DMZ-VC02’.
The below sketch shows the target state (post-migration) of virtual application hosting and the traffic flow to virtual services and backend pools.

Run 1 – Migrating internal applications using NSX ALB Cloud Migrator
Start the NSX ALB Cloud Migrator as per the usage instructions in my GitHub repo or in the introductory article:
https://github.com/harikrishnant/NsxAlbCloudMigrator
https://vxplanet.com/2022/03/16/nsx-alb-cloud-migrator-v1-0-my-first-python-project/
Provide the NSX ALB controller URL, admin user (local users only) and credentials, the NSX ALB tenant and the controller version. Wait for successful authentication.
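Under the hood, this step is a standard Avi REST API login. As a hedged sketch (not the tool’s exact code), authenticating with a local user and preparing a session for later API calls could look like this; the controller URL, credentials and version are placeholders:

```python
# Sketch of the login step against the Avi / NSX ALB REST API.
import requests

def avi_login(controller: str, username: str, password: str,
              tenant: str = "admin", version: str = "21.1.4") -> requests.Session:
    """Return an authenticated requests.Session scoped to a tenant / API version."""
    s = requests.Session()
    s.verify = False  # lab only; use valid certificates in production
    s.post(f"{controller}/login",
           data={"username": username, "password": password}).raise_for_status()
    s.headers.update({
        "X-Avi-Tenant": tenant,    # tenant to operate in
        "X-Avi-Version": version,  # pin the controller API version
        "X-CSRFToken": s.cookies.get("csrftoken", ""),  # needed for write calls
        "Referer": controller,
    })
    return s

# Placeholder values for this lab
session = avi_login("https://alb-controller.lab.local", "admin", "changeme")
```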

We are presented with the list of cloud accounts; select the target cloud account to migrate the virtual applications to. In our case, it is ‘Cloud-VC02’.

We are then presented with the list of VRF Contexts available in the selected cloud account; select the target VRF Context to migrate the virtual applications to. For this run, it is ‘global’.

Next comes the list of Service Engine Groups (SEGs) available in the selected cloud account; select the target SEG to migrate the virtual applications to. In our case, it is ‘SEG-Intranet-Apps-VC02’. The discovery calls behind these three menus are sketched below.
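A sketch of that discovery, reusing the `session` from the `avi_login()` helper above. The `cloud_ref.uuid` filter is a standard Avi API collection filter; the controller URL remains a placeholder:

```python
CTRL = "https://alb-controller.lab.local"  # placeholder, as above

# All cloud accounts visible in the tenant; pick the migration target
clouds = session.get(f"{CTRL}/api/cloud").json()["results"]
target = next(c for c in clouds if c["name"] == "Cloud-VC02")

# VRF contexts and Service Engine Groups belonging to the target cloud
vrfs = session.get(f"{CTRL}/api/vrfcontext",
                   params={"cloud_ref.uuid": target["uuid"]}).json()["results"]
segs = session.get(f"{CTRL}/api/serviceenginegroup",
                   params={"cloud_ref.uuid": target["uuid"]}).json()["results"]

print([v["name"] for v in vrfs])  # expect 'global' and 'DMZ-VC02' among these
print([g["name"] for g in segs])  # expect 'SEG-Intranet-Apps-VC02' among these
```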

Next, we are presented with the list of Virtual Services available in the logged-in tenant. Select the Virtual Services to migrate. For this run, we input VS-Intranet,VS-Booking,VS-FTPUploads,VS-DNS-Intranet

The tool scans the selected Virtual Services for attached pools, pool groups and HTTP Policy Sets, as well as pools and pool groups referenced in HTTP Policy Sets as content-switching policies. Enter a unique suffix for the run; all migrated objects will carry this suffix for identification.
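To illustrate what that scan involves, here is a hedged sketch of reading one Virtual Service and collecting its references, again assuming the `session` from the login sketch. The field names come from the Avi VirtualService object model:

```python
CTRL = "https://alb-controller.lab.local"  # placeholder, as above

vs = session.get(f"{CTRL}/api/virtualservice",
                 params={"name": "VS-Intranet"}).json()["results"][0]

deps = {
    "pool_ref": vs.get("pool_ref"),              # directly attached pool
    "pool_group_ref": vs.get("pool_group_ref"),  # directly attached pool group
    "vsvip_ref": vs.get("vsvip_ref"),            # the VSVIP object
    "http_policy_set_refs": [p["http_policy_set_ref"]
                             for p in vs.get("http_policies", [])],
}
# Each HTTP Policy Set can in turn reference pools / pool groups in its
# content-switching rules, so those are resolved and queued for migration too.
print(deps)
```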

Now the tool migrates the pools, pool groups and HTTP Policy Sets of the selected Virtual Services.
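Conceptually, “migrating” an object between clouds means re-creating it in the target cloud under the run’s suffix, since cloud membership is fixed at creation. A simplified sketch for a single pool (not the tool’s exact logic; `session` and `target` come from the earlier sketches, and the pool name and suffix are placeholders):

```python
CTRL = "https://alb-controller.lab.local"  # placeholder, as above
SUFFIX = "vc02"  # the unique suffix entered for this run (placeholder)

pool = session.get(f"{CTRL}/api/pool",
                   params={"name": "Pool-Intranet"}).json()["results"][0]

clone = dict(pool)
for key in ("uuid", "url", "_last_modified"):  # strip identity fields
    clone.pop(key, None)
clone["name"] = f"{pool['name']}-{SUFFIX}"           # suffix for identification
clone["cloud_ref"] = f"/api/cloud/{target['uuid']}"  # re-home to Cloud-VC02

# Pool groups and HTTP Policy Sets are cloned the same way, with their member
# refs re-pointed to the freshly created objects.
session.post(f"{CTRL}/api/pool", json=clone).raise_for_status()
```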



The tool next migrates the pool groups and pools that are directly associated with the Virtual Services.


The tool next migrates the VSVIPs of the selected Virtual Services.

Finally, the tool migrates all the Virtual Services selected in this run.

After successful migration, we are logged out of the session.

Run 2 – Migrating DMZ applications using NSX ALB Cloud Migrator
Let’s do the second run to migrate the DMZ Virtual Service. The workflow is the same, except that the target VRF Context is ‘DMZ-VC02’ and the Virtual Service is ‘VS-Gateway4Guest’.




Final State
At this point, we should see that all the selected Virtual Services, pools, pool groups, VSVIPs and HTTP content-switching policies have been successfully migrated to the target cloud account ‘Cloud-VC02’. All policies and profiles attached to the Virtual Services and pools remain available after migration as well, since they are tenant-level constructs.

Let’s perform a quick validation of a migrated Virtual Service (an API-based spot check is sketched after this list).
- Virtual Services are migrated with:
  - the VS in disabled state
  - Traffic Enabled unchecked (meaning the VS won’t respond to ARP)
- The migrated VS:
  - has the correct migrated pool and pool group associations
  - has the correct migrated HTTP Policy Sets applied
  - has the correct policies and profiles applied (application profile, SSL profile, WAF profile etc.)
  - has the correct SSL / TLS certificate applied
  - has the correct SEG associated
- In short, it retains all of the configuration from the initial state.
- Some limitations / exceptions are documented in the release notes and in my introductory article.
https://github.com/harikrishnant/NsxAlbCloudMigrator/blob/main/RELEASENOTES.md
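Those spot checks can also be done over the API. A small sketch using the `session` from the login sketch; the suffixed VS name is a placeholder for this run:

```python
CTRL = "https://alb-controller.lab.local"  # placeholder, as above

vs = session.get(f"{CTRL}/api/virtualservice",
                 params={"name": "VS-Intranet-vc02"}).json()["results"][0]

assert vs["enabled"] is False          # migrated in disabled state
assert vs["traffic_enabled"] is False  # won't answer ARP until cut-over

print(vs.get("pool_ref") or vs.get("pool_group_ref"))  # migrated pool / pool group
print(vs.get("se_group_ref"))                          # should be the target SEG
print([p["http_policy_set_ref"] for p in vs.get("http_policies", [])])
```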



Performing the Cut-Over
On the Virtual Service in “Cloud-VC01”, uncheck the “VS Enabled” and “Traffic Enabled” options. This disables the VS and makes sure that it won’t respond to ARP requests.
On the migrated Virtual Service in “Cloud-VC02”, enable the “VS Enabled” and “Traffic Enabled” options. If the proper routing configuration exists in cloud account “Cloud-VC02”, the Virtual Service will be placed on the service engines in the Compute Domain 2 vCenter (vCenter-Compute-2) and will come up green. The virtual applications will now be accessible via cloud account “Cloud-VC02”.
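The cut-over can be scripted the same way. A hedged sketch using the `session` from the login sketch; the VS names and suffix are placeholders for this lab:

```python
CTRL = "https://alb-controller.lab.local"  # placeholder, as above

def set_vs_state(name: str, enabled: bool) -> None:
    """Toggle 'VS Enabled' and 'Traffic Enabled' on a Virtual Service by name."""
    vs = session.get(f"{CTRL}/api/virtualservice",
                     params={"name": name}).json()["results"][0]
    vs["enabled"] = enabled          # 'VS Enabled' in the UI
    vs["traffic_enabled"] = enabled  # 'Traffic Enabled' (ARP responses)
    session.put(f"{CTRL}/api/virtualservice/{vs['uuid']}",
                json=vs).raise_for_status()

set_vs_state("VS-Intranet", enabled=False)      # old VS in Cloud-VC01
set_vs_state("VS-Intranet-vc02", enabled=True)  # migrated VS in Cloud-VC02
```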

This is the final state of the migrated virtual applications and the traffic flow to Virtual Services and backend pools.

Let’s wrap up! We will meet in Part 2, where we migrate virtual applications from an NSX ALB vCenter Cloud to a No-Orchestrator Cloud. Stay tuned.
I hope the article was informative. Thanks for reading.
Continue reading? Here are the other parts of this series:
Introduction : https://vxplanet.com/2022/03/16/nsx-alb-cloud-migrator-v1-0-my-first-python-project/
