
How to migrate NSX-V to NSX 4.1.0 with the Migration Coordinator [User Defined Topology]

This is the second part of the NSX-V to NSX migration series. Click here if you haven’t seen part 1, where I describe the Fixed Topology migration mode. This time, we will cover the User Defined Topology migration mode in the NSX Migration Coordinator.

Lab overview

The lab consists of the following:

  • 1 management vCenter (alm-vc01.vkernel.lan) with 2 physical ESXi hosts (vSphere version 8.0.1)
  • 1 workload vCenter (lab-vc01.infra.lan) with 3 nested ESXi hosts (vSphere version 7.0.3)
  • vSAN is the primary storage in the workload vCenter.
  • NSX-V manager appliance is deployed in alm-vc01 but registered to lab-vc01 (VMware NSX for vSphere 6.4.13)

Migration Topology

Here is an overview of the migration topologies before and after the migration. We will migrate from NSX-V Edge Services Gateways with static routes in Active-Standby mode to a topology with NSX Edges configured with BGP in Active-Active mode.

Migration topologies

Preparations

Before we can migrate to NSX, we need to make sure that the NSX and NSX-V environments are prepared. This is a crucial step, so take the time to read the following preparation guides:
Preparing NSX-V environment for user defined topology End-to-End Migration

Preparing NSX environment for user defined topology End-to-End Migration

Deploy an NSX Manager Appliance

In order to migrate from NSX-V to NSX, we need to deploy a new NSX appliance to run the NSX Migration Coordinator. Do not deploy additional appliances to form a cluster; this can be done after the migration has been completed.

Migrate NSX-V to NSX
Deployed a single NSX Manager in my lab.

Add compute managers

After the deployment of the NSX Manager, power on the appliance, add a compute manager for the deployment of the NSX Edges (infra vCenter – alm-vc01.vkernel.lan), and add the vCenter that has been configured with NSX-V (lab-vc01.infra.lan) as the second compute manager.
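
If you prefer to script this step, the same registration can be done against the NSX Manager API (POST /api/v1/fabric/compute-managers). Below is a minimal Python sketch; the manager FQDN, credentials and certificate thumbprints are placeholders for your own values, so verify the payload against the API guide for your NSX version.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")           # NSX admin credentials
VERIFY = False                          # lab only; use a CA-signed certificate in production

def add_compute_manager(server, username, password, thumbprint):
    """Register a vCenter Server as a compute manager in NSX."""
    body = {
        "server": server,
        "origin_type": "vCenter",
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": username,
            "password": password,
            "thumbprint": thumbprint,   # SHA-256 thumbprint of the vCenter certificate
        },
    }
    r = requests.post(f"{NSX_MGR}/api/v1/fabric/compute-managers",
                      json=body, auth=AUTH, verify=VERIFY)
    r.raise_for_status()
    return r.json()

# Infra vCenter for the Edge deployment and the NSX-V enabled workload vCenter
add_compute_manager("alm-vc01.vkernel.lan", "administrator@vsphere.local", "CHANGE_ME", "<alm-vc01 thumbprint>")
add_compute_manager("lab-vc01.infra.lan", "administrator@vsphere.local", "CHANGE_ME", "<lab-vc01 thumbprint>")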

NSX Compute Managers
Adding compute managers in NSX.

Create Uplink segments

I’ve created two segments in NSX using the default nsx-vlan-transportzone transport zone. These segments will be used to configure the uplink interfaces of the NSX Edges towards the physical network for the north-south connectivity. We will eventually use these interfaces for the BGP routing.
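
For reference, the two uplink segments can also be created through the Policy API. This is a minimal sketch; the segment names, VLAN IDs and the transport zone path are placeholders that you would replace with the values from your environment.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only

# Policy path of the VLAN transport zone (look it up under System > Transport Zones,
# or with GET /policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones)
VLAN_TZ_PATH = "/infra/sites/default/enforcement-points/default/transport-zones/<tz-uuid>"

def create_vlan_segment(segment_id, vlan_id):
    """Create (or update) a VLAN-backed uplink segment via the Policy API."""
    body = {
        "display_name": segment_id,
        "vlan_ids": [str(vlan_id)],
        "transport_zone_path": VLAN_TZ_PATH,
    }
    r = requests.patch(f"{NSX_MGR}/policy/api/v1/infra/segments/{segment_id}",
                       json=body, auth=AUTH, verify=VERIFY)
    r.raise_for_status()

# Example uplink segments for the two Edge uplink VLANs (names and IDs are placeholders)
create_vlan_segment("edge-uplink-01", 100)
create_vlan_segment("edge-uplink-02", 101)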

NSX Uplink segments
2 uplink segments for the north-south connectivity.

Deploy NSX Edges

Create IP Pool for the NSX Edges

Create a new IP Pool that will be used for the NSX Edge TEP interfaces. According to the VMware documentation, this should be a new VLAN with an IP range that doesn’t exist in the current NSX-V environment. I have also tested it with the same NSX-V TEP network and that works fine (not recommended by VMware).
Communication should be allowed between the new TEP network and the current NSX-V TEP network.
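
If you want to script the pool creation, a minimal Policy API sketch could look like the following; the pool name, subnet, gateway and allocation range are placeholders, so double-check the payload against the NSX API documentation for your version.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only

POOL_ID = "edge-tep-pool"               # hypothetical pool name
# Example TEP subnet; use a new VLAN/subnet that is not in use by NSX-V
SUBNET = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "192.168.100.0/24",
    "gateway_ip": "192.168.100.1",
    "allocation_ranges": [{"start": "192.168.100.10", "end": "192.168.100.50"}],
}

# Create the pool, then add the static subnet to it
requests.patch(f"{NSX_MGR}/policy/api/v1/infra/ip-pools/{POOL_ID}",
               json={"display_name": POOL_ID}, auth=AUTH, verify=VERIFY).raise_for_status()
requests.patch(f"{NSX_MGR}/policy/api/v1/infra/ip-pools/{POOL_ID}/ip-subnets/tep-subnet",
               json=SUBNET, auth=AUTH, verify=VERIFY).raise_for_status()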

NSX IP pool
Edge uplink pool for the NSX Edges.
IP Pool Edges
Make sure that the subnet and VLAN don’t exist in the NSX-V environment.

Configure NSX Edge Uplink Profile

Create an uplink profile for the NSX Edges. Adjust the uplink profile settings such as VLAN, MTU and/or teaming policy to your needs.
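
As a reference, an uplink profile can also be created through the Policy API. The sketch below is an example under assumptions (the profile name, transport VLAN, MTU and teaming are placeholders); verify the exact schema in the NSX API guide for your version.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only

PROFILE_ID = "edge-uplink-profile"      # hypothetical profile name
body = {
    "resource_type": "PolicyUplinkHostSwitchProfile",
    "display_name": PROFILE_ID,
    "transport_vlan": 200,              # Edge TEP VLAN (placeholder)
    "mtu": 1700,                        # keep >= 1600 for Geneve overlay traffic
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
}
r = requests.patch(f"{NSX_MGR}/policy/api/v1/infra/host-switch-profiles/{PROFILE_ID}",
                   json=body, auth=AUTH, verify=VERIFY)
r.raise_for_status()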

Configure an uplink profile according to your needs.

With the user defined topology migration mode, we can deploy the NSX Edges from the NSX Manager GUI instead of using the OVA file.

Edge transport nodes
Deployed 2 NSX Edge nodes.

In my test case, I have deployed 2 NSX Edges and added both Edges to the edge-cluster cluster. The NSX Edges have two uplinks in the uplink VLAN segments.

NSX Edge cluster
Edge cluster containing the two Edge nodes.

Configuring the Tier-0 and Tier-1 gateways

Tier-0 Gateway

The next thing I did was create the Tier-0 gateway for the north-south routing. To configure the north-south connectivity, we need to attach the edge-cluster cluster to the Tier-0 gateway and configure routing between the virtual and the physical networks. We can do this by configuring static routes, BGP or OSPF as the routing method on the Tier-0 gateway.

I’ve also configured route re-distribution on the Tier-0 gateway. This allows us to choose what we want to advertise towards the BGP neighbors.
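
For those who prefer the API, a minimal sketch of the same Tier-0 configuration is shown below; the gateway name, edge cluster path and redistribution types are placeholders for your own design, so verify them against the NSX Policy API documentation.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only

T0_ID = "migration-t0"                  # hypothetical Tier-0 name
# Policy path of the edge cluster (look it up with GET .../enforcement-points/default/edge-clusters)
EDGE_CLUSTER_PATH = "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster-uuid>"

# 1. Create the Tier-0 gateway in active-active HA mode
requests.patch(f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0_ID}",
               json={"display_name": T0_ID, "ha_mode": "ACTIVE_ACTIVE"},
               auth=AUTH, verify=VERIFY).raise_for_status()

# 2. Attach the edge cluster and define which routes are redistributed into BGP
locale_service = {
    "edge_cluster_path": EDGE_CLUSTER_PATH,
    "route_redistribution_config": {
        "redistribution_rules": [
            {"name": "advertise-connected-and-t1",
             "route_redistribution_types": ["TIER0_CONNECTED", "TIER1_CONNECTED", "TIER1_LB_VIP"]},
        ]
    },
}
requests.patch(f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0_ID}/locale-services/default",
               json=locale_service, auth=AUTH, verify=VERIFY).raise_for_status()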

Tier-0
Creating Tier-0 gateway.

After attaching the Edge cluster to the Tier-0 gateway, we can now add interfaces to the Tier-0 gateway. We have created 2 VLAN uplink segments in NSX, so I will configure 2 uplink interfaces per NSX Edge node for an active-active HA mode.
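
A minimal Policy API sketch of creating these uplink interfaces could look like this; the interface names, segment IDs, edge node paths and IP addresses are placeholders for your own addressing plan.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only
T0_ID = "migration-t0"                  # hypothetical Tier-0 name

def create_uplink_interface(if_id, segment_id, edge_node_path, ip, prefix_len):
    """Create an external uplink interface on the Tier-0 gateway for one Edge node."""
    body = {
        "type": "EXTERNAL",
        "segment_path": f"/infra/segments/{segment_id}",
        "edge_path": edge_node_path,    # .../edge-clusters/<cluster>/edge-nodes/<node>
        "subnets": [{"ip_addresses": [ip], "prefix_len": prefix_len}],
    }
    url = (f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0_ID}"
           f"/locale-services/default/interfaces/{if_id}")
    requests.patch(url, json=body, auth=AUTH, verify=VERIFY).raise_for_status()

# Example: two uplinks for the first Edge node (paths, segment names and IPs are placeholders)
EN1 = "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster>/edge-nodes/<edge01>"
create_uplink_interface("edge01-uplink1", "edge-uplink-01", EN1, "192.168.254.2", 29)
create_uplink_interface("edge01-uplink2", "edge-uplink-02", EN1, "192.168.255.2", 29)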

Tier-0 Interfaces
Creating the NSX Edge uplink interfaces.

In my case, I have chosen BGP as the routing protocol and configured 2 BGP neighbors in the two uplink segments.
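
The equivalent BGP configuration through the Policy API could look roughly like the sketch below; the AS numbers and neighbor addresses are placeholders for your own routing design.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only
T0_ID = "migration-t0"                  # hypothetical Tier-0 name
BGP_BASE = f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0_ID}/locale-services/default/bgp"

# Enable BGP with ECMP on the Tier-0 (AS numbers are placeholders)
requests.patch(BGP_BASE, json={"enabled": True, "ecmp": True, "local_as_num": "65001"},
               auth=AUTH, verify=VERIFY).raise_for_status()

# One BGP neighbor per uplink VLAN towards the physical network
for name, address in [("tor-a", "192.168.254.1"), ("tor-b", "192.168.255.1")]:
    requests.patch(f"{BGP_BASE}/neighbors/{name}",
                   json={"neighbor_address": address, "remote_as_num": "65000"},
                   auth=AUTH, verify=VERIFY).raise_for_status()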

Tier-0 BGP
Configure BGP Neighbors.

Tier-1 Gateway

In NSX-V, I have the following edge services gateways and distributed logical routers:

Name | Type | Purpose
prod-nesg01 | edge services gateway | For the north-south routing and firewall of the production environment.
prod-ndlr01 | distributed logical router | For the distributed firewall and segments of the production environment.
dmz-nesg01 | edge services gateway | For the north-south routing and firewall of the DMZ environment.
dmz-ndlr01 | distributed logical router | For the distributed firewall and segments of the DMZ environment.
dmz-nesg02 | edge services gateway | This edge has an uplink interface in one of the DMZ segments. The uplink interface is used to load balance between two web servers.

During the NSX migration, we will need to map the NSX-V edges to the correct Tier-0 or Tier-1 gateways. You cannot assign multiple NSX-V Distributed Logical Routers to the same Tier-0 or Tier-1 gateway, so I created the following Tier-1 gateways:

Name | Mapped to | Purpose
DMZ-LB-Tier-1 | dmz-nesg02 | To configure the DMZ load balancer for the web servers.
DMZ-Tier-1 | dmz-ndlr01 | For the DMZ segments.
PROD-Tier-1 | prod-ndlr01 | For the Production segments.
Tier-1
Creating new Tier-1 Gateways.

Configure route advertisements on the Tier-1 gateways. This will allow us to choose what we want to advertise towards the Tier-0 gateway.
I will not configure route advertisements on the DMZ-LB-Tier-1 gateway. This gateway will be configured with a static route since it isn’t linked to the Tier-0 gateway.
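
For reference, the Tier-1 gateways and their route advertisements can also be created through the Policy API. This is a minimal sketch based on the mapping above; the Tier-0 path and the advertisement types are placeholders, so adjust them to your own design.

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only
T0_PATH = "/infra/tier-0s/migration-t0" # hypothetical Tier-0 path

def create_tier1(t1_id, linked_to_t0=True, advertisements=None):
    """Create a Tier-1 gateway, optionally linked to the Tier-0 with route advertisements."""
    body = {"display_name": t1_id}
    if linked_to_t0:
        body["tier0_path"] = T0_PATH
    if advertisements:
        body["route_advertisement_types"] = advertisements
    requests.patch(f"{NSX_MGR}/policy/api/v1/infra/tier-1s/{t1_id}",
                   json=body, auth=AUTH, verify=VERIFY).raise_for_status()

# Tier-1 gateways linked to the Tier-0, advertising their connected segments
create_tier1("PROD-Tier-1", advertisements=["TIER1_CONNECTED"])
create_tier1("DMZ-Tier-1", advertisements=["TIER1_CONNECTED"])
# Standalone Tier-1 for the load balancer, not linked to the Tier-0 and without advertisements
create_tier1("DMZ-LB-Tier-1", linked_to_t0=False)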

Tier-1 route advertisements
Enabling route advertisements on Tier-1 gateways.

Enabling the migration service

The migration coordinator service is disabled by default. You will see the following message when accessing the migrate feature in the NSX GUI:

Migration Coordinator disabled
Migration-coordinator service is not started.

To enable the service, open a root console session or an SSH session to the NSX Manager and execute the following command:

start service migration-coordinator
SSH NSX
Starting the migration-coordinator service.
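
If you want to script this step, a small sketch using paramiko can start the service over SSH and verify it with get service migration-coordinator; the manager FQDN and credentials are placeholders.

import paramiko

NSX_MGR = "nsx01.vkernel.lan"           # hypothetical NSX Manager FQDN
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab only
client.connect(NSX_MGR, username="admin", password="CHANGE_ME")

# Start the migration coordinator and verify that it is running
for cmd in ("start service migration-coordinator", "get service migration-coordinator"):
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read().decode())

client.close()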

Migrate NSX-V to NSX

We are now ready to continue with the migration of the NSX-V environment. Let’s access the Migrate page in NSX, select the Get Started drop-down under NSX for vSphere and click on User Defined Topology.

Select User Defined Topology.

Import Configuration

We already covered the preparations of the NSX and NSX-V environments, so we can continue by clicking on Next.

Import configuration
Click on Next.

Select Complete Migration as the migration mode. This will migrate the complete NSX-V environment.

Import configuration 1
Select the Migration Mode.

We should now configure the authentication towards the NSX-V enabled vCenter Server and the NSX-V manager itself.

Import configuration 2
Configure authentication towards vCenter and NSX-V
Import configuration 3
Configure the credentials for vCenter and NSX-V

Perform the import configuration and click on Continue.

Import configuration 4
Performing the import configuration.

Translate Configuration Layer 2

Perform the Translate Configuration Layer 2 task and click on Continue when the status is successful.

Translate Configuration Layer 2
Perform the Translate Configuration Layer 2 and click on Continue.

Resolve Configuration Layer 2

In the Resolve Configuration Layer 2 section, we need to give some input to resolve some warnings. In the table below, you will see the warnings that need to be taken care of.

Note:

Keep in mind that the actions mentioned below are based on my test migration use case.
Resolve Configuration Layer 2
Resolve the warnings.
  • L2: TEP VLAN ID for NSX-T Edge Transport Nodes (Action: VLAN ID)
    Please set an Edge TEP VLAN. If the NSX-T Edge VM(s) is on any NSX-V host to be migrated, choose a VLAN number from the range 0-4093 other than VLAN [2711, 15, 2710, 2713]. Otherwise choose a VLAN number from the range 0-4093 other than VLAN [2711, 15, 2710], or use the existing TEP VLAN 2713.

  • L2: Migrate or skip VLAN DVPGs missing segments (Action: Skip)
    Select ‘migrate’ to migrate all VLAN DVPGs that do not have NSX-T segments with the same VLAN IDs in the right transport zones, or select ‘skip’ to see details and skip them later in their own feedback messages.

  • L2: Choose deleting all NSX-V transport zones post migration (Action: Manual)
    All NSX-V transport zones must be deleted in order to remove NSX-V from the system after the migration completes. Select automatic to let the migration coordinator delete the NSX-V transport zones at the last step of the migration, or select manual to do it yourself after the migration. Note that all NSX-V edges whose interface configuration has a virtual wire, and all virtual wires themselves, must be deleted before the NSX-V transport zones can be deleted.

  • L2: Missing VLAN segment (Action: Accept)
    No segment with VLAN [15] is found to match DVPortgroup(s) [‘lab-pg-mgmt’] in [lab-dvs-01]. Create the VLAN segment in any VLAN transport zone and retry the migration, or select skip to continue without migrating the DVPortgroup(s) [‘lab-pg-mgmt’] in [lab-dvs-01]. If skipped, 10 VM vNIC(s) and/or vmknic(s) connected to the DVPortgroup(s) will not be migrated to NSX-T and will lose their DFW configuration after migration. (And more.)

  • Maintenance Mode: Choose Maintenance mode option for cluster (Action: Automated)
    During the host migration stage you can select In-Place or Maintenance mode for any cluster. With Maintenance mode, this option indicates whether you want to manually bring the host into maintenance mode or let the migration coordinator automate the vMotion of VMs to do so. For automated maintenance mode, DRS has to be enabled if the VDS version is 7.0 or newer; if the VDS version is 6.5 or 6.7, DRS has to be disabled, as the migration coordinator will execute a best effort vMotion. Specify the Maintenance Mode option for cluster lab-cls01.

  • Edge: ESG management interfaces for site (Action: Skip)
    Select the interfaces in the site that are purely used for management connectivity. Each option in the list contains the edge ID, vNIC name and its primary IP address.

Note:

In case you have NSX-V Edges deployed with HA, you will need to disable the Anti-Affinity rules for the NSX-V Edges.
DRS anti affinity rules
Disable the NSX-V Anti-Affinity rules in vCenter.

We can proceed by clicking on Continue when all warnings are fixed.

Resolve Configuration Layer 2
Click on continue to proceed.

Migrate Configuration Layer 2

Perform the Migrate Configuration Layer 2 task and click on Continue when the status is successful.

Note:

While the migration is in progress, do not delete migrated objects in NSX unless you need to fix a rollback failure, and do not change configurations in NSX for vSphere or in NSX unless you need to resolve blocking migration issues.
Migrate Configuration Layer 2
Click on Continue to proceed.
Migrate Configuration Layer 2
Click on Migrate to start the migration of the configuration to NSX.
Migrate Configuration Layer 2
Click on Continue to proceed.

Check Realization Layer 2

Perform the Check Realization Layer 2 task and click on Continue when the status is successful.

Check Realization Layer 2
Click on Continue to proceed.

Define Topology

We are now ready to map the NSX-V Edges to the Tier-0 or Tier-1 gateways. We will map the NSX-V Edges as follows:

Name | Mapped to | Purpose
DMZ-LB-Tier-1 | dmz-nesg02 | To configure the DMZ load balancer for the web servers.
DMZ-Tier-1 | dmz-ndlr01 | For the DMZ segments.
PROD-Tier-1 | prod-ndlr01 | For the Production segments.

I do not map the north-south Edge Services Gateways to any of the Tier gateways, because they do not run any network services that I would like to migrate; they only serve north-south connectivity in my lab.

Define Topology
Map the NSX-V Edges.

The following warning is about the two NSX-V Edge Services Gateways that I did not map.

Define Topology
This is as expected, so I will click on Skip and Continue.

Translate Configuration L3, L4-L7 Services

Perform the Translate Configuration L3, L4-L7 Services task and click on Continue when the status is successful.

Translate Configuration L3, L4-L7 Services
Perform the Translate Configuration L3, L4-L7 Services and click on Continue.

Resolve Configuration L3, L4-L7 Services

In the Resolve Configuration L3, L4-L7 Services section, we need to give some input to resolve some warnings. In the table below, you will see the warnings that need to be taken care of.

Note:

Keep in mind that the actions mentioned below are based on my test migration use case.
Resolve Configuration L3, L4-L7 Services
Resolve the warnings.
  • Appliance Management: NTP server configuration already present on NSX-T. Do you want to continue or skip NTP migration? (Action: Skip)
    NTP config migration.

  • NS Service: L7 Service configuration is not supported in NSX-T (Action: Accept)
    In NSX-V, the L7 Service with APPID ‘APP_ALL’ was created for discovery purposes only, hence it will not be migrated to NSX-T.

  • NS Service: Empty Application Group is not supported in NSX-T (Action: Accept)
    The ‘vSphere Syslog Collector’, ‘Microsoft Exchange 2010’, ‘Microsoft Exchange 2007’ and ‘VMware vSphere Dump Collector’ Application Groups cannot be migrated because they have no Group Members. NSX-T does not allow a Service with no Service Entry.

  • Edge: Feature on Edge cannot be migrated (Action: Accept)
    Feature VPN on edges edge-7 and edge-9 cannot be migrated to NSX-T. By proceeding with the migration, the VPN configuration on these edges will be lost after migration.

  • Edge: Please select NSX-T Mapped Port type for interface migration (Action: Service Port)
    Migrating all interfaces from ESGs is mostly not needed; evaluate as per your topology. Select the NSX-T Mapped Port type for the ESG edge-9 interface dmz-ls02 migration.

  • Edge: Static/default routes are configured on ESG. Provide feedback to migrate them (Action: Skip)

  • Edge: Do not attach Loadbalancer to Tier (Action: Skip)
    The load balancer will be attached to the Tier by default. If the Tier is standalone and no service interface is created, this must be ‘YES’ in case of a load balancer migration failure.

  • Edge: Orphan monitor (Action: Accept)
    The monitors monitor-3 and monitor-1 on edge-9 are not used by any valid pool.

  • RBAC: vIDM is required for user role migration if you are using vCenter users with NSX roles assigned in NSX-V (Action: Skip)
    Check your configuration; if this scenario is applicable, configure vIDM, otherwise skip it. See the documentation for configuring vIDM.

  • Distributed Firewall: Some of the default DFW rules will not be migrated (Action: Accept)
    The default NDP and DHCP rules will not be migrated.
Resolve Configuration L3, L4-L7 Services
Click on continue to proceed.

Migrate Configuration L3, L4-L7 Services

Perform the Migrate Configuration L3, L4-L7 Services task and click on Continue when the status is successful.

Note:

While the migration is in progress, do not delete migrated objects in NSX unless you need to fix a rollback failure, and do not change configurations in NSX for vSphere or in NSX unless you need to resolve blocking migration issues.
Migrate Configuration L3, L4-L7 Services
Click on Continue to proceed.
Migrate Configuration L3, L4-L7 Services
Click on Migrate to start the migration of the configuration to NSX.
Migrate Configuration L3, L4-L7 Services
Click on Continue to proceed.

Check Realization L3, L4-L7 Services

Perform the Check Realization L3, L4-L7 Services task and click on Continue when the status is successful.

Check Realization L3, L4-L7 Services
Click on Continue to proceed.

Add Static Route on LB Tier-1

Because of the load balancer service running on dmz-nesg02, I need to add a static route to the DMZ-LB-Tier-1 gateway. This can be done by editing the Tier-1 gateway.

Add Static Route on LB Tier-1
Edit the Tier-1 gateway

Add a new static route with the network 0.0.0.0/0 and the next hop towards the load balancer subnet gateway.
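
The same static route can be added through the Policy API. A minimal sketch, assuming a hypothetical next-hop address that you would replace with the gateway of your load balancer subnet:

import requests

NSX_MGR = "https://nsx01.vkernel.lan"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "CHANGE_ME")
VERIFY = False                          # lab only

T1_ID = "DMZ-LB-Tier-1"
route = {
    "network": "0.0.0.0/0",
    # Next hop towards the gateway of the load balancer subnet (placeholder IP)
    "next_hops": [{"ip_address": "172.16.20.1", "admin_distance": 1}],
}
requests.patch(f"{NSX_MGR}/policy/api/v1/infra/tier-1s/{T1_ID}/static-routes/default-route",
               json=route, auth=AUTH, verify=VERIFY).raise_for_status()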

Add Static Route on LB Tier-1
Add a default static route.
Add Static Route on LB Tier-1
Configure the next hop.

Migrate Edges

We are now ready to perform the migration of the Edges. Before starting the migration, you should start continuous pings to VMs on the segments, load balancer VIPs and segment gateways. With these pings, we can monitor the amount of downtime during the migration of the Edges.
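
To make the downtime measurable, I like to script the pings instead of watching multiple terminal windows. The sketch below is a simple Python loop (the target IPs are placeholders) that logs when a target stops responding and how long it took to recover.

import subprocess, time
from datetime import datetime

# Hypothetical addresses to watch: a VM per segment, the LB VIP and a segment gateway
TARGETS = ["10.10.10.10", "172.16.20.100", "10.10.10.1"]

def is_alive(ip):
    """Single ICMP echo with a 1 second timeout (Linux ping syntax)."""
    return subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                          stdout=subprocess.DEVNULL).returncode == 0

outage_start = {}
while True:                             # stop with Ctrl+C once the Edge migration is done
    for ip in TARGETS:
        if is_alive(ip):
            if ip in outage_start:
                downtime = time.time() - outage_start.pop(ip)
                print(f"{datetime.now()} {ip} recovered after {downtime:.1f}s")
        elif ip not in outage_start:
            outage_start[ip] = time.time()
            print(f"{datetime.now()} {ip} stopped responding")
    time.sleep(1)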

Click on Start to begin the migration.

At some point, the ping will return the error message: “Request Timed Out.” At that point, I need to disable/remove the static routes on the physical switch towards the NSX-V Edges. The traffic will now be routed with BGP towards the NSX Edges.

Migrate Edges
Migration of the Edge is successful, click on Continue to proceed.

Migrate Hosts

The last step is migrating all the ESXi transport nodes to the new NSX environment. During this procedure, the ESXi transport nodes will be put in maintenance mode, the NSX-V VIBs will be removed and the NSX VIBs will be installed.

Migrate Hosts
The migration of the ESXi transport nodes is successful, click on Finish to continue.

The migration is a success; we have now migrated the NSX-V environment to the new NSX environment with the Migration Coordinator.

Post Migration Tasks

After the migration, some additional steps might be required. Please visit the VMware documentation to view the post-migration steps.
