
VXLAN/L2 from ACI to Nexus 9000?

I have a weird requirement here. We have an ACI fabric in one of our datacenters, and the secondary DCs are still on Nexus 7Ks/9Ks. We are going to upgrade our secondary datacenters to ACI sometime this year, but because of the chip shortage, delivery of the ACI switches has been pushed back to a later date.

In the interim, there are a few apps that need to move from the backup DC to ACI, and a Layer 2 extension is required between the DCs. I have jotted down the options we are considering here. Can someone please advise on the best approach?

1) Use an L2 extension technology like OTV - I have tried this before and have had issues with OTV/VXLAN interop and ARP learning. I want to avoid OTV because of those support issues.

2) VXLAN - Can I install two new Nexus 9Ks in the second datacenter and build VXLAN to ACI, or to another pair of Nexus 9Ks in the primary DC? We are OK with using this Nexus 9K VXLAN path just for Layer 2 extension. We even have spare non-Cisco switches that could bring up a VXLAN tunnel. (A rough config sketch is below.)

3) IP mobility - Simply configure the subnet from the second DC in the ACI fabric, and do /32 host routing to achieve IP mobility (illustrated below).

4) Possibility of a remote leaf solution in the second DC? The issue is that we are migrating from the second DC to the main ACI fabric (the reverse direction), so I'm not sure if this will work.

There are other options too, but we are mostly looking at the options above.
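For option 2, here is a minimal sketch of what the Nexus 9K side could look like, assuming a BGP EVPN underlay and overlay peering are already in place (VLAN 10 and VNI 10010 are made-up values for illustration):

    ! Enable VXLAN EVPN on each N9K (assumed values throughout)
    nv overlay evpn
    feature nv overlay
    feature vn-segment-vlan-based

    ! Map the stretched VLAN to a VXLAN VNI
    vlan 10
      vn-segment 10010

    ! NVE interface sources the tunnel from a loopback
    interface nve1
      no shutdown
      host-reachability protocol bgp
      source-interface loopback1
      member vni 10010
        ingress-replication protocol bgp

Keep in mind that standalone NX-OS VXLAN does not interoperate directly with the VXLAN that ACI uses internally, so the handoff into the ACI fabric would still be an ordinary L2 trunk into an EPG.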
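For option 3, the behavior you'd be relying on is longest-prefix match in the WAN. A rough illustration with made-up next hops:

    ! Seen from a WAN router after VM 10.1.1.20 moves to DC1:
    !   10.1.1.0/24   -> DC2 N7K         (subnet still anchored in DC2)
    !   10.1.1.20/32  -> DC1 ACI L3Out   (host route advertised by ACI)

On the ACI side this would use the BD's host-route advertisement option together with an L3Out; traffic for the moved VM follows the /32 while everything else still lands in DC2.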

4 Replies

Robert Burns
Cisco Employee

All things considered, ACI Remote Leaf is a perfect fit for this use case. You'd be able to build out your primary ACI fabric at your main DC, deploy a pair of RLs at the secondary site, and then do simple L2 extensions from the RLs to the legacy workloads. The nice thing about the RL architecture is that there's minimal complexity involved - no need to set up OTV or a separate VXLAN network between sites. Then, once you replace the secondary DC switches with ACI, you can decommission the Remote Leafs and redeploy them into either fabric.
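For illustration, the legacy side of such an L2 extension is just an ordinary trunk toward the RL pair, with the EPG statically bound to the RL ports on the ACI side (the VLAN and port-channel numbers below are assumed values):

    ! N7K aggregation facing the Remote Leaf pair
    interface port-channel20
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 10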

Robert

Thanks so much. 

Let me map this out:

DC1 (ACI) --> WAN/IPN --> RL in DC2 --> L2 trunk to existing N7K Agg (DGW, say 10.1.1.1/24)

Now say I'm migrating a VM (no IP change) from DC2 (10.1.1.20) to DC1. I define the BD/EPG in DC1 (for 10.1.1.1/24), and the VM will be local to DC1. I'll have /32 host routes from DC1, and all is fine.

On the other side, in DC2 where the L2 extension is defined, will endpoint learning work OK? Remember, DC2 will have its own DGW through the Nexus, and half of the workloads are still there. Will there be IP conflicts for the DGW because of this? Will endpoint learning work fine in DC2 with the remote leaf?

The GW for the subnet can only exist on ACI or on the legacy DC2 N7Ks at any one time. Typically the migration would be as follows:

1. Bring up ACI in DC1; create the BD (no SVI assigned) as L2 with flooding enabled, with the EPG created and assigned to the RL interfaces via a static path.
2. Migrate the VMs from hosts located in DC2 to ACI-attached hosts in DC1. At this point, understand that the legacy GW will still be used (even for VMs migrated to DC1).
3. Shut down the SVI on the N7K, and create the SVI on the BD in ACI. (You can even go as far as to assign the same BD MAC address as used on your N7K, to remove any outage from VMs having to re-ARP for the new MAC.) At this point the SVI will reside on the leafs, including the remote leafs. If you require these VMs to be routed to other subnets, then you'll need to allow them to route either via the ACI fabric internally, or via an L3Out (on the RLs or the DC1 leafs) which allows them to reach endpoints external to ACI.
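As a rough sketch of the step 3 cutover on the N7K side (VLAN 10 and HSRP group 10 are assumed values; 0000.0c07.ac0a is the standard HSRPv1 virtual MAC for group 10):

    ! Legacy N7K: retire the gateway SVI at cutover
    interface Vlan10
      shutdown

    ! In ACI: add the gateway subnet (10.1.1.1/24) to the BD and, optionally,
    ! set the BD MAC to the old virtual MAC (e.g. 0000.0c07.ac0a) so hosts
    ! keep their existing ARP entry.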

Robert

Thanks. I'll look into this.

We aren't migrating the SVIs as of now. We have some services in DC2 which need to move to DC1. Half of the VMs in 10.1.1.0/24 (say) will move to DC1 (ACI), and the other half will remain in DC2. The subnet is originally in DC2's IP supernet, and the existing default gateway is the N7K. So, the only way to do remote leaf is to flip the gateway to ACI and have host routing through DC2 for the VMs still left in DC2?

Can I put a pair of Nexus 9K switches in each DC and just build VXLAN through the Nexus boxes? If not, can I also use the IP mobility solution? By IP mobility, I mean that the DC2 VLAN/gateway will stay where it is; I just define the BD/EPG in DC1, move the VM, and leak the /32 routes to the WAN.

Thanks again.
