
Nexus 1000V in same VLAN with single physical NIC

sushil
Level 1

Hi,

I am new to nexus 1000v virtual switches.

Trying to create a demo environment (but I will be configuring at a larger scale in a couple of days).

To start with, my design in the demo lab is something like this:

1. Two ESXi hosts managed by vCenter and an unmanaged non-Cisco switch. Each ESXi host has only a single physical NIC.

2. I created one active VSM on one of the hosts and a VEM on the same host as well. I kept the provision of the standby VLAN in ESXi.

I can make out that the VSM-VEM communication is not happening. "show module" doesn't show it as running, though "vem status" from the CLI looks fine. It has something to do with the control VLAN, but I am not able to figure it out.

Do let me know if this type of configuration is possible with only one physical NIC per host. If yes, please share a sample config and the port profiles to be configured.

How should I uplink this to a physical switch that doesn't support trunking, i.e. just a regular access switch?

Does the second ESXi host need to have a VEM running as well? I think yes. That means I would need two licenses of Nexus 1000V.

Regards,

Sushil

6 Replies

Robert Burns
Cisco Employee

Just to clarify I understand your environment correctly:

- 2 ESX Hosts

- Each has only one NIC (hopefully you'll have more for redundancy before putting this into production!)

- VSM to be deployed on one of these hosts

First, the 1000v is licensed per VEM, not per VSM.  So if you want both hosts to participate in the 1000v DVS, you would need a VEM license for each CPU socket.

Ex. If each ESX host is a 2-socket server, you would need a total of 4 x 1000v VEM licenses.  The VSM itself is not licensed, nor does it need to be.
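
If it helps, once the licenses are installed on the VSM you can check how many sockets they are being consumed by. A rough sketch (the license filename is a placeholder for your .lic file):

install license bootflash:n1kv_license.lic
show license usage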

The VSM is supported to be run on a host where the VEM is also installed.

Using a non-managed switch and a single NIC on each host, this will be difficult to demo and not very practical.  Rarely have I ever seen a production host with a single NIC.  There's no redundancy.  It also makes the deployment of the 1000v more difficult.

It can be done, but it's not going to be simple, and if you mess up you'll be using the "recover standard vSwitch networking" utility often.  Here are the high-level steps you would need to follow to get it working:

1. Ensure your Host has a vSwitch portgroup defined.  Usually there's a default one called "VM Network".  This port group should not have any VLAN assigned.

2. Assuming you're not using VUM, manually install the VEM agent onto the host running the VSM (see the command sketch after these steps).

3. Deploy the first VSM using the OVF template & Installer Wizard. This will configure and deploy your 1000v.  All three VSM interfaces should be assigned to the "VM Network" vSwitch port group.

4. After it has been deployed and powered on, open a browser to the VSM's IP and launch the Installer Application.  This will configure your vCenter SVS connection.  Towards the end it will prompt to migrate the VSM to the DVS along with any virtual interfaces or VM networking - decline and simply click "Finish".  Note: Attempting to migrate everything in one swoop will cause the Installer Application to fail.

5. Next you need to connect to the VSM with SSH or the console, and create your uplink port profile and vEthernet port profiles.  The ones I've created look like this:

port-profile type ethernet sys-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan 177
  switchport trunk allowed vlan 177
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 177
  state enabled

port-profile type vethernet vsm-interfaces
  vmware port-group
  switchport mode access
  switchport access vlan 177
  no shutdown
  system vlan 177
  state enabled

port-profile type vethernet esx-management
  vmware port-group
  switchport mode access
  switchport access vlan 177
  no shutdown
  system vlan 177
  state enabled

**VLAN 177 happens to be the native VLAN that all my switches and the unmanaged switch use.  This could also be VLAN 1.

6. Now we will add the host to the DVS from vCenter.  Under Networking, click on the 1000v switch in the right panel, right-click and select "Add Host...".  Select your host, but do NOT select an adapter.  This will be done in a later step.  The host should successfully add to your DVS.

7. With the host now connected to the DVS, but all vmnics, VSM and management interfaces still attached to the vSwitch, we need to migrate them.  From the Networking view, right-click on your 1000v DVS in the right pane and select "Manage Hosts...".  A wizard will appear in which you must select your host and:

- Assign your vmnic to the Uplink Port Profile created previously

- Assign your Management Interface to the "esx-management" Port Profile

- Assign the VSM's virtual interfaces (3) to the "vsm-interfaces" Port Profile

Click Finish once done.

If all was done correctly, everything will be migrated to the DVS now!  You can see the module showing up on your VSM with the "show module" command.  You can also check the host configuration within vCenter under the Networking tab.
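
For reference, here's a rough sketch of the host-side commands for the manual VEM install from step 2 and the final checks. The VIB path/filename is a placeholder - use the VEM bundle that matches your ESX/ESXi build (the esxcli syntax below is for ESXi 5.x; older releases use the vihostupdate/esxupdate tools instead):

# On the ESXi host - manual VEM install (step 2); path/filename is a placeholder
esxcli software vib install -v /tmp/cross_cisco-vem-esx.vib

# On the ESXi host - confirm the VEM agent is loaded
vem status

# On the VSM - the host should show up as a module once VSM-VEM communication is working
show module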

I've recorded a video of this procedure here: https://supportforums.cisco.com/videos/3357

Regards,

Robert

Thanks a ton Robert.

It worked for me in the test lab.

Now I have to implement it in production, where there are two NICs on each server connecting to a Cisco L3/L2 switch.

1. There are two ESXi hosts connecting to a common shared-storage SAN in this case.

2. There will be a single VLAN operating, which is the native VLAN on the upstream switch.

Do you recommend using one NIC, say vmnic0, on each host for management only? That would be using the standard switch in this case.

3. Two VSMs in HA mode will run on these hosts themselves. Any special consideration or configuration for VSM and VEM on the same host?

4. How should my port profiles look for this to be applied?

I was wondering whether we should use NIC teaming or not. What if one of my NICs fails and communication breaks down?

The upstream switches are not stackable, so do you recommend using a port channel instead of using vmnic0 for management? Or do you suggest having more NICs in the server?

We have purchased 4 Nexus licenses and hence can configure 4 VEMs. Even if I configure these hosts as VEMs, two licenses are left spare.

We will be connecting two new ESX hosts in a few days, connected via metro switches to this present datacenter and using separate storage for these new hosts, i.e. a new SAN.

The idea would be not to allow communication between the present datacenter and the future datacenter, but to use the remaining 2 Nexus licenses and the same vCenter server (the present one we are building) to manage the future datacenter.

Can it be done with a VLAN implementation? What if these are separate, non-connected datacenters? We have procured only one instance of vCenter Server and 4 Enterprise licenses with Nexus 1000V.

Regards,

Sushil

Sushil,

Comments inline.

Regards,

Robert

1. There are two ESXi hosts connecting to a common shared-storage SAN in this case.

2. There will be a single VLAN operating, which is the native VLAN on the upstream switch.

Do you recommend using one NIC, say vmnic0, on each host for management only? That would be using the standard switch in this case.

[Robert] Redundancy is key.  If you only have two physical NICs per ESX host, it would be recommended to use them together for redundancy.  You know at least one is needed for the DVS uplink, so keeping the other for management on the vSwitch leaves you with no redundancy.

3. Two VSMs in HA mode will run on these hosts themselves. Any special consideration or configuration for VSM and VEM on the same host?

[Robert] Two considerations.  First, you have to ensure the VSM's vEthernet port profiles are defined with the "system vlan x" command.  The same command will also need to be in your uplink port profiles.  If you look at my port profiles above you'll see they adhere to this.   The only port profiles that need the "system vlan x" command are VSM port profiles, management, and IP storage (NFS/iSCSI) port profiles.  System VLANs are always up and facilitate the initial bring-up communication before the VEMs are programmed.  There's a ton of information in the forums on system VLANs if you want to learn more.  The second consideration is to create a DRS rule on your host cluster in vCenter to keep the two VSMs on "separate" hosts.  This will also prevent a single host failure from taking down your 1000v management.
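
To make the contrast concrete: the profiles above (uplink, vsm-interfaces, esx-management) all carry "system vlan 177", whereas a regular VM data profile would not need it. A minimal sketch (the profile name and VLAN are examples only):

port-profile type vethernet vm-data
  vmware port-group
  switchport mode access
  switchport access vlan 177
  no shutdown
  state enabled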

4. How should my port profiles look for this to be applied?

[Robert] Depends what your goals are.  Normally there is one port profile per "role".  For example, you may have one for web servers, another for application servers, another for SQL servers.  Think of a port profile as a set of rules or a configuration template that is duplicated to each virtual interface assigned to it.  This is a dynamic link, so any changes made to the assigned profile are immediately put into effect.  Each profile can have a different VLAN, ACL, QoS or other attributes assigned to it.  How many you craft up is up to you.
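
As a sketch of what that might look like (names and VLANs are examples only; in your single-VLAN environment they could all reference the same VLAN), each role simply gets its own vEthernet profile and policy:

port-profile type vethernet web-servers
  vmware port-group
  switchport mode access
  switchport access vlan 20
  no shutdown
  state enabled

port-profile type vethernet sql-servers
  vmware port-group
  switchport mode access
  switchport access vlan 30
  no shutdown
  state enabled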

I was wondering whether we should use NIC teaming or not. What if one of my NICs fails and communication breaks down?

[Robert] You can only do NIC teaming if you attach BOTH NICs to the same switch - vSwitch or DVS.  You can't team between the two.  Also, since your upstream switches are not clustered or stacked, you can't team your uplinks.   Your best option is to use MAC pinning on your uplink port profiles (same as my example above).  This behaves the same as the default vSwitch teaming method and requires no special upstream configuration.

The upstream switches are not stackable, so do you recommend using a port channel instead of using vmnic0 for management? Or do you suggest having more NICs in the server?

[Robert] Yes, MAC pinning as explained above.  MAC pinning is your only option unless you have managed switches northbound.

We have purchased 4 Nexus licenses and hence can configure 4 VEMs. Even if I configure these hosts as VEMs, two licenses are left spare.

[Robert] Licenses are not allocated per VEM; they are per CPU socket.  If each ESX host has two physical CPUs, this would require 2 x 1000v licenses.  The host would then become a VEM (Virtual Ethernet Module) as part of the 1000v system.

We will be connecting two new ESX hosts in a few days, connected via metro switches to this present datacenter and using separate storage for these new hosts, i.e. a new SAN.

The idea would be not to allow communication between the present datacenter and the future datacenter, but to use the remaining 2 Nexus licenses and the same vCenter server (the present one we are building) to manage the future datacenter.

Can it be done with a VLAN implementation? What if these are separate, non-connected datacenters? We have procured only one instance of vCenter Server and 4 Enterprise licenses with Nexus 1000V.

[Robert] The 1000v is pretty flexible.  You can run it in L2 mode, assuming all ESX hosts are within the same VLAN or subnet; otherwise you can run it in L3 mode and allow the control traffic to be routed over an IP network.  1000v deployments are typically local to a datacenter, but the latest version of the 1000v (1.5) introduces a feature called VXLAN.  You may wish to read up on it, as it might be a good fit if you have remote datacenters and wish to manage all your VMs centrally.  If you end up having no communication between the two datacenters, you will need a separate vCenter per DC and thus a separate 1000v at each location.
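
For the L3 option, a rough sketch of the VSM-side configuration (the domain ID and VLAN are placeholders): control traffic rides over mgmt0, and the vEthernet profile carrying the hosts' management/L3-control vmkernel interface needs the "capability l3control" statement.

svs-domain
  domain id 100
  svs mode L3 interface mgmt0

port-profile type vethernet l3-control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 177
  no shutdown
  system vlan 177
  state enabled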

Regards,

Sushil

Robert,

In my production environment there would be a managed Layer 2 Cisco 2960 or an entry-level L3 3560. So do I still need to go for MAC pinning?

On the server I will definitely use NIC teaming, as it will surely help in case a NIC on the server fails. Two cables from the server NICs will go to two different ports on the upstream switch. Is there any special configuration I need to perform on the uplink switch or the Nexus?

Also, as both NICs will look like one, should I consider it a single NIC while moving to the DVS (as per the above single-NIC example, just moving the host and then adding the port profiles) and remove the standard switches altogether?

Is it the practice that management ports should stay on vSwitches, or can everything be on the DVS?

In the case of NIC teaming, will there still be any management port remaining on a vSwitch?

I am somehow running short of ideas and I'm aware that I need to look into port channeling, MAC pinning and NIC teaming in VMware. I would really appreciate quick answers from you, as the production environment needs to be built this weekend.

Regards,

Sushil

You need MAC pinning if the pair of switches you are attaching your ESXi host to does not support some kind of multichassis EtherChannel (MEC). So if you can use stacking (3750 or 2960S), VSS (6500) or vPC (Nexus 5k or 7k), then you don't need MAC pinning and you can simply use a normal port channel, which is the best option.

If you don't have MEC, then you need MAC pinning to use both uplinks from your ESXi host; otherwise you have a loop.
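
For example, with a 3750/2960S stack upstream it could look roughly like this (profile name, VLAN, interfaces and channel number are placeholders) - LACP on the 1000v uplink profile instead of mac-pinning, with the two stack-member ports bundled on the upstream side:

! Nexus 1000v uplink port profile - LACP instead of mac-pinning
! (enable LACP on the VSM first with "feature lacp" if not already enabled)
port-profile type ethernet lacp-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan 177
  switchport trunk allowed vlan 177
  channel-group auto mode active
  no shutdown
  system vlan 177
  state enabled

! Upstream stack (IOS) - one member port per stack switch, bundled with LACP
! (on a 3750, also add "switchport trunk encapsulation dot1q" before the trunk command)
interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
 switchport mode trunk
 switchport trunk native vlan 177
 channel-group 10 mode active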

Regards

Pat

First of all, thanks a ton for the thorough description! Earlier, I spent hours and hours trying to figure all this out using the 'official' installation manual.  With your example configurations, it only took me a couple of minutes to configure most of what I needed.


I have a very similar setup, but with one ESXi host (instead of two as above). I have vCenter and the VSM running on the same host. I followed the steps given above, but after step 7 I see an error that "a network configuration change disconnected the host from vCenter and the changes have been rolled back."

Is this because my vCenter is running on the same host where the changes are being made? If yes, I wonder if there is a workaround to this at all? Any ideas? Running vCenter on a different host is perhaps the obvious one, but is there any CLI alternative to vCenter for migrating VMs and interfaces to the DVS (as noted in step 7)?

Thanks much!

vcenter-network-error.png
