Each Host/VEM can only be part of one DVS. One thing that might have happened is that the VEM still has the old VC dataset residing on it.
I'd take a look to see if that is the case.
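If it helps, here is a rough way to check from the host console (a sketch from memory; the exact output varies by VEM/ESX build):

# List the DVS data the host still has cached locally; the switch ID in the
# header of the output is the one the host thinks it belongs to
net-dvs -l

# On the Nexus 1000V side, this should show the switch name/UUID and domain
# the VEM module is currently bound to
vemcmd show card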
This FAQ page has some additional details on checking this:
Thank you Sachin.
I can see the stale data on the VEM pointing to the old vCenter; I do not understand the relationship (how this data/database is created).
This is a non-prod environment so it should be fine, but in a production environment this would be a disaster (ideally I should have a backup, but I am not sure whether a vCenter database backup would solve the issue or not). I tried to remove that dvSwitch and got an ioctl failed error, so I will end up removing the VEM and installing it again.
I understand the Nexus 1000V extension key, but I never thought about this DVS switch UUID. I did not have a Nexus 1000V crash.
sbin # net-dvs -d -n "60 be 37 50 8f 76 69 52-b3 f2 1d 00 2c 5c 8b 85"
ioctl failed: 16
Operation failed: status = 0xbad000
This data is created when the host is added to the DVS. When the host was added to the old DVS, it received the appropriate VC data.
If a database backup was taken, I believe restoring it should solve this issue since it would match the IDs from the VC database and would avoid having to give a "new" DVS ID to the object.
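If you want to look at the persisted copy of that data on the host itself, it is kept in /etc/vmware/dvsdata.db. Something along these lines should dump it (I believe net-dvs can read the file directly, but treat the exact option as an assumption and check it against your build):

# Dump the on-disk copy of the host's DVS cache
net-dvs -f /etc/vmware/dvsdata.db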
I'll touch base with a few folks and loop back around with additional details.
Let me understand this correctly: when I add a host to the DVS/Nexus 1000V, it creates a local database cache on the respective ESX/ESXi host at /etc/vmware/dvsdata.db. I think this database is used in a vCenter-down scenario. When I bring up the new vCenter and add the DVS/Nexus 1000V to it, I am assuming it assigns a new UUID to the DVS, but all the ESX hosts have a cached entry for the old UUID in their database, so when I try to add an ESX host into the vCenter DVS/Nexus 1000V, it gives me this error.
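To confirm that theory I was planning to compare the two IDs, roughly like this (the MOB path is from memory, so treat it as an assumption):

# On the ESX host: the cached switch ID shows up in the header of the output
net-dvs -l

# In vCenter: the DVS object exposes a "uuid" property in the MOB
# (https://<vcenter>/mob, browse to the dvSwitch object and check its uuid)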
I am thinking you should be able to change the UUID for the DVS in the vCenter MOB. If not, then I will end up uninstalling and reinstalling the VEM on each ESX host (in my case it's only a few, so not a worry).
I just read this post while researching the steps for building a brand new VC.
Basically our VC is totally stuffed and the VC DB is totally stuffed as well, so we need to do a clean build of a new VC. Up until now it seemed to be straightforward, then I read this post and now it is more complicated.
If I understand it correctly, once the hosts are added to the new VC I am not going to be able to add them back to the new N1K DVS without reinstalling the VEMs on each of the ESXi 4.0 hosts we have.
So the issue is that while the VEM is uninstalled and then reinstalled, any guest servers running on the host will be off the network...
The only way I can think of doing this currently is to migrate the hosts and DVS off the old VC to a temporary VC (I have to reuse the old VC server as the new one), then rebuild the faulty VC and migrate the DVS and hosts back to the newly built VC. When I say migrate the DVS, I mean build a new N1K switch, so there would be one N1K connected to the old VC and a new N1K connected to the temporary VC. I would then move the guests off a host, disconnect the host from the old VC's DVS, connect it to the DVS on the temp VC, and move the guests onto the moved hosts. After the rebuild I would do the same thing in reverse.
Hope that makes sense; any ideas, comments, or other solutions are welcome...
If I understood your proposed process correctly, this won't work unfortunately. The limitation is that a given host can only be added to a single N1k DVS.
The best solution would be to recover from a saved backup of the VC database if available.
One additional thing is that moving between VCs doesn't require a reinstall of the VEM. It would however require that the VEM is "detached" from other DVS entities. This can be checked by doing a "net-dvs -l" on the host.
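As a rough sketch of that check (the output format varies by build):

# Before adding the host to the DVS on VC2, confirm no stale switch entries remain
net-dvs -l

# If the old switch still shows up here, remove the host from the DVS on VC1 first
# and re-run the command to confirm the entry is gone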
Basically, removing the host from the DVS on VC1 and adding it to the DVS on VC2 would do the trick. (Assuming the N1k versions are the same on both VCs).