PSOD with ESXi 6.0 and Nexus 1000V

ffacilities
Level 1

We're using the Cisco Nexus 1000V distributed vSwitch with a cluster of 13 hosts. We're on the latest VSM version [5.2(1)SV3(1.15)], but if we update the VEMs beyond 5.2(1)SV3(1.4), our hosts crash under high network load with the following PSOD:

#PF Exception 14 in world 33309:memMapKernel IP 0x418039b226fa addr 0x70
PTEs:0x805bb7f023;0x805bb80023;0x805bb81023;0x0;
cr0=0x8001003d cr2=0x70 cr3=0x78f16000 cr4=0x216c
frame=0x439150e9b0c0 ip=0x418039b226fa err=0 rflags=0x10246
rax=0x0 rbx=0x4303bdb30600 rcx=0x0
rdx=0x439150e9b1dc rbp=0x1 rsi=0x439150e9b190
rdi=0x4303bdb30600 r8=0x0 r9=0x0
r10=0x43141f08e280 r11=0x0 r12=0x0
r13=0x4300d69d0ab0 r14=0x0 r15=0x4300d69d0aa8
*PCPU65:33309/memMapKernel-65
PCPU  0: UUSUUUSHUUSSSUUSSUUSSUUSSSSSUUUUUUUSUUSVSVSVSVSSSSSUSUUSUSSSSUSSU
PCPU 64: SSSHUSUSSH
Code start: 0x418039a00000 VMK uptime: 0:00:36:12.914
0x439150e9b188:[0x418039b226fa]PktList_SplitByUplinkPort@vmkernel#nover+0x6 stack: 0x0
0x439150e9b190:[0x418039b2285a]PktListIOCompleteInt@vmkernel#nover+0x106 stack: 0x0
0x439150e9b200:[0x418039b35a91]Portset_ProcessAllDeferred@vmkernel#nover+0x39 stack: 0x4300d69d2a40
0x439150e9b220:[0x418039b37c5f]Portset_ReleasePort@vmkernel#nover+0xbb stack: 0x4300d69d0b00
0x439150e9b240:[0x418039b6b5da]NetWorldletPerVMCB@vmkernel#nover+0x10a stack: 0x4300d69ceac0
0x439150e9b2b0:[0x418039abfd14]WorldletBHHandler@vmkernel#nover+0xe0 stack: 0x91000000ef
0x439150e9b410:[0x418039a32eed]BH_Check@vmkernel#nover+0xe1 stack: 0x417ff9af3a08
0x439150e9b480:[0x418039c0f8c2]CpuSchedIdleLoopInt@vmkernel#nover+0x182 stack: 0x20000000
0x439150e9b500:[0x418039c1318d]CpuSchedDispatch@vmkernel#nover+0x16b5 stack: 0xffffff0001941e
0x439150e9b620:[0x418039c13d54]CpuSchedWait@vmkernel#nover+0x240 stack: 0x0
0x439150e9b6a0:[0x418039c14095]CpuSchedTimedWaitInt@vmkernel#nover+0xc9 stack: 0x2001
0x439150e9b720:[0x418039c14166]CpuSched_TimedWait@vmkernel#nover+0x36 stack: 0x430650ffe080
0x439150e9b740:[0x418039a18f88]PageCacheAdjustSize@vmkernel#nover+0x344 stack: 0x0
0x439150e9bfd00:[0x418039c149ee]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0
base fs=0x0 gs=0x418050400000 Kgs=0x0
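To head off the obvious question about a partially completed upgrade: here's a rough sketch of one way to confirm which VEM VIB each host actually ended up with. The host names are placeholders, and it assumes SSH is enabled on the hosts with key-based root login.

#!/usr/bin/env python3
"""Rough sketch: report the Cisco VEM VIB installed on each host.

Host names are placeholders; assumes SSH is enabled on the hosts
with key-based root login.
"""
import subprocess

HOSTS = ["esx01.example.com", "esx02.example.com"]  # placeholder names

for host in HOSTS:
    # 'esxcli software vib list' prints every installed VIB; the
    # Nexus 1000V VEM shows up as a 'cisco-vem-*' entry.
    out = subprocess.run(
        ["ssh", f"root@{host}", "esxcli software vib list"],
        capture_output=True, text=True, check=True,
    ).stdout
    vem = [line for line in out.splitlines() if "cisco-vem" in line.lower()]
    print(host, vem or ["no VEM VIB found"])

The VSM side can be cross-checked with 'show module' on the supervisor, which lists the software version each VEM module is running.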


We're currently on ESXi 6.0 Update 2 build 3825889, but we've seen this same behaviour since ESXi 5.5 U1. Has anyone seen anything similar, with or without the Nexus 1000V? In case it's relevant, all of our hosts have a pair of Intel X520 10Gb NICs attached to the N1000V, plus a pair of I350 1Gb NICs attached to a separate VMware DVS. The servers are a mix of Dell PowerEdge R720s with dual Xeon E5-2690 v2, R730s with dual Xeon E5-2697 v3, and R630s with Xeon E5-2697 v4 - the same problem occurs on all three hardware platforms, across three different generations of CPU.
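If anyone wants to try to reproduce the "high network load" trigger, a simple way to push sustained traffic through the VEM uplink path is an iperf3 run between guests on different hosts. This is only a sketch: it assumes iperf3 is installed in both guests, a server is already listening ('iperf3 -s') in one of them, and the target address is a placeholder.

#!/usr/bin/env python3
"""Sketch: drive sustained traffic through the 10Gb uplinks.

Assumes iperf3 is installed in two guests on different hosts, a
server is already listening ('iperf3 -s') in one of them, and the
target address below is a placeholder.
"""
import subprocess

TARGET = "10.0.0.42"  # placeholder: iperf3 server in a guest on another host

# Eight parallel streams for ten minutes - typically enough to
# saturate a 10Gb uplink and exercise the VEM's uplink packet path.
subprocess.run(
    ["iperf3", "-c", TARGET, "-P", "8", "-t", "600"],
    check=True,
)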

0 Replies