
ISE application initialization

ammahend
VIP

We are noticing that during high-authentication-volume hours the ISE application crashes and the application status goes to an initializing state; after some time, maybe an hour, it recovers on its own and goes back to the running state. Wondering if somebody else is experiencing a similar issue with ISE 3.1 patch 3. During this time, obviously, all authentications fail. We upgraded to patch 4 and are waiting to see any improvement.

It’s a 12-node deployment with dedicated PAN and MnT nodes and 8 PSNs.

-hope this helps-

7 Replies

marce1000
VIP

 

 - Do you have virtual ISE nodes or appliances? For VMs, follow up on performance with the hypervisor monitoring tools and, if needed, increase resources such as CPU and memory.

 M.



-- 'Good body every evening': this sentence was once spotted on a logo at the entrance of a Weight Watchers Club!
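
As a starting point for that check, here is a minimal sketch using pyVmomi that reads a VM's CPU and memory reservation and limit. This is only an illustration: it assumes vCenter access, and the vCenter host, credentials, and ISE VM name below are placeholders.

```python
# Minimal sketch (pyVmomi assumed installed): read a VM's CPU/memory
# reservation and limit so they can be compared against ISE sizing guidance.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"                          # placeholder
USER, PWD = "administrator@vsphere.local", "password"    # placeholders
VM_NAME = "ise-psn-01"                                   # placeholder ISE PSN VM name

ctx = ssl._create_unverified_context()   # lab only; validate certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == VM_NAME:
            cpu = vm.config.cpuAllocation
            mem = vm.config.memoryAllocation
            print(f"{vm.name}: CPU reservation={cpu.reservation} MHz, "
                  f"limit={cpu.limit} (-1 = unlimited)")
            print(f"{vm.name}: MEM reservation={mem.reservation} MB, "
                  f"limit={mem.limit} (-1 = unlimited)")
    view.Destroy()
finally:
    Disconnect(si)
```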

VMs; scale supports 40k per PSN with the current resource reservation. I think it’s a bug; Cisco TAC couldn’t figure it out, and we are trying to escalate to the BU at this point.

-hope this helps-

 

 - You may try show logging system ade/ADE.log; use this particular command when the high authentication volume occurs at regular intervals (if time permits, check it for related info).

 M.



-- 'Good body every evening': this sentence was once spotted on a logo at the entrance of a Weight Watchers Club!
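
If catching the busy window manually is impractical, a scheduled collection can help. The sketch below is only an assumption-laden example: it uses the paramiko library, assumes SSH access to the ISE admin CLI and that the CLI accepts a non-interactive command (otherwise an interactive shell would be needed), and the hostname and credentials are placeholders.

```python
# Minimal sketch (paramiko assumed installed): run the ISE CLI command
# 'show logging system ade/ADE.log' remotely and save the output with a
# timestamp, so samples can be taken during the high-auth-volume window.
import datetime
import paramiko

PSN_HOST = "ise-psn-01.example.com"   # placeholder
USER, PWD = "admin", "password"       # placeholders

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(PSN_HOST, username=USER, password=PWD, look_for_keys=False)
try:
    # Assumes the ISE CLI honors a non-interactive exec command.
    stdin, stdout, stderr = client.exec_command("show logging system ade/ADE.log")
    data = stdout.read().decode(errors="replace")
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    with open(f"ade-log-{PSN_HOST}-{stamp}.txt", "w") as f:
        f.write(data)
    print(f"saved {len(data)} bytes")
finally:
    client.close()
```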

Yes, we are.

-hope this helps-

Greg Gibbs
Cisco Employee

You might see if the conditions for this bug are relevant. This is the only condition under which I've seen instability so far in ISE 3.1.

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd41773 


Just to let everyone know, the issue was with the CPU resource reservation. At the end of the day, this was nothing more than miscommunication between the network and server teams: the reservation was set to 14000, along with a limit.

The reservation was changed to 16000 and the CPU limit was set to unlimited, and the issue was resolved. In addition, we also made some changes on ISE and Meraki:

We disabled Endpoint Owner Directory and the Profiler Forwarder Persistence Queue, and changed the Meraki interim update to every 3 hours compared to every 10 minutes (the default).

Thanks for your inputs.

-hope this helps-
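
For reference, a change like the one described above (raising the CPU reservation and removing the CPU limit) can also be applied with pyVmomi. This is only a sketch: the 16000 figure simply mirrors the value quoted in the post, the correct reservation should come from the ISE sizing guidance for your VM profile, and the vCenter host, credentials, and VM name are placeholders.

```python
# Minimal sketch (pyVmomi assumed): raise a VM's CPU reservation and set the
# CPU limit to unlimited (-1), mirroring the fix described in this thread.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "password"
VM_NAME = "ise-psn-01"            # placeholder
NEW_RESERVATION_MHZ = 16000       # value quoted in the post; verify against sizing docs

ctx = ssl._create_unverified_context()   # lab only; validate certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)
    view.Destroy()

    # reservation in MHz; limit of -1 means unlimited
    alloc = vim.ResourceAllocationInfo(reservation=NEW_RESERVATION_MHZ, limit=-1)
    spec = vim.vm.ConfigSpec(cpuAllocation=alloc)
    task = vm.ReconfigVM_Task(spec=spec)
    print(f"reconfigure task submitted for {VM_NAME}: {task.info.key}")
finally:
    Disconnect(si)
```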