626 Views · 0 Helpful · 9 Replies

CUCM V8.6 - V11.X upgrade failure

Paul Austin
Level 4

Hi Folks, I'm trying to upgrade a cluster from 8.6 to v11, but the process fails in the post-install phase with no indication of why. I have loaded all of the required COP files, rebooted the node, and tried to change the VM settings in VMware (RAM and disk space only).

However, I cannot select the second disk in the VM to change its size to 110 GB; the option is greyed out. I presumed the COP file ciscocm.vmware-disk-size-reallocation-1.9.cop.sg would allow me to do that, and I can see that it has installed after issuing show version active on the 8.6 server.
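For reference, the platform CLI is a quick way to confirm what is installed on each partition (a sketch; the version strings here are placeholders and the exact output layout varies by release):

```
admin:show version active
Active Master Version: 8.6.2.xxxxx-x
Active Version Installed Software Options:
ciscocm.vmware-disk-size-reallocation-1.9.cop

admin:show version inactive
Inactive Master Version: <previous or pending version>
```

If the reallocation COP does not appear under the active partition's installed options, the disk resize option will typically stay greyed out in vSphere.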

So, question: any ideas on what I haven't done right, since I'm not getting this option?

Do I need to install the COP files on ALL servers before migrating the individual nodes? I was going to upgrade them one by one to the inactive partition but NOT switch versions yet.
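The one-by-one, no-switch approach described above can be driven per node from the platform CLI. A hedged sketch (the installer prompts you for the SFTP/ISO details, which are omitted here):

```
admin:utils system upgrade initiate
  ; follow the prompts to point at the server holding the ISO,
  ; and answer "no" when asked to switch versions after a successful install

admin:show version inactive
  ; confirm the 11.x build landed on the inactive partition

admin:utils system switch-version
  ; run later, node by node, when you are ready to cut over
```

This keeps the running 8.6 partition untouched until every node is staged.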

Looking at the release notes again, I notice that they ask for the subnet mask to be changed to 255.255.255.0. Won't this cause confusion, and is it absolutely necessary?

Would doing it with Cisco Prime Collaboration Deployment (PCD) be much easier?

Thanks.

9 Replies

Manish Gogna
Cisco Employee

Hi Paul,

You may try reinstalling the disk size reallocation COP file. The following short video provides the exact steps for upgrading CUCM and IM&P from version 8.6 to 11.x:

https://supportforums.cisco.com/video/12724161/how-upgrade-cucmcups-86-cucmimp-110

HTH

Manish

Hi Manish, yes, I will have to do that. Jamie is my saviour when it comes to explaining things in a nice, straightforward way.

Paul,

What I can recommend is going through this presentation from Cisco Live: "Best Practices Migrating Previous_11_BRKUCC-2011.pdf" (attached).

Leszek

First time I've been called saviour LOL

You should be able to change the HDD as I show in the video. PCD would not really make an upgrade easier; it's going to be 90% the same. The only difference is that PCD will load the ISO to the VM and kick-start the upgrade process.

If you're doing a migration, yes, it would help. But if you're going to use the same hostname/IP, you'd need to stage a lab in which to do it, which also takes time and effort.

HTH

java

if this helps, please rate

Hi Jamie, I am using the same IP addresses and hostnames as the v8.6 servers. This cluster is already on UCS servers. I'm a bit confused as to why I would need to do this in a lab. Can't I just upgrade to the inactive partition, or would it be best to define new blank VMs and migrate to them?

Regards.

I was referring to using PCD for a migration, not for a regular in-place upgrade.

HTH

java

if this helps, please rate

Hi Jamie, I managed to solve the hard disk change error: the customer had left snapshots on the VMware VM :( . It still fails the upgrade, but one question: the guide says to change the servers to a 24-bit mask. Is this really necessary?

thanks

Paul

Explain to the customer that snapshots are not supported on CUCM; they could have avoided a headache by not taking them.

I believe the subnet mask note is related to these bugs:

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCub72346

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCtt17619

The root problem is that you *could* configure the mask as 255.255.255.000, and that was a valid entry in RHEL 4, but in RHEL 5+ it's no longer valid and will cause trouble. I don't think it's necessarily telling you that you need to change the mask to a /24; I use a /25 in my lab and have never had a problem, but I cannot input 000 in the last octet.
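To illustrate the parsing point (this is not Cisco's code, just a sketch of a stricter octet rule like the one described above): a parser that rejects leading zeros accepts 255.255.255.0 and a /25 mask, but not 255.255.255.000.

```python
import re

# Strict dotted-quad octet: 0-255 with no leading zeros, so "000" is rejected.
OCTET = re.compile(r"^(0|[1-9]\d{0,2})$")

def valid_mask(mask: str) -> bool:
    """Return True if mask is a strictly formatted, contiguous IPv4 netmask."""
    parts = mask.split(".")
    if len(parts) != 4:
        return False
    if not all(OCTET.match(p) and int(p) <= 255 for p in parts):
        return False
    # A netmask must be a contiguous run of 1-bits followed by 0-bits.
    bits = int.from_bytes(bytes(int(p) for p in parts), "big")
    return bits == 0 or (bits | (bits - 1)) & 0xFFFFFFFF == 0xFFFFFFFF

print(valid_mask("255.255.255.0"))    # /24, accepted
print(valid_mask("255.255.255.128"))  # /25, also accepted
print(valid_mask("255.255.255.000"))  # leading zeros, rejected
```

The point is that the /24 requirement in the guide is really about the "000" formatting, not about forcing every network onto a 24-bit mask.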

HTH

java

if this helps, please rate

Thanks for that, Jaime. I will review and hopefully upgrade this week.

Paul