vManage: not enough space for updating

dijix1990
VIP

Can't upgrade vManage from 20.9.1 to 20.9.2.1.

[1-Mar-2023 6:03:11 MSK] Device: Failed to install: Signature verification Suceeded.
Signature verification Suceeded.
Signature verification Suceeded.
error-reason 'ERROR: Error: Disk space is not enough to install new image. Please set current software version as the default version and remove unused software versions first'
[1-Mar-2023 6:03:16 MSK] Failed to upgrade device.
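
(The error text itself points at the usual workaround: mark the running version as the default, then delete any unused installed images to free space on the partition. A minimal sketch of that sequence from the vManage CLI, assuming the stock Viptela commands; run "show software" first to see which versions are installed, and note that <unused-version> is a placeholder for whichever image is neither active nor default:

vManage# request software set-default 20.9.1
vManage# request software remove <unused-version>

As the screenshot further down shows, this doesn't help when only one version is installed and the disk itself is simply too small.)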


11 Replies

dijix1990
VIP

There is no other version shown as available.

[screenshot: installed software versions list]

dijix1990
VIP

How can I increase the disk space?

dijix1990
VIP

Awful product; I can't increase it.

What size did you give to vManage in the initial setup?

HTH,
Please rate and mark as an accepted solution if you have found any of the information provided useful.

csco10260962
Level 1

Initially, the compute guys rolled out vManage 20.4.x, I believe, with a 100 GB HD. Later on, with further upgrades, we adjusted to a 500 GB HD and upped memory to 64 GB and 32 vCPUs instead of 16 vCPUs and 32 GB of memory. Now the only issue is upgrading from 20.7.2 to 20.9.x. After installation and reboot, vManage tries to revert to the older version.

Which 20.9.x version are you trying to upgrade to? It seems 20.9.2 has problems; use 20.9.3 or 20.9.2.1. Check the release notes for the initial checks, diagnostics, and caveats.

https://www.cisco.com/c/en/us/td/docs/routers/sdwan/release/notes/controllers-20-9/rel-notes-controllers-20-9.html#vmanage_upgrade_paths_20_9

HTH,
Please rate and mark as an accepted solution if you have found any of the information provided useful.

I tried 20.9.3. Maybe the installation timeout should be adjusted under Settings in vManage, but I will have to try.

Haven't heard back from TAC yet; I've got a case open for this. A direct upgrade should be supported according to the release notes.

cloudlogics
Level 1

Hi all - I have a similar issue where I need to increase the disk space for my on-prem vManage. Is anyone aware of a formal document/process? This link doesn't appear to be working anymore:

https://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/sd-wan/white-paper-c11-741440.html#Expandingthedatadiskpartition

Thanks

cloudlogics
Level 1

In case someone else is looking to increase the disk space on an on-prem vManage, these are the steps I followed:

  1. On the vManage CLI, run “request nms all stop” to stop all services.
  2. Increase the disk size on the VM in vCenter or whatever hypervisor you are using (a KVM example is sketched after this list).
  3. On the vManage CLI, run “request system resize-data-partition”. This will automatically expand the data partition to the new size.
  4. On the vManage CLI, run “request nms all start”.
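
For step 2 on a KVM/QEMU host, a minimal sketch of the hypervisor side, assuming the vManage data disk is a qcow2 file and the VM is shut down first. The file name virtiob.qcow2 is only an example (it matches the transcript below), and 500G is an arbitrary target size; substitute your own path and size:

# qemu-img info virtiob.qcow2          <- confirm the current virtual size
# qemu-img resize virtiob.qcow2 500G   <- grow the virtual disk

After powering the VM back on, steps 3 and 4 expand the data partition into the newly added space.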

joaobeja93
Level 1

It didn't work. Any thoughts? Thanks!
/opt/unetlab/addons/qemu/vtmgmt-20.13.1# qemu-img info virtiob.qcow2
image: virtiob.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 200 KiB
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

vManage1# request nms all stop
Successfully stopped NMS SDAVC server
Successfully stopped NMS CloudAgent v2
Successfully stopped NMS cloud agent
Successfully stopped NMS service proxy
Successfully stopped NMS service proxy rate limit
Successfully stopped NMS application server
Successfully stopped NMS data collection agent
Successfully stopped NMS messaging server
Successfully stopped NMS coordination server
Successfully stopped NMS configuration database
Successfully stopped NMS statistics database
Successfully stopped vManage Device Data Collector
Successfully stopped NMS OLAP database
Successfully stopped vManage Reporting
vManage1# request system resize-data-partition
Stopping Container Manager
ok: down: container-manager: 0s
resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 13107200 (4k) blocks long. Nothing to do!

Resizing of data partition completed
Starting Container Manager Again
timeout: run: container-manager: (pid 23610) 7s, normally down
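
One thing stands out in the output above: qemu-img info still reports "virtual size: 100 GiB", so if the intent was to grow past 100 GiB, the resize at the hypervisor level never took effect (or was applied to a different file; the tiny 200 KiB allocated size also suggests this may not be the disk the VM is actually writing to). That would explain why resize2fs answers "Nothing to do!". A hedged sketch of the fix, with the VM powered off and 500G as an arbitrary example size:

/opt/unetlab/addons/qemu/vtmgmt-20.13.1# qemu-img resize virtiob.qcow2 500G

Then boot vManage and rerun "request system resize-data-partition".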