Announcement: Cisco Communities NOT Affected by Heartbleed Vulnerability

In the past year that I’ve been working on UCS Central, I’ve seen the software take several big steps forward. Today, I’m happy to announce that UCS Central 1.1(2a) has been released and is available for download.


In the past year, I’ve received a lot of feedback from customers using UCS Central, and much of that feedback is reflected in this release. The first thing most customers asked about was importing policies from existing UCS Manager instances. UCS Central 1.1(2a) can import policies and service profile templates from UCS Manager version 2.2(1b) and newer, and it can find and browse policies and service profile templates in a read-only manner from the UCS Manager 2.1(2a) and 2.1(3a) releases. While this sounds like a simple thing to implement (you just need a search function to find the policy and then import the XML), our goal was to make sure there were no unintended consequences, and performing an impact estimate on a service profile template and all of its dependent policies across multiple domains is a little more challenging. The team has delivered something very powerful, along with guidance for using the feature safely.


Another UCS feature that I found was being used by many customers was the scheduled backups of UCS Central and the registered UCS Manager instances. We’ve enhanced this functionality to back up the files to UCS Central and then put a copy of that file on a remote file share, a key request from customers using this functionality.

The statistics collection and reporting in UCS Central 1.1(2a) have been greatly enhanced. UCS Central now supports Microsoft SQL Server as an external statistics database in addition to PostgreSQL and Oracle. UCS Central has collected bandwidth, power, temperature, and fan speed statistics through UCS Manager since the last release, and it stores the statistics for 14 days with the internal database or for a year or more with an external database. UCS Central 1.1(2a) now has built-in reports for power, temperature, and fan speed in addition to the existing reports for bandwidth. All of these reports can be viewed for an individual server, a chassis, a domain, or even compared across domains as appropriate. These reports will be a great help to customers trying to size and monitor their environments.


Also based on customer feedback, we’ve enhanced UCS Central to include remote actions such as server power on/off/reset, locator LED on/off, and chassis acknowledgement and decommission. ID allocation in UCS Central is now sequential instead of using the UCS default mechanism, which was difficult to predict. And, based on customer input around security, UCS Central 1.1(2a) can now work with third-party certificates.


In addition to the many features added based on customer feedback, there are some other enhancements in this release of UCS Central. They include:

  • VLAN and VSAN localization
  • A separate tab for UCS Central policies
  • Authentication Domain selection

UCS Central 1.1(2a) supports all versions of UCS Manager from 2.1(2a) and later. However, some functionality, such as policy import, might only be available when UCS Central is working with domains that have newer versions of UCS Manager.


If you already have UCS Central up and running, the easiest way to upgrade is to download the ISO image, reboot the UCS Central virtual machine and boot from the ISO, and run the upgrade option; UCS Central will be back up and running in about 5 to 10 minutes. If you are doing a new deployment of UCS Central, it might be easier to download the OVA file and import the UCS Central virtual machine.


For more information, you can take a look at the UCS Central web page or view the content here on Communities, including the UCS Tech Talk on UCS Central 1.1(2a). Let us know how you are using UCS Central in your environment and what features or enhancements you would like to see.


Jacob Van Ewyk

UCS Management product manager

Skyline-ATS has just announced a new course: “OpenStack Cloud Deployment on UCS”.

Description: The OpenStack Cloud Deployment on Cisco UCS (OCDCU) v1.0 is a three-day instructor-led training course designed for Systems and Field Engineers, Consulting Systems Engineers, Technical Solutions Architects, Integrators, and Partners who are responsible for the installation, configuration, and implementation of an OpenStack Cloud. This course covers the key components and procedures needed to install, configure, and deploy an OpenStack Cloud using Cisco Unified Computing System (UCS) hardware. Students will perform hands-on lab exercises on Cisco UCS hardware, created to build the skills needed to install, configure, and deploy an OpenStack Cloud.

Objectives: Upon completing this course, the learner will be able to meet these overall objectives:

  • Describe the basic business advantages of an OpenStack Cloud
  • Describe the basic function of an OpenStack Cloud
  • Describe Cisco’s UCS Accelerator Paks for OpenStack deployments
  • Describe the services in OpenStack (Keystone, Nova, Glance, Cinder, Neutron, Swift, and Horizon)
  • Describe additional services of an OpenStack Cloud
  • Install the appropriate services for an OpenStack Cloud
  • Create Virtual Machines using OpenStack

Complete outline:

For questions / enrollment, please contact David Darrough, Account Manager, 408-340-8028

Hello my fellow admins,
Just wanted to let you know that if you get the error message “Error loading /tools.t00 Fatal error: 10 (Out of resources)” when trying to install VMware’s ESXi 5.5 hypervisor on a Cisco UCS C240 M3 SFF rack server with one or more NVIDIA GRID GPUs in it, follow these steps to fix it:


1. Go to System BIOS (Press F2)
2. PCI configuration > MMCFG
3. Change the value from Auto to 2 GB
4. Change the value of Memory Mapped IO above 4 G to Enabled
5. Save and reboot the system
6. Installation will complete



Hopefully these steps let you solve this issue on your own, unlike me: it took a couple of days, even with help from my Cisco technical friends :)

Apparently, the error cannot be reproduced on the same system without GPUs, or on ESXi 5.1. Cisco told me that they’re working with VMware on this one, and hopefully they will come up with a more elegant way of getting these systems up and running with ease.

Good luck!




Asked by Bill Shields (Cisco) to post it in the Cisco Communities. Original post at Error loading /tools.t00 Fatal error: 10 (Out of resources) – Cisco UCS C240 M3 server with ESXi 5.5 | a blog for everyt…


“UCSQL”. Rolls right off your tongue, doesn’t it? It puts “UCS” in a relational context (i.e., “SQL”), which is exactly where it belongs. The deeper you understand UCS Manager, the more this makes sense. Want to get more utility out of and understand more about UCS Manager? Great! Here’s a new tool to help you.


For the past 5 years, I’ve made it my mission to help people better understand the Crown Jewels of the Cisco Unified Data Center Solution: UCS Manager and the UCS Management Model. Arguably one of the most valuable aspects is the inherent “programmability”: all of the server infrastructure and connectivity can be programmed using the XML/API. I’ve been a strong advocate for customers taking full advantage of this capability to automate workflows and operations as much as possible. And as the author of the UCS Central Best Practice Guide, I wanted to encourage the same level of programmability for UCS Central. For example: “How can I script global inventory collection across all domains?”, “How can I programmatically get a list of all the backup files, so that I can copy them offline?”, or “I want a script that shows me all service profiles in all domains, along with any associated blades.”


Both UCSM and UCS Central are “relational” engines. But until now, there has been no classic “relational” access method like SQL, which many people already know. A traditional database engine maintains data dictionaries that tell you the meta-data about its tables, such as the column names. UCSM and UCS Central aren’t classic database engines: instead, the meta-data is kept only in the code-generated schema files, which are produced when UCS Manager is built by Cisco Engineering (and then used as the input for creating UCS PowerTool, the UCS Python SDK, etc.). You can see the meta-data dynamically through “visore” or through the UCS Platform Emulator, but you can’t see it in an interactive, scriptable manner.

The main UCS XML/API operation for “querying” objects (“configResolveClass”) is opaque: without leveraging the schema files, the UCS Data Management Engine (DME) will not tell you the attribute names in advance; it will simply return all attributes and values.
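To make the opacity concrete, here is a minimal Python sketch (not part of ucsql; the /nuova endpoint path and the configResolveClass method are the standard XML/API conventions, while the cookie value and the sample response below are placeholders invented for illustration). The request names only a class; the attribute names become visible only after the response comes back:

```python
# Minimal sketch of a UCS XML/API "query" request and response.
# The configResolveClass method is the standard XML API query; the
# cookie is a placeholder you would obtain from a prior aaaLogin call.

import xml.etree.ElementTree as ET

def build_class_query(cookie: str, class_id: str) -> str:
    """Build the XML body for a configResolveClass request.

    Note that the request can only name a class; there is no way to
    ask which attributes exist or to request a subset of them.
    """
    el = ET.Element("configResolveClass", {
        "cookie": cookie,
        "classId": class_id,
        "inHierarchical": "false",
    })
    return ET.tostring(el, encoding="unicode")

# A made-up miniature of a DME reply: every attribute of every matching
# object comes back, which is what makes the API "opaque" without the
# schema files.
sample_response = """
<configResolveClass cookie="c" response="yes" classId="lsServer">
  <outConfigs>
    <lsServer dn="org-root/ls-web01" name="web01" assocState="associated"/>
  </outConfigs>
</configResolveClass>"""

def attribute_names(response_xml: str, class_id: str):
    """Discover attribute names only after the fact, from a response."""
    root = ET.fromstring(response_xml)
    return sorted({k for mo in root.iter(class_id) for k in mo.attrib})

print(build_class_query("placeholder-cookie", "lsServer"))
print(attribute_names(sample_response, "lsServer"))
```

In a real session the body would be POSTed to https://&lt;host&gt;/nuova, and the cookie would come from a prior aaaLogin request.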



Classic database engines have a “show” or “describe” operation to help you selectively choose which attributes to project in a “select” statement. So for UCS, how do you answer these questions in a scriptable, programmatic manner:

  • What are *all* the Class Names?
  • What are the attributes for a given Class?
  • What are the current values of specific Class attributes?
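To see why these questions are awkward without a purpose-built tool, note that the meta-data lives only in the code-generated XSD schema files mentioned above, so answering even the first two questions programmatically means parsing schema yourself. A minimal sketch (the schema fragment below is an invented miniature so the example is self-contained; a real run would load the full schema shipped with the UCS Platform Emulator or the SDKs):

```python
# Sketch: mining class names and attribute names from a UCS-style
# XSD schema. The inlined fragment is a made-up miniature for
# illustration, not the real UCS schema.

import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

SCHEMA_FRAGMENT = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="lsServer">
    <xs:attribute name="dn" type="xs:string"/>
    <xs:attribute name="name" type="xs:string"/>
  </xs:complexType>
  <xs:complexType name="vnicFc">
    <xs:attribute name="dn" type="xs:string"/>
    <xs:attribute name="addr" type="xs:string"/>
  </xs:complexType>
</xs:schema>"""

def class_names(schema_xml: str):
    """Answer 'what are *all* the Class Names?' from the schema."""
    root = ET.fromstring(schema_xml)
    return [ct.get("name") for ct in root.iter(XS + "complexType")]

def attributes_of(schema_xml: str, class_name: str):
    """Answer 'what are the attributes for a given Class?'"""
    root = ET.fromstring(schema_xml)
    for ct in root.iter(XS + "complexType"):
        if ct.get("name") == class_name:
            return [a.get("name") for a in ct.iter(XS + "attribute")]
    return []

print(class_names(SCHEMA_FRAGMENT))
print(attributes_of(SCHEMA_FRAGMENT, "vnicFc"))
```

The third question (current values) still requires a live configResolveClass query against the DME, which is exactly the gap a tool has to bridge.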

I could not find an appropriate tool, so I started making one myself. Given enough time flying to various UCS User Groups, I was able to construct a way to answer these questions and offer a more general, extensible framework: “UCSQL”, a Python-based scripting tool that accesses the UCS DME using SQL, translated over the XML/API.


Now, I can explore the UCS schema with statements such as “show All”, “show lsServer”, “show vnicFc”, or “show orgDomainGroup”; see all the UCS backup file locations in UCS Central with “select * from configBackup”; pre-provision my SAN configuration with “select dn, addr from vnicFc”; or pre-provision my IP mapping with “select dn, addr from vnicEther”. Several other examples and sample output are included.


And so can you, because “ucsql” has been released as a UCS Community Source Project. It’s yours. The source code is hosted and can be downloaded from GitHub here. Any questions? If so, just visit and post in the community at

A very interesting footnote: “ucsql” can be used against UCS Manager (single domain), UCS Central (many domains), or the UCS C-series XML/API [“What?!?”]. Yes, because all three share a common XML/API, and all three share a managed-object model that uses the same names for objects whenever possible. If you want a quick report on firmware versions or faults, it’s just “select * from firmwareRunning” or “select severity, descr, cause, created, type, dn from faultInst”. That works equally well when talking to UCS Central, UCS Manager, or UCS C-series. Really.
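A sketch of why one tool can serve all three (the hostnames are placeholders; firmwareRunning is the class named in the text): the payload is identical, and only the endpoint changes:

```python
# Sketch: the same classId query works against UCS Manager, UCS Central,
# or a C-Series CIMC, because all three expose the XML API on the same
# /nuova endpoint. Hostnames below are placeholders, and the cookie
# would come from a prior aaaLogin call.

def query_url(host: str) -> str:
    """Every UCS XML/API endpoint listens at /nuova."""
    return "https://%s/nuova" % host

def firmware_query(cookie: str) -> str:
    """The identical query body is valid for all three managers."""
    return ('<configResolveClass cookie="%s" classId="firmwareRunning" '
            'inHierarchical="false"/>' % cookie)

for host in ("ucsm.example.com", "central.example.com", "cimc.example.com"):
    # Same payload, different endpoint: the shared object model is what
    # lets one tool talk to all three.
    print(query_url(host), firmware_query("placeholder-cookie"))
```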


Sound interesting? Hopefully, it’s just the beginning. There’s a lot more functionality that *could* be added, but it needs a development community. That’s you. Here’s a great opportunity to define, engage, and participate directly in developing a new UCS Management access tool that you get to use to your own advantage. And that’s the bottom line for “ucsql” and UCS in general: greater utility for you.


UCS Manager vs. HP OneView

Posted by kevegan Jan 24, 2014

From Cisco Data Center Blog: UCS Manager vs. HP OneView


HP introduced the world to their OneView management appliance by comparing it to Cisco UCS Manager through a series of YouTube video attacks this past fall. I can almost hear the meetings… 'Forget stealth, forget the high road – let’s attack the leader in converged systems management directly – let’s attack Cisco UCS Manager!' While we can’t help but respect HP’s gumption in attempting to pick on their #1 competitor, our respect turned to dismay as HP continues to miss the boat on how UCS management truly works. Rather than respond with feigned outrage, we patiently waited for HP to release OneView so we could take our own test drive.


What we found is that HP OneView continues a pattern that we see with legacy-inspired, top-down software managers. There is no ability for policies to adapt to changing configurations or server types. No ability for profiles to cross server generations or even to create policies for rack servers. No ability to truly create templates and resource pools for automated assignment. It is visually appealing, but remains a licensed software stack working hard to try and correct the limitations of an aging platform.


UCS was developed around UCS Manager as the centralized nervous system to automate server provisioning through flexible and secure policy-based management. UCS model-based architecture is embedded in the fabric, allowing users to define policies for their applications – not for the hardware itself. It is not software sitting on an appliance that ‘fires and forgets’ scripted commands down to waiting end points. UCS Manager provides extreme flexibility to respond to business needs because there aren’t restrictions on the policies – any profile can be applied to any blade or rack server type or generation, regardless of configuration. It is the foundation for stateless computing, where infrastructure is configured on demand for any workload.


Cisco compares a handful of key management features to HP OneView in the following short videos:


Video 1: Enterprise Role-Based Access & Multi-Tenancy


Video 2: Managing Profiles Across Server Types and Generations


Video 3: FW Management, Policy and Deployment

Video 4: Profile Automation, Templates and Resource Pools

Video 5: Stateless Migrations and Upgrades -- Policy Adaptation


With nearly 30,000 unique UCS customers around the world experiencing Unified Management, Cisco has revolutionized systems management with UCS Manager. Cisco embraces competition from HP in converged management and looks forward to continuing innovation and leadership.


Cisco UCS Communities:


New to UCS Manager? Download the free UCS Platform Emulator (UCS PE):

We are pleased to announce the release of Cisco UCS Director 4.1. This release is now available for download.

Links to download this release are as follows:


Relevant documentation for Cisco UCSD 4.1 release can be accessed as follows:

Refer to this link for all documentation: Cisco UCS Director Documentation

The Cisco UCS Director 4.1 release delivers a new scalable architecture/deployment model, several key features, major enhancements, Open Automation, an SDK, and bug fixes. The following is a complete listing of the features in the UCSD 4.1 release:


Platform Enhancements Include:


New architecture/deployment model for increased scalability

  • Multi-node architecture with one primary node, one or more service nodes to offload system tasks, one node for a remote inventory DB, and one node for a remote monitoring DB.
  • Increased scalability to support up to 50,000 VMs and 5000 physical devices
  • VM Parameter Monitoring Policies
  • System tasks remote execution, enhancements, Service Node policies


Custom Workflow Orchestration tasks

  • Provides ability to add new Workflow Tasks with well-defined inputs & outputs
  • Implementation of the logic is done through a script
  • Custom Tasks can be managed through
  • Once a custom task is properly created, there is no inherent difference in behavior between ‘Built-In Tasks’ and ‘Custom Tasks’.
    • In 4.1, Cloupia Script (JavaScript + UCSD Libraries) is supported.
    • Future releases will support PowerShell

  • Custom Tasks are exportable & importable among systems

    • Policies -> Orchestration -> Custom Tasks


Orchestration Enhancements

  • Display all Workflow inputs/outputs in Service Request to admin user
  • Display/Edit Workflow input/outputs at the time of Service Request approval for admin user.
  • Ability to set Workflow user inputs as mandatory or optional.
  • Ability to resubmit a failed or cancelled Service Request with inputs changed (Workflow user inputs and Workflow tasks input/outputs)
  • Ability to create/cross launch workflow inputs while adding/modifying tasks.


UCSD Components for Customization and Integration


  • Open Automation supports following features
    • Register Feature Controller 
    • Add Tasks to System Scheduler
    • Register Connectors
    • Converged Stack Builder
    • Resource Computers (Group Resource Limits)
    • Stack View Provider
    • CloudSense™
    • CMDB
    • Open Automation Developer Guide

    • Examples for all supported features

    • Java docs

  • REST API Enhancements
    • Improved API coverage – more than 700 new APIs added
    • REST API Browser Enhancements
      • Support for the 700+ newly added APIs
      • Legacy (JSON) APIs supported
      • Java sample code generation ‘Tab’ added
      • URL generation added

    • Java SDK Enhancements
      • Support for the newly added APIs.
        • Examples to showcase the usage.
        • How to Create, Update, Delete objects
        • How to retrieve data, submit/cancel Workflows etc.
    • Java docs for all the supported API’s
    • Reformatted and improved Developer Guide for REST API
  • CloupiaScript & Custom Tasks
    • CloupiaScript Guide with Examples
    • Custom Tasks Guide
    • Example Custom Tasks
    • Examples for Accessing and Creating Reports
    • Javadocs

Support for VSG based Application Container Support

  • PNSC Management
    • Add PNSC account, Collect PNSC inventory,  provide inventory reports
    • Support for Firewall Policy with Zones, ACLs


  • VSG based Application Container support
    • Support for new VSG Applications Container
    • Deploy VSG in HA/Standalone mode
    • Register VSG with PNSC


Device Connector Enhancements Include:


UCS Manager Integration – Enhancements

  • Provisioning of Local Disk Configuration Policy, Maintenance Policy
  • Workflow task to add/remove VLAN from vNIC Templates, 
  • Workflow tasks to add VLAN to a vNIC of a Service Profile,
  • Workflow tasks for Server Pool Management
  • Allows assigning a Blade or Server Pool to a Group
  • Workflow task for cloning Service Profile and Service Profile Template
  • Associate Service Profile Template to Server Pool through task and action


Support for UCS Central Integration

  • Inventory Collection for UCS Central account discovers Domains, Domain Groups, Domain Group Policies, and Registration Policies.
  • In addition, for every UCS Domain, discovers inventory for UCSM including Fabric Interconnects, Chassis, and Servers.
  • Provisioning of Local/Global Service Profiles and Service Profile Templates
  • UCSM account can register/unregister from UCS Central
  • UCSM Policies can be set as local/global policies


Support for CIMC 1.5 (Double Peak)

  • UCS-D now uses the XML API to communicate with C-Series servers.
  • Enhanced inventory collection using the XML API.
  • Added reports for Storage Adapter, VIC Adapter, and Network Adapter
  • Added support to launch the KVM Console of a C-Series server through UCS-D
  • Support for Bare Metal Provisioning with Local Boot and FC SAN Boot.


EMC VMAX Integration – Enhancements

  • Fast Policy – CRUD operations,  Associate/Disassociate  Storage Group from Fast Policy
  • Storage Tier – CRUD operations, Add/Remove Disk Group to Storage Tier, Add/Remove Thin Pool to Storage Tier
  • FAST Controller – Display Status, Display/Update parameters/settings, Enable/Disable Fast Controller
  • Data Dev – CRUD Operations


EMC VNX, VNX2 Integration – Enhancements

  • Support for EMC VNX2 Series Storage
  • New models supported: 5400, 5600, 5800, 7600
  • Asset Discovery and Inventory Reports, Orchestration Tasks & Actions
  • CRUD Operations on:
    • CIFS Server
    • CIFS Share
    • File System Quota
    • SnapView


NetApp Clustered Data ONTAP (C-Mode) – Enhancements

  • CRUD operations on:
    • Aggregate
    • CIFS Servers, Shares, ACL
    • Snapshot policy, schedule, Restoring files
    • Licensing
    • iSCSI initiators
    • Quota on Volume, Qtrees
    • FCP
    • iGroup
    • Clone LUN
    • DNS, NFS, LIF, Cluster Node configuration
    • Provisioning VM through VSC CMODE storage
    • SIS
    • SnapMirror
    • Vserver Peering
    • Cluster Peering
    • New Context Mappers


Hyper-V Support – Enhancements

  • Support for SMB 3.0 and SMI-S (LUN) based storage providers.
  • Tasks for add/remove SMB 3.0/LUNs from Host/Cluster
  • Support for SCVMM 2012 R2.
  • SMT for SCVMM 2012 SP1 and SCVMM 2012 R2.
  • Support for Host Power Actions.
  • Web and RDP Access for VMs.
  • Stack View Support for Advanced Networking features in SCVMM 2012 SP1 and SCVMM R2


Network Provisioning  - Enhancements

  • New Device Support
    • Nexus 1110
    • Nexus 6K
    • Nexus 9K
    • MDS 9250i
    • N1K HA inventory
    • Enhancements in SAN Zone tasks  
      • A new task to create San Zone/Delete San Zone.
      • A new task to create San Zone Set / Delete San Zone Set.
      • A new task to Add/Remove San Zone members.
      • A new task to Add/Remove San Zone Set members
      • A new task to Add/Remove the Device Alias.
      • Ability to provide Device Alias and Zone Set names
    • Enhancements in SSH Task
      • Support for handling error scenarios
      • Support to return command output


VMware Integration – Enhancements

  • Report Enhancements:
    • Data Store Monitoring reports: Disk latency, IO throughput and IOPs, Total capacity, Used Capacity, Provisioned capacity and Over commitment ratio, Virtual Machine's .vmdk Disk Latency report
    • vSphere 5.5 support
    • Apply target vDC policies when moving a VM across VDCs
    • Policy Enhancements:
      • vDC System Policy Time zone options list and support all regions
      • vDC Storage policy for VMware with thin provisioning option (Flexibility)
      • IP Pool policy for defining Static IPs
      • Flexible vDC Network policy for different set of VM network requirements
    • New VMware tasks/actions Enhancements:
      • Create VMKernel port group on a standard VMware vSwitch.
      • Removing an ESXi host from Distributed vSwitch
      • VM Cloning task + Guest OS Customization
      • VM Select Task
      • Ability to hot add more CPU & Memory to VMs.

We are pleased to announce the release of UCS Manager 2.2(1b), codenamed “El Capitan”. The release is now available for download.

Links to download this release are as follows:

  • Infrastructure software bundle: Click here to download
  • B-series and C-series software bundles for this release are available at the above link, under “Related Software”.
  • UCS Platform Emulator 2.2(1b):  Click here to download
    • NOTE: From UCS PE 2.2(1bPE1) onwards, UCS PE supports uploading the B-Series and C-Series server firmware bundles. Because of the large file sizes of the firmware bundles, UCS PE supports uploading only the stripped-down versions (attached to this document), which include the firmware metadata but not the actual firmware binaries. The stripped-down versions of the B-Series and C-Series server firmware bundles, containing metadata only, are reduced to approximately 50 kB in size.


Relevant documentation for the 2.2(1) release can be accessed as follows:


El Capitan delivers several key features and major enhancements in the Fabric, Compute, and Operational areas. The following is a complete overview of the features in El Capitan:


Fabric Enhancements:

  • Fabric scaling
    • El Capitan supports new underlying NxOS switch code, which enables UCS to increase the scale numbers on the 6200 Fabric Interconnects, supporting up to 2000 VLANs, 2750 VIFs, 4000 IGMP Groups, 240 vHBAs, and 240 Network Adapter Endpoints.


  • IPv6 Management Support
    • Allow management of UCS Manager and UCS servers using IPv6 addresses
    • Allow access to external services (e.g. NTP, DNS) over IPv6
    • External facing client applications (e.g. scp, ftp, tftp) and external facing services (e.g. sshd, httpd, snmpd) are now accessible over IPv6 addresses


  • Uni-Directional Link Detection (UDLD) Support
    • Uni-Directional Link Detection (UDLD) is Cisco’s data link layer protocol that detects and optionally disables unidirectional links
    • Supported in FI End-Host and Switching mode
    • A global policy and per-port policy are added to configure UDLD parameters including: mode, msg interval, admin state, recovery action


  • User Space NIC (usNIC) for Low Latency
    • UCS will support High Performance Computing (HPC) applications through a common low-latency technology based on the usNIC capability of the Cisco VICs
    • usNIC allows latency sensitive MPI applications running on bare-metal host OSes to bypass the kernel
    • Supported for Sereno-based adapters only (VIC 1240, VIC 1280, VIC 1225)


  • Support for Virtual Machine Queue (VMQ)
    • Enables support for MS Windows VMQs on the Cisco VIC adapter
    • Allows a network adapter to dedicate a transmit and receive queue pair to a Hyper-V VM NIC
    • Improves network throughput by distributing processing of network traffic for multiple VMs among multiple CPUs
    • Reduces CPU utilization by offloading receive packet filtering to the network adapter


Operational Enhancements:

  • Direct Connect C-Series to FI without FEX
    • Support direct connections of C-Series rack servers to the Fabric Interconnect without having to invest in a 2232PP FEX
    • Supported for the following rack servers connected with Single Wire Management and Cisco VIC 1225 adapter: C260 M2, C460 M2, C22 M3, C24 M3, C220 M3, C240 M3, C420 M3


  • Two-factor Authentication for UCS Manager Logins
    • Support for strengthened UCSM authentication, requiring a generated token along with username/password to authenticate UCSM or KVM logins
    • UCSM uses a single authentication request that combines the token and password in the password field of the request


  • VM-FEX for Hyper-V Management with Microsoft SCVMM
    • UCSM will support full integration with SCVMM for VM-FEX configuration
    • A Cisco provider plugin is installed in SCVMM, fetches all network definitions from UCSM and periodically polls for configuration updates
    • Supported for SCVMM 2012 SP1, Windows Hyper-V 2012 SP1 & Windows Server 2012


  • CIMC In-band Management
    • CIMC management traffic takes the same path as data traffic via the FI uplink ports
    • Separating CIMC management traffic from UCSM management traffic increases available bandwidth on the FI management port
    • Support In-band CIMC access over IPv4/IPv6 (IPv6 access not supported Out-of-band due to NAT limitations)


  • Direct KVM Access
    • Direct KVM access launches KVM via URL:
    • System admins allow server admins to access the KVM console without requiring the UCSM IP address
    • The CIMC IP URLs are hosted on the Fabric Interconnect
    • Supported over out-of-band only


  • Server Firmware Auto Sync
    • Server firmware is automatically synchronized and updated to the version configured in the ‘Default Host Firmware Package’
    • A global policy allows the user to configure options:
      • Auto Acknowledge (default)
      • User Acknowledge
      • No Action (feature turned off)
    • Guarantees server firmware consistency and compatibility when adding a new or RMA’ed server to a UCS domain


Compute Enhancements:

  • Secure Boot
    • Establish a chain of trust on the secure boot enabled platform to protect it from executing unauthorized BIOS images
    • Secure Boot utilizes the UEFI BIOS to authenticate UEFI images before executing them
    • Standard implementation based on the Trusted Computing Group (TCG) UEFI 2.3.1 specification


  • Enhanced Local Storage Monitoring
    • Enhance monitoring capabilities for local storage, providing more granular status of RAID controllers and physical/logical drive configurations and settings
    • New Out-of-Band communication channel developed between CIMC and the RAID Controller allows for near real-time monitoring of local storage without the need for host-based utilities or additional server reboot/re-acknowledgement
    • Support monitoring the progress and state of long-running operations (e.g. RAID Rebuild, Consistency Check)


  • Precision Boot Order Control
    • Support creating UCSM Boot Policies with multiple instances of Boot Devices (FlexFlash, Local LUN, USB, Local/Remote vMedia, LAN, SAN, and iSCSI)
    • Provides precision and full control over the actual boot order for all devices in the system:
      • Multiple Local Boot Devices (RAID LUN/SD Card/Internal USB/External USB) and SAN
      • Local & Remote vMedia devices
      • PXE/SAN boot in multipath environments


  • FlexFlash (Local SD card) Support
    • UCSM provides inventory and monitoring of the FlexFlash controller and SD cards
    • Local Disk Policy contains settings to enable ‘FlexFlash RAID Reporting’
    • Number of FlexFlash SD cards is added as a qualifier for server pools


  • Flash Adapters & HDD Firmware Management
    • UCSM Firmware bundles now contain Flash Adapter firmware and Local Disks firmware.
    • UCSM Host Firmware Policies can now designate desired firmware versions for Flash Adapters and Local Disks


  • Trusted Platform Module (TPM) Inventory
    • Allow access to the inventory and state of the TPM module from UCSM (without having to access the BIOS via KVM)


  • DIMM Blacklisting and Correctable Error Reporting
    • Improved accuracy at identifying “Degraded” DIMMs
    • DIMM Blacklisting will forcefully map-out a DIMM that hits an uncorrectable error during host CPU execution
    • Opt-in feature enabled through an optional Global Policy (Disabled by default)


The El Capitan features enable several UCS Solutions including:

  • VM-FEX with SCVMM for MS Private Cloud
  • Direct Connect C-Series for Smaller Big Data Clusters
  • Direct Connect C-Series for Smaller VDI Deployments
  • Direct Connect C-Series for FlexPod Reference Architecture with ESX 5.5
  • Enhanced Local Storage Monitoring for Improved System Management Integration and SMB VDI Solutions
  • PCIe Flash Cards Support for Non-Persistent VDI
  • usNIC-based HPC Solutions on Cisco UCS B-Series 
  • Ubuntu Support for OpenStack


Please stay tuned for more feature demo videos to be added to this blog post.

When a single UCS Manager domain contains a heterogeneous mix of different blade types with different processors and different memory footprints, it can be challenging to find the ideal blade for a given workload. I like to create a custom user label for each blade so that I can easily glance at the equipment tab and see what compute resources are available to me. The user label for each blade is shown in parenthesis as seen in the graphic below. By default, the user label field is blank and nothing is displayed next to the server on the equipment tab.

[Screenshots: user label applied to each blade; no user label (default)]


Easy to create a meaningless label


It's actually quite easy to set a custom label for all of your blades using PowerTool. Try this:


Get-UcsBlade | Set-UcsBlade -UsrLbl "Custom Label" -Force


If it were really that easy, I wouldn't need to write a blog about it. If you tried that code, you found that you just gave every blade in your entire environment the same label. What is the point of labeling if everything is going to have the same label? Instead, let's examine how to set a unique label for each blade. We will start with serial number.


Get-UcsBlade | % { $_ | Set-UcsBlade -UsrLbl $_.Serial -Force }


or a more readable form of the exact same command:


foreach($blade in Get-UcsBlade) {

    $serial = $blade.Serial

    $blade | Set-UcsBlade -UsrLbl $serial -Force

}



Now every blade has a unique label, although this isn't a particularly useful label for me since I don't have my blade serial numbers memorized.




Meaningful labels


Instead of using serial number as a label, I prefer a label in the format: B200 M3 / E5-2665 / 128GB


To get that label, I'll need to collect the blade model name, processor name, and memory footprint. I'll use the same framework I used for setting the label equal to the serial number:


foreach($blade in Get-UcsBlade) {
    $model = ($blade | Get-UcsCapability).Name -replace "Cisco UCS ", ""
    $long = $blade | Get-UcsComputeBoard | Get-UcsProcessorUnit -Id 1 | select -ExpandProperty Model
    $cpu = $long | Get-FriendlyProcName
    $mem = $blade.TotalMemory / 1KB
    $custom_label = "$model / $cpu / $mem"
    $blade | Set-UcsBlade -UsrLbl $custom_label -Force
}


If you run that code, you will likely get an error because PowerShell doesn't recognize the function "Get-FriendlyProcName". You can read my last blog about generating friendly names for processors and just use the function I've created; or you can grab the one snippet from that function that you actually need and use this code instead:


foreach($blade in Get-UcsBlade) {
    $model = ($blade | Get-UcsCapability).Name -replace "Cisco UCS ", ""
    $long = $blade | Get-UcsComputeBoard | Get-UcsProcessorUnit -Id 1 | select -ExpandProperty Model
    if($long -match 'Intel.*?([EXL\-57]+\s*\d{4}L*\b(\sv2)?)') {
        $cpu = $Matches[1] -replace '- ', "-"
    }
    else {
        $cpu = "unknown"
    }
    $mem = $blade.TotalMemory / 1KB
    $custom_label = "$model / $cpu / $mem"
    $blade | Set-UcsBlade -UsrLbl $custom_label -Force
}


That code will combine the model name, CPU name, and memory footprint of each blade to create a very useful user label as shown below.



Taking it further


Just a few lines of code can provide a very useful user label. What else can you display? How about the model of the mezz card installed? Firmware version(s)? Number and size of disks? MAC addresses? All of these are possible. Taking it a step further, I might only want to display a user label for blades that are available for me to use. If you want to label only blades that are not already in use, take a look at the Association property of each blade.
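
As a sketch of that last idea, you might filter on the Association property before labeling. The helper function name here is my own invention; the property name and "none" value follow the UCS object model as described above:

```powershell
# Sketch: pick out only blades that are not associated with a service profile.
# The Association property comes from the UCS object model; "none" indicates
# an unassociated blade. The function name is my own.
function Select-AvailableBlade {
    param([Parameter(ValueFromPipeline=$true)]$Blade)
    process {
        if ($Blade.Association -eq "none") { $Blade }
    }
}

# Against a live PowerTool session, you could then label just those blades:
# Get-UcsBlade | Select-AvailableBlade |
#     % { $_ | Set-UcsBlade -UsrLbl "AVAILABLE: $($_.Serial)" -Force }
```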


Did you know you can apply a user label to a chassis? You could display number of uplinks, number of power supplies -- or even something as drab as the serial number of the chassis.
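
To sketch the chassis idea, you could build the label from inventory counts. The label format and helper name are my own; the PowerTool pipeline in the usage comment assumes a connected UCS session:

```powershell
# Sketch: build a chassis user label from inventory counts and the serial.
# The helper name and label format are my own inventions.
function New-ChassisLabel {
    param([int]$PsuCount, [int]$IomCount, [string]$Serial)
    "$PsuCount PSU / $IomCount IOM / SN $Serial"
}

# Usage against a live session (Get-UcsPsu / Get-UcsIom enumerate chassis parts):
# foreach($chassis in Get-UcsChassis) {
#     $label = New-ChassisLabel -PsuCount ($chassis | Get-UcsPsu).Count `
#                               -IomCount ($chassis | Get-UcsIom).Count `
#                               -Serial $chassis.Serial
#     $chassis | Set-UcsChassis -UsrLbl $label -Force
# }
```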


User labels can be utilized to display information useful to your administrators, and I've just shown you that it is easy to do. Enjoy!

Topic: Cisco Standalone C-Series Servers - Teach your old server new tricks



Direct YouTube Link: Cisco Standalone C-Series Servers - Teach your old server new tricks

Google+ Link:


In case you were not able to attend, members of our UCS TME team along with Johnny Devaprasad from Bright Computing hosted a deep dive discussion on managing Standalone C-Series Servers where we shared a number of new ‘tricks’ for managing your C-Series Servers based on areas of unique innovation in these systems. This session provided best practices, recommendations, demonstrations, and pointers to relevant reference documentation for managing the Cisco UCS Servers.



Eric Williams, Technical Marketing Engineer, Cisco



Jeff Foster, Technical Marketing Engineer, Cisco

Steve McQuerry, Technical Marketing Engineer, Cisco
Johnny Devaprasad, Bright Computing


Links Relevant to this session:

Cisco IMC 1.5 Datasheet:

Standalone C-Series XML API Programmers Guide:

Standalone C-Series Non-Interactive Host Update Utility:

PowerTool for Standalone C-Series:

Standalone C-Series CIMC Mounted vMedia:


To learn more about Cisco Standalone C-Series:

Cisco C-Series Rack servers -

Cisco UCS Communities:

Whitepaper to C-Series server management -

Cisco UCS Manager:


To learn more about Cisco UCS Manager and Standalone C-Series:

Cisco UCS Communities:

Cisco Developed Integrations:

Cisco UCS Manager:

Cisco UCS Central:

Cisco UCS Management (Blog):

Whether you use PowerShell to pull inventory information from Cisco UCS Manager or from VMware vCenter or from Microsoft SCOM, you will find those tools just report the processor name exactly as Intel or AMD display them. I find those strings to be unfriendly if you want to display them for a user or perform any further processing on them.


They look like this:


Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz

Intel(R) Xeon(R) CPU E7-L8867  @ 2.13GHz

Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz

Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz

Intel(R) Xeon(R) CPU E7- 4850  @ 2.00GHz

Intel(R) Xeon(R) CPU E5-2440 0 @ 2.40GHz

Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz

Intel(R) Xeon(R) CPU           X5687  @ 3.60GHz

Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz


Take a look at the list above and notice all the irregularities:


  1. The E7 processors have a space after the dash and the E5 processors do not.

  2. The E5 processors have a zero between the model name and the speed.

  3. The Westmere/Nehalem processors have a lot of spaces between the word CPU and the model name.

  4. Second generation E5 processors have "v2" after the model name.


In addition to these irregularities, you likely don't need the words Xeon or CPU or the speed of the processor.


How do we make the ugly list above look like the list below?


Intel E7-2860

Intel E7-L8867

Intel E5-2650L

Intel E5-2643

Intel E7-4850

Intel E5-2440

Intel E5-2695 v2

Intel X5687

Intel E5540


It's all done with regular expressions in PowerShell. If you're not already familiar, a regular expression is a very powerful search mechanism that can appear very intimidating to the uninitiated. Luckily, I've written a function called Get-FriendlyProcName that takes care of all of the complicated searching and replacing for you. You can provide it a single argument or give it pipeline input like this:


Get-FriendlyProcName "Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz"


-- or --


Get-UcsBlade | Get-UcsComputeBoard | Get-UcsProcessorUnit -Id 1 | select Model | Get-FriendlyProcName


The function will convert both Intel and AMD processor names into something more readable. If you run it in -verbose mode, you will get additional feedback. Try it out!
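
If you're curious what the core of such a function might look like, here is a stripped-down sketch that handles only the Intel cases from the list above. The function name is my own, and the real Get-FriendlyProcName covers AMD and more edge cases:

```powershell
# Minimal sketch of the Intel portion of a friendly-name function.
# AMD handling and verbose output are omitted for brevity.
function Get-SimpleProcName {
    param([Parameter(ValueFromPipeline=$true)][string]$Model)
    process {
        if ($Model -match 'Intel.*?([EXL\-57]+\s*\d{4}L*\b(\sv2)?)') {
            # Collapse the stray space Intel puts after the dash on E7 parts
            "Intel " + ($Matches[1] -replace '- ', '-')
        }
        else {
            $Model    # leave non-matching strings untouched
        }
    }
}

"Intel(R) Xeon(R) CPU E7- 2860  @ 2.27GHz" | Get-SimpleProcName   # Intel E7-2860
```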

In case you were not able to attend the live session, members of our UCS TME team hosted a deep dive discussion on how monitoring really works under the covers in both UCS Manager and Standalone C-Series Servers.  In this session we touched on best practices, recommendations, demonstrations, and pointers to relevant reference documentation for monitoring Cisco UCS Servers.


A replay of this session and relevant documents are available at the following Communities site:


Also now available is the Cisco UCS Monitoring Resource Handbook available here:


These resources were developed and presented by:
- Eric Williams, Moderator, Technical Marketing Engineer, Cisco
- Jeff Foster, Technical Marketing Engineer, Cisco
- Jason Shaw, Technical Marketing Engineer, Cisco


Please let us know what you think of these monitoring resources!

The Cisco UCS Diagnostics software for Cisco UCS Blade Servers enables you to run tests on the hardware and verify the health of the components. The tool provides a variety of tests to exercise and stress the hardware subsystems on Cisco UCS Blade Servers. These tests can be run through the tool GUI or a CLI.


The subsystems it can test are

  • CPU
  • Memory
  • Video Memory
  • Storage (servers with MegaRAID controllers)


You can use the tool to run regression tests on the Cisco UCS Blade Servers after fixing or replacing hardware components. You can also use this tool to run comprehensive burn-in tests on Cisco UCS Blade Servers before deploying them in production environments.


Link to the software download site:


Release notes:


User Guide:


Let us know what you think of this tool.

Last month a competitor published a video on YouTube showing Cisco UCS Manager. The Twitter world was almost instantly abuzz with tweets denouncing the video. We asked some of our Cisco Partners the reason for their tweets and boy did we get a response.


Colin Lynch from the UK said that the competitor is not showing UCS in its best light, or put more bluntly are not using the product correctly.


Adam Eckerle, among other things, said, “I just find it tough to let this one go.”


Jamie Doherty of R2 Unified Technologies said, “I teach my kids that if you aren't telling the whole truth, it is no different than lying.  So that is why I took to twitter last week to comment on the video that HP posted regarding OneView vs. UCS Manager.  Now in order to follow my own rules, full disclosure: I work for a VAR who specializes in UCS and converged infrastructure.  A few years ago we made the transition from HP Servers to UCS, but we always follow the latest and greatest technology to make sure we maintain our edge.


There were so many little details to discuss, but let’s just get right to the meat and potatoes of a few HP claims.  First, let’s have the 80 versus 81 server debate.  HP shows how easy it is to provision server number 81 in OneView and then goes into moderate detail of what needs to happen in a typical UCS environment, where the converged architecture leverages shared bandwidth for better scalability.  If you are familiar with UCS, you know that the core architecture separates the I/O from the processing, so part of the claim has merit…if you ignore what it took HP to get there.  Each HP chassis is still an isolated set of hardware with dedicated I/O modules that need to be integrated into the network each time you deploy one.  While some of their software has helped bridge that gap, it still is a second class citizen to the UCS design and gets killed in a cost/port comparison.  So if HP OneView has a strength in deploying the 81st server, they still have huge architectural weaknesses in deploying the 17th, 33rd, 47th, 65th, etc.  I don’t even want to go into architectural superiority in the QoS architecture of UCS which allows you to go well beyond that 80th server in UCS, that would just get too crazy, but I will mention that 40G and 100G are right around the corner, so this video is about to be outdated in 3…2…1…


Second is the claim that you have to manually look through each server profile to adjust VLAN settings.  That is just flat out lying.  Man, how can you put that stuff on a video, digitized, so you look like a fool for the rest of your life?  Anyway, Cisco was the pioneer in abstracting hardware from configuration/identity, and with the assortment of vNIC Policies and Templates it takes just a few clicks to list the dependencies on the vNIC changes you wish to make.  If you deployed it properly, there is a very good chance you labeled it with the server type it was applied to which will make it even easier.


While I still have a little of your attention, let me get to the big gotcha in all of this…HP is comparing the wrong product.  Whether or not they know this, both are bad places to be.  HP should be comparing OneView to UCS Director, both are designed for management of multiple server architectures and both are an additional cost to the customer (UCS Manager is $0).  Now if we were to compare these two products, it isn’t even a fair fight.  Yes, Cisco purchased Cloupia and is currently rebranding it as UCS Director, but they are still offering a truly converged architecture management solution, unlike HP.  In fact, if you have some time to kill (obviously if you do you MUST be running OneView and are about to deploy your 81st server), go check on the support matrix for the OneView product on their website.  Can’t find it?  Yeah, neither could I.  It is referenced in several documents including the datasheet, but it is nowhere to be found as of today.  I did see some documentation on it supporting only the c7000 series chassis, which is also ironic since UCS Manager supports both blade and rack server architecture – they must have forgotten to mention that one on the video.  On the flip side, UCS Director is managing UCS as well as several other core products on the market today including storage.


The rest of the video just makes me wonder if HP has heard of this new product, it is called VMware.  Most of the issues that OneView is addressing have already been addressed by VMware 2-3 years ago.  I am happy to go in depth with that in another blog if you want some clarity around moving workloads and balancing infrastructure.


Look, HP isn’t a bad server choice for a specific customer base, but they have nothing revolutionary to offer these days – and to pick on Cisco UCS, a company that is eating their lunch day after day, is just bad marketing (Not to mention the shirt Gary Thome is wearing…seriously man, you are going on a video, on the internet, FOREVER).  Anyway, now that I have put a permanent target on my back to, well, all of HP, have a great day.”


These are strong opinions voiced by Data Center professionals familiar with the Cisco UCS Manager.  Let us know what you think.

Event Link:


When:  Thursday, November 7, 2013 from 8-9AM PST


Topic:  Demystifying Monitoring for Cisco UCS Manager and Standalone C-Series Servers  


Join Cisco Technical Marketing Engineers for a deep dive discussion on how monitoring really works under the covers in both UCS Manager and Standalone C-Series Servers.  This session will provide best practices, recommendations, demonstrations, and pointers to relevant reference documentation for monitoring Cisco UCS Servers.



Eric Williams, Technical Marketing Engineer, Cisco



Jeff Foster, Technical Marketing Engineer, Cisco

Jason Shaw, Technical Marketing Engineer, Cisco


Links Relevant to this session:

UCS Manager MIB Reference Guide:

UCS Manager Fault Reference Guide:

C-Series MIB Reference Guide:

C-Series Fault Reference Guide:

Monitoring UCS Manager with Syslog:


To learn more about Cisco UCS Manager and Standalone C-Series:

Cisco UCS Communities:

Cisco UCS Manager:

Cisco UCS Central:

Cisco UCS Management (Blog):

I am often asked by customers why UCS has been so successful in such a short amount of time. My response is always the same: it comes down to two things – 1) Cisco and our partners’ ability to understand and execute against customer needs and 2) a fundamental difference in the underlying architecture.


You may know that Cisco invented UCS service profiles and built the entire system around the notion of hardware state abstraction. Cisco’s approach has been so successful because every element of the system was designed from the beginning to have its configuration set through software, without any licensing requirements. Whether customers are running bare-metal, virtualized, or any combination therein, Cisco UCS service profiles have revolutionized computing and have challenged competitors to try and replicate the simplicity and increased productivity that UCS Manager policies and templates provide. It’s no secret that Cisco UCS Manager has revolutionized the way customers deploy and manage servers, but here are a few things about UCS Manager that you may not be aware of.


Did you know that Cisco UCS Manager is embedded software running within the Fabric Interconnects in a highly available clustered configuration? This is an important distinction from traditional architectures as Cisco UCS Manager is a fully redundant management engine right out of the box the moment the system receives power, without special clustering software or additional licensing fees. UCS Manager not only orchestrates and automates hardware server provisioning, but provides device discovery, inventory, configuration, power management, diagnostics, monitoring, fault detection, auditing, and statistics collection.


Did you know that Cisco UCS uses a model-based management approach? Cisco UCS Manager performs an exhaustive automated discovery and inventory of all system components, combines that with user defined configuration data (pools/policies/profiles/templates) and applies it to the hardware. This means that server configuration occurs by manipulating the object model through Cisco UCS Manager’s GUI, CLI, or XML API. This also allows all elements of the model to be continuously adaptive to the environment and is the reason why Cisco UCS Manager has the ability to manage both Cisco blade and rack servers, regardless of server type or generation. This is the true essence of hardware state abstraction. This approach contrasts with the traditional use of multiple element managers to configure every component separately and (hopefully) accurately. Even bundling these element managers into a single interface doesn’t create an adaptive model, where, for example, you can apply the same service profile to any blade or rack server and have it automatically adjust state. With converged management as an add-on, traditional approaches are hamstrung by the limitations of software scripting of commonly used commands out to the hardware.


Did you know that Cisco UCS enables secure access to all Cisco UCS Manager functionality through the XML API? It provides a single point of integration for developers and system administrators to utilize the API to further customize and automate the system according to their unique requirements. Cisco embraces industry standard toolkits like PowerShell, Python and Java to create Software Developer Kits (SDKs) that enable straightforward integration into the Cisco UCS Manager XML API.  Imagine using these SDKs to orchestrate a private cloud, to manage OpenStack/open source environments, or to automate a virtualized solution to provision and manage all hardware compute elements. UCS Manager also provides system visibility to higher-level systems management and lifecycle tools from ISVs including Microsoft, BMC, CA, HP, IBM, Zenoss and others.
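
To make the XML API concrete: every query or configuration change is ultimately an XML document posted to UCS Manager. Here is a sketch of what a class-level query looks like. configResolveClass, computeBlade, and aaaLogin are real API names from the XML API schema, but the helper function and the cookie value are my own illustrations, since a real cookie comes back from an aaaLogin call:

```powershell
# Sketch: build a configResolveClass request for blade inventory. A real
# client would obtain the cookie from aaaLogin and POST this XML to the
# UCS Manager endpoint; here we only construct the document.
function New-UcsClassQuery {
    param([string]$Cookie, [string]$ClassId)
    "<configResolveClass cookie=`"$Cookie`" classId=`"$ClassId`" inHierarchical=`"false`" />"
}

New-UcsClassQuery -Cookie "1234567890/abcdef" -ClassId "computeBlade"
```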


Did you know that UCS Central is actually an extension of the model-based UCS Manager XML API, aggregating multiple UCS domains into a common architecture that fundamentally shares the same hardware abstraction properties and programmatic API? UCS Central not only provides capabilities around shared global service profiles, templates, policies and pools, but it aggregates local domain configurations and policies without cutting off control. This provides further availability and flexibility by not tying multi-system management of potentially thousands of servers to a single non-redundant entity. By enabling automation of processes, Cisco UCS unified management allows data center managers to achieve greater efficiency, agility, and scale in their server operations while reducing complexity and risk.

Cisco revolutionized system management with UCS Manager and truly changed the way customers simplify, deploy and maintain their environments. As traditional vendors strive to achieve profile and template functionality, Cisco embraces the competition and looks forward to continuing innovation leadership in the server management arena.


New to UCS Manager? Download the free UCS Platform Emulator (UCS PE):

Cisco UCS Communities:

UCS Communities Webinar Managing at Scale with UCS Central:

Cisco UCS Manager Page:

UCS SingleConnect:

UCS Advantage Videos: