The Cisco UCS Director team is pleased to announce support for IBM Storwize V7000 Unified, SAN Volume Controller (SVC), and VersaStack in Cisco UCS Director with the 5.2.0.1 release.

Cisco UCS Director delivers unified management and orchestration for the industry’s leading converged infrastructure solutions that are based on Cisco Unified Computing System (Cisco UCS) and Cisco Nexus. Cisco UCS Director extends the unification of compute, network, and storage layers through Cisco UCS to provide data center professionals with single-pane-of-glass management and a choice of storage solutions. Cisco UCS Director supports converged infrastructure solutions including VersaStack, NetApp FlexPod™ and FlexPod Express, EMC VSPEX, and VCE Vblock™.


Key Features and Use-Cases

  • Discovery and inventory of V7000 Unified, V7000 Block, and SVC accounts; VersaStack POD creation
  • Orchestration and provisioning: tasks to configure drives, create/remove MDisks, add/modify/delete volumes, map/unmap volumes to hosts, and more (a sample programmatic workflow invocation appears after this list)
  • Monitoring and reporting of disks, MDisks, storage pools, and more
  • Out-of-the-box workflows: datastore workflows for file and block, including thin block datastore to ESXi, NFS datastore to ESXi, compressed NFS datastore to ESXi, and thin NFS datastore to ESXi
  • VM and bare-metal provisioning
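For those who want to drive these orchestration tasks and workflows programmatically, here is a minimal sketch of a UCS Director REST call in Python. It follows the general UCS Director REST convention (an opName plus a JSON opData payload, authenticated by the X-Cloupia-Request-Key header); the server address, API key, workflow name, and inputs are all placeholders rather than anything specific to this release.

import json
import requests

# Placeholder values: substitute your UCS Director address and REST API access key.
UCSD_URL = "http://ucsd.example.com/app/api/rest"
HEADERS = {"X-Cloupia-Request-Key": "REPLACE_WITH_API_ACCESS_KEY"}

# Hypothetical workflow name and inputs, for illustration only.
op_data = {
    "param0": "Map Volume to Host",                                  # workflow to run
    "param1": {"list": [{"name": "volumeName", "value": "vol01"}]},  # workflow inputs
    "param2": -1,                                                    # -1 = no parent service request
}

resp = requests.get(UCSD_URL, headers=HEADERS, params={
    "formatType": "json",
    "opName": "userAPISubmitWorkflowServiceRequest",
    "opData": json.dumps(op_data),
})
print(resp.json())  # on success, includes the new service request ID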

 

Additional Resources:

 


Cisco UCS Director 5.2 is Available Now!

 

The UCS Director team is pleased to announce the availability of Cisco UCS Director version 5.2.

 

This release provides significant enhancements:

·       Expanding unified automation beyond the data center to remote offices, with support for UCS Mini and UCS E-Series

·       Installation and guided setup wizards that deliver consistency and reduce deployment times by ensuring key elements are set up in the correct order, preventing rework

·       More robust support for Microsoft Hyper-V

 

This release has been evaluated and tested by Cisco internal teams, customers, and partners through the Early Field Trial program.

 

UCS Director 5.2 Feature Highlights:

 

Virtualization:

·       Updated Microsoft Hyper-V feature set, covering SCVMM networking models, topology reports, Linux guest customization, Run Script, comprehensive reports, and more

Network:

·       New platform support for Nexus 1000V on Hyper-V

·       DFA 2.0 support

Storage:

·       NetApp ONTAP API 8.2.2P1

·       Failover group mode support for C-mode

·       EMC RecoverPoint enhancements

Compute:

·       Expanding into the remote office/branch office space with UCS Mini platform support

·       New platform support for UCS E-series and enhanced features for UCS C-series through project Gold Town integration (IMC Supervisor)

·       B200 M4 blade support

·       HP Onboard Administrator v4.3 support to enable Bare Metal OS deployment automation

ACI:

·       APIC firewall policy

·       L3 External Routed Networks

·       ASAv VM deployment policy

·       Tenant onboarding onto Cluster

Platform:

·       Resource Groups support for VNX, Clusters/Multiple hosts

·       BMA enhancements

·       Customer and field requests in various areas

Guided Setup:

·       Create Wizard from Workflows

·       Vblock, VSPEX wizards

·       vDC creation with catalog for Hyper-V

·       Infrastructure discovery for virtualization and multi-domains

·       BMA Setup Wizard

 

 

Software Download Links:

·       Cisco UCS Director 5.2 Download Site

 

Documentation Links:

·       Release Notes

·       Install & Upgrade Guides

·       Compatibility Matrix

·       Open Automation & SDK

·       Refer to this link for all documentation: Cisco UCS Director Documentation

Is there a way to check the UCS Fabric Interconnect disk usage via UCS PowerTool?
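One plausible way to approach the question above: the Fabric Interconnect's flash partitions are exposed in the UCS object model as the storageItem class, which is what PowerTool's Get-UcsStorageItem cmdlet wraps. The same query can be made from Python with the UCS Python SDK; treat the exact property names in this sketch as assumptions to verify against your UCSM schema.

from ucsmsdk.ucshandle import UcsHandle

# Placeholder address and credentials.
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# storageItem instances model the FI flash partitions (bootflash, workspace, ...).
for part in handle.query_classid("storageItem"):
    print(part.dn, part.size, part.used)  # size/used property names are assumptions

handle.logout()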

Cisco is pleased to announce the release of IMC Supervisor 1.0.


Cisco IMC Supervisor is a management system that enables monitoring of up to 1,000 standalone C-Series and E-Series servers. It supports bulk discovery, grouping, and tagging of systems for monitoring and inventory purposes.

 

Cisco IMC Supervisor can be used to perform the following tasks for supported servers:

  • Inventory collection for managed servers
  • Centralized monitoring of servers and groups
  • Firmware management, including firmware download, upgrade, and activation
  • Server actions, including power control, LED control, KVM launch, and IMC web UI launch
  • Generation of e-mail alerts for critical faults
  • Logical grouping and tagging of servers, with summary views per group
  • Role-Based Access Control (RBAC) to restrict access based on role

 

The release is now available for download on Cisco.com:

 

IMC Supervisor 1.0 Software Download

IMC Supervisor 1.0 Release Notes

IMC Supervisor 1.0 Installation Guide

IMC Supervisor 1.0 Management Guide

UCS Communities Page

 

 

IMC Supervisor implements license enforcement. The license structure includes a base license (per instance of IMC Supervisor) and a required secondary license tied to the number of systems under management. Support is available and tied to the number of managed systems (endpoints). The license and support PIDs are below.

 

 

Description                              License PID                           Support PID
IMC Supervisor Base License              CIMC-SUP-BASE-K9=                     N/A
Terms & Conditions Acceptance (Req’d)    CIMC-SUP-TERM (option for above PID)  N/A
100 Server Enablement                    CIMC-SUP-B01=                         CON-SAU-SUPB01
250 Server Enablement                    CIMC-SUP-B02=                         CON-SAU-SUPB02
1,000 Server Enablement                  CIMC-SUP-B10=                         CON-SAU-SUPB10

 

Please join our experts on December 18th at 8AM PT for our upcoming IMC Supervisor Tech Talk:  Cisco Tech Talk: IMC Supervisor

We are pleased to announce the release of Cisco UCS Director version 5.1.

Key drivers for Cisco UCS Director 5.1 software:

  • Reduce application deployment time and speed the delivery of services on converged infrastructure built on Cisco’s innovative ACI.
  • Improve customer adoption by reducing UCS Director initial setup and configuration time.

Our customers, partners and Cisco internal team members have tested and provided valuable feedback through a formal Early Field Trial (EFT) program.  As a result, we are delighted to announce the following new features in Cisco UCS Director 5.1 software:

  • Platform enhancements: Resource Groups (for ACI only), Activities, Tagging Library
  • Support for Cisco APIC appliance with 1.0(1e)
  • Support for APIC Application Containers
  • Whiptail/Invicta account-related changes
  • REST API changes
  • VCE Vision Intelligent Operations version 2.5
  • Introduction of Guided Setup Wizards – System Setup Wizard, Device Discovery Wizard, vDC Setup Wizard, FlexPod Setup Wizard
  • Support for two personalities during install – Cisco UCS Director and Cisco UCS Director Express for Big Data

 

Links to download this release are as follows:

 

Relevant documentation for Cisco UCSD 5.1 release can be accessed as follows:


Regards,

Cisco UCS Director Team


It almost feels like this blog entry should start with “Once upon a time…”, because it captures the journey of a young emerging technology and the powerful infrastructure tool it has become. The Cisco UCS journey starts with the tale of Unified Fabric and the Converged Network Adapter (CNA).

 

Most people think of Unified Fabric as the ability to put both Fibre Channel and Ethernet on the same wire between the server and the Fabric Interconnect or upstream FCoE switches.  That is part of the story, but that part is as simple as putting a Fibre Channel frame inside an Ethernet frame.  What is the magic that makes this happen at the server level?  Doesn’t FCoE imply that the operating system itself would have to know how to present a Fibre Channel device in software and then encapsulate and send the frame across the Ethernet port?  Possibly, but that would require OS FCoE software support, which would add CPU overhead and require end users to qualify these new software drivers and compare their performance against existing hardware FC HBAs.

 

For UCS, the key to the success of converged infrastructure was in great part the very first Converged Network Adapters that were released.  These adapters presented existing PCIe Fibre Channel and Ethernet endpoints to the operating system, requiring no new drivers and no new qualification from the perspective of the operating system and users.  At the heart of this adapter, however, was a Cisco ASIC that provided two key functions:

 

1.)  Present the physical functions for existing PCIe devices to the operating system without the penalty of PCIe switching.

2.)  Encapsulate Fibre Channel frames into Ethernet frames as they are sent to the northbound switch.

 

Converged Network Adapter

 

It is the second function that we often focus on, because that’s the cool networking portion that many of us at Cisco like to talk about.  But how exactly do we convince the operating system that it is communicating with an Intel dual-port Ethernet NIC and a dual-port 4-Gb QLogic Fibre Channel HBA?  I mean, these are the exact same drivers that we use for the actual Intel and QLogic cards; there’s got to be some magic there, right?

 

Well, yes and no.  Let’s start with the no.  Presenting different physical functions (PCIe endpoints) on a physical PCIe card is nothing new.  It’s as simple as putting a PCIe switch between the bus and the endpoints.  But like all switching technologies, a PCIe switch incurs latency, and it cannot encapsulate an FC frame into an Ethernet frame.  So that’s where the magic comes into play.  The original Converged Network Adapter contained a Cisco ASIC that sits on the PCIe bus between the Intel and QLogic physical functions.  From the operating system’s perspective the ASIC “looks” like a PCIe switch providing direct access to the Ethernet and Fibre Channel endpoints, but in reality it has the ability to move I/O in and out of the physical functions without incurring the latency of a switch.  The ASIC also provides a mechanism for encapsulating FC frames into a specific Ethernet frame type to provide FCoE connectivity upstream.

 

The pure beauty of this ASIC is that we have evolved it from the CNA to the Virtual Interface Card (VIC).  Traditional CNAs have a limited number of Ethernet and FC ports available to the system (two each), based on the chipsets installed on the card.  The Cisco VIC instead allows a variety of vNICs and vHBAs to be created on the card.  The VIC not only virtualizes the PCIe switch, it virtualizes the I/O endpoint.

 

Cisco Virtual Interface Card

 

So in essence, what we have created with the Cisco ASIC that drives the VIC is a device that can use a standard PCIe mechanism to present an end device directly to the operating system.  This ASIC also provides a hardware mechanism designed to receive native I/O from the operating system and encapsulate and translate it where necessary, without the need for OS stack dependencies; for example, native Fibre Channel encapsulated into Ethernet.

 

At the heart of the UCS M-Series servers is System Link Technology.  It is this specific component that gives the compute nodes access to the shared I/O resources in the chassis.  System Link Technology is the third generation of the technology behind the VIC and the fourth generation of Unified Fabric within the construct of Unified Computing.  The key function of System Link Technology is the creation of a new PCIe physical function called the SCSI NIC (sNIC), which presents a virtual storage controller to the operating system and maps drive resources to a specific service profile within Cisco UCS.

 

System Link Technology

 

It is this innovative technology that allows each compute node within UCS M-Series to have its own virtual drive carved out of the available physical drives within the chassis.  This is accomplished using standard PCIe, not MR-IOV, so it does not require the operating system to have any special knowledge of a change in the PCIe frame format.

For a more detailed look at System Link Technology in the M-Series check out the following white paper.

 

The important thing to remember is that hardware infrastructure is only part of the overall architectural design for UCS M-Series.  The other component that is key to UCS is the ability to manage the virtual instantiations of the system components.  In the next segment on UCS M-Series, Mahesh will discuss how UCS Manager rounds out the architectural design.

Cisco UCS M-Series servers have been purpose-built to fit a specific need in the data center.  The core design principles center on sizing the compute node to meet the needs of cloud-scale applications.

 

When I was growing up, I used to watch a program on PBS called 3-2-1 Contact most afternoons when I came home from school (yes, I’ve pretty much always been a geek).  There was an episode about size and efficiency that, for some reason, I have always remembered.  This episode included a short film to demonstrate the relationship between size and efficiency.

 

The plot goes something like this.  Kid #1 says that his uncle’s economy car, which gets a whopping 15 miles to the gallon (this was the 1980s), is more efficient than a school bus that gets 6 miles to the gallon.  Kid #2 disagrees and challenges Kid #1 to a contest.  But here’s the rub: the challenge is to transport 24 children from the bus stop to school, about 3 miles away, on a single gallon of fuel.  Long story short, the school bus completes the task in one trip, but the car has to make 8 trips and runs out of fuel before it completes the task.  So Kid #2 proves the school bus is more efficient.

 

The only problem with this logic is that we know that the school bus is not more efficient in all cases.

 

For transporting 50 people, a bus is very efficient, but if you need to transport 2 people 100 miles to a concert, the bus would be a bad choice.  Efficiency depends on the task at hand.  In the compute world, a task equates to the workload.  Using a 1RU 2-socket E5 server for the distributed cloud-scale workloads that Arnab Basu has been describing would be equivalent to using a school bus to transport a single student.  This is not cost effective.

 

Thanks to hypervisors, we can run multiple workloads on a single server and achieve economies of scale.  However, there is a penalty to building that type of infrastructure: you add licensing costs, administrative overhead, and performance penalties.

 

Customers deploying cloud-scale applications are looking for ways to increase compute capacity without increasing cost and complexity.  They need all-terrain vehicles, not school buses: small, cost-effective, easy-to-maintain resources that serve a specific purpose.

 

Many vendors entering this space are just making the servers smaller.  Per the analogy above, smaller helps.  But one thing we have learned from server virtualization is that there is real value in the ability to share infrastructure.  With a physical server, the challenge becomes: how do you share components of the compute infrastructure without a hypervisor?  Power and cooling are easy, but what about network, storage, and management?  This is where M-Series expands on the core foundations of unified compute to provide a compute platform that meets the needs of these applications.

 

There are two key design principles in Unified Compute:

 

1.) Unified Fabric
2.) Unified Management

 

Over the next couple of weeks, Mahesh Natarajan and I will describe how and why these two design principles became the cornerstone for building the M-Series modular servers.

Earlier today, UCS Manager 3.0(1) was posted on Cisco’s website.  This is a unique release of UCS Manager, designed specifically to bring UCS Manager to the new 6324 Fabric Interconnect (FI).

 

This is a platform-specific release of UCS Manager that runs only on the 6324 Fabric Interconnect; it does not run on existing UCS 6100 and 6200 Series Fabric Interconnects.  It has been designed to provide a UCS solution focused on remote office and branch sites, as well as on customers who need a limited deployment of servers.  Scaling is limited to one chassis (8 servers) plus up to 7 rack-mount servers connected to the 6324 FI’s unified and scalability ports.  Platform support is also limited, covering the new 5108 chassis, dual-voltage power supplies, B200 M3 blades with VIC adapters, and C220 M3 and C240 M3 rack-mount servers.  Please see the hardware compatibility list for the latest server and adapter support information.

 

UCS Manager 3.0(1) has been optimized for managing 6324 FI-based systems in data centers as well as in remote office and branch office locations.  To help support this, it has been tested over the equivalent of an entry-level consumer-grade DSL line: 1.5 Mbps, 300-500 ms latency, and temporary loss of connections.  These tests covered both remote administrators connecting directly to UCS Manager and remote administrators managing UCS Manager through UCS Central.  UCS Central is also where you can manage both UCS Manager 3.0(1) running on the 6324 Fabric Interconnect and UCS Manager 2.1(2a) or newer running on 6100 or 6200 Series Fabric Interconnects.

 

There are a number of additional new features in the UCS Manager 3.0(1) release. They include:

  • Support for the 6324 Fabric Interconnect and scalability port
  • Support for the Dual Line Power Supply Unit and 110V when used with the 6324 Fabric Interconnect
  • Staggered boot support and power capping, which are especially important when using 110V power supplies, since they may not provide enough power for a fully loaded chassis
  • Support for loading firmware via a local USB port instead of over the network

 

This release of UCS Manager is a platform-specific release and does have some important limitations and unsupported features.  It is based on UCS Manager 2.2(1x), not UCS Manager 2.2(2x); therefore, the new UCS Manager features, defect fixes, and server and adapter support introduced in May 2014 may not be included.  Other unsupported features include Ethernet Switching Mode, Fibre Channel End Host Mode, Private VLANs, Port Security, and KVM Virtualization.  As always, please see the UCS Manager Release Notes for additional details.

 

Along with UCS Manager 3.0(1), there will also be a UCS Central 1.2 release supporting the Cisco UCS 6324 Fabric Interconnect. Additional details on UCS Central 1.2 can be found in another blog post on Cisco Communities. As always, your feedback, comments, and enhancement ideas are always appreciated.

 

Jacob Van Ewyk

UCS Management product manager


Note:  This was posted by Eric Williams on behalf of Jacob Van Ewyk

UCS Central 1.2(1a) - Remote Management and Integration Improvements

 

While we just released UCS Central 1.1(2a) in March, we are now pleased to announce the availability of UCS Central 1.2(1a). It incorporates a number of incremental improvements to the UCS Central product, including better support for managing remote and branch offices, improvements to the API that make integration with 3rd-party tools and XML scripts easier, and a number of other enhancements.

 

First of all, UCS Central has been upgraded to better support the management of remote and branch offices. Basically, that means we test and support UCS Central when the connection to UCS Central is the equivalent of an entry-level consumer-grade DSL line. In addition, when used with UCS Manager 3.0(1) or later, file transfers are done over HTTP/HTTPS instead of via an NFS share. This provides greater resiliency and efficiency for file transfers while reducing the number of firewall ports that must be opened.

 

UCS Manager 3.0(1) is a platform specific release that supports only the new 6324 Fabric Interconnect. For customers with both standard 6100 or 6200 series Fabric Interconnects as well as the new 6324 Fabric Interconnect, UCS Central supports multiple versions of UCS Manager. This allows customers to easily set up and manage UCS systems in the data center as well as a remote office/branch office with the same tools, policies, and templates.

 

Prior to UCS Central 1.2, scripting and API calls to UCS Central required an in-depth knowledge of the architecture, which anyone who wanted to create API calls or integrate 3rd-party products needed to acquire. With UCS Central 1.2, there is now a single Virtual Management Information Tree (vMIT) that can accept any API call and automatically route the request to the proper location. This drastically simplifies the API and makes it easier for 3rd-party products to integrate directly with UCS Central.
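To make the idea concrete: with the single vMIT, one query issued against UCS Central can be answered with objects from every registered domain. The sketch below uses the UCS Central Python SDK (ucscsdk), which postdates this release but illustrates the pattern; the address and credentials are placeholders, and the property names are assumptions based on the common object model.

from ucscsdk.ucschandle import UcscHandle

# Placeholder address and credentials for a UCS Central instance.
handle = UcscHandle("ucscentral.example.com", "admin", "password")
handle.login()

# One class query against the vMIT; UCS Central routes it to every
# registered UCS domain and aggregates the results.
for blade in handle.query_classid("computeBlade"):
    print(blade.dn, blade.model, blade.oper_state)

handle.logout()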

 

There are also a number of operational and feature enhancements. These include a unified KVM launch manager, which allows users to select any valid KVM ID for launching (e.g., IPv4, IPv6, in-band). UCS Central 1.2 also supports Precision Boot Order Control, configuring Fabric Interconnect server and Ethernet uplink ports, and running Estimate Impact when a UCS Manager domain comes out of standby mode. Finally, the UCS Central user interface has been modified to show not just UCS Manager faults in the fault panel, but also to rotate the panel to show UCS Central faults and Pending Activities.

 

UCS Central is available for download from Cisco.com today. For new installations, it is easiest to download the UCS Central .OVA file. For existing UCS Central customers, the upgrade is very straightforward:

  • Download the UCS Central .ISO instance.
  • Reboot the UCS Central virtual machine and boot from the .ISO image.
  • Select Upgrade existing UCS Central installation.
  • Reboot the UCS Central virtual machine.

The whole process usually takes less than 5 minutes after the download has completed.

 

This release of UCS Central 1.2 has a number of platform support and feature enhancements. Your feedback, comments, and new feature requests would also be greatly appreciated.

 

Jacob Van Ewyk

UCS Management product manager

 

Note:  This was posted by Eric Williams on behalf of Jacob Van Ewyk

Cisco is pleased to announce Cisco IMC 2.0 is now available. 

 

Over the last five years, I have witnessed the impressive evolution of the management controller for C-Series. Cisco IMC has evolved from a basic white-box baseboard management controller into a state-of-the-art integrated management controller. Whether integrated into the UCS architecture or operated in a standalone environment, Cisco IMC 2.0 is second to none.

 

Cisco has taken an integrated, “no host agent” approach to UCS C-Series management with the Cisco IMC.  Advanced feature sets are integrated into the hardware, eliminating the need to manage host-agent software or complex licensing.

 

New Features in Cisco IMC 2.0

 

Security and Network Enhancements:

We are pleased to announce the release of UCS Manager 2.2(2c) codenamed “El Capitan MR1”. The release is now available for download on Cisco.com: release 2.2(2c).


Links to download this release are as follows:

  • Infrastructure software bundle: Click here to download
  • B-series and C-series software bundles for this release are available at the above link, under “Related Software”.

 

Relevant documentation for the UCS Manager 2.2(2c) release can be accessed as follows:

  

The UCS Manager 2.2(2) release enables support for new Intel Ivy Bridge server platforms and delivers key features and enhancements in the Fabric and Operational areas. The following is a complete overview of the new hardware platforms and software enhancements in UCS Manager 2.2(2):

 

Hardware Support:

  • B260-M4 & B460-M4 (Scalable 2S-EX & 4S-EX blade servers)
  • B420-M3 (4S-EP Ivy Bridge blade server refresh)
  • B22-M3 (2S-EN Ivy Bridge blade server refresh)
  • C460-M4 (4S-EX rack server)
  • C22-M3 & C24-M3 (2S-EN Ivy Bridge rack server refreshes)
  • High Voltage PSU (380v) Support

 

Note: New SAP HANA solutions are based on the M4 server platforms, B260-M4, B460-M4 and C460-M4.

 

 

Operational Enhancements:

  • Scriptable vMedia
    • Scriptable vMedia provides the ability to programmatically mount an image on a remote server directly to CIMC (without requiring KVM).
    • A new vMedia Policy allows the user to define the vMedia mapping and specify the remote NFS/CIFS/HTTP/HTTPS share. The vMedia mount can be referenced as the boot device in the Boot Policy.
    • Scriptable vMedia enables key use cases such as scripting OS driver updates, security updates, and “PXE-less” automated OS provisioning.
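As a rough illustration of the mechanics, here is a sketch that creates such a vMedia policy directly over the XML/API with Python. The cimcvmediaMountConfigPolicy and cimcvmediaConfigMountEntry class names come from the UCS object model, but the DN prefix and attribute names shown here are assumptions to verify against visore or the published schema before use.

import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

# Log in and capture the XML/API session cookie.
login = requests.post(UCSM, verify=False,
                      data='<aaaLogin inName="admin" inPassword="password"/>')
cookie = ET.fromstring(login.text).attrib["outCookie"]

# Create a vMedia policy with one HTTP-mounted ISO mapping. The DN prefix
# (mnt-cfg-policy-) and the mount-entry attribute names are assumptions.
config = f"""<configConfMo cookie="{cookie}" dn="org-root/mnt-cfg-policy-esxi-media" inHierarchical="true">
  <inConfig>
    <cimcvmediaMountConfigPolicy dn="org-root/mnt-cfg-policy-esxi-media" name="esxi-media">
      <cimcvmediaConfigMountEntry mappingName="esxi-iso" deviceType="cdd"
          mountProtocol="http" remoteIpAddress="192.0.2.10"
          imagePath="/images" imageFileName="esxi55.iso"/>
    </cimcvmediaMountConfigPolicy>
  </inConfig>
</configConfMo>"""
requests.post(UCSM, data=config, verify=False)

requests.post(UCSM, verify=False, data=f'<aaaLogout inCookie="{cookie}"/>')

The resulting policy can then be referenced as the boot device in a Boot Policy, per the bullets above.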

 

  • Pre-Upgrade Validation Checks
    • Provide pre-upgrade checks to enhance the firmware auto-install process and ensure a smooth and successful upgrade process
    • Prior to an upgrade, the complete UCSM domain will be audited and warnings will be shown based on the configuration state, in case of:
      • Outdated configuration backup
      • Any critical/major faults exist
      • Management interface monitoring policy is disabled
      • FI reboot is pending
      • NTP configuration is not available

    • Additionally, an error will be shown if the available bootflash space is less than 20%

    • A new “Backup Config Reminder” option is added to the Backup Policy

 

  • GPU Firmware Management
    • UCSM Firmware bundles now contain GPU firmware
    • UCSM Host Firmware Policies can now designate desired firmware versions for supported Nvidia GPU Adapters

 

  • Wear-Level Monitoring for Flash Adapters
    • UCSM Monitoring for the Flash status and remaining Flash Life of Fusion IO Flash Adapters

 

  • KVM/vMedia Client Enhancements
    • Support the next gen KVM/vMedia client on UCS servers for better server management experience
    • vMedia tab replaced with a menu option, allowing continuous access to the KVM console while creating a vMedia mapping, as well as caching of previous vMedia mappings for faster remapping.
    • Other enhancements include a Chat option across multiple users of KVM sessions, Video Scaling allowing the KVM window to be resized according to the client resolution, and Auto-Reconnect allowing the client to automatically re-establish the session in case of connectivity loss.

 

 

Fabric Enhancements:

  • PVLAN Enhancements
    • Enhances the current implementation of PVLAN, extending support to Community VLANs, promiscuous mode, and PVLAN trunking on the host side
    • Supports both physical and virtual environments, enhancing support and interoperability with ESXi and N1Kv

 

  • NetFlow Support
    • Enables high-scale flow record collection with minimal overhead on the FI by distributing NetFlow processing to the Cisco VIC adapters on the servers
    • Enhances the visibility required to detect traffic anomalies, ensure efficient bandwidth allocation, and improve capacity planning
    • UCS Manager creates and manages NetFlow policies, which can be applied to static and dynamic vNICs (VM-FEX) to capture VM-to-VM traffic within the same host

 

  • Cisco VIC Driver Enhancements – Adaptive Interrupt Coalescing (AIC)
    • Enables self-tuning of the CPU interrupt rate for packet processing by interrupting the host only once for multiple received packets, based on a defined coalescing timer
    • Improves the CPU efficiency and throughput, reducing the number of kernel context switches required to service the interrupts generated by the adapter
    • AIC relies on the eNIC driver to learn the nature of the traffic flowing through it and tune the interrupt coalescing timer to improve the CPU utilization without negatively impacting the traffic latency.

 

  • Cisco VIC Driver Enhancements – Accelerated Receive Flow Steering (ARFS)
    • ARFS is a hardware-assisted receive flow steering, with the goal of increasing data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
    • ARFS helps improve CPU efficiency and reduce traffic latency
    • ARFS is disabled by default and can be enabled from the Adapter Policy in UCSM

 

  • Cisco VIC Driver Enhancements – NetQueue Support
    • Enables support for VMware NetQueue on the Cisco VIC adapter
    • Allows a network adapter to dedicate a transmit and receive queue pair to a virtual machine NIC
    • Improves network throughput by distributing processing of network traffic for multiple VMs among multiple CPUs
    • Reduces CPU utilization by offloading receive packet filtering to the network adapter

In the past year that I’ve been working on UCS Central, I’ve seen the software take several big steps forward. Today, I’m happy to announce that UCS Central 1.1(2a) has been released and is available on Cisco.com.

 

In the past year, I’ve received a lot of feedback from customers using UCS Central, and much of that feedback is reflected in this release. The first thing that most customers asked about was importing policies from existing UCS Manager instances. UCS Central 1.1(2a) can import policies and service profile templates from UCS Manager version 2.2(1b) and newer, and it can find and browse policies and service profile templates read-only from the UCS Manager 2.1(2a) and 2.1(3a) releases. While this sounds like a simple thing to implement (you just need a search function to find the policy and then import the XML), our goal was to make sure that there were no unintended consequences, and performing an estimate impact on a service profile template and all of its dependent policies across multiple domains is a little more challenging. The team has delivered something very powerful, along with the guidance to use the feature safely.

 

Another UCS Central feature that I found many customers using was scheduled backups of UCS Central and the registered UCS Manager instances. We’ve enhanced this functionality to back up the files to UCS Central and then place a copy of each file on a remote file share, a key request from customers using this functionality.


Statistics collection and reporting in UCS Central 1.1(2a) has been greatly enhanced. UCS Central now supports Microsoft SQL as an external statistics database, in addition to PostgreSQL and Oracle. UCS Central has collected bandwidth, power, temperature, and fan speed statistics through UCS Manager since the last release, storing them for 14 days with the internal database or for a year or more with an external database. UCS Central 1.1(2a) now has built-in reports for power, temperature, and fan speed, in addition to the existing bandwidth reports. All of these reports can be viewed for an individual server, a chassis, or a domain, or compared across domains as appropriate. They will be of great help to customers trying to size and monitor their environments.

 

Also based on customer feedback, we’ve enhanced UCS Central to include remote actions such as server power on/off/reset, locator LED on/off, and chassis acknowledgement and decommission. ID allocation in UCS Central is now sequential, rather than the UCS default mechanism, which was difficult to predict. And, based on customer input around security, UCS Central 1.1(2a) can now work with 3rd-party certificates.

 

In addition to the many features added based on customer feedback, there are some other enhancements in this release of UCS Central. They include:

  • VLAN and VSAN localization
  • A separate tab for UCS Central policies
  • Authentication Domain selection

UCS Central 1.1(2a) supports all versions of UCS Manager from 2.1(2a) and later. However, some functionality, such as policy import, might only be available when UCS Central is working with domains that have newer versions of UCS Manager.

 

If you already have UCS Central up and running, the easiest way to upgrade is to download the ISO image from Cisco.com, reboot the UCS Central virtual machine, boot from the ISO, and run the upgrade option; UCS Central will be back up and running in about 5 to 10 minutes. If you are doing a new deployment of UCS Central, it might be easier to download the OVA file and import the UCS Central virtual machine.

 

For more information, you can take a look at the UCS Central web page or view the content here on Communities, including the UCS Tech Talk on UCS Central 1.1(2a). Let us know how you are using UCS Central in your environment and what features or enhancements you would like to see.

 

Jacob Van Ewyk

UCS Management product manager

Skyline-ATS has just announced a new course, “OpenStack Cloud Deployment on UCS.”

Description

The OpenStack Cloud Deployment on Cisco UCS (OCDCU) v1.0 is a three-day instructor-led training course designed for Systems and Field Engineers, Consulting Systems Engineers, Technical Solutions Architects, Integrators, and Partners who are responsible for the installation, configuration, and implementation of an OpenStack Cloud. This course covers the key components and procedures needed to install, configure, and deploy an OpenStack Cloud using Cisco Unified Computing System hardware. Students will perform hands-on lab exercises on Cisco UCS hardware, created to build the skills needed to install, configure, and deploy an OpenStack Cloud.

Objectives

Upon completing this course, the learner will be able to meet these overall objectives:

  • Describe the basic business advantages of an OpenStack Cloud
  • Describe the basic function of an OpenStack Cloud
  • Describe Cisco’s UCS Accelerator Paks for OpenStack deployments
  • Describe the services in OpenStack (Keystone, Nova, Glance, Cinder, Neutron, Swift, and Horizon)
  • Describe additional services of an OpenStack Cloud
  • Install the appropriate services for an OpenStack Cloud
  • Create virtual machines using OpenStack

Complete outline: http://www.skyline-ats.com/home/course-detail?ClassId=1377

For questions or enrollment, please contact David Darrough, Account Manager: ddarrough@skyline-ats.com, 408-340-8028.

Hello my fellow admins,
Just wanted to let you know that if you get the error message “Error loading /tools.t00 Fatal error: 10 (Out of resources)” when trying to install VMware’s ESXi 5.5 hypervisor on a Cisco UCS C240 M3 SFF rack server with one or more NVIDIA GRID GPUs in it, just follow these steps to fix it:

 

1. Enter the system BIOS (press F2)
2. Go to PCI Configuration > MMCFG
3. Change the value from Auto to 2 GB
4. Change the value of “Memory Mapped IO above 4 GB” to Enabled
5. Save and reboot the system
6. The installation will complete


 

Hopefully you can solve this issue yourself right away, unlike me, who needed a couple of days and help from my Cisco technical friends :)

Apparently, the error cannot be reproduced on the same system without GPUs, or on ESXi 5.1. Cisco told me that they’re working with VMware on this one, and hopefully they will come up with a more elegant way of getting these systems up and running with ease.

Good luck!

 

Victor

 

Asked by Bill Shields (Cisco) to post it in the Cisco Communities. Original post at Error loading /tools.t00 Fatal error: 10 (Out of resources) – Cisco UCS C240 M3 server with ESXi 5.5 | a blog for everyt…


“UCSQL”.  Rolls right off your tongue, doesn’t it?  It puts “UCS” in a relational context (i.e., SQL), which is exactly where it belongs.  The deeper you understand UCS Manager, the more this makes sense.  Want to get more utility out of, and understand more about, UCS Manager?  Great!  Here’s a new tool to help you.

 

For the past 5 years, I’ve made it my mission to help people better understand the crown jewels of the Cisco Unified Data Center solution: UCS Manager and the UCS Management Model.  Arguably one of the most valuable aspects is the inherent “programmability”, where all the server infrastructure and connectivity can be programmed using the XML/API.  I’ve been a strong advocate that customers take full advantage of this capability to automate workflows and operations as much as possible.  And as the author of the UCS Central Best Practice Guide, I wanted to encourage the same level of programmability for UCS Central.  For example: “How can I script global inventory collection across all domains?”, “How can I programmatically get a list of all the backup files, so that I can copy them offline?”, or “I want a script that shows me all service profiles in all domains, along with any associated blades”.

 

Both UCSM and UCS Central are “relational” engines.  But up until now, there has been no classic “relational” access method like SQL, a language many people already speak.  A traditional database engine creates data dictionaries that tell you the meta-data about the tables, such as the column names.  But UCSM and UCS Central aren’t classic database engines; instead, the meta-data is kept only in the code-generated schema files, which are produced when UCS Manager is built by Cisco Engineering (and then used as the input for creating UCS PowerTool, the UCS Python SDK, etc.).  You can see the meta-data dynamically through “visore” or through the UCS Platform Emulator, but you can’t see it in an interactive, scriptable manner.


The main UCS XML/API operation to “query” objects (“configResolveClass”) is opaque, meaning that without leveraging the schema files, the UCS Data Management Engine (DME) will not tell you the attribute names in advance; it will simply return all attributes and values:

<computeSystem
    address="192.168.40.131"
    availablePhysicalCnt="8"
    descr=""
    dn="compute/sys-1007"
    id="1007"
    name="AMS"
    operGroupDn="domaingroup-root/domaingroup-EUROPE"
    totalPhysicalCnt="8"/>
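For a sense of the mechanics that “ucsql” builds on, here is a minimal Python sketch that issues this kind of query over the XML/API. The /nuova endpoint and the aaaLogin/configResolveClass/aaaLogout methods are the documented XML/API basics; the host, credentials, and the choice of the computeBlade class (a UCS Manager example) are placeholders.

import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

# aaaLogin returns a session cookie that every subsequent call must carry.
login = requests.post(UCSM, verify=False,
                      data='<aaaLogin inName="admin" inPassword="password"/>')
cookie = ET.fromstring(login.text).attrib["outCookie"]

# configResolveClass returns every instance of a class with ALL of its
# attributes; there is no way to ask for the attribute names ("columns") first.
resp = requests.post(UCSM, verify=False,
                     data=f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false"/>')
for mo in ET.fromstring(resp.text).iter("computeBlade"):
    print(mo.attrib["dn"], mo.attrib.get("model"), mo.attrib.get("operState"))

requests.post(UCSM, verify=False, data=f'<aaaLogout inCookie="{cookie}"/>')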

 

Classic database engines have a “show” or “describe” operation to help you selectively choose which attributes to project in the “select” statement.  So then, for UCS, how do you answer these questions in a scriptable, programmatic manner:

  • What are *all* the Class Names?
  • What are the attributes for a given Class?
  • What are the current values of specific Class attributes?

I could not find an appropriate tool, so I started making one myself.  And given enough time flying to various UCS User Groups, I was able to construct a way to answer these questions and offer a more general, extensible framework: “UCSQL”, a Python-based scripting tool that accesses the UCS DME using SQL, translated over the XML/API.

 

Now I can explore the UCS schema with statements such as “show All”, “show lsServer”, “show vnicFc”, or “show orgDomainGroup”; see all the UCS backup file locations in UCS Central with “select * from configBackup”; pre-provision my SAN configuration with “select dn, addr from vnicFc”; or pre-provision my IP mapping with “select dn, addr from vnicEther”, along with several other examples and sample output.

 

And so can you, because “ucsql” has been released as a UCS Community Source Project.  It’s yours.  The source code is hosted on GitHub and can be downloaded here.  Any questions?  If so, just visit and post to the community at communities.cisco.com/ucs


A very interesting footnote here: “ucsql” can be used against UCS Manager (single domain), UCS Central (many domains), or the UCS C-Series XML/API [“What?!?”].  Yes, because all three share a common XML/API, and all three share a managed-object model that uses the same names for objects where possible.  If you want a quick report on firmware versions or faults, it’s just “select * from firmwareRunning” or “select severity, descr, cause, created, type, dn from faultInst”.  It works equally well when talking to UCS Central, UCS Manager, or UCS C-Series.  Really.

 

Sound interesting?  Well hopefully, it’s just the beginning.  There’s a lot more functionality that *could* be added, but it needs a development community.  That’s you.  Here’s a great opportunity for you to define, engage, and participate directly in developing a new UCS management access tool that you get to use to your own advantage.  And that’s the bottom line for “ucsql” and UCS in general: greater utility for you.