
Cisco Metacloud


Last week we had a great time at the upstream OpenStack Project Teams Gathering (PTG) in Atlanta, Georgia. I talked with a few of the Cisco folks who had the opportunity to participate, asking similar questions of everyone:

What did you work on at the PTG?

What surprised you about the PTG?

What are you looking forward to at the next PTG?

Rob Cresswell, Project Team Lead (PTL) for the OpenStack Dashboard (horizon), Cisco engineer

Mostly, I spoke with plugin authors about their concerns, and then did a ton of project management as a group. We worked on priorities and contributor availability, then did a load of blueprint review, so we have a really nicely organised list of targets for Pike. I was surprised by how relaxed it was! I don't know if it was because there were 5,000 fewer people, or because the agendas were more open, but it felt much more like a bunch of programmers solving problems than the tense, crowded environment of a summit. I'm looking forward to improvements in the tooling around scheduling; it wasn't very transparent or organised. This seems to be a common concern though, so I think it will be addressed. I'd also really like it to be outside the US, but that seems unlikely. : )


Anne Gentle, Technical Product Manager for Cisco Metacloud

For my part, I listened and facilitated at the documentation sessions, met with the interoperability team to figure out why my refstack tests weren't working, got them working (score!), figured out a development environment for the Dashboard, started a patch for the Dashboard, and answered any questions I could about the OpenStack API documentation efforts. I was surprised by the number of attendees who came from overseas; the global community was well-represented! I'm looking forward to more relaxed collaboration, as it was a distraction-free week.


Britt Houser, Cisco engineer

I split my time between Kolla and Ironic sessions. Between sessions I caught up with a lot of people (even from Cisco) that I don't get to see face-to-face except at OpenStack events. Making that personal connection is way more important than anything that goes on in the sessions.

I was surprised at how fast the mascot stickers disappeared! I'm looking forward to being able to put faces with IRC handles for people who haven’t yet joined the community.


Nicolas Simonds, Cisco Metacloud engineer

Ostensibly I was there to work on Nova, but I did find a small bug in Glance and submitted a patch while I was there. So the answer differs depending on whether you go by "intention" or "observed output." I was really surprised that the stated intention of "increasing/improving collaboration and communication between teams" was realized so effectively on the first go-round. It wasn't perfect, as everybody is still "finding their feet," but it was an auspicious start. To that end, I'm looking forward to seeing the refinements around cross-project collaboration, and the kinds of software that such collaboration produces.


If you haven't gotten the cheat sheet for all the project mascots, here you go! These were used for signage and stickers and were a lot of fun to see.


In my role, I have access to a Metacloud environment, which I manage as a user. This involves running demos, managing users, setting quotas, and provisioning the scarce resource of public/floating IPs! Luckily, I don't have to worry about the bare-metal OS, patching it, or any of the OpenStack complexities like upgrades, stability, and security; that is all handled as part of the managed service that Metacloud provides. At the end of the day, however, I am responsible for all the virtual machines running in the environment. This can be challenging at times, since as an admin I can't see what is actually running inside all of the VMs across all the tenants.


A little while ago, the Ops team (again, part of the managed service) alerted me to a large amount of traffic from one of my VMs, caused by a Docker vulnerability. After some investigation, we determined that the VM was exchanging a large amount of traffic with China. Naturally, I wanted to avoid a repeat, so I started looking at ways to prevent this from happening again.


Lo and behold, I stumbled upon OpenDNS, now Cisco Umbrella. (The free home version is still referred to as OpenDNS.) By configuring my VMs to use Cisco Umbrella to resolve DNS queries, I could add an additional level of security: if any of the VMs tried talking to a known malware, phishing, or ransomware site, the request would be blocked by Cisco Umbrella because the destination has been identified as malicious.


It turns out this was a piece of cake to configure in OpenStack. I simply set the default DNS servers for the virtual network to Cisco Umbrella instead of the previously used DNS servers. This way, whenever a VM is launched on the network, the VM's OS is configured to use these DNS servers.


From Admin >> Networks >> NETWORK-NAME, click Edit Subnet next to the subnet you want to configure.


Then, under Subnet Detail, add the DNS servers you want to use, each on a separate line (the Cisco Umbrella/OpenDNS resolvers are 208.67.222.222 and 208.67.220.220).
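If you prefer the command line over the Dashboard, the same change can be sketched with the neutron client (the subnet name here is hypothetical, and this assumes your Metacloud credentials are sourced):

```shell
# Hypothetical subnet name; 208.67.222.222 and 208.67.220.220 are the
# public OpenDNS/Cisco Umbrella resolvers.
neutron subnet-update my-subnet \
  --dns-nameservers list=true 208.67.222.222 208.67.220.220
```

Note that VMs already running typically pick up the new resolvers only when their DHCP lease renews or when they are relaunched.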


That was it from a configuration standpoint. Any VMs spun up on this network will now be configured with Cisco Umbrella. There are many benefits to this, like SmartCache (which keeps DNS working even during events like the massive DDoS attack we experienced a few months ago). I also wanted some security features for this network, so I took the additional step of adding my network to my Cisco Umbrella account.


From the Cisco Umbrella interface, under Identities >> Networks, I added my network (in this case the public IP range for my cloud), and shortly afterwards it was showing as active:




That's it from a configuration standpoint. I now had an additional layer of security!


After a short period of time, I noticed some activity in my dashboard:





I now had an additional layer of protection in my cloud and was immediately seeing some interesting things. Let's take a look at one of them!

In a single click I ran a security report and saw the following:




I'm not too sure what this controlyourself.online site is about, but it was marked as malicious by Cisco Umbrella. Specifically, a large number of calls were made around the same point in time (within a minute) to this suspicious domain, but luckily they were all blocked by Cisco Umbrella.


In a single click I drilled down into this domain in Investigate and saw the following:


This shows the requests made globally to this domain. OpenDNS / Cisco Umbrella has this visibility via the 95+ billion queries it resolves at its data centers around the world. What's interesting is that there were no requests to this domain until Jan 26th, when there was a spike. In fact, the domain came online on Jan 25th. After about four days, there were hardly any requests seen for this domain globally.


I can also see that there was a fairly high likelihood that this domain was generated via a DGA (Domain Generation Algorithm):



Luckily, the numerous requests my VM made were not resolved, and any further communication with these domains was prevented! I'm sure there are many other strategies for securing your cloud, but I found this one a piece of cake to get up and running. I'd be interested to know what other people have tried.

Recently at AWS re:Invent, Amazon announced the EC2 Systems Manager, a management service that helps you automatically collect software inventory, apply Windows OS patches, create system images, and configure Windows and Linux operating systems. If you operate in a hybrid cloud model with Cisco Metacloud in a private data center coupled with AWS services, you can now manage those instances together through AWS. To AWS, your instances in a Metacloud environment are called "managed instances" once they are under AWS management.


Adding your Metacloud instances to your AWS account is a pretty straightforward process. First, you install and configure the AWS CLI tool, then create an "activation token" that will be used to register your Metacloud instances with AWS, and finally register your instances. You can take this a step further by automating the instance registration in your OpenStack Orchestration templates.


Creating an Activation Token

Using the AWS CLI tool, you will need to create an activation token to register your Metacloud instances with AWS. First, we'll need to create an IAM role that allows your instances to communicate with AWS. Create a file called iam.json on your machine with the following data:



{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ssm.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}




You will then need to use the AWS CLI to create the role and attach the policy:


$ aws iam create-role --role-name SSMServiceRole \

--assume-role-policy-document file://iam.json


$ aws iam attach-role-policy --role-name SSMServiceRole \

--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM


Next, create the activation token using the IAM role you just created:


$ aws ssm create-activation --default-instance-name MyWebServers \

--iam-role SSMServiceRole --region us-east-1


In the response, you will receive an activation code and an ID, which you should keep secure. You will use these to register your instances.
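The response is JSON; per the AWS SSM CreateActivation API, the two fields are named ActivationId and ActivationCode. A sketch of pulling them out in a script (the values below are made up):

```shell
# Made-up values in the shape of a create-activation response.
response='{"ActivationId": "e04c8691-aaaa-bbbb-cccc-123456789012", "ActivationCode": "nBT2Example"}'

# Extract each field with Python's json module (avoids requiring jq).
activation_id=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["ActivationId"])')
activation_code=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["ActivationCode"])')
echo "$activation_id"
```

In a real run you would capture the output of `aws ssm create-activation` instead of the hard-coded string.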


Registering Metacloud Instances Manually

Log into your Metacloud instance via SSH or the web console. Run the following commands as appropriate for your operating system, replacing “region”, “ActivationCode”, and “ActivationId” where appropriate:


Amazon Linux, RHEL 6.x, and CentOS 6.x

mkdir /tmp/ssm

sudo curl https://amazon-ssm-region.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm

sudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm

sudo stop amazon-ssm-agent

sudo amazon-ssm-agent -register -code "ActivationCode" -id "ActivationId" -region "region"

sudo start amazon-ssm-agent


RHEL 7.x and CentOS 7.x

mkdir /tmp/ssm

sudo curl https://amazon-ssm-region.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm

sudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm

sudo systemctl stop amazon-ssm-agent

sudo amazon-ssm-agent -register -code "ActivationCode" -id "ActivationId" -region "region"

sudo systemctl start amazon-ssm-agent



Ubuntu

mkdir /tmp/ssm

sudo curl https://amazon-ssm-region.s3.amazonaws.com/latest/debian_amd64/amazon-ssm-agent.deb -o /tmp/ssm/amazon-ssm-agent.deb

sudo dpkg -i /tmp/ssm/amazon-ssm-agent.deb

sudo stop amazon-ssm-agent

sudo amazon-ssm-agent -register -code "ActivationCode" -id "ActivationId" -region "region"

sudo start amazon-ssm-agent


Automating Registration with Orchestration

Great, you’ve manually registered your existing Metacloud instances, but what about instances that you create later? You can fold registration into the cloud-init block in an OpenStack Orchestration template. Take a typical compute resource block, and add the necessary steps:



resources:
  server:
    type: OS::Nova::Server
    properties:
      name: { get_param: hostname }
      image: { get_param: instance_image }
      flavor: { get_param: instance_flavor }
      key_name: { get_param: ssh_key }
      networks:
        - port: { get_resource: port }
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            apt-get update
            apt-get -y upgrade
            apt-get install -y curl
            curl -s https://amazon-ssm-%amazon_region%.s3.amazonaws.com/latest/debian_amd64/amazon-ssm-agent.deb -o /tmp/amazon-ssm-agent.deb
            dpkg -i /tmp/amazon-ssm-agent.deb
            stop amazon-ssm-agent
            amazon-ssm-agent -register -code "%amazon_code%" -id "%amazon_id%" -region "%amazon_region%"
            start amazon-ssm-agent
          params:
            "%amazon_code%": { get_param: amazon_code }
            "%amazon_id%": { get_param: amazon_id }
            "%amazon_region%": { get_param: amazon_region }


As you can see here, Amazon-specific variables are parameters provided at run time, so you aren’t storing them in version control.
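Heat's str_replace intrinsic is essentially token substitution: each key in the params map is replaced by its value wherever it appears in the template string. A rough local sketch of that behavior, with example values:

```shell
# Simplified stand-in for str_replace using sed; the substituted values
# here are examples only.
template='amazon-ssm-agent -register -code "%amazon_code%" -id "%amazon_id%" -region "%amazon_region%"'
rendered=$(printf '%s' "$template" \
  | sed -e 's/%amazon_code%/EXAMPLECODE/g' \
        -e 's/%amazon_id%/example-id/g' \
        -e 's/%amazon_region%/us-east-1/g')
echo "$rendered"
```

At stack-create time, Heat performs this substitution with the parameter values you pass in, so the rendered script lands on the instance ready to run.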

With that, your Cisco Metacloud instances can be managed through the AWS console and monitored with AWS Config, and you can even run commands on them using EC2 Run Command.


Just like waiting on a cake to bake to perfection, I got a little impatient sometimes for the OpenStack client to have enough features to switch over all the documentation. Well, I'm here to say, that cake is baked and ready to frost.

The switchover is fairly painless. You can keep the same credentials script, whether created by you or downloaded from the Dashboard. And instead of having to remember the client name for every OpenStack service, like nova, glance, and keystone, you remember one command: openstack.

And when you're wondering what to put for most commands, the conversion pattern has been to use a space instead of a dash. For example, instead of nova service-list, use openstack service list.


The key verbs to learn are: create, list, show, and set; the object and verb are separated by a space rather than joined with a hyphen.

The object names are standardized nicely: server, volume, flavor, image, and so on. It's fairly easy to remember openstack server list, openstack volume list, openstack image list, and openstack flavor list.


So, instead of reaching for the "nova boot" command, use the openstack server create command.

Another pro-tip for some of the parameters passed in: change the underscore (_) to a hyphen (-). For example:

openstack server create --availability-zone zone:HOST,NODE

Note how the command uses --availability-zone instead of --availability_zone.
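The conversion is mechanical enough that you can sanity-check a flag yourself; a trivial sketch:

```shell
# Legacy project clients often used underscores in flag names; the unified
# client uses hyphens throughout.
legacy="--availability_zone"
unified=$(printf '%s' "$legacy" | tr '_' '-')
echo "$unified"
```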

Also look for singular rather than plural, such as --security-groups becoming --security-group in the openstack server create command.

It's also great to see the use of "property" consistently rather than "meta" or "metadata", since different project CLIs had varied interpretations of metadata. For example, you can add a description for your server by providing the --property description="Dev Server" parameter.

Another nice feature is that you can always fall back on the old commands if you simply cannot get your fingers to stop typing "nova list" as soon as you source your credentials. The python-novaclient remains an underlying dependency of the python-openstackclient pip install.

You'll want to upgrade to the latest version of python-openstackclient. Today I was able to get a copy of 3.3.0. To find out what version you have, run: pip freeze | grep python-openstackclient. To upgrade in a virtual environment, run: pip install -U python-openstackclient.


The upstream docs are converting over now, substituting project client commands with openstack client commands and testing the parameters and arguments. More than forty changes have been merged already, and we are counting down to that frosted cake slice.


For Metacloud, we have documented how to install the python-openstackclient on Windows and on Mac OSX. You can also learn about providing your Metacloud credentials to use the CLI with ease.

The OpenStack Orchestration service, code named "Heat", is used to deploy all components of your application infrastructure in a managed way. Infrastructure templates can be integrated with configuration management systems to provide end-to-end deployment management. OpenStack Orchestration templates can be in one of two formats:

CloudFormation (JSON) format to maintain AWS compatibility

HOT (YAML): Heat Orchestration Template format for OpenStack native deployments


You may want to use the CloudFormation format if you are using OpenStack and AWS in a hybrid cloud scenario. This post focuses on the latter, the HOT format. These Orchestration service templates have four sections:


  • Meta
  • Parameters
  • Resources
  • Outputs


While this post provides an overview, much more detailed information on template structure is available in the OpenStack documentation.



The first section of the template declares the supported Orchestration version and contains a description of what the template deploys.



The value of heat_template_version tells the Orchestration service not only the format of the template but also which features will be validated and supported. Beginning with the Newton release, the version can be either the date of the release or its code name. Currently, the following values are supported for the heat_template_version key:





  • 2015-10-15 <- Liberty release, the latest version supported by Cisco Metacloud
  • 2016-10-14 or "newton"


The heat_template_version is the first line of the template and looks like this:


heat_template_version: 2015-10-15



The description key can be a one-sentence description or a long-form description of what the template deploys. A description is not required, but including one is considered best practice for documentation purposes. Here are examples of the description key.


Short form:

description: Deploys a three-tier web application.

Long form:

description: >

  This is how you can provide a longer description

  of your template that goes over several lines.



Parameters are considered inputs for an Orchestration template and allow users to customize templates at deploy time. For example, this allows for providing custom key-pair names or image IDs to be used for a deployment. From a template author’s perspective, this helps to make a template more easily reusable by avoiding hardcoded assumptions.




parameters:
  key_name:
    type: string
    label: Key Name
    description: Name of key-pair to be used for compute instance
  image_id:
    type: string
    label: Image ID
    description: Image to be used for compute instance
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used


Template authors can also specify that a parameter value remain hidden when users request information about a stack deployed from a template. This is achieved with the hidden attribute and is useful, for example, when requesting passwords as user input:




parameters:
  database_password:
    type: string
    label: Database Password
    description: Password to be used for database
    hidden: true


Finally, template authors can set default values for parameters, or constraints on user input, for each parameter:


Setting a default




parameters:
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
    default: m1.small


Setting constraints




parameters:
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
    constraints:
      - allowed_values: [ m1.medium, m1.large, m1.xlarge ]
        description: Value must be one of m1.medium, m1.large or m1.xlarge.
  database_password:
    type: string
    label: Database Password
    description: Password to be used for database
    hidden: true
    constraints:
      - length: { min: 6, max: 8 }
        description: Password length must be between 6 and 8 characters.
      - allowed_pattern: "[a-zA-Z0-9]+"
        description: Password must consist of characters and numbers only.
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: Password must start with an uppercase character.
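To see how the three password constraints combine, here is a small local check of a candidate password. This is a sketch only; Heat evaluates allowed_pattern against the entire value, which the regex anchors below emulate:

```shell
# Returns success only if the password satisfies all three template constraints.
valid_password() {
  pw="$1"
  [ "${#pw}" -ge 6 ] && [ "${#pw}" -le 8 ] || return 1       # length 6-8
  printf '%s' "$pw" | grep -Eq '^[a-zA-Z0-9]+$' || return 1  # alphanumeric only
  printf '%s' "$pw" | grep -Eq '^[A-Z]+[a-zA-Z0-9]*$'        # starts uppercase
}

valid_password "Passw0rd" && echo "accepted"
valid_password "passw0rd" || echo "rejected"
```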



This section contains the declarations of the individual resources of the template. At least one resource should be defined in any template; otherwise the template would not really do anything. A "resource" represents a single component deployed in OpenStack, for example: compute instances, networks, volumes, key-pairs, and more. The example below demonstrates the definition of a simple compute resource with some fixed property values:




resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      flavor: m1.small
      image: ubuntu-14.04

The list of resources and their properties is very extensive, and too long to include here. For a full list of supported resources and their properties, refer to the OpenStack documentation.



The outputs section defines output parameters that should be available to the user after a stack has been created. This would be, for example, parameters such as IP addresses of deployed instances, or URLs of web applications deployed as part of a stack. Outputs can be queried using the python-heatclient CLI tool.




outputs:
  instance_ip:
    description: IP address of the deployed compute instance
    value: { get_attr: [my_instance, first_address] }


You can show the output above using the CLI tool:


$ heat output-show <STACKNAME> instance_ip
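On newer clients, the equivalent unified command (assuming the Orchestration plugin for python-openstackclient is installed) would be:

```shell
$ openstack stack output show <STACKNAME> instance_ip
```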


Understanding Heat template structure is key to becoming successful with an OpenStack cloud, allowing you to automate your infrastructure and software deployments in an orchestrated way.

Getting Started with Configuration Management Deployment

When you think like a systems engineer, you want to be able to repeat the deployment of your infrastructure over and over again throughout its lifecycle. You want to improve systems and fix systems when they break. Configuration management takes care of consistent configurations, versions, and updates across software, hardware, and networks. You can use configuration management tools to manage the ever growing list of servers, storage, and network configurations across your organization. When deploying applications, you want consistency and reliability as well as collaboration and trust across teams to enact the desired configuration, automatically.


Why or When Would You Use Configuration Management?

Use configuration management when your teams already know one of these technologies and you have a base set of configurations you know work well. For example, when you have a solid understanding of Puppet or Chef and an existing library of testing solutions. In a way, if you have already set up authentication for these config management systems, integration points to your cloud appear to be “free” or low-cost.


Or, maybe you want your teams to learn one of these configuration management tools and the techniques that go along with configuration management.


Would this approach give a better audit trail? Yes, absolutely. Any Orchestration templates and configuration management templates kept in version control provide an audit trail for changes, differences, and a history of configuration.


In our experience, a configuration management tool such as Ansible offers a more modular approach than managing and maintaining large Orchestration templates. While there are ways to modularize Orchestration templates, Ansible has this modularity built in, as its playbook examples show.


Also, notice that configuration management platforms are moving toward a platform-like experience. For example, see Chef Automate, Puppet MCollective, or Ansible Tower.


To start this learning lab, we'll go through an Ansible example.


An Overview of Ansible

To deploy to Metacloud using Ansible, you write configuration files in YAML to create playbooks, which contain plays much like the ones a coach draws up for a team. By combining units of organization called roles, such as for a group of hosts, you apply the configuration you want to the servers you manage. In this context, hosts are remote machines managed by Ansible. Roles can apply certain variable values, certain tasks, and certain handlers, which are regular tasks that only run when notified by, say, a configuration file change.


You describe the infrastructure for the application and connect computing, storage, networks, and databases if needed. You deploy the hosts and can even do rolling updates, or add and remove nodes from a cluster with a few Ansible commands.


Note: This process isn’t about installing Metacloud or OpenStack services with Ansible; this process describes using Ansible to deploy applications using cloud resources.


How Ansible Meets DevOps Needs

You can certainly tick off many checklist items for a 12-factor application with Ansible:


Codebase - YAML files describing the deployment can be tracked. Teams often use GitHub or similar version control systems to track changes in the codebase of playbooks.


Dependencies - You can use with_items groups to both declare and isolate dependencies.


Backing services - Configurations for groups of hosts give you the ability to set up and tear down web servers, database servers, load balancers, all separately from each other.


Concurrency - You can create asynchronous tasks so that longer-running tasks can keep going while Ansible looks for smaller tasks to complete at the same time. Many Ansible modules are written so that you specify the desired final state, allowing multiple executions without changing the initial application. Inspection is valued over change.


Development/Production parity - Ansible and Vagrant combined give developers a way to replicate the deployment locally so your development and production environments are at parity.


Our overall process for this deployment is to launch a VM with public network access, then install Ansible and run a playbook using cloud-init. The cloud-init startup script makes sure that VM downloads and launches an Ansible playbook by cloning a repository from GitHub.


These commands require you to install the python-openstackclient and have access to Metacloud credentials in your local environment. Refer to Installing OpenStack CLI Tools on Mac OSX, Installing OpenStack CLI Tools on Windows, and Providing Metacloud Credentials to CLI Tools.


  1. Get the name of the image you want to use for launching the VM. This example uses an Ubuntu 14.04 image.
    $ openstack image list
  2. Get the name of the flavor you want to use. This example uses the m1.small flavor.
    $ openstack flavor list
  3. Discover the security groups available. To make sure that web traffic works for the Jekyll server, ensure that port 80 is available. Refer to Configuring Access and Security for Instances.
    $ openstack security group list
  4. Discover the networks available for launching on. If you need to create networks, refer to docs.metacloud.com.
    $ openstack network list
  5. Create a cloud-init-ansible.txt file that transfers information and commands to the VM, installing Ansible and getting a copy of the playbook to run on the cloud instance.
       #cloud-config
       apt_sources:
        - source: "ppa:ansible/ansible"
       apt_update: true
       packages:
        - software-properties-common
        - git
        - wget
        - unzip
        - python2.7
        - ansible
       runcmd:
        - mkdir -p /etc/ansible/roles
        - echo "[jekyll]\n127.0.0.1 ansible_connection=local" >> /etc/ansible/hosts
        - git clone https://github.com/ciscodevnet/metacloud-ansible-jekyll.git -b cloud /etc/ansible/roles/ansible-jekyll
        - ansible --version
        - ansible-galaxy install rvm_io.rvm1-ruby
        - ansible-playbook /etc/ansible/roles/ansible-jekyll/site.yml
       output : { all : '| tee -a /var/log/cloud-init-output.log' }
  6. Launch the server, using the security group and the keypair you want inserted:
       $ openstack server create \
        --image "Ubuntu-14.04" \
        --flavor m1.small \
        --security-group jekyll \
        --key-name pub-key \
        --nic net-id=private-network \
        --user-data cloud-init-ansible.txt \
        setup-ansible
  7. If you need to add a floating IP address from your quota, use these commands to create a new floating IP in the pool:
       $ openstack ip floating pool list
       | Name                |
       | public-floating-60  |
       $ openstack ip floating create public-floating-60
       | Field       | Value                                |
       | fixed_ip    | None                                 |
       | id          | 93d429ae-e213-4f37-a609-da0ca48da8ef |
       | instance_id | None                                 |
       | ip          |                       |
       | pool        | public-floating-60                   |
  8. Next, list and associate an available floating IP address to this server so that it has external network access:
       $ openstack ip floating list
       | ID                                   | Floating IP Address | Fixed IP Address | Port                                 |
       | 82853365-1030-4c33-9f28-60ee4c52b46d |       |      | 339c2026-91e5-4945-8bf9-00c1601b46f1 |
       | 89d4a20c-3132-430c-8035-8d6c7ca45361 |        | None             | None                                 |
  9. Choose a floating IP address that does not have a fixed IP address already assigned to it (marked None in the example), and add it to your server.
    $ openstack ip floating add FLOATING_IP setup-ansible
  10. Now you can connect to the launched server and take a look around to see how to continue with Ansible playbooks.
    $ ssh -vv cloud-user@FLOATING_IP
  11. Once you're connected to the VM through SSH, find out the Ansible version that was installed through the cloud-init file. The console log should also have this in it, since the cloud-init file has ansible --version as a command.
       $ ansible --version
       ansible 2.1.0


Note: The username you use, such as cloud-user in the example, depends on the way the images are set up. Sometimes images use cloud as the operating system user. You can adjust your playbook accordingly, and check out different branches of the ansible-playbook repository to change to a different OS user name in the group_vars/jekyll.yml file.


Set up a git remote to push to the Ruby and Jekyll environment

To deploy a Jekyll site to your new server, run these commands in the git clone of a repo with a Jekyll site. An example is at https://github.com/annegentle/summit-example.

$ git push staging master 

For example:

$ git push jekyll-staging master 

Now, when you point your browser to the floating IP address, you see the deployed master branch of the summit-example site.


Figure 1: Deployed Jekyll site in browser



If you get a scheduler error, it’s possible you’re using the wrong network ID for --nic net-id=<value>. It’s also possible you have chosen a flavor that can’t be launched on the availability zone for your credentials.

From within the VM, you can get log information about the cloud-init configuration from /var/log/cloud-init.log and /var/log/cloud-init-output.log.


To troubleshoot the SSH connection further, use -vvv for the highest verbosity when making the ssh call. If you can't SSH to the instance, make sure the security group you chose when launching allows SSH ingress traffic. Also ensure you have outbound access to the SSH port selected (typically 22).


Since you’re using a public/private keypair to access the instance, make sure the paired public key did get copied to the virtual machine at launch time, otherwise the mismatch prevents you from connecting with the key.

Well, it's that time that comes around twice a year when the OpenStack Summit is at the top of all our minds! Many people have been asking about Metapod and the OpenStack Summit in Barcelona. We will be there!


I wanted to provide a quick summary of some of the topics that the Metapod team has submitted for talks. Feel free to vote for the ones you find interesting!


How do you vote?

First go here: https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/presentation/15913

Then search for either the first part of the session name or the presenter name.


  • Bite off More Than You Can Chew, Then Chew It: OpenStack Consumption Models (Jonathan Kelly, Walter Bentley, Tyler Britten)
  • BrokenStack: OpenStack Failure Stories (Jonathan Kelly)
  • DevOps on Cisco Metapod OpenStack Cloud (Srinivas Tadepalli, Prateek Tripathi, Chris Riviere)
  • Driving Science in the Hybrid Cloud with Ansible (Steven Carter, Jason Grimm, John Lothian)
  • HPC on OpenStack - real world deployments and use cases (Steven Carter, Jason Grimm, John Lothian)
  • Hybrid HPC on OpenStack – Virtualization, bare metal, containers and micro-services (Steven Carter, Jason Grimm, John Lothian)
  • Maat - Adaptive System Configuration and Recovery with Predicates (Noel Burton-Krahn)
  • Practical Neutron: A Hands-On Lab (Vallard Benincosa, Chris Riviere)

Mantl on Metapod

Posted by vbeninco May 10, 2016

How to install Mantl on Metapod!

I thought it might be a good idea to document the process for creating a jumphost so I can more quickly download an image and upload it to Metapod in the correct format. A jumphost also spares you from juggling various versions of the CLI tools on your own machine, and it’s handy to have a host with the CLI tools already configured.


QCOW2 is quite a popular format for KVM. With Metapod, depending on the backend storage configuration, I may also need to convert the image to RAW.

  • Create an X-large Ubuntu VM and assign a floating IP.
  • Log in to the VM
    • ssh -v -i chris-trial5-metapod.pem cloud@public_ip
  • sudo apt-get update
  • Install the CLI tools (from: https://support.metacloud.com/entries/100998896-Installing-Command-line-clients)
    • sudo apt-get install python-dev python-pip
    • sudo pip install python-novaclient
    • sudo pip install python-glanceclient
    • sudo pip install python-cinderclient
    • sudo pip install python-keystoneclient
  • Install qemu so we can convert the image:
    • sudo apt-get install qemu
  • Copy the contents of environment.sh (downloaded from Horizon >> Access & Security >> API Access) into a new file in the terminal (alternately: scp -v -i chris-trial5-metacloud.pem keyfile.sh cloud@public_ip: )
  • Convert the image to the desired format:
    • qemu-img convert -f qcow2 -O raw originalImageName.qcow2 newImageName.img
  • Upload image:
    • glance --os-image-api-version 1 image-create --progress --name "Leostream" --is-public True --disk-format raw --container-format bare --file NewImageName.img

We have found it can be a bit tricky to get the OpenStack CLI tools working correctly with Metapod since the API endpoints were secured. They are very easy to install on Ubuntu but a bit more difficult on OS X, so I thought it might be a good idea to document this and share it with others. Credit to the Metapod Ops team for helping out with this.

  1. Install Xcode from AppStore
  2. From the Terminal, run:
    xcode-select --install
  3. Install Brew (essentially a package manager that allows for easy installation of other tools).
    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

  4. Use brew to install python
    brew install python
  5. Install qemu - very useful for converting images from one format to another; optional, but it comes in handy when working with OpenStack
    brew install qemu
  6. Use pip to install virtualenv - this lets the packages we install next be self-contained within a virtual environment, so we don't risk touching any of OS X's core files
    pip install virtualenv
  7. Create your virtual environment. In this example, I will call my environment "openstack" and run the following command from my home directory.
    virtualenv -p /usr/local/bin/python --no-site-packages openstack
  8. Activate the virtual environment you just created
    . openstack/bin/activate
  9. Now we can finally install the OpenStack CLI tools!
    pip install python-novaclient
    pip install python-glanceclient
    pip install python-cinderclient==1.0.9
    pip install python-keystoneclient
    pip install python-neutronclient
  10. Do a test by first connecting to the environment. Download the OpenStack RC file from the Access & Security >> API Access page.
  11. Source the RC file (it will prompt for your password)
    . rc_filename.sh
  12. Run a test command. For instance, nova list will show all the VMs in our tenant. You should see output similar to the following.
    nova list

(openstack) CRIVIERE-M-K0ET:piston criviere$ nova list
| ID                                   | Name             | Status | Task State | Power State | Networks        |
| 95099acb-623a-46ac-b5ea-0e4ef312068b | leostream-broker | ACTIVE | -          | Running     | trial5-shared=, |
| 6ffd5640-0421-451c-81f2-4278441632e4 | oneops           | ACTIVE | -          | Running     | trial5-shared=, |
| ef177087-8090-41b5-a3d3-d34ccb3b2fd5 | workplease       | ACTIVE | -          | Running     | trial5-shared=, |
(openstack) CRIVIERE-M-K0ET:piston criviere$

In our highly fluid demo environment we learned that when you delete a project without deleting the floating IP address associated with a now-defunct instance, that floating IP address goes into no-man's-land.  Hopefully this will be changed at some point so those addresses are released automatically.  But for now, it isn't.  Since we're a bit crunched for public floating IP addresses in our environment, reclaiming these orphans is not just a noble cause, but also an imperative!

Bill Harper on my team shared a few thoughts on how to do this and I added my own to his instructions.  We came up with a way to find these lost souls.  It would be easy to script this, but in our usual slothfulness, we're just putting the manual instructions here so you don't need to trust everything you download from the internet... even if it is a Cisco site!


Step 0: Set environment to execute commands as an administrator!


These steps can only be done if you are an administrator as other accounts don't have this privilege.


Step 1 - List all the floating IP addresses

Using the floating-ip-bulk-list command we can see all of the floating IP addresses in the system.  Some of these may be assigned, others may not be.


$ nova floating-ip-bulk-list
| project_id                       | address      | instance_uuid                        | pool  | interface |
| 04afc038c08e4f28ac2faa580ca529b7 | | 2bd8cffd-64b0-408c-b4fc-ac069ebee2d4 | demo1 | vlan200   |
| 230536d448c04fdabdd1afb4db0589a8 | | -                                    | demo1 | vlan200   |
| cfb44ad164404a9098834aa726b45818 | | 8aee793d-1ee1-4a79-b4a4-fe8e2b6419cf | demo1 | vlan200   |


This is our first place to start finding the lost IPs!

Step 2 - Run a command looking for IP addresses attached to projects that no longer exist in Keystone.


$ for x in `nova floating-ip-bulk-list | grep -v project_id | grep -v +- | awk '{print $2}' | sort | uniq`; do keystone tenant-get ${x} 2>&1 | grep exists; done

This handy set of pipes checks each project_id that holds a floating IP address against Keystone and shows us which tenants no longer exist.  You'll get output something like:

No tenant with a name or ID of '3c233bdffd1c420091b84d565d48660a' exists.

So now we just need to find all the IP addresses in this project/tenant!


Step 3 - Pinpoint the invalid project

nova floating-ip-bulk-list | grep 3c233bdffd1c420091b84d565d48660a 

This will give us a list of all the IP addresses that were once part of this once great, now defunct project, and that we can reclaim. Pick one of the addresses from the list, and let's get it back.


Step 4.  Delete the address in question:

First, delete the IP address.

nova floating-ip-bulk-delete <ip address>


Step 5.  Now add the address back into the pool:


Next, we add it back in:

nova floating-ip-bulk-create --pool demo1 <ip address>

Notice that I used the --pool demo1 flag to put this back in the demo1 group.  You will probably have another pool that you saw when you ran the command in step 1.  You probably will want to put it back into that same pool!


Repeat steps 4 & 5 for the other IP addresses you found in step 3, and they are as good as new.
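As noted above, this would be easy to script. A rough sketch, taking the defunct project ID and pool name from the examples above as arguments:

```shell
# Reclaim every floating IP still assigned to a deleted project.
# Usage: reclaim_floating_ips <dead_project_id> <pool>
reclaim_floating_ips() {
  local dead_project=$1 pool=$2
  # Field 4 of the bulk-list table output is the address column
  for ip in $(nova floating-ip-bulk-list | grep "${dead_project}" | awk '{print $4}'); do
    echo "Reclaiming ${ip}..."
    nova floating-ip-bulk-delete "${ip}"                   # step 4: remove the orphan
    nova floating-ip-bulk-create --pool "${pool}" "${ip}"  # step 5: add it back to the pool
  done
}
```

You would run it as reclaim_floating_ips 3c233bdffd1c420091b84d565d48660a demo1 once you've confirmed the project really is gone.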

OpenStack has flexible partitioning capabilities that allow cloud administrators to subdivide their environments into logical groups that can denote commonalities about the resources belonging to that group. For example, one group can be for compute nodes that share a common configuration, like fast disks. Another group can be for compute nodes that share a common power source and therefore, are part of the same fault domain. The two partitioning capabilities I’ll be discussing are Host Aggregates and Availability Zones. Administrators can optionally configure one, or both simultaneously; it’s purely up to them and what choices they want to offer their users. There’s often confusion about these two features, and in this post, I hope to provide clarity on when and how they can be used.


Both of these mechanisms are ultimately used to influence “where” (on which compute nodes) new instances (VMs) are launched. Cloud administrators define the actual Host Aggregate and Availability Zone “groupings” and then in turn expose those groups to the cloud users. Cloud users can select the groups to use based on where they want their instances launched.


Some have said to me: “I thought the cloud was supposed to take care of the placement of all instances. Cloud was supposed to make it so the users don’t need to care about placement.”


It’s true to a certain degree, but users typically want this type of additional flexibility and choice because it helps them in architecting resiliency, efficiency, and performance into their applications. In order to effectively do it, they need to have some basic knowledge of the layout of the cloud. For example, they need to be able to select the appropriate bucket of compute nodes with the right characteristics to run their workload, without having to be concerned about selecting a specific compute node. That would be too granular and more difficult to manage. They need a framework that gives them the right amount of choice and control. Used correctly, Host Aggregates and Availability Zones can give users the exact amount of information they need and the means for coarse-grained placement.


Whenever possible, consideration should be given to these features in the planning phase of your OpenStack deployment, but there’s no restriction in enabling them after the fact.


The first feature we’ll discuss is Host Aggregates. Typically, Host Aggregates allow a cloud administrator to define groups of compute nodes based on their hardware configuration. This could provide users the choice of launching instances on specific hardware, like security-optimized compute nodes, memory-optimized compute nodes, or storage-optimized compute nodes. (This is only one example. Keep in mind that OpenStack provides flexibility for cloud administrators to define groupings any way they like.)


Unlike Availability Zones (which we’ll talk about later), Host Aggregate groups are not directly selectable by cloud users, because they are not directly visible in either the dashboard or the API. Host Aggregates are obscured since they are actually associated with an instance flavor definition. Metadata is assigned to a flavor via extra_specs definitions, thus confining which group (Host Aggregate) of compute nodes the flavor (and subsequently, its instances) can run on. Recall that cloud users do not have the authority to create, modify, or delete flavors, so flavor manipulation is reserved for administrators only. Cloud users can view the extra_specs associated with a flavor by using the CLI command “nova flavor-show <flavor>”.


Host Aggregates Example

Let’s say I have a single OpenStack cluster with one availability zone defined (one AZ because the compute nodes are all in a single rack). There are seven compute nodes in the cluster, but three different hardware classes. Two of the compute nodes are configured as security-optimized servers, two are memory-optimized servers, and the remaining three compute nodes are storage-optimized servers (with fast SSDs).


I want to offer these seven compute nodes to my cloud users in a way that differentiates them based on compute node hardware class using Host Aggregates.


Here are the high level steps in creating Host Aggregates for your environment:


  1. Create the Host Aggregate name and details
    1. Do not specify an AZ (leave blank)
    2. Create and assign a key value pair to be used
  2. Associate the appropriate compute nodes to this Host Aggregate
  3. Create a new flavor definition that contains the appropriate key value pair in the extra specs
  4. Do this for each class of server you are creating


Here are seven compute nodes in a single Availability Zone:




mhv1 and mhv2 are the security-optimized compute nodes, mhv3 and mhv4 are the memory-optimized compute nodes, and finally mhv5, mhv6, and mhv7 are storage-optimized compute nodes. We’ll create three Host Aggregates. Here is the desired Host Aggregate layout:




Get a listing of the compute node names, then create the host aggregates using the “nova aggregate-create,” “nova aggregate-set-metadata,” and “nova aggregate-add-host” commands shown below:
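A sketch of those commands for AGG1 follows; the metadata key name hwclass is an illustrative assumption (any key/value pair works, as long as the flavor's extra_specs match it):

```shell
# List the compute nodes so we know the host names
nova host-list

# Create the aggregate; note that no availability zone is specified
nova aggregate-create AGG1

# Tag the aggregate with a key/value pair for flavors to match against
nova aggregate-set-metadata AGG1 hwclass=security

# Add the two security-optimized compute nodes
nova aggregate-add-host AGG1 mhv1
nova aggregate-add-host AGG1 mhv2
```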




Host Aggregate “AGG1” has been successfully created.


I will continue with the creation of two additional Host Aggregates. “AGG2” and “AGG3”:








Here is the view from the Horizon dashboard. You can see the Aggregate Name, Metadata, and Hosts:




The next action is to associate these three Host Aggregates with flavors. Here we create the three new flavors (m1.small.sec, m1.small.mem, m1.small.ssd), then associate the appropriate “extra_spec” key/value pair to each (of course OpenStack is flexible enough to support as many flavors as I can imagine):
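A sketch of those flavor commands; the sizing values and the hwclass key are illustrative assumptions, and what matters is that each flavor's extra_specs key/value matches the corresponding Host Aggregate's metadata:

```shell
# Create the three flavors (auto ID, 2048 MB RAM, 20 GB disk, 1 vCPU as example sizing)
nova flavor-create m1.small.sec auto 2048 20 1
nova flavor-create m1.small.mem auto 2048 20 1
nova flavor-create m1.small.ssd auto 2048 20 1

# Pin each flavor to its Host Aggregate via an extra_specs key
nova flavor-key m1.small.sec set hwclass=security
nova flavor-key m1.small.mem set hwclass=memory
nova flavor-key m1.small.ssd set hwclass=storage
```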



Now that I have the flavors defined, I can launch instances to validate the behavior and desired effect. Notice how the users don’t see the Host Aggregates per se; they only see the flavors. This is just the right amount of information that’s needed:



I will launch ten new instances using the “.sec” flavor. We can see that all of the instances have been constrained to only launch on mhv1 and mhv2 - exactly as configured. No other compute node is running any of my “.sec” flavors as confirmed by the output of the nova list command:




We can confirm the behavior of the other two flavor types. Again, I boot 10 new instances with the specific “.mem” flavor type and validate where they are launched:




The 10 “.mem” instances are constrained to launching on mhv3 and mhv4.




Availability Zone Example

In the first scenario, we only had one Availability Zone defined. In the next example, we’ll build upon the Host Aggregates configuration and add new Availability Zone definitions. As was demonstrated by the previous example, Host Aggregates were used to give the users a choice of which type of compute node to deploy their instances to. In this scenario, we’ll give the cloud users an additional level of control by giving them the ability to select which Availability Zone to deploy to as well. This scenario creates two new AZs to replace the single AZ we used in the first example. Here we’ll use the names ZoneA and ZoneB. ZoneA and ZoneB define different fault domains, each with perhaps different power and network connectivity. This is an example of how you can make cloud users aware of these different domains and thus give them the ability to split their workload evenly across them for redundancy purposes.


Here are the high level steps in creating Availability Zones for your environment:


  1. Create the Host Aggregate name and details
    1. Specify an AZ (Use the Host Aggregate name as the Availability Zone name as well)
  2. Associate the appropriate compute nodes to this Availability Zone
  3. Do this for each Zone you are creating


We have now split the single AZ in the previous diagram into two separate AZs:




We’ll create the two new Availability Zones using “nova aggregate-create <name> <availability zone>” command. The creation is the same process as creating a Host Aggregate, however, you specify the Aggregate name AND the new Availability Zone name as arguments. Here we will create Availability Zone named “ZoneA.”
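As a sketch, creating ZoneA is a one-liner:

```shell
# The first argument names the aggregate; the second names the availability zone
nova aggregate-create ZoneA ZoneA
```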




Next we’ll associate compute nodes to ZoneA, again, just as we did for Host Aggregates:
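As a sketch (mhv3 is one of the hosts that ends up in ZoneA in this example):

```shell
# Same command as for a Host Aggregate; the target is the aggregate/zone name
nova aggregate-add-host ZoneA mhv3
```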




Now we’ll create ZoneB:
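Again a sketch, with mhv4 as one of the ZoneB hosts from this example:

```shell
# Create the second zone and attach its compute nodes
nova aggregate-create ZoneB ZoneB
nova aggregate-add-host ZoneB mhv4
```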




Now we have our two AZs defined:




One thing you’ll notice that’s different about creating Availability Zones is that you cannot associate a compute node to more than one Availability Zone. Here I tried to add mhv7 to ZoneB, but mhv7 already belongs to ZoneA:




Here we can see the Availability Zone definitions on the dashboard:




Next, we’ll launch .mem flavored instances using the --availability-zone argument in the nova boot command to boot the instances to “ZoneA”:
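A sketch of that boot command; the image and instance names are placeholders, and the flavor is the .mem one defined earlier:

```shell
# Launch a memory-optimized instance constrained to ZoneA
nova boot \
  --flavor m1.small.mem \
  --image <image_name> \
  --availability-zone ZoneA \
  mem-az-test-1
```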



We can see that the instances are constrained to the memory-optimized compute nodes just like our Host Aggregate example before, but in this case, they have also been further constrained to launch on only the memory-optimized nodes in ZoneA (mhv3):



We’ll again launch .mem flavored instances, but this time to launch the instances on memory-optimized compute nodes in “ZoneB”:




We can see that the instances are still constrained to the memory-optimized compute nodes, but now only launch on the memory-optimized nodes in ZoneB (mhv4):




The last test is to omit the availability zone argument from the command, and we’ll see that the instances are load balanced on memory-optimized compute nodes across ZoneA AND ZoneB:




We see that the instances are balanced across mhv3 and mhv4:





You can see that I’ve used a combination of Host Aggregates (via flavor definitions) plus Availability Zones to launch my instances exactly where and on which class of hardware I want them. We have just shown examples of how and when these features can be used to group resources to provide flexibility and choice for cloud users.


These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard. These instructions were tested on the Icehouse release of a Cisco distribution of OpenStack but should be similar, if not the same, across other versions and distributions of OpenStack.


Make sure you can connect with OpenStack

Make sure the environment variables for OpenStack are set, such as OS_AUTH_URL, OS_TENANT_NAME, OS_USERNAME, and OS_PASSWORD.
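A typical set of these variables, with placeholder values, looks like:

```shell
# Identity endpoint and credentials used by all the CLI clients
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
export OS_TENANT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
```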


Test this works with something like:

nova list 

Get a Suitable CoreOS Image

You'll need a suitable version of the CoreOS image for OpenStack. Once you download it, upload it to Glance. An example is shown below:

glance image-create --name CoreOS723 \ 
--container-format bare --disk-format qcow2 \ 
--file coreos_production_openstack_image.img \ 
--is-public True 

Create security group

nova secgroup-create kubernetes "Kubernetes Security Group" 
nova secgroup-add-rule kubernetes tcp 22 22 
nova secgroup-add-rule kubernetes tcp 80 80 

Provision the Master

nova boot \ 
--image <image_name> \ 
--key-name <my_key> \ 
--flavor <flavor_id> \ 
--security-group kubernetes \ 
--user-data files/master.yaml \ 
kube-master 

<image_name> is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723'


<my_key> is the keypair name that you already generated to access the instance.


<flavor_id> is the flavor ID you use to size the instance. Run nova flavor-list to get the IDs; on the system this was tested with, ID 3 gives the m1.large size.


The important part is to ensure you have files/master.yaml, as this is what will do all the post-boot configuration. The path is relative, so this example assumes you are running the nova command in a directory that has a subdirectory called files containing master.yaml. Absolute paths also work.


Next, assign it a public IP address:

nova floating-ip-list 

Get an IP address that's free and run:

nova floating-ip-associate kube-master <ip address> 


where <ip address> is the IP address that was available from the nova floating-ip-list command.


Provision Worker Nodes

Edit node.yaml and replace all instances of <master-private-ip> with the private IP address of the master node. You can get this by running nova show kube-master, assuming you named your instance kube-master. This is not the floating IP address you just assigned it.

nova boot \ 
--image <image_name> \ 
--key-name <my_key> \ 
--flavor <flavor_id> \ 
--security-group kubernetes \ 
--user-data files/node.yaml \ 
<node_name> 

This is basically the same as booting the master node, but with the node.yaml post-boot script instead of master.yaml.

This blog post is written, in part, in response to a post on techrepublic.com titled “Marketing OpenStack’s Progress: Now ‘It actually works’” by Matt Asay.

Admittedly, I wasn’t at the OpenStack Summit in which Randy Bias declared in his State of the Stack talk that “OpenStack is at risk of collapsing under its own weight,” but I’m familiar with the sentiment, as I’m sure a lot of people close to the OpenStack community, and open source in general, are.

At least one other industry pundit has gone on record that the community-powered fast moving development is a double-edged sword for the OpenStack project at large. While propelling projects forward with, at most times, consistent momentum, it has lacked focus on interoperability, which has resulted in a number of conflicts among projects that, while disparate, all carry the OpenStack brand.

The result is the slow adoption of the cloud platform due to complexity and growing preference and focus on streamlined technologies that focus on simplicity. “OpenStack can run a fine private cloud, if you have lots of people to throw at the project and are willing to do lots of coding,” explained Gartner’s Alan Waite, further declaring that adoption will continue to suffer in the face of complexity and lack of compatibility.

But what if you could have the best of both worlds? What if you could have the collective mindshare of the OpenStack community, focused on growing the project in a way that addresses real-world problems? What if you could depend on a team that aims to move the right needles in terms of business objectives? What if you had a version of OpenStack that was as reliable and simple to use as some of its leaner competitors?

Cisco, equipped with an arsenal of resources and expertise, is providing one of the most critical services of all when it comes to their Private Cloud-as-a-Service offering: curation.

Rather than adopt the latest version of OpenStack, unabridged, as soon as it becomes available, the team (formerly of OpenStack startups Metacloud and Piston) has a clearly defined framework of values and priorities that OpenStack is measured against. “We operate customer-prem private clouds that are powered by an opinionated version of OpenStack,” explains Chief Architect of Cisco OpenStack® Private Cloud, Chet Burgess. “Since it’s delivered in a SaaS-type model, we’re on the hook for stability, performance, and meeting our SLAs, so we have to make sure new features included in our distribution of OpenStack are rock solid before we deliver those to customers.”

The framework for priority-based organization and engineering also factors in customer feedback and feature requests. As a result, the platform offers unique features, including an enhanced user interface, self-service project administration, resource allocation geared at simplifying Windows VM licensing, and a beefed up admin dashboard with historical and live performance statistics for every level of the stack. On average, the team rolls out non-disruptive upgrades to all of their customers’ availability zones every six to eight weeks.

Real differentiation, as in the case of Cisco Metapod, is attained when OpenStack is viewed as a starting point from which a team of engineers and product specialists use their expertise to build a deliberate version of the software, including the prescribed hardware on which it should run, to deliver a developer-friendly public cloud experience in the sanctum of a customer's data center. It's this viewpoint, treating OpenStack as a raw material, that allows Cisco to turn it into a refined digital alloy, defining how the value of the cloud platform is extracted and delivered to meet the evolving needs of enterprises.

The bottom line is that the (growing) immensity of the OpenStack project, described by Randy Bias and others, allows the engineers behind Cisco Metapod to treat OpenStack like a catalog of parts and pieces. OpenStack doesn’t and shouldn’t have to be treated as an inseparable, monolithic code base. While a number of companies have created a business out of offering OpenStack to companies, the differentiation is in how it’s delivered and what version of OpenStack will deliver the results that their customers expect.

Questions? Comments? Leave them below, or tweet me at @palumbo.

Using the OpenStack Python APIs means using the same methods and objects that the OpenStack CLIs use, which hide the REST layer from the application developer. This implies that there is a different set of libraries you must import in order to talk to each of the OpenStack components. Over the last year there has been an effort to document the APIs better and provide working examples of how to use them, and you can view the current status of that project in the OpenStack API documentation. You can also look at the source code for these libraries and utilities to figure out which methods are available as well as what objects you can reference.




The best way to understand the Python API is to start with a few examples. Let’s look at writing some code that will log in to the cloud, get a token, and then list all the running VM instances.
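A minimal sketch of such a script, using python-novaclient's Keystone v2 auth signature from this era; the credentials and auth URL are placeholders for your own environment:

```python
# Sketch: authenticate, get a token, and list instances with python-novaclient.
# All credentials below are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client(
    "2",                                        # compute API version
    "myuser",                                   # OS_USERNAME
    "mypassword",                               # OS_PASSWORD
    "myproject",                                # OS_TENANT_NAME
    "https://keystone.example.com:5000/v2.0",   # OS_AUTH_URL
)

# The client authenticates (fetching a token) on its first API call,
# then we print one line per instance in the tenant.
for server in nova.servers.list():
    print("{0}  {1:<20}  {2}".format(server.id, server.name, server.status))
```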



Next, let’s run the script and look at the output.


The next example will list the images in Glance. The code is similar to the last example but we have done some things to make the output from this code easier to read.
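A sketch of that Glance listing, again with placeholder credentials and the era's v1 image API, plus a little column padding to keep the output readable:

```python
# Sketch: list Glance images, padding columns so the output lines up.
# Credentials and the auth URL are placeholders.
from keystoneclient.v2_0 import client as ks_client
from glanceclient import Client as GlanceClient

keystone = ks_client.Client(
    username="myuser",
    password="mypassword",
    tenant_name="myproject",
    auth_url="https://keystone.example.com:5000/v2.0",
)

# Look up the image service endpoint from the service catalog,
# then reuse the Keystone token for the Glance client
glance_endpoint = keystone.service_catalog.url_for(service_type="image")
glance = GlanceClient("1", glance_endpoint, token=keystone.auth_token)

print("{0:<38} {1:<25} {2}".format("ID", "Name", "Status"))
for image in glance.images.list():
    print("{0:<38} {1:<25} {2}".format(image.id, image.name, image.status))
```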


To run the above example, save its contents to a text file and then execute it with Python as follows.


Below is the output from this simple script that shows the listing of the images in the AZ we have authenticated to.



As one can see by exploring the APIs within OpenStack, they are very powerful as well as complete. You can interface with them in many different ways, including the Horizon dashboard, the CLI clients, and the Python API methods, as well as through many other tools, including Chef, Puppet, Ansible, SaltStack, OpenShift, Cloud Foundry, RightScale, CliQr, Apprenda, Scalr, and many more. As the community grows, so does the tool chain that supports OpenStack.


To learn how to get started with OpenStack APIs using the command line or the REST interface, click here to review the full tutorial.
