
This blog has the steps to deploy OpenStack Newton with OpenDaylight Boron and Open vSwitch on CentOS-7 in VirtualBox on Mac laptop.

 

Below are the versions used:

  • OpenStack Newton (DevStack stable/newton)
  • OpenDaylight Boron SR2 (distribution-karaf-0.5.2-Boron-SR2)
  • Open vSwitch 2.5.0
  • Java 1.8.0_112 (JDK 8u112)
  • CentOS-7 (CentOS-7-x86_64-Minimal-1611) in VirtualBox on a Mac laptop

Below is the architecture:

 

arch.jpg

 

VirtualBox is installed on Mac laptop and the CentOS-7 VM is created in VirtualBox.

 

In VirtualBox, the CentOS-7-x86_64-Minimal-1611.iso image is used to boot a CentOS-7 VM with 4 GB RAM and the following two network adapters.  A host-only adapter is not needed.

  • eth0 as a NAT adapter
  • eth1 as an internal network adapter

 

vb1.png

 

eth0 is a NAT adapter.

 

vb2.png

 

eth1 is an internal network adapter.

 

vb3.png

 

Run the following bash script to configure VirtualBox.  It will forward the required TCP ports from the host (Mac laptop) to the guest (CentOS-7 VM) and will also create eth1 as an internal network adapter.

 

#!/bin/bash

# Forward TCP port 3022 on host to TCP port 22 on guest VM so
# that host can SSH into guest VM
if ! VBoxManage showvminfo devstack-odl | grep 3022 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "SSH,TCP,,3022,,22"
fi

# Forward TCP port 8080 on host to TCP port 80 on guest VM so
# that host can access OpenStack Horizon in browser
if ! VBoxManage showvminfo devstack-odl | grep 8080 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "HTTP,TCP,,8080,,80"
fi

# Forward TCP port 6080 on host to TCP port 6080 on guest VM so
# that host can access Nova VNC console in browser
if ! VBoxManage showvminfo devstack-odl | grep 6080 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "CONSOLE,TCP,,6080,,6080"
fi

# Forward TCP port 8282 on host to TCP port 8181 on guest VM so
# that host can access OpenDaylight web GUI at
# http://localhost:8282/index.html (admin/admin)
if ! VBoxManage showvminfo devstack-odl | grep 8282 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "ODL,TCP,,8282,,8181"
fi

# Forward TCP port 8187 on host to TCP port 8087 on guest VM so
# that we can curl the OpenDaylight controller
if ! VBoxManage showvminfo devstack-odl | grep 8187 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "ODL_neutron,TCP,,8187,,8087"
fi

# Add internal network adapter for guest VM
if ! VBoxManage showvminfo devstack-odl | grep eth1 > /dev/null
then
    VBoxManage modifyvm devstack-odl --nic2 intnet
    VBoxManage modifyvm devstack-odl --intnet2 "eth1"
fi

# Remove stale entry in ~/.ssh/known_hosts on host
if [ -f ~/.ssh/known_hosts ]; then
    sed -i '' '/\[127.0.0.1\]:3022/d' ~/.ssh/known_hosts
fi

 

Below are the forwarded ports through eth0 (NAT interface) in VirtualBox.  Host is the Mac laptop and the guest VM is CentOS-7 booted in VirtualBox.

  • TCP port 3022 on host is forwarded to TCP port 22 on guest VM so that host can SSH into guest VM.
  • TCP port 8080 on host is forwarded to TCP port 80 on guest VM so that host can access OpenStack Horizon in browser.
  • TCP port 6080 on host is forwarded to TCP port 6080 on guest VM so that host can access Nova VNC console in browser.
  • TCP port 8282 on host is forwarded to TCP port 8181 on guest VM so that host can access the OpenDaylight GUI in browser.
  • TCP port 8187 on host is forwarded to TCP port 8087 on guest VM so that host can access neutron's ml2 ODL url in browser.

 

Below is the screenshot of the forwarded ports through eth0 (NAT interface) in VirtualBox.

 

vb4.png

forwarded_ports.png

 

Now, boot the CentOS-7 VM in VirtualBox.  Choose "VDI" as the disk format for the VM.

 

When booting the CentOS-7 VM in VirtualBox, press the Tab key and type the following kernel boot options.  This keeps the interface names as eth0 and eth1 in the CentOS-7 VM instead of enp0s*.

 

net.ifnames=0 biosdevname=0
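
If the installed system comes up with enp0s* interface names after a reboot, one way to make these options persistent (a sketch, assuming the default CentOS-7 GRUB paths) is to append them to GRUB_CMDLINE_LINUX inside the VM and regenerate the GRUB config:

# Append the interface-naming options to the kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 net.ifnames=0 biosdevname=0"/' /etc/default/grub

# Regenerate the GRUB configuration (BIOS boot path on CentOS-7)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg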

 

Once the CentOS-7 VM boots, log in to it and check the interfaces ("ip a" or "ifconfig").  eth0 will have an IP address like 10.0.2.15 and eth1 will not have an IP address.  Check the default gateway ("ip route"); 10.0.2.2 will be the default gateway.  Make sure that you can ping a public DNS name like www.google.com or www.cisco.com.

 

Below are the output snippets of "ip a" and "ip route" inside the CentOS-7 VM.

 

$ ip a

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 08:00:27:60:77:7e brd ff:ff:ff:ff:ff:ff


$ ip route

default via 10.0.2.2 dev eth0  proto static  metric 100

10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15  metric 100

192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1

 

From the Mac laptop, SSH into the CentOS-7 VM using the forwarded port 3022.  Use the root password to log in.

 

ssh -p 3022 root@127.0.0.1

 

Clone the DevStack Newton repository.

 

git clone https://git.openstack.org/openstack-dev/devstack -b stable/newton

 

Create the stack user for DevStack.  Alternatively, you can use useradd and passwd to create a new stack user, and give sudo access to the stack user by running visudo, adding "stack   ALL=(ALL)   ALL" under "root    ALL=(ALL)   ALL", and saving the file (see the sketch after the commands below).


cd devstack

./tools/create-stack-user.sh

su stack

whoami

echo $HOME

cd

pwd

exit

exit
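
If you take the manual useradd/visudo route mentioned above instead of the helper script, the commands would look roughly like this (a sketch; one common variant uses a sudoers.d drop-in, and DevStack's own script grants the stack user passwordless sudo):

# Create the stack user with a home directory and set its password
sudo useradd -m -s /bin/bash stack
sudo passwd stack

# Grant sudo access via a sudoers.d drop-in (passwordless, as DevStack expects)
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack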

 

Copy the local.conf file below to the devstack directory.  It has the OpenStack core services (Horizon, Keystone, Nova, Neutron, Glance, RabbitMQ and MySQL) enabled.  It uses OpenvSwitch (OVS) as the virtual switch and VLAN for tenant networks.  It also enables the neutron ml2 ODL plugin to make neutron interact with OpenDaylight.

 

[[local|localrc]]
OFFLINE=True
HORIZON_BRANCH=stable/newton
KEYSTONE_BRANCH=stable/newton
NOVA_BRANCH=stable/newton
NEUTRON_BRANCH=stable/newton
GLANCE_BRANCH=stable/newton

ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
LOGDIR=$DEST/logs
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,horizon

# Neutron
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
PUBLIC_INTERFACE=eth0
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True

# Enable neutron ODL plugin
enable_plugin networking-odl http://git.openstack.org/openstack/networking-odl stable/newton
ODL_MODE=allinone
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
ODL_GATE_SERVICE_PROVIDER=vpnservice
disable_service q-l3
ML2_L3_PLUGIN=odl-router
ODL_PROVIDER_MAPPINGS=public:br-ex

 

Now, exit and SSH back into the CentOS-7 VM as the stack user.

 

ssh -p 3022 stack@127.0.0.1

 

OpenDaylight requires Java 1.8.0 and Open vSwitch >= 2.5.0.

 

Install Java 1.8.0.

 

Java SE Development Kit 8 - Downloads

 

wget --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm
sudo yum localinstall jdk-8u112-linux-x64.rpm 
java -version
rm -rf jdk-8u112-linux-x64.rpm
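
If the Oracle download link above no longer works, OpenJDK 1.8 from the CentOS repositories is an alternative (an assumption on my part; note that JAVA_HOME will then be an OpenJDK path instead of the /usr/java/jdk1.8.0_112 path used later in this blog):

sudo yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version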

 

Install OpenDaylight Boron.

 

https://www.opendaylight.org/downloads

 

http://docs.opendaylight.org/en/stable-boron/submodules/netvirt/docs/openstack-guide/openstack-with-netvirt.html#installing-openstack-and-opendaylight-using-devstack

 

https://github.com/openstack/networking-odl

 

wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.5.2-Boron-SR2/distribution-karaf-0.5.2-Boron-SR2.tar.gz
tar xvfz distribution-karaf-0.5.2-Boron-SR2.tar.gz 
rm -rf distribution-karaf-0.5.2-Boron-SR2.tar.gz
cd distribution-karaf-0.5.2-Boron-SR2/
export JAVA_HOME=/usr/java/jdk1.8.0_112
echo $JAVA_HOME

 

Make sure that you are in the distribution-karaf-0.5.2-Boron-SR2 directory.

 

Start the OpenDaylight server.

 

sudo bash -c "export JAVA_HOME=/usr/java/jdk1.8.0_112 ; ./bin/start"

 

Wait about 5 minutes for the OpenDaylight Boron server to come up.
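
Instead of waiting a fixed time, you can also poll until karaf's remote shell port is listening (a sketch; 8101 is karaf's default SSH shell port):

# Wait until the karaf remote shell port (8101) is listening,
# which indicates the OpenDaylight container has started
while ! ss -tln | grep -q ':8101'; do
    echo "waiting for OpenDaylight karaf to come up..."
    sleep 10
done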

 

Start the OpenDaylight client and connect to the karaf shell.

 

sudo bash -c "export JAVA_HOME=/usr/java/jdk1.8.0_112 ; ./bin/client"

 

List the available ODL Boron features in the karaf shell.

 

opendaylight-user@root>feature:list

 

In the karaf shell, install the odl-netvirt-openstack, odl-dlux-core and odl-mdsal-apidocs features and their dependencies needed for OpenStack neutron.

 

opendaylight-user@root>feature:install odl-netvirt-openstack odl-dlux-core odl-mdsal-apidocs

 

List the installed ODL neutron northbound features.

 

opendaylight-user@root>feature:list -i | grep -i neutron
odl-neutron-service                            | 0.7.2-Boron-SR2  | x         | odl-neutron-0.7.2-Boron-SR2                | OpenDaylight :: Neutron :: API
odl-neutron-northbound-api                     | 0.7.2-Boron-SR2  | x         | odl-neutron-0.7.2-Boron-SR2                | OpenDaylight :: Neutron :: Northbound
odl-neutron-spi                                | 0.7.2-Boron-SR2  | x         | odl-neutron-0.7.2-Boron-SR2                | OpenDaylight :: Neutron :: API
odl-neutron-transcriber                        | 0.7.2-Boron-SR2  | x         | odl-neutron-0.7.2-Boron-SR2                | OpenDaylight :: Neutron :: Implementation

 

List the installed ODL OVS southbound features.

 

opendaylight-user@root>feature:list -i | grep -i ovs
odl-ovsdb-hwvtepsouthbound-api                 | 1.3.2-Boron-SR2  | x         | odl-ovsdb-hwvtepsouthbound-1.3.2-Boron-SR2 | OpenDaylight :: hwvtepsouthbound :: api
odl-ovsdb-hwvtepsouthbound                     | 1.3.2-Boron-SR2  | x         | odl-ovsdb-hwvtepsouthbound-1.3.2-Boron-SR2 | OpenDaylight :: hwvtepsouthbound
odl-ovsdb-southbound-api                       | 1.3.2-Boron-SR2  | x         | odl-ovsdb-southbound-1.3.2-Boron-SR2       | OpenDaylight :: southbound :: api
odl-ovsdb-southbound-impl                      | 1.3.2-Boron-SR2  | x         | odl-ovsdb-southbound-1.3.2-Boron-SR2       | OpenDaylight :: southbound :: impl
odl-ovsdb-library                              | 1.3.2-Boron-SR2  | x         | odl-ovsdb-library-1.3.2-Boron-SR2          | OpenDaylight :: library  

 

List the installed ODL netvirt OpenStack features.

 

opendaylight-user@root>feature:list -i | grep -i openstack
odl-netvirt-openstack                          | 0.3.2-Boron-SR2  | x         | odl-netvirt-0.3.2-Boron-SR2                | OpenDaylight :: NetVirt :: OpenStack

 

Hit CTRL+d to exit from karaf shell.

 

Make sure that Open vSwitch's version is >= 2.5.0.

 

$ ovs-vsctl --version

ovs-vsctl (Open vSwitch) 2.5.0

 

Reboot the CentOS-7 VM and SSH in as the stack user.

 

ssh -p 3022 stack@127.0.0.1

 

Now, you are ready to deploy!

 

In the devstack directory, run stack.sh to deploy OpenStack Newton with OpenDaylight Boron and Open vSwitch.

 

./stack.sh

 

Below is the output of stack.sh once it finishes.

 

This is your host IP address: 10.0.2.15

This is your host IPv6 address: ::1

Horizon is now available at http://10.0.2.15/dashboard

Keystone is serving at http://10.0.2.15/identity/

The default users are: admin and demo

The password: nomoresecret

 

Verify that OpenDaylight has been correctly deployed with OpenStack.

 

Make sure that Open vSwitch is listening on TCP ports 6640 and 6653.

 

$ sudo ovs-vsctl show

3ee26796-ce1a-44a8-83eb-ebb0269c94b8

    Manager "tcp:10.0.2.15:6640"

        is_connected: true

    Bridge br-int

        Controller "tcp:10.0.2.15:6653"

            is_connected: true

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

        Port "tap6caac5d1-9e"

            Interface "tap6caac5d1-9e"

                type: internal

    ovs_version: "2.5.0"

 

$ sudo ovs-vsctl show | grep '6640\|6653'

    Manager "tcp:10.0.2.15:6640"

        Controller "tcp:10.0.2.15:6653"

 

Make sure that OpenDaylight (java) is listening on TCP ports 6640 and 6653, and that the OVSDB server and Open vSwitch have established connections to those ports.


Note the PIDs of OpenDaylight (java), Open vSwitch and the OVSDB server.


$ sudo netstat -pan | grep ':6640\|:6653'

tcp        0      0 10.0.2.15:38298         10.0.2.15:6640          ESTABLISHED 18455/ovsdb-server 

tcp        0      0 10.0.2.15:46744         10.0.2.15:6653          ESTABLISHED 18465/ovs-vswitchd 

tcp6       0      0 :::6640                 :::*                    LISTEN      2125/java          

tcp6       0      0 :::6653                 :::*                    LISTEN      2125/java          

tcp6       0      0 10.0.2.15:6653          10.0.2.15:46744         ESTABLISHED 2125/java          

tcp6       0      0 10.0.2.15:6640          10.0.2.15:38298         ESTABLISHED 2125/java

 

Make sure that these PIDs match what is seen in the output of "ps".

 

$ ps aux | grep '18455\|18465\|2125'

stack     2125 12.7 19.3 4342780 750692 pts/4  Sl   01:02   3:01 /usr/java/jdk1.8.0_112/jre/bin/java -Djava.security.properties=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/etc/odl.java.security -server -Xms128M -Xmx2048m -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote -Djava.security.egd=file:/dev/./urandom -Djava.endorsed.dirs=/usr/java/jdk1.8.0_112/jre/lib/endorsed:/usr/java/jdk1.8.0_112/lib/endorsed:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/endorsed -Djava.ext.dirs=/usr/java/jdk1.8.0_112/jre/lib/ext:/usr/java/jdk1.8.0_112/lib/ext:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/ext -Dkaraf.instances=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/instances -Dkaraf.home=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT -Dkaraf.base=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT -Dkaraf.data=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/data -Dkaraf.etc=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/etc -Djava.io.tmpdir=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/data/tmp -Djava.util.logging.config.file=/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/etc/java.util.logging.properties -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -classpath /opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/karaf-jaas-boot.jar:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/karaf-org.osgi.core.jar:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/karaf.branding-1.8.0-SNAPSHOT.jar:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/lib/karaf.jar org.apache.karaf.main.Main

root     18455  0.0  0.0  43724  1696 ?        S<   00:56   0:00 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor

root     18465  0.0  0.9 268944 35496 ?        S<Ll 00:56   0:01 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor

 

Curl the OpenStack Horizon dashboard and make sure that there are no errors in the output.

 

$ curl localhost/dashboard
$

 

Curl the OpenDaylight GUI.  Below is the expected output.

 

$ curl localhost:8181/index.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>OpenDaylight Dlux</title>
    <meta name="description" content="overview &amp; stats" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <script type="text/javascript">
var module = ['angular','ocLazyLoad','angular-ui-router','angular-translate', 'angular-sanitize', 'angular-translate-loader-static-files', 'angular-translate-loader-partial', 'angular-css-injector'];
var deps = ['common/config/env.module','app/core/core.module','common/login/login.module','common/authentification/auth.module','common/navigation/navigation.module','common/topbar/topbar.module','common/general/common.general.module','app/topology/topology.module','common/layout/layout.module'];
var e = ['oc.lazyLoad', 'ui.router', 'pascalprecht.translate', 'ngSanitize', 'angular.css.injector', 'app','app.core','app.common.login','app.common.auth','app.common.nav','app.common.topbar','app.common.general','app.topology','app.common.layout'];
        // global variables
    </script>
    <!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
  <!--[if lt IE 9]>
    <script src="assets/js/html5shiv.js"></script>
    <script src="assets/js/respond.min.js"></script>
    <![endif]-->
    <!-- compiled CSS -->
    <link rel="stylesheet" type="text/css" href="vendor/ng-grid/ng-grid.min.css" />
    <link rel="stylesheet" type="text/css" href="vendor/select2-bootstrap-css/select2-bootstrap.css" />
    <link rel="stylesheet" type="text/css" href="vendor/footable/css/footable.core.min.css" />
    <link rel="stylesheet" type="text/css" href="vendor/footable/css/footable.standalone.min.css" />
    <link rel="stylesheet" type="text/css" href="vendor/vis/dist/vis.min.css" />
    <link rel="stylesheet" type="text/css" href="vendor/ng-slider/dist/css/ng-slider.min.css" />
    <link rel="stylesheet" type="text/css" href="vendor/angular-material/angular-material.css" />
    <link rel="stylesheet" type="text/css" href="vendor/material-design-icons/iconfont/material-icons.css" />
    <link rel="stylesheet" type="text/css" href="assets/opendaylight-dlux-0.2.0.css" />
    <link rel="stylesheet" href="assets/css/sb-admin.css" />
    <script type="text/javascript" data-main="src/main.js" src="vendor/requirejs/require.js"></script>
    <link rel="stylesheet" href="assets/css/font-awesome.min.css" />
    <!-- the font-awesome is different from the 'official' one -->
    <!-- application CSS -->
  </head>
  <body class="skin-3">
    <div ui-view="mainContent" id="main-content-container"></div>
  </body>
</html>

 

Check the OVS config.

 

$ sudo ovs-vsctl get Open_vSwitch . other_config
{local_ip="10.0.2.15", provider_mappings="public:br-ex"}

 

Make sure that the neutron configuration file /etc/neutron/neutron.conf has the following ODL entries.

 

[DEFAULT]
service_plugins = odl-router,neutron.services.metering.metering_plugin.MeteringPlugin
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

 

Make sure that the neutron ml2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini has the following ODL entries.

 

[ml2]
mechanism_drivers = opendaylight,logger
[ml2_odl]
port_binding_controller = network-topology
password = admin
username = admin
url = http://10.0.2.15:8087/controller/nb/v2/neutron

 

Note the neutron ml2 ODL url:

 

$ grep 8087 /etc/neutron/plugins/ml2/ml2_conf.ini
url = http://10.0.2.15:8087/controller/nb/v2/neutron

 

Make sure that neutron-server is using the right configuration files that have the ODL entries.

 

$ ps aux | grep ml2

stack     7523  0.1  2.2 290596 88872 pts/8    S+   01:05   0:04 /usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

 

Source userrc_early in the devstack directory and check neutron CLIs.

 

Check if all the neutron agents are running fine.

 

$ source userrc_early
$ neutron agent-list
+-----------------+----------------+--------------+-------------------+-------+----------------+--------------------+
| id              | agent_type     | host         | availability_zone | alive | admin_state_up | binary             |
+-----------------+----------------+--------------+-------------------+-------+----------------+--------------------+
| 7a84f626-a656   | DHCP agent     | devstack-odl | nova              | :-)   | True           | neutron-dhcp-agent |
| -426e-acae-     |                |              |                   |       |                |                    |
| 5395cb56822a    |                |              |                   |       |                |                    |
| 9a626977-97ed-  | Metering agent | devstack-odl |                   | :-)   | True           | neutron-metering-  |
| 486b-b9c4-2fb60 |                |              |                   |       |                | agent              |
| 9ebd69f         |                |              |                   |       |                |                    |
| de275c04-18bc-  | Metadata agent | devstack-odl |                   | :-)   | True           | neutron-metadata-  |
| 43f1-b01b-      |                |              |                   |       |                | agent              |
| 4ee9affc3654    |                |              |                   |       |                |                    |
+-----------------+----------------+--------------+-------------------+-------+----------------+--------------------+

 

Create a neutron network, subnet and a router.

 

neutron net-create test-net
neutron subnet-create --name test-subnet test-net 11.11.11.0/24
neutron router-create test-router

$ neutron net-list | grep test-net
| 66e9a2a1-de76-4c92-b84a-e9aafdf75ad7 | test-net | 65a0a59d-90b7-476f-b117-64d7c7ab4901 11.11.11.0/24      |

$ neutron subnet-list | grep test-subnet
| 65a0a59d-90b7-476f-b117-64d7c7ab4901 | test-subnet         | 11.11.11.0/24      | {"start": "11.11.11.2", "end": "11.11.11.254"}                              |

$ neutron router-list | grep test-router
| 618e0100-50a7-4251-94b3-029811789c1d | test-router | null

 

Curl neutron's ml2 ODL url and check if the neutron networks, subnets, routers and ports can be successfully retrieved.


$ curl -v -u admin:admin http://10.0.2.15:8087/controller/nb/v2/neutron/networks | grep '\"name\"'
      "name" : "public",
      "name" : "test-net",
      "name" : "private",

$ curl -v -u admin:admin http://10.0.2.15:8087/controller/nb/v2/neutron/subnets | grep '\"name\"'
      "name" : "test-subnet",      "name" : "public-subnet",
      "name" : "ipv6-public-subnet",
      "name" : "ipv6-private-subnet",
      "name" : "private-subnet",

$ curl -v -u admin:admin http://10.0.2.15:8087/controller/nb/v2/neutron/routers | grep '\"name\"'
      "name" : "router1",
      "name" : "test-router",

$ curl -v -u admin:admin http://10.0.2.15:8087/controller/nb/v2/neutron/ports


Curl neutron's ml2 ODL url and check if the neutron network topology can be successfully retrieved.


$ curl -v -u admin:admin http://10.0.2.15:8087/restconf/operational/network-topology:network-topology


Check the OpenFlow 1.3 table in the OVS bridge br-int:

 

$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=3340.050s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x0, duration=3265.069s, table=0, n_packets=7, n_bytes=558, in_port=1,dl_src=fa:16:3e:46:31:aa actions=set_field:0x17->tun_id,load:0x1->NXM_NX_REG0[],goto_table:20
>NXM_OF_ETH_DST[],set_field:fa:16:3e:46:31:aa->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e4631aa->NXM_NX_ARP_SHA[],load:0xa000002->NXM_OF_ARP_SPA[],IN_PORT
cookie=0x0, duration=899.385s, table=20, n_packets=0, n_bytes=0, priority=1024,arp,tun_id=0x51,arp_tpa=11.11.11.2,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:60:81:f2->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e6081f2->NXM_NX_ARP_SHA[],load:0xb0b0b02->NXM_OF_ARP_SPA[],IN_PORT
cookie=0x0, duration=3340.083s, table=20, n_packets=16, n_bytes=1296, priority=0 actions=goto_table:30
cookie=0x0, duration=3340.050s, table=30, n_packets=16, n_bytes=1296, priority=0 actions=goto_table:31


Connect to the ODL karaf shell, and check if the neutron network, subnet and router that were created are captured in the ODL logs.

 

cd ~/distribution-karaf-0.5.2-Boron-SR2/

sudo bash -c "export JAVA_HOME=/usr/java/jdk1.8.0_112 ; ./bin/client"

 

opendaylight-user@root>log:display | grep test-net

Network{getName=test-net, getStatus=ACTIVE, getTenantId=Uuid [_value=fdd867a9-e0e4-46c1-8985-70b1cc590d7d], getUuid=Uuid [_value=66e9a2a1-de76-4c92-b84a-e9aafdf75ad7], isAdminStateUp=true, isShared=false, augmentations={interface org.opendaylight.yang.gen.v1.urn.opendaylight.neutron.l3.ext.rev150712.NetworkL3Extension=NetworkL3Extension{isExternal=false}, interface org.opendaylight.yang.gen.v1.urn.opendaylight.neutron.provider.ext.rev150712.NetworkProviderExtension=NetworkProviderExtension{getNetworkType=class org.opendaylight.yang.gen.v1.urn.opendaylight.neutron.networks.rev150712.NetworkTypeVxlan, getSegmentationId=81}}}

 

opendaylight-user@root>log:display | grep test-subnet

Subnet{getAllocationPools=[AllocationPools{getEnd=IpAddress [_ipv4Address=Ipv4Address [_value=11.11.11.254]], getStart=IpAddress [_ipv4Address=Ipv4Address [_value=11.11.11.2]], augmentations={}}], getCidr=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=11.11.11.0/24]], getDnsNameservers=[], getGatewayIp=IpAddress [_ipv4Address=Ipv4Address [_value=11.11.11.1]], getHostRoutes=[], getIpVersion=class org.opendaylight.yang.gen.v1.urn.opendaylight.neutron.constants.rev150712.IpVersionV4, getName=test-subnet, getNetworkId=Uuid [_value=66e9a2a1-de76-4c92-b84a-e9aafdf75ad7], getTenantId=Uuid [_value=fdd867a9-e0e4-46c1-8985-70b1cc590d7d], getUuid=Uuid [_value=65a0a59d-90b7-476f-b117-64d7c7ab4901], isEnableDhcp=true, augmentations={}}

 

opendaylight-user@root>log:display | grep test-router

Router{getName=test-router, getRoutes=[], getStatus=ACTIVE, getTenantId=Uuid [_value=fdd867a9-e0e4-46c1-8985-70b1cc590d7d], getUuid=Uuid [_value=618e0100-50a7-4251-94b3-029811789c1d], isAdminStateUp=true, isDistributed=false, augmentations={}}

 

Check ODL OpenFlow statistics and session statistics in the karaf shell:

 

opendaylight-user@root>ofp:showstats
FROM_SWITCH: no activity detected
FROM_SWITCH_TRANSLATE_IN_SUCCESS: no activity detected
FROM_SWITCH_TRANSLATE_OUT_SUCCESS: MSG[MultipartType] -> +1208 | 1208
FROM_SWITCH_TRANSLATE_SRC_FAILURE: no activity detected
FROM_SWITCH_PACKET_IN_LIMIT_REACHED_AND_DROPPED: no activity detected
FROM_SWITCH_NOTIFICATION_REJECTED: no activity detected
FROM_SWITCH_PUBLISHED_SUCCESS: MSG[PortStatusMessage] -> +6 | 6
FROM_SWITCH_PUBLISHED_FAILURE: MSG[MultipartReplyMessage] -> +6044 | 6044
TO_SWITCH_ENTERED: MSG[SetConfigInput] -> +1 | 1
TO_SWITCH_ENTERED: MSG[FlowModInputBuilder] -> +117 | 117
TO_SWITCH_ENTERED: MSG[RoleRequestInputBuilder] -> +4 | 4
TO_SWITCH_ENTERED: MSG[MultipartType] -> +7248 | 7248
TO_SWITCH_DISREGARDED: no activity detected
TO_SWITCH_RESERVATION_REJECTED: no activity detected
TO_SWITCH_READY_FOR_SUBMIT: MSG[] -> +4 | 4
TO_SWITCH_READY_FOR_SUBMIT: MSG[] -> +7248 | 7248
TO_SWITCH_READY_FOR_SUBMIT: MSG[] -> +118 | 118
TO_SWITCH_SUBMIT_SUCCESS: MSG[SetConfigInput] -> +1 | 1
TO_SWITCH_SUBMIT_SUCCESS: MSG[FlowModInputBuilder] -> +117 | 117
TO_SWITCH_SUBMIT_SUCCESS: MSG[RoleRequestInputBuilder] -> +4 | 4
TO_SWITCH_SUBMIT_SUCCESS_NO_RESPONSE: no activity detected
TO_SWITCH_SUBMIT_FAILURE: no activity detected
TO_SWITCH_SUBMIT_ERROR: no activity detected
REQUEST_STACK_FREED: MSG[RpcContextImpl] -> +118 | 118
OFJ_BACKPRESSURE_ON: no activity detected
OFJ_BACKPRESSURE_OFF: no activity detected

opendaylight-user@root>ofp:show-session-stats
SESSION : Uri [_value=openflow:185752284545496]
CONNECTION_CREATED : 1

 

Check the ODL web end points in the karaf shell.

 

opendaylight-user@root>web:list
ID  | State       | Web-State   | Level | Web-ContextPath           | Name                                             
------------------------------------------------------------------------------------------------------------------------------
269 | Active      | Deployed    | 80    | /moon                     | org.opendaylight.aaa.aaa-shiro (0.5.0.SNAPSHOT)  
273 | Active      | Deployed    | 80    | /oauth2                   | aaa-authn-sts (0.5.0.SNAPSHOT)                   
279 | Active      | Deployed    | 80    | /auth                     | aaa-idmlight (0.5.0.SNAPSHOT)                    
289 | Active      | Deployed    | 80    | /controller/nb/v2/neutron | org.opendaylight.neutron.northbound-api (0.8.0.SNAPSHOT)
296 | Active      | Deployed    | 80    | /restconf                 | MD SAL Restconf Connector (1.5.0.SNAPSHOT)       
299 | Active      | Deployed    | 80    | /apidoc                   | MD SAL Rest Api Doc Generator (1.5.0.SNAPSHOT)

 

Make sure that the ODL configurations have the right entries for OpenStack neutron and Open vSwitch.

 

opendaylight-user@root>config:list  | grep -i ovs

   featuresBoot = config,standard,region,package,kar,ssh,management,odl-neutron-service,odl-restconf-all,odl-aaa-authn,odl-dlux-core,odl-mdsal-apidocs,odl-ovsdb-openstack,odl-neutron-logger

Pid:            org.opendaylight.ovsdb.library

BundleLocation: mvn:org.opendaylight.ovsdb/library/1.4.0-SNAPSHOT

   felix.fileinstall.filename = file:/opt/stack/opendaylight/distribution-karaf-0.6.0-SNAPSHOT/etc/org.opendaylight.ovsdb.library.cfg

   service.pid = org.opendaylight.ovsdb.library

 

Hit CTRL+d to exit from karaf shell.

 

Since we have set up port forwarding in VirtualBox, the following links can be accessed on the Mac laptop to retrieve the neutron networks, subnets, ports and routers from neutron's ml2 ODL url!


http://localhost:8187/controller/nb/v2/neutron/networks

 

http://localhost:8187/controller/nb/v2/neutron/subnets

 

http://localhost:8187/controller/nb/v2/neutron/ports

 

http://localhost:8187/controller/nb/v2/neutron/routers
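
For example, the networks can be fetched from the Mac with curl through the forwarded port (admin/admin are the default ODL credentials used earlier in this blog):

curl -u admin:admin http://localhost:8187/controller/nb/v2/neutron/networks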

 

ODL_ml2_url.png

 

On the laptop, access the network topology at the ODL web endpoint using RESTCONF.

 

http://localhost:8282/restconf/operational/network-topology:network-topology

 

8181.png

 

The OpenStack Horizon dashboard can be accessed on the Mac laptop at http://localhost:8080/.  Use the username admin and password nomoresecret to log in to Horizon.

 

horizon.png

 

Congratulations!  You've successfully deployed OpenStack Newton with OpenDaylight Boron and Open vSwitch!

 

Please refer to my blog How to stack DevStack Newton on CentOS-7 in VirtualBox on Mac for steps to boot a nova instance in the OpenStack Horizon dashboard.

 

Boot a nova VM (test-vm) using the cirros image and the m1.tiny flavor, and attach it to the private network.  Also, create a floating IP in the public network and associate it with the nova VM.  Add security group rules to the "default" security group in order to SSH into and ping nova VMs (a sketch of these commands follows the listing below).

 

$ cd ~/devstack

$ nova list
+--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                                                           |
+--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------------------+
| 3204114d-d3b2-4493-8115-abd0b463152a | test-vm | ACTIVE | -          | Running     | private=10.0.0.5, fd38:25d7:fb99:0:f816:3eff:fe35:90f0, 172.24.4.5 |
+--------------------------------------+---------+--------+------------+-------------+--------------------------------------------------------------------+

$ openstack security group rule create --protocol tcp --dst-port 22 default  
$ openstack security group rule create --protocol icmp --dst-port -1 default
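
The test-vm above, its floating IP and the association were created with commands along these lines (a sketch; the cirros image name, the m1.tiny flavor and the placeholders are assumptions based on the standard DevStack defaults):

$ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=<private-network-ID> test-vm
$ openstack floating ip create public
$ openstack server add floating ip test-vm <allocated-floating-ip>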


 

Find the DHCP namespace and SSH into the cirros VM from inside the DHCP namespace using the following credentials!

 

username:  cirros

password:  cubswin:)

 

$ neutron net-list | grep private
| f58ba1ee-9a21-4dea-ab96-10d06b2c46b5 | private | dda9ffee-b36c-400b-a93c-9ba3b36280ae fd38:25d7:fb99::/64 |

$ ip netns | grep f58ba1ee-9a21-4dea-ab96-10d06b2c46b5
qdhcp-f58ba1ee-9a21-4dea-ab96-10d06b2c46b5

$ sudo ip netns exec qdhcp-f58ba1ee-9a21-4dea-ab96-10d06b2c46b5 ssh cirros@10.0.0.5
cirros@10.0.0.5's password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:35:90:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::f816:3eff:fe35:90f0/64 scope link
       valid_lft forever preferred_lft forever


 

Here is the network topology you can see in http://localhost:8080/dashboard/project/network_topology/.

 

topology.png

 

Hope this blog is helpful!

In an OpenStack deployment, when packets from a nova instance need to reach the external world, they first go through the virtual switch (OpenvSwitch or Linux Bridge) for layer-2 switching, then through the physical NIC of the compute host, and then leave the compute host.  Hence, the path of the packets is:

 

packet_path.png


When Single Root I/O Virtualization (SR-IOV) and PCI passthrough are deployed in OpenStack, the packets from the nova instance do not use the virtual switch (OpenvSwitch or Linux Bridge).  They go directly through the physical NIC and then leave the compute host, thereby bypassing the virtual switch.  In this case, the physical NIC does the layer-2 switching of the packets.  Hence, the path of the packets is:


sr-iov.png


A virtual switch (OpenvSwitch or Linux Bridge) may also be used with SR-IOV if an instance needs a virtual switch for layer-2 switching.  A virtual switch is implemented in software and does not scale as well as a physical NIC when the packet rate is high.


In order to deploy SR-IOV on Cisco's UCS servers, we need the UCSM ml2 neutron plugin developed by Cisco.  Here are a few links about the UCSM ml2 neutron plugin:


ML2 Mechanism Driver for Cisco UCS Manager — Neutron Specs ed6fc9a documentation

OpenStack/UCS Mechanism Driver for ML2 Plugin Liberty - DocWiki

Cisco UCS Manager ML2 Plugin for Openstack Networking : Blueprints : neutron


SR-IOV needs specific NICs and currently works only on the following Cisco VICs:


  • Cisco VIC 1380 on UCS B-series M4 blades
  • Cisco VIC 1340 on UCS B-series M4 blades
  • Cisco VIC 1240 on UCS B-series M3 blades


Here is the Cisco VIC 1380 in the UCSM GUI:


vic1.png


vic2.png


Below are the neutron and nova configurations needed to enable SR-IOV in OpenStack:


In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:


[sriov_nic]
physical_device_mappings = physnet2:eth3
exclude_devices =


In /etc/nova/nova.conf:


[default]
pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
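
In addition, the nova scheduler typically needs the PCI passthrough filter enabled so that instances requesting SR-IOV ports are scheduled on hosts with matching VFs (a sketch; the rest of the filter list depends on your deployment, and on newer releases this option lives under [filter_scheduler] as enabled_filters):

[DEFAULT]
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter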


Pass /etc/neutron/plugins/ml2/ml2_conf_sriov.ini as --config-file and start neutron-server.


$ neutron-server \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugin.ini \
  --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini


Pass /etc/neutron/plugins/ml2/ml2_conf_sriov.ini as --config-file and start neutron-sriov-nic-agent.


$ neutron-sriov-nic-agent \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini


OpenStack Docs: Using SR-IOV functionality

OpenStack Docs: Attaching physical PCI devices to guests


Not all OpenStack images can be booted with an SR-IOV port.  The image must have the following ENIC driver, which interacts with the VF (Virtual Function), for SR-IOV to work.


$ modinfo enic

filename:       /lib/modules/3.10.0-327.10.1.el7.x86_64/kernel/drivers/net/ethernet/cisco/enic/enic.ko

version:        2.1.1.83

license:        GPL

author:         Scott Feldman <scofeldm@cisco.com>

description:    Cisco VIC Ethernet NIC Driver

rhelversion:    7.2

srcversion:     E3ADA231AA76168CA78577A

alias:          pci:v00001137d00000071sv*sd*bc*sc*i*

alias:          pci:v00001137d00000044sv*sd*bc*sc*i*

alias:          pci:v00001137d00000043sv*sd*bc*sc*i*

depends:      

intree:         Y

vermagic:       3.10.0-327.10.1.el7.x86_64 SMP mod_unload modversions

signer:         Red Hat Enterprise Linux kernel signing key

sig_key:        E3:9A:6C:00:A1:DE:4D:FA:F5:90:62:8C:AB:EC:BC:EB:07:66:32:8A

sig_hashalgo:   sha256

 

Create an SR-IOV neutron port by using the argument "--binding:vnic-type direct".  Note the port ID in the output below.  It will be needed to boot the nova instance and attach it to the SR-IOV port.

 

neutron port-create provider_net --binding:vnic-type direct --name=sr-iov-port

 

neutron port-list

 

Boot a nova VM and attach the SR-IOV port to it.

 

nova boot --flavor m1.large --image RHEL-guest-image --nic port-id=<port ID of SR-IOV port> --key-name mc2-installer-key sr-iov-vm

 

nova list


Once the instance is up, ping it and SSH into it (security groups do not work for SR-IOV ports; please refer to the "Known limitations of SR-IOV ports" section at the end of this blog).  Make sure that you can ping the gateway of the instance.  The packets from the instance will not go through the virtual switch (OpenvSwitch or Linux Bridge).  They go directly through the physical NIC of the compute host and then leave the compute host, thereby bypassing the virtual switch.

 

Below are the steps to verify SR-IOV (VF) in the UCSM GUI.

 

u1.png

 

u2.png

 

Below are the steps to verify that the SR-IOV port does not use OpenvSwitch and bypasses it when sending/receiving packets:

 

1.  Find the compute node on which sr-iov-vm is running.

 

# nova show sr-iov-vm | grep host

| OS-EXT-SRV-ATTR:host                 | mc2-compute-12             |

| OS-EXT-SRV-ATTR:hypervisor_hostname  | mc2-compute-12             |

 

2.  Find the port ID of the SR-IOV neutron port.  d8379d3e-8877-4fec-8b50-ab939dbca797 is the port ID of the SR-IOV port in this case.

 

# neutron port-list | grep sr-iov-port

 

3.  SSH into the compute host (mc2-compute-12 in this case) and make sure that:

        1.  There is no interface with d8379d3e in its name on the compute host, and,

        2.  There is no interface with d8379d3e in the output of "ovs-vsctl show".

 

# ssh root@mc2-compute-12

[root@mc2-compute-12 ~]# ip a | grep d8379d3e

[root@mc2-compute-12 ~]#

[root@mc2-compute-12 ~]# ovs-vsctl show | grep d8379d3e

[root@mc2-compute-12 ~]#

 

The above outputs must be empty.  This shows that the SR-IOV port is not using OpenvSwitch for layer-2 switching and the physical NIC connected to the SR-IOV port does the layer-2 switching for packets from the instance sr-iov-vm.

 

On the other hand, in the case of an instance booted with a regular OVS (non-SR-IOV virtio) port, the above outputs will not be empty.  Below is an example of a regular OVS (non-SR-IOV virtio) port (ID f9936cb7-885d-451d-9fc2-a86a670be732) attached to an instance.  This OVS (non-SR-IOV virtio) port is part of br-int (the OVS integration bridge) in the output of "ovs-vsctl show".

 

# nova list

+--------------------------------------+-----------+--------+------------+-------------+----------------------------+

| ID                                   | Name      | Status | Task State | Power State | Networks                   |

+--------------------------------------+-----------+--------+------------+-------------+----------------------------+

| 206a42e9-fa07-4128-b276-4764beaf9a9b | test-vm   | ACTIVE | -          | Running     | mc2_provider=10.23.228.102 |

+--------------------------------------+-----------+--------+------------+-------------+----------------------------+

 

# neutron port-list | grep 10.23.228.102

| f9936cb7-885d-451d-9fc2-a86a670be732 |             | fa:16:3e:df:30:32 | {"subnet_id": "9258aa43-59b2-4f57-8207-b7d30e7963e0", "ip_address": "10.23.228.102"} |

 

SSH into the compute host running the instance.

 

# ssh root@mc2-compute-4

 

[root@mc2-compute-4 ~]# ip a | grep f9936cb7

14: qbrf9936cb7-88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP

15: qvof9936cb7-88@qvbf9936cb7-88: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master ovs-system state UP qlen 1000

16: qvbf9936cb7-88@qvof9936cb7-88: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master qbrf9936cb7-88 state UP qlen 1000

17: tapf9936cb7-88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master qbrf9936cb7-88 state UNKNOWN qlen 500

 

[root@mc2-compute-4 ~]# ovs-vsctl show

    Bridge br-int

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

        Port "qvof9936cb7-88"

            tag: 1

            Interface "qvof9936cb7-88"

 

Advantages of SR-IOV ports:

  • Faster packet processing as the virtual switch (OpenvSwitch or Linux Bridge) is not used.
  • Can be used for multicast, media applications, video streaming apps, and real time applications.


Known limitations of SR-IOV ports:

  • Security groups are not supported when using SR-IOV.
  • When using Quality of Service (QoS), max_burst_kbps (burst over max_kbps) is not supported.
  • SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must use the CLI or API to configure SR-IOV interfaces.
  • Live migration is not supported for instances with SR-IOV ports.
  • OpenStack Docs: SR-IOV

 

Hope this blog is helpful!

OpenStack cinder provides block storage for nova instances.  Cinder volumes are a great way to store files from a nova instance permanently.  Without a cinder volume, the data/files in a nova instance are ephemeral (temporary) and will be lost once the instance is deleted.  Sometimes, we may need to keep the data/files from an instance even after the instance is deleted.  Cinder volumes can also be used to share files among multiple instances in OpenStack.

 

This blog has the steps to:

 

  1. Create a cinder volume (using ceph backend)
  2. Attach this volume to a nova instance
  3. Check the attached volume in the instance
  4. Create an XFS filesystem/partition on this attached volume in the instance
  5. Mount the partitioned volume in the instance
  6. Write data into this mounted volume in the instance
  7. Unmount and detach the volume, and make sure that the data in the volume still exists even after the instance is deleted

 

1.  Create a cinder volume (using ceph backend):

 

Here are a few links about integrating OpenStack cinder with a ceph backend.

 

Block Devices and OpenStack — Ceph Documentation

http://superuser.openstack.org/articles/ceph-as-storage-for-openstack/

 

In Horizon, go to Project --> Compute --> Volumes --> Volumes --> Create Volume, enter all the info and click "Create Volume".

 

1.png

 

Make sure that the created volume is in "Available" state.

 

1.1.png
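
The same volume can also be created from the CLI (a sketch; the volume name and the 20 GB size here are assumptions matching the volume used later in this blog):

$ cinder create --name test-volume 20
$ cinder list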

 

Create a nova instance in Horizon.  Hope you know the steps for this; if not, please contact me at vhosakot@cisco.com.

 

Make sure that the instance is in "Active" state, can be pinged, and SSH'ed into with the right security groups (allow ICMP and SSH).

 

2.  Attach this volume to a nova instance:

 

Go to Project --> Compute --> Volumes --> Volumes --> <Name of the volume> --> Edit Volume --> Manage Attachments.

 

2.png

 

Select the name of the instance in the "Attach to Instance" drop-down and click "Attach Volume".

 

2.1.png

 

Make sure that the volume is in "In-use" state and note the path to which the volume is attached in the "Attached To" column.  Also, check the "Size" column and make sure that it is right.

 

2.2.png
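
The same attachment can also be done from the CLI (a sketch; the instance and volume names/IDs are placeholders):

$ nova volume-attach <instance-name-or-ID> <volume-ID> auto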

 

3. Check the attached volume in the instance:

 

SSH into the instance and run the following commands.

 

Check the volume at /dev/vdb and make sure that it is fine.

 

$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk 
|---vda1 253:1    0  40G  0 part /
vdb    253:16   0  20G  0 disk 

$ sudo parted /dev/vdb 'print'
Error: /dev/vdb: unrecognised disk label
Model: Virtio Block Device (virtblk)                                      
Disk /dev/vdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags: 

$ df -h /dev/vdb
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev

$ ls -l /dev/vdb
brw-rw----. 1 root disk 253, 16 Jun 29 05:25 /dev/vdb

$ file /dev/vdb
/dev/vdb: block special

 

This attached volume is new and a raw block device without any filesystem.  Hence, this attached volume must be partitioned and mounted before the instance can write data into the volume.  The filesystem/partition should be created only the first time a new volume is attached to an instance, and NOT every time the volume is re-attached to an instance.  Creating the filesystem will erase all the data in the volume.  So, create the filesystem/partition only if you want to erase all the data in the attached volume.

 

4.  Create an XFS filesystem/partition on this attached volume in the instance:


More information is at:

 

16.3. Accessing a Volume from a Running Instance

Deployment Guide — swift 2.12.1.dev28 documentation

 

$ sudo mkfs.xfs -f /dev/vdb
meta-data=/dev/vdb               isize=256    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 

Replace "xfs" above with "ext4" if an ext4 file system/partition is needed.
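
For example, the ext4 variant of the same step would be (this too erases any existing data on the volume):

$ sudo mkfs.ext4 /dev/vdb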

 

Check the partitioned XFS filesystem:

 

$ sudo parted /dev/vdb 'print'
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags: 
Number  Start  End     Size    File system  Flags
1      0.00B  21.5GB  21.5GB  xfs

 

5.  Mount the partitioned volume in the instance:

 

Mount the volume at /dev/vdb to /mnt/volume.

 

$ sudo mkdir -p /mnt/volume

$ sudo mount /dev/vdb /mnt/volume


 

Make sure that the volume is correctly mounted and the mounted partition's size (20 GB in this case) and file system (XFS) are correct.

 

$ lsblk

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

vda    253:0    0  40G  0 disk

|---vda1 253:1    0  40G  0 part /

vdb    253:16   0  20G  0 disk /mnt/volume

 

$ sudo parted /dev/vdb 'print'

Model: Virtio Block Device (virtblk)

Disk /dev/vdb: 21.5GB

Sector size (logical/physical): 512B/512B

Partition Table: loop

Disk Flags:

Number  Start  End     Size    File system  Flags

1      0.00B  21.5GB  21.5GB  xfs

 

$ df -h /dev/vdb

Filesystem      Size  Used Avail Use% Mounted on

/dev/vdb         20G   33M   20G   1% /mnt/volume

 

$ xfs_info /dev/vdb

meta-data=/dev/vdb               isize=256    agcount=4, agsize=1310720 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=0        finobt=0

data     =                       bsize=4096   blocks=5242880, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

log      =internal               bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

 

Now that the attached volume is partitioned and mounted, the instance can write data into the volume!

 

6.  Write data into this mounted volume in the instance:

 

Write data (two files test_file_on_volume and large_file.txt) into the mounted volume at /mnt/volume in the instance!

 

$ cd /mnt/volume

$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb         20G   33M   20G   1% /mnt/volume

$ ls

$ echo 'test test' | sudo tee test_file_on_volume
test test

$ ls -lrt
total 4
-rw-r--r--. 1 root root 10 Jun 29 05:42 test_file_on_volume

$ cat test_file_on_volume
test test

$ sudo dd if=/dev/urandom of=large_file.txt bs=102400 count=100
100+0 records in
100+0 records out
10240000 bytes (10 MB) copied, 0.773203 s, 13.2 MB/s

$ ls -lrt
total 10004
-rw-r--r--. 1 root root       10 Jun 29 05:42 test_file_on_volume
-rw-r--r--. 1 root root 10240000 Jun 29 05:42 large_file.txt

 

These two files test_file_on_volume and large_file.txt will be transferred over the network from the compute node (on which the instance is scheduled) to the ceph nodes and replicated across the ceph nodes.  These files should exist even after the instance is deleted.

 

NOTE: Data written to or read from the attached volume travels over the network.  Hence, the network (NICs) must be fast in order to scale.  I found that configuring jumbo frames (with MTU 9000 bytes) in OpenStack improves performance when a lot of data is quickly written by the instance into ceph.  Please refer to Jumbo Mumbo in OpenStack using Cisco's UCS servers and Nexus 9000 for steps to configure jumbo frames (with MTU 9000 bytes) in OpenStack.

 

7.  Unmount, detach the volume and make sure that the data in the volume still exists even after the instance is deleted:

 

Unmount the volume in the instance:

 

$ sudo umount /mnt/volume

$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
|---vda1 253:1    0  40G  0 part /
vdb    253:16   0  20G  0 disk

$ df /dev/vdb
Filesystem     1K-blocks  Used Available Use% Mounted on
devtmpfs         1922568     0   1922568   0% /dev

 

Detach the volume from the instance:

 

Go to Project --> Compute --> Volumes --> Volumes --> <Name of the volume> --> Edit Volume --> Manage Attachments.

 

Select the instance and click "Detach volume" --> "Detach Volume".

 

7.png

 

Make sure that the "Attached to" column is empty.

 

7.1.png

 

How to make sure that the data/files in the volume still exist even after the instance is deleted:

 

  • Attach this volume to a nova instance (step 2 above)
  • Check the attached volume in the instance (step 3 above)
  • DO NOT create the XFS filesystem/partition again (step 4 above).  The filesystem/partition should be created only the first time a new volume is attached to an instance, and NOT when the volume is re-attached to an instance.  Creating the filesystem will erase all the data in the volume.  So, create the filesystem/partition only if you want to erase all the data in the attached volume.
  • Mount the partitioned volume in the instance (step 5 above)
  • Read/write data in this mounted volume in the instance (step 6 above)
  • Unmount and detach the volume from the instance (step 7 above)

 

Re-attach and mount the volume (steps 2 and 3 above) to the same/different instance and the files test_file_on_volume and large_file.txt will still be seen at /mnt/volume in the instance:

 

$ ls /mnt/volume
test_file_on_volume  large_file.txt

 

NOTE:  Always unmount and detach the attached volume before deleting/terminating the instance.

 

To attach a cinder volume to an instance using the cinder and nova CLIs, please refer to OpenStack Docs: Launch an instance from a volume.

 

Hope this blog is helpful!

This is a blog about how to:

 

  1. Snapshot an OpenStack nova instance as a glance image
  2. Boot an instance from the snapshot (glance image)
  3. Download the snapshot (glance image) as a file onto disk
  4. Validate the downloaded snapshot file (image) using qemu-img
  5. Upload this file (image) on disk into OpenStack glance

 

1.  Snapshot an OpenStack nova instance as a glance image:

 

1.png
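
From the CLI, the same snapshot can be taken with a command like this (a sketch; the instance name test-vm is an assumption, and test-vm-snapshot matches the image name used below):

# nova image-create test-vm test-vm-snapshot --poll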

 

Make sure that this snapshot is seen as a glance image.

 

1.1.png

 

2.  Boot an instance from the snapshot (glance image):

 

2.png

 

Make sure that the flavor of the new instance is equal to or larger than the flavor of the instance that was snapshotted in step 1.

 

2.1.png

 

Make sure to allow ICMP and SSH (TCP port 22) in nova security groups.  Once the above instance is up, make sure you can ping and SSH into the instance.

 

3.  Download the snapshot (glance image) as a file onto disk:

 

# glance image-list

+--------------------------------------+--------------------------------+

| ID                                   | Name                           |

+--------------------------------------+--------------------------------+

| 56c4028c-57ab-4d12-8cf4-d96d874e90c3 | test-vm-snapshot               |

+--------------------------------------+--------------------------------+

 

# glance image-download 56c4028c-57ab-4d12-8cf4-d96d874e90c3 --file test-vm-snapshot

#

 

# ls -l test-vm-snapshot

-rw-r--r--. 1 root root 960692224 Jul 18 20:26 test-vm-snapshot

 

This downloaded image can now be backed up or transferred over the network.

 

4.  Validate the downloaded snapshot file (image) using qemu-img

 

The downloaded snapshot file (image) can be validated using:

 

qemu-img check test-vm-snapshot

qemu-img info test-vm-snapshot

 

Sorry, unfortunately I do not have the output of the qemu-img commands above.

 

qemu-img(1): QEMU disk image utility - Linux man page

 

5.  Upload this file (image) on disk into OpenStack glance

 

# glance image-create --name 'image-from-downloaded-snapshot' --disk-format qcow2 --container-format bare --file test-vm-snapshot --visibility public --progress

[=============================>] 100%

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         | 6c323b8eb0406d7f6097bfb821e9d68a     |

| container_format | bare                                 |

| created_at       | 2016-07-18T20:28:25Z                 |

| disk_format      | qcow2                                |

| id               | 44734abf-60c1-42ee-ac2a-c3018f43f003 |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | image-from-downloaded-snapshot       |

| owner            | 3f61ded7cb7b46d594cbff5899e54ead     |

| protected        | False                                |

| size             | 960692224                            |

| status           | active                               |

| tags             | []                                   |

| updated_at       | 2016-07-18T20:28:37Z                 |

| virtual_size     | None                                 |

| visibility       | public                               |

+------------------+--------------------------------------+

 

# glance image-list

+--------------------------------------+--------------------------------+

| ID                                   | Name                           |

+--------------------------------------+--------------------------------+

| 44734abf-60c1-42ee-ac2a-c3018f43f003 | image-from-downloaded-snapshot |

+--------------------------------------+--------------------------------+

 

This uploaded image will be seen in Horizon as well under "Images".  This image can now be used to boot a nova instance!

This is a blog about configuring jumbo frames in OpenStack in order to scale neutron and nova in a production datacenter using Cisco's UCS servers, UCS Fabric Interconnects and Nexus 9000.

 

In computer networking, MTU (Maximum Transmission Unit) is the size (in bytes) of the largest packet that can be transferred by an interface without IP fragmentation.

 

MTU is:

  • Fixed before transmitting data (Ethernet)
  • Negotiated during handshake/connect time (point-to-point serial links)
  • Dynamically determined on-the-fly while transmitting data

 

The default MTU is 1500 bytes.  Jumbo MTU is 9000 bytes.

 

Here is an IP packet.

 

mtu.png


IP fragmentation is a process that breaks datagrams into smaller pieces (fragments), so that packets can be formed that pass through a link with a smaller maximum transmission unit (MTU) than the original datagram size.  The fragments are reassembled by the receiving host.  IP fragmentation adds extra processing overhead on the interface.

 

Whether a packet may be fragmented is controlled by the DF (Don't Fragment) bit in the IP header.

 

IP Fragmentation (DF bit in IP header):


df.png

 

Here are the neutron configurations needed to enable jumbo MTU (9000 bytes) in OpenStack.

 

In neutron.conf:

    [DEFAULT]

    global_physnet_mtu = 9000

    advertise_mtu = true

 

In openvswitch_agent.ini:

    [ovs]

    bridge_mappings = provider1:eth1,provider2:eth2,provider3:eth3


In ml2_conf.ini:

    [ml2]

    physical_network_mtus = provider2:4000,provider3:1500

    path_mtu = 9000

 

Enable the DHCP MTU option (option 26, "interface MTU") in /etc/neutron/dnsmasq-neutron.conf.  The value advertised to instances should account for any encapsulation overhead; 1454 is a common value for tunneled tenant networks with a 1500-byte underlay, and a correspondingly larger value (for example, 8950 for VXLAN) can be advertised once the underlay MTU is 9000.

    dhcp-option-force=26,1454

 

Restart dnsmasq (DHCP server) on all network nodes.

 

The MTU values only apply to new network resources.  network_device_mtu in nova.conf is deprecated in OpenStack Juno.
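 

To confirm that newly created networks pick up the jumbo MTU, the mtu attribute of a fresh network can be checked with the neutron CLI.  A minimal sketch, assuming a Mitaka-or-newer setup where networks expose an mtu attribute (test-jumbo-net is just an example name):

# Create a new network and check the MTU that neutron assigned to it
neutron net-create test-jumbo-net
neutron net-show test-jumbo-net | grep mtu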

 

Use case to test jumbo frames in OpenStack:

 

  • Boot a nova instance
  • Create a cinder volume using ceph backend
  • Attach volume to instance and mount it
  • SSH into instance and write lots of data in this volume
  • Measure traffic drop on the OpenStack nodes and ceph nodes

 

Refer How to attach cinder/ceph XFS volume to a nova instance in OpenStack horizon for more information about the above steps in the use case.

 

Below are the steps to configure jumbo frames on the UCS Fabric Interconnects in Cisco's UCSM GUI:


ucsm1.png


ucsm2.png


Test jumbo MTU on the OpenStack nodes (controller and compute nodes):


# ip a | grep mtu

2: mx: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

3: t: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

4: p: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

7: br-inst: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN

8: br-prov: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN

9: br-int: <BROADCAST,MULTICAST> mtu 9000 qdisc noop state DOWN

10: phy-br-inst@int-br-inst: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

11: int-br-inst@phy-br-inst: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

12: phy-br-prov@int-br-prov: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

13: int-br-prov@phy-br-prov: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000

 

Send jumbo frames with 8972 bytes of ICMP payload using ping from the OpenStack nodes.  With the 8-byte ICMP header and the 20-byte IP header, this adds up to a 9000-byte IP packet.  These frames will not be fragmented because of -M do.

 

# ping -M do -s 8972 10.7.8.2

PING 10.7.8.2 (10.7.8.2) 8972(9000) bytes of data.

8980 bytes from 10.7.8.2: icmp_seq=1 ttl=64 time=0.118 ms

8980 bytes from 10.7.8.2: icmp_seq=2 ttl=64 time=0.066 ms

--- 10.7.8.2 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 3999ms

rtt min/avg/max/mdev = 0.062/0.082/0.118/0.022 ms

 

The -M option above selects the Path MTU Discovery strategy and can be:

  • do (prohibit fragmentation, even local one)
  • want (do PMTU discovery, fragment locally when packet size is large)
  • dont (do not set DF flag)

 

Test jumbo frames on Cisco's UCS Fabric Interconnects:

 

UCS-FI-A# connect nxos

UCS-FI-A(nxos)# show queuing interface | grep MTU

    q-size: 360640, HW MTU: 9216 (9216 configured)

 

Send jumbo frames with 9000 bytes as payload using ping from the UCS Fabric Interconnect.

 

UCS-FI-A(local-mgmt)# ping 10.23.223.20 count 5 packet-size 9000

PING 10.23.223.20 (10.23.223.20) from 10.23.223.45 : 9000(9028) bytes of data.

9008 bytes from 10.23.223.20: icmp_seq=1 ttl=255 time=0.741 ms

9008 bytes from 10.23.223.20: icmp_seq=2 ttl=255 time=0.796 ms

9008 bytes from 10.23.223.20: icmp_seq=3 ttl=255 time=0.740 ms

9008 bytes from 10.23.223.20: icmp_seq=4 ttl=255 time=0.775 ms

9008 bytes from 10.23.223.20: icmp_seq=5 ttl=255 time=0.814 ms

--- 10.23.223.20 ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4033ms

rtt min/avg/max/mdev = 0.740/0.773/0.814/0.034 ms

 

Configure and test jumbo frames on Cisco's Nexus 9000:


N9k# configure terminal

N9k(config)# interface Ethernet1/3

N9k(config-if)# mtu 9216

N9k(config-if)# end

 

N9k# show running-config interface ethernet 1/3

interface Ethernet1/3

  mtu 9216

 

N9k# show interface ethernet 1/3

Ethernet1/3 is up

  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec

 

Jumbo MTU can also be configured for port-channels.

 

Send jumbo frames with 9000 bytes as payload using ping from Nexus 9000.

 

N9k# ping 10.23.223.21 vrf management packet-size 9000 count 5

PING 10.23.223.21 (10.23.223.21): 9000 data bytes

9008 bytes from 10.23.223.21: icmp_seq=0 ttl=254 time=1.384 ms

9008 bytes from 10.23.223.21: icmp_seq=1 ttl=254 time=0.993 ms

9008 bytes from 10.23.223.21: icmp_seq=2 ttl=254 time=0.919 ms

9008 bytes from 10.23.223.21: icmp_seq=3 ttl=254 time=0.927 ms

9008 bytes from 10.23.223.21: icmp_seq=4 ttl=254 time=1.002 ms

--- 10.23.223.21 ping statistics ---

5 packets transmitted, 5 packets received, 0.00% packet loss

round-trip min/avg/max = 0.919/1.044/1.384 ms

 

Jumbo packet (with MTU 9000 bytes) captured in Wireshark packet sniffer:


jumbo_packet.jpg


Jumbo frames in OpenStack with neutron SR-IOV ports:

 

I observed that nova VMs attached to a neutron SR-IOV port will also pass jumbo MTU packets.  Below is how jumbo MTU is seen on the interface of a nova VM attached to a neutron SR-IOV port.

 

[cloud-user@sr-iov-vm ~]$ ip a | grep 9000

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000


Path MTU Discovery:

 

Path MTU Discovery (PMTUD) is a technique for determining the MTU of the network path between a source and a destination.  PMTUD works by setting the Don't Fragment (DF) flag bit in the IP headers of outgoing packets.  Any device along the path whose MTU is smaller than the packet will drop it and send back an Internet Control Message Protocol (ICMP) Fragmentation Needed (Type 3, Code 4) message containing its MTU, allowing the source host to reduce its path MTU appropriately.  This process is repeated until the MTU is small enough to traverse the entire path without fragmentation.
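 

A quick way to watch PMTUD from a Linux host is tracepath, which probes the path and reports the discovered path MTU hop by hop.  A minimal sketch, reusing the destination from the earlier ping tests:

# Probe the path MTU; tracepath reports "pmtu 9000" when every hop
# along the path supports jumbo frames
tracepath 10.7.8.2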

 

Advantages of jumbo frames:

 

  • Greater efficiency (better payload-to-header ratio)
  • Fewer packet drops
  • Interfaces process fewer packets per second

 

Disadvantages of jumbo frames:

 

  • Slower per-packet processing since each packet is big (9000 bytes)
  • Bigger packets on the wire may increase lag and latency
  • A corrupted packet forces a large retransmission and can add congestion

In the movie Ghostbusters, Zuul is the demigod and gatekeeper of Gozer, the destructor.

 

zuul.png

 

In OpenStack, Zuul is a program used to gate the source code repository of a project so that changes are only merged if they pass tests.  Zuul is a pipeline-oriented project gating system that facilitates running tests and automated tasks in response to Gerrit events.

 

Zuul - A Project Gating System — Zuul 2.5.2.dev44 documentation

Zuul — OpenStack Project Infrastructure 0.0.1.dev12032 documentation

GitHub - openstack-infra/zuul

 

Components of Zuul:

  • Connections (Gerrit, SMTP)
  • Triggers (Gerrit, Timer, Zuul)
  • Reporters (Gerrit, SMTP)
  • Zuul Cloner
  • Launchers (Gearman Jenkins Plugin)
  • Statsd reporting (Metrics)
  • Zuul Client

 

Zuul architecture:


arch1.png

arch2.png


Zuul-Jenkins Integration:

 

The Gearman Jenkins Plugin makes it easy to use Jenkins with Zuul by providing an interface between Jenkins and Gearman

(https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin).

 

Gearman Plugin passes Zuul Parameters as Jenkins build parameters (http://docs.openstack.org/infra/zuul/launchers.html#zuul-parameters).


Example Jenkins Git SCM plugin configuration:

 

Source Code Management:

    Git Repositories:

         Repository URL: <your Gerrit or Zuul repository URL>

        Advanced:

        Refspec: $ZUUL_REF

        Branches to build:

        Branch Specifier: $ZUUL_COMMIT

    Advanced:

        Clean after checkout: True

 

Zuul v3 supports multi-node.  Example multi-node playbook:


### openstack-infra/zuul-playbooks/zuul-multinode.yaml

---
- hosts: controller
  roles:
    - zuul-controller

- hosts: compute
  roles:
    - zuul-compute

 

To deploy Zuul on bare-metal, devstack-vm-gate-wrap.sh must be replaced with bare-metal orchestration.

 

devstack-vm-gate-wrap.sh - openstack-infra/devstack-gate - Run DevStack in the gate

 

OpenStack Kolla deploys OpenStack cloud in Docker containers in gate using Zuul.

 

Kolla - OpenStack

project-config/kolla.yaml at master · openstack-infra/project-config · GitHub

 

Other useful links needed to deploy Zuul:


This blog has the steps to stack DevStack Newton on CentOS-7 in VirtualBox on Mac laptop.

 

Below are the versions used.

 

VirtualBox is installed on Mac laptop and the CentOS-7 VM is created in VirtualBox.

 

In VirtualBox, the CentOS-7-x86_64-Minimal-1511.iso image is used to boot a CentOS-7 VM with 4 GB RAM and the following two network adapters.  A host-only adapter is not needed.

  • eth0 as a NAT adapter
  • eth1 as an internal network adapter

 

ram.png

 

eth0 is a NAT adapter.

 

nat.png

 

eth1 is an internal network adapter.

 

intnet.png

 

Run the following bash script to configure VirtualBox.  It will forward the required TCP ports from the host (Mac laptop) to the guest (CentOS-7 VM) and will also create eth1 as an internal network adapter.

 

#!/bin/bash
# Forward TCP port 3022 on host to TCP port 22 on guest VM so
# that host can SSH into guest VM
if ! VBoxManage showvminfo devstack-odl | grep 3022 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "SSH,TCP,,3022,,22"
fi
# Forward TCP port 8080 on host to TCP port 80 on guest VM so
# that host can access OpenStack Horizon in browser
if ! VBoxManage showvminfo devstack-odl | grep 8080 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "HTTP,TCP,,8080,,80"
fi
# Forward TCP port 6080 on host to TCP port 6080 on guest VM so
# that host can access Nova VNC console in browser
if ! VBoxManage showvminfo devstack-odl | grep 6080 > /dev/null
then
    VBoxManage modifyvm devstack-odl --natpf1 "CONSOLE,TCP,,6080,,6080"
fi
# Add internal network adapter for guest VM
if ! VBoxManage showvminfo devstack-odl | grep eth1 > /dev/null
then
    VBoxManage modifyvm devstack-odl --nic2 intnet
    VBoxManage modifyvm devstack-odl --intnet2 "eth1"
fi
# Remove stale entry in ~/.ssh/known_hosts on host
if [ -f ~/.ssh/known_hosts ]; then
    sed -i '' '/\[127.0.0.1\]:3022/d' ~/.ssh/known_hosts
fi
 

Below are the forwarded ports through eth0 (NAT interface) in VirtualBox.  Host is the Mac laptop and the guest VM is CentOS-7 booted in VirtualBox.

  • TCP port 3022 on host is forwarded to TCP port 22 on guest VM so that host can SSH into guest VM.
  • TCP port 8080 on host is forwarded to TCP port 80 on guest VM so that host can access OpenStack Horizon in browser.
  • TCP port 6080 on host is forwarded to TCP port 6080 on guest VM so that host can access Nova VNC console in browser.

 

Below is the screenshot of the forwarded ports through eth0 (NAT interface) in VirtualBox.

 

ports1.png

ports.png

 

Now, boot the CentOS-7 VM in VirtualBox.  Choose "VDI" as the disk format for the VM.

 

When booting the CentOS-7 VM in VirtualBox, press the Tab key and type the following kernel boot options.  This keeps the interface names as eth0 and eth1 in the CentOS-7 VM instead of enp0s*.

 

net.ifnames=0 biosdevname=0

 

Once the CentOS-7 VM boots, log in to it and check the interfaces ("ip a" or "ifconfig").  eth0 will have an IP address like 10.0.2.15 and eth1 will not have any IP address.  Check the default gateway ("ip route"); 10.0.2.2 will be the default gateway.  Make sure that you can ping public DNS names like www.google.com and www.cisco.com.

 

Below are the output snippets of "ip a" and "ip route" inside the CentOS-7 VM.

 

$ ip a

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 08:00:27:60:77:7e brd ff:ff:ff:ff:ff:ff


$ ip route

default via 10.0.2.2 dev eth0  proto static  metric 100

10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15  metric 100

192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1

 

From the Mac laptop, SSH into the CentOS-7 VM using the forwarded port 3022.  Use the root password to login.

 

ssh -p 3022 root@127.0.0.1

 

Clone the DevStack Newton repository.

 

git clone https://git.openstack.org/openstack-dev/devstack -b stable/newton

 

Create the stack user for DevStack.  Alternatively, you can use useradd and passwd to create the stack user manually and give it sudo access by running visudo, adding "stack   ALL=(ALL)   ALL" under "root    ALL=(ALL)   ALL", and saving the file (a minimal sketch of this manual approach is shown after the commands below).

 

cd devstack

./tools/create-stack-user.sh

su stack

whoami

echo $HOME

cd

pwd

exit

exit
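 

For reference, below is a minimal sketch of the manual alternative to create-stack-user.sh mentioned above.  It assumes the standard /opt/stack home directory used by DevStack and uses a sudoers.d drop-in rather than editing the sudoers file with visudo; adjust to your environment.

# Create the stack user with /opt/stack as its home directory
useradd -s /bin/bash -d /opt/stack -m stack
passwd stack

# Give the stack user password-less sudo access (the form DevStack expects)
echo "stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack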

 

Copy the local.conf file below to the devstack directory.  It enables the OpenStack core services (Horizon, Keystone, Nova, Neutron and Glance) along with RabbitMQ and MySQL.  It uses Open vSwitch (OVS) as the virtual switch and VLANs for tenant networks.

 

[[local|localrc]]
OFFLINE=True
HORIZON_BRANCH=stable/newton
KEYSTONE_BRANCH=stable/newton
NOVA_BRANCH=stable/newton
NEUTRON_BRANCH=stable/newton
GLANCE_BRANCH=stable/newton

ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
LOGDIR=$DEST/logs
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,horizon

# Neutron
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
PUBLIC_INTERFACE=eth0
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
 

Now, exit and SSH back in as the stack user into the CentOS-7 VM.

 

ssh -p 3022 stack@127.0.0.1


Run stack.sh and stack DevStack Newton.

 

./stack.sh

 

Below is the output of stack.sh once it finishes.

 

This is your host IP address: 10.0.2.15

This is your host IPv6 address: ::1

Horizon is now available at http://10.0.2.15/dashboard

Keystone is serving at http://10.0.2.15/identity/

The default users are: admin and demo

The password: nomoresecret

 

Add security group rules to the "default" security group in order to SSH into and ping nova VMs.  This can be done in Horizon as well.

 

cd devstack
source openrc
openstack security group rule create --protocol tcp --dst-port 22 default
openstack security group rule create --protocol icmp --dst-port -1 default
 

Check the two neutron networks and the neutron router created by DevStack.

 

net-list.png

router-list.png

 

On the Mac laptop, enter http://localhost:8080 in a browser to access the Horizon GUI.  Login as admin and use nomoresecret as the password.  After logging into Horizon, in the upper left, select "demo" as the project.

 

Boot a nova VM (test-vm) using the cirros image and the m1.tiny flavor, and attach it to the private network.  Also, create a floating IP in the public network and associate it with the nova VM.  We can now SSH into the nova VM using the floating IP!
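 

Besides Horizon, the floating IP steps can also be done with the CLI.  A hedged sketch using the openstack client; 172.24.4.7 is the floating IP shown below, and your cloud will allocate its own address:

# Allocate a floating IP from the public network and attach it to test-vm
openstack floating ip create public
openstack server add floating ip test-vm 172.24.4.7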

 

ssh cirros@172.24.4.7

password: cubswin:)

$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000

    link/ether fa:16:3e:28:05:85 brd ff:ff:ff:ff:ff:ff

    inet 10.0.0.9/24 brd 10.0.0.255 scope global eth0

    inet6 fdc7:3411:13c6:0:f816:3eff:fe28:585/64 scope global dynamic

       valid_lft 86375sec preferred_lft 14375sec

    inet6 fe80::f816:3eff:fe28:585/64 scope link

       valid_lft forever preferred_lft forever


172.24.4.7 above is the floating IP in the public network associated to test-vm and 10.0.0.9 is the private IP address of test-vm.

 

fip.png

 

Below is the network topology of the nova VM test-vm attached to the private network.  DevStack creates the public network, private network and the neutron router router1 that connects them.

 

topo.png

 

In Horizon, go to Project --> Compute --> Instances --> test-vm --> Console and right click on the link "Click here to show only console" and click on "Open Link in New Tab".

 

new_tab.png


Change the IP address in the new tab's address bar from 10.0.2.15 to localhost, since VirtualBox forwards TCP port 6080 from the host (Mac laptop) to the guest (CentOS-7 VM).  The nova VM's console can now be accessed in Horizon!


vnc_address.png


vnc_console.png


If a Cirros image is used to boot the nova VM, use the default credentials below to log in to the console.


username: cirros

password: cubswin:)


In the CentOS-7 VM, the devstack logs will be in /opt/stack/logs.  Run ./unstack.sh to unstack DevStack.

 

The VirtualBox VDI file will be at /Users/<userID>/VirtualBox\ VMs/ on the Mac laptop.  This VDI file is portable and can be re-used to boot the CentOS-7 VM in VirtualBox on any machine.

 

Hope this blog is helpful!

It is nearly here, the OpenStack Summit in Barcelona, October 24-28. As always, the agenda is packed with great sessions, including in depth tutorials and hands on workshops. I have installed the mobile app and started building out my schedule. With so many great sessions from which to choose, it is hard to find time for everything. With that in mind, I have something that may be a big help to you, a consolidated listing of the sessions offered by Cisco and DevNet.

 

Tuesday, 25 October (Time | Track | Title | Speaker(s)):

  • 11:15 a.m. - 11:30 a.m. | #vBrownBag | Colusa - Cisco UCS C3260 and Ceph for Object Stores | Oliver Wasdorf, Cisco
  • 12:15 p.m. - 12:55 p.m. | How to Contribute | The Path to Becoming an AUC (Active User Contributor) | Maish Saidel-Keesing, Cisco; Shamail Tahir, IBM
  • 1:45 p.m. - 1:59 p.m. | #vBrownBag | A Scalable Neutron ML2 Mechanism Driver for the VPP Platform | Naveen Joy, Cisco
  • 5:05 p.m. - 5:45 p.m. | Networking | Deploying IPv6 in OpenStack Environments | Shannon McFarland, Cisco
  • 5:05 p.m. - 5:45 p.m. | Operations War Stories | BrokenStack: OpenStack Failure Stories | Jonathan Kelly, Cisco; Jason Grimm, Cisco; Chris Riviere, Cisco

Wednesday, 26 October (Time | Track | Title | Speaker(s)):

  • 12:15 p.m. - 12:55 p.m. | Operations War Stories | RabbitMQ at Scale, Lessons Learned | Matthew Popow, Cisco; Wei Tie, Cisco; Weiguo Sun, Cisco
  • 12:30 p.m. - 12:44 p.m. | #vBrownBag | Hands on with Containerized Deployment of OpenStack | Charles Eckel, Cisco
  • 2:15 p.m. - 2:55 p.m. | Containers | OpenStack is an Application! Deploy and Manage Your Stack with Kolla-Kubernetes | Ryan Hallisey, Red Hat; Ken Wronkiewicz, Cisco; Michał Jastrzębski, Intel
  • 2:15 p.m. - 2:55 p.m. | How To & Best Practices | QoS QoS Baby | Anne McCormick, Cisco; Robert Starmer, Kumulus Technologies; Alka Sathnur, Cisco
  • 3:00 p.m. - 3:15 p.m. | #vBrownBag | Achieving Significant Scale in an Incredibly Short Amount of Time - What We Learned from OSIC Scale Testing | Steven Dake, Cisco
  • 5:05 p.m. - 5:45 p.m. | Cloud App Development | Deploying Apps on OpenStack: Things We Found While Playing Around | Anne Gentle, Cisco; Hart Hoover, Cisco
  • 5:05 p.m. - 5:45 p.m. | How To & Best Practices | Put Applications/NFV Performance Optimization Intelligence Into Your Cloud | Ian Zhang, Cisco; Liping Mao, Cisco
  • 5:55 p.m. - 6:35 p.m. | Cloud Models & Economics | Bite Off More Than You Can Chew, Then Chew It: OpenStack Consumption Models | Tyler Britten, Blue Box; Walter Bentley, Rackspace; Jonathan Kelly, Cisco

Thursday, 27 October (Time | Track | Title | Speaker(s)):

  • 9:00 a.m. - 9:40 a.m. | Sponsored Talks | Metacloud -- OpenStack for the Enterprise | Jason Grimm, Cisco
  • 9:50 a.m. - 10:30 a.m. | Sponsored Talks | Accelerating NFV Deployments on OpenStack | Naren Narenda, Cisco
  • 10:45 a.m. - 10:59 a.m. | #vBrownBag | Jumbo Mumbo in OpenStack | Vikram Hosakote, Cisco
  • 11:00 a.m. - 11:40 a.m. | Sponsored Talks | OpenStack and the Cisco Next-Generation Data Center: Customer Stories from BBVA, KazTransCom, SAP, and Standard Bank of South Africa | Mike Cohen, Cisco; Martin Klein, SAP; Cesar Martinez Segura, BBVA; Brenda Mey, SBSA; Maxim Popov, KazTransCom
  • 11:50 a.m. - 12:30 p.m. | Sponsored Talks | ML2/VPP -- Blazingly Fast Networking for Neutron | Ian Wells, Cisco; Jérôme Tollet, Cisco
  • 11:50 a.m. - 12:30 p.m. | IT Strategy | The Latest in the Container World and the Role of Container in OpenStack | Anni Iai, Huawei; Steven Dake, Cisco; Gal Sagie, Open Source Software Architect; Kenji Kaneshige, Fujitsu; Todd Moore, IBM
  • 1:50 p.m. - 2:30 p.m. | Sponsored Talks | Containerization of OpenStack Services at Cisco and SAP to Gain Operational Advantage | Lew Tucker, Cisco; Steven Dake, Cisco; Michael Schmidt, SAP
  • 3:30 p.m. - 4:10 p.m. | IT Strategy | Cloud Strategies for Greater Business Impact | Enrico Fuiano, Cisco
  • 3:30 p.m. - 4:10 p.m. | Project Updates | Horizon Project Overview | David Lyle, Intel; Rob Cresswell, Cisco
  • 4:40 p.m. - 5:20 p.m. | Telecom & NFV Strategy | Gluon - Accelerating Development of New Networking Services for OpenStack | Tobias Ford, AT&T; Vincent Button, Nokia Alcatel-Lucent; Ian Wells, Cisco; Marco Rodrigues, Juniper Networks; Jeff Collins, Ericsson
  • 5:30 p.m. - 6:10 p.m. | Community Building | Making Meetup Magic: Growing the OpenStack Community Through Local Events | Gary Kevorkian, Cisco; Kenneth Hui, Rackspace; Tassoula Kokkoris, IBM; Lisa-Marie Namphy, HPE

 

 

Hopefully this helps you identify and prioritize the sessions that are most important to you. Worst case, if you get double booked and miss something, we will pull together the slides and recordings from all of these sessions so you can catch what you missed or relive a session that was particularly helpful. You can access sessions from previous OpenStack Summits and several other past events on the Videos page of our OpenStack site within Cisco DevNet. We put this site together to help you keep abreast of Cisco's involvement with OpenStack, including our contributions to OpenStack, how we use OpenStack within our products and solutions, and easy access to plugins and drivers, developer-related resources, events, and more.

 

Travel safe and see you in Barcelona!

The blog about Cisco's scalable CPNR DHCP and DNS neutron plugin for OpenStack is at

 

Cisco's scalable, enterprise-class DHCP and DNS solution for OpenStack Neutron

Vikram Hosakote

Best REST in OpenStack

Posted by Vikram Hosakote Sep 13, 2016

This is a blog about the best practices to develop, test, debug and scale REST APIs in an OpenStack cloud.  OpenStack is one of the most popular open-source projects for deploying highly scalable clouds in data centers.  OpenStack is all about RESTful services: every service in OpenStack talks to every other service using REST APIs.  As a cloud operator deploying OpenStack, it is very important to understand how REST APIs work in order to debug and scale an OpenStack cloud effectively.

 

What is a RESTful service?

 

  • Representational State Transfer (REST) is a stateless, client-server based, cacheable web service architecture
  • Uses the HTTP protocol for communication
  • Data is considered a “resource” and accessed using Uniform Resource Identifiers (URIs) that typically look like a web link
  • URIs are defined in RFC 3986 (RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax)
  • Uses HTTP clients like curl, wget and browser tools (for example, Postman in Google Chrome)
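 

As a concrete example, here is a hedged sketch of calling the OpenStack Networking (Neutron) REST API directly with curl.  It assumes neutron is listening on its default port 9696 on a host named controller and that $TOKEN holds a valid Keystone token:

# List all networks through the Neutron v2.0 REST API
curl -s http://controller:9696/v2.0/networks \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"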

 

REST in OpenStack

 

OS.png

 

 

I gave a talk "Best REST in OpenStack" at Cisco Live in Las Vegas in 2016.  My talk covered the following topics:

 

  • Run OpenStack Neutron CLIs and analyze REST packets in WireShark
  • REST for big data use cases – REST pagination in Neutron
  • Implementing a RESTful server using Python Flask

 

The slides from my talk are at http://www.slideshare.net/Vikram_Hosakote/best-rest-in-openstack.

 

The scripts I used in my talk are at GitHub - vhosakot/Cisco-Live-Workshop: Scripts for Cisco Live Workshops.

 

Troubleshooting / debugging REST APIs

 

 

Sample script to troubleshoot / debug REST APIs using Python

 

# Python 2 example: httplib is the Python 2 name of this module
# (it is http.client in Python 3)
import httplib
import logging
import requests

# Enable verbose HTTP-level and urllib3 debug output for requests
httplib.HTTPConnection.debuglevel = 1
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

result = requests.get('http://www.cisco.com/')
Output of above Python script with REST API debug messages


>>> result = requests.get('http://www.cisco.com/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): www.cisco.com
send: 'GET / HTTP/1.1\r\nHost: www.cisco.com\r\nConnection: keep-alive\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.10.0\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: Apache
header: ETag: "10839-53789488e5b33"
header: Accept-Ranges: bytes
header: Content-Encoding: gzip
header: CDCHOST: wemxweb-publish-prod1-01
header: Content-Length: 14215
header: Content-Type: text/html
header: Expires: Wed, 13 Jul 2016 20:25:27 GMT
header: Cache-Control: max-age=0, no-cache, no-store
header: Pragma: no-cache
header: Date: Wed, 13 Jul 2016 20:25:27 GMT
header: Connection: keep-alive
header: access-control-allow-origin: *
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 14215


>>> print result
<Response [200]>
 

Sample script to mock a REST server to unit-test Python REST APIs

 

$ python

>>> import requests
>>> import requests_mock
>>> session = requests.Session()
>>> adapter = requests_mock.Adapter()
>>> session.mount('mock', adapter)
>>> adapter.register_uri('GET', 'mock://test.com', text='data')
>>> resp = session.get('mock://test.com')
>>> resp.status_code, resp.text
(200, 'data')
 

https://github.com/openstack/requests-mock

https://pypi.python.org/pypi/mock-server/0.3.7

https://github.com/gabrielfalcao/HTTPretty

http://python-mock-tutorial.readthedocs.io/en/latest/mock.html

 

REST API caching and cache-aware REST clients

 

RFC 7234 - https://tools.ietf.org/html/rfc7234#section-5

REST supports the following HTTP cache-control headers:

  • max-age
  • max-stale
  • min-fresh
  • no-cache
  • no-store
  • no-transform
  • only-if-cached
  • must-revalidate
  • public
  • private
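
 

For example, a cache-aware client can express these preferences purely through request headers.  A minimal sketch with curl (the URL is just a placeholder):

# Accept a cached copy that is at most 60 seconds old
curl -H "Cache-Control: max-age=60" http://www.cisco.com/

# Insist that caches revalidate and return a fresh response
curl -H "Cache-Control: no-cache" http://www.cisco.com/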


cache.png

 

Bulk REST operations and REST API batching


Bulk REST operations allow us to perform REST operations (GET, POST, PUT, DELETE) on multiple objects (networks, ports, subnets, etc.) at once.  This reduces the number of REST messages between the client and the server.

 

http://developer.openstack.org/api-ref-networking-v2.html#bulkCreateNetwork

https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Bulk_version

 

Bulk REST API JSON request format to create five neutron networks in OpenStack using POST operation:


POST v2.0/networks.json 
Content-Type: application/json 
Accept: application/json 
{ 
"networks": [ 
   { 
     "name": "sample_network_1", 
     "admin_state_up": false 
   }, 
   { 
     "name": "sample_network_2", 
     "admin_state_up": false 
   },
   { 
     "name": "sample_network_3", 
     "admin_state_up": true 
   },
   { 
     "name": "sample_network_4", 
     "admin_state_up": true 
   },
   { 
     "name": "sample_network_5", 
     "admin_state_up": false 
   }] 
}
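
 

A hedged sketch of sending this bulk request with curl, assuming the JSON above is saved as networks.json, neutron is listening on its default port 9696 on a host named controller, and $TOKEN holds a valid Keystone token:

# POST the bulk network-create request to neutron
curl -X POST http://controller:9696/v2.0/networks \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -d @networks.json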

As an update to the OpenStack Learning Labs in the Cisco DevNet site, a new version of the lab including OpenStack Mitaka deployed using Kolla is now available.

 

OpenStack Kolla provides tools to deploy OpenStack services as Docker containers and to upgrade easily. To help developers and admins wanting to explore Kolla and OpenStack, we now have an updated virtual machine image available in DevNet with the Mitaka version of OpenStack.

 

Visit the Cisco DevNet OpenStack site and launch the learning lab by clicking Learn -> OpenStack on your Laptop.

 

Feel free to post follow up questions and discussions here in the OpenStack DevNet Community.

amccormi

Heat CFN in OpenStack

Posted by amccormi Jun 21, 2016

Heat CloudFormation (CFN) orchestration in OpenStack provides the ability to use AWS-style:

  • APIs
  • templates
  • expressions within regular HOT templates

 

This means that operators coming from an AWS cloud environment to OpenStack can use their existing AWS APIs and templates to spin up tenant VMs, networks, etc., making the transition more seamless. For more information on Heat and Heat CFN, see https://wiki.openstack.org/wiki/Heat.

 

A very useful mechanism within AWS, also supported by Heat CFN, is AWS::CloudFormation::WaitCondition, which can be inserted between two virtual resource instantiations and causes the orchestration layer to wait until one resource is up and running before bringing up the next. This allows cloud orchestration to happen in a stable, predictable way. Further explanation and an example HOT template using an AWS::CloudFormation::WaitCondition can be found here: https://blog.zhaw.ch/icclab/manage-instance-startup-order-in-openstack-heat-templates/.

 

Interestingly, the OpenStack community is moving away from AWS APIs, and therefore from CFN. There are several reasons for this:

  • The AWS APIs have not been properly maintained within OpenStack
  • Heat CFN has a dependency on the Nova EC2 APIs, which have been deprecated
  • HOT templates now have native support for WaitCondition

 

See http://cloudscaling.com/blog/openstack/the-future-of-openstacks-ec2-apis/ for more information.


Going forward, a standalone EC2 API is being maintained in StackForge, so it will still be possible to use CFN; however, it will not be part of a standard deployment.

roagarwa

QoS in OpenStack

Posted by roagarwa Apr 1, 2016

Quality of Service (QoS) is defined as the ability to meet resource requirements in order to satisfy Service Level Agreements (SLAs) between the application/platform provider and end users.  In a cloud environment, multiple tenants share the same physical infrastructure, and QoS can span compute, network and storage.  Because of this multi-tenancy and resource sharing, cloud environments must deal with issues such as noisy neighbors and priority inversion, and must enforce policies so that resource utilization is efficient and predictable.


To implement QoS quantitatively, several aspects need to be considered, such as network bandwidth, throughput, loss and latency, CPU allocation, and disk throughput.  From a network point of view specifically, as traffic from video workloads coexists with other traffic traversing the data center network, QoS needs to be defined and enforced at multiple layers in order to meet the requirements of each flow.

 

The OpenStack platform provides multiple QoS options spread across multiple projects.

  • OpenStack Networking service, Neutron, provides support for limiting egress bandwidth for VMs with hypervisor-attached and SR-IOV ports, applied at the Neutron network and port level (see the sketch after this list).
  • OpenStack Block Storage service, Cinder, provides volume-type QoS specs for limiting throughput and IOPS at the hypervisor front-end or the storage back-end for volumes.
  • OpenStack Compute service, Nova, provides instance resource quotas for CPU, disk IO and network bandwidth at the hypervisor level.  Additionally, it provides options for CPU pinning and isolation to guarantee compute resources.
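
 

As an example of the Neutron option, below is a hedged sketch of creating a bandwidth-limit QoS policy and applying it to a port, using the Liberty/Mitaka-era neutron CLI (the policy name, rates and port ID are placeholders):

# Create a QoS policy with an egress bandwidth-limit rule
neutron qos-policy-create bw-limiter
neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300

# Apply the policy to an existing port
neutron port-update <port-id> --qos-policy bw-limiter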


The community continues to add QoS features, such as DSCP marking on Neutron networks and ports, which will soon be part of upcoming OpenStack releases (Newton).

chricker

Trying OpenStack Using Kolla

Posted by chricker Feb 25, 2016

At Cisco we have used OpenStack heavily internally as well as supported use of it in partnership with our customers for several years now. One lesson we've learned from our operational experience with OpenStack is that architectural approaches which facilitate management are essential to a successful experience operating OpenStack; OpenStack is a large distributed application which consists of dozens of loosely coupled services, each with their own software dependency stack. Managing those services and their dependencies through the entire lifecycle from deployment through update and upgrade while achieving desired operational metrics around availability poses interesting challenges.

 

One architecture we often use which helps simplify management of OpenStack is to containerize deployment of OpenStack. By containerizing each of the separate OpenStack services into a self-contained container housing that service and its required software stack, each service can be dealt with as a discrete unit. This greatly simplifies both flexibility in deployment as well as ongoing maintenance tasks such as updates and upgrades.

 

Within the OpenStack community, the Kolla Project provides tools which simplify creation of Docker containers for each of the separate OpenStack services, as well as orchestration to deploy those containers to achieve a functional, manageable OpenStack deployment. To help developers and admins wanting to explore Kolla and OpenStack, we now have a virtual machine image available in DevNet to try out OpenStack and Kolla.

 

The virtual machine image and documentation on how to install it are available now on our Box site:

CiscoLive Berlin 2016 Kolla Liberty VM - Box. On this site, you'll find two key files:

 

  • CentOS VirtualBox OpenStack Liberty.ova
    • This is the VirtualBox image. It is roughly 5.5 GB in size, so plan accordingly when you download it.
  • CentOS OpenStack Liberty Kolla Instructions 201602.docx
    • This is a document covering how to configure VirtualBox for the Kolla image, detailing how to set up your networks and other basic configurations that must be made for the virtual machine to work properly. This document also covers user name and password information and other details for accessing the OpenStack cloud running inside this VirtualBox image.

 

To get started exploring, download the VirtualBox image and follow its accompanying setup doc. Feel free to post followup questions and discussions here in the OpenStack DevNet Community. Special thanks to the Kolla Project and the larger OpenStack community for this fantastic way to deploy and manage a cloud!

We make heavy use of Docker internally for its fantastic tooling to help streamline and focus our processes that depend on containers. Recently we found the need to begin hacking on the Docker source to extend its APIs for improving our internal CI/CD workflow. During the development of our custom Docker APIs, I felt that the approach described in Docker’s documentation for setting up a development environment was great for those starting out with contributing, but it didn’t quite fit our needs for the following reasons:


  • all build-related processes are performed inside of a container, which didn’t exactly fit with our existing CM flow;
  • waiting for the dev container to build fresh slowed us down and bloated our dev environment by requiring Docker to build Docker source;
  • the development lifecycle is slightly encumbered due to the nature of needing the code to build Docker inside of a container – which can be solved using volumes, however we already had an established development workflow that uses Vagrant; and
  • when we’re ready to test in a production-like environment with our custom-built Docker, we needed an easy way to build and deploy in our internal cloud – we already had an established workflow for this, so consistency was important

 

To help alleviate our issues above, I decided to move the build steps outside of the dev container and directly into a RHEL 7.2 VM that we use for development (provisioned by Vagrant) – I use a Mac locally, but if you’re using RHEL directly you don’t necessarily need to use a VM. Please keep in mind that the process I use and describe below is for our internal workflows, and if you intend on contributing to Docker then you should be familiar with and follow Docker’s “Code contribution workflow” (https://docs.docker.com/opensource/code/).

 

The entire process to setup a development environment pretty much mirrors what happens during the building of Docker’s development container. It will install various package dependencies, statically build a few required libraries, and set up the environment for building with Go – the steps are easily repeatable and automated using your favorite CM tool.

 

Install dependencies

 

Install the required dependencies. You may need to tweak these depending on your environment.

 

yum install -y golang btrfs-progs btrfs-progs-devel glibc-static

 

Build statically linked libraries

 

Here we grab the source for lvm2 and sqlite3 so we can build them manually while enabling them for static linking.

 

# lvm2
git clone -b v2_02_103 https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2
cd /usr/local/lvm2
./configure --enable-static_link && make device-mapper && make install_device-mapper

# sqlite3
mkdir /usr/src/sqlite3
curl -sSL https://www.sqlite.org/2015/sqlite-autoconf-3081002.tar.gz | tar -v -C /usr/src/sqlite3 -xz
cd /usr/src/sqlite3/sqlite-autoconf-3081002/
./configure --enable-static && make && make install

ldconfig

 

Prep environment for building Go

 

In this step, we’re going to grab Docker source from github and place it in the standard spot for Go programs. Feel free to use your own forked code and relocate the source to where it makes sense in your environment.

 

# grab docker source
mkdir -p /go/src/github.com/docker
git clone -b v1.9.1 https://github.com/docker/docker /go/src/github.com/docker/docker

 

Build docker

 

And the final step is to build the docker binary from the source we cloned in the previous step.

 

cd /go/src/github.com/docker/docker
GOPATH=/go:/go/src/github.com/docker/docker/vendor hack/make.sh binary

 

The resulting binary should be available at bundles/1.9.1/binary/docker-1.9.1.
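 

When you want to try the freshly built binary on the development VM itself, one option is to swap it in for the packaged binary.  A hedged sketch; the paths assume the RHEL 7.2 VM described above with a systemd-managed docker service and the stock binary at /usr/bin/docker:

# Stop the running daemon, back up the packaged binary, and swap in
# the freshly built one
systemctl stop docker
cp /usr/bin/docker /usr/bin/docker.orig
cp bundles/1.9.1/binary/docker-1.9.1 /usr/bin/docker
systemctl start docker

# Confirm the daemon is running the custom build
docker version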

 

Wrapper script

 

I use a script similar to the following to quickly kick off various tasks while developing.

 

#!/bin/bash
export GOPATH=/go:/go/src/github.com/docker/docker/vendor
cd /go/src/github.com/docker/docker
hack/make.sh $@

 

I use it for building, linting, validating, running tests, etc. To find the available tasks, check the docker/hack/make directory; most of the scripts in there are actual commands that can be used with the provided make.sh or with the wrapper above.

 

./build_wrapper.sh validate-lint
./build_wrapper.sh validate-gofmt
TESTFLAGS='-test.run ^TestBuild$' ./build_wrapper.sh test-unit
./build_wrapper.sh binary
