OpenStack Mitaka and OpenDayLight Beryllium Integration

By Kasidit Chanchio, Vasinee Siripoon, Somkiat Kosolsombat

vasabilab

In this document, we refer to the reference architecture depicted in Figure 1 below, where four Ubuntu 14.04 machines run OpenStack Mitaka and one machine runs OpenDayLight (ODL) Beryllium. The gateway machine simulates the gateway of the network where the OpenStack and ODL hosts reside.

Figure 1. Reference architecture: four OpenStack Mitaka hosts and one OpenDayLight Beryllium host behind a common gateway.
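
For reference, the addresses used in the command examples throughout this document are:

  • 10.0.0.92: the ODL machine
  • 10.0.1.21: the network node's address on the data tunnel network
  • 10.0.1.31: the compute node's address on the data tunnel network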

We assume an OpenStack installation with neutron is already up and running. This document uses the “gre” tunneling configuration for neutron’s data tunnel network. The creation of this document was inspired by the contribution of Vinoth Kumar Selvaraj in [1]. We have also created:

The remainder of this document describes the OpenStack and OpenDayLight integration in 10 steps.
Step 1: ODL installation

On the ODL machine (IP 10.0.0.92), install Java, then download and install ODL Beryllium using the following commands.

$ sudo apt-get install openjdk-7-jdk
$ wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.4.2-Beryllium-SR2/distribution-karaf-0.4.2-Beryllium-SR2.tar.gz
$ tar xvfz distribution-karaf-0.4.2-Beryllium-SR2.tar.gz
$ cd distribution-karaf-0.4.2-Beryllium-SR2/
$ sudo ./bin/start
$ sudo ./bin/client -u karaf

In the ODL shell, install the following features:

opendaylight-user@root>feature:install odl-ovsdb-openstack
opendaylight-user@root>feature:install odl-dlux-core
opendaylight-user@root>feature:install odl-dlux-all

You can check what features were installed with the following commands.

opendaylight-user@root>feature:list | grep dlux
  < showing the list of features installed … >
opendaylight-user@root>feature:list | grep openstack
  < showing the list of features installed … >
opendaylight-user@root>
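
Besides the feature list, the odl-ovsdb-openstack feature should leave ODL listening on the OVSDB port (6640) and the OpenFlow port (6653), in addition to the northbound REST API on port 8080 used below. A quick, optional check on the ODL machine (assuming net-tools is installed):

$ sudo netstat -tlnp | grep -E '6640|6653|8080'
  < java should be listed as the process listening on these ports >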

Check the installation from one of the remote machines in the OpenStack pool, say the controller machine. The command below should return the following output, showing an empty network list on the ODL controller.

openstack@controller:~$ curl -u admin:admin http://10.0.0.92:8080/controller/nb/v2/neutron/networks
{
   "networks" : [ ]
}openstack@controller:~$

Step 2: Erase all VMs and network resources from OpenStack
Terminate all the VMs and network-related resources such as networks, subnets, routers, and floating IPs of every user from the OpenStack installation. You can do this via the Horizon dashboard or the command line interface.
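
If you want to confirm from the CLI that nothing is left, commands such as the following should all return empty lists (run them as the admin user; --all-tenants covers every user's VMs):

openstack@controller:~$ nova list --all-tenants
openstack@controller:~$ neutron floatingip-list
openstack@controller:~$ neutron router-list
openstack@controller:~$ neutron net-list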

Then, stop neutron with the following command:

openstack@controller:~$ sudo service neutron-server stop

Step 3: Remove neutron plugin on every network and compute node

Purge the neutron openvswitch agent and delete the existing openvswitch configuration, then restart the openvswitch service with a clean database.

openstack@network:~$ sudo apt-get purge neutron-openvswitch-agent
openvswitch-switch stop/waiting
openstack@network:~$ sudo service openvswitch-switch stop
openstack@network:~$ sudo rm -rf /var/log/openvswitch/
openstack@network:~$ sudo rm -rf /etc/openvswitch/conf.db
openstack@network:~$ sudo mkdir /var/log/openvswitch/
openstack@network:~$ sudo service openvswitch-switch start
openvswitch-switch start/running
openstack@network:~$ sudo ovs-vsctl show
2c63096f-74f3-46eb-9904-00305ef84106
    ovs_version: "2.3.1"
openstack@network:~$

Do the same on every network node and compute node.

Next, identify the IP address of the NIC on each machine that connects to the data tunnel network and set it in the openvswitch configuration. The example below shows the command for the network node, which has

2c63096f-74f3-46eb-9904-00305ef84106

for its openvswitch ID and

10.0.1.21

as its data tunnel network IP address.

openstack@network:~$ sudo ovs-vsctl set Open_vSwitch 2c63096f-74f3-46eb-9904-00305ef84106  other_config={'local_ip'='10.0.1.21'} 

Instruct openvswitch to make a connection to the OpenDayLight server with the command below.

openstack@network:~$ sudo ovs-vsctl set-manager tcp:10.0.0.92:6640

Again, do the same on every network and compute node. If your controller node is also a network node, you may have to do this on the controller as well.
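
To double-check that a node has picked up both settings, you can read them back from openvswitch. On the network node we expect:

openstack@network:~$ sudo ovs-vsctl get-manager
tcp:10.0.0.92:6640
openstack@network:~$ sudo ovs-vsctl get Open_vSwitch . other_config
{local_ip="10.0.1.21"}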

Step 4: Configure external network on the network node

Do the following on the network node.

openstack@network:~$ sudo ovs-vsctl add-br br-ex
openstack@network:~$ sudo ovs-vsctl add-port br-ex eth3
openstack@network:~$ sudo ovs-vsctl show
2c63096f-74f3-46eb-9904-00305ef84106
    Manager "tcp:10.0.0.92:6640"
        is_connected: true
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        Controller "tcp:10.0.0.92:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.1"
openstack@network:~$
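
If the link of eth3 (the NIC attached to br-ex in our setup) is not up yet, bring it up before continuing:

openstack@network:~$ sudo ip link set eth3 up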

Step 5: Configure ml2_conf.ini on the controller, network, and compute nodes

Next, modify the /etc/neutron/plugins/ml2/ml2_conf.ini file on

  • controller,
  • network,
  • compute,
  • compute1

On the controller node:

openstack@controller:~$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight 

[ml2_odl]
password = admin
username = admin
url = http://10.0.0.92:8080/controller/nb/v2/neutron
[ml2_type_gre]
tunnel_id_ranges = 1:1000

On the network nodes do the following. Note that, unlike the same file on the controller node above, this one adds the [ovs] and [agent] sections, assuming the controller node does not have compute capability and is not connected to the data tunnel network.

openstack@network:~$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight 

[ml2_odl]
password = admin
username = admin
url = http://10.0.0.92:8080/controller/nb/v2/neutron
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 10.0.1.21
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre

On the compute nodes modify ml2_conf.ini as follows. The example file below is for the compute node whose data tunnel network IP is 10.0.1.31. You have to change the “local_ip” parameter for the other compute nodes accordingly. (Please do compute1’s configuration by yourself.)

openstack@compute:~$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight 
[ml2_odl]
password = admin
username = admin
url = http://10.0.0.92:8080/controller/nb/v2/neutron
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 10.0.1.31
[agent]
tunnel_types = gre
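
After editing the three variants of this file, a quick way to spot typos is to grep the keys that differ from node to node (a plain sanity check, nothing ODL-specific):

openstack@compute:~$ grep -E '^(mechanism_drivers|url|local_ip|tunnel_types)' /etc/neutron/plugins/ml2/ml2_conf.ini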

Step 6: Configure l3_agent.ini on the network node

Next, set “external_network_bridge = br-ex” in the l3_agent.ini file of the network node. Note that, from our experience, you must complete Step 5 above before doing this step.

openstack@network:~$ sudo vi /etc/neutron/l3_agent.ini
…
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex
router_delete_namespaces = True
verbose = True

Step 7: Reset neutron database on controller node

Log in as a user on the controller node and run the following commands. They drop and recreate the neutron database, restore its privileges, and rebuild the schema for the new configuration.

openstack@controller:~$ source ./admin_openrc.sh 
openstack@controller:~$ mysql -u root -pmysqlpassword
MariaDB [(none)]> DROP DATABASE neutron;
Query OK, 157 rows affected (1.76 sec)

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'b6d473ff35f93e98e191';
Query OK, 0 rows affected (0.05 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'b6d473ff35f93e98e191';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
openstack@controller:$ 
openstack@controller:$ sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

The identifier

b6d473ff35f93e98e191

is the neutron database password you gave to OpenStack when you installed it.
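
If you want to confirm that the migration populated the schema, neutron-db-manage can also report the current revision:

openstack@controller:$ sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron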

Step 8: Restart neutron services on the controller, network, and compute nodes
Next, restart the neutron server on the controller node.

openstack@controller:$ sudo service neutron-server restart
neutron-server start/running, process 7669
openstack@controller:$

Then, restart services on the network node.

openstack@network:$ sudo service openvswitch-switch restart
openstack@network:$ sudo service neutron-l3-agent restart
openstack@network:$ sudo service neutron-dhcp-agent restart
openstack@network:$ sudo service neutron-metadata-agent restart

Also, you may want to restart services on compute and compute1 nodes.

openstack@compute:$ sudo service openvswitch-switch restart
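
After the restarts, you can check on the controller that the remaining neutron agents (L3, DHCP, and metadata) report as alive. Note that no openvswitch agent is listed anymore, since ODL now manages the switches directly:

openstack@controller:$ neutron agent-list
  < the L3, DHCP, and metadata agents should show :-) in the alive column >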

Step 9: Install the networking_odl python module

Next, do the following on the controller.

openstack@controller:$ sudo apt-get install git
openstack@controller:$ git clone https://github.com/openstack/networking-odl -b stable/mitaka
openstack@controller:$ cd networking-odl
openstack@controller:$ sudo python setup.py install
openstack@controller:$ sudo service neutron-server restart
neutron-server stop/waiting
neutron-server start/running, process 12502
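
You can also confirm that the module is importable by the python interpreter on the controller (a minimal check):

openstack@controller:$ python -c "import networking_odl; print(networking_odl.__file__)"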

Step 10: Verify installation

Next, create the router, networks, and subnets from the controller’s CLI, or use the Horizon web UI. We will create an external network and a private network in this example.

openstack@controller:$ . ./admin_openrc.sh 

openstack@controller:$ neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat

openstack@controller:$ neutron subnet-create ext-net 10.0.0.0/24 --name ext-subnet --allocation-pool start=10.0.0.100,end=10.0.0.200 --disable-dhcp --gateway 10.0.0.1

openstack@controller:$ neutron net-create demo-net

openstack@controller:$ neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet --gateway 192.168.1.1

openstack@controller:$ neutron router-create demo-router 

openstack@controller:$ neutron router-interface-add demo-router demo-subnet

openstack@controller:$ neutron router-gateway-set demo-router ext-net

Wait for 1 minute, and use ping to check if 10.0.0.100 is reachable.

openstack@controller:$ ping 10.0.0.100
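
If the ping gets no answer, one thing to look at on the network node is whether the router and DHCP namespaces were created (the qrouter-/qdhcp- prefixes are standard neutron naming):

openstack@network:$ ip netns
  < a qrouter-… and a qdhcp-… namespace should be listed >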

Next, use the OpenStack dashboard to launch a CirrOS virtual machine. Log in to the machine and test network connectivity by pinging Google or other web sites. You should also attach a floating IP to the virtual machine and try to log in via ssh using the floating IP address.
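
If you prefer the CLI for the floating IP part, a sketch of the usual sequence is below. FLOATINGIP_ID and PORT_ID are placeholders taken from the output of the first two commands, 10.0.0.101 is just an example address from the ext-net pool, and cirros is the default CirrOS user:

openstack@controller:$ neutron floatingip-create ext-net
openstack@controller:$ neutron port-list
openstack@controller:$ neutron floatingip-associate FLOATINGIP_ID PORT_ID
openstack@controller:$ ssh cirros@10.0.0.101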

References.

[1] Vinoth Kumar Selvaraj, “Open Daylight integration with OpenStack: a tutorial”.
