Start with a simple 2-node OpenStack setup with KVM

OpenStack is getting more and more attention, and even if you’re only mildly interested in the latest technologies, you must have heard of OpenStack here or there. But what is it exactly and, more importantly, how does it work in practice? The best way to figure that out is simply to get going with it: install it and play around. Here you can find a brief explanation and a tutorial or walkthrough to deploy a small OpenStack environment on top of CentOS 7 or RHEL 7.

I’ve heard a lot about OpenStack since almost every big vendor is somehow involved and actively promotes OpenStack at events or on their website, but I never really got into the details. To be honest, it always stayed a little vague to me how it is really implemented, whether it’s production ready, what it runs on top of and, more importantly, whether it was worth taking a look at.

Getting started with OpenStack is not as easy as it is advertised. After all, it is a very big project with a lot of components that each have a different purpose, and you need some time to get used to the terminology and to find a way through the massive stream of information, and especially marketing, around OpenStack. The information provided is not always clear and in a lot of cases it’s very high level. There are ready-packaged solutions like DevStack in case you want to get going very fast.

I decided to not go the DevStack way but to install all components “by hand” to try and understand everything a little better. For the small test setup, I used the latest stable release at the time of writing: IceHouse.


OpenStack components

Nova (compute): can be compared to or linked to a hypervisor
Swift (object storage): similar to a cluster filesystem
Cinder (block storage): provides block storage for compute nodes
Neutron (networking)
Horizon (dashboard): web interface to control OpenStack
Keystone (identity service): keeps track of all services and objects
Glance (image service): stores disk and server images
Ceilometer (telemetry): accounting
Heat (orchestration): template and multiple-image actions
Trove (database as a service)
Ironic (bare metal provisioning)
Zaqar (multi-tenant cloud messaging)
Sahara (elastic map reduce): controls clusters

A simple OpenStack setup

You always need to start simple with something new, and that’s why I decided to go for a two-node setup: one node which will act as the OpenStack controller and another node which will be my compute node. On the compute node, it should be possible to run one or more virtual machines, or instances as they are called in the OpenStack world, and to deploy, add, remove and control them from the controller.

Regarding networking, the same idea applies: I decided not to use Neutron, which implements a complete software-defined network layer, but the (almost deprecated) legacy nova-network instead. The use of nova-network implies that our controller only needs an interface in the management network, while the compute node needs one interface in the management network and another to communicate with the outside world.

Overview of the test setup

[Diagram: overview of the two-node test setup, with the controller and compute node on the 192.168.203.0/24 management network and the compute node also connected to the external 192.168.202.0/24 network]

During the test, I used IP addresses everywhere to avoid getting confused by hostnames or DNS problems.

All the passwords which are needed are set to secretpass, so in your own setup you’ll need to replace the IPs and passwords with your own.
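
If you follow along with different addresses, it can help to keep them in a couple of shell variables and substitute them while copying the commands. This is purely an optional convenience; the variable names below are my own invention and are not referenced anywhere else in this post:

# hypothetical helper variables -- adapt to your own environment
CONTROLLER_IP=192.168.203.101    # management IP of the controller
COMPUTE_MGMT_IP=192.168.203.102  # management IP of the compute node
OS_PASS=secretpass               # password used for every service in this test

# example: build a database connection string from the variables
echo "mysql://keystone:${OS_PASS}@${CONTROLLER_IP}/keystone"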

This is the output regarding ip configuration on both nodes:

[root@testvm101 ~]# ip a|grep "inet "
inet 127.0.0.1/8 scope host lo
inet 192.168.203.101/24 brd 192.168.203.255 scope global eno33554968
[root@testvm102 ~]# ip a|grep "inet "
inet 127.0.0.1/8 scope host lo
inet 192.168.202.102/24 brd 192.168.202.255 scope global eno16777736
inet 192.168.203.102/24 brd 192.168.203.255 scope global eno33554968

Preparation of the controller and compute node

Before starting with the setup, I want to make sure that nothing is blocking or disturbing us during the test, so I chose to disable the firewall and SELinux on both nodes. For the rest of the experiment, I will work as root. Not that I like this approach, but searching for hours because SELinux or the firewall blocks something isn’t something I like either.

[root@testvm101 ~]# setenforce 0
[root@testvm101 ~]# systemctl stop iptables
[root@testvm101 ~]# systemctl disable iptables
rm '/etc/systemd/system/basic.target.wants/iptables.service'
[root@testvm102 ~]# setenforce 0
[root@testvm102 ~]# systemctl stop iptables
[root@testvm102 ~]# systemctl disable iptables
rm '/etc/systemd/system/basic.target.wants/iptables.service'

One of the requirements for the nodes in an OpenStack setup is that their clocks are in sync, so I decided to set up NTP and let the controller act as the time source for all other nodes.

To set up NTP, we’ll start by installing the ntp package on both the controller and the compute node:

[root@testvm101 ~]# yum install ntp
...
Complete !
[root@testvm102 ~]# yum install ntp
...
Complete !

On the controller, we can leave /etc/ntp.conf as it is since, by default, it uses the time from four servers of pool.ntp.org.
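
If you want to double-check those defaults, a quick grep for the server directives should show the four CentOS pool servers (exact hostnames can differ per distribution or release):

grep "^server" /etc/ntp.conf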

On the compute node, we need to get the time from our controller so we need to change the /etc/ntp.conf and let it use the time from the controller node. We need to remove or comment the existing server-directives in the file and add a reference to our controller:

[root@testvm102 ~]# cat /etc/ntp.conf |grep "server "
server 192.168.203.101
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

Before starting the NTP daemon on the controller, I manually synchronize its time with pool.ntp.org to be sure that I won’t have to wait for the sync to happen on its own:

[root@testvm101 ~]# ntpstat
unsynchronised
polling server every 64 s
[root@testvm101 ~]# ntpdate -u pool.ntp.org
6 Oct 09:20:06 ntpdate[23641]: adjust time server 178.32.44.208 offset 0.010921 sec
[root@testvm101 ~]# ntpstat
synchronised to NTP server (195.200.224.66) at stratum 3
time correct to within 7972 ms
polling server every 64 s

After syncing the time, we’ll start the ntp daemon and enable it to start at boot time:

[root@testvm101 ~]# systemctl restart ntpd
[root@testvm101 ~]# systemctl enable ntpd
ln -s '/usr/lib/systemd/system/ntpd.service' '/etc/systemd/system/multi-user.target.wants/ntpd.service'

Now that the NTP daemon is running on the controller, we can do the same on the compute node: manually sync the time, just as we did on the controller, and then start and enable the NTP daemon:

[root@testvm102 ~]# ntpdate -u 192.168.203.101
6 Oct 09:22:38 ntpdate[25866]: step time server 192.168.203.101 offset 26.711256 sec
[root@testvm102 ~]# systemctl start ntpd
[root@testvm102 ~]# systemctl enable ntpd
ln -s '/usr/lib/systemd/system/ntpd.service' '/etc/systemd/system/multi-user.target.wants/ntpd.service'
[root@testvm102 ~]# ntpstat
synchronised to NTP server (192.168.203.101) at stratum 4
time correct to within 8159 ms
polling server every 64 s

Database (controller)

The data used by our OpenStack components and nodes needs to be stored somewhere; even clouds need some place to keep everything :) So we’ll start by installing and configuring MariaDB on the controller.

The first step is, as always, to install the necessary packages:

[root@testvm101 ~]# yum install mariadb mariadb-server MySQL-python
...
Complete !

After the installation, we need to configure MariaDB to listen on the management IP address of the controller and to use UTF-8 by default. For that, we need to adjust /etc/my.cnf to look as follows:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
bind-address = 192.168.203.101
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8


[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

After that, we can start and enable MariaDB on boot:

[root@testvm101 ~]# systemctl start mariadb
[root@testvm101 ~]# systemctl enable mariadb
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'

After starting MariaDB, we need to go through the standard installation by running:

[root@testvm101 ~]# mysql_install_db

I had some problems running this step and got an error message regarding the Aria control file. Deleting the offending file solved the problem:

[root@testvm101 ~]# mysql_install_db
Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...
141010 17:04:15 [ERROR] mysqld: Can't lock aria control file '/var/lib/mysql/aria_log_control' for exclusive use, error: 11. Will retry for 30 seconds
141010 17:04:46 [ERROR] mysqld: Got error 'Could not get an exclusive lock; file is probably in use by another process' when trying to use aria control file '/var/lib/mysql/aria_log_control'
141010 17:04:46 [ERROR] Plugin 'Aria' init function returned error.
141010 17:04:46 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
ERROR: 1017 Can't find file: '/var/tmp/#sql_6ef_0' (errno: 2)
141010 17:04:46 [ERROR] Aborting
[root@testvm101 ~]# rm /var/lib/mysql/aria_log_control
[root@testvm101 ~]# mysql_install_db

After the initial setup, we still need to set the root password of the database and remove anonymous access. If we don’t remove the anonymous access, we’ll run into problems later:

[root@testvm101 ~]# mysql_secure_installation
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Database (compute node)

On the compute node, the only thing we need to install for database connectivity is the MySQL-python package:

[root@testvm102 ~]# yum install MySQL-python
...
Complete !

Install the necessary repositories and base OpenStack packages (controller + compute node)

The OpenStack packages can’t be found in the standard CentOS or RHEL repositories, so we need to add two repositories to get all the necessary packages: EPEL and the RDO repository from the OpenStack project itself. The release I’ll use for this test is OpenStack IceHouse.

To install the repositories and GPG-keys:

[root@testvm101 ~]# yum install yum-plugin-priorities
...
Complete !
[root@testvm101 ~]# yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
...
Complete !
[root@testvm101 ~]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
[root@testvm101 ~]# yum -y install http://fedora.cu.be/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
...
Complete !
[root@testvm102 ~]# yum -y install yum-plugin-priorities
...
Complete !
[root@testvm102 ~]# yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
...
Complete !
[root@testvm102 ~]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
[root@testvm102 ~]# yum -y install http://fedora.cu.be/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
...
Complete !

After adding the repositories, we can install the base OpenStack packages, which contain the OpenStack utilities:

[root@testvm101 ~]# yum install openstack-utils openstack-selinux
...
Complete !
[root@testvm102 ~]# yum install openstack-utils openstack-selinux
...
Complete !

After installing the packages, it’s important to install the latest updates by doing a normal yum update. In case a newer kernel was installed, reboot and choose to use that kernel:

[root@testvm101 ~]# yum update
...
Complete !
[root@testvm102 ~]# yum update
...
Complete !

Message broker (MQ) (controller)

In order for all components in our OpenStack environment to communicate with each other, we’ll install a message broker. Qpid is a good choice.

Install the packages:

[root@testvm101 ~]# yum install qpid-cpp-server
...
Complete !

Disable the need for authentication:

[root@testvm101 ~]# echo "auth=no" > /etc/qpid/qpidd.conf

Start the message broker and enable it on boot:

[root@testvm101 ~]# systemctl start qpidd
[root@testvm101 ~]# systemctl enable qpidd
ln -s '/usr/lib/systemd/system/qpidd.service' '/etc/systemd/system/multi-user.target.wants/qpidd.service'
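
As a quick sanity check, you could verify that the broker is actually up and listening; 5672 is the standard AMQP port that qpidd uses:

# qpidd should be listening on the standard AMQP port 5672
ss -tlnp | grep 5672
systemctl status qpidd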

OpenStack services

Now that our controller is prepared, we can finally start installing our OpenStack components. The installation of each component usually consists of the following steps (a generic sketch of this pattern is shown after the list):

  • Install the component (openstack-<component>) plus the command line client for the component (python-<component>client)
  • Configure the component to use the MySQL database
  • Add a database and tables for the component in the MySQL database
  • Register the component in the identity service
  • Start the component
  • Test the component
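
As a rough illustration of that recurring pattern, the sketch below strings the steps together for a fictitious run. Treat it as a schematic only: the variable values, the config file name and the service names are placeholders and differ per component (the real, component-specific commands follow in the sections below).

# Schematic only -- names are placeholders; config file names differ per component (e.g. glance-api.conf)
COMPONENT=glance          # placeholder component name
SVC_TYPE=image            # matching service type in Keystone

yum install openstack-${COMPONENT} python-${COMPONENT}client
openstack-config --set /etc/${COMPONENT}/${COMPONENT}.conf database connection \
    mysql://${COMPONENT}:secretpass@192.168.203.101/${COMPONENT}
mysql -u root -p -e "CREATE DATABASE ${COMPONENT};"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON ${COMPONENT}.* TO '${COMPONENT}'@'%' IDENTIFIED BY 'secretpass';"
su -s /bin/sh -c "${COMPONENT}-manage db_sync" ${COMPONENT}
keystone service-create --name=${COMPONENT} --type=${SVC_TYPE} --description="..."   # needs admin credentials
systemctl start openstack-${COMPONENT}-api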

Keystone – Identity service (controller)

The identity service is responsible for keeping track of all components that exist in your OpenStack environment. All services, components and nodes need to register with the identity service in order to be used.

Again, we’ll start with installing all necessary packages:

[root@testvm101 ~]# yum install openstack-keystone python-keystoneclient
...
Complete !

The identity service needs to be able to save its data somewhere, and for that we’ll use our previously configured database.

First we’ll configure Keystone to use our database:

[root@testvm101 ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:secretpass@192.168.203.101/keystone

After that, we need to actually create the database used by Keystone and grant access to the keystone user with the password given earlier:

[root@testvm101 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 5.5.37-MariaDB MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

After creating the database, we’ll let Keystone create the tables in the DB:

[root@testvm101 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
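
As an optional sanity check (my own addition, not part of the official procedure), you can verify that the sync created the tables by connecting with the keystone credentials we just granted:

# list the tables in the keystone database over the management address
mysql -u keystone -psecretpass -h 192.168.203.101 keystone -e "SHOW TABLES;"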

Now that the service itself is set up, we need to create an authentication token to authenticate between Keystone and the other services:

We’ll generate a random sequence of hex-numbers and store it in a variable:

[root@testvm101 ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@testvm101 ~]# echo $ADMIN_TOKEN
fd465d38e342ddc68be3

Then we can use the variable and change the configuration of Keystone to use our token:

[root@testvm101 ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

We want Keystone to provide PKI tokens for authentication, so we’ll configure it to do so:

[root@testvm101 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Generating RSA private key, 2048 bit long modulus
.....................................................+++
.......+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
.............................................+++
............................................+++
e is 65537 (0x10001)
Using configuration from /etc/keystone/ssl/certs/openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'US'
stateOrProvinceName   :ASN.1 12:'Unset'
localityName          :ASN.1 12:'Unset'
organizationName      :ASN.1 12:'Unset'
commonName            :ASN.1 12:'www.example.com'
Certificate is to be certified until Oct  7 15:19:10 2024 GMT (3650 days)

Write out database with 1 new entries
Data Base Updated
[root@testvm101 ~]# chown -R keystone:keystone /etc/keystone/ssl
[root@testvm101 ~]# chmod -R o-rwx /etc/keystone/ssl

Once everything is configured, we can start the Keystone service. It will listen for requests on port 35357.

[root@testvm101 ~]# systemctl start openstack-keystone
[root@testvm101 ~]# systemctl enable openstack-keystone
ln -s '/usr/lib/systemd/system/openstack-keystone.service' '/etc/systemd/system/multi-user.target.wants/openstack-keystone.service'
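
A quick way to confirm that Keystone is listening is to request the API version document on the admin port; it should answer with a small piece of JSON describing the v2.0 API:

curl http://192.168.203.101:35357/v2.0/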

Expired tokens are kept in the Keystone database forever and, in a larger environment, this will cause the tables to become rather large. To prevent this, we’ll schedule a cron job to regularly clean up the expired tokens.

[root@testvm101 ~]# echo '@hourly /usr/bin/keystone-manage token_flush > /var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
[root@testvm101 ~]# crontab -l -u keystone
@hourly /usr/bin/keystone-manage token_flush > /var/log/keystone/keystone-tokenflush.log 2>&1

We have our identity service up and running so it’s time to add some users and roles to the Keystone database.

To use Keystone, we need to provide the previously generated admin_token when executing actions against it. There are two ways to use Keystone, and most other OpenStack components, from the command line.

The first is to export the connection and authentication information as environment variables and keep the actual command short. The second is to pass a longer list of arguments, which saves you the work of exporting the variables. So when you know that you’ll need to execute multiple Keystone commands, it’s convenient to export the information as environment variables; for a single command, it’s easier to provide the information as arguments.

To use Keystone without environment variables, we can provide the information as arguments to the keystone command:

[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-list

To use Keystone with the exported variables, we export them and simply leave out the arguments:

[root@testvm101 ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@testvm101 ~]# echo $OS_SERVICE_TOKEN
fd465d38e342ddc68be3
[root@testvm101 ~]# export OS_SERVICE_ENDPOINT=http://192.168.203.101:35357/v2.0
[root@testvm101 ~]# keystone user-list

In case you receive an error message (HTTP 500), check whether the values for the endpoint and admin token match the ones in /etc/keystone/keystone.conf, or check the messages in /var/log/keystone/keystone.log.
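
To compare those values, something like the following should help (openstack-config also supports --get, and the log usually tells you which of the two is wrong):

# value Keystone was configured with
openstack-config --get /etc/keystone/keystone.conf DEFAULT admin_token
# most recent log entries
tail -n 50 /var/log/keystone/keystone.log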

Now that we know how to use Keystone, let’s add the admin user, an admin role and an admin “tenant”.

[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-create --name=admin --pass=secretpass --email=root@localhost
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          root@localhost          |
| enabled  |               True               |
|    id    | 9f5113b5e35a494b9d0a283416bd1de7 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | b11b83e818b54fe18cb4f9905b76bc23 |
|   name   |              admin               |
+----------+----------------------------------+
[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 04a29968ada94975ba8871f7bb7301de |
|     name    |              admin               |
+-------------+----------------------------------+

Now let’s link the user with the role and tenant:

[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-role-add --user=admin --tenant=admin --role=admin
[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-role-add --user=admin --role=_member_ --tenant=admin

To test that the Keystone service is working as expected and that the user was added successfully, we can try to authenticate against Keystone with that user and request a token. To make sure we no longer authenticate with the previously generated token, those environment variables need to be unset before executing the new command:

[root@testvm101 ~]# unset OS_SERVICE_TOKEN
[root@testvm101 ~]# unset OS_SERVICE_ENDPOINT
[root@testvm101 ~]# keystone --os-username=admin --os-password=secretpass --os-auth-url=http://192.168.203.101:35357/v2.0 token-get

To make life a little easier, we can create a small file which we can source every time we need the authentication variables to be set:

[root@testvm101 ~]# cat ~/admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=secretpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.203.101:35357/v2.0

To use the variables, we can simply do:

[root@testvm101 ~]# source ~/admin-openrc.sh
[root@testvm101 ~]# keystone token-get

The idea of the identity service is to register all components in our OpenStack environment, and the first one we can register is the identity service itself. Before we can do that, we need to create a tenant for the services:

[root@testvm101 ~]# keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | 84b43e4ec2d649b3ab76137adf8e7827 |
|     name    |             service              |
+-------------+----------------------------------+
[root@testvm101 ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | ea5b6f5946c5485599f6683661393319 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
[root@testvm101 ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://192.168.203.101:5000/v2.0 --internalurl=http://192.168.203.101:5000/v2.0 --adminurl=http://192.168.203.101:35357/v2.0
+-------------+-----------------------------------+
|   Property  |               Value               |
+-------------+-----------------------------------+
|   adminurl  | http://192.168.203.101:35357/v2.0 |
|      id     |  b1f9e9bb04734685b4f083cfc5c904c7 |
| internalurl |  http://192.168.203.101:5000/v2.0 |
|  publicurl  |  http://192.168.203.101:5000/v2.0 |
|    region   |             regionOne             |
|  service_id |  ea5b6f5946c5485599f6683661393319 |
+-------------+-----------------------------------+
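
To double-check that the registration worked, listing the services and endpoints should show the identity service and the endpoint we just created (we’re still using the credentials from admin-openrc.sh at this point):

keystone service-list
keystone endpoint-list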

Glance – image service (controller)

The goal of our small project is to create virtual machines running on the compute node. To deploy new virtual machines, we’ll need to provide some kind of source to install the operating system in our new guest. We’ll use Glance, the image service component, to store that source.

First we need to install the Glance component and the command line tools for Glance:

[root@testvm101 ~]# yum install openstack-glance python-glanceclient
...
Complete !

After the installation, we’ll configure Glance to use our MySQL database, just as we did for the Keystone component:

[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:secretpass@192.168.203.101/glance
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:secretpass@192.168.203.101/glance

Of course, we also need to actually create the database in MySQL:

[root@testvm101 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 5.5.37-MariaDB MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

After creating the database, we can let Glance create its tables in the database:

[root@testvm101 ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Glance needs to be able to identify itself, so we’ll create a user for Glance in Keystone:

[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-create --name=glance --pass=secretpass --email=root@localhost
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          root@localhost          |
| enabled  |               True               |
|    id    | f1b1d147fe3a4934a0380f4948905431 |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+
[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 user-role-add --user=glance --tenant=service --role=admin

The next step is to configure Glance to use the identity service (Keystone) for authentication:

[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.203.101:5000
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host 192.168.203.101
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password secretpass
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://192.168.203.101:5000
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host 192.168.203.101
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password secretpass
[root@testvm101 ~]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Finally, we can add Glance to the list of services:

[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 service-create --name=glance --type=image --description="OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | f171fd0acf0e4612be4be20ad594b16d |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
[root@testvm101 ~]# keystone --os-token=fd465d38e342ddc68be3 --os-endpoint=http://192.168.203.101:35357/v2.0 endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://192.168.203.101:9292 --internalurl=http://192.168.203.101:9292 --adminurl=http://192.168.203.101:9292
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://192.168.203.101:9292    |
|      id     | 55a6f2327b9242ca97c46bd266e0a873 |
| internalurl |   http://192.168.203.101:9292    |
|  publicurl  |   http://192.168.203.101:9292    |
|    region   |            regionOne             |
|  service_id | f171fd0acf0e4612be4be20ad594b16d |
+-------------+----------------------------------+

After all these configuration steps, we can start and enable Glance:

[root@testvm101 ~]# systemctl start openstack-glance-api
[root@testvm101 ~]# systemctl start openstack-glance-registry
[root@testvm101 ~]# systemctl enable openstack-glance-api
ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
[root@testvm101 ~]# systemctl enable openstack-glance-registry
ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'

Once our image service is running, we can add an image to Glance to test its functionality.

In a lot of tutorials, CirrOS is used as a small Linux test distribution. It’s a custom-built distro based on the Ubuntu kernel and it’s designed to be small (about 30MB in memory).

First we’ll download the image and check which type of image it is:

[root@testvm101 ~]# wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
--2014-10-10 17:38:25--  http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Resolving cdn.download.cirros-cloud.net (cdn.download.cirros-cloud.net)... 72.247.145.120, 72.247.145.123, 2a02:26f0:82::17c8:5786, ...
Connecting to cdn.download.cirros-cloud.net (cdn.download.cirros-cloud.net)|72.247.145.120|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13167616 (13M) [application/octet-stream]
Saving to: ‘cirros-0.3.2-x86_64-disk.img’

100%[=================================================================================>] 13,167,616  4.40MB/s   in 2.9s

2014-10-10 17:38:29 (4.40 MB/s) - ‘cirros-0.3.2-x86_64-disk.img’ saved [13167616/13167616]

[root@testvm101 ~]# file cirros-0.3.2-x86_64-disk.img
cirros-0.3.2-x86_64-disk.img: QEMU QCOW Image (v2), 41126400 bytes

The above output tells us that the CirrOS image is a QCOW v2 image. When adding the image to Glance, we need to specify this. Possible types are: raw, vhd, vmdk, vdi, iso, qcow2, aki, ari & ami.

[root@testvm101 ~]# glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.2-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2014-10-10T15:40:48                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 25f0168a-9ba7-438d-94e3-95e73ececda9 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 04a29968ada94975ba8871f7bb7301de     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| updated_at       | 2014-10-10T15:40:48                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

After adding the downloaded CirrOS image to Glance, we can delete the downloaded file from the system.
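
Before removing the downloaded file, it doesn’t hurt to confirm that the image really made it into Glance; image-list should show the cirros image with status active:

glance image-list
rm cirros-0.3.2-x86_64-disk.img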

Nova – Compute service (controller)

Before we can start deploying our actual compute node, we first need to configure the necessary components on the controller to support the compute node.

Install the packages that are necessary:

[root@testvm101 ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
...
Complete !

When all packages are in place, we’ll configure Nova to use our previously configured MySQL server:

[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf database connection mysql://nova:secretpass@192.168.203.101/nova

Besides our database, we want nova to use the Qpid message broker as our other components do:

[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname 192.168.203.101

To be able to see the console of virtual machines running on our compute node, we’ll use VNC, which needs to be configured as well:

[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.203.101
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.203.101
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.203.101

Besides telling Nova to use MySQL, it needs an actual database to store its data, so we’ll create the database:

[root@testvm101 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 5.5.37-MariaDB MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'secretpass';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

After adding the database, we’ll create the tables needed by Nova:

[root@testvm101 ~]# su -s /bin/sh -c "nova-manage db sync" nova

To keep the configuration uniform and do things as they officially should be done, we’ll use Keystone, our identity service, for authentication in Nova. The first step is to create a user for Nova and give it the admin role:

[root@testvm101 ~]# keystone user-create --name=nova --pass=secretpass --email=root@localhost
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          root@localhost          |
| enabled  |               True               |
|    id    | 217717a8788c45b29750d18a65abc074 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
[root@testvm101 ~]# keystone user-role-add --user=nova --tenant=service --role=admin

Now that Nova has its dedicated user in Keystone, we’ll configure it to authenticate with Keystone:

[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.203.101:5000
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.203.101
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password secretpass

As with other components of our OpenStack setup, we’ll also register Nova in Keystone:

[root@testvm101 ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | 0473e9662345496f862c6f15a14a7056 |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
[root@testvm101 ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s
+-------------+----------------------------------------------+
|   Property  |                    Value                     |
+-------------+----------------------------------------------+
|   adminurl  | http://192.168.203.101:8774/v2/%(tenant_id)s |
|      id     |       26af98830a4f4446810744449e994abf       |
| internalurl | http://192.168.203.101:8774/v2/%(tenant_id)s |
|  publicurl  | http://192.168.203.101:8774/v2/%(tenant_id)s |
|    region   |                  regionOne                   |
|  service_id |       0473e9662345496f862c6f15a14a7056       |
+-------------+----------------------------------------------+

Regarding Nova, the configuration on our controller is done, so we can start the services which we just installed:

[root@testvm101 ~]# systemctl start openstack-nova-api
[root@testvm101 ~]# systemctl enable openstack-nova-api
ln -s '/usr/lib/systemd/system/openstack-nova-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-api.service'
[root@testvm101 ~]# systemctl start openstack-nova-cert
[root@testvm101 ~]# systemctl enable openstack-nova-cert
ln -s '/usr/lib/systemd/system/openstack-nova-cert.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-cert.service'
[root@testvm101 ~]# systemctl start openstack-nova-scheduler
[root@testvm101 ~]# systemctl enable openstack-nova-scheduler
ln -s '/usr/lib/systemd/system/openstack-nova-scheduler.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service'
[root@testvm101 ~]# systemctl start openstack-nova-novncproxy
[root@testvm101 ~]# systemctl enable openstack-nova-novncproxy
ln -s '/usr/lib/systemd/system/openstack-nova-novncproxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service'
[root@testvm101 ~]# systemctl start openstack-nova-consoleauth
[root@testvm101 ~]# systemctl enable openstack-nova-consoleauth
ln -s '/usr/lib/systemd/system/openstack-nova-consoleauth.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service'
[root@testvm101 ~]# systemctl start openstack-nova-conductor
[root@testvm101 ~]# systemctl enable openstack-nova-conductor
ln -s '/usr/lib/systemd/system/openstack-nova-conductor.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service'

The installation of the Nova components on the controller node is almost finished. To make sure that we did everything as we should, we can verify a few things.

To list the services that are part of Nova and check their status:

[root@testvm101 ~]# nova service-list
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| nova-cert        | testvm104 | internal | enabled | up    | 2014-10-14T13:56:17.000000 | -               |
| nova-scheduler   | testvm104 | internal | enabled | up    | 2014-10-14T13:56:17.000000 | -               |
| nova-consoleauth | testvm104 | internal | enabled | up    | 2014-10-14T13:56:17.000000 | -               |
| nova-conductor   | testvm104 | internal | enabled | up    | 2014-10-14T13:56:17.000000 | -               |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+

To check if Nova can communicate with Glance:

[root@testvm101 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 25f0168a-9ba7-438d-94e3-95e73ececda9 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

Networking for Nova on the controller

As mentioned at the start of this post, I decided to keep things simple (it’s already complicated enough) and not use the Neutron component for networking. Instead, we’ll use the legacy nova-network. In order to use nova-network, we need to tell Nova that:

[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
[root@testvm101 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
[root@testvm101 ~]# systemctl restart openstack-nova-api
[root@testvm101 ~]# systemctl restart openstack-nova-scheduler
[root@testvm101 ~]# systemctl restart openstack-nova-conductor

Besides telling Nova that we want to use legacy networking, we also need to define a network that will be used for our guests. The network must be part of the subnet which is used for the external interface on the compute node, in my case 192.168.202.0/24.

I chose to take a part of this subnet and use a /28 (255.255.255.240) subnet mask, which gives enough IP addresses for 14 hosts.

[root@testvm101 ~]# nova network-create test-net --bridge br100 --multi-host T --fixed-range-v4 192.168.202.16/28

To verify the added network, called test-net:

[root@testvm101 ~]# nova net-list
+--------------------------------------+----------+-------------------+
| ID                                   | Label    | CIDR              |
+--------------------------------------+----------+-------------------+
| 0ee59988-7ab8-4c01-b710-b40f5d62291b | test-net | 192.168.202.16/28 |
+--------------------------------------+----------+-------------------+

There is a default security group on our controller, which was added automatically. This security group is rather strict, so we’ll change it a little to allow ping and SSH to instances in the network we just added:

[root@testvm101 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@testvm101 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Nova – Compute service (compute node)

After going through many, many steps to prepare (mostly) our controller, it’s finally time to configure our compute node. This node will actually run the virtual machines which we want to deploy in our OpenStack environment.

Before we can run KVM on the compute node, it’s a good idea to check whether that machine supports one of the hardware virtualisation extensions. You can find more information in a previous post about KVM.
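
A quick way to check for the Intel (vmx) or AMD (svm) extensions is to count their occurrences in /proc/cpuinfo; anything greater than 0 means the CPU supports hardware virtualisation (it may still be disabled in the BIOS):

egrep -c '(vmx|svm)' /proc/cpuinfo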

After verifying that your host is capable of running KVM, install the required packages for the compute node. Only one package needs to be installed, but it pulls in a whole list of dependencies (200+) to install the complete compute environment.

[root@testvm102 ~]# yum install openstack-nova-compute
...
Complete !

When all packages are installed and available on our compute node, we’ll configure the node:

Connection information for the database:

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf database connection mysql://nova:secretpass@192.168.203.101/nova

Configure to use Keystone for authentication:

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.203.101:5000
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.203.101
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password secretpass

Configure to use the Qpid MQ for messaging:

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname 192.168.203.101

Allow access to the consoles of running virtual machines over VNC:

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.203.102
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.203.102
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.203.101:6080/vnc_auto.html

Tell the node where to find our image service:

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.203.101

The above steps are all it takes to configure the compute node as far as the compute component is concerned. Now we can start and enable the services:

[root@testvm102 ~]# systemctl start libvirtd
[root@testvm102 ~]# systemctl start messagebus
[root@testvm102 ~]# systemctl start openstack-nova-compute
[root@testvm102 ~]# systemctl enable libvirtd
[root@testvm102 ~]# systemctl enable openstack-nova-compute
ln -s '/usr/lib/systemd/system/openstack-nova-compute.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service'
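
To verify that the compute node registered itself with the controller, you can rerun the service list on the controller; nova-compute should now appear for testvm102 with state up, assuming everything came up cleanly:

# on the controller
source ~/admin-openrc.sh
nova service-list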

Networking for Nova on the compute node

As on the controller, we need to configure the compute node to use nova-network and not Neutron.

Install the required packages:

[root@testvm102 ~]# yum install openstack-nova-network openstack-nova-api
...
Complete !

For the configuration, we first need to determine the name of the interface that will be used for external connections. As mentioned in the beginning, this is the interface in the 192.168.202.0 subnet.

[root@testvm102 ~]# ip a|grep "inet "
inet 127.0.0.1/8 scope host lo
inet 192.168.202.102/24 brd 192.168.202.255 scope global eno16777736
inet 192.168.203.102/24 brd 192.168.203.255 scope global eno33554968

So in my example, eno16777736 is the interface which we need to configure for nova-network.

[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eno16777736
[root@testvm102 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eno16777736

The last step is to start and enable the services which we just configured:

[root@testvm102 ~]# systemctl start openstack-nova-network
[root@testvm102 ~]# systemctl start openstack-nova-metadata-api
[root@testvm102 ~]# systemctl enable openstack-nova-metadata-api
ln -s '/usr/lib/systemd/system/openstack-nova-metadata-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-metadata-api.service'
[root@testvm102 ~]# systemctl enable openstack-nova-network
ln -s '/usr/lib/systemd/system/openstack-nova-network.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-network.service'

Deploy a new virtual machine to the OpenStack environment

Finally, and that’s the least we can say, our OpenStack setup is ready to get going. The idea is to run some workload on our small setup, and for that we’ll start by deploying our first virtual machine on the compute node.

Make sure that you have the correct environment variables if you restarted your session:

[root@testvm101 ~]# source admin-openrc.sh

The CirrOS distro which we downloaded earlier is capable of receiving a public key and adding it to its authorized SSH keys. So we’ll generate a key pair on the controller in order to add the public key to the VM. This functionality is provided by the cloud-init package, which should be present in the image.

[root@testvm101 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
be:47:f0:35:65:1c:96:08:53:01:d9:47:4a:c1:fe:5d root@testvm101
The key's randomart image is:
+--[ RSA 2048]----+
|          +B=*+o |
|          .o+o*  |
|           ..+   |
|        .   +   E|
|        So . o ..|
|       .  o   . .|
|        ..       |
|         ..      |
|        ..       |
+-----------------+
[root@testvm101 ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub test-key

To build the command which we’ll use to deploy the new instance, we’ll check that all the objects are present and that the names match what we created earlier.

Check the SSH-keypair:

[root@testvm101 ~]# nova keypair-list
+----------+-------------------------------------------------+
| Name     | Fingerprint                                     |
+----------+-------------------------------------------------+
| test-key | be:47:f0:35:65:1c:96:08:53:01:d9:47:4a:c1:fe:5d |
+----------+-------------------------------------------------+

To deploy a new instance, Nova uses flavors. A few predefined flavors exist, so we’ll list them in order to choose one:

[root@testvm101 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

The images available in Glance, our image service:

[root@testvm101 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 5bb931a3-4336-43c1-b9d8-3f2589cfec29 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

The list of networks which we added:

[root@testvm101 ~]# nova net-list
+--------------------------------------+----------+-------------------+
| ID                                   | Label    | CIDR              |
+--------------------------------------+----------+-------------------+
| 0ee59988-7ab8-4c01-b710-b40f5d62291b | test-net | 192.168.202.16/28 |
+--------------------------------------+----------+-------------------+
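
Since the boot command further down needs the network ID rather than the label, it can be handy to capture that ID in a shell variable; a small sketch:

[root@testvm101 ~]# NET_ID=$(nova net-list | awk '/ test-net / {print $2}')
[root@testvm101 ~]# echo $NET_ID
0ee59988-7ab8-4c01-b710-b40f5d62291b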

The security groups which we can use:

[root@testvm101 ~]# nova secgroup-list
+----+---------+-------------+
| Id | Name    | Description |
+----+---------+-------------+
| 1  | default | default     |
+----+---------+-------------+
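
Depending on what was configured earlier, the default security group may not allow ICMP or SSH traffic to the instances. If pinging or connecting to the VM fails later on, rules like these should open that up (a sketch; adjust the source range to your own situation):

[root@testvm101 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
[root@testvm101 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0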

Now that we have all the information to build our new instance or VM, we can construct the command that actually executes the action. Pay attention that for the network, we can’t use the name but need to specify the ID (or reuse the variable captured earlier):

[root@testvm101 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=0ee59988-7ab8-4c01-b710-b40f5d62291b --security-group default --key-name test-key testvm1
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-SRV-ATTR:host                 | -                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000001e                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | 5dSBctXpAzi3                                  |
| config_drive                         |                                               |
| created                              | 2014-10-14T15:03:32Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 768d98a0-1166-4770-83b4-67938a84ed32          |
| image                                | cirros (5bb931a3-4336-43c1-b9d8-3f2589cfec29) |
| key_name                             | test-key                                      |
| metadata                             | {}                                            |
| name                                 | testvm1                                       |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | 860c76ef66df46aa84899a379fd2150c              |
| updated                              | 2014-10-14T15:03:32Z                          |
| user_id                              | ce63eeb03cac449bb848077d7d85efb0              |
+--------------------------------------+-----------------------------------------------+

After executing this command, our image is deployed to a new instance. We can check the status of the deployment as follows:

[root@testvm101 ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| 768d98a0-1166-4770-83b4-67938a84ed32 | testvm1 | BUILD  | spawning   | NOSTATE     | test-net=192.168.202.26 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+

In the above output, which we got just after deploying the new instance, we can see that the status is BUILD and that the instance is busy spawning.

After a while, we should get a status ACTIVE which means that our new VM is deployed, booted and ready to use.

[root@testvm101 ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| 768d98a0-1166-4770-83b4-67938a84ed32 | testvm1 | ACTIVE | -          | Running     | test-net=192.168.202.26 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
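
If an instance ends up in the ERROR state instead, the console output of the guest usually gives a hint about what went wrong:

[root@testvm101 ~]# nova console-log testvm1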

Use a virtual machine on the OpenStack environment

Now that we have an instance running on our freshly installed OpenStack setup, we can start using it.

There are several ways to use the instance. The most convenient is to connect with SSH to the IP address which we can see in the output of the nova list command:

Using username "cirros".
cirros@192.168.202.26's password:
sh: /usr/bin/xauth: not found
$ uname -a
Linux testvm1 3.2.0-60-virtual #91-Ubuntu SMP Wed Feb 19 04:13:28 UTC 2014 x86_64 GNU/Linux
$ uname -r
3.2.0-60-virtual
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Westmere E56xx/L56xx/X56xx (Nehalem-C)
stepping        : 1
microcode       : 0x1
cpu MHz         : 3392.358
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up rep_good nopl pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes avx f16c rdrand hypervisor lahf_lm fsgsbase smep
bogomips        : 6784.71
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:38:7a:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.202.26/29 brd 192.168.202.31 scope global eth0
    inet6 fe80::f816:3eff:fe38:7aae/64 scope link
       valid_lft forever preferred_lft forever
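
Since our public key was injected via cloud-init, a key-based login from the controller should work as well, something like:

[root@testvm101 ~]# ssh -i ~/.ssh/id_rsa cirros@192.168.202.26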

But what if the network is down or the SSH server is stopped on the instance? Then we need to be able to control the console of the VM in order to fix those problems. To do so, we can connect to the console of that guest using VNC, just like you would do in a “normal” KVM environment. OpenStack provides a link, containing a one-time usable token, to get access to the console.

First, we need to request such a URL from our controller for our instance:

[root@testvm101 ~]# nova get-vnc-console testvm1 novnc
+-------+--------------------------------------------------------------------------------------+
| Type  | Url                                                                                  |
+-------+--------------------------------------------------------------------------------------+
| novnc | http://192.168.203.101:6080/vnc_auto.html?token=60e8ec45-131f-46a2-9dc6-a5d7cc543ae6 |
+-------+--------------------------------------------------------------------------------------+

When we enter that URL in our browser, we can get access to the console of the testvm1 instance:

openstack_console

Stop a virtual machine on the OpenStack environment

When we want to stop one of the instances running on the compute node, we can simply shut down the instance via the console or SSH, but when the guest is hanging, we can also stop it via our controller:

[root@testvm101 ~]# nova stop testvm1

Start a virtual machine on the OpenStack environment

To start the instance again, we can do it in a similar way:

[root@testvm101 ~]# nova start testvm1
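
In the same spirit, a hanging guest can also be rebooted from the controller, either gracefully or forced:

[root@testvm101 ~]# nova reboot testvm1
[root@testvm101 ~]# nova reboot --hard testvm1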

Delete a virtual machine on the OpenStack environment

Once we no longer need a certain VM on the compute node, we can simply delete it (here an instance called vm3):

[root@testvm101 ~]# nova delete vm3

Get more information about an instance running on the OpenStack environment

To know where a certain instance is running when you have multiple compute nodes, or simply to get more specifications or information about it:

[root@testvm101 ~]# nova show testvm1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | testvm102                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | testvm102                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000001e                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-10-14T15:03:56.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-10-14T15:03:32Z                                     |
| flavor                               | m1.tiny (1)                                              |
| hostId                               | 716119735af73b4eb42747532810beb6c0d73c98eb312e15dd72d3ca |
| id                                   | 768d98a0-1166-4770-83b4-67938a84ed32                     |
| image                                | cirros (5bb931a3-4336-43c1-b9d8-3f2589cfec29)            |
| key_name                             | test-key                                                 |
| metadata                             | {}                                                       |
| name                                 | testvm1                                                  |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 860c76ef66df46aa84899a379fd2150c                         |
| test-net network                     | 192.168.202.26                                           |
| updated                              | 2014-10-14T15:15:18Z                                     |
| user_id                              | ce63eeb03cac449bb848077d7d85efb0                         |
+--------------------------------------+----------------------------------------------------------+
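
If you only need a single field, for example the compute node the instance lives on, you can filter the output; a sketch:

[root@testvm101 ~]# nova show testvm1 | awk -F'|' '/OS-EXT-SRV-ATTR:host /{gsub(/ /,"",$3); print $3}'
testvm102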

Add or remove a compute node from the OpenStack environment

To add another node to our environment, for example testvm103, we just need to follow the same instructions as we did for the first compute node, testvm102.

To remove a node which was previously added, we need to proceed as follows.

First let’s check which nodes are known to our OpenStack setup:

[root@testvm101 ~]# nova service-list
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler   | testvm101 | internal | enabled | up    | 2014-10-14T15:25:50.000000 | -               |
| nova-cert        | testvm101 | internal | enabled | up    | 2014-10-14T15:25:48.000000 | -               |
| nova-consoleauth | testvm101 | internal | enabled | up    | 2014-10-14T15:25:49.000000 | -               |
| nova-conductor   | testvm101 | internal | enabled | up    | 2014-10-14T15:25:48.000000 | -               |
| nova-compute     | testvm102 | nova     | enabled | up    | 2014-10-14T15:25:42.000000 | -               |
| nova-network     | testvm102 | internal | enabled | up    | 2014-10-14T15:25:49.000000 | -               |
| nova-console     | testvm101 | internal | enabled | up    | 2014-10-14T15:25:49.000000 | -               |
| nova-compute     | testvm103 | nova     | enabled | up    | 2014-10-14T15:25:44.000000 | -               |
| nova-network     | testvm103 | internal | enabled | up    | 2014-10-14T15:25:42.000000 | -               |
+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
[root@testvm101 ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   testvm101                            internal         enabled    :-)   2014-10-14 15:26:10
nova-cert        testvm101                            internal         enabled    :-)   2014-10-14 15:26:08
nova-consoleauth testvm101                            internal         enabled    :-)   2014-10-14 15:26:09
nova-conductor   testvm101                            internal         enabled    :-)   2014-10-14 15:26:08
nova-compute     testvm102                            nova             enabled    :-)   2014-10-14 15:26:12
nova-network     testvm102                            internal         enabled    :-)   2014-10-14 15:26:09
nova-console     testvm101                            internal         enabled    :-)   2014-10-14 15:26:09
nova-compute     testvm103                            nova             enabled    :-)   2014-10-14 15:26:04
nova-network     testvm103                            internal         enabled    :-)   2014-10-14 15:26:12

Let’s say that we no longer want to use testvm103; then we can delete it by removing its references in our database. To me this looks a little weird and it’s probably a part which isn’t completely finished in the OpenStack release used for this post (IceHouse).

[root@testvm101 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 263
Server version: 5.5.37-MariaDB MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]> delete from compute_nodes where service_id in (select id from services where host='testvm103');
Query OK, 1 row affected (0.00 sec)

MariaDB [nova]> delete from services where host='testvm103';
Query OK, 2 rows affected (0.00 sec)

MariaDB [nova]> exit
Bye

After executing the above queries on our DB, this is the result:

[root@testvm101 ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   testvm101                            internal         enabled    :-)   2014-10-14 15:27:10
nova-cert        testvm101                            internal         enabled    :-)   2014-10-14 15:27:08
nova-consoleauth testvm101                            internal         enabled    :-)   2014-10-14 15:27:09
nova-conductor   testvm101                            internal         enabled    :-)   2014-10-14 15:27:08
nova-compute     testvm102                            nova             enabled    :-)   2014-10-14 15:27:12
nova-network     testvm102                            internal         enabled    :-)   2014-10-14 15:27:09
nova-console     testvm101                            internal         enabled    :-)   2014-10-14 15:27:09
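
Note that a less invasive first step, before touching the database at all, is to disable the compute service on the node so the scheduler stops placing new instances there:

[root@testvm101 ~]# nova service-disable testvm103 nova-compute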

This post got a lot longer than I initially planned, but I hope it helps some people to understand OpenStack better and get going with a basic OpenStack setup. The product has a lot of potential but, to me, it still lacks a certain maturity for serious production environments, although some big players are already running it in production.
