Start with a simple 2-node OpenStack setup with KVM

OpenStack is something that gets more and more in the picture, and even if you're only a little interested in the latest technologies, you must have heard of OpenStack here or there. But what is it exactly and, more importantly, how does it work in practice? The best way to figure that out is just to get going with it, install it and play around. Here you can find a brief explanation and a tutorial or walkthrough to deploy a small OpenStack environment on top of CentOS 7 or RHEL 7.

I've heard a lot about OpenStack since almost every big vendor is somehow involved and is actively promoting OpenStack at events or on their website, but I never really got into the details. To be honest, it always stayed a little vague to me how it is really implemented, whether it's production ready, what it runs on top of and, more importantly, whether it is interesting to take a look at.

Getting started with OpenStack is not as easy as advertised. After all, it is a very big project with a lot of components that each have a different purpose, and you need some time to get used to the terminology and find a way through the massive stream of information, and especially marketing, around OpenStack. The information provided is not always clear and in a lot of cases it's very high level. There are ready-packaged OpenStack solutions like DevStack in case you want to get going very fast.

I decided to not go the DevStack way but to install all components “by hand” to try and understand everything a little better. For the small test setup, I used the latest stable release at time of writing: IceHouse.

Since this post became much bigger than I originally planned, I decided to add a small index here with some anchor/jumplinks to the correct place in the text:

OpenStack components

Component – Purpose

Nova – Compute: can be compared to or linked to a hypervisor
Swift – Object storage: similar to a cluster filesystem
Cinder – Block storage: provides block storage for compute nodes
Neutron – Networking
Horizon – Dashboard: web interface to control OpenStack
Keystone – Identity service: keeps track of all services and objects
Glance – Image service: stores disk and server images
Ceilometer – Telemetry: accounting and metering
Heat – Orchestration: template-based and multi-image actions
Trove – Database as a service
Ironic – Bare metal provisioning
Zaqar – Multi-tenant cloud messaging
Sahara – Elastic Map Reduce: controls clusters

A simple OpenStack setup

You always need to start simple with something new, and that's why I decided to start with a two-node setup: one node which will act as the OpenStack controller and another node which will be my compute node. On the compute node, it should be possible to deploy one or more virtual machines, or instances as they are called in the OpenStack world, and to deploy, add, remove and control them from the controller.

Regarding networking, the same ideas apply and I decided to not use Neutron, which implements a complete software network layer. Instead, I will use the (almost deprecated) legacy nova-network. The use of nova-network implies that our controller only needs an interface in the management network, while the compute node needs one interface in the management network and another to communicate with the outside world.

Overview of the test setup

[Image: openstack_setup – diagram of the two-node test setup]

During the test, I used IP-addresses everywhere to not get confused with names, hosts or DNS problems.

All required passwords are set to secretpass, so in your own setup you'll need to replace the IPs and passwords with your own.

This is the output regarding ip configuration on both nodes:

Preparation of the controller and compute node

Before starting with the setup, I want to make sure that nothing is blocking or disturbing us during the test, so I chose to disable the firewall and SELinux on both nodes. For the rest of the experiment, I will work as root. Not that I like this approach, but searching for hours because SELinux or the firewall blocks something isn't something I like either.
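
A sketch of what that looks like on both nodes (SELinux is only set to permissive for the duration of the test):

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config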

One of the requirements of the nodes in the OpenStack setup is that their clocks are synchronized, so I decided to set up NTP and let the controller act as the time source for all other nodes.

To set up NTP, we'll start by installing the ntp package on both the controller and the compute node:
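
For example:

    yum -y install ntp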

On the controller, we can leave the configuration in /etc/ntp.conf as it is, since by default it uses the time from 4 servers of pool.ntp.org.

On the compute node, we need to get the time from our controller so we need to change the /etc/ntp.conf and let it use the time from the controller node. We need to remove or comment the existing server-directives in the file and add a reference to our controller:
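
A sketch of the change in /etc/ntp.conf on the compute node, assuming the controller's management IP is 192.168.203.101:

    #server 0.centos.pool.ntp.org iburst   <- comment out the existing server directives
    server 192.168.203.101 iburst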

Before starting the ntp daemon on the controller, I manually synchronize the time to be sure that I won't get any delays getting in sync with pool.ntp.org:

After syncing the time, we’ll start the ntp daemon and enable it to start at boot time:

Now that our NTP daemon is running on the controller, we can do the same on the compute node: manually sync the time to avoid grace periods and then start and enable the ntp daemon:
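
Something like this, again assuming 192.168.203.101 for the controller:

    ntpdate -u 192.168.203.101
    systemctl start ntpd && systemctl enable ntpd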

Database (controller)

Data which will be used for our OpenStack components and nodes needs to be stored somewhere, even clouds need some place to keep everything :) So we’ll start by installing and configuring MariaDB on the controller.

The first step is, as always, to install the necessary packages:
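
For the controller that means the MariaDB server and the Python bindings:

    yum -y install mariadb mariadb-server MySQL-python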

After the installation, we need to configure MariaDB to listen on the management IP-address on the controller and to use UTF-8 by default. For that, we need to adjust /etc/my.cnf to look as follows:
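
A sketch of the relevant part of /etc/my.cnf, assuming 192.168.203.101 as the controller's management IP:

    [mysqld]
    bind-address = 192.168.203.101
    default-storage-engine = innodb
    innodb_file_per_table
    collation-server = utf8_general_ci
    init-connect = 'SET NAMES utf8'
    character-set-server = utf8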

After that, we can start and enable MariaDB on boot:
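
With systemd that's:

    systemctl start mariadb && systemctl enable mariadb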

After starting MariaDB, we need to go through the standard installation by running:

I had some problems just running this step and got an error message regarding the aria control file. Deleting the offending file solved the problem:
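
The file in question is typically the Aria log control file under the MariaDB data directory (your path may differ):

    rm /var/lib/mysql/aria_log_control
    systemctl restart mariadb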

After the initial setup, we still need to configure the root password of the database and remove the anonymous access. If we don’t remove the anonymous access, we’ll get into problems later:
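
A sketch of doing this directly in SQL from a mysql -u root session (using the secretpass password of this setup):

    SET PASSWORD FOR 'root'@'localhost' = PASSWORD('secretpass');
    DELETE FROM mysql.user WHERE User = '';
    FLUSH PRIVILEGES;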

Database (compute node)

On the compute node, the only component which we need to install regarding the database connectivity is the package MySQL-python:
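
That is simply:

    yum -y install MySQL-python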

Install the necessary repositories and base OpenStack packages (controller + compute node)

The OpenStack packages can’t be found in the standard CentOS or RHEL repositories so we need to add two repositories to get all necessary packages. One of them is EPEL and the other is from OpenStack itself. The release I’ll use for this test is OpenStack IceHouse.

To install the repositories and GPG-keys:
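
A sketch of what this looks like; the exact release RPM names and URLs change over time, so check the EPEL and RDO sites for the current ones:

    yum -y install epel-release
    yum -y install https://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm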

After adding the repositories, we can install the base openstack packages which contain the OpenStack utilities.
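
The openstack-utils package is the important one here, since it provides the openstack-config tool used throughout the rest of this post:

    yum -y install openstack-utils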

After installing the packages, it’s important to install the latest updates by doing a normal yum update. In case a newer kernel was installed, reboot and choose to use that kernel:

Message broker (MQ) (controller)

In order for all components in our OpenStack environment to communicate with each other, we’ll install a message broker. Qpid is a good choice.

Install the packages:
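
For Qpid that's:

    yum -y install qpid-cpp-server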

Disable the need for authentication
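
This is a single setting in the Qpid configuration file (the path may be /etc/qpidd.conf on older package versions):

    # /etc/qpid/qpidd.conf
    auth=no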

Start the message broker and enable it on boot:
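
With systemd:

    systemctl start qpidd && systemctl enable qpidd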

OpenStack services

Now that our controller is prepared, we can finally start installing our OpenStack components. The installation usually consists of the following steps:

  • Install the component (openstack-<component>) + the command line client for the component (python-<component>client)
  • Configure the component to use the MySQL database
  • Add a database and tables for the component in the MySQL database
  • Register the component in the identity service
  • Start the component
  • Test the component

Keystone – Identity service (controller)

The identity service is responsible for keeping track of all components that exist in your OpenStack environment. All services, components and nodes need to register to the ID-service in order to be used.

Again, we’ll start with installing all necessary packages:
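
For Keystone:

    yum -y install openstack-keystone python-keystoneclient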

The ID-service needs to be able to save its data somewhere and for that we'll use our previously configured database:

First we’ll configure Keystone to use our database:
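
A sketch using openstack-config, assuming the keystone DB user gets the password secretpass and the controller is 192.168.203.101:

    openstack-config --set /etc/keystone/keystone.conf database \
      connection mysql://keystone:secretpass@192.168.203.101/keystone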

After that, we need to actually create the database used for Keystone and give access to the keystone user with the password chosen earlier:
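
For example, from a mysql -u root -p session:

    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'secretpass';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'secretpass';
    FLUSH PRIVILEGES;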

After creating the database, we’ll let Keystone create the tables in the DB:
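
That is done with the keystone-manage tool:

    su -s /bin/sh -c "keystone-manage db_sync" keystone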

Now that the service itself is set up, we need to create an authentication token to authenticate between Keystone and other services:

We’ll generate a random sequence of hex-numbers and store it in a variable:
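
For example:

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN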

Then we can use the variable and change the configuration of Keystone to use our token:
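
Using openstack-config again:

    openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN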

We want Keystone to provide PKI tokens to authenticate so we’ll configure it to do so:
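
A sketch of the PKI setup and the permissions on the generated certificates:

    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /var/log/keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl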

Once everything is configured, we can start the Keystone service. It will listen for requests on port 35357.
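
Starting and enabling the service:

    systemctl start openstack-keystone && systemctl enable openstack-keystone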

Expired tokens are kept in Keystone forever and in a larger environment this will cause our tables to become rather large. To resolve this problem, we'll schedule a job with cron to regularly clean up the expired tokens.
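
A sketch of such a cron job (the hourly token_flush job suggested by the IceHouse documentation):

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
      echo '@hourly /usr/bin/keystone-manage token_flush > /var/log/keystone/keystone-tokenflush.log 2>&1' \
      >> /var/spool/cron/keystone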

We have our identity service up and running so it’s time to add some users and roles to the Keystone database.

To use keystone, we need to provide our earlier generated admin_token when executing actions on Keystone. There are two ways to use Keystone, and most other OpenStack components.

The first is to export the connection and authentication information in environment variables and execute a limited command on the CLI. The second requires a larger list of arguments but saves you the work of exporting the variables. So when you know that you’ll need to execute multiple Keystone commands, it’s interesting to export the information in environment variables. When it’s only one execution, better to provide the information as an argument.

To use keystone without environment variables, we can provide the information as arguments to our keystone-command:
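
For example, listing users with only command line arguments (the generated token is still in $ADMIN_TOKEN):

    keystone --os-token $ADMIN_TOKEN \
      --os-endpoint http://192.168.203.101:35357/v2.0 user-list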

To use Keystone with the exported variables, we'll export them and simply not provide the arguments:
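
Something like:

    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://192.168.203.101:35357/v2.0
    keystone user-list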

In case you receive an error message (HTTP 500), check if the values for the endpoint and admin-token match the ones in /etc/keystone/keystone.conf or check the messages in /var/log/keystone/keystone.log.

Now that we've learned how to use Keystone, let's add the admin user, the admin role and an admin tenant.
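
A sketch of those three additions (the admin password is secretpass, as everywhere in this setup):

    keystone user-create --name=admin --pass=secretpass --email=root@localhost
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"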

Now let’s link the user with the role and tenant:
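
That's a single command:

    keystone user-role-add --user=admin --tenant=admin --role=admin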

To test if the Keystone service is working as expected and that the addition of the user was successful, we can try to authenticate at the Keystone service with the user and request a token. To stop authenticating with our previously generated token, we need to be sure that those environment variables are unset before executing the new command:
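
For example:

    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
    keystone --os-username=admin --os-password=secretpass --os-tenant-name=admin \
      --os-auth-url=http://192.168.203.101:35357/v2.0 token-get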

To make life a little easier, we can create a small file which we can source every time we need the authentication variables to be set:
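
The contents of that file, ~/admin-openrc.sh, for this setup:

    export OS_USERNAME=admin
    export OS_PASSWORD=secretpass
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://192.168.203.101:35357/v2.0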

To use the variables, we can simply do:
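
That is:

    source ~/admin-openrc.sh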

The idea of the ID-service is to register all components in our OpenStack environment and the first one which we can register is the identity service itself. Before we can do that, we need to create a tenant for the services:
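
A sketch of the service tenant and the registration of Keystone itself, including its endpoint:

    keystone tenant-create --name=service --description="Service Tenant"
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
      --publicurl=http://192.168.203.101:5000/v2.0 \
      --internalurl=http://192.168.203.101:5000/v2.0 \
      --adminurl=http://192.168.203.101:35357/v2.0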

Glance – image service (controller)

The goal of our small project is to create virtual machines running on the compute node. To deploy new virtual machines, we’ll need to provide some kind of source to install the operating system in our new guest. We’ll use Glance, the image service-component to store that source.

First we need to install the Glance component and the command line tools for Glance:
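
For Glance:

    yum -y install openstack-glance python-glanceclient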

After the installation, we’ll configure Glance to use our MySQL database, just as we did for the Keystone component:
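
Glance has two configuration files that both need the database connection (sketch, same conventions as before):

    openstack-config --set /etc/glance/glance-api.conf database \
      connection mysql://glance:secretpass@192.168.203.101/glance
    openstack-config --set /etc/glance/glance-registry.conf database \
      connection mysql://glance:secretpass@192.168.203.101/glance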

Of course we also need to actually create the database in MySQL:
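
Analogous to the Keystone database, from a mysql -u root -p session:

    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'secretpass';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'secretpass';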

After creating the database, we can let Glance create its necessary tables in the database:
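
Again with the service's own manage tool:

    su -s /bin/sh -c "glance-manage db_sync" glance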

Glance needs to be able to identify itself, so we'll create a user for Glance in Keystone:
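
For example, adding the user to the service tenant with the admin role:

    keystone user-create --name=glance --pass=secretpass --email=root@localhost
    keystone user-role-add --user=glance --tenant=service --role=admin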

Next step is to configure Glance to use the identity service (Keystone) to do authentication:
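
A sketch of the keystone_authtoken settings for glance-api.conf; the same settings need to be repeated in /etc/glance/glance-registry.conf:

    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.203.101:5000
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host 192.168.203.101
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password secretpass
    openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone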

Finally, we can add Glance to the list of services:
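
A sketch of the service and endpoint registration (Glance listens on port 9292):

    keystone service-create --name=glance --type=image --description="OpenStack Image Service"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ image / {print $2}') \
      --publicurl=http://192.168.203.101:9292 \
      --internalurl=http://192.168.203.101:9292 \
      --adminurl=http://192.168.203.101:9292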

After all these configuration steps, we can start and enable Glance:
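
Both the API and the registry service:

    systemctl start openstack-glance-api openstack-glance-registry
    systemctl enable openstack-glance-api openstack-glance-registry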

Once our image service is running, we can add an image to Glance to test its functionality.

Often, in a lot of tutorials, Cirros is used as a small Linux test distribution. It’s a custom built distro based on the Ubuntu kernel and it’s designed to be small (about 30MB in memory).

First we’ll download the image and check which type of image it is:
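
A sketch, assuming the CirrOS 0.3.2 image that was current around the IceHouse release (adjust the version to whatever is available):

    wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
    file cirros-0.3.2-x86_64-disk.img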

The above output tells us that the CirrOS image is a QCOW v2 image. When adding the image to Glance, we need to specify this. Possible types are: raw, vhd, vmdk, vdi, iso, qcow2, aki, ari & ami.
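
A sketch of adding the image under the name cirros (the name used further on in this post):

    glance image-create --name cirros --disk-format qcow2 \
      --container-format bare --is-public True \
      --file cirros-0.3.2-x86_64-disk.img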

After adding the downloaded CirrOS image to Glance, we can delete it from the system.

Nova – Compute service (controller)

Before we can start deploying our actual compute node, we first need to configure the necessary components on the controller to support the compute node.

Install the packages that are necessary:
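
The controller-side Nova packages and the client (sketch; IceHouse package names):

    yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
      openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
      python-novaclient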

When all packages are in place, we'll configure Nova to use our previously configured MySQL server:
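
Same pattern as before:

    openstack-config --set /etc/nova/nova.conf database \
      connection mysql://nova:secretpass@192.168.203.101/nova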

Besides our database, we want nova to use the Qpid message broker as our other components do:
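
A sketch of the Qpid-related settings:

    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname 192.168.203.101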

To be able to see the console of virtual machines running on our compute node, we’ll use VNC, which needs to be configured as well:
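
A sketch of the VNC-related settings on the controller (192.168.203.101 is the controller's management IP):

    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.203.101
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.203.101
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.203.101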

Besides telling Nova to use MySQL, it needs an actual database to store its data, so we'll create the database:
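
From a mysql -u root -p session:

    CREATE DATABASE nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'secretpass';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'secretpass';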

After adding the database, we’ll create the tables needed by Nova:
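
As with the other components:

    su -s /bin/sh -c "nova-manage db sync" nova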

To keep our configuration uniform and work as it officially should, we'll use Keystone, our identity service, to do authentication in Nova. The first step is to create a user and role for Nova:
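
For example:

    keystone user-create --name=nova --pass=secretpass --email=root@localhost
    keystone user-role-add --user=nova --tenant=service --role=admin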

Now that Nova has its dedicated user in Keystone, we'll configure it to check and do authentication with Keystone:
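
A sketch of the relevant nova.conf settings:

    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.203.101:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.203.101
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password secretpass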

As with other components of our OpenStack setup, we’ll also register Nova in Keystone:
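
A sketch of the registration; note the %(tenant_id)s placeholder in the endpoint URLs:

    keystone service-create --name=nova --type=compute --description="OpenStack Compute"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
      --publicurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s \
      --internalurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s \
      --adminurl=http://192.168.203.101:8774/v2/%\(tenant_id\)s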

Regarding Nova, the configuration on our controller is done so we can start the services which we just installed:
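
The list of controller-side Nova services (sketch):

    systemctl start openstack-nova-api openstack-nova-cert openstack-nova-consoleauth \
      openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
    systemctl enable openstack-nova-api openstack-nova-cert openstack-nova-consoleauth \
      openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy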

The installation of the Nova-component on the controller node is almost finished. In order to make sure that we did everything as we should, we can verify some things:

To list the services that are part of Nova and verify their status:
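
That's a one-liner:

    nova service-list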

To check if Nova can communicate with Glance:
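
Similarly:

    nova image-list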

Networking for Nova on the controller

As mentioned at the start of this post, I decided to keep things simple (it's already complicated enough) and not use the Neutron component for networking. Instead we'll use the legacy nova-networking. In order to use nova-networking, we need to tell Nova that we want that:
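
A sketch of the two settings involved, followed by a restart of the affected services:

    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
    systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor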

Besides telling Nova that we want to use the legacy networking, we also need to define a network that will be used for our guests. The network must be a part of the subnet which is used for the external interface on the compute node. In my case: 192.168.202.0.

I chose to take a part of this subnet and use a /28 (255.255.255.240) subnetmask, this allows enough IP-addresses for 14 hosts.
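
A sketch of the command; the exact /28 range you pick from the 192.168.202.0 subnet is up to you, and the bridge name br100 is reused later on the compute node:

    nova network-create test-net --bridge br100 --multi-host T \
      --fixed-range-v4 192.168.202.32/28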

To verify the added network, called test-net:
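
A quick check:

    nova net-list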

There is a default security group on our controller, which was automatically added. This security group is rather strict so we’ll change it a little to allow ping and SSH to the network which we just added:
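
A sketch of the two rules, opening ICMP and TCP port 22 for the default security group:

    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0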

Nova – Compute service (compute node)

After going through many steps to prepare, mostly, our controller, it's finally time to configure our compute node. This node will actually run the virtual machines which we want to deploy in our OpenStack environment.

Before we can run KVM on the compute node, it’s a good thing to check if that machine supports one of the types of virtualisation extensions. You can find more information in a previous post about KVM.
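
A quick check; any result greater than 0 means the CPU exposes the VT-x/AMD-V extensions:

    egrep -c '(vmx|svm)' /proc/cpuinfo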

After verifying that your host is capable of running KVM, install the required packages for the compute node. Only one package needs to be installed, but it contains a whole list of dependencies (200+) to install the complete compute environment.
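
That single package is:

    yum -y install openstack-nova-compute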

When all packages are installed and available on our compute node, we'll configure the node. The individual steps are listed below, with a combined configuration sketch after them:

Connection information for the database:

Configure to use Keystone for authentication:

Configure to use the Qpid MQ for messaging:

Allow access to the consoles of running virtual machines over VNC:

Tell the node where to find our image service:
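
Putting the five steps above together in one sketch, assuming the controller at 192.168.203.101 and this compute node at 192.168.203.102 on the management network:

    openstack-config --set /etc/nova/nova.conf database \
      connection mysql://nova:secretpass@192.168.203.101/nova
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.203.101:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.203.101
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password secretpass
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname 192.168.203.101
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.203.102
    openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.203.102
    openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.203.101:6080/vnc_auto.html
    openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.203.101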

The above steps are all it takes to configure the compute node as far as the compute component is concerned. Now we can start and enable the services:
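
That means libvirt and the compute service itself:

    systemctl start libvirtd openstack-nova-compute
    systemctl enable libvirtd openstack-nova-compute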

Networking for Nova on the compute node

As with the controller, we need to configure the compute node to use nova-networking and not Neutron.

Install the required packages:
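
For legacy networking on the compute node (sketch):

    yum -y install openstack-nova-network openstack-nova-api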

For the configuration, we first need to determine the name of the interface that will be used for external connections. As mentioned in the beginning, this is the interface in the 192.168.202.0 subnet.

So in my example, eno16777736 is the interface which we need to configure for nova-networking.
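
A sketch of the legacy-networking settings in /etc/nova/nova.conf on the compute node, reusing the br100 bridge from the network we created on the controller:

    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
    openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
    openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
    openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
    openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
    openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
    openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
    openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eno16777736
    openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eno16777736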

The last step is to start the service which we just configured:
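
Which is the network service together with the metadata API:

    systemctl start openstack-nova-network openstack-nova-metadata-api
    systemctl enable openstack-nova-network openstack-nova-metadata-api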

Deploy a new virtual machine to the OpenStack environment

Finally, and that's the least we can say, our OpenStack setup is ready to get going. The idea is to run some workload on our small setup and for that we'll start with deploying our first virtual machine on the compute node.

Make sure that you have the correct environment variables if you restarted your session:

The CirrOS distro which we downloaded earlier is capable of receiving a public key and adding it to its authorized keys for SSH. So, we'll generate a public key for the controller in order to add it to the VM. This service is provided by the cloud-init package, which should exist in the image.
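
A sketch; the keypair name controller-key is just an example:

    ssh-keygen
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub controller-key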

To generate the command which we'll use to deploy or add the new instance, we'll check whether all objects are present and whether the names match what we created earlier.

Check the SSH-keypair:

To deploy a new instance, Nova uses flavors. A few predefined flavors exist so we’ll list them in order to choose one of them:

The images available on Glance, our image service:

The list of networks which we added:

The security groups which we can use:

Now that we know all the information to build our new instance or VM, we can construct the command to actually execute that action. Pay attention that for the network, we can't use the name but need to specify the ID:
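
A sketch of the boot command; replace <ID-of-test-net> with the ID shown by nova net-list, and controller-key with whatever you named the keypair:

    nova boot --flavor m1.tiny --image cirros --nic net-id=<ID-of-test-net> \
      --security-group default --key-name controller-key testvm1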

After executing this command, our image is deployed to a new instance. We can check the status of the deployment as follows:
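
By listing the instances:

    nova list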

In the above output, which we got just after deploying the new instance, we can see that the status is BUILD and that the instance is busy spawning.

After a while, we should get a status ACTIVE which means that our new VM is deployed, booted and ready to use.

Use a virtual machine on the OpenStack environment

Now that we have an instance running on our freshly installed OpenStack setup, we can start using it.

There are several ways to use the instance. The most convenient is to connect with SSH to the IP-address which we can see in the output of the nova list command:
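
For example, with the key we injected earlier (cirros is the default user in the CirrOS image):

    ssh cirros@<instance-IP>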

But what if the network is down or the SSH server gets stopped on the instance? Then we need to be able to control the console of the VM in order to fix those problems. To do so, we can connect to the console of that guest using VNC, just like you would do in a "normal" KVM environment. OpenStack has a mechanism to provide a link, containing a one-time usable token, to get access to the console.

First, we need to request such URL from our controller, for our instance:
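
For the noVNC console that's:

    nova get-vnc-console testvm1 novnc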

When we enter that URL in our browser, we can get access to the console of the testvm1 instance:

[Image: openstack_console – VNC console of the testvm1 instance]

Stop a virtual machine on the OpenStack environment

When we want to stop one of the instances running on the compute environment, we can simply shut down the instance via the console or SSH, but when the guest is hanging, we can stop it via our controller:
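
For our test instance:

    nova stop testvm1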

Start a virtual machine on the OpenStack environment

To restart the instance, we can do this in a similar way:
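
Which is:

    nova start testvm1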

Delete a virtual machine on the OpenStack environment

Once you no longer need a certain VM on the compute node, we can simply delete it:
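
For example:

    nova delete testvm1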

Get more information about an instance running on the OpenStack environment:

To know where a certain instance is running when you have multiple compute nodes or to simply get more specifications or information about it:
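
The show subcommand gives the details, including the host the instance runs on:

    nova show testvm1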

Remove a compute node from the OpenStack environment

To add another node to our environment, for example testvm103, we just need to follow the same instructions as provided to add the first compute node, testvm102.

To remove a node which is added, we need to do this as follows.

First let’s check which nodes are known to our OpenStack setup:
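
For example, either of these shows the known compute hosts:

    nova service-list
    nova hypervisor-list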

Let's say that we no longer want to use testvm103; then we can delete it by removing its references in our database. To me this looks a little weird and it's probably a part which isn't completely finished in the OpenStack release used for this post (IceHouse).
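
A sketch of what those queries could look like, run against the nova database (table names taken from the IceHouse nova schema; double-check before deleting anything):

    mysql -u root -p nova
    DELETE FROM compute_nodes WHERE hypervisor_hostname = 'testvm103';
    DELETE FROM services WHERE host = 'testvm103';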

After executing the above queries on our DB, this is the result:

This post got a lot longer than I initially planned, but I hope it can help some people to understand OpenStack better and get them going with a basic OpenStack setup. The product has a lot of potential but, to me, it still lacks a certain maturity to use it for serious production environments, although some big players are already doing that.

16 thoughts on “Start with a simple 2-node OpenStack setup with KVM”

    • Thanks for letting me know. This must’ve happened when I created the anchors since I copied all content to a normal text-editor to do some find-replace :)

  1. I have a problem in the basic setup .
    I am using oracle virtual box and have tried to connect 3 vms .(Fedora 20).
    But they don’t get connected..what network adapters do i need to use and what other configuration settings do i need to do??

    • My knowledge of VirtualBox is rather limited, that's why the tutorial was with KVM :) Are you sure you're talking about OpenStack? I'm not sure that the OpenStack RDO images support VirtualBox directly. Are you using Qemu to communicate with it?

  2. [root@testvm101 ~]# cat ~/admin-openrc.sh
    export OS_USERNAME=admin
    export OS_PASSWORD=secretpass
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL="http://192.168.203.101:35357/v2.0"
    Better use quotation marks for the OS_AUTH_URL.
    Without them you get this message:
    WARNING:keystoneclient.httpclient:Failed to retrieve management_url from token

  3. [root@controller ~]# openstack-status
    == Nova services ==
    openstack-nova-api: active
    openstack-nova-cert: active
    openstack-nova-compute: inactive (disabled on boot)
    openstack-nova-network: inactive (disabled on boot)
    openstack-nova-scheduler: active
    openstack-nova-volume: inactive (disabled on boot)
    openstack-nova-conductor: active
    == Glance services ==
    openstack-glance-api: active
    openstack-glance-registry: active
    == Keystone service ==
    openstack-keystone: active
    == Support services ==
    mysqld: inactive (disabled on boot)
    libvirtd: active
    dbus: active
    qpidd: active
    == Keystone users ==
    Warning keystonerc not sourced
    [root@controller ~]# keystone user-list
    +----------------------------------+--------+---------+----------------+
    |                id                |  name  | enabled |     email      |
    +----------------------------------+--------+---------+----------------+
    | 67111d22ca1f47308d16170a3ba828de | admin  |   True  | root@localhost |
    | 178fe34256f246bcb78167a181dda5d7 | glance |   True  | root@localhost |
    | cf06a06f0131448e8704797560e5a295 | nova   |   True  | root@localhost |
    +----------------------------------+--------+---------+----------------+
    [root@controller ~]# nova service-list
    +------------------+------------+----------+---------+-------+----------------------------+-----------------+
    | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
    +------------------+------------+----------+---------+-------+----------------------------+-----------------+
    | nova-cert        | controller | internal | enabled | up    | 2015-06-18T02:27:43.000000 | -               |
    | nova-scheduler   | controller | internal | enabled | up    | 2015-06-18T02:27:43.000000 | -               |
    | nova-consoleauth | controller | internal | enabled | up    | 2015-06-18T02:27:43.000000 | -               |
    | nova-conductor   | controller | internal | enabled | up    | 2015-06-18T02:27:43.000000 | -               |
    +------------------+------------+----------+---------+-------+----------------------------+-----------------+
    [root@controller ~]# nova image-list
    +--------------------------------------+--------+--------+--------+
    | ID                                   | Name   | Status | Server |
    +--------------------------------------+--------+--------+--------+
    | 05c4a105-7129-4f69-a506-696c9732789a | cirros | ACTIVE |        |
    +--------------------------------------+--------+--------+--------+
    [root@controller ~]# nova network-create test-net --bridge br100 --multi-host T --fixed-range-v4 192.168.203.10/29
    ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-4fda15a6-7d00-4a53-9a1d-a778ec73239f)

    please help me out … I get the above error; the installation is on CentOS 7

  4. Did you implement using VMWare/VirtualBox?. And how many nodes do I need to implement this?. What should be the h/w s/w specifications for the nodes?.
    Thank you

    • You can implement it on VMWare, don’t know about VirtualBox. But you’ll need to allow nested VM-extensions (see the post). This isn’t very performant but good for testing purposes. You will need at least 2 nodes (see overview of the test setup). Specs are not very important. It depends how much workload you plan to run on the nodes. I would say that 1 CPU/1GB RAM and 15GB disk space should be enough.

      • you can create on VirtualBox via vagrant and you just have to change on compute in /etc/nova/nova.conf from:
        virt_type=kvm
        to
        virt_type=qemu

  5. Hello,

    Thank you for the nice article !

    However what are these characters on some of the lines of commands :

    >

    Can you please explain ?

    Thanks !

    • &gt; is the > sign in HTML, so that's probably a copy/paste problem or something with the WordPress theme. I'll have a look to correct this. I think in 99% of the cases, you can just ignore it :)
