Let’s Build a Cloud — Using Puppet to Install OpenStack

Last week I gave an introduction to Puppet. In this part of the series, Puppet will be used to install an entire OpenStack environment. Two servers will be used: one will be the Cloud Controller and the other will be a Compute Node.

Required Information

When creating the Puppet manifests that will be used to install and configure OpenStack, various bits of information specific to your environment will be needed. Use the following table to fill in the values. When you come across the name of the value in the manifest, replace it with your value.

Name                        Example Value
mysql_root_password         changeme
keystone_admin_token        12345
keystone_admin_password     changeme
keystone_db_password        changeme
mysql_allowed_hosts         ['127.0.0.%', '10.0.0.%']
glance_db_password          changeme
glance_keystone_password    changeme
nova_db_password            changeme
nova_keystone_password      changeme
public_interface            eth0
private_interface           eth1
cloud_controller_ip         10.0.2.5
compute_node_ip             10.0.2.6

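To illustrate the convention, a line that appears later in this article as:

  admin_token  => %keystone_admin_token%,

would be written in your own manifest with your value in its place (quoted, since Puppet expects a string here), for example:

  admin_token  => '12345',
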
Downloading the Required Puppet Modules

Before starting, ensure that your apt package repository is up to date:

root@server:~# apt-get update

Next, download the needed Puppet modules. As mentioned in the previous part, Puppet 2.7.12 can download these modules and their dependencies with the puppet command, but since Ubuntu 12.04 does not yet ship 2.7.12, the modules will be downloaded manually.

Since the standalone mode of Puppet will be used, these commands will need to be done on each of the two servers:

sudo su
cd /etc/puppet/modules
git clone https://github.com/puppetlabs/puppetlabs-mysql mysql
git clone https://github.com/puppetlabs/puppetlabs-concat concat
git clone https://github.com/puppetlabs/puppetlabs-stdlib stdlib
git clone https://github.com/puppetlabs/puppetlabs-glance glance
git clone https://github.com/puppetlabs/puppetlabs-nova nova
git clone https://github.com/puppetlabs/puppetlabs-keystone keystone
git clone https://github.com/puppetlabs/puppetlabs-horizon horizon
git clone https://github.com/puppetlabs/puppetlabs-rabbitmq rabbitmq
git clone https://github.com/puppetlabs/puppetlabs-sysctl sysctl
git clone https://github.com/saz/puppet-memcached memcached
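
As a quick sanity check, confirm that all ten modules landed in Puppet's module path (the directory names simply match the clone targets above):

root@server:~# ls /etc/puppet/modules
concat  glance  horizon  keystone  memcached  mysql  nova  rabbitmq  stdlib  sysctl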

Building the Cloud Controller

The Cloud Controller will consist of the following components:

  • Keystone
  • Glance
  • Horizon
  • nova-network
  • nova-scheduler
  • nova-objectstore
  • nova-api

Puppet will be used to install and configure each of the above services. Place all of the following Puppet code into a file called "cloudcontroller.pp".

Keystone

Keystone can store its data in several different database back-ends. SQLite was used earlier in this series; this time around, MySQL will be used. As shown in the previous part, MySQL is configured using the MySQL Puppet module. The same is done here.

# Install and Configure MySQL
class { 'mysql': }
class { 'mysql::server':
  config_hash => {
    'root_password' => %mysql_root_password%,
    'bind_address'  => '0.0.0.0',
  }
}

# Secure the MySQL installation
class { 'mysql::server::account_security': }

Next, start configuring Keystone:

# Install Keystone
class { 'keystone':
  log_verbose  => true,
  log_debug    => true,
  catalog_type => 'sql',
  admin_token  => %keystone_admin_token%,
}

Keystone needs an admin role:

# Keystone admin role
class { 'keystone::roles::admin':  
  email        => 'root@localhost',
  password     => %keystone_admin_password%,
  admin_tenant => 'admin',
}

Keystone also needs to be registered as a service and endpoint in its own catalog:

# Add a Keystone service
class { 'keystone::endpoint': }

The next two blocks will create a MySQL database for Keystone and then configure Keystone to use that database:

# Create a MySQL db for Keystone
class { 'keystone::db::mysql':
  user          => 'keystone',
  password      => %keystone_db_password%,
  dbname        => 'keystone',
  allowed_hosts => %mysql_allowed_hosts%,
}

# Have Keystone use MySQL
class { 'keystone::config::mysql':
  user     => 'keystone',
  password => %keystone_db_password%,
  host     => 'localhost',
}

It is important to change allowed_hosts to match the IP subnets of your environment.
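
For example, if your servers sit on the 10.0.2.0/24 network used in the table at the top of this article, the parameter would look something like:

  allowed_hosts => ['127.0.0.%', '10.0.2.%'],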

Just like in Part 2, creating an openrc file allows you to run Keystone and other OpenStack commands without specifying credentials each time:

# Create an openrc file
file { '/root/openrc':
  ensure    => present,
  owner     => 'root',
  group     => 'root',
  mode      => '0600',
  content   =>
"
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=%keystone_admin_password%
export OS_AUTH_URL=http://localhost:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export SERVICE_TOKEN=%keystone_admin_token%
export SERVICE_ENDPOINT=http://localhost:35357/v2.0/
"
}

One thing to note about Puppet is that it does not guarantee that your defined resources will be applied in the exact order that you specified them. Puppet can be given hints about order, though. In the above case, MySQL needs to be configured before the Keystone database. Likewise, the Keystone database needs to be configured before Keystone. To guarantee this order, add the following to the bottom of the manifest:

Class['mysql::server'] -> Class['keystone::db::mysql']
Class['keystone::db::mysql'] -> Class['keystone']

The capital letters are required.
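
If you prefer, Puppet also accepts both hints written as a single chain:

Class['mysql::server'] -> Class['keystone::db::mysql'] -> Class['keystone']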

At this point, the manifest can be applied and Keystone will be installed and configured. You are more than welcome to try it – or wait until the end. 

root@cloudcontroller:~# puppet apply --verbose cloudcontroller.pp

You can test Keystone by:

root@cloudcontroller:~# source /root/openrc
root@cloudcontroller:~# keystone tenant-list

Glance

Next is Glance. 

Just like with Keystone, SQLite was used earlier in this series. For production installs of Glance, MySQL should be used. To configure a MySQL database for Glance, add this to the bottom of the manifest:

# Configure a MySQL DB for Glance
class { 'glance::db::mysql':
  user          => 'glance',
  password      => %glance_db_password%,
  dbname        => 'glance',
  allowed_hosts => %mysql_allowed_hosts%,
}

Again, just like with Keystone, modify the allowed_hosts to suit your environment.

There are four more parts of Glance to configure. The first is the glance-api service:

# Install glance-api and configure it to use Keystone
class { 'glance::api':
  log_verbose       => 'True',
  log_debug         => 'True',
  auth_type         => 'keystone',
  keystone_tenant   => 'services',
  keystone_user     => 'glance',
  keystone_password => %glance_keystone_password%,
}

Next is the glance-registry service:

# Install glance-registry and configure it to use Keystone
class { 'glance::registry':
  log_verbose       => 'True',
  log_debug         => 'True',
  auth_type         => 'keystone',
  keystone_tenant   => 'services',
  keystone_user     => 'glance',
  keystone_password => %glance_keystone_password%,
  sql_connection    => 'mysql://glance:%glance_db_password%@localhost/glance',
}

These two parts are very similar to the manual work done in Part 2 of the series. Note the keystone_tenant, keystone_user, and keystone_password parameters: they correspond directly to the settings that were edited by hand in the Glance configuration files.
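
For reference, those three parameters end up in Glance's Keystone authentication settings roughly as in the excerpt below. The file name and section are what a stock Ubuntu 12.04 (Essex) install uses and may differ slightly on your system; it is shown only to illustrate where the values go, not as something you need to edit by hand.

# /etc/glance/glance-api-paste.ini (illustrative excerpt)
[filter:authtoken]
admin_tenant_name = services
admin_user = glance
admin_password = %glance_keystone_password%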

The third part to configuring Glance is to tell Glance to store its images on the filesystem as opposed to another storage back-end such as S3:

# Have Glance use a file storage back-end
class { 'glance::backend::file': }

The final part is to add Glance authentication and service to Keystone:

# Configure Keystone for Glance
class { 'glance::keystone::auth':  
  password         => %glance_keystone_password%,
}

Like with Keystone, add some hints so Puppet knows what to process in what order:

Class['mysql::server'] -> Class['glance::registry']

Again, you may run the manifest as it is right now. Both Glance and Keystone will be fully functional when Puppet is done. At this point, all work that was done in Part 2 has now been done by using Puppet. 

Nova

Now for Nova. The Cloud Controller will run all Nova subcomponents except for nova-compute and nova-volume.

First, configure a MySQL database for Nova:

# Configure a MySQL database for Nova
class { 'nova::db::mysql':
  user          => 'nova',
  password      => %nova_db_password%,
  dbname        => 'nova',
  allowed_hosts => %mysql_allowed_hosts%,
}

Next, configure Nova as a whole:

# Install and configure Nova
class { 'nova':
  sql_connection     => 'mysql://nova:%nova_db_password%@localhost/nova',
}

Next, tell Nova to use RabbitMQ:

# Configure Nova to use RabbitMQ
class { 'nova::rabbitmq': }

Now configure nova-api:

# Install and configure nova-api
class { 'nova::api':
  enabled        => true,
  admin_password => %keystone_admin_password%,
}

These next lines will quickly configure five other Nova subcomponents:

# Configure various nova subcomponents
class { 'nova::scheduler': enabled => true }
class { 'nova::objectstore': enabled => true }
class { 'nova::cert': enabled => true }
class { 'nova::vncproxy': enabled => true }
class { 'nova::consoleauth': enabled => true }

Next is nova-network. This part will require you to enter your own specific network configuration. I'll show two options for doing this. The first is using Nova's simple FlatDHCPManager:

# Configure nova-network
class { 'nova::network': 
  enabled            => true,
  network_manager    => 'nova.network.manager.FlatDHCPManager',
  private_interface  => %private_interface%,
  public_interface   => %public_interface%,
  fixed_range        => '10.0.0.0/8',
  num_networks       => '255',
}

FlatDHCPManager is useful if you are unable to use VlanManager (described below). Note that if you do use FlatDHCPManager, ensure that you use a dedicated NIC for the private_interface. If the private_interface is set to the same value as public_interface, that interface will lose network connectivity. 

To use Nova's VlanManager:

# Configure nova-network
class { 'nova::network': 
  enabled            => true,
  network_manager    => 'nova.network.manager.VlanManager',
  private_interface  => %private_interface%,
  public_interface   => %public_interface%,
  fixed_range        => '10.0.0.0/8',
  num_networks       => '255',
  config_overrides => {
    vlan_start => '100',
  }
}

Add Nova to Keystone:

# Configure Keystone for Nova
class { 'nova::keystone::auth':
  password => %nova_keystone_password%,
}

Finally, add a hint for Puppet:

Class['mysql::server'] -> Class['nova']

As with the last two parts, you can run the manifest now and have Keystone, Glance, and Nova installed and configured.

Horizon

The final component of the Cloud Controller is Horizon — the OpenStack web UI. Configuring it is as simple as:

class { 'horizon': }
class { 'memcached': listen_ip => '127.0.0.1' }

Once this is done, apply the manifest:

root@cloudcontroller:~# puppet apply --verbose cloudcontroller.pp

Once Puppet is done, use a web browser to go to the address of your server. You will be presented with an OpenStack login page. Log in with your Keystone admin credentials (the admin user and the keystone_admin_password you chose).
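
If you would like a quick check from the command line first, something like the following should return an HTTP response from the dashboard (the /horizon path is where the Ubuntu packages serve it by default; adjust it if your install differs):

root@cloudcontroller:~# curl -I http://localhost/horizon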

Add an Image to Glance

Use this quick snippet to add Ubuntu 12.04 to Glance. Once the Compute Node is running, you will be able to then launch an Ubuntu instance:

root@cloudcontroller:~# cd 
root@cloudcontroller:~# source openrc
root@cloudcontroller:~# wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
root@cloudcontroller:~# glance add name="Ubuntu 12.04 cloudimg amd64" is_public=true container_format=ovf disk_format=qcow2 < ubuntu-12.04-server-cloudimg-amd64-disk1.img
root@cloudcontroller:~# glance index

Building a Compute Node

Once the Cloud Controller is up and running, it's time to build a Compute Node.

First Steps

If the Cloud Controller will also be the Compute Node, you can proceed to the next section. If not, follow the earlier steps to make sure Puppet and git are installed on the second server and to download all of the required Puppet modules.
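
As a rough sketch, those first steps on the new server (assuming the same Ubuntu 12.04 base) look like this, followed by the same git clone commands listed earlier in this article:

root@compute:~# apt-get update
root@compute:~# apt-get install -y puppet git
root@compute:~# cd /etc/puppet/modules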

Next, create a new manifest on the second server and start with the following:

# Install MySQL client libraries
class { 'mysql': }
class { 'mysql::python': }

# Nova base configuration
class { 'nova':
  sql_connection => 'mysql://nova:%nova_db_password%@%cloud_controller_ip%/nova',
  rabbit_host    => %cloud_controller_ip%,
}

nova-compute

nova-compute handles the management of virtual machines. To configure it, begin with:

class { 'nova::compute':
  enabled => true,
}

Next, configure the virtualization part of nova-compute:

class { 'nova::compute::libvirt':
  libvirt_type => 'qemu',
}

If you are working on a virtual server, use 'qemu' for the libvirt_type. If you are using a physical server with hardware virtualization, you may use 'kvm' instead.
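
For a physical server, the same class would look like this (just a sketch; kvm is the usual libvirt type on bare metal with hardware virtualization enabled):

class { 'nova::compute::libvirt':
  libvirt_type => 'kvm',
}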

nova-volume

The final part to configure is nova-volume. nova-volume allows the creation of "storage volumes": secondary storage devices that can be attached to your running instances. Think of them as external hard drives that you plug into your computer, except virtualized.

There are a few different ways to use nova-volume. For this series, I'll explain how to use LVM volumes.

During Ubuntu's installation, you may choose to dedicate a partition to LVM storage. If the resulting volume group is named "nova-volumes", then Nova will automatically use it for volume storage.

If this was not done during installation, a volume group can be created another way. Note that this approach is useful only for testing purposes; it would be too slow for production:

root@compute:~# apt-get install lvm2
root@compute:~# dd if=/dev/zero of=/tmp/nova-volumes.img bs=1M seek=5120 count=0 && /sbin/vgcreate nova-volumes `/sbin/losetup --show -f /tmp/nova-volumes.img`

This creates a sparse 5GB file, attaches it to a loopback device, and builds a volume group named nova-volumes on top of it.

You can verify that the volume group exists by typing "vgs".

Note that this volume group will be removed when the server reboots, and the command will need to be re-run.
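
If you would rather not retype the full command after a reboot, re-attaching the loopback device is usually enough for LVM to find the volume group again, assuming the image file in /tmp survived the reboot:

root@compute:~# /sbin/losetup --show -f /tmp/nova-volumes.img
root@compute:~# /sbin/vgchange -ay nova-volumes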

Once there is a nova-volumes volume, add the following to the manifest:

# Install and configure nova-volume
class { 'nova::volume': enabled => true }

# Use ISCSI / LVM volumes
class { 'nova::volume::iscsi': }

Compute Node Networking

There is very little that needs to be done to configure the Compute Node for networking.

If you chose to use FlatDHCPManager, add the following to the manifest:

nova_config { 'public_interface': value => %public_interface% }
nova_config { 'flat_interface': value => %private_interface% }

If you chose to use VlanManager, add the following to the manifest:

nova_config { 'vlan_interface': value => %private_interface% }

Once the above has been added, you can apply the manifest:

root@compute:~# puppet apply --verbose computenode.pp

When Puppet has finished, the compute node will be up and running. You can verify that it has contacted the Cloud Controller by:

root@cloudcontroller:~# nova-manage service list

You should see a result like:

Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth test1.example.com                    nova             enabled    :-)   2012-07-12 17:37:55
nova-scheduler   test1.example.com                    nova             enabled    :-)   2012-07-12 17:37:57
nova-network     test1.example.com                    nova             enabled    :-)   2012-07-12 17:37:58
nova-cert        test1.example.com                    nova             enabled    :-)   2012-07-12 17:37:59
nova-compute     test2.example.com                    nova             enabled    :-)   2012-07-12 17:37:59
nova-volume      test2.example.com                    nova             enabled    :-)   2012-07-12 17:38:01

At this point, you should be able to log in to the Horizon web interface and launch an instance. Congratulations! You have just built a cloud.
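
If you prefer the command line over Horizon, a first instance can also be launched from the Cloud Controller. This is only a sketch: it assumes the stock m1.tiny flavor and substitutes the image ID reported by glance index for the placeholder below.

root@cloudcontroller:~# source /root/openrc
root@cloudcontroller:~# glance index
root@cloudcontroller:~# nova boot --flavor m1.tiny --image <image_id_from_glance_index> test-instance
root@cloudcontroller:~# nova list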

Conclusion

This part of the series explained how to build a two-node OpenStack cloud with Puppet. By utilizing several contributed Puppet modules and by describing OpenStack's configuration in Puppet manifests, almost the entire installation and configuration of the cloud was automated. The Puppet manifests created in this series can easily be re-used for other cloud environments, and you can add more compute nodes to your cloud simply by distributing the computenode.pp manifest to more servers.

This is part four in the Let's Build a Cloud series. To start from the beginning, please click here.