OpenConext Deployment: Part #2

By Andrew Klaus, former security advisor, Cybera

In part one, I talked about what myUnifiED is and the various tools Cybera has used to deploy it.

In this post, I’ll be covering how to create the repository for configuration management, and deploy infrastructure to support the OpenConext platform. To keep things simple, I’ll only create a single OpenConext server.

Ansible Inventory

Ansible has the ability to run dynamic inventories of servers and virtual machines in your environment. By using a plugin system, Ansible is able to inventory a wide range of different server types. The most popular are Cobbler, Amazon Web Services, and OpenStack, but Ansible supports many others as well. By using Ansible’s inventory system, we can dynamically discover servers and even place them in common groups. I’ll demonstrate how this is useful shortly.
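As a quick sketch of what this looks like in practice (assuming you have downloaded Ansible's OpenStack dynamic inventory script as openstack.py and have your OpenStack credentials loaded into the environment; the group name here is illustrative):

```shell
# Print the hosts and groups the inventory script discovers, as JSON.
./openstack.py --list
# Run an ad-hoc ping against every host in one of the discovered groups.
ansible -i openstack.py php-apps -m ping
```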


DNS Records

OpenConext requires several DNS records to be created prior to deployment. A list of these records can be found in the OpenConext-deploy wiki (they can be either A or CNAME records).



Terraform

Terraform is distributed as a single binary, so the only "installation" required is downloading it from the Terraform releases page and placing it in a directory on your PATH, such as /usr/local/bin.

Installation on Linux 64-bit

sudo mv terraform /usr/local/bin/
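The mv step above assumes you have already downloaded and unpacked a release. A sketch of those surrounding steps (the version number is only a placeholder; substitute the current release):

```shell
# Download and unpack the Terraform release zip (version is a placeholder;
# check the releases page for the current one).
curl -LO https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
unzip terraform_0.11.14_linux_amd64.zip
# After moving the binary into /usr/local/bin, confirm it is on your PATH:
terraform version
```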

Folder Structure

Note: To make things simple, I will be using a very flat folder structure. Cybera’s actual environment is more complicated, since we run several instances of OpenConext (one for development, one for demoing, and one for production), and each has its own directory structure.

You can create the terraform folder and SSH keypair using the following commands:

mkdir terraform
cd terraform
ssh-keygen -t rsa -f id_rsa -N ''

Next, create a Terraform configuration file in the "terraform" directory. This file declares the SSH key pair, security group, and virtual machine that Terraform will build. The fields under the OpenStack provider need to be populated with your actual OpenStack credentials.

In the configuration below, replace the domain portion of the hostname with the domain name you used when creating your DNS records. The image_name and flavor_name values may differ, so change them to suit your OpenStack environment. I'll be using CentOS 7 for this deployment, which is the recommended Linux distribution for OpenConext.


provider "openstack" {
  user_name   = ""
  tenant_name = ""
  password    = ""
  auth_url    = ""
}

resource "openstack_compute_keypair_v2" "openconext" {
  name       = "openconext"
  public_key = "${file("id_rsa.pub")}"
}

resource "openstack_compute_secgroup_v2" "openconext" {
  name        = "openconext"
  description = "Rules for openconext"
  rule {
    ip_protocol = "tcp"
    from_port   = 22
    to_port     = 22
    cidr        = "0.0.0.0/0"  # open to the world; restrict as needed
  }
  rule {
    ip_protocol = "tcp"
    from_port   = 80
    to_port     = 80
    cidr        = "0.0.0.0/0"
  }
  rule {
    ip_protocol = "tcp"
    from_port   = 443
    to_port     = 443
    cidr        = "0.0.0.0/0"
  }
}

resource "openstack_compute_floatingip_v2" "openconext" {
  pool = "nova"
}

resource "openstack_compute_instance_v2" "openconext" {
  name            = "openconext.example.org"  # replace example.org with your domain
  image_name      = "CentOS 7"
  flavor_name     = "m1.large"
  key_pair        = "${openstack_compute_keypair_v2.openconext.name}"
  security_groups = ["${openstack_compute_secgroup_v2.openconext.name}"]
  floating_ip     = "${openstack_compute_floatingip_v2.openconext.address}"
  user_data       = "#cloud-config\nfqdn: openconext.example.org\nhostname: openconext"
  metadata {
    groups = "loadbalancer,php-apps,java-apps,storage,java-apps-common"
  }
}

The "terraform" directory should now contain the SSH key pair (id_rsa and id_rsa.pub) alongside the Terraform configuration file.
Notice the groups = "..." line above. This line instructs Terraform to tag the instance with those names in OpenStack. The Ansible OpenStack inventory script uses these metadata tags for two things: 1) to assign group variables (configuration settings that every server in the group should share), and 2) to connect to the instance to install and configure OpenConext. This becomes especially powerful when you scale up your infrastructure and only need to keep track of groups of instances instead of individual names.
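As a rough illustration of what the inventory script does with this tag (an approximation of the idea, not the script's actual code), each comma-separated value becomes an Ansible inventory group containing the instance:

```shell
# Split the "groups" metadata tag on commas, one group name per line;
# the inventory script adds the host to each of these groups.
groups="loadbalancer,php-apps,java-apps,storage,java-apps-common"
echo "$groups" | tr ',' '\n'
```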

Once everything is filled in, run terraform apply. Terraform will then create the OpenStack SSH keypair, security group, and virtual machine.
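If you'd like to see what Terraform intends to do before it does it, terraform plan prints the pending changes without creating anything (run it from the "terraform" directory, with the provider credentials filled in):

```shell
# Preview the resources Terraform would create, without changing anything.
terraform plan
```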

Once these pieces of infrastructure have been created, you’ll use Ansible to do the actual installation and configuration of OpenConext on the virtual machine. This will be shown in Part 3.

In case you missed it: OpenConext Deployment: Part #1
