OpenConext Deployment: Part #3

In part two, I explained how to create the repository for configuration management, and deploy infrastructure to support the OpenConext platform.

In this part, I’ll be covering how to prepare the Ansible environment, install OpenConext, and test if the installation was successful.

Prerequisites

Ansible and other dependencies need to be installed on the machine deploying OpenConext:

  • git is used to clone the OpenConext source repository

  • curl is used to retrieve the dynamic inventory plugin

  • python and python-pip are prerequisites of Ansible

  • shade is a python library to interface with OpenStack

I use Ubuntu 16.04 for the deployment machine, so these commands are only confirmed to work on that distribution. However, the same packages are available in most other distributions’ repositories, so you can easily install them elsewhere if you so choose.

Install the required packages with these commands:

sudo apt-get install git curl python python-pip
sudo pip install shade ansible
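To confirm everything landed on your PATH before moving on, a quick check like this helps (it prints the location of each tool, or a warning if one is missing):

```shell
# Verify each deployment tool is available on the PATH
for tool in git curl python pip ansible; do
  command -v "$tool" || echo "$tool not found"
done
```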

Next, you need to clone the OpenConext-deploy repository:

git clone https://github.com/OpenConext/OpenConext-deploy.git
cd OpenConext-deploy

Prepare the Environment

OpenConext needs a number of different passwords, certificates, and keys to operate. To create these all by hand would take a lot of effort, so a script is included to automate this. Replace <domain> with your domain name (which you created in part two) and run this command to create your environment:

	./prep-env dev <domain>

Next, update the environments/dev/group_vars/dev.yml file by replacing line:

	haproxy_sni_ip: 192.168.66.100

With:

	haproxy_sni_ip: 0.0.0.0

This tells the HAProxy front-end to listen on all available interfaces.
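If you prefer making the change from the command line, a sed one-liner does the same edit. The sketch below (assuming GNU sed) demonstrates the substitution against a throwaway copy of the file; in the real repository you would point it at environments/dev/group_vars/dev.yml instead:

```shell
# Demonstration on a temporary copy; the real target is
# environments/dev/group_vars/dev.yml in the OpenConext-deploy checkout
printf 'haproxy_sni_ip: 192.168.66.100\n' > /tmp/dev.yml
sed -i 's/^haproxy_sni_ip: .*/haproxy_sni_ip: 0.0.0.0/' /tmp/dev.yml
cat /tmp/dev.yml
```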

The OpenConext-deploy wiki says to use a "host_vars" file for a few variables. This means hard-coding the server hostname into a filename (like host_vars/openconext.mydomain.com.yml). For the demo, this file can be moved to a "group_vars" file for the same result:

	mv environments/dev/host_vars/template.yml environments/dev/group_vars/php-apps.yml

In Ansible, host variable files apply settings to a single named host, while group variable files apply settings to every host in a group. Remember that, in part two, I tagged the OpenStack virtual machines with group names (such as php-apps). The OpenStack inventory plugin turns those tags into groups, so Ansible will apply the settings in the group variable files to the virtual machines with the corresponding tag.

Setup Ansible Inventory

OpenConext-deploy uses a static inventory file by default. The OpenStack inventory plugin should be used instead, so replace the static inventory file with the OpenStack script and mark it executable:

	curl https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py > environments/dev/inventory
	chmod +x environments/dev/inventory

Next, you need to create a clouds.yaml file to tell Ansible’s OpenStack inventory plugin how to connect to your OpenStack infrastructure. Create the file and fill in the values under the auth section with your actual OpenStack credentials.

OpenConext-deploy/clouds.yaml

	clouds:
	  dev:
	    auth:
	      auth_url: ''
	      username: ''
	      password: ''
	      project_name: ''
	ansible:
	  use_hostnames: True
	  fail_on_errors: True

Now, test that the inventory is set up correctly by running:

	environments/dev/inventory --list

The new OpenStack server you created in part two should be listed in the output of this command. If you receive an error, verify that the OpenStack credentials entered are correct.

Using SSH keys in Ansible

Servers created in OpenStack generally only allow SSH key authentication by default. This ensures passwords aren’t hard-coded and susceptible to guessing by attackers.

Ansible supports an ansible.cfg setting called private_key_file that instructs Ansible to use a specific SSH key for the connection. This removes the need to load the key into ssh-agent every time you start a new terminal. Create a file called ansible.cfg with the following lines, replacing <path_to_terraform_conf> to reflect the full pathname of the id_rsa file created in part two:

ansible.cfg

	[defaults]
	private_key_file=/<path_to_terraform_conf>/dev/key/id_rsa
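Before running the full playbook, it’s worth confirming that Ansible can actually reach the servers with this key. The ping module (a connectivity and Python check over SSH, not ICMP) works well for this; substitute your SSH username for the <username> placeholder as in the commands below:

```shell
# Connectivity test: contacts every host in the dynamic inventory over SSH
# and reports "pong" for each one that is reachable
ansible all -m ping -i environments/dev/inventory -u '<username>'
```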

Deploying OpenConext

OpenConext is now ready to be installed. Replace <username> with the SSH username Ansible should connect with, and run the following command:

	ansible-playbook -v -i "environments/dev/inventory" -u "<username>" -K provision-template.yml  --extra-vars="secrets_file=environments/dev/secrets/dev.yml"

Verifying the OpenConext Installation

After the ansible-playbook command completes for the first time, I generally recommend a reboot to make sure all services start properly.

Ansible allows running of ad-hoc commands in cases where you don’t need (or want) to save a full-fledged playbook. We can use this to reboot the server after the installation by running:

	ansible loadbalancer -m shell -a 'shutdown -r now' -i "environments/dev/inventory" -u '<username>' -b

This is the breakdown of each argument:

  • “loadbalancer” is the name of the server group to execute this command on. We can also use any of the other names in the metadata group tag from part two (i.e. storage, php-apps, java-apps).

  • “-m shell” tells Ansible to use the shell module to run a command.

  • “-a” allows you to pass arguments to the module being used (in this case, the shell module).

  • “-i” specifies which inventory file to use. This can also be set in ansible.cfg instead of passing it on the command line for every execution.

  • “-u” specifies the username to connect with. This can also be set in ansible.cfg.

  • “-b” tells Ansible to “become” an elevated user to run the command (i.e. with sudo).
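The same ad-hoc mechanism can confirm when the server has come back from the reboot; for example (again substituting your SSH username for the <username> placeholder):

```shell
# Confirm the load balancer is reachable again and report how long it has been up
ansible loadbalancer -m shell -a 'uptime' -i "environments/dev/inventory" -u '<username>' -b
```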

Once your server comes back up, it may take a few minutes for the different Java services to fully load. Accessing any of the components before they have fully started will just return 5XX HTTP error codes.
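Rather than refreshing a browser by hand, you can poll until the welcome page starts answering. A sketch, assuming your domain is in a DOMAIN shell variable (the -k flag skips certificate verification, which you may need if the dev environment uses a self-signed certificate):

```shell
# Poll the welcome page every 10 seconds until it returns a success code
until curl -skf "https://welcome.${DOMAIN}/" -o /dev/null; do
  echo "still waiting for services to start..."
  sleep 10
done
echo "welcome page is up"
```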

Opening up https://welcome.<domain> in a browser will provide a page with links to different services the federation provides. As I mentioned in part one, only EngineBlock and the Service Registry are covered in this tutorial:

  • https://engine.<domain> shows certificate data and metadata endpoints to provide to IdPs and SPs joining the federation, and other links.

  • https://serviceregistry.<domain> is the federation service that manages metadata for all IdPs and SPs that are part of the federation.

When you load the Service Registry for the first time, you will be prompted for a username and password by a “Mock IdP”. This Mock IdP provides no real authentication and is simply used to “bootstrap” your federation. Bootstrapping is necessary because the Service Registry requires an IdP to log in and to add new IdPs, but a brand-new federation doesn’t contain any valid IdPs yet.

Entering username “admin” without a password will bring you to the main Service Registry page.

Conclusion

At this point, OpenConext is installed and running! You now have a functional "hub" in a Hub and Spoke Identity Federation. Of course, there aren't any Identity Providers or Services in the federation yet. We'll cover that in a future blog post.

 

In case you missed it: OpenConext Deployment: Part #1