Using Terraform Part 5: Virtual Networks

Introduction

The previous blog posts in this series focused on creating “compute”-based resources in Cybera’s Rapid Access Cloud — that is, services built around virtual machines. The Rapid Access Cloud recently implemented OpenStack Neutron, which provides virtual networking services. Let’s see how Terraform can take advantage of these new services.

Virtual Networks and Subnets

Just as a virtual machine is a software representation of a physical computer, a virtual network is a software representation of a physical network. Instead of a hardware switch with cables interconnecting devices, an operating system can partition a software network stack into different segments.

This is a very powerful technology. Its most basic application is connecting a virtual machine to one or more virtual networks. One reason for doing this is to dedicate a specific network to a specific type of traffic: for example, keeping database traffic on a separate network from public web traffic.

Anyone who has managed multiple networks knows that things can quickly get messy. Going back to the physical/software analogy, a physical rack of servers and network equipment can look like this:

[Figure: a rack of physical servers and their network cabling]

And a virtual software set of servers and networks can look like this:

[Figure: a topology diagram of the equivalent virtual servers and networks]

Some complexity is inevitable, but a good tool can help manage and organize it. This is where Terraform comes in.

I’m going to assume you’ve read the previous posts in this series or are already familiar with Terraform.

(Please note that since this series has been running for over two years, syntax and commands may have changed. If you spot an error, please let us know. If you’re an eligible Rapid Access Cloud user, we’ll give you a quota increase as a way of saying “thanks”.)

To begin, create a directory on your workstation called “terraform-vnetwork”. Inside that directory, create a file called “main.tf”. The code laid out below goes into this file.

First, create a security group with some rules. These rules will allow all TCP and ICMP from both IPv4 and IPv6 sources. Normally, you want to be more strict with security groups, but this will work for demonstration purposes:

resource "openstack_networking_secgroup_v2" "sg_1" {
name = "sg_1"
description = "sg_1"
}

resource "openstack_networking_secgroup_rule_v2" "allow-all-tcp-ipv4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.sg_1.id}"
}

resource "openstack_networking_secgroup_rule_v2" "allow-all-tcp-ipv6" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "::/0"
security_group_id = "${openstack_networking_secgroup_v2.sg_1.id}"
}

resource "openstack_networking_secgroup_rule_v2" "allow-all-icmp-ipv4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "icmp"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.sg_1.id}"
}

resource "openstack_networking_secgroup_rule_v2" "allow-all-icmp-ipv6" {
direction = "ingress"
ethertype = "IPv6"
protocol = "icmp"
remote_ip_prefix = "::/0"
security_group_id = "${openstack_networking_secgroup_v2.sg_1.id}"
}
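
For comparison, a stricter rule that only allows inbound SSH would look like the following sketch (it is not used in the rest of this walkthrough). Also note that Neutron security groups allow all egress traffic by default, which is why no egress rules are defined here:

resource "openstack_networking_secgroup_rule_v2" "allow-ssh-ipv4" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = "${openstack_networking_secgroup_v2.sg_1.id}"
}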

Next, create a network and subnet:

resource "openstack_networking_network_v2" "internal" {
name = "internal"
admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "internal" {
name = "internal"
network_id = "${openstack_networking_network_v2.internal.id}"
cidr = "192.168.199.0/24"
ip_version = 4
enable_dhcp = true
}

Both the network and subnet are called “internal”. The subnet will support IPv4 addresses in the 192.168.199.0/24 range. A DHCP service will be created to provide addresses to the devices on the network.
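
The subnet resource also accepts optional arguments if you need more control. For example, here is a sketch with placeholder values that hands out custom DNS servers and restricts the DHCP range (the plain version above is all this walkthrough needs):

resource "openstack_networking_subnet_v2" "internal" {
  name        = "internal"
  network_id  = "${openstack_networking_network_v2.internal.id}"
  cidr        = "192.168.199.0/24"
  ip_version  = 4
  enable_dhcp = true

  # Placeholder resolvers: replace with DNS servers of your choice.
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]

  # Only hand out DHCP leases from this range.
  allocation_pools {
    start = "192.168.199.10"
    end   = "192.168.199.100"
  }
}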

Next, create two virtual machines:

resource "openstack_compute_instance_v2" "instance_1" {
depends_on = ["openstack_networking_subnet_v2.internal.id"]

name = "instance_1"
image_name = "Ubuntu 14.04"
flavor_name = "m1.small"
key_pair = "CHANGE"
security_groups = ["${openstack_networking_secgroup_v2.sg_1.id}"]
user_data = "#!/bin/bashnifup eth1"

network {
name = "default"
}

network {
name = "${openstack_networking_network_v2.internal.name}"
}
}

resource "openstack_compute_instance_v2" "instance_2" {
depends_on = ["openstack_networking_subnet_v2.internal"]

name = "internal_"
image_name = "Ubuntu 14.04"
flavor_name = "m1.small"
key_pair = "CHANGE"
security_groups = ["${openstack_networking_secgroup_v2.sg_1.id}"]

network {
name = "${openstack_networking_network_v2.internal.name}"
}
}

A few things to note about these two machines:

  • The first virtual machine has two “network” blocks. This will configure the virtual machine with two virtual network interface controllers (NICs): one on the “default” network (which everyone in the Rapid Access Cloud has access to) and one on the internal virtual network you’re creating.

  • The first virtual machine has an embedded shell script in the “user_data” attribute to bring up the second virtual NIC, since Ubuntu will not do this by default. (A more readable way to write this script is shown after this list.)

  • The second virtual machine is only configured with one virtual NIC, which will be connected to the virtual network you’re creating.

  • You should replace the “CHANGE” value of “key_pair” with the name of an SSH key pair registered in your Rapid Access Cloud account.
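
As mentioned in the list above, instance_1 brings up its second NIC with a one-line “user_data” string that embeds a “\n” escape. An equivalent, more readable way to write that single attribute is Terraform’s heredoc syntax:

user_data = <<EOF
#!/bin/bash
ifup eth1
EOF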

And finally, some “outputs” to display resource information. The first reports instance_1’s IPv6 access address, and the second uses “network.0” to reference the first (and only) “network” block of instance_2:

output "instance_1" {
value = "${openstack_compute_instance_v2.instance_1.access_ip_v6}"
}

output "instance_2" {
value = "${openstack_compute_instance_v2.instance_1.network.0.fixed_ip_v4}"
}

With all of the above in place, run the following commands in a terminal:

$ cd terraform-vnetwork
$ terraform apply
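
If you are running Terraform 0.10 or later, you will first need to initialize the directory so that the OpenStack provider plugin is downloaded:

$ terraform init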

When Terraform has finished, you should see something similar to:

Outputs:

instance_1 = [2605:fd00:4:1000:f816:3eff:fe1e:d29c]
instance_2 = 192.168.199.4
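
You can also print these values again at any time with the “terraform output” command:

$ terraform output instance_1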


Using the IPv6 address of instance_1, remotely connect to it via SSH. Note that you will need IPv6 connectivity to do this step, but you do have IPv6 connectivity, right?

$ ssh ubuntu@2605:fd00:4:1000:f816:3eff:fe1e:d29c

Once you’ve logged in to instance_1, try to ping instance_2 using the IP address noted in the Terraform output:

ubuntu@instance-1:~$ ping -c 5 192.168.199.4
PING 192.168.199.4 (192.168.199.4) 56(84) bytes of data.
64 bytes from 192.168.199.4: icmp_seq=1 ttl=64 time=0.824 ms
64 bytes from 192.168.199.4: icmp_seq=2 ttl=64 time=0.775 ms
64 bytes from 192.168.199.4: icmp_seq=3 ttl=64 time=0.714 ms
64 bytes from 192.168.199.4: icmp_seq=4 ttl=64 time=0.779 ms
64 bytes from 192.168.199.4: icmp_seq=5 ttl=64 time=0.787 ms

Congratulations! You’ve successfully created a multi-tier virtual environment. Unfortunately, you won’t be able to log in to instance_2 unless you copy your private SSH key to instance_1. However, for demonstration purposes, the ping example serves as proof of connectivity.
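
Alternatively, if your key is loaded into a local SSH agent, agent forwarding (“ssh -A”) lets you hop from instance_1 to instance_2 without copying the private key anywhere:

$ ssh -A ubuntu@2605:fd00:4:1000:f816:3eff:fe1e:d29c
ubuntu@instance-1:~$ ssh ubuntu@192.168.199.4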

For a more visual summary, log in to the Rapid Access Cloud dashboard and view the Network Topology page under the Network panel, where you’ll see the new network with both instances attached.

Conclusion

This installment of the Using Terraform series explained how to create a multi-tier virtual environment using the Rapid Access Cloud’s new virtual networking technology. In the next part, we’ll show how you can use Virtual Routers to create other virtual network architectures.
