Using Terraform: Part 4

Introduction

Part 3 of this series showed how to create multiple virtual servers in Terraform. We also used Terraform to configure the servers so that they were aware of each other, allowing the two servers to act as a highly available failover pair.

In this blog post, we’ll look at how to leverage multiple providers in Terraform in order to build a Consul cluster. Even better, we’ll be able to make the cluster any size we like: whether 3 nodes or 300 nodes, all we’ll have to do is tell Terraform the number.

As mentioned in Part 1 of this series, one of the many great features of Terraform is its ability to support multiple cloud service providers (such as Amazon AWS, Microsoft Azure, and OpenStack). Up until now, we’ve been leveraging a single OpenStack provider. Now we’ll add a second provider, Amazon AWS, to supply a cloud-based DNS service.
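
Using two providers is simply a matter of declaring both of them in the configuration. Here is a minimal sketch (the argument values are placeholders, and both providers can also pick up credentials from their usual environment variables, so the version in the repo may differ slightly):

provider "openstack" {
  auth_url = "https://YOUR-OPENSTACK-AUTH-URL:5000/v2.0"
  tenant_name = "YOUR-TENANT"
  user_name = "YOUR-USERNAME"
  password = "YOUR-PASSWORD"
}

provider "aws" {
  access_key = "YOUR-AWS-ACCESS-KEY"
  secret_key = "YOUR-AWS-SECRET-KEY"
  region = "us-east-1"
}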

We’ll also start using Consul, which is another tool from HashiCorp (the same group that makes Terraform). Consul provides service discovery, as well as key/value storage and monitoring services.

Service discovery can be thought of as a catalogue of services. When an application wants to know if the environment has a MySQL service available, it can query the catalogue to get more details. If Consul already existed in our Terraform configuration, we could use it to help us build the Consul cluster. However, since Consul is the one providing the service catalogue, it can’t exactly query itself while it’s still being built. It’s a bit of a chicken-and-egg scenario. To get around this, we’ll use Amazon’s Route 53 DNS service.

Required Materials

Just like Part 3, this is a complex demo with several different pieces. I have bundled everything together here.

This demo will be run on Cybera’s Rapid Access Cloud, which is an IPv6-enabled cloud. This is important to note because the configuration relies on IPv6 addresses: to follow along on a different IaaS cloud without modifying anything, that cloud will also need to support IPv6.

In addition, you’ll also need an Amazon AWS account. If you’ve never used Amazon AWS before, you can sign up for their free tier. If you’re not eligible for the free tier, the Route 53 service should only cost $1-2 per month.

To follow along, make sure you have a DNS zone hosted with Route 53. Instructions on how to do that can be found here.
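
If you prefer to keep everything in Terraform, the zone itself can also be created as a resource rather than through the AWS console. A quick sketch (the resource name “main” here is arbitrary, and the zone_id arguments used later could then reference "${aws_route53_zone.main.zone_id}"):

resource "aws_route53_zone" "main" {
  name = "YOUR-DOMAIN-NAME"
}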

The Terraform Configuration

The GitHub repo that I linked to earlier contains all the instructions in full. This section will highlight and describe the most important parts.

count

variable "count" {
  default = 3
}

At the top of the Terraform configuration, we’ll specify a “count”. This determines how large the Consul cluster will be. By default, the size of the cluster will be 3, which is the recommended minimum size for a Consul cluster.

When you run “terraform apply”, Terraform will prompt you to enter a number. If you choose to just press “enter”, the default number will be used.
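
You can also supply the value on the command line (or in a terraform.tfvars file) instead of relying on the prompt or the default. For example, to build a five-node cluster:

$ terraform apply -var 'count=5'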

Key Pair

resource "openstack_compute_keypair_v2" "consul" {
  name = "consul"
  public_key = "${file("keys/consul.pub")}"
}

Just like in Part 3, you’ll need a key pair. You can either re-use the same key pair from Part 3 or generate a new one:

$ ssh-keygen -f keys/consul

Security Group

resource "openstack_compute_secgroup_v2" "consul" {
  name = "consul"
  description = "Rules for consul tests"
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "tcp"
    cidr = "0.0.0.0/0"
  }
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "tcp"
    cidr = "::/0"
  }
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "udp"
    cidr = "0.0.0.0/0"
  }
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "udp"
    cidr = "::/0"
  }
  rule {
    from_port = -1
    to_port = -1
    ip_protocol = "icmp"
    cidr = "0.0.0.0/0"
  }
  rule {
    from_port = -1
    to_port = -1
    ip_protocol = "icmp"
    cidr = "::/0"
  }
}
 

Again, like in Part 3, a security group will be needed. This security group simply allows all traffic. It’s good for testing, but not for production use.
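
For something closer to production, you would limit the rules to the ports Consul actually uses (8300-8302 for server RPC and gossip, 8500 for the HTTP API and UI, and 8600 for DNS) and narrow the cidr to your own networks. As a rough sketch (not what this demo uses), a single tightened rule inside the security group might look like:

rule {
  from_port = 8300
  to_port = 8302
  ip_protocol = "tcp"
  cidr = "YOUR-NETWORK-CIDR"
}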

Server Group

resource "openstack_compute_servergroup_v2" "consul" {
  name = "consul"
  policies = ["anti-affinity"]
}

And again, like in Part 3, a server group will be used. This ensures that each Consul server is hosted on a different compute node for high availability support.

Instances

resource "openstack_compute_instance_v2" "consul" {
  count = "${var.count}"
  name = "${format("consul-%02d", count.index+1)}"
  image_name = "Ubuntu 14.04"
  flavor_name = "m1.tiny"
  key_pair = "consul"
  security_groups = ["${openstack_compute_secgroup_v2.consul.name}"]
  scheduler_hints {
    group = "${openstack_compute_servergroup_v2.consul.id}"
  }
}

In Part 3, we specified two explicit instance resources. Here we’re only specifying one, but notice the “count” parameter. This tells Terraform to create n instances, where n is the value of the “count” variable we declared at the beginning.

The name of each instance is built from the prefix “consul-” and its index in the count, zero-padded to two digits. So using the default value of 3 will create three virtual machines with the names:

  1. consul-01
  2. consul-02
  3. consul-03

Null Resource

resource "null_resource" "consul" {
  count = "${var.count}"
 
  connection {
    user = "ubuntu"
    key_file = "keys/consul"
    host = "${element(openstack_compute_instance_v2.consul.*.access_ip_v6, count.index)}"
  }
 
  provisioner "file" {
    source = "files"
    destination = "files"
  }
 
  provisioner "remote-exec" {
    inline = [
      "sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/",
      "sudo bash /home/ubuntu/files/install.sh"
    ]
  }
}

In Part 3, we used a Template Resource to configure and provision the virtual machines. We could still use the Template Resource, but since we’re not doing any templating, it would look a little funny. Instead, we will use the “Null” resource to do the provisioning.

The Null Resource doesn’t create anything itself; it simply gives us a place to attach provisioners. It’s currently an undocumented feature in Terraform, and a better way of doing this same piece may become available in the future.

As you can see above, there are two kinds of blocks in the Null Resource: a connection block and two provisioner blocks.

The connection specifies how to connect to the virtual machine. You can see that the “element” function is being used to look up the IPv6 address of the virtual machine in the array of instance resources, based on the current index of the “count” variable.
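
As a standalone illustration (not a line from the repo), the following interpolation would return the IPv6 address of the second instance:

${element(openstack_compute_instance_v2.consul.*.access_ip_v6, 1)}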

The “file” provisioner copies the local “files” directory over to the virtual machine.

The “remote-exec” provisioner performs two steps:

  1. Copies the “ubuntu” user’s authorized SSH keys to the “root” user (this allows us to log in to the servers as “root”).
  2. Runs the “install.sh” script, which will be detailed later.

Amazon Route 53 Resources

resource "aws_route53_record" "consul-aaaa" {
  zone_id = "YOUR-ZONE-ID"
  name = "consul.YOUR-DOMAIN-NAME"
  type = "AAAA"
  ttl = "60"
  records = ["${replace(openstack_compute_instance_v2.consul.*.access_ip_v6, "/[\\[\\]]/", "")}"]
}
 
resource "aws_route53_record" "consul-txt" {
  zone_id = "YOUR-ZONE-ID"
  name = "consul.YOUR-DOMAIN-NAME"
  type = "TXT"
  ttl = "60"
  records = ["${formatlist("%s.YOUR-DOMAIN-NAME", openstack_compute_instance_v2.consul.*.name)}"]
}
 
resource "aws_route53_record" "consul-individual" {
  count = 3
  zone_id = "YOUR-ZONE-ID"
  name = "${format("consul-%02d.z.terrarum.net", count.index+1)}"
  type = "AAAA"
  ttl = "60"
  records = ["${replace(element(openstack_compute_instance_v2.consul.*.access_ip_v6, count.index), "/[[]]/", "")}"]

The above three resources create three types of DNS records:

  1. An “AAAA” record that will hold the IPv6 addresses of all Consul virtual machines.
  2. A “TXT” record that holds the DNS name of each individual Consul virtual machine. (Think of this as storing a small plain text list in DNS.)
  3. Individual “AAAA” records for each Consul virtual machine.

Consul Installation Script

The Consul installation script can be found here. It does the following:

  1. Installs the “unzip” package (since Consul is distributed as a .zip file).
  2. Creates a user called “consul”, as well as the files and directories required by the Consul service.
  3. Downloads and installs Consul and the Consul web application.
  4. Makes a DNS query for the TXT record of “consul.YOUR-DOMAIN-NAME”.
  5. Loops through the results of the TXT record and adds them to the Consul configuration file.
  6. Starts the Consul service.

Step 4 is the most important part here, as it allows us to break out of the chicken-and-egg scenario of bootstrapping our Consul cluster. Terraform created the virtual machines for us and then created DNS records for those virtual machines. When the install script runs, it queries those DNS records, which allows Consul to configure itself. This is a very powerful feature to have in a tool.
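
As a rough illustration of steps 4 and 5 (the real install.sh may differ in the details), the lookup boils down to something like this:

# query the TXT record and strip the surrounding quotes
servers=$(dig +short -t TXT consul.YOUR-DOMAIN-NAME | tr -d '"')

# loop over each name returned; the real script adds these to Consul's
# configuration as addresses to join
for server in $servers; do
  echo "will join $server"
done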

Action

With all of the resources in place, running “terraform apply” will do the following:

  1. Declare a variable called “count” with a default value of 3.
  2. Create a Key Pair called “consul”.
  3. Create a Security Group called “consul”.
  4. Create a Server Group called “consul”.
  5. Create three virtual machines that use the “consul” key pair, “consul” security group, and “consul” server group.
  6. Create three null resources that provision the three virtual machines to:
    1. Copy the “files” directory to the virtual machine
    2. Run the “install.sh” script
  7. Create an AAAA DNS record with the IPv6 addresses of all three virtual machines.
  8. Create a TXT DNS record listing the DNS names of all three virtual machines.
  9. Create an individual AAAA DNS record with the IPv6 address of each virtual machine.

Results

Once Terraform has finished, you can now log in to your Consul cluster:

$ ssh consul.YOUR-DOMAIN-NAME
root@consul-02:~# consul members
Node       Address                      Status  Type    Build  Protocol  DC
consul-03  [2605::3eff:fe07:1875]:8301  alive   server  0.5.2  2         honolulu
consul-01  [2605::3eff:feea:e84]:8301   alive   server  0.5.2  2         honolulu
consul-02  [2605::3eff:fe79:1586]:8301  alive   server  0.5.2  2         honolulu

All three have discovered each other with no help from us!

Also note that the SSH command was for consul.YOUR-DOMAIN-NAME, yet we were taken to “consul-02”. This is DNS round-robin at work: when a DNS record contains multiple addresses, different lookups can return them in a different order. It’s an easy way to provide a single entry point to a group of identical servers.
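
You can see this for yourself by querying the record (the answer will contain all of the addresses that Terraform registered):

$ dig +short -t AAAA consul.YOUR-DOMAIN-NAME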

With the Consul web application installed, you can also visit http://consul.YOUR-DOMAIN-NAME:8500/ui and interact with Consul through a browser.

Conclusion

In this part of the series, we built a self-configuring cluster (that can be scaled up to any size) by utilizing multiple providers in Terraform. The cluster we built uses Consul which provides service discovery, a key/value database, and service monitoring. We’ll leverage this cluster in future parts. Stay tuned!
