Let’s Build a Cloud — OpenStack and Networking Addendum

In the fourth part of this series, I showed how to use Puppet to install an entire OpenStack environment.

While writing and testing these posts, I have spent over half of my time describing how to get OpenStack networking to work properly.

OpenStack networking is, unfortunately, very complicated. Networking works once you have correctly figured out how to configure OpenStack for a specific environment, but when that environment changes, the networking configuration may change drastically.

I thought it would be beneficial to include my notes about OpenStack networking in this final post. Please note that Quantum is not covered.

Notes on OpenStack Networking Managers

OpenStack provides three native network managers:

FlatManager

Flat Networking Mode leaves most of the network configuration up to the OpenStack administrator. You must manually set up a bridge between all OpenStack nodes, and then configure OpenStack with a pool of available IPs. Upon booting an instance, OpenStack will inject an IP from the pool into the instance. This injection currently only works on Debian-based Linux distributions.
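
As a rough sketch, a FlatManager setup in nova.conf might look like the following. The bridge name and address range are examples only, and the exact flag syntax varies slightly between OpenStack releases:

    # nova.conf -- FlatManager (example values)
    network_manager=nova.network.manager.FlatManager
    flat_network_bridge=br100     # the bridge you created by hand on every node
    flat_injected=True            # inject the IP into the guest (Debian-based guests only)
    fixed_range=10.0.0.0/24       # pool of IPs handed out to instances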

FlatDHCPManager

Like Flat Networking, you configure OpenStack with a pool of IP addresses for Flat DHCP Networking. Flat DHCP, however, will configure a bridge between all OpenStack nodes on your behalf. Rather than injecting an IP address, Flat DHCP runs a local DHCP server (dnsmasq) so instances can request IPs by way of the DHCP protocol.

One big drawback to Flat DHCP is that you must have a dedicated interface for the DHCP server. If that interface is currently in use and has an IP address attached to it, OpenStack will remove that IP address. If that interface happens to be your only way to communicate with the server, you will then need to access the server via a console.
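
For comparison, a FlatDHCPManager configuration might look roughly like this. flat_interface here is the dedicated interface mentioned above; the names and range are examples only:

    # nova.conf -- FlatDHCPManager (example values)
    network_manager=nova.network.manager.FlatDHCPManager
    flat_interface=eth1           # dedicated interface -- any IP on it will be stripped
    flat_network_bridge=br100     # bridge OpenStack creates and manages for you
    fixed_range=10.0.0.0/24       # pool served to instances by dnsmasq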

Because of this, I usually don't consider Flat DHCP. Instead, I focus my attention on VLAN Manager.

VlanManager

VLAN Manager, I think, is the most robust network manager available. It's also the most complicated.

With VLAN Manager, OpenStack will create a new VLAN for each project. For example, if you configure OpenStack to use a starting VLAN of 100, the first project will then be on VLAN 100. The second project will be on VLAN 101, and so on.

Each VLAN has a dedicated dnsmasq process that provides DHCP and DNS resolution.
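
As a sketch, a VlanManager configuration might look like this, again with example values:

    # nova.conf -- VlanManager (example values)
    network_manager=nova.network.manager.VlanManager
    vlan_interface=eth0           # physical interface the per-project VLANs sit on
    vlan_start=100                # first project lands on VLAN 100, the next on 101, and so on
    fixed_range=10.0.0.0/16       # carved into one subnet per project
    network_size=256              # number of IPs in each project's network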

By segregating projects into VLANs, users of those projects can be assured that no other project can interfere with their network traffic.

VLAN Manager does have some drawbacks, though. First, your environment needs to support VLANs, although this is usually not a problem. Second, you need to know which VLANs are already in use before configuring OpenStack; trouble can arise if OpenStack tries to claim a VLAN that is already in use.

Additionally, you should ensure that OpenStack is configured to use a physical interface for the vlan_interface nova.conf parameter. If vlan_interface is itself a VLAN'd interface, OpenStack will create its VLANs on top of another VLAN. In short, this adds an extra 4 bytes to every packet and pushes the MTU to 1504, and very odd network problems will happen because of it.
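
Two quick checks I find useful before settling on a vlan_interface value (interface names are examples):

    # Is eth0 itself already a VLAN interface? Anything listed here is a red flag.
    # (This file is only present when the 8021q module is loaded.)
    cat /proc/net/vlan/config

    # The physical interface should report a plain 1500-byte MTU.
    ip link show eth0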

I've found two more odd drawbacks with VLAN Manager. The first is that unless you are running an all-in-one or single-compute-node OpenStack architecture, you cannot create an OpenStack environment inside OpenStack. It sounds silly, but the ability to run OpenStack inside virtual machines is great for testing.

If you are unable to run OpenStack inside OpenStack, the next best solution is to virtualize it with VirtualBox. However, you must use the PCnet family of adapters for the virtual NICs; the Intel family does not support VLANs inside the virtual machines.

Additionally, with VirtualBox, make sure you create a new host-only network with DHCP disabled.
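
If you go the VirtualBox route, the equivalent VBoxManage commands look roughly like this; the VM name, NIC number, and host-only interface name are examples:

    # Use a PCnet adapter (such as Am79C973) so VLAN tagging works inside the guest
    VBoxManage modifyvm "openstack-node1" --nictype2 Am79C973

    # Create a host-only network and remove its built-in DHCP server
    VBoxManage hostonlyif create
    VBoxManage dhcpserver remove --ifname vboxnet0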

Notes on Bridging

All forms of OpenStack networking work with bridges in one way or another.

Bridging enables the instances to be seen on the same networks that are hosted by the OpenStack nodes. Without bridging, all instances would need to be NAT'd.

One area to be aware of with bridging: if you are running more than one OpenStack environment on the same switch, make sure the bridges in each environment have different IDs. If they have the same IDs, network traffic between the two environments will be bridged together.

For example, suppose two OpenStack environments are connected to the same switch: sandbox1.example.com and sandbox2.example.com. Both environments bring up br100 as a bridge. Since bridges operate at Layer 2, and a switch is a Layer 2 device, the two environments are now connected.
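
One way to avoid this, assuming VLAN Manager and using example values, is to give each environment its own VLAN range so the bridge names (and VLANs) never overlap:

    # sandbox1.example.com -- nova.conf
    vlan_start=100                # projects get br100/VLAN 100, br101/VLAN 101, ...

    # sandbox2.example.com -- nova.conf
    vlan_start=200                # projects get br200/VLAN 200, br201/VLAN 201, ...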

Notes on NICs

When working with OpenStack, it's best to have a minimum of two NICs, even if you're creating an all-in-one environment.

NIC Consistency

For OpenStack environments with more than one server, I've found it helps to keep the NIC roles consistent across all servers.

For example, an OpenStack environment of 4 servers will consist of one Cloud Controller and three Compute Nodes. If you are not using a multi_host configuration, the Cloud Controller will be the only server with public access to the Internet. The Compute Nodes will each get their Internet access by way of the Cloud Controller.

To keep things consistent, pick one NIC, either eth0 or eth1, to be designated for public internet access. Use the other for internal access: 

OpenStack Role      NIC     NIC Role
Cloud Controller    eth0    internal access
Cloud Controller    eth1    public access
Compute Node        eth0    internal access
Compute Node        eth1    not in use

This way, you know that eth0 means "internal access" and eth1 means "public access" across your entire OpenStack environment.
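
With that convention in place, the interface-related nova.conf settings can be the same on every server. A sketch, with the manager-specific flags left out (public_interface only comes into play on the node running nova-network, the Cloud Controller here):

    # nova.conf -- same on the Cloud Controller and every Compute Node
    public_interface=eth1         # public access
    flat_interface=eth0           # internal access (or vlan_interface=eth0 with VLAN Manager)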

Bonding NICs

In critical production environments, you will most likely want to bond two or more NICs together to provide network redundancy.

Going back to the NIC Consistency example, if eth0 and eth1 were bonded into a single interface, bond0, another solution would be needed to provide both internal and public access.

One solution is to use bond0 just for either public or internal access and use another set of bonded NICs, bond1, for the other type of access. This requires a minimum of four NICs, though.

Another solution is to use VLANs inside the bonded interface. For example, VLAN 10 would be used for public access and VLAN 20 would be used for internal access. Both VLANs would be attached to the bond0 interface. This solution requires your environment to support VLANs, though.
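
On a Debian or Ubuntu node, that layout might look roughly like the following /etc/network/interfaces fragment. The VLAN IDs, addresses, and bonding mode are examples, and the ifenslave and vlan packages are assumed to be installed:

    # /etc/network/interfaces -- one bond carrying two VLANs (example values)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode active-backup
        bond-miimon 100

    # VLAN 10: public access
    auto bond0.10
    iface bond0.10 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        vlan-raw-device bond0

    # VLAN 20: internal access
    auto bond0.20
    iface bond0.20 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        vlan-raw-device bond0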

This is the final post in the Let's Build a Cloud series. To start from the beginning, please click here.