Lately, we’ve attempted to deploy OpenStack (Cactus) in IPv6 dual-stack mode using the instructions from Configuring Compute to use IPv6 Addresses. Those instructions cover how to get virtual machines (VMs) with IPv6 addresses, but not what to do once you have them! There isn’t much material on the Internet about using OpenStack with IPv6, so, with the help of our excellent Network Operations team, we dug in and went to work.
In the end we had some success getting OpenStack to work with IPv6 in our sandbox environment. We were able to ping6 and ssh -6 to a VM across the Calgary Cybera network, meaning we could reach VMs over IPv6 from nodes outside our immediate OpenStack deployment. Unfortunately, we weren’t able to test IPv6 access across the Internet.
On the network operations side of things, here’s what we had to do. Below is a simple diagram showing how we set up OpenStack to use IPv6.
The FD00::/8 range is private and not routable on the Internet (like 10.0.0.0/8 or 192.168.0.0/16 in IPv4); we are using it only as an example in this diagram. Ideally, we would use a public IPv6 range when setting up a real OpenStack environment.
For example, Cybera’s range could be 2001:410:6080:80::/64 for the “outside” OpenStack network and 2001:410:6080:90::/64 for the “inside” OpenStack network.
On the OpenStack side of things, here’s what we had to do:
In the Nova DB, I had to change some of the IPv6 fields in the first project’s row in the networks table.
However, OpenStack changed the gateway_v6 to fe80::f0c6:d8ff:fe2f:db49, which is the link-local (Scope:Link) address of br0 on openstack1 (our management node). Of course, after changing these settings, I deleted the VLANs and bridges, killed all dnsmasq processes, restarted radvd, and restarted all nova services.
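The database change and the reset that follows it looked roughly like the sketch below. Note that the prefix, the row id, and the VLAN/bridge names here are hypothetical placeholders, not the values from our deployment; the column names assume the Cactus networks schema.

```shell
# Hypothetical example -- substitute your own prefix and network row.
mysql nova <<'SQL'
UPDATE networks
   SET cidr_v6    = 'fd00:1::/64',
       gateway_v6 = 'fd00:1::1'
 WHERE id = 1;
SQL

# Then tear down the old network state so nova rebuilds it:
#   delete the project VLAN and bridge (e.g. vlan100 / br100),
#   kill all dnsmasq processes,
#   restart radvd,
#   restart all nova services.
```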
On both the mgmt and compute nodes, enable IPv6 forwarding.
echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
sysctl -w net.ipv6.conf.all.forwarding=1
Now the nasty part. When I first tried to ping6 and ssh -6 to a VM, it didn’t work. I checked my security groups, and there was nothing there, so I tried the usual euca-authorize commands:
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
This had mixed results. It created a (bad) ip6tables rule for ping and no rule at all for ssh.
root@openstack2:~# ip6tables --line-numbers -n -L
Chain nova-compute-inst-8 (1 references)
num target prot opt source destination
1 DROP all ::/0 ::/0 state INVALID
2 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
3 ACCEPT icmpv6 fe80::f0c6:d8ff:fe2f:db49/128 ::/0
4 ACCEPT all 2001:410:6080:10::/64 ::/0
5 nova-compute-sg-fallback all ::/0 ::/0
The rule at line three means you could only ping the VM from openstack1! So I deleted this rule and replaced it with one that works. I also created a rule for ssh. (Note that I’m inserting these rules before the nova-compute-sg-fallback rule.)
ip6tables -D nova-compute-inst-8 3
ip6tables -I nova-compute-inst-8 3 -p ipv6-icmp -j ACCEPT
ip6tables -I nova-compute-inst-8 4 -p tcp --dport 22 -j ACCEPT
So it seems that the IPv6 implementation of security groups is quite buggy and not to be trusted. You may have to insert rules manually on the compute node (or automate it somehow) to allow access to the VMs.
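One crude way to automate the workaround is a small script that emits the fix-up commands for each instance chain, which you can review and then pipe through a shell. This is only a sketch: nothing here queries nova, so you would have to supply the chain names yourself, and the insert positions mirror the example chain above.

```shell
#!/bin/sh
# Emit working ip6tables rules for each nova-compute instance chain
# passed as an argument. Positions 3 and 4 place the rules after the
# INVALID/ESTABLISHED rules and before nova-compute-sg-fallback,
# matching the chain layout shown above.
emit_rules() {
    for chain in "$@"; do
        echo "ip6tables -I $chain 3 -p ipv6-icmp -j ACCEPT"
        echo "ip6tables -I $chain 4 -p tcp --dport 22 -j ACCEPT"
    done
}

# Print the commands for one (example) instance chain:
emit_rules nova-compute-inst-8
```

Review the output first, then apply it with something like `emit_rules nova-compute-inst-8 | sh` as root on the compute node.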
This was all done in VlanManager mode. We also tried FlatDHCPManager mode at Peking University in Beijing (a topic for another Tech Radar post). My understanding is that FlatDHCPManager is conceptually quite similar to VlanManager, except that all VMs share a single network, so you would only ever have one row to change in the networks table.
I couldn’t tell you if this is an optimal IPv6 setup for OpenStack, but it’s a working start. The real goal of this post is to hopefully stimulate some conversation about OpenStack and IPv6. If you’re deploying OpenStack on IPv6, we’d like to hear from you!
This article was co-written by Alvaro Pereira.