Hello! Welcome to my next blog. This time, I'm going to cover an all-in-one implementation of OpenStack with RDO. By the time this blog is published, OpenStack has already released its newest version, code-named Liberty, which is what I'm using here. Follow this link to learn more about it. For a beginner, OpenStack can be an intimidating platform because it is composed of so many components (Nova, Neutron, Horizon, Cinder, Glance, Ceilometer, and others), and each of these components is a complex and vast subject on its own. So, if an OpenStack platform isn't working correctly, it can be very hard to pin down the cause of the problem and figure out how to solve it. That's why solid fundamental knowledge of the individual OpenStack modules, and of the platform as a whole, is so important. Before jumping into the implementation part, I'd like to briefly introduce OpenStack and my project setup environment.
This blog is intended for an audience with a good understanding of computer networking, Linux, and virtualization. I'd encourage everyone to read it end to end. However, if you're planning to follow this blog to build your own lab, please make sure you're well familiar with, and skilled in, the subjects mentioned above.
What is OpenStack?
OpenStack is a free and open-source cloud-computing platform that controls and manages large pools of compute, storage, and networking resources throughout a data center. It began in 2010 as a joint collaboration between Rackspace Hosting and NASA, with the shared objective of developing and promoting an open cloud platform. Since 2012, the OpenStack Foundation, a non-profit organization, has managed the project. As of today, more than 500 leading IT companies have joined the project, and an open community across the world contributes to and uses it.
What is my Project Setup?
For this project, I'm using a CentOS 7 minimal install as the host operating system in a VMware environment. I've allocated 4 vCPUs, 8 GB of RAM, 20 GB of disk space, and two NICs, and I've assigned the two interfaces the IP addresses 10.10.11.248 and 10.10.11.249 respectively. The main reason for using two network interfaces here is to maintain remote connectivity to the host while modifying network configurations inside the OpenStack environment. Trust me when I say this: setting up and troubleshooting OpenStack Neutron networking can be a big headache, so it's better to have backup connectivity to the server.
Project Deployment
I started this project by creating a new virtual machine on my VMware vSphere platform. I hope you're familiar with VM creation in vSphere; if not, please follow this link to learn how. After creating a VM with the required configuration, I mounted my CentOS 7 minimal installation ISO to it from the vSphere datastore, where I had uploaded the ISO earlier. Then I booted the VM from CD/DVD and the CentOS installation began. (In case you'd like to know how to install CentOS, please read my article on it.)
After the OS was installed, I rebooted and logged in with the user credentials I had created during installation. Then I did a few initial configurations on my CentOS system before installing OpenStack.
Initial System Configuration
Setting up my networking (Note: The naming convention for interfaces in CentOS 7 is a little different from its earlier versions and other Linux distributions):
[code][root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736[/code]
This is my NIC1 and its configuration is as follows:
[code]TYPE=Ethernet
DEVICE=eno16777736
ONBOOT=yes
NETBOOT=yes
BOOTPROTO=static
IPADDR=10.10.11.248
NETMASK=255.255.255.0
GATEWAY=10.10.11.1
DNS1=10.10.13.2
HWADDR=00:50:56:88:0d:8e[/code]
Configuring NIC2:
[code][root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno33555200
TYPE=Ethernet
DEVICE=eno33555200
ONBOOT=yes
NETBOOT=yes
BOOTPROTO=static
IPADDR=10.10.11.249
NETMASK=255.255.255.0
GATEWAY=10.10.11.1
DNS1=10.10.13.2
HWADDR=00:50:56:88:72:aa[/code]
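These ifcfg files are plain KEY=VALUE text, so they're easy to sanity-check before you restart the network. Here's a small Python sketch (my own helper for illustration, not part of any OpenStack or CentOS tooling) that parses one and flags missing keys for a static configuration:

```python
def parse_ifcfg(text):
    """Parse a sysconfig-style KEY=VALUE file into a dict.

    Note: a duplicated key silently overwrites the earlier value,
    which is exactly why stray duplicate lines are easy to miss.
    """
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
    return conf

sample = """\
DEVICE=eno16777736
BOOTPROTO=static
IPADDR=10.10.11.248
NETMASK=255.255.255.0
GATEWAY=10.10.11.1
ONBOOT=yes
"""

conf = parse_ifcfg(sample)
# A static interface needs at least an address and a netmask:
missing = {"IPADDR", "NETMASK"} - conf.keys() if conf.get("BOOTPROTO") == "static" else set()
print(missing)  # set()
```

A check like this won't catch every problem, but it does catch the silent-overwrite class of mistakes before a network restart locks you out.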
I also updated my hosts file as:
[code][root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost
127.0.0.1   openstack[/code]
I updated my system’s hostname and domain too as:
[code][root@localhost ~]# hostname openstack
[root@openstack ~]# vi /etc/hostname
openstack.sajjan.com.np[/code]
Restarting networking to implement the changes:
[code][root@openstack ~]# systemctl restart network.service[/code]
Note: Just a heads up! If you're doing this lab in a virtual environment like VMware or Xen and you've got a cloned VM, you're likely to run into networking problems, because the MAC addresses of the network interfaces inside a cloned VM aren't assigned correctly during cloning. In my case, removing the attached network adapters from the new VM and adding fresh ones solved the problem. Before you do that, though, make sure you've deleted all the network interface configuration files (/etc/sysconfig/network-scripts/ifcfg-e*) from the cloned system. Then, when you restart the system, it'll pick up new MAC addresses and generate new configuration files automatically. In case it doesn't generate the config files, you can create them manually after learning your interfaces' names from ip link show.
Installing OpenStack
First, I installed a repository that provides OpenStack, because the base CentOS repository doesn't include it. In my case, I used the Fedora-hosted repo; you can use the repo of the RDO project itself as well.
[code][root@openstack ~]# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
[root@openstack ~]# yum install -y openstack-packstack
[root@openstack ~]# yum update[/code]
The newly added repo information looked something like this:
[code][root@openstack ~]# more /etc/yum.repos.d/rdo-release.repo
[openstack-liberty]
name=OpenStack Liberty Repository
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud[/code]
[Note: If you're logged in as a user other than root, don't forget to use sudo before each yum command.]
After the installation and update finished, I restarted the system to make sure all the updated packages and OpenStack's modifications took effect properly.
The next time I logged in, I entered the following command to install and start OpenStack RDO.
[code][root@openstack ~]# packstack --allinone[/code]
Now that my all-in-one OpenStack platform was up, I had to set up networking within this newly created platform. I began by creating a bridge between my CentOS system's NIC (NIC2 in my case; NIC1 remained my backup for remote connectivity) and the OVS instance inside OpenStack. For that purpose, I created a new bridge interface called br-ex and configured it as follows:
[code][root@openstack ~]# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.10.11.249
NETMASK=255.255.255.0
GATEWAY=10.10.11.1
DNS1=10.10.13.2
ONBOOT=yes[/code]
Then I modified my NIC2 (the eno33555200 interface) to match the newly created bridge. Note that the IP assignment has moved from eno33555200 to br-ex. Also, eno33555200 is now an OVSPort instead of a regular Ethernet interface, and it has become part of the OVS bridge.
[code][root@openstack ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno33555200
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
DEVICE=eno33555200
ONBOOT=yes
NETBOOT=yes
BOOTPROTO=none
HWADDR=00:50:56:88:72:aa[/code]
Then I told Neutron about the newly created bridge interface and mapped it to my physical network. I did that by opening the /etc/neutron/plugin.ini file in vi and adding the following lines.
[code][root@openstack ~]# vi /etc/neutron/plugin.ini[/code]
Under the [ml2_type_vlan] section:
[code]network_vlan_ranges = physnet1
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex[/code]
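Since plugin.ini is an INI-style file, a quick way to double-check the mapping after editing is Python's standard configparser. This is just a sketch: the sample below mirrors only the [ovs] fragment added above, while the real file contains many more sections.

```python
import configparser

# Sample mirroring the [ovs] fragment added to plugin.ini above.
sample = """\
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# bridge_mappings is a comma-separated list of physnet:bridge pairs.
mappings = dict(pair.strip().split(":")
                for pair in cfg["ovs"]["bridge_mappings"].split(","))
print(mappings)  # {'physnet1': 'br-ex'}
```

A typo here (say, a missing colon in a pair) surfaces immediately as a parse error instead of a silent Neutron misconfiguration later.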
I then restarted my network service again to implement my changes.
[code][root@openstack ~]# systemctl restart network.service[/code]
If you've made a mistake while following these steps, your network might not come up properly, and it might even leave you disconnected if you're accessing the system remotely via SSH. This is exactly why I used two network interfaces in my system: even if you mess up one interface, you still have remote access through the other, which remains unaffected throughout the process. If your network fails after the restart, begin troubleshooting by checking the status and logs of the network service.
[code][root@openstack ~]# systemctl status network.service
[root@openstack ~]# journalctl -xn [/code]
When I restarted my network, it came up without any errors, and when I inspected my system's interfaces, everything looked as it should:
[code][root@openstack ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:50:56:88:0d:8e brd ff:ff:ff:ff:ff:ff
3: eno33555200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether 00:50:56:88:72:aa brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 0e:01:7f:03:c9:74 brd ff:ff:ff:ff:ff:ff
6: br-tun: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether ea:3e:e0:57:64:4e brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 06:5d:ae:a5:8d:44 brd ff:ff:ff:ff:ff:ff
34: qbr2969d5c9-84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether ea:df:41:71:d4:02 brd ff:ff:ff:ff:ff:ff
35: qvo2969d5c9-84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 6e:85:40:e4:c2:f3 brd ff:ff:ff:ff:ff:ff
36: qvb2969d5c9-84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2969d5c9-84 state UP qlen 1000
link/ether ea:df:41:71:d4:02 brd ff:ff:ff:ff:ff:ff
38: qbrf4901064-dd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 52:c0:69:c3:75:97 brd ff:ff:ff:ff:ff:ff
39: qvof4901064-dd: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 8e:33:57:70:f6:a7 brd ff:ff:ff:ff:ff:ff
40: qvbf4901064-dd: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrf4901064-dd state UP qlen 1000
link/ether 52:c0:69:c3:75:97 brd ff:ff:ff:ff:ff:ff
41: tapf4901064-dd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrf4901064-dd state UNKNOWN qlen 500
link/ether fe:16:3e:97:96:16 brd ff:ff:ff:ff:ff:ff
42: tap2969d5c9-84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2969d5c9-84 state UNKNOWN qlen 500
link/ether fe:16:3e:6f:dd:2c brd ff:ff:ff:ff:ff:ff
43: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:50:56:88:72:aa brd ff:ff:ff:ff:ff:ff
inet 10.10.11.249/24 brd 10.10.11.255 scope global br-ex
valid_lft forever preferred_lft forever[/code]
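If you ever want to check this output programmatically rather than by eye, the interface states can be scraped with a few lines of Python. This is a sketch: the regex assumes iproute2's usual `N: name: <FLAGS> ... state STATE` layout, and the sample is trimmed from the listing above.

```python
import re

# Trimmed sample of `ip a` output from the listing above.
sample = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
2: eno16777736: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
3: eno33555200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
"""

# Map interface name -> operational state.
states = dict(re.findall(r"^\d+: ([^:@]+):.* state (\S+)", sample, re.M))
print(states)  # {'lo': 'UNKNOWN', 'eno16777736': 'DOWN', 'eno33555200': 'UP'}
```

With OpenStack's long list of qbr/qvo/tap devices, a quick map like this makes it much easier to spot the one interface that is DOWN.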
[code][root@openstack ~]# ovs-vsctl show
7e8e0970-923e-4bc1-a453-c7b21bfac30c
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eno33555200"
Interface "eno33555200"
Port "qg-98a7a712-e0"
Interface "qg-98a7a712-e0"
type: internal
Bridge br-int
fail_mode: secure
Port "qvo2969d5c9-84"
tag: 7
Interface "qvo2969d5c9-84"
Port "qvof4901064-dd"
tag: 7
Interface "qvof4901064-dd"
Port "tap4da2f74f-6b"
tag: 2
Interface "tap4da2f74f-6b"
type: internal
Port "tapa8ad3b8a-52"
tag: 7
Interface "tapa8ad3b8a-52"
type: internal
Port "tap3c9982ec-9b"
tag: 4095
Interface "tap3c9982ec-9b"
type: internal
Port "qr-4fa0cc6d-ae"
tag: 7
Interface "qr-4fa0cc6d-ae"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "2.4.0"[/code]
Configuring OpenStack
Before configuring the components of OpenStack, we first need to authenticate ourselves to the system, which we do by sourcing the credential file keystonerc_admin stored in our home directory.
[code][root@openstack ~]# source keystonerc_admin[/code]
Now, let's begin with the setup of Neutron, OpenStack's networking module. I first displayed a list of the existing networks in my cloud. By default, there's an external/public network that represents the host network, and a private network with a private_subnet (10.0.0.0/24) that represents a network inside the cloud. There's also a default router called router1, which has a gateway to the public network. You can reuse these default networks, subnets, and router with the required configuration, or you can remove them completely and create an entirely new network topology.
Note: Inside OpenStack, every component and entity is identified by a Universally Unique Identifier (UUID) for easier identification and management. In the following sections, I'll be using these UUIDs time and again for various operations, so please keep track of them whenever I highlight them and use them in my commands.
I entered the following commands to display the lists of networks, subnets, and routers:
[code][root@openstack ~(keystone_admin)]# neutron net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+-------------------------------------------------------+
| 2ad9fd9b-ea60-4b65-83a8-ab1cb74bd9ed | GATEWAY_NET | d1ac48ea-b108-4cd0-81ee-ed430c102596 10.10.11.0/24 |
| 87ac8c42-74a6-4981-a5a6-4051fbf7a29b | CLOUD_NET | 1bb568e1-14ee-49b5-b809-ec1674845a65 192.168.100.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+
[root@openstack ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+----------------+------------------+------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------------+------------------+------------------------------------------------------+
| d1ac48ea-b108-4cd0-81ee-ed430c102596 | vSphere_Net | 10.10.11.0/24 | {"start": "10.10.11.245", "end": "10.10.11.247"} |
| 1bb568e1-14ee-49b5-b809-ec1674845a65 | Cloud_Subnet_1 | 192.168.100.0/24 | {"start": "192.168.100.2", "end": "192.168.100.100"} |
+--------------------------------------+----------------+------------------+------------------------------------------------------+
[root@openstack ~(keystone_admin)]# neutron router-list
+--------------------------------------+----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id | name | external_gateway_info | distributed | ha |
+--------------------------------------+----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| 500e7407-4722-473c-83d0-13fd8bddfe7d | Neutron_Router | {"network_id": "2ad9fd9b-ea60-4b65-83a8-ab1cb74bd9ed", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "d1ac48ea-b108-4cd0-81ee-ed430c102596", "ip_address": "10.10.11.244"}]} | False | False |
+--------------------------------------+----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+[/code]
As you can see in the results above, I've deleted the default network topology and created a new one to suit my requirements. I've created an external network called GATEWAY_NET and an internal network called CLOUD_NET. Inside my external network, I created a subnet called vSphere_Net (10.10.11.0/24) with an IP range from 10.10.11.245 to 10.10.11.247. Similarly, inside CLOUD_NET I created a subnet called Cloud_Subnet_1 (192.168.100.0/24) with an IP range from 192.168.100.2 to 192.168.100.100.
Commands I used to delete older networks and subnets and then to create new ones:
[code][root@openstack ~(keystone_admin)]# neutron net-delete <UUID-of-network>
[root@openstack ~(keystone_admin)]# neutron subnet-delete <UUID-of-subnet>
[root@openstack ~(keystone_admin)]# neutron net-create GATEWAY_NET --router:external
[root@openstack ~(keystone_admin)]# neutron net-create CLOUD_NET
[root@openstack ~(keystone_admin)]# neutron subnet-create --name vSphere_Net --enable_dhcp=False --allocation-pool=start=10.10.11.245,end=10.10.11.247 --gateway=10.10.11.1 GATEWAY_NET 10.10.11.0/24
[root@openstack ~(keystone_admin)]# neutron subnet-create --name Cloud_Subnet_1 --enable_dhcp=True --allocation-pool=start=192.168.100.2,end=192.168.100.100 CLOUD_NET 192.168.100.0/24 --dns-nameservers list=true 10.10.13.2 8.8.8.8[/code]
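One thing worth verifying before running subnet-create is that each allocation pool actually sits inside its subnet's CIDR and doesn't swallow the gateway address. Here's a quick Python sketch using the standard ipaddress module (the helper is mine, for illustration; the values are the ones from my commands above):

```python
import ipaddress

def pool_ok(cidr, start, end, gateway=None):
    """Check that [start, end] lies inside cidr and excludes the gateway."""
    net = ipaddress.ip_network(cidr)
    lo, hi = ipaddress.ip_address(start), ipaddress.ip_address(end)
    if not (lo in net and hi in net and lo <= hi):
        return False
    if gateway is not None:
        gw = ipaddress.ip_address(gateway)
        if lo <= gw <= hi:
            return False  # the pool would hand out the gateway's own address
    return True

# vSphere_Net pool on the external network:
print(pool_ok("10.10.11.0/24", "10.10.11.245", "10.10.11.247", "10.10.11.1"))            # True
# Cloud_Subnet_1 pool on CLOUD_NET (Neutron defaults the gateway to .1):
print(pool_ok("192.168.100.0/24", "192.168.100.2", "192.168.100.100", "192.168.100.1"))  # True
```

Neutron rejects most bad pools itself, but checking up front saves a round of delete-and-recreate.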
In the same way, I used the following commands to delete the default router and then create a new one. Before deleting the old router, I first cleared its gateway and removed all of its associated interfaces.
[code][root@openstack ~(keystone_admin)]# neutron router-list
[root@openstack ~(keystone_admin)]# neutron router-show <UUID-of-old-router>
[root@openstack ~(keystone_admin)]# neutron router-gateway-clear <UUID-of-router>
[root@openstack ~(keystone_admin)]# neutron router-port-list <UUID-of-router>
[root@openstack ~(keystone_admin)]# neutron router-interface-delete <router-id> <subnet-id>
[root@openstack ~(keystone_admin)]# neutron router-delete <router-id>
[root@openstack ~(keystone_admin)]# neutron router-create Neutron_Router
[root@openstack ~(keystone_admin)]# neutron router-gateway-set Neutron_Router GATEWAY_NET
[root@openstack ~(keystone_admin)]# neutron router-interface-add Neutron_Router Cloud_Subnet_1[/code]
Then I made sure that all of my OpenStack components were enabled and running correctly by entering the openstack-status command. Next, I browsed to my OpenStack server's IP address in a web browser and logged in as admin, with the password stored in my keystonerc_admin file.
Then I clicked Project > Network > Network Topology in the navigation panel on the left and browsed the network topology I had created earlier.
In the topology, you can see the networks and router I created above. Since this is my final network topology, it also contains two instances (Cirrius and Cirrius-2) connected to the CLOUD_NET network. If you've been following along step by step, your topology will look exactly like mine except for the two instances, which brings me to the next step: creating my instances. Before launching them, though, I created the floating IPs that would later be assigned to the to-be-created instances.
If you're wondering what a floating IP is: it's simply a statically NATed IP address on the external network that represents a private IP on the internal network. For example, in my case I've got an internal network of 192.168.100.0/24 with a DHCP range of 192.168.100.2-100, and an external network of 10.10.11.0/24 with an allocation range of 10.10.11.245-247. So when I generate my first floating IP from the external network, it becomes 10.10.11.245. And when I create my first instance and connect it to the internal network, it will most likely get the private address 192.168.100.3. To let this instance communicate with the external network, or even the Internet, I have to assign it a floating IP, which I do by specifying the floating IP I created earlier and the port created on the instance. After the assignment, my instance is still identified locally as 192.168.100.3, but it appears as 10.10.11.245 on the external network, and can therefore communicate with the external network and the Internet.
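Conceptually, each floating-IP association behaves like a static one-to-one NAT entry on the router: source NAT on the way out, destination NAT on the way in. Here's a toy Python model of the idea (illustration only; the addresses match my example above, and this is not how Neutron stores anything internally):

```python
# floating IP -> fixed (private) IP, one entry per association
floating_to_fixed = {"10.10.11.245": "192.168.100.3"}
fixed_to_floating = {v: k for k, v in floating_to_fixed.items()}

def translate_outbound(src):
    """SNAT: rewrite the source address as the packet leaves the cloud."""
    return fixed_to_floating.get(src, src)

def translate_inbound(dst):
    """DNAT: rewrite the destination address as the packet enters the cloud."""
    return floating_to_fixed.get(dst, dst)

print(translate_outbound("192.168.100.3"))  # 10.10.11.245
print(translate_inbound("10.10.11.245"))    # 192.168.100.3
```

Because the mapping is one-to-one and bidirectional, the same instance is reachable from outside at its floating address while still using its fixed address internally.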
This is how I created my floating IPs:
[code][root@openstack ~(keystone_admin)]# neutron floatingip-create GATEWAY_NET[/code]
Alternatively, we can use the web panel to create floating IPs: go to Project > Compute > Access & Security, open the Floating IPs tab, and click Allocate IP To Project.
I also created some firewall rules to allow both ingress and egress ICMP and SSH traffic across CLOUD_NET and GATEWAY_NET. We can add these rules under Project > Compute > Access & Security > Security Groups, by selecting the security group and clicking Manage Rules. My default security group looked something like this:
Now, let's go ahead and create some instances. We can create an instance in OpenStack by going to Project > Compute > Instances > Launch Instance. In the instance-creation window, we need to provide the name, flavor, boot source, and network interfaces as the primary parameters. Since there are numerous parameters to enter, launching an instance is easier in the web panel than on the CLI. Still, it can be done on the CLI as follows, where I created an instance called Cirrius-3:
[code][root@openstack ~(keystone_admin)]# nova boot --flavor=m1.tiny --image=<image-UUID> --nic net-id=<network-UUID> <Instance-name>
[root@openstack ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 9e5e9b79-2c31-4bd4-a118-73e2b7478040 | cirros |
+--------------------------------------+--------+
[root@openstack ~(keystone_admin)]# nova boot --flavor=m1.tiny --image=9e5e9b79-2c31-4bd4-a118-73e2b7478040 --nic net-id=87ac8c42-74a6-4981-a5a6-4051fbf7a29b Cirrius-3
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | – |
| OS-EXT-SRV-ATTR:hypervisor_hostname | – |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000f |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | – |
| OS-SRV-USG:terminated_at | – |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | X7GbqSM9xFGa |
| config_drive | |
| created | 2015-11-23T15:23:54Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | abe7c1eb-8676-419f-9533-d628d160fc98 |
| image | cirros (9e5e9b79-2c31-4bd4-a118-73e2b7478040) |
| key_name | – |
| metadata | {} |
| name | Cirrius-3 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | a078e24965564c26815e29cb97307b84 |
| updated | 2015-11-23T15:23:54Z |
| user_id | ebf7bb96f3ff4811b8010c2d678991d8 |
+--------------------------------------+-----------------------------------------------+[/code]
Then I listed my floating IPs so that I could assign the unassigned one to my newly created instance.
[code][root@openstack ~(keystone_admin)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 0a97d36a-5c53-4fb1-962a-6237ea3bcbb1 | 192.168.100.3 | 10.10.11.245 | d40955d9-ea35-4c4d-a8d5-ceea9a2bb0da |
| 1305cb4b-5926-48a2-9e20-1863d5b42b56 | 192.168.100.4 | 10.10.11.246 | d96f9d00-2498-425f-b801-0371c931230c |
| ae74a2a1-9b89-4b6d-b6e2-aa67a43382e7 | | 10.10.11.247 | |
+--------------------------------------+------------------+---------------------+--------------------------------------+[/code]
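When scripting this kind of workflow, picking out the unassociated floating IP by hand gets tedious. Here's a small Python helper (my own sketch, parsing the CLI table shown above; with newer clients you'd rather ask the API for JSON) that returns the floating IPs whose port_id column is empty:

```python
# Data rows copied from the `neutron floatingip-list` output above.
table = """\
| 0a97d36a-5c53-4fb1-962a-6237ea3bcbb1 | 192.168.100.3 | 10.10.11.245 | d40955d9-ea35-4c4d-a8d5-ceea9a2bb0da |
| 1305cb4b-5926-48a2-9e20-1863d5b42b56 | 192.168.100.4 | 10.10.11.246 | d96f9d00-2498-425f-b801-0371c931230c |
| ae74a2a1-9b89-4b6d-b6e2-aa67a43382e7 | | 10.10.11.247 | |
"""

def free_floating_ips(table_text):
    """Return the floating IPs whose port_id cell is empty."""
    free = []
    for line in table_text.splitlines():
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cols) == 4 and not cols[3]:
            free.append(cols[2])
    return free

print(free_floating_ips(table))  # ['10.10.11.247']
```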
I also listed the ports created on my new instance and grabbed the port's UUID so I could associate it with the free floating IP.
[code][root@openstack ~(keystone_admin)]# neutron port-list --device-id abe7c1eb-8676-419f-9533-d628d160fc98
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 64fab88d-22cf-4cd4-a1f8-b4de6eb82830 | | fa:16:3e:03:d1:4f | {"subnet_id": "1bb568e1-14ee-49b5-b809-ec1674845a65", "ip_address": "192.168.100.5"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+[/code]
Now that I had a free floating IP and an unassociated port on my instance, I associated the two.
[code][root@openstack ~(keystone_admin)]# neutron floatingip-associate ae74a2a1-9b89-4b6d-b6e2-aa67a43382e7 64fab88d-22cf-4cd4-a1f8-b4de6eb82830
Associated floating IP ae74a2a1-9b89-4b6d-b6e2-aa67a43382e7[/code]
To test network connectivity to our new instance, let's SSH into it and ping Google from there.
[code][root@openstack ~(keystone_admin)]# ssh cirros@10.10.11.247
The authenticity of host '10.10.11.247 (10.10.11.247)' can't be established.
RSA key fingerprint is 03:1b:63:29:6e:da:3a:c7:df:34:48:7c:f3:4c:5a:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.11.247' (RSA) to the list of known hosts.
cirros@10.10.11.247's password:
$ ping google.com
PING google.com (202.51.67.23): 56 data bytes
64 bytes from 202.51.67.23: seq=0 ttl=58 time=3.847 ms
64 bytes from 202.51.67.23: seq=1 ttl=58 time=4.121 ms
64 bytes from 202.51.67.23: seq=2 ttl=58 time=2.754 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.754/3.574/4.121 ms[/code]
We can also observe the logs being generated by this new instance:
[code][root@openstack ~(keystone_admin)]# nova console-log Cirrius-3
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
…
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,192.168.100.5,24,fe80::f816:3eff:fe03:d14f
ip-route:default via 192.168.100.1 dev eth0
ip-route:169.254.169.254 via 192.168.100.1 dev eth0
ip-route:192.168.100.0/24 dev eth0 src 192.168.100.5
=== datasource: ec2 net ===
instance-id: i-0000000f
name: N/A
availability-zone: nova
local-hostname: cirrius-3.novalocal
launch-index: 0
=== cirros: current=0.3.3 latest=0.3.4 uptime=59.59 ===
____ ____ ____
/ __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/ \____/___/
http://cirros-cloud.net[/code]
We can also dump the traffic flows across various network interfaces to understand the traffic patterns and network performance of our cloud platform.
[code][root@openstack ~(keystone_admin)]# ovs-ofctl dump-flows br-ex
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=290498.318s, table=0, n_packets=103272, n_bytes=18950435, idle_age=0, hard_age=65534, priority=0 actions=NORMAL
[root@openstack ~(keystone_admin)]# ovs-ofctl dump-flows br-int
…
cookie=0xbc8b820daa6413f5, duration=2846.566s, table=0, n_packets=23, n_bytes=966, idle_age=1303, priority=10,arp,in_port=8 actions=resubmit(,24)[/code]
…
If you've been following along step by step up to now, you'll be able to create your own instances and SSH into them. But when you try to ping any Internet site or the external gateway (in this case 10.10.11.1, which is my physical router), you won't have connectivity beyond your directly connected networks, i.e. 192.168.100.0/24 and 10.10.11.0/24. To be specific, you won't even be able to reach the external gateway 10.10.11.1, because it lies two hops away from your instances even though its parent network is directly connected.
So, to solve this issue, I needed to create static routes on Neutron_Router, which we can do under Project > Network > Routers > Static Routes by clicking Add Static Route. First, I added a default route (0.0.0.0/0) with the next hop set to my OpenStack host's bridge interface address, 10.10.11.249 in this case. Adding this single route let my instances reach the Internet, but they still couldn't ping or access my physical router (10.10.11.1), even though their traffic was passing through that very router. So I added another route for that router alone (10.10.11.1/32), again with 10.10.11.249 as the next hop. After that, I could ping and access the external router.
I said I could ping my external router and the Internet after setting up the static routes, but in reality I wasn't able to reach the external network just yet. Adding the correct static routes on Neutron_Router completes the network configuration inside the cloud, but even after adding those routes, my instances couldn't reach the Internet, or even the external router for that matter. The problem lay with my OpenStack host, which wasn't forwarding traffic between my cloud and the external network. Printing the contents of the ip_forward file proved my suspicion right: it really was the culprit. So I modified it to make the system forward IP traffic. (Keep in mind that echoing into /proc is not persistent; to survive a reboot, the setting also needs to go into /etc/sysctl.conf as net.ipv4.ip_forward = 1.)
[code][root@openstack ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@openstack ~]# echo 1 > /proc/sys/net/ipv4/ip_forward[/code]
After telling my host system to forward IP traffic, I once again tried to ping an Internet site and my external router from one of my instances. To my surprise, it still couldn't reach external networks. Back at square one, I did more research on network troubleshooting in OpenStack, and came across mentions of NAT traffic being blocked by iptables on the OpenStack host. OpenStack had configured numerous iptables firewall rules during installation, but for some unknown reason it had missed one: the rule allowing NAT MASQUERADE traffic out through the bridge interface. Since our cloud platform hosts multiple instances that all need to reach the Internet via this single host, Linux implements a one-to-many NAT function called masquerading, and that missing rule is what enables it.
[code][root@openstack ~]# iptables -t nat -A POSTROUTING -o br-ex -j MASQUERADE
[root@openstack ~]# iptables-save > /etc/sysconfig/iptables[/code]
Finally, my instances could communicate with the Internet and with our external router. Only then would my earlier ping to google.com work. I also traced the route from inside an instance:
[code]cirrius$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 192.168.100.1 (192.168.100.1) 0.639 ms 0.175 ms 2.048 ms
2 * 10.10.11.249 (10.10.11.249) 1.881 ms 0.180 ms
3 10.10.11.1 (10.10.11.1) 5.126 ms 2.431 ms 2.338 ms
…
8 202.51.66.146 (202.51.66.146) 34.679 ms 8.060 ms 2.731 ms
9 202.51.66.205 (202.51.66.205) 54.977 ms 55.403 ms 56.093 ms
10 209.85.249.51 (209.85.249.51) 54.153 ms 209.85.248.85 (209.85.248.85) 56.672 ms 209.85.249.51 (209.85.249.51) 54.985 ms
11 google-public-dns-a.google.com (8.8.8.8) 53.856 ms 55.784 ms 54.698 ms[/code]
My final network topology looked like this:
This is it! I hope you've learned something useful here. Please let me know your suggestions or feedback in the comments section below. Thank you!