How to install an OpenStack (RHOSP 6) all-in-one setup using Packstack

In this article, I am going to explain the all-in-one setup of OpenStack using Packstack. Below are my setup details.

-> Installed a RHEL 7.1 machine as a KVM guest (important: the NAT interface is connected to the KVM machine).
-> Subscribed to the required pools.
-> Installed RHOSP 6.

Step 1 :  Enabling the required repositories.

a) In order to register with the OpenStack channel, you first need to get the pool ID using the below command.

# subscription-manager list --available --all | grep -i openstack -A 10

Replace <POOLID> with the ID you got from the previous command.

# subscription-manager attach --pool=<POOLID>

Enable the openstack (RHOSP 6) repository.

# subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms

b) Finally, my system is registered with the below channels.

~~~
[root@opens1 ~(keystone_admin)]# subscription-manager list

+-------------------------------------------+
Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.1
Arch:           x86_64
Status:         Subscribed
Status Details:
Starts:
Ends:

Product Name:   Red Hat OpenStack
Product ID:     191
Version:        6.0
Arch:           x86_64
Status:         Subscribed
Status Details:
Starts:
Ends:
~~~

Step 2 : Upgrade the system using "yum update" and reboot into the latest kernel. This is not mandatory, but it's better to do it. Once the server comes back up, install the Packstack package.

# yum update
# reboot
# yum install -y openstack-packstack

Step 3 : Generate an answer file if you want to modify any of the installation parameters:

# packstack --gen-answer-file=/root/answer.txt

In my case, I directly started the installation using:

# packstack --allinone
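If you do use an answer file, the usual workflow is to flip a few CONFIG_* keys and then feed the file back to packstack with --answer-file. A minimal sketch of that workflow, simulated on a scratch file so it can run anywhere (CONFIG_PROVISION_DEMO and CONFIG_NEUTRON_ML2_TYPE_DRIVERS are standard Packstack keys, but check your generated file for the exact set):

```shell
# Simulate a couple of lines from a generated answer file:
cat > /tmp/answer.txt <<'EOF'
CONFIG_PROVISION_DEMO=y
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
EOF

# Typical tweaks: skip the demo tenant, enable flat/vlan type drivers up front:
sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' /tmp/answer.txt
sed -i 's/^CONFIG_NEUTRON_ML2_TYPE_DRIVERS=.*/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,vlan/' /tmp/answer.txt

grep '^CONFIG_' /tmp/answer.txt

# The real install would then be driven by:
#   packstack --answer-file=/tmp/answer.txt
```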

Step 4 : Once the installation is finished, we need to bring the existing external interface under the br-ex bridge of OVS (Open vSwitch) to provide external connectivity to the instances.

My existing interface was eth0, which was providing connectivity to the base machine using NAT. I modified its configuration file to use it as the port for the br-ex bridge.

[root@opens1 ~(keystone_admin)]# egrep -v "^(#|$)" /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=OVSPort
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=08ede69a-19ed-4b58-99b0-2066177603e9
DEVICE=eth0
ONBOOT=yes
DEVICETYPE=ovs
OVS_BRIDGE=br-ex

Then I created a new configuration file for the br-ex bridge:

[root@opens1 ~(keystone_admin)]# egrep -v "^(#|$)" /etc/sysconfig/network-scripts/ifcfg-br-ex
TYPE=OVSBridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
DEVICE=br-ex
ONBOOT=yes
IPADDR=192.168.100.225
NETMASK=255.255.255.0

Step 5 : Add eth0 to the bridge and restart the network in the same command line, to avoid losing network connectivity in between.

[root@opens1 ~(keystone_admin)]# ovs-vsctl add-port br-ex eth0 ; systemctl restart network

Step 6 : Make some changes in the Neutron configuration: map the physical network name "extnet" to the br-ex bridge, and enable the flat and vlan type drivers alongside vxlan.

[root@opens1 ~]# openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
[root@opens1 ~]# openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
[root@opens1 ~]# systemctl restart network
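For reference, the two openstack-config calls above effectively leave the stanzas shown below in those ini files; the "extnet" name is what we will pass later as --provider:physical_network when creating the external network. Reproduced here on scratch copies so the snippet can run anywhere:

```shell
# Effect of the first call (ovs section of the OVS plugin config):
cat > /tmp/ovs_neutron_plugin.ini <<'EOF'
[ovs]
bridge_mappings = extnet:br-ex
EOF

# Effect of the second call (ml2 section of the core plugin config):
cat > /tmp/plugin.ini <<'EOF'
[ml2]
type_drivers = vxlan,flat,vlan
EOF

grep bridge_mappings /tmp/ovs_neutron_plugin.ini
```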

Step 7 : Time to create the external network, which will provide connectivity to the outside world for the instances.

I created an external network named "external_network".

[root@opens1 ~(keystone_admin)]# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external --shared
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 326df58a-a158-43b1-916d-82d84cb4d31b |
| name                      | external_network                     |
| provider:network_type     | flat                                 |
| provider:physical_network | extnet                               |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 41f0f6e665dc4e059288283b3b7595cc     |
+---------------------------+--------------------------------------+

Then I created a subnet for the external network. Floating IPs will be assigned to instances from this subnet's allocation pool range.

[root@opens1 ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.100.210,end=192.168.100.220 --gateway=192.168.100.1 external_network 192.168.100.0/24
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.100.210", "end": "192.168.100.220"} |
| cidr              | 192.168.100.0/24                                       |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.100.1                                          |
| host_routes       |                                                        |
| id                | fd6bd388-0f30-48a8-b2b6-78a1faf71df5                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | public_subnet                                          |
| network_id        | 326df58a-a158-43b1-916d-82d84cb4d31b                   |
| tenant_id         | 41f0f6e665dc4e059288283b3b7595cc                       |
+-------------------+--------------------------------------------------------+
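As a quick sanity check, the size of the allocation pool defined above can be worked out with plain shell arithmetic (a small illustrative snippet, not part of the deployment; it assumes the pool sits inside a single /24 as it does here):

```shell
# How many floating IPs does the allocation pool provide?
start=192.168.100.210
end=192.168.100.220
s=${start##*.}              # last octet of the pool start
e=${end##*.}                # last octet of the pool end
pool_size=$(( e - s + 1 ))
echo "$pool_size floating IPs available for instances"
```

So with this pool, at most 11 instances can hold a floating IP at the same time.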

Step 8 : Once the external network is in place, let's create the internal network as well. I used the 10.0.0.0/24 range for internal connectivity.

[root@opens1 ~(keystone_admin)]# neutron net-create private

[root@opens1 ~(keystone_admin)]# neutron subnet-create private 10.0.0.0/24 --name private
Created a new subnet:
+-------------------+--------------------------------------------+
| Field             | Value                                      |
+-------------------+--------------------------------------------+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr              | 10.0.0.0/24                                |
| dns_nameservers   |                                            |
| enable_dhcp       | True                                       |
| gateway_ip        | 10.0.0.1                                   |
| host_routes       |                                            |
| id                | 7bdbaf8a-98dd-4c9e-bd28-a94b812e1240       |
| ip_version        | 4                                          |
| ipv6_address_mode |                                            |
| ipv6_ra_mode      |                                            |
| name              | private                                    |
| network_id        | 5edea9b1-feb9-4502-8d58-7aa75260c695       |
| tenant_id         | 41f0f6e665dc4e059288283b3b7595cc           |
+-------------------+--------------------------------------------+

Finally, both the external and internal networks are created.

[root@opens1 ~(keystone_admin)]# neutron net-list
+--------------------------------------+------------------+-------------------------------------------------------+
| id                                   | name             | subnets                                               |
+--------------------------------------+------------------+-------------------------------------------------------+
| 326df58a-a158-43b1-916d-82d84cb4d31b | external_network | fd6bd388-0f30-48a8-b2b6-78a1faf71df5 192.168.100.0/24 |
| 5edea9b1-feb9-4502-8d58-7aa75260c695 | private          | 7bdbaf8a-98dd-4c9e-bd28-a94b812e1240 10.0.0.0/24      |
+--------------------------------------+------------------+-------------------------------------------------------+

Step 9 : Create a router to provide connectivity between the internal and external networks.

[root@opens1 ~(keystone_admin)]# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | a1de2f04-5bbc-45da-a48a-f51204df62e5 |
| name                  | router1                              |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 41f0f6e665dc4e059288283b3b7595cc     |
+-----------------------+--------------------------------------+

Step 10 : Set the external network as the router's gateway.

[root@opens1 ~(keystone_admin)]# neutron router-gateway-set router1 external_network
Set gateway for router router1

Step 11 : Connect the private network to the external network through the router by adding an interface for the private subnet.

[root@opens1 ~(keystone_admin)]# neutron router-interface-add router1 private
Added interface 079322e0-ff71-49f2-a832-4c1d27d186cb to router router1.


Step 12 : By default, a cirros image is available. We can launch a new instance from this image; I did this step from the Horizon dashboard.

[root@opens1 ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| 80aabdae-ed10-407f-8e35-e7a5ab3b8b6a | cirros | qcow2       | bare             | 13200896 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+

Step 13 : Once the instance is launched, it can be inspected using virsh commands. We can see the tap interface connected to the running instance.

[root@opens1 ~(keystone_admin)]# virsh list
Id    Name                           State
----------------------------------------------------
3     instance-00000004              running

[root@opens1 ~(keystone_admin)]# virsh domiflist instance-00000004
Interface       Type       Source           Model    MAC
-------------------------------------------------------------
tap9f506981-47  bridge     qbr9f506981-47   virtio   fa:16:3e:09:c9:9f

Step 14 : Going slightly off-topic, let's track where this interface is connected. From the above output we see that it's attached to a bridge, so let's check the bridge status.

[root@opens1 ~(keystone_admin)]# brctl show
bridge name         bridge id           STP enabled    interfaces
qbr9f506981-47      8000.3ee67d9e599f   no             qvb9f506981-47
                                                       tap9f506981-47

qvb is one end of a veth pair; the other end (qvo) is attached to OVS. Between your instance and OVS there is this extra Linux bridge (qbr), which exists to apply firewall and security-group rules, because those iptables rules can't be applied directly to OVS ports.

[root@opens1 ~(keystone_admin)]# ovs-vsctl list-ports br-int
int-br-ex
patch-tun
qr-079322e0-ff
qvo9f506981-47                 <<< Other end of veth pair which is present on OVS.
tap82a8d211-26
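The names in these outputs are not random: for a given Neutron port, the tap, qbr, qvb, and qvo devices all share the first 11 characters of the port UUID, so you can map any of them back to the port (and to each other). A small illustration using the port from the outputs above:

```shell
# First 11 characters of the Neutron port UUID (taken from the virsh/brctl output):
port_prefix="9f506981-47"

tap="tap${port_prefix}"    # instance-side tap device
qbr="qbr${port_prefix}"    # Linux bridge carrying the security-group rules
qvb="qvb${port_prefix}"    # veth end on the Linux bridge
qvo="qvo${port_prefix}"    # veth end on the OVS integration bridge (br-int)

echo "$tap $qbr $qvb $qvo"
```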

Step 15 : The instance has taken the IP address 10.0.0.3, which is from the private range, as the output of "ip a" inside the instance shows below. To provide connectivity to the external world, I attached a floating IP to the instance using the Horizon dashboard.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:09:c9:9f brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
inet6 fe80::f816:3eff:fe09:c99f/64 scope link
valid_lft forever preferred_lft forever

Step 16 : After attaching the floating IP, below is the status, but I still was not able to reach the VM from my base machine. Time to investigate further.

[root@opens1 ~(keystone_admin)]# nova floating-ip-list
+-----------------+--------------------------------------+----------+------------------+
| Ip              | Server Id                            | Fixed Ip | Pool             |
+-----------------+--------------------------------------+----------+------------------+
| 192.168.100.211 | 0e5b9008-c76c-4152-9f49-e757fc5b402d | 10.0.0.3 | external_network |
+-----------------+--------------------------------------+----------+------------------+

Step 17 : Ah, I had not modified the default security group rules. I added rules to allow ping (ICMP) and SSH.

[root@opens1 ~(keystone_admin)]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@opens1 ~(keystone_admin)]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Step 18 : Finally, I am able to reach my VM using the floating (public) IP address.

[root@vaggarwa ~]# ping 192.168.100.211
PING 192.168.100.211 (192.168.100.211) 56(84) bytes of data.
64 bytes from 192.168.100.211: icmp_seq=1 ttl=63 time=53.6 ms
64 bytes from 192.168.100.211: icmp_seq=2 ttl=63 time=0.572 ms
^C
--- 192.168.100.211 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1858ms
rtt min/avg/max/mdev = 0.572/27.114/53.657/26.543 ms

[root@vaggarwa ~]# ssh root@192.168.100.211
The authenticity of host '192.168.100.211 (192.168.100.211)' can't be established.
RSA key fingerprint is 7b:89:a3:3c:36:c7:8d:a9:7e:3f:9d:c8:8b:8e:e3:19.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.211' (RSA) to the list of known hosts.
root@192.168.100.211's password:
Permission denied, please try again.
root@192.168.100.211's password:

Step 19 : Wait a minute, why am I not able to see the assigned floating IP inside the VM? Where is my floating IP?

[root@vaggarwa ~]# ssh cirros@192.168.100.211
cirros@192.168.100.211's password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:09:c9:9f brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
inet6 fe80::f816:3eff:fe09:c99f/64 scope link
valid_lft forever preferred_lft forever
$

Step 20 : The floating IP actually lives inside the router's network namespace, where NAT between the floating and fixed addresses is performed.

[root@opens1 ~(keystone_admin)]# ip netns list
qrouter-a1de2f04-5bbc-45da-a48a-f51204df62e5
qdhcp-5edea9b1-feb9-4502-8d58-7aa75260c695
qdhcp-d6a907a2-ff1a-4266-9ccb-2d8613f9e0ec
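These namespace names follow a fixed pattern: qrouter-&lt;router UUID&gt; for routers and qdhcp-&lt;network UUID&gt; for DHCP, so you can derive them directly from the IDs captured earlier instead of hunting through "ip netns list":

```shell
router_id="a1de2f04-5bbc-45da-a48a-f51204df62e5"   # from the 'neutron router-create' output
net_id="5edea9b1-feb9-4502-8d58-7aa75260c695"      # private network ID from 'neutron net-list'

router_ns="qrouter-${router_id}"
dhcp_ns="qdhcp-${net_id}"
echo "$router_ns"
echo "$dhcp_ns"
```

These match the first two entries in the "ip netns list" output.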

Step 21 : The floating IP assigned to the instance can be seen using the below command; note it is configured as a /32 on the router's qg- interface.

[root@opens1 ~(keystone_admin)]# ip netns exec qrouter-a1de2f04-5bbc-45da-a48a-f51204df62e5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: qg-b137e98c-38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:fe:da:a0 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.210/24 brd 192.168.100.255 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet 192.168.100.211/32 brd 192.168.100.211 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fefe:daa0/64 scope link
valid_lft forever preferred_lft forever
15: qr-079322e0-ff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:b3:72:54 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-079322e0-ff
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb3:7254/64 scope link
valid_lft forever preferred_lft forever

[root@opens1 ~(keystone_admin)]# ip netns exec qdhcp-5edea9b1-feb9-4502-8d58-7aa75260c695 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: tap82a8d211-26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:e0:68:3b brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global tap82a8d211-26
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fee0:683b/64 scope link
valid_lft forever preferred_lft forever

