Tag Archives: neutron

Difference between neutron LBaaS v1 and LBaaS v2?

LBaaS v2 is not a new topic anymore; most customers are switching to it from LBaaS v1. I have written blog posts in the past about configuring both; in case you missed them, they are located at LBaaSv1 , LBaaSv2

Still, in Red Hat Openstack there is no HA functionality for the load balancer itself: if your load balancer service is running on a controller node in an HA setup and that node goes down, we have to fix things manually. There are some articles on the internet that make LBaaS HA work using workarounds, but I have never tried them.

In this post I am going to show the improvements of LBaaS v2 over LBaaS v1. I will also shed some light on the Octavia project, which can provide HA capabilities for the load balancing service; it is basically used for Elastic Load Balancing.

Let’s start with a comparison of LBaaS v2 and LBaaS v1.

LBaaS v1 provided capabilities like :

  • L4 Load balancing
  • Session persistence, including cookie-based persistence
  • Cookie insertion
  • Driver interface for 3rd parties.

Basic flow of the request in lbaas v1 :

Request —> VIP —> Pool [Optional Health Monitor] —> Members [Backend instances]


Missing features :

  • L7 Content switching [IMP feature]
  • Multiple TCP ports per load balancer
  • TLS Termination at load balancer to avoid the load on instances.
  • Load balancer running inside instances.

LBaaS v2 was introduced in the Kilo release; at that time it lacked features like L7 content switching, pool sharing, and single-create LB [creating the load balancer in a single API call]. These arrived in Liberty, except pool sharing, which was introduced in Mitaka.

Basic flow of the request in lbaas v2 :

Request —> VIP —> Listeners –> Pool [Optional Health Monitor] —> Members [Backend instances]


Let’s see which components/changes in the newer version make the missing features available :

  1. L7 Content switching

Why do we need this feature :

A layer 7 load balancer consists of a listener that accepts requests on behalf of a number of back-end pools and distributes those requests based on policies that use application data to determine which pools should service any given request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one group of back-end servers (pool) can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

This feature is introduced by adding an additional component, the “listener”, to the LBaaS v2 architecture. We can create policies and then attach rules to a policy to get L7 load balancing. A very informative article about L7 content switching is available at link ; it covers a lot of practical scenarios.
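For reference, here is a minimal sketch of wiring up an L7 policy with the neutron LBaaS v2 CLI; the listener and pool names (listener1, static-pool) are made up for illustration:

# neutron lbaas-l7policy-create --name redirect-static --listener listener1 --action REDIRECT_TO_POOL --redirect-pool static-pool

# neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value /static redirect-static

Requests whose path starts with /static are then served from static-pool, while everything else goes to the listener’s default pool.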

2. Multiple TCP ports per load balancer

In LBaaS v1 we could have only one TCP port, like 80 or 443, at the load balancer associated with the VIP (Virtual IP); we could not have two ports/protocols associated with the VIP, which means you could load balance either HTTP traffic or HTTPS, not both. This limit has been lifted in LBaaS v2, as we can now have multiple ports associated with a single VIP.

It can be done with pool sharing or without pool sharing; a sketch of the listener commands follows the diagrams below.

With pool sharing :

[Diagram: multiple listeners sharing one pool]

Without Pool Sharing :

[Diagram: a separate pool per listener]
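Here is a rough sketch (assuming the neutron LBaaS v2 CLI; the load balancer, listener, and subnet names are illustrative) of attaching two listeners, HTTP and HTTPS, to the same load balancer and hence the same VIP:

# neutron lbaas-loadbalancer-create --name lb1 private-subnet

# neutron lbaas-listener-create --name http-listener --loadbalancer lb1 --protocol HTTP --protocol-port 80

# neutron lbaas-listener-create --name https-listener --loadbalancer lb1 --protocol HTTPS --protocol-port 443

Each listener then points at its own pool, or, with pool sharing, at the same pool, which is what the two diagrams above illustrate.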

3. TLS Termination at load balancer to avoid the load on instances.

We can terminate TLS at the load balancer level instead of at the backend servers. This reduces the load on the backend servers, and it also allows L7 content switching to work, since the load balancer sees the decrypted traffic when termination is done there. Barbican containers are used to hold the certificate and key for termination at the load balancer level.
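A hedged sketch, based on the Liberty-era TLS termination guide, of what this can look like; the secret/container names and the <…> references are illustrative, and the exact flag names may differ slightly between releases:

# barbican secret store --name server-cert --payload-content-type='text/plain' --payload="$(cat server.crt)"

# barbican secret store --name server-key --payload-content-type='text/plain' --payload="$(cat server.key)"

# barbican secret container create --name tls-container --type certificate --secret "certificate=<cert-secret-ref>" --secret "private_key=<key-secret-ref>"

# neutron lbaas-listener-create --name https-term --loadbalancer lb1 --protocol TERMINATED_HTTPS --protocol-port 443 --default-tls-container-ref <tls-container-ref>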

4. Load balancer running inside instances.

I have not seen this implemented without Octavia, which uses “amphora” instances to run the load balancer.

Important : Both load balancer versions can’t be run simultaneously.

As promised at the beginning of the article, let’s see what capabilities Octavia adds on top of LBaaS v2.

Here is the architecture of Octavia :

[Diagram: Octavia architecture]

The Octavia API lacks an authentication facility, hence it accepts API calls from neutron instead of exposing its APIs directly.

As I mentioned earlier, with Octavia the load balancer runs inside nova instances, hence it needs to communicate with components like nova and neutron to spawn the instances in which the load balancer [haproxy] runs. Okay, what else is required before those instances can be spawned (a rough example follows the list below) :

  • Create amphora disk image using OpenStack diskimage-builder.
  • Create a Nova flavor for the amphorae.
  • Add amphora disk image to glance.
  • Tag the above glance disk image with ‘amphora’.
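A rough sketch of those preparation steps, assuming the diskimage-create.sh script shipped in the Octavia source tree and the glance v2 / nova CLIs of that era; the image and flavor names are illustrative:

# git clone https://git.openstack.org/openstack/octavia

# cd octavia/diskimage-create && ./diskimage-create.sh -o amphora-x64-haproxy.qcow2

# glance image-create --name amphora-x64-haproxy --disk-format qcow2 --container-format bare --tag amphora --file amphora-x64-haproxy.qcow2

# nova flavor-create m1.amphora auto 1024 10 1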

But now the amphora instance becomes a single point of failure, and its capacity to handle load is limited. From the Mitaka release onwards we can run a single load balancer replicated across two instances in active/passive mode, exchanging heartbeats using VRRP. If one instance goes down, the other takes over serving the load balancer.
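If I read the Octavia configuration options correctly, this active/standby behaviour is selected through the load balancer topology option in octavia.conf, roughly as below (treat the exact section and option name as an assumption):

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY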

So what’s the major advantage of Octavia? Here comes the term Elastic Load Balancing (ELB). Currently a VIP is associated with a single load balancer, a 1:1 relation, but with ELB the relation between the VIP and load balancers is 1:N: the VIP distributes the incoming traffic over a pool of “amphora” instances.

In ELB, traffic is distributed at two levels :

  1. VIP to pool of amphora instances.
  2. amphora instances to back-end instances.

We can also use Heat orchestration with Ceilometer alarms to manage the number of instances in the ‘amphora’ pool.

Combining the power of the pool of amphora instances with failover, we can have a robust N+1 topology in which any VM from the amphora pool that fails is replaced by a standby VM.

 

I hope this article shed some light on the jargon of the neutron LBaaS world 🙂


Step by step: configuring OpenStack Neutron LBaaS in a packstack setup

In this article, I am going to show the procedure of creating LbaaSv1 load balancer in packstack setup using two instances.

First of all, I didn’t find any image with the HTTP package in it, hence I created my own Fedora 22 image with httpd and the cloud packages [cloud-utils, cloud-init] installed.

If you do not install the cloud packages, you will face issues while spawning the instances: for example, routes will not be configured in the instance and eventually you will not be able to reach it.

Step 1 : Download a Fedora 22 ISO and launch a KVM guest using that ISO. Install httpd and the cloud packages in it.
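Inside the VM, the installation was roughly the following (Fedora 22 uses dnf; the package list is the one mentioned above):

# dnf install -y httpd cloud-init cloud-utils

# systemctl enable httpd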

Step 2 : Power off the KVM guest and locate the qcow2 disk corresponding to it using the below command.

# virsh domblklist myimage

myimage is the KVM guest name.

Step 3 : Reset the image so that it becomes clean for use in an openstack environment.

# virt-sysprep -d myimage

Step 4 : Use the qcow2 path found in Step 2 to compress the qcow2 image.

# ls -lsh /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2
1.8G -rw------- 1 qemu qemu 8.1G Mar 25 11:56 /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2

# virt-sparsify --compress /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2 fedora22.qcow2

# ll -lsh fedora22.qcow2
662M -rw-r--r-- 1 root root 664M Mar 25 11:59 fedora22.qcow2

Notice the difference before and after compression. Upload this image to glance.

Step 5 : Spawn two instances, web1 and web2. While spawning them I am injecting an index.html file containing web1 and web2 respectively.

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index1.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web1

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index2.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web2

Note : I have created a new security group lbsg to allow HTTP/HTTPS traffic
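The lbsg group was created roughly like this; the SSH rule is optional and only there to allow logging in later:

# neutron security-group-create lbsg

# neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 lbsg

# neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 443 --port-range-max 443 lbsg

# neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 lbsg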

Step 6 : Once the instances are spawned, you need to log in to each instance and restore the SELinux context of the index.html file. If you want, you can disable SELinux in Step 1 itself to avoid this step.

# ip netns exec qdhcp-9ec24eff-f470-4d4e-8c23-9eeb41dfe749 ssh root@10.10.1.17

# restorecon -Rv /var/www/html/index.html

Step 7 : Create a pool which distributes the traffic in a ROUND_ROBIN manner.

# neutron lb-pool-create --name lb1 --lb-method ROUND_ROBIN --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2

Note : This pool and the instances are created on the internal network.
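The request flow earlier lists the health monitor as optional; if you want one, a rough sketch with the LBaaS v1 CLI looks like this (the monitor UUID passed to the associate command comes from the create output):

# neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3

# neutron lb-healthmonitor-associate <healthmonitor-id> lb1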

Step 8 : Add the two instances as members of the pool.

# neutron lb-member-create --address 10.10.1.17 --protocol-port 80 lb1

# neutron lb-member-create --address 10.10.1.18 --protocol-port 80 lb1

Step 9 : Create a virtual IP on the internal network. A port will be created corresponding to the virtual IP; we will be attaching the floating IP to that port.

# neutron lb-vip-create --name lb1-vip --protocol-port 80 --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2 lb1

Step 10 : Attach the floating IP to the newly created port.

# neutron floatingip-associate 09bdbe29-fa85-4110-8dd2-50d274412d8e 25b892cb-44c3-49e2-88b3-0aec7ec8a026
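For reference, the first UUID above is the floating IP and the second is the port created for the VIP; they can be looked up roughly like this (external_network is an illustrative name for the external network):

# neutron floatingip-create external_network

# neutron lb-vip-show lb1-vip | grep port_id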

Step 11 : LbaaS also creates a new namespace.

# ip netns list
qlbaas-b8daa41a-3e2a-408e-862b-20d3c52b1764
qrouter-5f7f711c-be0a-4dd0-ba96-191ef760cef7
qdhcp-9ec24eff-f470-4d4e-8c23-9eeb41dfe749

# ip netns exec qlbaas-b8daa41a-3e2a-408e-862b-20d3c52b1764 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
23: tap25b892cb-44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ae:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.10.1.19/24 brd 10.10.1.255 scope global tap25b892cb-44
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feae:b2a/64 scope link
valid_lft forever preferred_lft forever

Step 12 : In my case the floating IP was 192.168.122.3. I ran curl against that IP, and it confirmed that responses come from both members of the pool in a ROUND_ROBIN manner.

# for i in {1..5} ; do curl  192.168.122.3 ; done

web1
web2
web1
web2
web1

Flat Provider network with OVS

In this article, I am going to show the configuration of a flat provider network. It helps to avoid NAT, which in turn improves performance. Most importantly, the compute node can reach the external world directly, skipping the network node.

I referred to the below link for the configuration and for understanding the setup.

http://docs.openstack.org/liberty/networking-guide/scenario-provider-ovs.html

I am showing the setup from a packstack all-in-one deployment.

Step 1 : As we are not going to use any tenant network here, I left tenant_network_types blank. flat is mentioned in type_drivers as my external network is of the flat type. If you are using a VLAN provider network, you can replace it accordingly.

egrep -v "^(#|$)" /etc/neutron/plugin.ini
[ml2]
type_drivers = flat
tenant_network_types =
mechanism_drivers =openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True

I will be creating a network with the name external, hence I mentioned the same in flat_networks. Comment out the default vxlan settings.

Step 2 : Our ML2 plugin file is configured; now it’s the turn of the openvswitch configuration file.

As I will be creating a network with the name external, I mentioned the same in bridge_mappings. br-ex is the external bridge to which the port (interface) is assigned. I have disabled tunneling.

egrep -v "^(#|$)" /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = False
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.168.122.163
bridge_mappings = external:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Step 3 : Creating external network.

[root@allinone7 ~(keystone_admin)]# neutron net-create external1 --shared --provider:physical_network external --provider:network_type flat
Created a new network:
+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 6960a06c-5352-419f-8455-80c4d43dedf8 |
| name                      | external1                            |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | a525deb290124433b80996d4f90b42ba     |
+—————————+————————————–+

As I am using the flat network type, I mentioned the same for network_type; if your external network is a VLAN provider network, you need to add one more parameter, the segmentation ID (a sketch follows below). It’s important to use the same physical_network name which you have used in the Step 1 and Step 2 configuration files.
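For a VLAN provider network, the Step 1 file would carry vlan in type_drivers and a range such as network_vlan_ranges = external:100:200 under [ml2_type_vlan] (the range and the VLAN ID below are only examples), and the network create would look roughly like this:

# neutron net-create external1 --shared --provider:physical_network external --provider:network_type vlan --provider:segmentation_id 100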

Step 4 : Creating subnet. My external network is 192.168.122.0/24
[root@allinone7 ~(keystone_admin)]# neutron net-list
+————————————–+———–+———+
| id                                   | name      | subnets |
+————————————–+———–+———+
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 |         |
+————————————–+———–+———+

[root@allinone7 ~(keystone_admin)]# neutron subnet-create external1 192.168.122.0/24 --name external1-subnet --gateway 192.168.122.1
Created a new subnet:
+——————-+——————————————————+
| Field             | Value                                                |
+——————-+——————————————————+
| allocation_pools  | {“start”: “192.168.122.2”, “end”: “192.168.122.254”} |
| cidr              | 192.168.122.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.122.1                                        |
| host_routes       |                                                      |
| id                | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | external1-subnet                                     |
| network_id        | 6960a06c-5352-419f-8455-80c4d43dedf8                 |
| tenant_id         | a525deb290124433b80996d4f90b42ba                     |
+——————-+——————————————————+
[root@allinone7 ~(keystone_admin)]# neutron net-list
+————————————–+———–+——————————————————-+
| id                                   | name      | subnets                                               |
+————————————–+———–+——————————————————-+
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7 192.168.122.0/24 |
+————————————–+———–+——————————————————-+

Step 5 : Spawn the instance using the "external1" network directly.
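The boot command looks roughly like this; the flavor and image here are illustrative, while the net-id is the external1 network created in Step 3:

# nova boot --flavor m1.small --image <image-id> --nic net-id=6960a06c-5352-419f-8455-80c4d43dedf8 test-instance1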

[root@allinone7 ~(keystone_admin)]# nova list
+————————————–+—————-+——–+————+————-+————————-+
| ID                                   | Name           | Status | Task State | Power State | Networks                |
+————————————–+—————-+——–+————+————-+————————-+
| 36934762-5769-4ac1-955e-fb475b8f6a76 | test-instance1 | ACTIVE | –          | Running     | external1=192.168.122.4 |
+————————————–+—————-+——–+————+————-+————————-+

You will be able to connect to this instance directly.

How to use plotnetcfg to solve the neutron jumble?

Recently, I came across the tool “plotnetcfg”, which was included in RHEL OSP 7 [Kilo] to make our life easier while working on neutron issues.

You need to install two packages to use that tool.

# yum install -y plotnetcfg
# yum install -y graphviz

After installing these packages, issue the below command :

# plotnetcfg | dot -Tpdf > file1.pdf

Boom, open the pdf file and you will get the whole picture of the network configuration in the openstack environment.

I ran the same on my all-in-one openstack setup. When I ran the above commands, the network interfaces below were present on my node.

--> One instance was in running state.

[root@allinone ~(keystone_admin)]# virsh list
Id    Name                           State
—————————————————-
3     instance-00000055              running

[root@allinone ~(keystone_admin)]# virsh domiflist 3
Interface  Type       Source     Model       MAC
——————————————————-
tapf06383c5-03 bridge     qbrf06383c5-03 virtio      fa:16:3e:b5:ef:93

[root@allinone ~(keystone_admin)]# brctl show
bridge name    bridge id        STP enabled    interfaces
qbrf06383c5-03        8000.027cd0a39e14    no        qvbf06383c5-03
tapf06383c5-03

--> Three network namespaces were present. Among these, one was internal only and one was routed to the external network.

[root@allinone ~(keystone_admin)]# ip netns list
qrouter-a379e8d6-618f-4799-969f-4d3e24805497
qdhcp-a8d2e131-b917-4b71-888b-8e888ed66446
qdhcp-b67a60d1-0a82-4a87-9d2d-ea695bc0cd2f

--> ens3 was the physical interface which was plumbed to br-ex.

 

I have shown the example output at below link :

https://drive.google.com/open?id=0B7F4NEbnRvYidy04RlhwMWRPbnM

Also, we can run the same on a collected sosreport as well.

plotnetcfg --ovs-db=sos_commands/openvswitch/ovsdb-client_dump | dot -Tpdf > file2.pdf

However, it doesn’t look as accurate when run on a sosreport. It is possible that I have chosen the wrong file; I need to look into it.

https://drive.google.com/open?id=0B7F4NEbnRvYiV0pKTnJJNWRtT1k

How to track the networking of an instance in openstack?

In this article, I am going to show what happens from the neutron perspective when we create an instance on a single-node openstack (packstack) deployment.

Step 1 : I took a backup of the ovs-vsctl show output before creating an instance.

[root@opens1 ~(keystone_admin)]# ovs-vsctl show >> /tmp/before.txt

Step 2 : Once the instance was in active state, I took another backup in a new file.

[root@opens1 ~(keystone_admin)]# ovs-vsctl show >> /tmp/after_private.txt

Step 3 : Associated the floating IP with the instance and took another backup in a new file.

[root@opens1 ~(keystone_admin)]# ovs-vsctl show >> /tmp/after_public.txt

Checking the state of instance.

[root@opens1 ~(keystone_admin)]# virsh list
Id    Name                           State
—————————————————-
2     instance-00000005              running

[root@opens1 ~(keystone_admin)]# virsh domiflist 2
Interface  Type       Source     Model       MAC
——————————————————-
tape3702b67-3e bridge     qbre3702b67-3e virtio      fa:16:3e:92:ad:3a

Step 4 : Checking the differences between the backups.

a) First, comparing the "/tmp/before.txt" and "/tmp/after_private.txt" outputs.

[root@opens1 ~(keystone_admin)]# diff /tmp/before.txt /tmp/after_private.txt
15a16,18
>         Port “qvoe3702b67-3e”
>             tag: 1
>             Interface “qvoe3702b67-3e”
[root@opens1 ~(keystone_admin)]# diff /tmp/before.txt /tmp/after_public.txt

b) No difference is present between /tmp/after_private.txt and /tmp/after_public.txt. Assigning a public IP to the instance has not changed anything at the OVS level.

[root@opens1 ~(keystone_admin)]# diff /tmp/after_private.txt /tmp/after_public.txt

Step 5 : Let’s dig deeper into the difference we saw in the output of Step 4 (a).

a) When we create a new instance, a new Linux bridge (qbr) is created, which is connected to the OVS integration bridge using a qvb/qvo veth pair.

[root@opens1 ~(keystone_admin)]# brctl show
bridge name    bridge id        STP enabled    interfaces
qbre3702b67-3e        8000.2eb4721ca5aa    no        qvbe3702b67-3e
tape3702b67-3e

The path from the physical host to the instance’s ethernet interface will look like this:

ens3 --> br-ex (phy-br-ex) --> (int-br-ex) br-int (qvoe3702b67-3e) --> (qvbe3702b67-3e) qbre3702b67-3e (tape3702b67-3e) --> eth0 (instance)
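If you want to confirm that the qvo/qvb interfaces really form a veth pair, ethtool can show the peer’s interface index (assuming ethtool is available; the printed index is then matched against ip link output):

# ethtool -S qvbe3702b67-3e | grep peer_ifindex

# ip link show | grep "^<peer-ifindex>:"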

b) Let’s check the MAC address of the interface assigned to the instance. The same was verified using the command “virsh domiflist 2”.

# ip a | awk '/eth0/ {getline var1; print $0,var1}'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000     link/ether fa:16:3e:92:ad:3a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.7/24 brd 10.0.0.255 scope global eth0     inet6 fe80::f816:3eff:fe92:ad3a/64 scope link

Step 6 : The bridge created after launching the instance is connected to br-int, hence we check the ports of br-int to find the MAC addresses associated with them.

a) Checking the ports which are present on br-int.

[root@opens1 images(keystone_admin)]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000fa5710368941
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
3(int-br-ex): addr:72:64:40:3c:94:9e
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
4(patch-tun): addr:f6:a3:bf:f8:56:d7
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
5(tap8885a021-43): addr:00:00:00:00:00:00
config:     PORT_DOWN
state:      LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
8(qr-ae7f75fa-85): addr:00:00:00:00:00:00
config:     PORT_DOWN
state:      LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
9(qvoe3702b67-3e): addr:0a:ba:a4:51:fd:c9             <<<< Other end of the veth pair connected to the bridge (qbre3702b67-3e), as shown in Step 5 (a).
config:     0
state:      0
current:    10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max
LOCAL(br-int): addr:fa:57:10:36:89:41
config:     PORT_DOWN
state:      LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

b) Checking the MAC addresses registered with the ports.

[root@opens1 images(keystone_admin)]# ovs-appctl fdb/show br-int
port  VLAN  MAC                Age
8     1  fa:16:3e:16:0f:e4  198
9     1  fa:16:3e:92:ad:3a  198                   <<<< Mac address of the instance from 5(b) output. (fa:16:3e:92:ad:3a)

Now we know that our instance’s MAC address is registered against port 9 on br-int.

[root@opens1 images(keystone_admin)]# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=15807.166s, table=0, n_packets=1020, n_bytes=106714, idle_age=381, priority=1 actions=NORMAL
cookie=0x0, duration=15807.041s, table=0, n_packets=611, n_bytes=26785, idle_age=411, priority=2,in_port=3 actions=drop
cookie=0x0, duration=15807.159s, table=23, n_packets=0, n_bytes=0, idle_age=15807, priority=0 actions=drop

Step 7 : If you want to check the connectivity of br-int with br-ex, you can map it with the help of Step 5 (a).

a) Issue the below command to list the status of the external bridge (br-ex) to which our interface is connected.

[root@opens1 images(keystone_admin)]# ovs-ofctl show br-ex
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000c66a17f6fa42
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
1(phy-br-ex): addr:de:49:06:e9:c0:a1                                                                  <<<<<<<<<<<<<< This is connected to Port 3 of br-int. Refer the output 6(a) ==> 3(int-br-ex)
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(ens3): addr:52:54:00:fe:1c:36                                                                       <<<<<< Physical ethernet connected directly to br-ex.
config:     0
state:      0
current:    100MB-FD AUTO_NEG
advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD COPPER AUTO_NEG AUTO_PAUSE
supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD COPPER AUTO_NEG
speed: 100 Mbps now, 100 Mbps max
5(qg-6f3dc69b-8e): addr:00:00:00:00:00:00                                                        <<<<<< Router gateway port (qg-) belonging to the qrouter namespace.
config:     PORT_DOWN
state:      LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-ex): addr:c6:6a:17:f6:fa:42
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

b) Checking the namespaces. (Miscellaneous)

[root@opens1 images(keystone_admin)]# ip netns list
qdhcp-42ef38e7-2b55-477c-bafc-3cd5f267e826
qrouter-6f3070e7-ea2a-478e-9e66-c74017a2f749

c) Issuing the command in namespaces.

[root@opens1 images(keystone_admin)]# ip netns exec qrouter-6f3070e7-ea2a-478e-9e66-c74017a2f749 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
11: qr-ae7f75fa-85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:16:0f:e4 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-ae7f75fa-85
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe16:fe4/64 scope link
valid_lft forever preferred_lft forever
12: qg-6f3dc69b-8e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:f3:ed:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.100/24 brd 192.168.122.255 scope global qg-6f3dc69b-8e
valid_lft forever preferred_lft forever
inet 192.168.122.101/32 brd 192.168.122.101 scope global qg-6f3dc69b-8e
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fef3:ed09/64 scope link
valid_lft forever preferred_lft forever

[root@opens1 images(keystone_admin)]# ip netns exec qdhcp-42ef38e7-2b55-477c-bafc-3cd5f267e826 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: tap8885a021-43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:a7:d7:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global tap8885a021-43
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fea7:d7c1/64 scope link
valid_lft forever preferred_lft forever

d) Checking the status of the tunnel bridge (br-tun).

[root@opens1 images(keystone_admin)]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ce1b35779b4c
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
2(patch-int): addr:e6:38:3b:ee:b6:52
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-tun): addr:ce:1b:35:77:9b:4c
config:     PORT_DOWN
state:      LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

How to create a new private subnet in openstack neutron?

In the previous article, I showed the installation of an all-in-one openstack setup using packstack. In this article, I am going to create a new private subnet, launch a new instance on it, and verify the connectivity between the instances on the old and new private subnets.

Step 1 : Creating new private network with name private1.

[root@opens1 ~(keystone_admin)]# neutron net-create private1
Created a new network:
+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 18a2e61c-f7ca-4701-b408-f9f5e03f0def |
| name                      | private1                             |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 41f0f6e665dc4e059288283b3b7595cc     |
+—————————+————————————–+

Step 2 : Creating subnet for the private1 network.

[root@opens1 ~(keystone_admin)]# neutron subnet-create private1 20.0.0.0/24 --name private1

[root@opens1 ~(keystone_admin)]# neutron net-show private1
+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 18a2e61c-f7ca-4701-b408-f9f5e03f0def |
| name                      | private1                             |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | f242b4bd-0f7b-4bd1-a110-138ab78fedc5 |
| tenant_id                 | 41f0f6e665dc4e059288283b3b7595cc     |
+—————————+————————————–+

Step 3 : Added the same router as the default gateway for this private network as well.

[root@opens1 ~(keystone_admin)]# neutron router-interface-add router1 private1
Added interface 9d10485b-c6d8-4f0e-90a2-336a244ca12a to router router1.

Step 4 : Checked the status of all subnets.

[root@opens1 ~(keystone_admin)]# neutron subnet-list
+————————————–+—————+——————+——————————————————–+
| id                                   | name          | cidr             | allocation_pools                                       |
+————————————–+—————+——————+——————————————————–+
| fd6bd388-0f30-48a8-b2b6-78a1faf71df5 | public_subnet | 192.168.100.0/24 | {“start”: “192.168.100.210”, “end”: “192.168.100.220”} |
| 7bdbaf8a-98dd-4c9e-bd28-a94b812e1240 | private       | 10.0.0.0/24      | {“start”: “10.0.0.2”, “end”: “10.0.0.254”}             |
| f242b4bd-0f7b-4bd1-a110-138ab78fedc5 | private1      | 20.0.0.0/24      | {“start”: “20.0.0.2”, “end”: “20.0.0.254”}             |
+————————————–+—————+——————+——————————————————–+

Step 5 : Checking the router namespace for the new qr-<> device which appears corresponding to the newly created subnet.

[root@opens1 ~(keystone_admin)]# ip netns exec  qrouter-a1de2f04-5bbc-45da-a48a-f51204df62e5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: qg-b137e98c-38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:fe:da:a0 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.210/24 brd 192.168.100.255 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet 192.168.100.211/32 brd 192.168.100.211 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fefe:daa0/64 scope link
valid_lft forever preferred_lft forever
15: qr-079322e0-ff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:b3:72:54 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-079322e0-ff
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb3:7254/64 scope link
valid_lft forever preferred_lft forever
34: qr-9d10485b-c6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:56:07:7a brd ff:ff:ff:ff:ff:ff
inet 20.0.0.1/24 brd 20.0.0.255 scope global qr-9d10485b-c6
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe56:77a/64 scope link
valid_lft forever preferred_lft forever

Step 6 : Spawned the new instance using private1. We can see two instances in the below output: one on the old private network and the second on the new private network.

[root@opens1 ~(keystone_admin)]# nova floating-ip-list
+—————–+————————————–+———-+——————+
| Ip              | Server Id                            | Fixed Ip | Pool             |
+—————–+————————————–+———-+——————+
| 192.168.100.213 | 7964b485-51f1-4145-8200-4d15205d7616 | 20.0.0.2 | external_network |
| 192.168.100.211 | 0e5b9008-c76c-4152-9f49-e757fc5b402d | 10.0.0.3 | external_network |
+—————–+————————————–+———-+——————+

Step 7 : We can see that the floating IPs of both instances are assigned on a single interface in the qrouter namespace.

[root@opens1 ~(keystone_admin)]# ip netns exec  qrouter-a1de2f04-5bbc-45da-a48a-f51204df62e5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: qg-b137e98c-38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:fe:da:a0 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.210/24 brd 192.168.100.255 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet 192.168.100.211/32 brd 192.168.100.211 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet 192.168.100.213/32 brd 192.168.100.213 scope global qg-b137e98c-38
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fefe:daa0/64 scope link
valid_lft forever preferred_lft forever
15: qr-079322e0-ff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:b3:72:54 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-079322e0-ff
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb3:7254/64 scope link
valid_lft forever preferred_lft forever
34: qr-9d10485b-c6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:56:07:7a brd ff:ff:ff:ff:ff:ff
inet 20.0.0.1/24 brd 20.0.0.255 scope global qr-9d10485b-c6
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe56:77a/64 scope link
valid_lft forever preferred_lft forever

Step 8 : Logging into the test1 instance and checking the IP address configuration. I am able to ping the private1 network IP assigned to the second instance, i.e. test2.

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:09:c9:9f brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
inet6 fe80::f816:3eff:fe09:c99f/64 scope link
valid_lft forever preferred_lft forever

ping # ping 20.0.0.2
PING 20.0.0.2 (20.0.0.2): 56 data bytes
64 bytes from 20.0.0.2: seq=0 ttl=63 time=24.387 ms
64 bytes from 20.0.0.2: seq=1 ttl=63 time=1.515 ms
64 bytes from 20.0.0.2: seq=2 ttl=63 time=0.941 ms
^C
— 20.0.0.2 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.941/8.947/24.387 ms

# traceroute 20.0.0.2
traceroute to 20.0.0.2 (20.0.0.2), 30 hops max, 46 byte packets
1  host-10-0-0-1.openstacklocal (10.0.0.1)  1.423 ms  0.750 ms  0.887 ms
2  20.0.0.2 (20.0.0.2)  2.718 ms  1.086 ms  0.706 ms

In this case both private networks were connected to the same router, hence we were able to reach them. But in a typical cloud environment, to provide isolation between tenants, private networks are connected to separate routers.
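For completeness, giving a tenant network its own router instead would look roughly like this (router2 is an illustrative name; external_network is the external network seen in the floating IP listing above):

# neutron router-create router2

# neutron router-gateway-set router2 external_network

# neutron router-interface-add router2 private1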