Command cheat sheet for vagrant

A few months back, I wrote an article on Vagrant usage which gives a very brief introduction to Vagrant.

https://ervikrant06.wordpress.com/2015/08/26/how-to-use-vagrant-to-create-vm-part-1/

I have started learning Ansible, so I need to revisit the basic concepts of Vagrant to speed up my Ansible learning. In this post, I am just going to list the commands and code snippets which I use most of the time:

Commands :

1) We already know that we can create a Vagrantfile using "vagrant init". To re-create the file in the same directory (force overwrite) and without any comments (minimal):

# vagrant init -f -m

2) Creating the Vagrantfile while specifying the name of the box.

# vagrant init -f rhel-7.2

You need to replace the name with your box name.

3) To check the global status of all Vagrant machines.

# vagrant global-status

4) Checking the port mappings between the host and the Vagrant guest.

# vagrant port <guest name>

This command doesn't work with the libvirt provider; I have used it with the VirtualBox provider.

5) Commands to suspend, shut down, or destroy the Vagrant environment.

# vagrant suspend
# vagrant halt
# vagrant destroy

Code tips :

1) Enabling port forwarding and running a provisioning script while bringing up the Vagrant box.

Vagrant.configure("2") do |config|
  config.vm.box = "rhel-7.2"
  config.vm.provision :shell, path: "bootstrap.sh"
  config.vm.network :forwarded_port, guest: 80, host: 4567
end

The above stanza can be changed if we want to avoid port conflicts while spawning multiple instances; auto_correct lets Vagrant pick another host port when 4567 is already taken.

Vagrant.configure("2") do |config|
  config.vm.box = "rhel-7.2"
  config.vm.provision :shell, path: "bootstrap.sh"
  config.vm.network :forwarded_port, guest: 80, host: 4567,
    auto_correct: true
end

2) The bootstrap.sh script runs only once, when the Vagrant instance is first brought up. To run that script again:

# vagrant up --provision
# vagrant provision
# vagrant reload --provision

3) If we want to bring the Vagrant instance up without running the provisioning script:

# vagrant up --no-provision

4) Running a particular box version.

Vagrant.configure("2") do |config|
  config.vm.box = "rhel-7.2"
  config.vm.box_version = "1.1.0"
end
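
Before pinning a version, it can be useful to check which boxes and versions are already installed locally; a couple of standard Vagrant commands for that, shown as a quick sketch:

~~~
# list installed boxes along with their versions and providers
vagrant box list

# update the box used by the current Vagrantfile to its latest available version
vagrant box update
~~~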

I will keep on adding more hacks into this article.

My Neutron notes

The NAT tables of the DHCP and router namespaces were dumped before associating a floating IP with the instance (the *.after.txt files were captured once the floating IP was assigned):

[root@allinone-7 ~(keystone_admin)]# ip netns exec qdhcp-049b58b3-716f-4445-ae24-32a23f8523dd iptables -t nat -L > /tmp/qdhcp.before.txt
[root@allinone-7 ~(keystone_admin)]# ip netns exec qrouter-65ba96d9-decb-4494-badb-68e300074d73 iptables -t nat -L > /tmp/qrouter.before.txt

diff /tmp/qrouter.before.txt /tmp/qrouter.after.txt
19a20
> DNAT       all  --  anywhere             192.168.122.4        to:10.10.1.15
28a30
> DNAT       all  --  anywhere             192.168.122.4        to:10.10.1.15
32a35
> SNAT       all  --  unused               anywhere             to:192.168.122.4

Physical machine : 192.168.122.124   52:5f:04:f2:18:41
tap interface    : 10.10.1.2         fa:16:3e:0f:d8:af
qg interface     : 192.168.122.4     fa:16:3e:96:f6:13
qr interface     : 10.10.1.1         fa:16:3e:62:46:55
Instance         : 10.10.1.15        fa:16:3e:df:0f:b9
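
For reference, the namespace-side addresses and MACs above can be collected with commands along these lines (the namespace IDs are the ones from this setup):

~~~
# interfaces inside the router and DHCP namespaces
ip netns exec qrouter-65ba96d9-decb-4494-badb-68e300074d73 ip addr show
ip netns exec qdhcp-049b58b3-716f-4445-ae24-32a23f8523dd ip addr show
~~~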

Scenario 1 : Assigning a floating IP to the instance and pinging that floating IP from the base machine.

Created a br-int mirror port (br-int-snooper0) to capture traffic, referring to the Red Hat KCS article [1].
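
A rough sketch of how such a mirror can be set up (the snooper interface name matches the captures below; the exact commands in the KCS article may differ):

~~~
# create a dummy interface to receive the mirrored traffic and attach it to br-int
ip link add name br-int-snooper0 type dummy
ip link set dev br-int-snooper0 up
ovs-vsctl add-port br-int br-int-snooper0

# mirror all br-int traffic to the snooper port
ovs-vsctl -- --id=@p get port br-int-snooper0 \
          -- --id=@m create mirror name=mirror0 select-all=true output-port=@p \
          -- set bridge br-int mirrors=@m
~~~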

# tcpdump -s0 -i br-int-snooper0 -w /tmp/br-int.pcap  &
# ip netns exec qdhcp-049b58b3-716f-4445-ae24-32a23f8523dd tcpdump -s0 -i tap9d746101-ac -w /tmp/tap-interface.pcap &
# ip netns exec qrouter-65ba96d9-decb-4494-badb-68e300074d73 tcpdump -s0 -i qr-b2f794eb-7c -w /tmp/qr-interface.pcap &
# ip netns exec qrouter-65ba96d9-decb-4494-badb-68e300074d73 tcpdump -s0 -i qg-03f40a0b-5f -w /tmp/qg-interface.pcap &
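
With all four captures running, the ICMP traffic analysed below was generated from the base machine, roughly:

~~~
# ping the floating IP of the instance from the physical host (192.168.122.124)
ping -c3 192.168.122.4
~~~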

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/tap-interface.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
41 2016-05-03 12:27:09 192.168.122.124 -> 10.10.1.15   ICMP 98 Echo (ping) request  id=0x720a, seq=1/256, ttl=63

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
2 2016-05-03 12:27:09 192.168.122.124 -> 192.168.122.4 ICMP 98 Echo (ping) request  id=0x720a, seq=1/256, ttl=64
3 2016-05-03 12:27:09 192.168.122.4 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=1/256, ttl=63 (request in 2)
4 2016-05-03 12:27:10 192.168.122.124 -> 192.168.122.4 ICMP 98 Echo (ping) request  id=0x720a, seq=2/512, ttl=64
5 2016-05-03 12:27:10 192.168.122.4 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=2/512, ttl=63 (request in 4)
6 2016-05-03 12:27:11 192.168.122.124 -> 192.168.122.4 ICMP 98 Echo (ping) request  id=0x720a, seq=3/768, ttl=64
7 2016-05-03 12:27:11 192.168.122.4 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=3/768, ttl=63 (request in 6)

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
192.168.122.124    192.168.122.4    52:5f:04:f2:18:41    fa:16:3e:96:f6:13
192.168.122.4    192.168.122.124    fa:16:3e:96:f6:13    52:5f:04:f2:18:41
192.168.122.124    192.168.122.4    52:5f:04:f2:18:41    fa:16:3e:96:f6:13
192.168.122.4    192.168.122.124    fa:16:3e:96:f6:13    52:5f:04:f2:18:41
192.168.122.124    192.168.122.4    52:5f:04:f2:18:41    fa:16:3e:96:f6:13
192.168.122.4    192.168.122.124    fa:16:3e:96:f6:13    52:5f:04:f2:18:41

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
103 2016-05-03 12:27:09 192.168.122.124 -> 10.10.1.15   ICMP 98 Echo (ping) request  id=0x720a, seq=1/256, ttl=63
106 2016-05-03 12:27:09   10.10.1.15 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=1/256, ttl=64 (request in 103)
109 2016-05-03 12:27:10 192.168.122.124 -> 10.10.1.15   ICMP 98 Echo (ping) request  id=0x720a, seq=2/512, ttl=63
110 2016-05-03 12:27:10   10.10.1.15 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=2/512, ttl=64 (request in 109)
113 2016-05-03 12:27:11 192.168.122.124 -> 10.10.1.15   ICMP 98 Echo (ping) request  id=0x720a, seq=3/768, ttl=63
114 2016-05-03 12:27:11   10.10.1.15 -> 192.168.122.124 ICMP 98 Echo (ping) reply    id=0x720a, seq=3/768, ttl=64 (request in 113)

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/br-int.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
181 2016-05-03 12:27:09 192.168.122.124 -> 10.10.1.15   ICMP 102 Echo (ping) request  id=0x720a, seq=1/256, ttl=63
184 2016-05-03 12:27:09   10.10.1.15 -> 192.168.122.124 ICMP 102 Echo (ping) reply    id=0x720a, seq=1/256, ttl=64 (request in 181)
187 2016-05-03 12:27:10 192.168.122.124 -> 10.10.1.15   ICMP 102 Echo (ping) request  id=0x720a, seq=2/512, ttl=63
188 2016-05-03 12:27:10   10.10.1.15 -> 192.168.122.124 ICMP 102 Echo (ping) reply    id=0x720a, seq=2/512, ttl=64 (request in 187)
191 2016-05-03 12:27:11 192.168.122.124 -> 10.10.1.15   ICMP 102 Echo (ping) request  id=0x720a, seq=3/768, ttl=63
192 2016-05-03 12:27:11   10.10.1.15 -> 192.168.122.124 ICMP 102 Echo (ping) reply    id=0x720a, seq=3/768, ttl=64 (request in 191)

[root@allinone-7 ~(keystone_admin)]# tshark -tad -n -r /tmp/br-int.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
192.168.122.124    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    192.168.122.124    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55

Scenario 2 : Pinging the external world from the instance when it has a floating IP assigned.

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/br-int_rev.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
29 2016-05-03 12:57:36   10.10.1.15 -> 8.8.8.8      ICMP 102 Echo (ping) request  id=0x6c01, seq=0/0, ttl=64
30 2016-05-03 12:57:36      8.8.8.8 -> 10.10.1.15   ICMP 102 Echo (ping) reply    id=0x6c01, seq=0/0, ttl=51 (request in 29)
33 2016-05-03 12:57:37   10.10.1.15 -> 8.8.8.8      ICMP 102 Echo (ping) request  id=0x6c01, seq=1/256, ttl=64
34 2016-05-03 12:57:37      8.8.8.8 -> 10.10.1.15   ICMP 102 Echo (ping) reply    id=0x6c01, seq=1/256, ttl=51 (request in 33)
37 2016-05-03 12:57:38   10.10.1.15 -> 8.8.8.8      ICMP 102 Echo (ping) request  id=0x6c01, seq=2/512, ttl=64
38 2016-05-03 12:57:38      8.8.8.8 -> 10.10.1.15   ICMP 102 Echo (ping) reply    id=0x6c01, seq=2/512, ttl=51 (request in 37)

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/br-int_rev.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface_rev.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
7 2016-05-03 12:57:36   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=0/0, ttl=64
8 2016-05-03 12:57:36      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6c01, seq=0/0, ttl=51 (request in 7)
9 2016-05-03 12:57:37   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=1/256, ttl=64
10 2016-05-03 12:57:37      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6c01, seq=1/256, ttl=51 (request in 9)
11 2016-05-03 12:57:38   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=2/512, ttl=64
12 2016-05-03 12:57:38      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6c01, seq=2/512, ttl=51 (request in 11)

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface_rev.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface_rev.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
1 2016-05-03 12:57:36 192.168.122.4 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=0/0, ttl=63
2 2016-05-03 12:57:36      8.8.8.8 -> 192.168.122.4 ICMP 98 Echo (ping) reply    id=0x6c01, seq=0/0, ttl=52 (request in 1)
3 2016-05-03 12:57:37 192.168.122.4 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=1/256, ttl=63
4 2016-05-03 12:57:37      8.8.8.8 -> 192.168.122.4 ICMP 98 Echo (ping) reply    id=0x6c01, seq=1/256, ttl=52 (request in 3)
5 2016-05-03 12:57:38 192.168.122.4 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6c01, seq=2/512, ttl=63
6 2016-05-03 12:57:38      8.8.8.8 -> 192.168.122.4 ICMP 98 Echo (ping) reply    id=0x6c01, seq=2/512, ttl=52 (request in 5)

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface_rev.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
192.168.122.4    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.4    52:54:00:68:9d:b5    fa:16:3e:96:f6:13
192.168.122.4    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.4    52:54:00:68:9d:b5    fa:16:3e:96:f6:13
192.168.122.4    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.4    52:54:00:68:9d:b5    fa:16:3e:96:f6:13

Scenario 3 : Pinging the external world from the instance when it does not have a floating IP assigned.

In this case the traffic is SNATed to the IP assigned to the router's gateway (qg) interface, shown after the qr-interface capture below:

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface_rev_wflp.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
9 2016-05-03 13:05:33   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=0/0, ttl=64
10 2016-05-03 13:05:33      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6d01, seq=0/0, ttl=51 (request in 9)
13 2016-05-03 13:05:34   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=1/256, ttl=64
14 2016-05-03 13:05:34      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6d01, seq=1/256, ttl=51 (request in 13)
17 2016-05-03 13:05:35   10.10.1.15 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=2/512, ttl=64
18 2016-05-03 13:05:35      8.8.8.8 -> 10.10.1.15   ICMP 98 Echo (ping) reply    id=0x6d01, seq=2/512, ttl=51 (request in 17)

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qr-interface_rev_wflp.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9
10.10.1.15    8.8.8.8    fa:16:3e:df:0f:b9    fa:16:3e:62:46:55
8.8.8.8    10.10.1.15    fa:16:3e:62:46:55    fa:16:3e:df:0f:b9

27: qg-03f40a0b-5f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:96:f6:13 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.3/24 brd 192.168.122.255 scope global qg-03f40a0b-5f

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface_rev_wflp.pcap -Y 'icmp'
Running as user “root” and group “root”. This could be dangerous.
1 2016-05-03 13:05:33 192.168.122.3 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=0/0, ttl=63
2 2016-05-03 13:05:33      8.8.8.8 -> 192.168.122.3 ICMP 98 Echo (ping) reply    id=0x6d01, seq=0/0, ttl=52 (request in 1)
3 2016-05-03 13:05:34 192.168.122.3 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=1/256, ttl=63
4 2016-05-03 13:05:34      8.8.8.8 -> 192.168.122.3 ICMP 98 Echo (ping) reply    id=0x6d01, seq=1/256, ttl=52 (request in 3)
5 2016-05-03 13:05:35 192.168.122.3 -> 8.8.8.8      ICMP 98 Echo (ping) request  id=0x6d01, seq=2/512, ttl=63
6 2016-05-03 13:05:35      8.8.8.8 -> 192.168.122.3 ICMP 98 Echo (ping) reply    id=0x6d01, seq=2/512, ttl=52 (request in 5)

[root@allinone-7 neutron(keystone_admin)]# tshark -tad -n -r /tmp/qg-interface_rev_wflp.pcap -Y 'icmp' -T fields -e ip.src -e ip.dst -e eth.src -e eth.dst
Running as user “root” and group “root”. This could be dangerous.
192.168.122.3    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.3    52:54:00:68:9d:b5    fa:16:3e:96:f6:13
192.168.122.3    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.3    52:54:00:68:9d:b5    fa:16:3e:96:f6:13
192.168.122.3    8.8.8.8    fa:16:3e:96:f6:13    52:54:00:68:9d:b5
8.8.8.8    192.168.122.3    52:54:00:68:9d:b5    fa:16:3e:96:f6:13

 

[1] https://access.redhat.com/solutions/2060413

How to configure LBaaSv2 in an OpenStack Kilo packstack setup?

In this article I am going to show the configuration of LBaaSv2 on an OpenStack Kilo packstack setup. By default the LBaaSv1 configuration is present; we have to modify some files to make LBaaSv2 work.

First of all, I suggest you refer to the presentation below to understand the difference between LBaaSv1 and LBaaSv2, most importantly slide number 9.

https://www.openstack.org/assets/Uploads/LBaaS.v2.Liberty.and.Beyond.pdf

Step 1 : Ensure that the packstack setup was installed with LBaaS enabled.

~~~

grep LBAAS /root/answer.txt
CONFIG_LBAAS_INSTALL=y

~~~

Step 2 : Make the changes below. Before making any change, I suggest you take a backup of the configuration files.

a) Changes made in /etc/neutron/neutron.conf 

~~~

diff /etc/neutron/neutron.conf /var/tmp/LBAAS_BACKUP/neutron.conf
79,80c79
< #service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
< service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

> service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

~~~

b) Changes made in /etc/neutron/neutron_lbaas.conf 

~~~

diff /etc/neutron/neutron_lbaas.conf /var/tmp/LBAAS_BACKUP/neutron_lbaas.conf
53,54c53
< #service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
< service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

> service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

~~~

c) Changes made in /etc/neutron/lbaas_agent.ini

~~~

diff /etc/neutron/lbaas_agent.ini /var/tmp/LBAAS_BACKUP/lbaas_agent.ini
31,32c31
< #device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
< device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

> device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

~~~

Step 3 : Run the below commands to activate the LBaaSv2 agent.

# neutron-db-manage --service lbaas upgrade head
# systemctl disable neutron-lbaas-agent.service
# systemctl stop neutron-lbaas-agent.service
# systemctl restart neutron-server.service
# systemctl enable neutron-lbaasv2-agent.service
# systemctl start neutron-lbaasv2-agent.service

Verify that the LBaaSv2 agent is running.
ps -ef | grep 'neutron-lbaasv2'  |grep -v grep
neutron  24609     1  0 06:01 ?        00:00:14 /usr/bin/python2 /usr/bin/neutron-lbaasv2-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /usr/share/neutron/neutron-lbaas-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-lbaasv2-agent --log-file /var/log/neutron/lbaas-agent.log

Step 4 : Creating a load balancer using LBaaSv2.

a) Create loadbalancer.

[root@allinone-7 ~(keystone_admin)]# neutron lbaas-loadbalancer-create --name Snet_test_1 9bed29a5-8cb3-436a-89fc-6ca6a8467c03
Created a new loadbalancer:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | f0513999-9b07-48c4-b8b8-645b322a0e78 |
| listeners           |                                      |
| name                | Snet_test_1                          |
| operating_status    | OFFLINE                              |
| provider            | haproxy                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 90686d89a72143179f7608cb9b6d0898     |
| vip_address         | 10.10.1.9                            |
| vip_port_id         | 6d95724a-1232-45ba-8992-7ffc1983b2b9 |
| vip_subnet_id       | 9bed29a5-8cb3-436a-89fc-6ca6a8467c03 |
+———————+————————————–+
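
The load balancer starts in PENDING_CREATE; before creating the listener it is worth confirming it has moved to ACTIVE, for example:

~~~
neutron lbaas-loadbalancer-show Snet_test_1 | grep -E 'provisioning_status|operating_status'
~~~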

b) Creating listener.

[root@allinone-7 ~(keystone_admin)]# neutron lbaas-listener-create --loadbalancer 9455e883-2fb2-49d8-8468-2b24003de808 --protocol TCP --protocol-port 80 --name Snet_test_1_80
Created a new listener:
+————————–+————————————————+
| Field                    | Value                                          |
+————————–+————————————————+
| admin_state_up           | True                                           |
| connection_limit         | -1                                             |
| default_pool_id          |                                                |
| default_tls_container_id |                                                |
| description              |                                                |
| id                       | 78bc2864-b962-4483-a287-80afe45ec6ec           |
| loadbalancers            | {“id”: “f0513999-9b07-48c4-b8b8-645b322a0e78”} |
| name                     | Snet_test_1_80                                 |
| protocol                 | TCP                                            |
| protocol_port            | 80                                             |
| sni_container_ids        |                                                |
| tenant_id                | 90686d89a72143179f7608cb9b6d0898               |
+————————–+————————————————+

c) Creating a pool under the listener.

[root@allinone-7 ~(keystone_admin)]# neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener Snet_test_1_80 --protocol TCP --name Snet_test_1_pool80
Created a new pool:
+———————+————————————————+
| Field               | Value                                          |
+———————+————————————————+
| admin_state_up      | True                                           |
| description         |                                                |
| healthmonitor_id    |                                                |
| id                  | 48d9b744-c7d5-41c0-873e-5d477a1f7853           |
| lb_algorithm        | ROUND_ROBIN                                    |
| listeners           | {“id”: “78bc2864-b962-4483-a287-80afe45ec6ec”} |
| members             |                                                |
| name                | Snet_test_1_pool80                             |
| protocol            | TCP                                            |
| session_persistence |                                                |
| tenant_id           | 90686d89a72143179f7608cb9b6d0898               |
+———————+————————————————+

d) Creating members using below commands.

~~~
# neutron lbaas-member-create --subnet 9bed29a5-8cb3-436a-89fc-6ca6a8467c03 --address 10.10.1.5 --protocol-port 80 Snet_test_1_pool80
# neutron lbaas-member-create --subnet 9bed29a5-8cb3-436a-89fc-6ca6a8467c03 --address 10.10.1.6 --protocol-port 80 Snet_test_1_pool80
~~~
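
The members can then be verified against the pool, for example:

~~~
neutron lbaas-member-list Snet_test_1_pool80
~~~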

e) It is working fine in a round-robin manner. I have used only the private range here; I am running curl from inside the DHCP namespace, hence I am able to reach the private addresses.

~~~
[root@allinone-7 ~(keystone_admin)]# ip netns exec qdhcp-049b58b3-716f-4445-ae24-32a23f8523dd bash
[root@allinone-7 ~(keystone_admin)]# for i in {1..5} ; do curl  10.10.1.9 ; done
web2
web1
web2
web1
web2
~~~

f) It is also reachable via the public IP. Let's come out of the namespace and verify the same by accessing the public IP.

~~~
[root@allinone-7 ~(keystone_admin)]# exit
[root@allinone-7 ~(keystone_admin)]# for i in {1..5} ; do curl  192.168.122.4 ; done
web1
web2
web1
web2
web1
~~~

Troubleshooting Tips :

  • Make sure the httpd service is running in the instances.
  • Ensure iptables rules are not blocking the HTTP traffic.
  • Ensure the SELinux context is correct on the created index.html file.
  • If you are not getting a response from curl using the load balancer IP, check whether you get a response using the instance IP directly (see the sketch below).
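
A quick way to test a member directly from inside the DHCP namespace (namespace ID and member IP taken from the setup above):

~~~
ip netns exec qdhcp-049b58b3-716f-4445-ae24-32a23f8523dd curl http://10.10.1.5
~~~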

Step by step configuration of OpenStack Neutron LBaaS in a packstack setup

In this article, I am going to show the procedure for creating an LBaaSv1 load balancer in a packstack setup using two instances.

First of all, I didn't find any image with the HTTP package in it, hence I created my own Fedora 22 image with httpd and the cloud packages [cloud-utils, cloud-init] installed.

If you do not install the cloud packages, you will face issues while spawning the instances: routes will not be configured in the instance and eventually you will not be able to reach it.

Step 1 : Download a Fedora 22 ISO and launch a KVM guest using that ISO. Install httpd and the cloud packages in it.

Step 2 : Power off the KVM guest and locate the qcow2 disk corresponding to it using the below command.

# virsh domblklist myimage

Here myimage is the KVM guest name.

Step 3 : Reset the image so that it is clean for use in an OpenStack environment.

# virt-sysprep -d myimage

Step 4 : Use the qcow2 path found in Step 2 to compress the qcow2 image.

# ls -lsh /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2
1.8G -rw------- 1 qemu qemu 8.1G Mar 25 11:56 /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2

# virt-sparsify --compress /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2 fedora22.qcow2

# ll -lsh fedora22.qcow2
662M -rw-r--r-- 1 root root 664M Mar 25 11:59 fedora22.qcow2

Notice the difference before and after compression. Upload this image to glance.

Step 5 : Spawn two instances, web1 and web2. While spawning them, I am injecting index.html files whose contents are web1 and web2 respectively.

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index1.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web1

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index2.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web2

Note : I have created a new security group lbsg to allow HTTP/HTTPS traffic; a sketch of the rules is below.
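
A sketch of how such a security group might be created (the rule values are assumptions; adjust them to your policy):

~~~
neutron security-group-create lbsg
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 lbsg
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 443 --port-range-max 443 lbsg
~~~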

Step 6 : Once the instances are spawned, you need to log in to each instance and restore the SELinux context of the index.html file. If you want, you can disable SELinux in Step 1 itself to avoid this step.

# ip netns exec qdhcp-9ec24eff-f470-4d4e-8c23-9eeb41dfe749 ssh root@10.10.1.17

# restorecon -Rv /var/www/html/index.html

Step 7 : Create a pool which distributes the traffic in a ROUND_ROBIN manner.

# neutron lb-pool-create --name lb1 --lb-method ROUND_ROBIN --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2

Note : This pool and the instances are created on the internal network.

Step 8 : Add the two instances as members of the pool.

# neutron lb-member-create --address 10.10.1.17 --protocol-port 80 lb1

# neutron lb-member-create --address 10.10.1.18 --protocol-port 80 lb1

Step 9 : Create a virtual IP on the internal network. A port will be created corresponding to the virtual IP; we will attach the floating IP to that port.

# neutron lb-vip-create --name lb1-vip --protocol-port 80 --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2 lb1
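
The port created for the VIP (whose ID is used in the next step) can be looked up afterwards, for example:

~~~
neutron lb-vip-show lb1-vip | grep port_id
~~~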

Step 10 : Attach the floating IP to the newly created port.

# neutron floatingip-associate 09bdbe29-fa85-4110-8dd2-50d274412d8e 25b892cb-44c3-49e2-88b3-0aec7ec8a026

Step 11 : LbaaS also creates a new namespace.

# ip netns list
qlbaas-b8daa41a-3e2a-408e-862b-20d3c52b1764
qrouter-5f7f711c-be0a-4dd0-ba96-191ef760cef7
qdhcp-9ec24eff-f470-4d4e-8c23-9eeb41dfe749

# ip netns exec qlbaas-b8daa41a-3e2a-408e-862b-20d3c52b1764 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
23: tap25b892cb-44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ae:0b:2a brd ff:ff:ff:ff:ff:ff
inet 10.10.1.19/24 brd 10.10.1.255 scope global tap25b892cb-44
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feae:b2a/64 scope link
valid_lft forever preferred_lft forever

Step 12 : In my case the floating IP was 192.168.122.3. I ran curl against that IP, and it confirmed that the response is coming from both members of the pool in a ROUND_ROBIN manner.

# for i in {1..5} ; do curl  192.168.122.3 ; done

web1
web2
web1
web2
web1

Flat Provider network with OVS

In this article, I am going to show the configuration of a flat provider network. It helps to avoid NAT, which in turn improves performance. Most importantly, the compute node can reach the external world directly, skipping the network node.

I referred to the below link for the configuration and for understanding the setup.

http://docs.openstack.org/liberty/networking-guide/scenario-provider-ovs.html

I am showing the setup on a packstack all-in-one node.

Step 1 : As we are not going to use any tenant network here, I left tenant_network_types blank. flat is mentioned in type_drivers as my external network is of flat type; if you are using a VLAN provider network, you can adjust it accordingly.

egrep -v "^(#|$)" /etc/neutron/plugin.ini
[ml2]
type_drivers = flat
tenant_network_types =
mechanism_drivers =openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True

I will be using the physical network label external, hence I mentioned the same in flat_networks. Comment out the default VXLAN settings.

Step 2 : Our ML2 plugin file is configured; now it's the turn of the Open vSwitch configuration file.

As I am using the physical network label external, I mentioned the same in bridge_mappings. br-ex is the external bridge to which the physical port (interface) is attached. I have disabled tunneling. A sketch of preparing the bridge and restarting the agents follows the configuration below.

egrep -v "^(#|$)" /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = False
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.168.122.163
bridge_mappings = external:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
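
If br-ex does not exist yet, it has to be created and the physical interface added to it, after which the Neutron services must be restarted to pick up the configuration changes. A rough sketch (eth0 is a placeholder for the external NIC):

~~~
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0   # eth0 is a placeholder; use your external interface
systemctl restart neutron-openvswitch-agent neutron-server
~~~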

Step 3 : Creating external network.

[root@allinone7 ~(keystone_admin)]# neutron net-create external1 --shared --provider:physical_network external --provider:network_type flat
Created a new network:
+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 6960a06c-5352-419f-8455-80c4d43dedf8 |
| name                      | external1                            |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | a525deb290124433b80996d4f90b42ba     |
+—————————+————————————–+

As I am using a flat network type, I mentioned the same for network_type; if your external network is a VLAN provider network, you need to add one more parameter, the segmentation ID. It's important to use the same physical_network name which you used in the Step 1 and Step 2 configuration files.

Step 4 : Creating subnet. My external network is 192.168.122.0/24
[root@allinone7 ~(keystone_admin)]# neutron net-list
+————————————–+———–+———+
| id                                   | name      | subnets |
+————————————–+———–+———+
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 |         |
+————————————–+———–+———+

[root@allinone7 ~(keystone_admin)]# neutron subnet-create external1 192.168.122.0/24 --name external1-subnet --gateway 192.168.122.1
Created a new subnet:
+——————-+——————————————————+
| Field             | Value                                                |
+——————-+——————————————————+
| allocation_pools  | {“start”: “192.168.122.2”, “end”: “192.168.122.254”} |
| cidr              | 192.168.122.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.122.1                                        |
| host_routes       |                                                      |
| id                | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | external1-subnet                                     |
| network_id        | 6960a06c-5352-419f-8455-80c4d43dedf8                 |
| tenant_id         | a525deb290124433b80996d4f90b42ba                     |
+——————-+——————————————————+
[root@allinone7 ~(keystone_admin)]# neutron net-list
+————————————–+———–+——————————————————-+
| id                                   | name      | subnets                                               |
+————————————–+———–+——————————————————-+
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7 192.168.122.0/24 |
+————————————–+———–+——————————————————-+

Step 5 : Spawn the instance using the "external1" network directly.
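
A hedged example of the boot command (the flavor and image ID are placeholders; the net-id is the external1 network created above):

~~~
nova boot --flavor m1.tiny --image <image-id> \
  --nic net-id=6960a06c-5352-419f-8455-80c4d43dedf8 test-instance1
~~~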

[root@allinone7 ~(keystone_admin)]# nova list
+————————————–+—————-+——–+————+————-+————————-+
| ID                                   | Name           | Status | Task State | Power State | Networks                |
+————————————–+—————-+——–+————+————-+————————-+
| 36934762-5769-4ac1-955e-fb475b8f6a76 | test-instance1 | ACTIVE | –          | Running     | external1=192.168.122.4 |
+————————————–+—————-+——–+————+————-+————————-+

You will be able to connect to this instance directly.

How to integrate packstack Keystone with AD?

In this article, I am going to show the integration of Keystone with Active Directory. In the case of packstack, Keystone runs under Apache by default; I have written an article on this before. I am going to use the same setup to configure Keystone with AD.

I referred to a Red Hat article to configure Keystone with AD. The steps suggested there are for Keystone running without httpd, but there is not much difference: you just need to restart the Apache service instead of Keystone for the changes to take effect.

Step 1 : I configured a Windows AD setup, which is very easy; after installation just run the "dcpromo.exe" command to configure AD.

Step 2 : After configuring AD, as suggested in the Red Hat article, I created the service user and group using the Windows PowerShell CLI. If you face issues while setting the password, use the GUI, which is much easier.

Step 3 : Time to make the changes on openstack side.

Again I followed the steps for the v3 API, Glance, and Keystone provided in the article, just restarting the httpd service in place of Keystone.

Step 4 : Below is my domain-specific Keystone configuration file. Note : I am not using any certificate, hence I modified some of the options in the file, like the port number from 636 to 389 and ldaps to ldap.

[root@allinone domains(keystone_admin)]# cat /etc/keystone/domains/keystone.ganesh.conf
[ldap]
url =  ldap://192.168.122.133:389
user = CN=svc-ldap,CN=Users,DC=ganesh,DC=com
password                 = User@123
suffix                   = DC=ganesh,DC=com
user_tree_dn             = CN=Users,DC=ganesh,DC=com
user_objectclass         = person
user_filter = (memberOf=cn=grp-openstack,CN=Users,DC=ganesh,DC=com)
user_id_attribute        = cn
user_name_attribute      = cn
user_mail_attribute      = mail
user_pass_attribute      =
user_enabled_attribute   = userAccountControl
user_enabled_mask        = 2
user_enabled_default     = 512
user_attribute_ignore    = password,tenant_id,tenants
user_allow_create        = False
user_allow_update        = False
user_allow_delete        = False

[identity]
driver = keystone.identity.backends.ldap.Identity

Step 5 : Restart the httpd service and create a domain matching the NetBIOS name of the AD; in my case it's GANESH.
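
Something along these lines, assuming the v3-enabled CLI environment set up earlier and the domain name GANESH:

~~~
systemctl restart httpd
openstack domain create GANESH
~~~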

Step 6 : Verify that you are able to list the users present in domain.

[root@allinone domains(keystone_admin)]# openstack user list --domain GANESH
+——————————————————————+———-+
| ID                                                               | Name     |
+——————————————————————+———-+
| a557f06c03960d3b3de7d670774c1c329efe9f33e17c5aa894f0207ec78766e6 | svc-ldap |
+——————————————————————+———-+

Step 7 : I created one test user "user1" in AD and then issued the command again in the OpenStack setup; the new user shows up in the output below.

[root@allinone domains(keystone_admin)]# openstack user list --domain GANESH
+——————————————————————+———-+
| ID                                                               | Name     |
+——————————————————————+———-+
| a557f06c03960d3b3de7d670774c1c329efe9f33e17c5aa894f0207ec78766e6 | svc-ldap |
| f71c9fb8479994f287978a2b25f5796a80871b472de07bdee7794806e0902d7e | user1    |
+——————————————————————+———-+

Just in case someone is curious about the calls which go to the LDAP server from the packstack setup:

The below calls can be seen when a tcpdump is collected in the background while running the user list command.
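
The capture itself was done roughly like this (eth0 is a placeholder for the interface facing the AD server):

~~~
tcpdump -s0 -i eth0 -w /tmp/ldap.pcap port 389 &
~~~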

[root@allinone domains(keystone_admin)]# openstack user list --domain GANESH

tshark -tad -n -r /tmp/ldap.pcap -Y ldap
Running as user “root” and group “root”. This could be dangerous.
6 2016-03-13 04:25:45 192.168.122.50 -> 192.168.122.133 LDAP 125 bindRequest(1) “CN=svc-ldap,CN=Users,DC=ganesh,DC=com” simple
7 2016-03-13 04:25:45 192.168.122.133 -> 192.168.122.50 LDAP 88 bindResponse(1) success
9 2016-03-13 04:25:45 192.168.122.50 -> 192.168.122.133 LDAP 232 searchRequest(2) “CN=Users,DC=ganesh,DC=com” singleLevel
10 2016-03-13 04:25:45 192.168.122.133 -> 192.168.122.50 LDAP 332 searchResEntry(2) “CN=svc-ldap,CN=Users,DC=ganesh,DC=com”  | searchResEntry(2) “CN=user1,CN=Users,DC=ganesh,DC=com”  | searchResDone(2) success  [2 results]
11 2016-03-13 04:25:45 192.168.122.50 -> 192.168.122.133 LDAP 73 unbindRequest(3)

Step 8 : Listing all the present domains and roles, then adding the user to a project and assigning a role to it.

[root@allinone domains(keystone_admin)]# openstack domain list
+———————————-+———+———+———————————————————————-+
| ID                               | Name    | Enabled | Description                                                          |
+———————————-+———+———+———————————————————————-+
| d313e92c985b456295c254e827bbbd1b | GANESH  | True    |                                                                      |
| db1b4320ec764bdfb45106cdeadc754c | heat    | True    | Contains users and projects created by heat                          |
| default                          | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
+———————————-+———+———+———————————————————————-+

[root@allinone domains(keystone_admin)]# openstack role list
+———————————-+——————+
| ID                               | Name             |
+———————————-+——————+
| 5ca3a634c2b649dd9e2033509fb561cc | heat_stack_user  |
| 65f8c50174af4818997d94f0bfeb5183 | ResellerAdmin    |
| 68a199b73276438a8466f51a03cd2980 | admin            |
| 8c574229aa654937a5a53d3ced333c08 | heat_stack_owner |
| 9a408ea418884fee94e10bfc8019a6f3 | SwiftOperator    |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_         |
+———————————-+——————+

[root@allinone domains(keystone_admin)]# openstack role add --project demo --user f71c9fb8479994f287978a2b25f5796a80871b472de07bdee7794806e0902d7e _member_

 

 

Reference :

I found very good information comparing the Keystone v2 and v3 APIs at the link below.

[1] http://www.madorn.com/keystone-v3-api.html#.VuUiC5SbRIt

Various nova instance migration techniques.

In this article, I am going to list the various Nova instance migration techniques. I have used my packstack all-in-one setup and two extra compute nodes to show these tests. I am using local storage for the ephemeral disks.

  • Offline storage migration : Downtime required.

As my instances' ephemeral disks are on local storage, the first technique which comes to mind is the offline (cold) migration:

[root@allinone6 ~(keystone_admin)]# nova migrate test-instance1 --poll

The above command does not give the option to specify the destination host on which we want to run the instance; the scheduler chooses the destination host for you.

Once the migration is completed successfully, you will see the instance running (ACTIVE) on the other compute node.

I have seen the below instance state transitions during the migration.

ACTIVE -> RESIZE -> VERIFY_RESIZE -> ACTIVE
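
The VERIFY_RESIZE state waits for a confirmation before the instance returns to ACTIVE; the instance-action-list below shows that a confirmResize was issued. If it is not auto-confirmed, it can be done manually, roughly:

~~~
nova resize-confirm test-instance1   # accept the migration
nova resize-revert test-instance1    # or roll it back to the source host
~~~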

If I check the instance action list, I can see that it has performed both the migrate and resize operations.

[root@allinone6 ~(keystone_admin)]# nova instance-action-list test-instance1
+—————+——————————————+———+—————————-+
| Action        | Request_ID                               | Message | Start_Time                 |
+—————+——————————————+———+—————————-+
| create        | req-93d78dbe-8914-46b9-9605-0e9ff7ed76e8 | –       | 2016-03-06T02:13:57.000000 |
| migrate       | req-f0c214d7-d5ed-4633-a147-056dad6611a2 | –       | 2016-03-07T05:04:26.000000 |
| confirmResize | req-6d97c3cf-a509-4e6c-a016-457569ca46b3 | –       | 2016-03-07T05:05:14.000000 |
+—————+——————————————+———+—————————-+

Even though it is performing a resize, the flavor remains the same. The only reason I can find for the resize is that migrate and resize share the same code path.

Below are the nova-api.log entries from the controller node.

[root@allinone6 ~(keystone_admin)]# grep 'req-f0c214d7-d5ed-4633-a147-056dad6611a2' /var/log/nova/nova-api.log
2016-03-07 00:04:26.198 3819 DEBUG nova.api.openstack.wsgi [req-f0c214d7-d5ed-4633-a147-056dad6611a2 None] Action: ‘action’, calling method: <bound method AdminActionsController._migrate of <nova.api.openstack.compute.contrib.admin_actions.AdminActionsController object at 0x3aff8d0>>, body: {“migrate”: null} _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:934
2016-03-07 00:04:26.228 3819 DEBUG nova.compute.api [req-f0c214d7-d5ed-4633-a147-056dad6611a2 None] [instance: c8cb4bcc-2b4e-4478-9a7c-61f5170fb177] flavor_id is None. Assuming migration. resize /usr/lib/python2.7/site-packages/nova/compute/api.py:2559
2016-03-07 00:04:26.229 3819 DEBUG nova.compute.api [req-f0c214d7-d5ed-4633-a147-056dad6611a2 None] [instance: c8cb4bcc-2b4e-4478-9a7c-61f5170fb177] Old instance type m1.tiny,  new instance type m1.tiny resize /usr/lib/python2.7/site-packages/nova/compute/api.py:2578
2016-03-07 00:04:26.305 3819 INFO oslo.messaging._drivers.impl_rabbit [req-f0c214d7-d5ed-4633-a147-056dad6611a2 ] Connecting to AMQP server on 192.168.122.234:5672
2016-03-07 00:04:26.315 3819 INFO oslo.messaging._drivers.impl_rabbit [req-f0c214d7-d5ed-4633-a147-056dad6611a2 ] Connected to AMQP server on 192.168.122.234:5672
2016-03-07 00:04:26.563 3819 INFO nova.osapi_compute.wsgi.server [req-f0c214d7-d5ed-4633-a147-056dad6611a2 None] 192.168.122.234 “POST /v2/618cb39791784d7fb7a80d17eb99b306/servers/c8cb4bcc-2b4e-4478-9a7c-61f5170fb177/action HTTP/1.1” status: 202 len: 209 time: 0.4026990

It's an offline operation; we can confirm the same using the uptime of the instance.

[root@allinone6 ~(keystone_admin)]# ip netns exec qdhcp-0b9572fb-29fe-4705-b50c-74aa00acb983 ssh cirros@10.10.3.15
cirros@10.10.3.15’s password:
$ uptime
22:05:31 up 0 min,  1 users,  load average: 0.12, 0.04, 0.01

  • Evacuating the instance from failed compute node.

This makes most sense when using shared storage.

The instance is running on compute26 and I shut down that node; the instance remains in the ACTIVE state, but I am not able to ping it. Actually the instance is down.
[root@allinone6 ~(keystone_admin)]# nova service-list
+—-+——————+———–+———-+———+——-+—————————-+—————–+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—-+——————+———–+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | allinone6 | internal | enabled | up    | 2016-03-07T05:20:44.000000 | –               |
| 2  | nova-scheduler   | allinone6 | internal | enabled | up    | 2016-03-07T05:20:44.000000 | –               |
| 3  | nova-conductor   | allinone6 | internal | enabled | up    | 2016-03-07T05:20:44.000000 | –               |
| 5  | nova-compute     | allinone6 | nova     | enabled | up    | 2016-03-07T05:20:39.000000 | –               |
| 6  | nova-cert        | allinone6 | internal | enabled | up    | 2016-03-07T05:20:44.000000 | –               |
| 7  | nova-compute     | compute26 | nova     | enabled | down  | 2016-03-07T05:19:19.000000 | –               |
| 8  | nova-compute     | compute16 | nova     | enabled | up    | 2016-03-07T05:20:44.000000 | –               |
+—-+——————+———–+———-+———+——-+—————————-+—————–+

Started the evacuation of the instance(s) from the failed node using the below command; in this case we can specify the destination compute node.

[root@compute16 ~(keystone_admin)]# nova host-evacuate --target_host allinone6 compute26
+————————————–+——————-+—————+
| Server UUID                          | Evacuate Accepted | Error Message |
+————————————–+——————-+—————+
| c8cb4bcc-2b4e-4478-9a7c-61f5170fb177 | True              |               |
+————————————–+——————-+—————+

Below are the instance state transitions which I noticed in the nova list output.

ACTIVE -> REBUILD -> ACTIVE

In the below command output, we can see that the evacuate action has been recorded.

[root@allinone6 ~(keystone_admin)]# nova instance-action-list test-instance1
+—————+——————————————+———+—————————-+
| Action        | Request_ID                               | Message | Start_Time                 |
+—————+——————————————+———+—————————-+
| create        | req-93d78dbe-8914-46b9-9605-0e9ff7ed76e8 | –       | 2016-03-06T02:13:57.000000 |
| migrate       | req-f0c214d7-d5ed-4633-a147-056dad6611a2 | –       | 2016-03-07T05:04:26.000000 |
| confirmResize | req-6d97c3cf-a509-4e6c-a016-457569ca46b3 | –       | 2016-03-07T05:05:14.000000 |
| migrate       | req-43e92f8e-04d6-4379-98c1-8ce72094766f | –       | 2016-03-07T05:17:09.000000 |
| confirmResize | req-4a11404d-e448-4692-86f0-063a0dfd2d4a | –       | 2016-03-07T05:17:42.000000 |
| evacuate      | req-1b22f414-36c1-487e-9560-359e2ecd2800 | –       | 2016-03-07T05:22:18.000000 |
+—————+——————————————+———+—————————-+

We can see the below nova-api.log entries corresponding to the "evacuate" operation.

[root@allinone6 ~(keystone_admin)]# grep 'req-1b22f414-36c1-487e-9560-359e2ecd2800' /var/log/nova/nova-api.log
2016-03-07 00:22:18.162 3819 DEBUG nova.api.openstack.wsgi [req-1b22f414-36c1-487e-9560-359e2ecd2800 None] Action: ‘action’, calling method: <bound method Controller._evacuate of <nova.api.openstack.compute.contrib.evacuate.Controller object at 0x3b01f10>>, body: {“evacuate”: {“host”: “allinone6”, “onSharedStorage”: false}} _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:934
2016-03-07 00:22:18.209 3819 DEBUG nova.compute.api [req-1b22f414-36c1-487e-9560-359e2ecd2800 None] [instance: c8cb4bcc-2b4e-4478-9a7c-61f5170fb177] vm evacuation scheduled evacuate /usr/lib/python2.7/site-packages/nova/compute/api.py:3258
2016-03-07 00:22:18.219 3819 DEBUG nova.servicegroup.drivers.db [req-1b22f414-36c1-487e-9560-359e2ecd2800 None] Seems service is down. Last heartbeat was 2016-03-07 05:19:19. Elapsed time is 179.219082 is_up /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:75
2016-03-07 00:22:18.270 3819 INFO nova.osapi_compute.wsgi.server [req-1b22f414-36c1-487e-9560-359e2ecd2800 None] 192.168.122.41 “POST /v2/618cb39791784d7fb7a80d17eb99b306/servers/c8cb4bcc-2b4e-4478-9a7c-61f5170fb177/action HTTP/1.1” status: 200 len: 225 time: 0.1504130

  • Live-migration [block migration]

In this case we are doing a live migration of the instance despite not having shared storage configured for it. The instance disk is copied from the source to the destination compute node as part of the migration.

ACTIVE -> MIGRATING -> ACTIVE

Below is the command for the live-migration.

[root@allinone6 ~(keystone_admin)]# nova live-migration --block-migrate test-instance1 compute16
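
To verify where the instance ended up after the migration, something like this can be used (admin credentials are needed for the hypervisor field):

~~~
nova show test-instance1 | grep -i hypervisor_hostname
~~~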

In instance-action-list, you will not see any new entry for the live migration.

[root@allinone6 ~(keystone_admin)]# nova instance-action-list test-instance1
+——–+——————————————+———+—————————-+
| Action | Request_ID                               | Message | Start_Time                 |
+——–+——————————————+———+—————————-+
| create | req-93d78dbe-8914-46b9-9605-0e9ff7ed76e8 | –       | 2016-03-06T02:13:57.000000 |
+——–+——————————————+———+—————————-+

You can see the below logs in nova-api.log file while doing the live migration.

From : /var/log/nova/nova-api.log

~~~
2016-03-06 23:55:42.463 3820 INFO nova.osapi_compute.wsgi.server [req-22ca9202-8a90-4998-a0d2-6705e7bbfa71 None] 192.168.122.234 “GET /v2/618cb39791784d7fb7a80d17eb99b306/servers/c8cb4bcc-2b4e-4478-9a7c-61f5170fb177 HTTP/1.1” status: 200 len: 1903 time: 0.1395051
2016-03-06 23:55:42.468 3818 DEBUG nova.api.openstack.wsgi [req-6fbe4aa6-2e40-403d-b573-18c6f50669f5 None] Action: ‘action’, calling method: <bound method AdminActionsController._migrate_live of <nova.api.openstack.compute.contrib.admin_actions.AdminActionsController object at 0x3aff8d0>>, body: {“os-migrateLive”: {“disk_over_commit”: false, “block_migration”: true, “host”: “compute16”}} _process_stack /u
sr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:934
2016-03-06 23:55:42.505 3818 DEBUG nova.compute.api [req-6fbe4aa6-2e40-403d-b573-18c6f50669f5 None] [instance: c8cb4bcc-2b4e-4478-9a7c-61f5170fb177] Going to try to live migrate instance to compute16 live_migrate /usr/lib/python2.7/site-packages/nova/compute/api.py:3234
2016-03-06 23:55:42.765 3818 INFO nova.osapi_compute.wsgi.server [req-6fbe4aa6-2e40-403d-b573-18c6f50669f5 None] 192.168.122.234 “POST /v2/618cb39791784d7fb7a80d17eb99b306/servers/c8cb4bcc-2b4e-4478-9a7c-61f5170fb177/action HTTP/1.1” status: 202 len: 209 time: 0.2986290
2016-03-06 23:55:49.325 3819 DEBUG keystoneclient.session [-] REQ: curl -i -X GET http://192.168.122.234:35357/v3/auth/tokens -H “X-Subject-Token: TOKEN_REDACTED” -H “User-Agent: python-keystoneclient” -H “Accept: application/json” -H “X-Auth-Token: TOKEN_REDACTED” _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/session.py:155
~~~