
Ansible configuration file precedence

Installing Ansible provides a default /etc/ansible/ansible.cfg configuration file, in which we can make various settings, such as the default user that playbooks should run as on the remote server and the privilege escalation mode for that user.

Here are the default sections present in the ansible.cfg file.

# grep '^[[]' /etc/ansible/ansible.cfg

As we deal with multiple servers managed by Ansible, and groups of servers often have different requirements than others, the need for separate ansible.cfg files for these groups can easily arise. Having multiple ansible.cfg files raises a genuine question: which one gets used? Here is the precedence order, from highest to lowest.

  • $ANSIBLE_CONFIG : environment variable giving the location of the Ansible configuration file.
  • ./ansible.cfg : file in the current directory from which the ansible playbook or ad-hoc command is run.
  • ~/.ansible.cfg : file in the home directory of the user running the ansible command.
  • /etc/ansible/ansible.cfg : the default ansible.cfg file.

IMP : Ansible only uses the configuration settings from the first file found in this sequence; if a setting is missing from the chosen file, it will not fall back to the files lower in the precedence order.

Ex : If ./ansible.cfg is chosen because $ANSIBLE_CONFIG is not defined, then all settings come from ./ansible.cfg; if any setting/parameter is missing from this file, Ansible will not search for it in the ansible.cfg in the home directory or in the default ansible.cfg file.
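The lookup can be sketched in Python (a hypothetical helper for illustration, not Ansible's actual code):

```python
import os

# Sketch of the config lookup order: the first file that exists wins, and
# later files are never consulted, even for individual missing settings.
def resolve_ansible_cfg(env=None, cwd=".", home=None):
    env = os.environ if env is None else env
    home = os.path.expanduser("~") if home is None else home
    candidates = [
        env.get("ANSIBLE_CONFIG"),           # 1. environment variable
        os.path.join(cwd, "ansible.cfg"),    # 2. current directory
        os.path.join(home, ".ansible.cfg"),  # 3. user's home directory
        "/etc/ansible/ansible.cfg",          # 4. system-wide default
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```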


Useful hacks for Ansible to make troubleshooting easy

I completed my Red Hat Ansible certification a couple of months back, but after that I didn't get a chance to get my hands dirty with it. I planned to revisit my Ansible concepts so that I can start using them in my daily work.

Here is my first post on some Ansible tips and tricks.

Useful tips about yaml templates :

* As Ansible is based on YAML playbooks, and in a YAML template proper spacing matters a lot. While editing YAML templates in vim I faced a lot of difficulty keeping the spacing correct, so to avoid pressing the space bar again and again, here is a useful vim trick so that two spaces are inserted by default when tab is pressed.

autocmd FileType yaml setlocal ai ts=2 sw=2 et

The above line needs to be added to the "$HOME/.vimrc" file; after that, whenever tab is pressed it will automatically insert 2 spaces.

* A couple of online methods are available to check YAML syntax. My search for a CLI-based YAML syntax check ended with the following Python command :

#  python -c 'import yaml, sys; print yaml.load(sys.stdin)' < test1.yml
[{'tasks': [{'name': 'first task', 'service': 'name=iptables enabled=true'}, {'name': 'second task', 'service': 'name=sshd enabled=true'}], 'hosts': 'all', 'name': 'a simple playbook'}]

Here test1.yml is the YAML file whose syntax needs to be checked. In my case there was no syntax error, hence it simply printed the parsed YAML content (as Python lists and dicts).

* Another way to check the syntax is using the ansible-playbook command. If there is any error, it gives us the approximate position of the syntax error.

# ansible-playbook --syntax-check test1.yml

playbook: test1.yml

Troubleshooting Ansible playbooks :

In the previous section we took care of the syntax of the Ansible playbook; now some steps for troubleshooting the logic of a playbook.

* I always prefer to run an Ansible playbook in dry (check) mode first, meaning it makes no actual changes and only reports what changes it would make. Be careful: some modules don't respect check mode, but it's still a safer option, as most modules do respect it.

# ansible-playbook --check test1.yml

PLAY [a simple playbook] ******************************************************

TASK: [first task] ************************************************************
ok: [localhost]

TASK: [second task] ***********************************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

* Another method is to run the playbook step by step instead of running the whole playbook in a single shot. It asks for confirmation before running each step, and it only runs the step on "y".

# ansible-playbook --step test1.yml

PLAY [a simple playbook] ******************************************************
Perform task: first task (y/n/c): y

Perform task: first task (y/n/c):  ********************************************
ok: [localhost]
Perform task: second task (y/n/c): y

Perform task: second task (y/n/c):  *******************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

* There is also an option to start at a particular task from the list of tasks in the playbook. I had two tasks in my playbook, and I started from the second task, skipping the first one. Of course, you can also use tags to achieve the same.

# ansible-playbook --start-at-task="second task" test1.yml

PLAY [a simple playbook] ******************************************************

TASK: [second task] ***********************************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0
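Tags work similarly; here is a minimal sketch (task names reused from the earlier test1.yml example, the tag names are my own):

```yaml
- name: a simple playbook
  hosts: all
  tasks:
    - name: first task
      service: name=iptables enabled=true
      tags: firewall
    - name: second task
      service: name=sshd enabled=true
      tags: ssh
```

Running ansible-playbook --tags "ssh" test1.yml would then execute only the second task; --skip-tags works the other way around.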

* By default Ansible only dumps ad-hoc command or playbook output on the terminal; it is not recorded in a log file, but that doesn't mean it can't be. Ansible provides the flexibility of writing the output to a log file so that it can be viewed later.

# grep log_path /etc/ansible/ansible.cfg
#log_path = /var/log/ansible.log

By default the log_path parameter is commented out in the ansible.cfg file; it can be set to the path of the log file in which you want to record the output. Alternatively, the environment variable ANSIBLE_LOG_PATH can be set, which takes precedence over the location mentioned in the ansible.cfg file.

* Here comes my favorite: the debug module, which can be used inside an Ansible playbook to print variable values. This feature is key to managing tasks that use variables to communicate with each other (for example, using the output of one task as the input to the following one).

In the first example it prints fact information.

- debug: msg="The free memory for this system is {{ ansible_memfree_mb }}"

In the second example, a registered variable is printed, and only at higher verbosity.

- debug: var=output1 verbosity=2
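As an illustration, here is a sketch of how such debug tasks sit inside a playbook (the command task and the output1 variable are hypothetical, chosen only to show register feeding debug):

```yaml
- name: debug demo
  hosts: all
  tasks:
    # Capture the result of a command in a variable...
    - name: collect kernel version
      command: uname -r
      register: output1

    # ...and print it, but only when run with -vv or higher verbosity.
    - name: show registered output
      debug: var=output1 verbosity=2
```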

Difference between neutron LBaaS v1 and LBaaS v2 ?

LBaaS v2 is not a new topic anymore; most customers are switching from LBaaS v1 to LBaaS v2. I have written blog posts in the past about configuring both, in case you missed them; those are located at LBaaSv1 , LBaaSv2

Still, in Red Hat Openstack no HA functionality is present for the load balancer itself; that means if your load balancer service is running on a controller node in an HA setup and that node goes down, we have to fix things manually. There are some articles on the internet describing workarounds to make LBaaS HA work, but I have never tried them.

In this post I am going to show the improvements of lbaasv2 over lbaasv1. I will also shed some light on the Octavia project, which can provide HA capabilities for the load balancing service; basically it is used for Elastic Load Balancing.

Let's start with a comparison of lbaasv2 and lbaasv1.

lbaasv1 provided capabilities like :

  • L4 load balancing
  • Session persistence, including cookie-based
  • Cookie insertion
  • Driver interface for third parties.

Basic flow of the request in lbaas v1 :

Request —> VIP —> Pool [Optional Health Monitor] —> Members [Backend instances]


Missing features :

  • L7 Content switching [IMP feature]
  • Multiple TCP ports per load balancer
  • TLS Termination at load balancer to avoid the load on instances.
  • Load balancer running inside instances.

lbaasv2 was introduced in the Kilo release; at that time it did not have features like L7 rules, pool sharing, and single-create LB [creating a load balancer in a single API call]. L7 and single-create were included in Liberty; the pool sharing feature was introduced in Mitaka.

Basic flow of the request in lbaas v2 :

Request —> VIP —> Listeners –> Pool [Optional Health Monitor] —> Members [Backend instances]


Let's see what components/changes have been made in lbaasv2 which make the missing features available in the newer version :

  1. L7 Content switching

Why do we require this feature :

A layer 7 load balancer consists of a listener that accepts requests on behalf of a number of back-end pools and distributes those requests based on policies that use application data to determine which pools should service any given request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one group of back-end servers (pool) can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

This feature is introduced by adding an additional component, the "Listener", in the lbaasv2 architecture. We can add policies and then attach rules to a policy to get L7 load balancing. A very informative article about L7 content switching is available at link ; it covers a lot of practical scenarios.
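To make the policy/rule idea concrete, here is a minimal sketch in Python (illustrative only, not neutron's implementation; the pool names are my own) of the kind of per-request decision an L7 policy makes:

```python
# Route a request to a back-end pool based on application data (the URL path),
# the way L7 rules attached to a listener would.
def choose_pool(path):
    if path.startswith("/images/"):
        return "pool_images"       # servers tuned for static images
    if path.endswith((".php", ".asp")):
        return "pool_scripts"      # servers tuned for server-side scripts
    return "pool_static"           # default: HTML, CSS, JavaScript

print(choose_pool("/images/logo.png"))  # → pool_images
print(choose_pool("/index.php"))        # → pool_scripts
```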

2. Multiple TCP ports per load balancer

In lbaas v1 we had only one TCP port (like 80 or 443) at the load balancer associated with the VIP (Virtual IP); we couldn't have two ports/protocols associated with a VIP, which means you could have either HTTP traffic load balanced or HTTPS. This limit has been lifted in lbaas v2, as we can now have multiple ports associated with a single VIP.

It can be done with pool sharing or without pool sharing.

With pool sharing :


Without Pool Sharing :


3. TLS Termination at load balancer to avoid the load on instances.

We can have TLS termination at the load balancer level instead of at the backend servers. This reduces the load on the backend servers, and it also preserves the capability of L7 content switching, since the traffic is decrypted at the load balancer. Barbican containers are used to store the certificates for termination at the load balancer level.

4. Load balancer running inside instances.

I have not seen this implemented without Octavia, which uses "Amphora" instances to run the load balancer.

IMP : Both load balancer versions can’t be run simultaneously.

As promised at the beginning of article, let’s see what capabilities “Octavia” adds to lbaasv2 version.

Here is the architecture of Octavia :


The Octavia API lacks an authentication facility, hence it accepts API calls from neutron instead of exposing its APIs directly.

As I mentioned earlier, in the case of Octavia the load balancer runs inside nova instances, hence it needs to communicate with components like nova and neutron to spawn the instances in which the load balancer [haproxy] runs. Okay, what about the other components required to spawn an instance :

  • Create amphora disk image using OpenStack diskimage-builder.
  • Create a Nova flavor for the amphorae.
  • Add amphora disk image to glance.
  • Tag the above glance disk image with ‘amphora’.

But now the amphora instance becomes a single point of failure, and its capacity to handle load is also limited. From the Mitaka release onwards we can run a single load balancer replicated across two instances in A/P mode, sending heartbeats using VRRP. If one instance goes down, the other starts serving the load balancer service.

So what's the major advantage of Octavia? Okay, here comes the term Elastic Load Balancing (ELB). Currently a VIP is associated with a single load balancer, a 1:1 relation, but in the case of ELB the relation between VIP and load balancer is 1:N; the VIP distributes the incoming traffic over a pool of "amphora" instances.

In ELB, traffic is distributed at two levels :

  1. VIP to pool of amphora instances.
  2. amphora instances to back-end instances.

We can also use HEAT orchestration with CEILOMETER (alarm) functionality to manage the number of instances in ‘amphora’ pool.

Combining the power of the "pool of amphora instances" and "failover", we can have a robust N+1 topology in which, if any VM from the pool of amphora instances fails, it is replaced by a standby VM.


I hope that this article shed some light on the jargon of neutron lbaas world 🙂

How to make auto-scaling work for nova with heat and ceilometer ?

I had been trying to test this feature for a very long time but never got a chance to dig into it. Today I got an opportunity to work on this feature. I prepared a packstack OSP 7 [Kilo] setup and took the reference from the wonderful official Red Hat documentation [1] to make this work.

In this article I am going to cover only scale-up scenario.

Step 1 : While installing packstack we need to set the below options to "yes" so that the required components get installed.

# egrep "HEAT|CEILOMETER" /root/answer.txt | grep INSTALL

If you have already deployed a packstack setup, no need to worry; just enable these in the answer.txt file that was used for creating the existing setup and run the packstack installation command again.
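For reference, in a Kilo-era packstack answer file the relevant keys look roughly like the following (an assumption based on packstack's answer-file format; verify against your own answer.txt):

```
CONFIG_HEAT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
```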

Step 2 : I created three templates to make this work.

cirros.yaml : Contains the information for spawning an instance. A script in user_data is used to generate the CPU utilization for the alarm.

environment.yaml : Environment file mapping the custom resource type to the cirros.yaml template.

sample.yaml : Contains the main logic for scaling up.

# cat cirros.yaml
heat_template_version: 2014-10-16
description: A simple server.
resources:
  server:
    type: OS::Nova::Server
    properties:
#      - device_name: vda
#        delete_on_termination: true
#        volume_id: { get_resource: volume }
      image: cirros
      flavor: m1.tiny
      networks:
        - network: internal1
      user_data_format: RAW
      user_data: |
        while [ 1 ] ; do echo $((13**99)) 1>/dev/null 2>&1; done

# cat environment.yaml
resource_registry:
  "OS::Nova::Server::Cirros": "cirros.yaml"

# cat sample.yaml
heat_template_version: 2014-10-16
description: A simple auto scaling group.
resources:
  scale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        type: OS::Nova::Server::Cirros
  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scale_group }
      cooldown: 60
      scaling_adjustment: +1
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 20
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
      comparison_operator: gt


Shedding some light on the sample.yaml file : initially I spawn only one instance and scale up to a maximum of 3 instances. The ceilometer threshold is set to 20 (percent CPU utilization).
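The alarm and scaling semantics wired up in sample.yaml can be sketched as follows (illustrative Python, not Heat's implementation; parameter names mirror the template):

```python
# If the average cpu_util over the period crosses the threshold
# (comparison_operator: gt), grow the group by scaling_adjustment,
# capped at max_size.
def evaluate(samples, current, threshold=20, adjustment=1, max_size=3):
    avg = sum(samples) / len(samples)   # statistic: avg over one period
    if avg > threshold:
        return min(current + adjustment, max_size)
    return current

print(evaluate([97.05], current=1))   # alarm fires → 2 instances
print(evaluate([5.0], current=1))     # below threshold → stays at 1
```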

Step 3 : Modify the ceilometer sampling interval for cpu_util in the "/etc/ceilometer/pipeline.yaml" file. I changed this value from the default of 10 minutes to 1 minute.

- name: cpu_source
  interval: 60
  meters:
    - "cpu"
  sinks:
    - cpu_sink

Restart all OpenStack services after making this change.

Step 4 : Let’s create a stack now.

[root@allinone7 VIKRANT(keystone_admin)]# heat stack-create teststack1 -f sample.yaml -e environment.yaml
| id                                   | stack_name | stack_status       | creation_time        |
| 0f163366-c599-4fd5-a797-86cf40f05150 | teststack1 | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |

The instance spawned successfully and the alarm was created once the heat stack creation completed.

[root@allinone7 VIKRANT(keystone_admin)]# nova list
| ID                                   | Name                                                  | Status | Task State | Power State | Networks              |
| 845abae0-9834-443b-82ec-d55bce2243ab | te-yvfr-ws5tn26msbub-zpeebwwwa67w-server-pxu6pqcssmmb | ACTIVE | –          | Running     | internal1= |

[root@allinone7 VIKRANT(keystone_admin)]# ceilometer alarm-list
| Alarm ID                             | Name                                   | State             | Severity | Enabled | Continuous | Alarm condition                | Time constraints |
| 7746e457-9114-4cc6-8408-16b14322e937 | teststack1-cpu_alarm_high-sctookginoqz | insufficient data | low      | True    | True       | cpu_util > 20.0 during 1 x 60s | None             |

Checking the events in heat-engine.log file.

2016-10-10 12:02:37.499 22212 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1): Stack CREATE started
2016-10-10 12:02:37.510 22212 INFO heat.engine.resource [-] creating AutoScalingResourceGroup "scale_group" Stack "teststack1" [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:37.558 22215 INFO heat.engine.service [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr
2016-10-10 12:02:37.572 22215 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating TemplateResource "ws5tn26msbub"
2016-10-10 12:02:37.585 22215 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating Server "server"
2016-10-10 12:02:37.639 22215 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr): Stack CREATE started
2016-10-10 12:02:37.650 22215 INFO heat.engine.resource [-] creating TemplateResource "ws5tn26msbub" Stack "teststack1-scale_group-ujt3ixg3yvfr" [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:02:37.699 22214 INFO heat.engine.service [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w
2016-10-10 12:02:37.712 22214 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating Server "server"
2016-10-10 12:02:38.004 22214 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack CREATE started
2016-10-10 12:02:38.022 22214 INFO heat.engine.resource [-] creating Server "server" Stack "teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w" [11dbdc5d-dc67-489b-9738-7ee6984c286e]
2016-10-10 12:02:42.965 22213 INFO heat.engine.service [req-e6410d2a-6f85-404d-a675-897c8a254241 – -] Service 13d36b70-a2f6-4fec-8d2a-c904a2f9c461 is updated
2016-10-10 12:02:42.969 22214 INFO heat.engine.service [req-531481c7-5fd1-4c25-837c-172b2b7c9423 – -] Service 71fb5520-7064-4cee-9123-74f6d7b86955 is updated
2016-10-10 12:02:42.970 22215 INFO heat.engine.service [req-6fe46418-bf3d-4555-a77c-8c800a414ba8 – -] Service f0706340-54f8-42f1-a647-c77513aef3a5 is updated
2016-10-10 12:02:42.971 22212 INFO heat.engine.service [req-82f5974f-0e77-4b60-ac5e-f3c849812fe1 – -] Service 083acd77-cb7f-45fc-80f0-9d41eaf2a37d is updated
2016-10-10 12:02:53.228 22214 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack CREATE completed successfully
2016-10-10 12:02:53.549 22215 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr): Stack CREATE completed successfully
2016-10-10 12:02:53.960 22212 INFO heat.engine.resource [-] creating AutoScalingPolicy "scaleup_policy" Stack "teststack1" [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:55.152 22212 INFO heat.engine.resource [-] creating CeilometerAlarm "cpu_alarm_high" Stack "teststack1" [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:56.379 22212 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1): Stack CREATE completed successfully

Step 5 : Once the alarm is triggered, it initiates the creation of one more instance.

[root@allinone7 VIKRANT(keystone_admin)]# ceilometer alarm-history 7746e457-9114-4cc6-8408-16b14322e937
| Type             | Timestamp                  | Detail                                                               |
| state transition | 2016-10-10T12:04:48.492000 | state: alarm                                                         |
| creation         | 2016-10-10T12:02:55.247000 | name: teststack1-cpu_alarm_high-sctookginoqz                         |
|                  |                            | description: Alarm when cpu_util is gt a avg of 20.0 over 60 seconds |
|                  |                            | type: threshold                                                      |
|                  |                            | rule: cpu_util > 20.0 during 1 x 60s                                 |
|                  |                            | time_constraints: None                                               |

Log from ceilometer log file.

From : /var/log/ceilometer/alarm-evaluator.log

2016-10-10 12:04:48.488 16550 INFO ceilometer.alarm.evaluator [-] alarm 7746e457-9114-4cc6-8408-16b14322e937 transitioning to alarm because Transition to alarm due to 1 samples outside threshold, most recent: 97.05

Step 6 : In the heat-engine.log file, we can see that the triggered alarm started the scaleup_policy and the stack went into "UPDATE IN_PROGRESS" state. We see two scale-up events because 2 more instances get spawned; remember, we set the maximum number of instances to 3. The first instance was deployed during stack creation, and the remaining 2 were triggered by alarms: the first alarm triggered the second instance, and as utilization stayed above the threshold for the next minute, the third instance was triggered.


2016-10-10 12:04:48.641 22213 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm scaleup_policy, new state alarm
2016-10-10 12:04:48.680 22213 INFO heat.engine.resources.openstack.heat.scaling_policy [-] scaleup_policy Alarm, adjusting Group scale_group with id teststack1-scale_group-ujt3ixg3yvfr by 1
2016-10-10 12:04:48.802 22215 INFO heat.engine.stack [-] Stack UPDATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr): Stack UPDATE started
2016-10-10 12:04:48.858 22215 INFO heat.engine.resource [-] updating TemplateResource "ws5tn26msbub" [11dbdc5d-dc67-489b-9738-7ee6984c286e] Stack "teststack1-scale_group-ujt3ixg3yvfr" [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:04:48.919 22214 INFO heat.engine.service [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Updating stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w
2016-10-10 12:04:48.922 22214 INFO heat.engine.resource [req-ddf93f69-5fdc-4218-a427-aae312f4a02d - 41294ddb9af747c8b46dc258c3fa61e1] Validating Server "server"
2016-10-10 12:04:49.317 22214 INFO heat.engine.stack [-] Stack UPDATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack UPDATE started
2016-10-10 12:04:49.346 22215 INFO heat.engine.resource [-] creating TemplateResource "mmm6uxmlf3om" Stack "teststack1-scale_group-ujt3ixg3yvfr" [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:04:49.366 22214 INFO heat.engine.update [-] Resource server for stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w updated
2016-10-10 12:04:49.405 22212 INFO heat.engine.service [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx
2016-10-10 12:04:49.419 22212 INFO heat.engine.resource [req-ddf93f69-5fdc-4218-a427-aae312f4a02d - 41294ddb9af747c8b46dc258c3fa61e1] Validating Server "server"
2016-10-10 12:04:49.879 22212 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx): Stack CREATE started
2016-10-10 12:04:49.889 22212 INFO heat.engine.resource [-] creating Server "server" Stack "teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx" [36c613d1-b89f-4409-b965-521b1ae2cbf3]
2016-10-10 12:04:50.406 22214 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack DELETE started
2016-10-10 12:04:50.443 22214 INFO heat.engine.stack [-] Stack DELETE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack DELETE completed successfully
2016-10-10 12:04:50.930 22215 INFO heat.engine.update [-] Resource ws5tn26msbub for stack teststack1-scale_group-ujt3ixg3yvfr updated
2016-10-10 12:05:07.865 22212 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx): Stack CREATE completed successfully


Step 7 : We can look at the event list of the created stack for more understanding.

[root@allinone7 VIKRANT(keystone_admin)]# heat event-list 0f163366-c599-4fd5-a797-86cf40f05150
| resource_name  | id                                   | resource_status_reason                                                                                                           | resource_status    | event_time           |
| teststack1     | 6ddf5a0c-c345-43ad-8c20-54d67cf8e2a6 | Stack CREATE started                                                                                                             | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |
| scale_group    | 528ed942-551d-482b-95ee-ab72a6f59280 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |
| scale_group    | 9d7cf5f4-027f-4c97-92f2-86d208a4be77 | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:53Z |
| scaleup_policy | a78e9577-1251-4221-a1c7-9da4636550b7 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:53Z |
| scaleup_policy | cb690cd5-5243-47f0-8f9f-2d88ca13780f | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:55Z |
| cpu_alarm_high | 9addbccf-cc18-410a-b1f6-401b56b09065 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:55Z |
| cpu_alarm_high | ed9a5f49-d4ea-4f68-af9e-355d2e1b9113 | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:56Z |
| teststack1     | 14be65fc-1b33-478e-9f81-413b694c8312 | Stack CREATE completed successfully                                                                                              | CREATE_COMPLETE    | 2016-10-10T12:02:56Z |
| scaleup_policy | e65de9b1-6854-4f27-8256-f5f9a13890df | alarm state changed from insufficient data to alarm (Transition to alarm due to 1 samples outside threshold, most recent: 97.05) | SIGNAL_COMPLETE    | 2016-10-10T12:05:09Z |
| scaleup_policy | a499bfef-1824-4ef3-8c7f-e86cf14e11d6 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.7083333333)      | SIGNAL_COMPLETE    | 2016-10-10T12:07:14Z |
| scaleup_policy | 2a801848-bf9f-41e0-acac-e526d60f5791 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.0833333333)      | SIGNAL_COMPLETE    | 2016-10-10T12:08:55Z |
| scaleup_policy | f57fda03-2017-4408-b4b9-f302a1fad430 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.1444444444)      | SIGNAL_COMPLETE    | 2016-10-10T12:10:55Z |

We can see three instances running.

[root@allinone7 VIKRANT(keystone_admin)]# nova list
| ID                                   | Name                                                  | Status | Task State | Power State | Networks              |
| 041345cc-4ebf-429c-ab2b-ef0f757bfeaa | te-yvfr-mmm6uxmlf3om-m5idcplscfcx-server-hxaqqmxzv4jp | ACTIVE | –          | Running     | internal1= |
| bebbd5a0-e0b2-40b4-8810-978b86626267 | te-yvfr-r7vn2e5c34b6-by4oq22vnxbo-server-ktblt3evhvd6 | ACTIVE | –          | Running     | internal1= |
| 845abae0-9834-443b-82ec-d55bce2243ab | te-yvfr-ws5tn26msbub-zpeebwwwa67w-server-pxu6pqcssmmb | ACTIVE | –          | Running     | internal1= |


[1] https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/7/single/auto-scaling-for-compute/#example_auto_scaling_based_on_cpu_usage

What is Terraform and how to use it ?

Terraform is a tool to configure and provision cloud infrastructure; it provides functionality similar to heat. The major difference is that terraform is cloud agnostic: it can be used for openstack, amazon, or other cloud providers, whereas heat functionality is limited to openstack. In this article, I am going to show you the usage of terraform with openstack.

As more and more companies are moving towards hybrid cloud architectures, a tool like terraform provides great benefits.

Terraform configurations can be written in Terraform's native HCL format or in JSON. I have used JSON in this article.

Step 1 : Download terraform according to your OS. I downloaded terraform for Linux 64-bit from the official download page.

Step 2 : Unzip the downloaded zip file and copy the binary to /usr/bin so that it can be used as a command.

[root@allinone9 ~(keystone_admin)]# unzip terraform_0.7.4_linux_amd64.zip
Archive:  terraform_0.7.4_linux_amd64.zip
inflating: terraform

[root@allinone9 ~(keystone_admin)]# cp -p terraform /usr/bin/
[root@allinone9 ~(keystone_admin)]# terraform

Step 3 : Also, install the graphviz tool; it can render the output of "terraform graph" (for example: terraform graph | dot -Tpng > graph.png) into a diagram of the resource dependencies.

[root@allinone9 ~(keystone_admin)]# yum install -y graphviz

Step 4 : To use terraform, we create four files in a directory; the main logic lies in the main.tf.json file. Basically main.tf.json and vars.tf.json are the two mandatory files.

[root@allinone9 terrformexample1(keystone_admin)]# ll
total 32
-rw-r--r-- 1 root root  419 Sep 29 08:16 main.tf.json
-rw-r--r-- 1 root root  138 Sep 29 08:46 output.tf.json
-rw-r--r-- 1 root root  233 Sep 29 08:11 provider.tf.json
-rw-r--r-- 1 root root  177 Sep 29 08:12 vars.tf.json

Let’s check the content of these files.

a) In the provider.tf.json file I specify the provider I am going to use, along with the credentials for that provider. In this case, I am using openstack.

[root@allinone9 terrformexample1(keystone_admin)]# cat provider.tf.json
{
  "provider": {
    "openstack": {
      "user_name": "admin",
      "tenant_name": "admin",
      "password": "ed5432114db34e29",
      "auth_url": "…"
    }
  }
}

b) I have defined the image and flavor as variables in a separate file, to make the main logic more modular. Basically this acts like a heat environment file.

[root@allinone9 terrformexample1(keystone_admin)]# cat vars.tf.json
{
  "variable": {
    "image": {
      "default": "cirros"
    },
    "flavor": {
      "default": "m1.tiny"
    }
  }
}

c) The main.tf.json file contains the main resource definition. I use the variables defined in vars.tf.json in this file to spawn an instance. This file plays the same role as a heat resource definition file.

[root@allinone9 terrformexample1(keystone_admin)]# cat main.tf.json
{
  "resource": {
    "openstack_compute_instance_v2": {
      "tf-instance": {
        "name": "tf-instance",
        "image_name": "${var.image}",
        "flavor_name": "${var.flavor}",
        "security_groups": ["default"],
        "network": {
          "uuid": "1e149f28-66b3-4254-a88c-f1b42e7bc200"
        }
      }
    }
  }
}

Note : The security group must be given in list format, even though it is a single value; the argument type is hard coded as a list.

d) Output to print when the operation completes successfully. I am printing the instance IP. In the case of heat, this would go in the outputs section of the template.

[root@allinone9 terrformexample1(keystone_admin)]# cat output.tf.json
{
  "output": {
    "address": {
      "value": "${openstack_compute_instance_v2.tf-instance.access_ip_v4}"
    }
  }
}

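Since a single misplaced brace makes Terraform reject a .tf.json file, it can save time to confirm that each file parses as JSON before deploying. A minimal sketch using Python's stdlib JSON tool (the sample file below is illustrative; run the loop inside your Terraform directory):

```shell
# Sanity-check every *.tf.json before `terraform apply`.
# A sample file is created here purely for demonstration.
cat > sample.tf.json <<'EOF'
{"variable": {"image": {"default": "cirros"}}}
EOF
for f in *.tf.json; do
  python3 -m json.tool "$f" > /dev/null && echo "$f: OK"
done
```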
Step 5 : All the required files are in place, now issue the deployment command to create the instance.

[root@allinone9 terrformexample1(keystone_admin)]# terraform apply
openstack_compute_instance_v2.tf-instance: Creating…
access_ip_v4:               "" => "<computed>"
access_ip_v6:               "" => "<computed>"
flavor_id:                  "" => "<computed>"
flavor_name:                "" => "m1.tiny"
image_id:                   "" => "<computed>"
image_name:                 "" => "cirros"
name:                       "" => "tf-instance"
network.#:                  "" => "1"
network.0.access_network:   "" => "false"
network.0.fixed_ip_v4:      "" => "<computed>"
network.0.fixed_ip_v6:      "" => "<computed>"
network.0.floating_ip:      "" => "<computed>"
network.0.mac:              "" => "<computed>"
network.0.name:             "" => "<computed>"
network.0.port:             "" => "<computed>"
network.0.uuid:             "" => "1e149f28-66b3-4254-a88c-f1b42e7bc200"
region:                     "" => "RegionOne"
security_groups.#:          "" => "1"
security_groups.3814588639: "" => "default"
stop_before_destroy:        "" => "false"
openstack_compute_instance_v2.tf-instance: Still creating… (10s elapsed)
openstack_compute_instance_v2.tf-instance: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate


address =

The output above shows the instance information, including the instance IP address, because we asked for it in output.tf.json.

Step 6 : Verify the instance is spawned successfully.

[root@allinone9 terrformexample1(keystone_admin)]# nova list | grep tf-instance
| 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f | tf-instance   | ACTIVE  | -          | Running     | internal1= |

Step 7 : If later on we want to check the information about our deployment, we can use below commands.

[root@allinone9 terrformexample1(keystone_admin)]# terraform output
address =

[root@allinone9 terrformexample1(keystone_admin)]# terraform show
id = 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f
access_ip_v4 =
access_ip_v6 =
flavor_id = eb45fb1b-1470-4315-81e5-ac5be702dbd2
flavor_name = m1.tiny
image_id = b74c6a4e-ccd4-4b47-9bca-8019d3ce44d9
image_name = cirros
metadata.% = 0
name = tf-instance
network.# = 1
network.0.access_network = false
network.0.fixed_ip_v4 =
network.0.fixed_ip_v6 =
network.0.floating_ip =
network.0.mac = fa:16:3e:ad:cb:6c
network.0.name = internal1
network.0.port =
network.0.uuid = 1e149f28-66b3-4254-a88c-f1b42e7bc200
region = RegionOne
security_groups.# = 1
security_groups.3814588639 = default
stop_before_destroy = false
volume.# = 0


address =

Step 8 : The deployment graph can be rendered as an image. I found this feature quite useful, as it makes the stack easy to visualize.

[root@allinone9 terrformexample1(keystone_admin)]# terraform graph | dot -Tpng > graph.png

Step 9 : If you are missing Heat commands like "resource-list", don't worry: equivalents are available in Terraform.

[root@allinone9 terrformexample1(keystone_admin)]# terraform state list
[root@allinone9 terrformexample1(keystone_admin)]# terraform state show
id                         = 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f
access_ip_v4               =
access_ip_v6               =
flavor_id                  = eb45fb1b-1470-4315-81e5-ac5be702dbd2
flavor_name                = m1.tiny
image_id                   = b74c6a4e-ccd4-4b47-9bca-8019d3ce44d9
image_name                 = cirros
metadata.%                 = 0
name                       = tf-instance
network.#                  = 1
network.0.access_network   = false
network.0.fixed_ip_v4      =
network.0.fixed_ip_v6      =
network.0.floating_ip      =
network.0.mac              = fa:16:3e:ad:cb:6c
network.0.name             = internal1
network.0.port             =
network.0.uuid             = 1e149f28-66b3-4254-a88c-f1b42e7bc200
region                     = RegionOne
security_groups.#          = 1
security_groups.3814588639 = default
stop_before_destroy        = false
volume.#                   = 0

Step 10 : Finally we can destroy the deployment.

[root@allinone9 terrformexample1(keystone_admin)]# terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

openstack_compute_instance_v2.tf-instance: Refreshing state… (ID: 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f)
openstack_compute_instance_v2.tf-instance: Destroying…
openstack_compute_instance_v2.tf-instance: Still destroying… (10s elapsed)
openstack_compute_instance_v2.tf-instance: Destruction complete

Destroy complete! Resources: 1 destroyed.


In this article, I have covered only the basic working functionality of Terraform; the tool offers many more features. Refer to the official HashiCorp site to learn more about them.


Step by step : Configuring OpenStack Neutron LBaaS in a Packstack setup

In this article, I am going to show the procedure for creating an LBaaSv1 load balancer in a Packstack setup using two instances.

First of all, I didn't find any ready-made image with an HTTP server in it, so I built my own Fedora 22 image with httpd and the cloud packages (cloud-utils, cloud-init) installed.

If you do not install the cloud packages, you will face issues while spawning instances: routes will not be configured inside the instance, and eventually you will not be able to reach it.

Step 1 : Download a Fedora 22 ISO and launch a KVM guest from it. Install the httpd and cloud packages in the guest.

Step 2 : Power off the KVM guest and locate its backing qcow2 image with the command below.

# virsh domblklist myimage

Here myimage is the KVM guest name.

Step 3 : Reset the image so it is clean for use in an OpenStack environment.

# virt-sysprep -d myimage

Step 4 : Use the qcow2 path found in Step 2 to compress the qcow2 image.

# ls -lsh /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2
1.8G -rw------- 1 qemu qemu 8.1G Mar 25 11:56 /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2

# virt-sparsify --compress /home/vaggarwa/VirtualMachines/fedora-unknown.qcow2 fedora22.qcow2

# ll -lsh fedora22.qcow2
662M -rw-r--r-- 1 root root 664M Mar 25 11:59 fedora22.qcow2

Notice the difference before and after compression. Upload this image to glance.
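A side note on reading those ls -lsh listings: the first column (1.8G) is the actual disk usage in blocks, while the size field (8.1G) is the apparent size; qcow2 images are sparse files, so the two differ. A quick illustration with a throwaway sparse file:

```shell
# Create a 100 MiB sparse file: large apparent size, near-zero disk usage
truncate -s 100M sparse.img
ls -ls sparse.img    # first column: blocks used; size field: apparent size
du -h sparse.img     # actual disk usage only
```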

Step 5 : Spawn two instances, web1 and web2. While spawning them, I inject index.html files whose contents are web1 and web2 respectively.

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index1.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web1

# nova boot --flavor m1.custom1 --security-groups lbsg --image c3dedff2-f0a9-4aa1-baa9-9cdc08860f6d --file /var/www/html/index.html=/root/index2.html --nic net-id=9ec24eff-f470-4d4e-8c23-9eeb41dfe749 web2

Note : I have created a new security group, lbsg, to allow HTTP/HTTPS traffic.
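The two injected files can be prepared with a trivial loop (file names match the --file source arguments above, created here in the working directory; the contents web1 and web2 make each backend identifiable later):

```shell
# Generate the per-instance index pages injected via `nova boot --file`
for i in 1 2; do
  echo "web$i" > "index$i.html"
done
cat index1.html index2.html
```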

Step 6 : Once the instances are spawned, log in to each instance and correct the SELinux context of the index.html file. Alternatively, you can disable SELinux in Step 1 to avoid this step.

# ip netns exec qdhcp-9ec24eff-f470-4d4e-8c23-9eeb41dfe749 ssh root@

# restorecon -Rv /var/www/html/index.html

Step 7 : Create a pool that balances traffic in ROUND_ROBIN fashion.

# neutron lb-pool-create --name lb1 --lb-method ROUND_ROBIN --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2

Note : This pool and the instances live on the internal network.

Step 8 : Add two instances as member of pool.

# neutron lb-member-create --address --protocol-port 80 lb1

# neutron lb-member-create --address --protocol-port 80 lb1

Step 9 : Create a virtual IP on the internal network. A port will be created corresponding to the virtual IP; we will attach the floating IP to that port.

# neutron lb-vip-create --name lb1-vip --protocol-port 80 --protocol HTTP --subnet 26316551-44d7-4326-b011-a519b556eda2 lb1

Step 10 : Attach the floating IP to the newly created port.

# neutron floatingip-associate 09bdbe29-fa85-4110-8dd2-50d274412d8e 25b892cb-44c3-49e2-88b3-0aec7ec8a026

Step 11 : LBaaS also creates a new network namespace.

# ip netns list

# ip netns exec qlbaas-b8daa41a-3e2a-408e-862b-20d3c52b1764 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
23: tap25b892cb-44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ae:0b:2a brd ff:ff:ff:ff:ff:ff
inet brd scope global tap25b892cb-44
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feae:b2a/64 scope link
valid_lft forever preferred_lft forever

Step 12 : I ran curl against the floating IP, and it confirmed that responses come from both pool members in round-robin order.

# for i in {1..5} ; do curl ; done
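To make the round-robin behaviour easy to eyeball, the responses can be tallied. The sketch below stubs out curl, since the floating IP is deployment-specific; in a real run, replace fake_curl with `curl http://<floating-ip>/`:

```shell
# Tally responses over several requests; with ROUND_ROBIN both members
# should appear an equal number of times. fake_curl stands in for curl.
fake_curl() { if [ $(($1 % 2)) -eq 1 ]; then echo web1; else echo web2; fi; }
for i in 1 2 3 4 5 6; do fake_curl "$i"; done | sort | uniq -c
```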


Flat Provider network with OVS

In this article, I am going to show the configuration of a flat provider network. It helps avoid NAT, which in turn improves performance. Most importantly, compute nodes can reach the external world directly, skipping the network node.

I referred to the link below for the configuration and for understanding the setup.


I am showing the setup on a Packstack all-in-one deployment.

Step 1 : As we are not going to use any tenant network here, I left tenant_network_types blank. flat is listed in type_drivers because my external network is of flat type; if you are using a VLAN provider network, adjust accordingly.

egrep -v "^(#|$)" /etc/neutron/plugin.ini
type_drivers = flat
tenant_network_types =
mechanism_drivers =openvswitch
flat_networks = external
enable_security_group = True

I will be creating a network that uses the physical network named external, hence the same name in flat_networks. Comment out the default VXLAN settings.
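For comparison, a hypothetical VLAN-provider variant of these settings might look like the fragment below. The range 100:200 is purely illustrative, and note that under ML2 the network_vlan_ranges option normally lives in the [ml2_type_vlan] section:

```ini
type_drivers = vlan
tenant_network_types =
mechanism_drivers = openvswitch
network_vlan_ranges = external:100:200
enable_security_group = True
```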

Step 2 : Our ML2 plugin file is configured; now it's the turn of the Open vSwitch configuration file.

As the physical network is named external, the same name appears in bridge_mappings. br-ex is the external bridge to which the physical port (interface) is attached. I have disabled tunneling.

egrep -v "^(#|$)" /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
enable_tunneling = False
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =
bridge_mappings = external:br-ex
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Step 3 : Creating external network.

[root@allinone7 ~(keystone_admin)]# neutron net-create external1 --shared --provider:physical_network external --provider:network_type flat
Created a new network:
| Field                     | Value                                |
| admin_state_up            | True                                 |
| id                        | 6960a06c-5352-419f-8455-80c4d43dedf8 |
| name                      | external1                            |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | a525deb290124433b80996d4f90b42ba     |

As I am using a flat network, network_type is flat; if your external network is a VLAN provider network, you also need to pass one more parameter, the segmentation ID. It's important to use the same physical_network name that you used in the Step 1 and Step 2 configuration files.

Step 4 : Creating subnet. My external network is
[root@allinone7 ~(keystone_admin)]# neutron net-list
| id                                   | name      | subnets |
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 |         |

[root@allinone7 ~(keystone_admin)]# neutron subnet-create external1 --name external1-subnet --gateway
Created a new subnet:
| Field             | Value                                                |
| allocation_pools  | {"start": "", "end": ""} |
| cidr              |                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        |                                        |
| host_routes       |                                                      |
| id                | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | external1-subnet                                     |
| network_id        | 6960a06c-5352-419f-8455-80c4d43dedf8                 |
| tenant_id         | a525deb290124433b80996d4f90b42ba                     |
[root@allinone7 ~(keystone_admin)]# neutron net-list
| id                                   | name      | subnets                                               |
| 6960a06c-5352-419f-8455-80c4d43dedf8 | external1 | 38ac41fd-edc7-4ad7-a7fa-1a06000fc4c7 |

Step 5 : Spawn the instance using the "external" network directly.

[root@allinone7 ~(keystone_admin)]# nova list
| ID                                   | Name           | Status | Task State | Power State | Networks                |
| 36934762-5769-4ac1-955e-fb475b8f6a76 | test-instance1 | ACTIVE | -          | Running     | external1= |

You will be able to connect to this instance directly.