Author Archives: Vikrant

About Vikrant

I have approximately 5.5 years of diversified experience in Unix, Linux and virtualization.

Dind (Docker in Docker) on Atomic host

I am preparing for the Red Hat Docker exam (EX276). During preparation I came across the term Docker in Docker, commonly called Dind. It's not part of the exam content, but I started digging into it and found some useful blogs on the net to get started. In this article I am going to share the steps I followed to see Dind working.

First, check the docker version present on the atomic host by default:

-bash-4.2# docker version
Client:
Version: 1.10.3
API version: 1.22
Package version: docker-common-1.10.3-59.el7.x86_64
Go version: go1.6.2
Git commit: 429be27-unsupported
Built: Fri Nov 18 17:03:44 2016
OS/Arch: linux/amd64

Server:
Version: 1.10.3
API version: 1.22
Package version: docker-common-1.10.3-59.el7.x86_64
Go version: go1.6.2
Git commit: 429be27-unsupported
Built: Fri Nov 18 17:03:44 2016
OS/Arch: linux/amd64

Start a new container using the jpetazzo/dind image. The --privileged flag is needed so that the inner Docker daemon can manage cgroups and networking.

-bash-4.2# docker run --privileged -t -i jpetazzo/dind
Unable to find image ‘jpetazzo/dind:latest’ locally
Trying to pull repository registry.access.redhat.com/jpetazzo/dind …
unknown: Not Found
Trying to pull repository docker.io/jpetazzo/dind …
latest: Pulling from docker.io/jpetazzo/dind
16da43b30d89: Pull complete
1840843dafed: Pull complete
91246eb75b7d: Pull complete
7faa681b41d7: Pull complete
97b84c64d426: Pull complete
a1bc5a98c1dc: Pull complete
ce58583abd90: Pull complete
66270626f481: Pull complete
Digest: sha256:63a7c4b0f69fbc21755e677f85532ce327e0240aedf6afa0421ca1f3a66dbf2e
Status: Downloaded newer image for docker.io/jpetazzo/dind:latest
ln: failed to create symbolic link ‘/sys/fs/cgroup/systemd/name=systemd’: Operation not permitted
INFO[0001] libcontainerd: new containerd process, pid: 72
ERRO[0002] devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option
ERRO[0002] ‘overlay’ not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
INFO[0002] Graph migration to content-addressability took 0.00 seconds
INFO[0002] Loading containers: start.
WARN[0002] Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file ‘/lib/modules/3.10.0-514.2.2.el7.x86_64/modules.dep.bin’
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file ‘/lib/modules/3.10.0-514.2.2.el7.x86_64/modules.dep.bin’
, error: exit status 1
WARN[0002] Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file ‘/lib/modules/3.10.0-514.2.2.el7.x86_64/modules.dep.bin’`, error: exit status 1
WARN[0002] Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file ‘/lib/modules/3.10.0-514.2.2.el7.x86_64/modules.dep.bin’`, error: exit status 1
INFO[0002] Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address

INFO[0002] Loading containers: done.
INFO[0002] Daemon has completed initialization
INFO[0002] Docker daemon commit=7392c3b graphdriver=vfs version=1.12.5
INFO[0002] API listen on /var/run/docker.sock

Check the docker version inside it.

root@0d97538dcb4d:/# docker version
Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:30:42 2016
OS/Arch: linux/amd64

Server:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:30:42 2016
OS/Arch: linux/amd64

I am inside the container now and I can run docker commands from within it. Isn't that cool?

root@0d97538dcb4d:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Let’s try to spawn a new container inside a container.

While trying to spawn the new container, I ran into a "no space left on device" issue.

root@0d97538dcb4d:/# docker run -t -i mysql bash
ERRO[0403] Handler for POST /v1.24/containers/create returned error: No such image: mysql:latest
Unable to find image ‘mysql:latest’ locally
latest: Pulling from library/mysql
75a822cd7888: Pull complete
b8d5846e536a: Pull complete
b75e9152a170: Pull complete
832e6b030496: Pull complete
fe4a6c835905: Pull complete
c3f247e29ab1: Extracting [==================================================>] 19.02 kB/19.02 kB
21be3e562071: Download complete
c7399d6bf033: Downloading [=====================================> ] 57.31 MB/76.98 MB
c7399d6bf033: Downloading [==================================================>] 76.98 MB/76.98 MB
3835a628a92f: Download complete
530d0fb19b13: Download complete
ERRO[0549] Download failed: write /var/lib/docker/tmp/GetImageBlob883713982: no space left on device
ERRO[0549] Not continuing with pull after error: write /var/lib/docker/tmp/GetImageBlob883713982: no space left on device
docker: write /var/lib/docker/tmp/GetImageBlob883713982: no space left on device.
See 'docker run --help'.

Checked the filesystem utilization inside the container and found that the host root filesystem (the /dev/mapper/rhelah-root mount backing /etc/hosts) had hardly any space left.

~~~
root@0d97538dcb4d:/# df -Ph
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-5137701-f6e4b3a3f41c934e95188beb881a8fa964bcdcdef7bd86f46dfb8c3740905410 10G 456M 9.6G 5% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/rhelah-root 3.0G 2.3G 779M 75% /etc/hosts
shm 64M 0 64M 0% /dev/shm
~~~

I fixed the issue by expanding the root filesystem on the host atomic machine. After that I was able to see the expanded filesystem space inside the container.
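
For reference, on RHEL Atomic Host the root filesystem usually sits on an LVM logical volume (rhelah-root), so the expansion itself can be done roughly like the sketch below. This assumes there is free space left in the volume group; the 5G increment is only an example.

~~~
# Check free space in the volume group backing the root LV
-bash-4.2# vgs
# Grow the root logical volume and resize the XFS filesystem in one step
-bash-4.2# lvextend -r -L +5G /dev/mapper/rhelah-root
~~~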

~~~
root@0d97538dcb4d:/# df -Ph
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-5137701-f6e4b3a3f41c934e95188beb881a8fa964bcdcdef7bd86f46dfb8c3740905410 10G 456M 9.6G 5% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/rhelah-root 8.0G 2.3G 5.8G 28% /etc/hosts
shm 64M 0 64M 0% /dev/shm
~~~

Let's try to start the nested container again. This time it starts successfully and I am dropped into the new nested container.

root@0d97538dcb4d:/# docker run -t -i mysql bash
ERRO[0917] Handler for POST /v1.24/containers/create returned error: No such image: mysql:latest
Unable to find image ‘mysql:latest’ locally
latest: Pulling from library/mysql
75a822cd7888: Pull complete
b8d5846e536a: Pull complete
b75e9152a170: Pull complete
832e6b030496: Pull complete
fe4a6c835905: Pull complete
c3f247e29ab1: Pull complete
21be3e562071: Pull complete
c7399d6bf033: Pull complete
ccdaeae6c735: Pull complete
3835a628a92f: Pull complete
530d0fb19b13: Pull complete
Digest: sha256:de1570492c641112fdb94db9c788f6a400f71f25a920da95ec88c3848450ed57
Status: Downloaded newer image for mysql:latest
root@bb8d6a3218ab:/#

Let's switch to the base atomic machine and see how many containers show up in the output.

We see only one container; the second container is running inside it.

-bash-4.2# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0d97538dcb4d jpetazzo/dind “wrapdocker” About an hour ago Up About an hour compassionate_borg

Let's log in to this container and look for the second running container. Great, we can see the nested container.

-bash-4.2# docker exec -it 0d97538dcb4d bash

root@0d97538dcb4d:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb8d6a3218ab mysql “docker-entrypoint.sh” About an hour ago Up About an hour 3306/tcp gloomy_hawking

Let's check some network settings for this setup. As you may already know, the default installation of atomic host creates a docker0 bridge, and the first-level container gets an IP address from its default range.

In my case the atomic host has this docker0 linux bridge.

-bash-4.2# ip a show docker0
3: docker0: mtu 1500 qdisc noqueue state UP
link/ether 02:42:5a:45:6e:46 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5aff:fe45:6e46/64 scope link
valid_lft forever preferred_lft forever

Inspecting the IP address of the first-level container: it's in the docker0 range.

-bash-4.2# docker inspect -f '{{.NetworkSettings.IPAddress}}' 0d97538dcb4d
172.17.0.2

Let's log in to the container and look at the interfaces assigned to it.

-bash-4.2# docker exec -it 0d97538dcb4d bash
root@0d97538dcb4d:/# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: docker0: mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d5:03:ab:de brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe03:abde/64 scope link
valid_lft forever preferred_lft forever
4: veth2b9b80a@if3: mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 62:60:f4:0a:07:f3 brd ff:ff:ff:ff:ff:ff
inet6 fe80::6060:f4ff:fe0a:7f3/64 scope link
valid_lft forever preferred_lft forever
96: eth0@if97: mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever

We can see that it has its own docker0 linux bridge with the subnet range 172.18.0.0/16; this bridge is used to assign IP addresses to the second-level, or nested, containers.

root@0d97538dcb4d:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb8d6a3218ab mysql “docker-entrypoint.sh” About an hour ago Up About an hour 3306/tcp gloomy_hawking
root@0d97538dcb4d:/# docker inspect -f '{{.NetworkSettings.IPAddress}}' bb8d6a3218ab
172.18.0.2

This IP address will not be reachable from atomic host.

-bash-4.2# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
^C
--- 172.18.0.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2003ms

You can, however, reach "172.17.0.2" from the atomic host, since the host is directly attached to that subnet via docker0; the NAT rules below only handle container traffic leaving the host.

-bash-4.2# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere

Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere


Ansible configuration file precedence

Installing Ansible provides a default /etc/ansible/ansible.cfg configuration file, in which we can set various options, such as the default user that playbooks should run as on remote servers and the privilege escalation mode for that user.

Here are the default sections present in the ansible.cfg file.

# grep '^[[]' /etc/ansible/ansible.cfg
[defaults]
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[accelerate]
[selinux]

As we typically manage multiple servers with Ansible, and different groups of servers often have different requirements, the need for separate ansible.cfg files for these groups can easily arise. Having multiple ansible.cfg files raises a genuine question: which one will be used? Here is the precedence order, from highest to lowest.

  • $ANSIBLE_CONFIG   environment variable pointing to the location of an Ansible configuration file.
  • ./ansible.cfg   in the current directory from which the ansible playbook or ad-hoc command is run.
  • ~/.ansible.cfg   in the home directory of the user running the ansible command.
  • /etc/ansible/ansible.cfg   the default ansible.cfg file.

IMP : Ansible only uses the configuration settings from the first file found in this sequence; it will not fall back to lower-precedence files for settings that are missing from the chosen file.

Ex : If ./ansible.cfg is chosen because $ANSIBLE_CONFIG is not defined, Ansible uses only the settings present in ./ansible.cfg; if a setting/parameter is missing from this file, Ansible will not look it up in ~/.ansible.cfg or in the default /etc/ansible/ansible.cfg.
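
A quick way to confirm which configuration file actually won is to check ansible --version, which prints the configuration file in use (the path and version shown below are only illustrative):

# export ANSIBLE_CONFIG=/home/user/project/ansible.cfg
# ansible --version
ansible 2.2.0.0
  config file = /home/user/project/ansible.cfg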

Useful hacks for Ansible to make troubleshooting easy

I completed my Red Hat Ansible certification a couple of months back, but after that I didn't get a chance to get my hands dirty with it. I planned to revisit my Ansible concepts so that I can start using it in my daily work.

Here is my first post on some Ansible tips and tricks.

Useful tips about yaml templates :

* Ansible is based on YAML playbooks, and in a YAML template proper indentation matters a lot. While editing YAML templates in vim I faced a lot of difficulty keeping the spacing consistent, so to avoid pressing the space bar again and again, here is a useful vim trick that inserts two spaces whenever Tab is pressed.

autocmd FileType yaml setlocal ai ts=2 sw=2 et

The above line needs to be added to the "$HOME/.vimrc" file; after that, whenever Tab is pressed, two spaces are inserted automatically.

* A couple of online tools are available to check YAML syntax. My search for a CLI-based YAML syntax check ended with the following python command:

# python -c 'import yaml, sys; print yaml.load(sys.stdin)' < test1.yml
[{'tasks': [{'name': 'first task', 'service': 'name=iptables enabled=true'}, {'name': 'second task', 'service': 'name=sshd enabled=true'}], 'hosts': 'all', 'name': 'a simple playbook'}]

Here test1.yml is the YAML file whose syntax needs to be checked. In my case there was no syntax error, so it simply printed the parsed YAML content as a Python data structure.
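
If only Python 3 is available, a roughly equivalent one-liner (assuming PyYAML is installed) is:

# python3 -c 'import yaml, sys; print(yaml.safe_load(sys.stdin))' < test1.yml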

* Another way to check the syntax is using the ansible-playbook command. If there is an error, it reports the approximate position of the syntax error.

# ansible-playbook --syntax-check test1.yml

playbook: test1.yml

Troubleshooting Ansible playbooks :

In the previous section we took care of the syntax of the Ansible playbook; now let's look at steps for troubleshooting the logic of the playbook.

* I always prefer to first run an Ansible playbook in dry-run (check) mode, which makes no actual changes and only reports what changes it would make. Be careful: some modules don't respect check mode, but it's still a safer option, as most modules do.

# ansible-playbook --check test1.yml

PLAY [a simple playbook] ******************************************************

TASK: [first task] ************************************************************
ok: [localhost]

TASK: [second task] ***********************************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

* Another method is to run the playbook step by step instead of running the whole playbook in a single shot. It asks for confirmation before each task and only runs the task if you answer "y".

# ansible-playbook --step test1.yml

PLAY [a simple playbook] ******************************************************
Perform task: first task (y/n/c): y

Perform task: first task (y/n/c):  ********************************************
ok: [localhost]
Perform task: second task (y/n/c): y

Perform task: second task (y/n/c):  *******************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

* There is also an option to start at a particular task from the list of tasks in the playbook. I had two tasks in my playbook and started from the second one, skipping the first. Of course, you can also use tags to achieve the same thing.

# ansible-playbook --start-at-task="second task" test1.yml

PLAY [a simple playbook] ******************************************************

TASK: [second task] ***********************************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0

* By default ansible only dumps ad-hoc command or playbook output to the terminal; it is not recorded in a log file, but that doesn't mean it can't be. Ansible gives us the flexibility to write the output to a log file so that it can be reviewed later.

# grep log_path /etc/ansible/ansible.cfg
#log_path = /var/log/ansible.log

By default the log_path parameter is commented out in the ansible.cfg file; it can be set to the path of the log file in which you want to record the output. Alternatively, the ANSIBLE_LOG_PATH environment variable can be set, which takes precedence over the location mentioned in ansible.cfg.
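
For example, to log a single run to a custom location without touching ansible.cfg (the path is just an example):

# export ANSIBLE_LOG_PATH=/tmp/ansible-run.log
# ansible-playbook test1.yml
# tail /tmp/ansible-run.log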

* Here comes my favorite: the debug module, which can be used inside an Ansible playbook to print variable values. This feature is key to managing tasks that use variables to communicate with each other (for example, using the output of one task as the input to the following one).

In the first example it prints fact information.

- debug: msg="The free memory for this system is {{ ansible_memfree_mb }}"

In the second example, the content of a registered variable is printed, and only when running at verbosity level 2 or higher.

- debug: var=output1 verbosity=2
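
For context, here is a minimal playbook sketch showing where a variable like output1 in the second example could come from; the task names and the uptime command are only placeholders:

---
- name: demo of registered variables with debug
  hosts: all
  tasks:
    - name: capture the output of a command
      command: uptime
      register: output1

    - name: print the whole registered variable (shown only with -vv or higher)
      debug: var=output1 verbosity=2

    - name: print just the command's stdout
      debug: msg="Uptime reported is {{ output1.stdout }}"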

Difference between virtio-blk and virtio-scsi ?

Both virtio-blk and virtio-scsi are paravirtualized device types, so what exactly is the difference between them? I had this question in my mind for some time. As readers may already know, by default an OpenStack instance gets a virtio-blk disk, which is why it shows up inside the instance as vd*. We also have the option to assign a virtio-scsi disk to an instance by setting metadata properties on the glance image used to spawn it; once the instance is spawned, the disk shows up as sd*.
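
For reference, the glance metadata mentioned above is typically set roughly like this (a sketch; replace the image ID with your own):

# glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-id>

Instances booted from an image tagged this way attach their disks through a virtio-scsi controller and therefore see them as sd*.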

A major advantage of virtio-scsi over virtio-blk is support for multiple block devices per virtual SCSI adapter. That said, virtio-scsi is not a replacement for virtio-blk; development work on virtio-blk is still going on.

virtio-blk

  • Three types of storage can be attached to a guest machine using virtio-blk.
    • File
    • Disk
    • LUN

Let's understand the I/O path for virtio-blk and what improvements are coming to it in the near future.

Guests :

App --> VFS/Filesystem --> Generic Block Layer --> IO scheduler --> virtio-blk.ko

Host :

QEMU (user space) --> VFS/Filesystem --> Generic Block Layer --> IO Scheduler --> Block Device Layer --> Hard Disk.

We can see that in the above flow two I/O schedulers come into the picture, which doesn't make sense for all kinds of I/O patterns; hence, in the "Guests" flow, the scheduler is being replaced with a BIO-based virtio-blk path. The scheduler-based option will still be available, in case some applications take advantage of the scheduler.

Eventually it would be like :

  • struct request based [Using IO scheduler in guest]
  • struct bio based [Not using IO scheduler in guest]

This support was merged in kernel 3.7.

Add 'virtio_blk.use_bio=1' to the kernel command line of the guest; no change is needed on the host machine. It is not enabled by default.
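
For example, on a RHEL/CentOS 7 style guest the option could be added roughly like this (a sketch; adjust for your distribution and bootloader):

# Append the option inside the GRUB_CMDLINE_LINUX quotes
sed -i 's/^GRUB_CMDLINE_LINUX="/&virtio_blk.use_bio=1 /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot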

Kernel developers are planning to make this feature smarter by enabling it automatically depending on the underlying device and choosing the best I/O path according to the workload.

Host:

The host-side virtio-blk implementation options include:

  1. QEMU (current) : a global mutex is the main source of the bottleneck, because only one thread can submit an I/O at a time.
  2. QEMU data plane : each virtio-blk device has a dedicated thread to handle requests; requests are processed using Linux AIO directly, without going through the QEMU block layer.
  3. vhost-blk : an in-kernel virtio-blk device accelerator, similar to vhost-net. It skips host user-space involvement, which helps avoid context switches.

virtio-blk mainly lacks the following capabilities because it is not based on the SCSI model:

  • Thin-provisioned Storage for manageability
  • HA cluster on virtual machines.
  • Backup server for reliability.

Since it is not based on the SCSI protocol, it lacks capabilities such as SCSI Persistent Reservations, which are required when disks attached to VMs are used in a cluster environment; persistent reservations help avoid data corruption on shared devices.

Exception : with virtio-blk, SCSI commands do work when the storage is attached to the guest as a LUN.

virtio-scsi

  • It has mainly three kind of configurations.
    • Qemu [User space target]
      • File
      • Disk
      • LUN   << SCSI command compatible.
    • LIO target [Kernel space target]
    • libiscsi [User Space iscsi initiator]  << SCSI command compatible.

It can support thousands of disks per PCI device, and these are true SCSI devices. Since the naming convention in the guest shows them as sd*, it is also a good fit for p2v/v2v migration.

Difference between neutron LBaaS v1 and LBaaS v2 ?

LBaaS v2 is not a new topic anymore; most customers are switching from LBaaS v1 to LBaaS v2. I have written blog posts in the past about configuring both; in case you missed them, they are located at LBaaSv1 and LBaaSv2.

Still, in Red Hat OpenStack there is no HA functionality for the load balancer itself; this means that if your load balancer service is running on a controller node in an HA setup and that node goes down, we have to fix things manually. There are articles on the internet describing workarounds to make LBaaS HA work, but I have never tried them.

In this post I am going to show the improvements of LBaaS v2 over LBaaS v1. I will also shed some light on the Octavia project, which can provide HA capabilities for the load balancing service and is basically used for Elastic Load Balancing.

Let's start with a comparison of LBaaS v1 and LBaaS v2.

LBaaS v1 provides capabilities such as:

  • L4 Load balancing
  • Session persistence including cookies based
  • Cookie insertion
  • Driver interface for 3rd parties.

Basic flow of the request in lbaas v1 :

Request --> VIP --> Pool [Optional Health Monitor] --> Members [Backend instances]

[Figure: LBaaS v1 request flow]

Missing features :

  • L7 Content switching [IMP feature]
  • Multiple TCP ports per load balancer
  • TLS Termination at load balancer to avoid the load on instances.
  • Load balancer running inside instances.

LBaaS v2 was introduced in the Kilo release; at that time it did not have features like L7 content switching, pool sharing, and single-create LB [creating a load balancer in a single API call]. These features were included in Liberty, and the pool sharing feature was introduced in Mitaka.

Basic flow of the request in lbaas v2 :

Request --> VIP --> Listeners --> Pool [Optional Health Monitor] --> Members [Backend instances]

[Figure: LBaaS v2 request flow]

Let's see what components/changes have been made in the newer version to make the missing features available:

  1. L7 Content switching

Why do we require this feature :

A layer 7 load balancer consists of a listener that accepts requests on behalf of a number of back-end pools and distributes those requests based on policies that use application data to determine which pools should service any given request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one group of back-end servers (pool) can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

This feature is introduced by adding an additional component, the "listener", to the LBaaS v2 architecture. We can add policies and then apply rules to a policy to get L7 load balancing. A very informative article about L7 content switching is available at link; it covers a lot of practical scenarios.

2. Multiple TCP ports per load balancer

In LBaaS v1 we could only have one TCP port (such as 80 or 443) on the load balancer associated with the VIP (Virtual IP); we couldn't have two ports/protocols on the same VIP, which means you could load balance either HTTP or HTTPS traffic, but not both. This limit has been lifted in LBaaS v2, as we can now have multiple ports associated with a single VIP.

It can be done with pool sharing or without pool sharing.

With pool sharing :

[Figure: multiple listeners with pool sharing]

Without Pool Sharing :

[Figure: multiple listeners without pool sharing]
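
For illustration, with the LBaaS v2 CLI two listeners can be attached to the same load balancer roughly like this (a sketch; the names and subnet are made up):

# neutron lbaas-loadbalancer-create --name lb1 private-subnet
# neutron lbaas-listener-create --name listener-http --loadbalancer lb1 --protocol HTTP --protocol-port 80
# neutron lbaas-listener-create --name listener-https --loadbalancer lb1 --protocol HTTPS --protocol-port 443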

3. TLS Termination at load balancer to avoid the load on instances.

We can have TLS termination at the load balancer level instead of terminating at the backend servers. This reduces the load on the backend servers, and it also makes L7 content switching possible for HTTPS traffic, since the load balancer sees the decrypted requests. Barbican containers are used to hold the certificates for termination at the load balancer level.

4. Load balancer running inside instances.

I have not seen this implemented without Octavia, which uses "amphora" instances to run the load balancer.

IMP : Both load balancer versions can’t be run simultaneously.

As promised at the beginning of the article, let's see what capabilities Octavia adds on top of LBaaS v2.

Here is the architecture of Octavia :

[Figure: Octavia architecture]

The Octavia API lacks an authentication facility, hence it accepts API calls from neutron instead of exposing its API directly.

As I mentioned earlier, with Octavia the load balancer runs inside nova instances, so Octavia needs to communicate with components like nova and neutron to spawn the instances in which the load balancer [haproxy] runs. Okay, what else is required to spawn those instances?

  • Create amphora disk image using OpenStack diskimage-builder.
  • Create a Nova flavor for the amphorae.
  • Add amphora disk image to glance.
  • Tag the above glance disk image with ‘amphora’.

But now the amphora instance becomes a single point of failure, and its capacity to handle load is limited. From the Mitaka release onwards we can run a single load balancer replicated across two instances running in active/passive mode, exchanging heartbeats using VRRP. If one instance goes down, the other starts serving the load balancer service.

So what's the major advantage of Octavia? Here comes the term Elastic Load Balancing (ELB). Currently a VIP is associated with a single load balancer, a 1:1 relation, but with ELB the relation between the VIP and load balancers is 1:N: the VIP distributes incoming traffic over a pool of "amphora" instances.

In ELB, traffic is getting distributed at two levels :

  1. VIP to pool of amphora instances.
  2. amphora instances to back-end instances.

We can also use heat orchestration with ceilometer (alarm) functionality to manage the number of instances in the amphora pool.

Combining the power of the amphora instance pool and failover, we can have a robust N+1 topology in which, if any VM from the amphora pool fails, it is replaced by a standby VM.

 

I hope this article sheds some light on the jargon of the neutron LBaaS world 🙂

How to make auto-scaling work for nova with heat and ceilometer ?

I had been meaning to test this feature for a very long time but never got a chance to dig into it. Today I got an opportunity to work on it. I prepared a packstack OSP 7 [Kilo] setup and referred to the wonderful official Red Hat documentation [1] to make this work.

In this article I am going to cover only scale-up scenario.

Step 1 : While installing packstack, we need to set the below options to "y" so that the required components get installed.

# egrep "HEAT|CEILOMETER" /root/answer.txt | grep INSTALL
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_HEAT_CLOUDWATCH_INSTALL=y
CONFIG_HEAT_CFN_INSTALL=y

If you have already deployed a packstack setup, no need to worry: just enable these options in the answer.txt file that was used to create the existing setup and run the packstack installation command again.

Step 2 : Created three templates to make this work.

cirros.yaml – contains the definition for spawning an instance; its user-data script generates CPU load to trigger the cpu utilization alarm.

environment.yaml – Environment file to call cirros.yaml template.

sample.yaml – contains the main logic for scaling up.

# cat cirros.yaml
heat_template_version: 2014-10-16
description: A simple server.
resources:
  server:
    type: OS::Nova::Server
    properties:
      #block_device_mapping:
      #  - device_name: vda
      #    delete_on_termination: true
      #    volume_id: { get_resource: volume }
      image: cirros
      flavor: m1.tiny
      networks:
        - network: internal1
      user_data_format: RAW
      user_data: |
        #!/bin/sh
        while [ 1 ] ; do echo $((13**99)) 1>/dev/null 2>&1; done

# cat environment.yaml
resource_registry:
  "OS::Nova::Server::Cirros": "cirros.yaml"

# cat sample.yaml
heat_template_version: 2014-10-16
description: A simple auto scaling group.
resources:
  scale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        type: OS::Nova::Server::Cirros
  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scale_group }
      cooldown: 60
      scaling_adjustment: +1
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 20
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
      comparison_operator: gt

 

Shedding some light on the sample.yaml file: initially only one instance is spawned, and it scales up to a maximum of 3 instances. The ceilometer cpu_util threshold is set to 20.

Step 3 : Modify the ceilometer sampling interval for cpu_util in the "/etc/ceilometer/pipeline.yaml" file. I changed this value from the default of 10 minutes to 1 minute.

    - name: cpu_source
      interval: 60
      meters:
        - "cpu"
      sinks:
        - cpu_sink

Restart all openstack services after making this change.
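
On a packstack all-in-one node this can be done with the openstack-utils helper, if it is installed (a sketch; you can equally restart the individual openstack-ceilometer-* systemd units):

# openstack-service restart ceilometer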

Step 4 : Let’s create a stack now.

[root@allinone7 VIKRANT(keystone_admin)]# heat stack-create teststack1 -f sample.yaml -e environment.yaml
+————————————–+————+——————–+———————-+
| id                                   | stack_name | stack_status       | creation_time        |
+————————————–+————+——————–+———————-+
| 0f163366-c599-4fd5-a797-86cf40f05150 | teststack1 | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |
+————————————–+————+——————–+———————-+

The instance spawned successfully, and the alarm was created once the heat stack creation completed.

[root@allinone7 VIKRANT(keystone_admin)]# nova list
+————————————–+——————————————————-+——–+————+————-+———————–+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks              |
+————————————–+——————————————————-+——–+————+————-+———————–+
| 845abae0-9834-443b-82ec-d55bce2243ab | te-yvfr-ws5tn26msbub-zpeebwwwa67w-server-pxu6pqcssmmb | ACTIVE | –          | Running     | internal1=10.10.10.53 |
+————————————–+——————————————————-+——–+————+————-+———————–+

[root@allinone7 VIKRANT(keystone_admin)]# ceilometer alarm-list
+————————————–+—————————————-+——————-+———-+———+————+——————————–+——————+
| Alarm ID                             | Name                                   | State             | Severity | Enabled | Continuous | Alarm condition                | Time constraints |
+————————————–+—————————————-+——————-+———-+———+————+——————————–+——————+
| 7746e457-9114-4cc6-8408-16b14322e937 | teststack1-cpu_alarm_high-sctookginoqz | insufficient data | low      | True    | True       | cpu_util > 20.0 during 1 x 60s | None             |
+————————————–+—————————————-+——————-+———-+———+————+——————————–+——————+

Checking the events in heat-engine.log file.

~~~
2016-10-10 12:02:37.499 22212 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1): Stack CREATE started
2016-10-10 12:02:37.510 22212 INFO heat.engine.resource [-] creating AutoScalingResourceGroup “scale_group” Stack “teststack1” [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:37.558 22215 INFO heat.engine.service [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr
2016-10-10 12:02:37.572 22215 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating TemplateResource “ws5tn26msbub”
2016-10-10 12:02:37.585 22215 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating Server “server”
2016-10-10 12:02:37.639 22215 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr): Stack CREATE started
2016-10-10 12:02:37.650 22215 INFO heat.engine.resource [-] creating TemplateResource “ws5tn26msbub” Stack “teststack1-scale_group-ujt3ixg3yvfr” [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:02:37.699 22214 INFO heat.engine.service [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w
2016-10-10 12:02:37.712 22214 INFO heat.engine.resource [req-681ddfb8-3ca6-4ecb-a8af-f35ceb358138 f6a950be30fd41488cf85b907dfa41b5 41294ddb9af747c8b46dc258c3fa61e1] Validating Server “server”
2016-10-10 12:02:38.004 22214 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack CREATE started
2016-10-10 12:02:38.022 22214 INFO heat.engine.resource [-] creating Server “server” Stack “teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w” [11dbdc5d-dc67-489b-9738-7ee6984c286e]
2016-10-10 12:02:42.965 22213 INFO heat.engine.service [req-e6410d2a-6f85-404d-a675-897c8a254241 – -] Service 13d36b70-a2f6-4fec-8d2a-c904a2f9c461 is updated
2016-10-10 12:02:42.969 22214 INFO heat.engine.service [req-531481c7-5fd1-4c25-837c-172b2b7c9423 – -] Service 71fb5520-7064-4cee-9123-74f6d7b86955 is updated
2016-10-10 12:02:42.970 22215 INFO heat.engine.service [req-6fe46418-bf3d-4555-a77c-8c800a414ba8 – -] Service f0706340-54f8-42f1-a647-c77513aef3a5 is updated
2016-10-10 12:02:42.971 22212 INFO heat.engine.service [req-82f5974f-0e77-4b60-ac5e-f3c849812fe1 – -] Service 083acd77-cb7f-45fc-80f0-9d41eaf2a37d is updated
2016-10-10 12:02:53.228 22214 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack CREATE completed successfully
2016-10-10 12:02:53.549 22215 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr): Stack CREATE completed successfully
2016-10-10 12:02:53.960 22212 INFO heat.engine.resource [-] creating AutoScalingPolicy “scaleup_policy” Stack “teststack1” [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:55.152 22212 INFO heat.engine.resource [-] creating CeilometerAlarm “cpu_alarm_high” Stack “teststack1” [0f163366-c599-4fd5-a797-86cf40f05150]
2016-10-10 12:02:56.379 22212 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1): Stack CREATE completed successfully
~~~

Step 5 : Once the alarm is triggered, it will initiate the creation of one more instance.

[root@allinone7 VIKRANT(keystone_admin)]# ceilometer alarm-history 7746e457-9114-4cc6-8408-16b14322e937
+——————+—————————-+———————————————————————-+
| Type             | Timestamp                  | Detail                                                               |
+——————+—————————-+———————————————————————-+
| state transition | 2016-10-10T12:04:48.492000 | state: alarm                                                         |
| creation         | 2016-10-10T12:02:55.247000 | name: teststack1-cpu_alarm_high-sctookginoqz                         |
|                  |                            | description: Alarm when cpu_util is gt a avg of 20.0 over 60 seconds |
|                  |                            | type: threshold                                                      |
|                  |                            | rule: cpu_util > 20.0 during 1 x 60s                                 |
|                  |                            | time_constraints: None                                               |
+——————+—————————-+———————————————————————-+

Log from ceilometer log file.

~~~
From : /var/log/ceilometer/alarm-evaluator.log

2016-10-10 12:04:48.488 16550 INFO ceilometer.alarm.evaluator [-] alarm 7746e457-9114-4cc6-8408-16b14322e937 transitioning to alarm because Transition to alarm due to 1 samples outside threshold, most recent: 97.05
~~~

Step 6 : In the heat-engine.log file, we can see that the triggered alarm started the scaleup_policy and the stack went into the "UPDATE IN_PROGRESS" state. We see two scale-up events because two additional instances get spawned: remember that we set the maximum number of instances to 3, the first instance was deployed during stack creation, and the remaining two are triggered by alarms. At the first alarm the second instance was triggered, and since utilization stayed above the threshold for the next minute, the third instance was triggered as well.

~~~

2016-10-10 12:04:48.641 22213 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm scaleup_policy, new state alarm
2016-10-10 12:04:48.680 22213 INFO heat.engine.resources.openstack.heat.scaling_policy [-] scaleup_policy Alarm, adjusting Group scale_group with id teststack1-scale_group-ujt3ixg3yvfr by 1
2016-10-10 12:04:48.802 22215 INFO heat.engine.stack [-] Stack UPDATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr): Stack UPDATE started
2016-10-10 12:04:48.858 22215 INFO heat.engine.resource [-] updating TemplateResource “ws5tn26msbub” [11dbdc5d-dc67-489b-9738-7ee6984c286e] Stack “teststack1-scale_group-ujt3ixg3yvfr” [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:04:48.919 22214 INFO heat.engine.service [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Updating stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w
2016-10-10 12:04:48.922 22214 INFO heat.engine.resource [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Validating Server “server”
2016-10-10 12:04:49.317 22214 INFO heat.engine.stack [-] Stack UPDATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack UPDATE started
2016-10-10 12:04:49.346 22215 INFO heat.engine.resource [-] creating TemplateResource “mmm6uxmlf3om” Stack “teststack1-scale_group-ujt3ixg3yvfr” [0c311ad5-cb76-4956-b038-ab2e44721cf1]
2016-10-10 12:04:49.366 22214 INFO heat.engine.update [-] Resource server for stack teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w updated
2016-10-10 12:04:49.405 22212 INFO heat.engine.service [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Creating stack teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx
2016-10-10 12:04:49.419 22212 INFO heat.engine.resource [req-ddf93f69-5fdc-4218-a427-aae312f4a02d – 41294ddb9af747c8b46dc258c3fa61e1] Validating Server “server”
2016-10-10 12:04:49.879 22212 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx): Stack CREATE started
2016-10-10 12:04:49.889 22212 INFO heat.engine.resource [-] creating Server “server” Stack “teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx” [36c613d1-b89f-4409-b965-521b1ae2cbf3]
2016-10-10 12:04:50.406 22214 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack DELETE started
2016-10-10 12:04:50.443 22214 INFO heat.engine.stack [-] Stack DELETE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-ws5tn26msbub-zpeebwwwa67w): Stack DELETE completed successfully
2016-10-10 12:04:50.930 22215 INFO heat.engine.update [-] Resource ws5tn26msbub for stack teststack1-scale_group-ujt3ixg3yvfr updated
2016-10-10 12:05:07.865 22212 INFO heat.engine.stack [-] Stack CREATE COMPLETE (teststack1-scale_group-ujt3ixg3yvfr-mmm6uxmlf3om-m5idcplscfcx): Stack CREATE completed successfully

~~~

Step 7 : We can check the event list of the created stack for a better understanding.

[root@allinone7 VIKRANT(keystone_admin)]# heat event-list 0f163366-c599-4fd5-a797-86cf40f05150
+—————-+————————————–+———————————————————————————————————————————-+——————–+———————-+
| resource_name  | id                                   | resource_status_reason                                                                                                           | resource_status    | event_time           |
+—————-+————————————–+———————————————————————————————————————————-+——————–+———————-+
| teststack1     | 6ddf5a0c-c345-43ad-8c20-54d67cf8e2a6 | Stack CREATE started                                                                                                             | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |
| scale_group    | 528ed942-551d-482b-95ee-ab72a6f59280 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:37Z |
| scale_group    | 9d7cf5f4-027f-4c97-92f2-86d208a4be77 | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:53Z |
| scaleup_policy | a78e9577-1251-4221-a1c7-9da4636550b7 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:53Z |
| scaleup_policy | cb690cd5-5243-47f0-8f9f-2d88ca13780f | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:55Z |
| cpu_alarm_high | 9addbccf-cc18-410a-b1f6-401b56b09065 | state changed                                                                                                                    | CREATE_IN_PROGRESS | 2016-10-10T12:02:55Z |
| cpu_alarm_high | ed9a5f49-d4ea-4f68-af9e-355d2e1b9113 | state changed                                                                                                                    | CREATE_COMPLETE    | 2016-10-10T12:02:56Z |
| teststack1     | 14be65fc-1b33-478e-9f81-413b694c8312 | Stack CREATE completed successfully                                                                                              | CREATE_COMPLETE    | 2016-10-10T12:02:56Z |
| scaleup_policy | e65de9b1-6854-4f27-8256-f5f9a13890df | alarm state changed from insufficient data to alarm (Transition to alarm due to 1 samples outside threshold, most recent: 97.05) | SIGNAL_COMPLETE    | 2016-10-10T12:05:09Z |
| scaleup_policy | a499bfef-1824-4ef3-8c7f-e86cf14e11d6 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.7083333333)      | SIGNAL_COMPLETE    | 2016-10-10T12:07:14Z |
| scaleup_policy | 2a801848-bf9f-41e0-acac-e526d60f5791 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.0833333333)      | SIGNAL_COMPLETE    | 2016-10-10T12:08:55Z |
| scaleup_policy | f57fda03-2017-4408-b4b9-f302a1fad430 | alarm state changed from alarm to alarm (Remaining as alarm due to 1 samples outside threshold, most recent: 95.1444444444)      | SIGNAL_COMPLETE    | 2016-10-10T12:10:55Z |
+—————-+————————————–+———————————————————————————————————————————-+——————–+———————-+

We can see three instances running.

[root@allinone7 VIKRANT(keystone_admin)]# nova list
+————————————–+——————————————————-+——–+————+————-+———————–+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks              |
+————————————–+——————————————————-+——–+————+————-+———————–+
| 041345cc-4ebf-429c-ab2b-ef0f757bfeaa | te-yvfr-mmm6uxmlf3om-m5idcplscfcx-server-hxaqqmxzv4jp | ACTIVE | –          | Running     | internal1=10.10.10.54 |
| bebbd5a0-e0b2-40b4-8810-978b86626267 | te-yvfr-r7vn2e5c34b6-by4oq22vnxbo-server-ktblt3evhvd6 | ACTIVE | –          | Running     | internal1=10.10.10.55 |
| 845abae0-9834-443b-82ec-d55bce2243ab | te-yvfr-ws5tn26msbub-zpeebwwwa67w-server-pxu6pqcssmmb | ACTIVE | –          | Running     | internal1=10.10.10.53 |
+————————————–+——————————————————-+——–+————+————-+———————–+

 

[1] https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/7/single/auto-scaling-for-compute/#example_auto_scaling_based_on_cpu_usage

What is Terraform and how to use it ?

Terraform is a tool to configure and provision cloud infrastructure; it provides functionality similar to heat. The major difference is that Terraform is cloud agnostic: it can be used with OpenStack, Amazon and other cloud providers, whereas heat is limited to OpenStack. In this article, I am going to show the usage of Terraform with OpenStack.

As more and more companies move towards hybrid cloud architectures, a tool like Terraform provides great benefits.

Terraform configurations can be written in HCL (HashiCorp Configuration Language) or JSON. I have used JSON in this article.

Step 1 : Download Terraform for your OS. I downloaded the Linux 64-bit build from the official download page.
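
For the 0.7.4 Linux 64-bit build used in this article, the download looks roughly like this (URL following HashiCorp's releases site layout):

[root@allinone9 ~(keystone_admin)]# wget https://releases.hashicorp.com/terraform/0.7.4/terraform_0.7.4_linux_amd64.zip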

Step 2 : Unzip the downloaded zip file and copy the binary in /usr/bin so that it can be used as a command.

[root@allinone9 ~(keystone_admin)]# unzip terraform_0.7.4_linux_amd64.zip
Archive:  terraform_0.7.4_linux_amd64.zip
inflating: terraform

[root@allinone9 ~(keystone_admin)]# cp -p terraform /usr/bin/
[root@allinone9 ~(keystone_admin)]# terraform

Step 3 : Also, install the graphviz tool which we will be using later in this article.

[root@allinone9 ~(keystone_admin)]# yum install -y graphviz

Step 4 : To use Terraform, we create four files in a directory; the main logic lies in the main.tf.json file. Basically, main.tf.json and vars.tf.json are the two mandatory files.

[root@allinone9 terrformexample1(keystone_admin)]# ll
total 32
-rw-r--r-- 1 root root  419 Sep 29 08:16 main.tf.json
-rw-r--r-- 1 root root  138 Sep 29 08:46 output.tf.json
-rw-r--r-- 1 root root  233 Sep 29 08:11 provider.tf.json
-rw-r--r-- 1 root root  177 Sep 29 08:12 vars.tf.json

Let’s check the content of these files.

a) In the provider.tf.json file I specify the provider I am going to use, along with the credentials for that provider. In this case, I am using OpenStack.

[root@allinone9 terrformexample1(keystone_admin)]# cat provider.tf.json
{
  "provider": {
    "openstack": {
      "user_name": "admin",
      "tenant_name": "admin",
      "password": "ed5432114db34e29",
      "auth_url": "http://192.168.122.12:5000/v2.0"
    }
  }
}

b) I have defined the image and flavor as variables in a separate file to make the main logic more modular. Basically this acts like a heat environment file.

[root@allinone9 terrformexample1(keystone_admin)]# cat vars.tf.json
{
  "variable": {
    "image": {
      "default": "cirros"
    }
  },
  "variable": {
    "flavor": {
      "default": "m1.tiny"
    }
  }
}

c) The main.tf.json file contains the main resource definition. I am using the variables defined in vars.tf.json in this file to spawn an instance. This file plays the same role as a heat resource definition file.

[root@allinone9 terrformexample1(keystone_admin)]# cat main.tf.json
{
  "resource": {
    "openstack_compute_instance_v2": {
      "tf-instance": {
        "name": "tf-instance",
        "image_name": "${var.image}",
        "flavor_name": "${var.flavor}",
        "security_groups": ["default"],
        "network": {
          "uuid": "1e149f28-66b3-4254-a88c-f1b42e7bc200"
        }
      }
    }
  }
}

Note : security_groups must be given as a list, even when there is only a single value; the provider expects a list here.

d) Output to print when the operation completes successfully. I am printing the instance IP. In the case of heat, outputs are defined in the resource definition (template) file itself.

[root@allinone9 terrformexample1(keystone_admin)]# cat output.tf.json
{
  "output": {
    "address": {
      "value": "${openstack_compute_instance_v2.tf-instance.access_ip_v4}"
    }
  }
}
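
Optionally, before deploying you can do a dry run with terraform plan, which only prints the resources Terraform would create without touching OpenStack:

[root@allinone9 terrformexample1(keystone_admin)]# terraform plan

If the plan output looks right, proceed with the apply shown in the next step.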

Step 5 : All the required files are in place; now issue the deployment command to create the instance.

[root@allinone9 terrformexample1(keystone_admin)]# terraform apply
openstack_compute_instance_v2.tf-instance: Creating…
access_ip_v4:               “” => “<computed>”
access_ip_v6:               “” => “<computed>”
flavor_id:                  “” => “<computed>”
flavor_name:                “” => “m1.tiny”
image_id:                   “” => “<computed>”
image_name:                 “” => “cirros”
name:                       “” => “tf-instance”
network.#:                  “” => “1”
network.0.access_network:   “” => “false”
network.0.fixed_ip_v4:      “” => “<computed>”
network.0.fixed_ip_v6:      “” => “<computed>”
network.0.floating_ip:      “” => “<computed>”
network.0.mac:              “” => “<computed>”
network.0.name:             “” => “<computed>”
network.0.port:             “” => “<computed>”
network.0.uuid:             “” => “1e149f28-66b3-4254-a88c-f1b42e7bc200”
region:                     “” => “RegionOne”
security_groups.#:          “” => “1”
security_groups.3814588639: “” => “default”
stop_before_destroy:        “” => “false”
openstack_compute_instance_v2.tf-instance: Still creating… (10s elapsed)
openstack_compute_instance_v2.tf-instance: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

address = 10.10.10.12

The above output shows the instance information and the IP address of the instance, because we told Terraform to print the IP address as an output.

Step 6 : Verify the instance is spawned successfully.

[root@allinone9 terrformexample1(keystone_admin)]# nova list | grep tf-instance
| 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f | tf-instance   | ACTIVE  | –          | Running     | internal1=10.10.10.12 |

Step 7 : If later on we want to check the information about our deployment, we can use below commands.

[root@allinone9 terrformexample1(keystone_admin)]# terraform output
address = 10.10.10.12

[root@allinone9 terrformexample1(keystone_admin)]# terraform show
openstack_compute_instance_v2.tf-instance:
id = 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f
access_ip_v4 = 10.10.10.12
access_ip_v6 =
flavor_id = eb45fb1b-1470-4315-81e5-ac5be702dbd2
flavor_name = m1.tiny
image_id = b74c6a4e-ccd4-4b47-9bca-8019d3ce44d9
image_name = cirros
metadata.% = 0
name = tf-instance
network.# = 1
network.0.access_network = false
network.0.fixed_ip_v4 = 10.10.10.12
network.0.fixed_ip_v6 =
network.0.floating_ip =
network.0.mac = fa:16:3e:ad:cb:6c
network.0.name = internal1
network.0.port =
network.0.uuid = 1e149f28-66b3-4254-a88c-f1b42e7bc200
region = RegionOne
security_groups.# = 1
security_groups.3814588639 = default
stop_before_destroy = false
volume.# = 0

Outputs:

address = 10.10.10.12

Step 8 : The deployment graph can be dumped into an image. I found this feature quite useful, as it makes the deployment easy to visualize.

[root@allinone9 terrformexample1(keystone_admin)]# terraform graph | dot -Tpng > graph.png

Step 9 : If you are missing heat commands like "resource-list", don't worry; equivalents are also available in Terraform.

[root@allinone9 terrformexample1(keystone_admin)]# terraform state list
openstack_compute_instance_v2.tf-instance
[root@allinone9 terrformexample1(keystone_admin)]# terraform state show
id                         = 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f
access_ip_v4               = 10.10.10.12
access_ip_v6               =
flavor_id                  = eb45fb1b-1470-4315-81e5-ac5be702dbd2
flavor_name                = m1.tiny
image_id                   = b74c6a4e-ccd4-4b47-9bca-8019d3ce44d9
image_name                 = cirros
metadata.%                 = 0
name                       = tf-instance
network.#                  = 1
network.0.access_network   = false
network.0.fixed_ip_v4      = 10.10.10.12
network.0.fixed_ip_v6      =
network.0.floating_ip      =
network.0.mac              = fa:16:3e:ad:cb:6c
network.0.name             = internal1
network.0.port             =
network.0.uuid             = 1e149f28-66b3-4254-a88c-f1b42e7bc200
region                     = RegionOne
security_groups.#          = 1
security_groups.3814588639 = default
stop_before_destroy        = false
volume.#                   = 0

Step 10 : Finally we can destroy the deployment.

[root@allinone9 terrformexample1(keystone_admin)]# terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only ‘yes’ will be accepted to confirm.

Enter a value: yes

openstack_compute_instance_v2.tf-instance: Refreshing state… (ID: 10f635b3-a7bb-40ef-a3e7-9c7fef0a712f)
openstack_compute_instance_v2.tf-instance: Destroying…
openstack_compute_instance_v2.tf-instance: Still destroying… (10s elapsed)
openstack_compute_instance_v2.tf-instance: Destruction complete

Destroy complete! Resources: 1 destroyed.

 

In this article I have just covered the basic working functionality of Terraform; there are a lot of other features available in this tool. You can refer to the official HashiCorp site to learn more about Terraform's features.