How to add a new compute node to an existing all-in-one packstack setup?

I have already shown the packstack all-in-one installation in one of my earlier articles. In this article, I am going to add one more compute node to that existing setup. The existing all-in-one node will act as the controller node for the setup while also continuing to play the role of a compute node. I will refer to the all-in-one node as the controller node in the rest of the article.

My Setup Info :

Controller node with two interfaces, ens3 and ens8. ens3 is used for external connectivity and is attached to br-ex (the external bridge, 192.168.122.147); ens8 is used for connectivity with the second compute node.

Compute node with two interfaces, ens3 (192.168.122.233) and ens8. We will be using ens3 for external connectivity and ens8 for communication with the controller node.

 

Step 1 : Set up passwordless SSH for root between the two nodes and register the compute node with the appropriate channels.
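A minimal sketch of this step, assuming RHEL 7 nodes registered with subscription-manager; the repository IDs depend on the Red Hat OpenStack Platform version you are entitled to, so treat the placeholder below as an example only.

On the controller node (vswitch1):

# ssh-keygen -t rsa
# ssh-copy-id root@192.168.122.233

On the compute node (vswitch2), register the system and enable the base and OpenStack repositories:

# subscription-manager register
# subscription-manager repos --enable=rhel-7-server-rpms --enable=<your-openstack-repo>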

Step 2 : Modify the answer.txt file on the controller node according to your environment. Here is the difference between the original and the modified file; a scripted alternative is shown after the diff.

# diff /root/answer.txt.backup /root/answer.txt
70c70
< EXCLUDE_SERVERS=
---
> EXCLUDE_SERVERS=192.168.122.147
83c83
< CONFIG_COMPUTE_HOSTS=192.168.122.147
---
> CONFIG_COMPUTE_HOSTS=192.168.122.233
816c816
< CONFIG_NOVA_COMPUTE_PRIVIF=eth1
---
> CONFIG_NOVA_COMPUTE_PRIVIF=ens8
825c825
< CONFIG_NOVA_NETWORK_PRIVIF=eth1
---
> CONFIG_NOVA_NETWORK_PRIVIF=ens8
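If you prefer to script these edits rather than changing answer.txt by hand, the same four values can be set with sed; a sketch using the addresses and interface names from my setup:

# cp /root/answer.txt /root/answer.txt.backup
# sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=192.168.122.147/' /root/answer.txt
# sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.122.233/' /root/answer.txt
# sed -i 's/^CONFIG_NOVA_COMPUTE_PRIVIF=.*/CONFIG_NOVA_COMPUTE_PRIVIF=ens8/' /root/answer.txt
# sed -i 's/^CONFIG_NOVA_NETWORK_PRIVIF=.*/CONFIG_NOVA_NETWORK_PRIVIF=ens8/' /root/answer.txt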

Step 3 : Issue the below command on the controller node.

# packstack --answer-file=/root/answer.txt

Simultaneously, we can monitor the installation logs in another terminal by running tailf on the log file packstack creates.
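packstack prints the path of its log file when the run starts; on my installs it sits in a timestamped directory under /var/tmp/packstack/. For example (the directory name below is a placeholder, use the one from your run):

# tailf /var/tmp/packstack/<timestamp>/openstack-setup.log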

Step 4 : Once the installation is complete, check the packages installed on the compute node and the services running there.

[root@vswitch2 ~]# rpm -qa | grep -i openstack
openstack-neutron-openvswitch-2015.1.1-7.el7ost.noarch
openstack-ceilometer-compute-2015.1.1-1.el7ost.noarch
openstack-nova-common-2015.1.1-3.el7ost.noarch
openstack-selinux-0.6.43-1.el7ost.noarch
openstack-nova-compute-2015.1.1-3.el7ost.noarch
openstack-neutron-common-2015.1.1-7.el7ost.noarch
openstack-neutron-2015.1.1-7.el7ost.noarch
openstack-utils-2014.2-1.el7ost.noarch
openstack-ceilometer-common-2015.1.1-1.el7ost.noarch

[root@vswitch2 ~]# openstack-service status
MainPID=7426 Id=neutron-openvswitch-agent.service ActiveState=active
MainPID=6515 Id=openstack-ceilometer-compute.service ActiveState=active
MainPID=6558 Id=openstack-nova-compute.service ActiveState=active

Step 5 : Coming back to the controller node, we can see that two hypervisors are listed: vswitch1 (all-in-one) and vswitch2 (compute only).

[root@vswitch1 ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | vswitch1            | up    | enabled |
| 2  | vswitch2            | up    | enabled |
+----+---------------------+-------+---------+

Step 6 : Currently, no instance is running on the compute node vswitch2.

[root@vswitch1 ~(keystone_admin)]# nova hypervisor-servers vswitch2
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+

Step 7 : Boot an instance on the compute node vswitch2.

[root@vswitch1 ~(keystone_admin)]# nova boot --image 0b6c7ab2-cd95-4afa-a01d-fe7993ae4995 --flavor m1.tiny --nic net-id=64e86f08-ce73-4fdd-9581-da449e5f069b --availability-zone nova:vswitch2 test1vswitch2

Step 8 : We can now see the running instance on vswitch2.

[root@vswitch1 ~(keystone_admin)]# nova hypervisor-servers vswitch2
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| 75189fdb-2a37-4808-81f6-3de3ffc57020 | instance-0000000c | 2             | vswitch2            |
+--------------------------------------+-------------------+---------------+---------------------+

Step 9 : I tried to perform a block migration of the instance from vswitch2 to vswitch1, but it failed. I found the following error message in the nova-compute log file on vswitch2 (/var/log/nova/nova-compute.log):

2015-12-12 05:10:34.537 25360 ERROR nova.virt.libvirt.driver [req-5eceee6f-f819-49bc-9fc6-3668cfe1c2ba c2760d13bc3843f1ad57301795c0ca7b bc958f05f594434a8cd8702cbd02dc6d - - -] [instance: 75189fdb-2a37-4808-81f6-3de3ffc57020] Migration operation has aborted
2015-12-12 05:10:34.607 25360 ERROR nova.virt.libvirt.driver [req-5eceee6f-f819-49bc-9fc6-3668cfe1c2ba c2760d13bc3843f1ad57301795c0ca7b bc958f05f594434a8cd8702cbd02dc6d - - -] [instance: 75189fdb-2a37-4808-81f6-3de3ffc57020] Live Migration failure: internal error: unable to execute QEMU command 'migrate': this feature or command is not currently supported

I found a Red Hat bug opened for the same issue.

I modified the block_migration_flag in the /etc/nova/nova.conf file on both nodes as per https://bugzilla.redhat.com/show_bug.cgi?id=1211457.

FROM :

[root@vswitch2 ~]# grep -i block_migration /etc/nova/nova.conf
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC

TO :

[root@vswitch2 ~]# grep -i block_migration /etc/nova/nova.conf
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_NON_SHARED_INC

Then restart the nova compute service on both nodes.
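A sketch of applying the change with openstack-config (from the openstack-utils package shown installed earlier) and restarting the compute service; I am assuming block_migration_flag lives in the [libvirt] section of nova.conf, as it does in this Kilo-based release. Run the same on both nodes.

# openstack-config --set /etc/nova/nova.conf libvirt block_migration_flag "VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_NON_SHARED_INC"
# systemctl restart openstack-nova-compute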

Step 10 : I issued the migration command again, and this time it was successful.

[root@vswitch1 ~(keystone_admin)]# nova live-migration --block-migrate test1vswitch2 vswitch1

We can watch the logs simultaneously to understand what is going on in the background.
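One way is to tail the compute log on the destination node in a second terminal while the migration runs:

[root@vswitch1 ~]# tailf /var/log/nova/nova-compute.log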

In the compute logs, we can see that the migration completed successfully:

2015-12-12 05:15:47.305 25933 INFO nova.compute.manager [req-6431f934-c237-49fa-8851-b658aa134956 c2760d13bc3843f1ad57301795c0ca7b bc958f05f594434a8cd8702cbd02dc6d - - -] [instance: 75189fdb-2a37-4808-81f6-3de3ffc57020] You may see the error "libvirt: QEMU error: Domain not found: no domain with matching name." This error can be safely ignored.
2015-12-12 05:16:01.484 25933 INFO nova.compute.manager [-] [instance: 75189fdb-2a37-4808-81f6-3de3ffc57020] VM Stopped (Lifecycle Event)
2015-12-12 05:16:01.614 25933 INFO nova.compute.manager [req-3f93c820-2764-49b8-8473-2f710eeef014 - - - - -] [instance: 75189fdb-2a37-4808-81f6-3de3ffc57020] During the sync_power process the instance has moved from host vswitch1 to host vswitch2

We can verify the same using the below command; the instance is now running on vswitch1 instead of vswitch2.

[root@vswitch1 ~(keystone_admin)]# nova show test1vswitch2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | vswitch1                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | vswitch1                                                 |
Note : I was not using shared storage, hence I used the block migration technique, which is comparable to VMware Storage vMotion. If you are using shared storage, you can simply live-migrate the VM to another hypervisor, which is equivalent to VMware vMotion.
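With shared storage, the same move would just be a plain live migration without the block-migrate flag, for example:

[root@vswitch1 ~(keystone_admin)]# nova live-migration test1vswitch2 vswitch1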
