How to configure ceph as the nova (compute) backend?

In my previous article, I showed how to configure ceph as the backend for glance. In this article, I am configuring ceph as the backend for nova. Whenever we launch an instance in openstack, nova creates ephemeral disks which normally live locally on the compute node under /var/lib/nova/instances/{UUID}/* ; with ceph as the backend, these disks are stored in a ceph pool instead.
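
Before changing anything, you can check which image backend nova is currently using; if images_type is unset (or set to "default"), the ephemeral disks stay on the local filesystem:

[root@opens1 ~]# grep images_type /etc/nova/nova.conf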

Step 1 : Create a pool in the ceph cluster.

[ceph@ceph1 ceph-deploy]$ sudo ceph osd pool create vms 128
pool 'vms' created
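
If you want to double-check the pool, you can list the pools and read back the pg count we passed above:

[ceph@ceph1 ceph-deploy]$ sudo ceph osd lspools
[ceph@ceph1 ceph-deploy]$ sudo ceph osd pool get vms pg_num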

Step 2 : Create a user with permissions on the pool.

[ceph@ceph1 ceph-deploy]$ sudo ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring
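
To verify that the user was created with the intended capabilities, you can ask the cluster to print them back:

[ceph@ceph1 ceph-deploy]$ sudo ceph auth get client.nova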

Step 3 : Copy the key of the created user to the openstack node, along with the keyring file.

[ceph@ceph1 ceph-deploy]$ sudo ceph auth get-key client.nova | ssh root@192.168.122.88 tee /root/client.nova.key
root@192.168.122.88's password:
AQDpCypWyIlNEBAAFW2weR95/qUUgKAnK/0pfg==

[ceph@ceph1 ceph-deploy]$ scp /etc/ceph/ceph.client.nova.keyring root@192.168.122.88:/etc/ceph
root@192.168.122.88's password:
ceph.client.nova.keyring                                                                                                                   100%   62     0.1KB/s   00:00

Step 4 : Change the ownership and permissions of the copied keyring file on the openstack node.

[root@opens1 ceph]# chown nova:nova /etc/ceph/ceph.client.nova.keyring
[root@opens1 ceph]# chmod 0640 !$
chmod 0640 /etc/ceph/ceph.client.nova.keyring
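
A quick listing should confirm the keyring is now owned by nova:nova with mode 0640:

[root@opens1 ceph]# ls -l /etc/ceph/ceph.client.nova.keyring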

Step 5 : Add the keyring information to the ceph.conf file on the openstack node.

[root@opens1 ceph(keystone_admin)]# cat /etc/ceph/ceph.conf
[global]
fsid = 08cf2015-1de2-4e13-a29b-ac9a53230ec8
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.122.109,192.168.122.64,192.168.122.163
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

osd journal size = 100
osd pool default size = 2
osd pool default min size = 1

[mon]

mon osd allow primary affinity = 1

mon_clock_drift_allowed = 1
mon_clock_drift_warn_backoff = 30

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

[client.nova]
keyring = /etc/ceph/ceph.client.nova.keyring
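
Before moving on, it is worth confirming that the openstack node can actually reach the cluster as client.nova. Assuming the ceph client packages are installed on this node, both of these should succeed (the second one simply returns nothing, since the vms pool is still empty):

[root@opens1 ceph]# ceph --id nova -s
[root@opens1 ceph]# rbd --id nova -p vms ls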

Step 6 : Define a libvirt (virsh) secret so that launched instances can put their disks on the ceph pool.

[root@opens1 ceph]# uuidgen
9aa48a3c-db43-45ef-8db3-0ed288aa1714

[root@opens1 ceph(keystone_admin)]# cat nova-ceph.xml
<secret ephemeral="no" private="no">
<uuid>9aa48a3c-db43-45ef-8db3-0ed288aa1714</uuid>
<usage type="ceph">
<name>client.nova secret</name>
</usage>
</secret>

[root@opens1 ceph]# virsh secret-define --file nova-ceph.xml
Secret 9aa48a3c-db43-45ef-8db3-0ed288aa1714 created

[root@opens1 ceph]# virsh secret-set-value --secret 9aa48a3c-db43-45ef-8db3-0ed288aa1714 --base64 $(cat /root/client.nova.key)
Secret value set
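
You can confirm that libvirt knows about the secret and that it holds the right key:

[root@opens1 ceph]# virsh secret-list
[root@opens1 ceph]# virsh secret-get-value 9aa48a3c-db43-45ef-8db3-0ed288aa1714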

Step 7 : Make the changes to the nova.conf file. Before making changes, I took a backup of the file.

# sudo cp /etc/nova/nova.conf /etc/nova/nova.conf.orig
# Add the chunk below to /etc/nova/nova.conf

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = 9aa48a3c-db43-45ef-8db3-0ed288aa1714

[root@opens1 ceph]# systemctl restart openstack-nova-compute
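
To make sure the new settings were picked up, re-read the [libvirt] section (adjust the -A count if your section is longer) and check that the compute service came back up cleanly:

[root@opens1 ceph]# grep -A 5 '^\[libvirt\]' /etc/nova/nova.conf
[root@opens1 ceph]# systemctl status openstack-nova-compute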

Step 8 : Launch the instance.

[root@opens1 ceph(keystone_admin)]#  neutron net-list
+--------------------------------------+------------------+-------------------------------------------------------+
| id                                   | name             | subnets                                               |
+--------------------------------------+------------------+-------------------------------------------------------+
| 71eaffa2-8833-4348-809a-9b96e4352b90 | external_network | 4938c470-5819-4fa3-88af-9ed71575a248 192.168.122.0/24 |
| 42ef38e7-2b55-477c-bafc-3cd5f267e826 | private          | e66dafee-6c00-4bed-8ea9-cd1db312cf7a 10.0.0.0/24      |
+--------------------------------------+------------------+-------------------------------------------------------+

[root@opens1 ceph(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| 8497bdbe-71f9-4cd1-94dd-94f479c8eae9 | cirros | qcow2       | bare             | 13200896  | active |
| 770bf2f2-1016-46a6-9876-b98fbf702722 | ubuntu | iso         | bare             | 601882624 | active |
+--------------------------------------+--------+-------------+------------------+-----------+--------+

[root@opens1 ceph(keystone_admin)]# nova boot --flavor m1.small --nic net-id=42ef38e7-2b55-477c-bafc-3cd5f267e826 --image cirros cephvm
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-SRV-ATTR:host                 | -                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000006                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | QsADWEdb53qp                                  |
| config_drive                         |                                               |
| created                              | 2015-10-23T10:37:12Z                          |
| flavor                               | m1.small (2)                                  |
| hostId                               |                                               |
| id                                   | 219eb8aa-7f05-491a-8b34-4b26e2df1a7b          |
| image                                | cirros (8497bdbe-71f9-4cd1-94dd-94f479c8eae9) |
| key_name                             | -                                             |
| metadata                             | {}                                            |
| name                                 | cephvm                                        |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | d7a71f1899d744d798fadb988be994dd              |
| updated                              | 2015-10-23T10:37:12Z                          |
| user_id                              | b8e8ecad20c04b3e9eb0013fb03edce5              |
+--------------------------------------+-----------------------------------------------+
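
Once the build finishes the status should move from BUILD to ACTIVE; you can poll it with:

[root@opens1 ceph(keystone_admin)]# nova show cephvm | grep -E 'status|power_state'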

Step 9 : Check the contents of the pool on the ceph node.

[ceph@ceph1 ceph-deploy]$ sudo rbd -p vms ls
219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk

Step 10 : A directory corresponding to the instance ID is still created under /var/lib/nova/instances, but it no longer contains any ephemeral disk.

[root@opens1 instances(keystone_admin)]# pwd
/var/lib/nova/instances

[root@opens1 instances(keystone_admin)]# ll
total 4
drwxr-xr-x. 2 nova nova 42 Oct 23 06:37 219eb8aa-7f05-491a-8b34-4b26e2df1a7b
drwxr-xr-x. 2 nova nova 53 Oct 15 08:39 _base
-rw-r--r--. 1 nova nova 29 Oct 23 06:15 compute_nodes
drwxr-xr-x. 2 nova nova 91 Oct 15 08:39 locks

[root@opens1 instances(keystone_admin)]# cd 219eb8aa-7f05-491a-8b34-4b26e2df1a7b/

[root@opens1 219eb8aa-7f05-491a-8b34-4b26e2df1a7b(keystone_admin)]# ll
total 24
-rw-rw----. 1 qemu qemu 18898 Oct 23 06:37 console.log
-rw-r--r--. 1 nova nova  2759 Oct 23 06:37 libvirt.xml
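
The libvirt.xml above should now point the instance disk at ceph rather than at a local file; a quick grep makes that visible (expect a disk of type "network" with an rbd source in the vms pool):

[root@opens1 219eb8aa-7f05-491a-8b34-4b26e2df1a7b(keystone_admin)]# grep -A 3 '<disk' libvirt.xml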

Step 11 : Check the contents of the ceph pool in more detail.

[ceph@ceph1 ceph-deploy]$ sudo rbd -p vms info 219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk
rbd image '219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk':
size 20480 MB in 5120 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.85dc74b0dc51
format: 2
features: layering
flags:
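
The block_name_prefix shown above is also visible at the RADOS level: the image data is striped across objects carrying that prefix, and because RBD is thin provisioned only the objects that have actually been written exist so far:

[ceph@ceph1 ceph-deploy]$ sudo rados -p vms ls | grep rbd_data.85dc74b0dc51 | wc -l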

Step 12 : I have created a snapshot of the instance. One more image has now appeared in the vms pool.

[ceph@ceph1 ceph-deploy]$ sudo rbd -p vms ls
219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk
219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk_clone_a56d61de82ac4be9b42a8a3ad04d4b8d

[ceph@ceph1 ceph-deploy]$ sudo rbd -p vms info 219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk_clone_a56d61de82ac4be9b42a8a3ad04d4b8d
rbd image '219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk_clone_a56d61de82ac4be9b42a8a3ad04d4b8d':
size 20480 MB in 5120 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.85fed73abf1
format: 2
features: layering
flags:
parent: vms/219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk@a56d61de82ac4be9b42a8a3ad04d4b8d_to_be_deleted_by_glance
overlap: 20480 MB
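
The clone is layered on top of a protected snapshot of the original disk, which you can list with rbd snap ls. If you ever need the clone to stand on its own (normally nova and glance manage this relationship themselves), rbd flatten would copy the parent data into it:

[ceph@ceph1 ceph-deploy]$ sudo rbd snap ls vms/219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk
[ceph@ceph1 ceph-deploy]$ sudo rbd flatten vms/219eb8aa-7f05-491a-8b34-4b26e2df1a7b_disk_clone_a56d61de82ac4be9b42a8a3ad04d4b8d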
