How to configure Ceph as a Glance backend?

In this article, I am going to show how to configure Ceph as the backend for Glance. I have configured an all-in-one OpenStack setup using Packstack, and created a Ceph cluster with three nodes, each acting as both a MON and an OSD.

Step 1 : Created a new pool for the Glance backend.

[ceph@ceph1 ~]$ sudo ceph osd pool create images 64
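To confirm the pool exists and got the requested placement group count, a quick optional check; the pg count of 64 is simply what suits this small three-node cluster, so size it to your own OSD count:

[ceph@ceph1 ~]$ sudo ceph osd lspools
[ceph@ceph1 ~]$ sudo ceph osd pool get images pg_num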

Step 2 : Created a user with rwx access on the newly created 'images' pool.

[ceph@ceph1 ~]$ sudo ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring

Step 3 : Checked the auth list to verify the user's permissions. It should show output like below.

[ceph@ceph1 ~]$ sudo ceph auth list

client.images
	key: AQCwwilW/eEuBRAAMTaCyoQjarsxhJkTEBpaLw==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
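The full auth list can get long on a busy cluster; fetching just the new client gives the same information and is an easy way to double-check that the caps were typed correctly:

[ceph@ceph1 ~]$ sudo ceph auth get client.images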

Step 4 : Added the keyring entry to /etc/ceph/ceph.conf.

[ceph@ceph1 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 08cf2015-1de2-4e13-a29b-ac9a53230ec8
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.122.109,192.168.122.64,192.168.122.163
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

osd journal size = 100
osd pool default size = 2
osd pool default min size = 1

[mon]

mon osd allow primary affinity = 1

mon_clock_drift_allowed = 1
mon_clock_drift_warn_backoff = 30

[client.images]                                                                           <<<<
keyring = /etc/ceph/ceph.client.images.keyring

Step 5 : Pushed the modified ceph.conf to all Ceph nodes.

[ceph@ceph1 ~]$ sudo ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
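Before moving to the OpenStack side, it does no harm to confirm the cluster still reports healthy after the config push; a quick status check, nothing here is specific to this setup:

[ceph@ceph1 ~]$ sudo ceph -s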

Step 6 : Copied the ceph.conf and keyring files from the Ceph node to the OpenStack node.

[ceph@ceph1 ~]$ sudo scp /etc/ceph/ceph.conf root@192.168.122.88:/etc/ceph/

[ceph@ceph1 ~]$ scp -p ceph.client.images.keyring root@192.168.122.88:/etc/ceph/

Step 7 : Set the following ownership and permissions on the OpenStack node (the commands I used are sketched after the listing).

[root@opens1 ceph(keystone_admin)]# cd /etc/ceph/
[root@opens1 ceph(keystone_admin)]# ll
total 12
-rw-r-----. 1 glance glance  64 Oct 23 01:18 ceph.client.images.keyring
-rw-rw-r--. 1 root   root   524 Oct 23 01:19 ceph.conf
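If the files do not land with these permissions after the copy, something along these lines brings them in line. The only hard requirement is that the keyring is readable by the glance service user; 640 with glance:glance ownership is simply what I used here:

[root@opens1 ceph(keystone_admin)]# chown glance:glance /etc/ceph/ceph.client.images.keyring
[root@opens1 ceph(keystone_admin)]# chmod 640 /etc/ceph/ceph.client.images.keyring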

Step 8 : Took a backup of glance-api.conf and added the entries shown in the diff below to the live file (the '<' lines are the additions, compared against the untouched backup). The entries are also shown in context after the diff.

[root@opens1 ceph(keystone_admin)]# diff /etc/glance/glance-api.conf /var/tmp/glance-api.conf.backup
558,563d557
< default_store=rbd
< stores = rbd
< rbd_store_pool = images
< rbd_store_user = images
< rbd_store_ceph_conf = /etc/ceph/ceph.conf
< rbd_store_chunk_size = 8
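For reference, the added lines end up looking roughly like this in context. Depending on your Glance release the store options belong either under [DEFAULT] or under the [glance_store] section, so check which section line 558 of your file falls into before pasting; the section header below is an assumption based on newer releases:

[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8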

Step 9 : Restarted openstack-glance-api.service for the changes to take effect.

[root@opens1 ceph(keystone_admin)]# systemctl restart openstack-glance-api.service
[root@opens1 ceph(keystone_admin)]# systemctl status openstack-glance-api.service
openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled)
Active: active (running) since Fri 2015-10-23 01:34:23 EDT; 20s ago
Main PID: 22917 (glance-api)
CGroup: /system.slice/openstack-glance-api.service
├─22917 /usr/bin/python2 /usr/bin/glance-api
├─22924 /usr/bin/python2 /usr/bin/glance-api
├─22925 /usr/bin/python2 /usr/bin/glance-api
├─22926 /usr/bin/python2 /usr/bin/glance-api
└─22927 /usr/bin/python2 /usr/bin/glance-api
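If the service fails to come up, or image uploads start erroring out later, the Glance API log is the first place to look; the path below assumes the standard packaged log location:

[root@opens1 ceph(keystone_admin)]# tail -f /var/log/glance/api.log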

Step 10 : Uploaded an image and checked the contents of the images pool from the Ceph node.

[root@opens1 ceph(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| 8497bdbe-71f9-4cd1-94dd-94f479c8eae9 | cirros | qcow2       | bare             | 13200896  | active |
| 770bf2f2-1016-46a6-9876-b98fbf702722 | ubuntu | iso         | bare             | 601882624 | active |
+--------------------------------------+--------+-------------+------------------+-----------+--------+

[root@ceph1 ceph]# rados -p images ls | grep -i rbd_id
rbd_id.770bf2f2-1016-46a6-9876-b98fbf702722
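Only the ubuntu image shows up in the pool, presumably because the cirros image was uploaded before the backend was switched and is still served from the old store. The same RBD image can also be inspected with the rbd tool; the ID below is the one reported by glance image-list above:

[root@ceph1 ceph]# rbd -p images ls
[root@ceph1 ceph]# rbd -p images info 770bf2f2-1016-46a6-9876-b98fbf702722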

Step 11 : We can check the status from the OpenStack node as well.

[root@opens1 ceph(keystone_admin)]# ceph --user=images --keyring=/etc/ceph/ceph.client.images.keyring osd lspools
0 rbd,2 glance,3 images,
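To get a rough idea of how much space the uploaded images actually consume, the per-pool usage summary should also work with the same restricted user, since it only needs the read access on the monitors that the images user already has:

[root@opens1 ceph(keystone_admin)]# ceph --user=images --keyring=/etc/ceph/ceph.client.images.keyring df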
