
How to delete an old cinder volume after changing the storage backend?

I found the below article very useful; it explains how to delete old volumes that were created with a different backend after switching to a new one.

http://www.dischord.org/2015/12/22/cinder-multi-backend-with-multiple-ceph-pools/

Key takeaways:

  • Need to change the "attached_host" parameter in two tables, volumes and volume_attachment, in the cinder DB.
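For reference, that DB change can be sketched roughly as below. This is my own sketch, not taken from the article: the host/backend strings are placeholders, and you should verify the exact column names against your own cinder schema and back up the DB first.

```shell
# Rough sketch (hypothetical host/backend strings -- verify column names
# against your own cinder schema and back up the DB before running):
mysql cinder <<'SQL'
UPDATE volumes
   SET host = 'newhost@new-backend'
 WHERE host = 'oldhost@old-backend';
UPDATE volume_attachment
   SET attached_host = 'newhost'
 WHERE attached_host = 'oldhost';
SQL
```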

How to integrate glusterfs with openstack cinder?

In this article I am going to show you the integration of glusterfs with openstack cinder. The procedure is quite simple, but you may face a lot of issues while doing it.

********************************************************Gluster Setup****************************************************

Step 1 : I have created a two-node gluster setup with one volume (vol1) of replicated type.

[root@Glusternode ~]# gluster vol status vol1
Status of volume: vol1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.111.84:/Brick1node1/First1                49152   Y       3086
Brick 192.168.111.85:/Brick1node2/First2                49152   Y       2851
NFS Server on localhost                                 2049    Y       3018
Self-heal Daemon on localhost                           N/A     Y       3025
NFS Server on 192.168.111.84                            2049    Y       3054
Self-heal Daemon on 192.168.111.84                      N/A     Y       3061

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

Step 2 : Set permissions on the volume so that it can be used as a cinder backend in openstack.

I checked the uid and gid of the cinder user in my openstack setup. I set up openstack as an all-in-one deployment using packstack.
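The uid/gid check on the openstack node is just the standard `id` command; on an RDO/packstack install the cinder user is normally 165, but confirm it on your own node before setting the volume options below:

```shell
# Confirm the numeric uid/gid of the cinder user on the openstack node;
# these are the values fed to storage.owner-uid/storage.owner-gid below.
id cinder
```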

[root@Glusternode ~]# gluster vol set vol1 group virt
volume set: success
[root@Glusternode ~]# gluster vol set vol1 storage.owner-uid 165
volume set: success
[root@Glusternode ~]# gluster vol set vol1 storage.owner-gid 165
volume set: success

Step 3 : We need to add the below entry to the /etc/glusterfs/glusterd.vol file.

option rpc-auth-allow-insecure on

After adding the above entry, restart the glusterd service using the below command.

/etc/init.d/glusterd restart

Step 4 : With the required permissions in place, we can now start working on the openstack node.

********************************************************Openstack Setup****************************************************

Step 1 : As I mentioned earlier, I used packstack --allinone to install the openstack components on a single node.

Step 2 : We need to create a file (/etc/cinder/glusterfs) on the openstack node.

[root@node1 ~]# source /root/keystonerc_admin

In the below file I have added the gluster volume along with the IP address of the gluster node. Mount options can be given as well.

[root@node1 ~(keystone_admin)]# cat /etc/cinder/glusterfs
192.168.111.84:/vol1
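As a side note on the "options" mentioned above: each line of the shares file can carry mount options after the share name. The second server in the example below is an assumption for illustration (a second gluster node), not part of the original file:

```shell
# /etc/cinder/glusterfs -- one share per line, optional mount options
# after "-o" (backup-volfile-servers here is an illustrative assumption):
192.168.111.84:/vol1 -o backup-volfile-servers=192.168.111.85
```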

Step 3 : I modified the main /etc/cinder/cinder.conf to add the below entries; if they are already present, modify them accordingly.

volume_backend_name=GLUSTER
glusterfs_sparsed_volumes=true
glusterfs_mount_point_base=/var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs
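Cinder only reads these settings at startup, so the cinder services need a restart after editing the file. Before restarting, a quick sanity check can be sketched as below (the helper function is mine, not from the post):

```shell
# Sketch: confirm the backend keys from this post exist in a
# cinder.conf-style file before restarting the cinder services.
check_backend_conf() {
    conf="$1"
    for key in volume_driver volume_backend_name \
               glusterfs_shares_config glusterfs_mount_point_base; do
        # Accept both "key=value" and "key = value" forms.
        grep -Eq "^${key}[[:space:]]*=" "$conf" || { echo "missing: $key"; return 1; }
    done
    echo "backend entries present"
}

# Usage on the openstack node, then restart the volume service, e.g.:
#   check_backend_conf /etc/cinder/cinder.conf
#   service openstack-cinder-volume restart
```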

Step 4 : I tried creating a cinder volume using the CLI, but it ended up showing the status as "error".

[root@node1 ~(keystone_admin)]# cinder create --display-name vikrant 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-02-28T12:44:47.597406      |
| display_description |                 None                 |
|     display_name    |               vikrant                |
|      encrypted      |                False                 |
|          id         | cf94fe2a-231a-4ef1-99c1-d89d9f5c216e |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

[root@node1 ~(keystone_admin)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
| cf94fe2a-231a-4ef1-99c1-d89d9f5c216e | error  |   vikrant    |  1   |     None    |  false   |             |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+

I checked the volume.log file and found that the glusterfs volume was not getting mounted via glusterfs fuse.

[root@node1 ~(keystone_admin)]# tailf /var/log/cinder/volume.log
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.111.84:/vol1 /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513
Exit code: 1

I deleted the errored volume using the below command.

[root@node1 ~(keystone_admin)]# cinder delete vikrant

Step 5 : I tried to mount the volume manually using glusterfs fuse. That also failed, and I found the below messages in the gluster log file.

[2015-02-28 11:58:14.411118] E [glusterfsd-mgmt.c:1369:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2015-02-28 11:58:14.411173] E [glusterfsd-mgmt.c:1460:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported

I checked the glusterfs-fuse version on the gluster nodes (trusted storage pool) and on the openstack node, and found that the versions differed.

I set the below option on the volume on the gluster node (note the syntax: the volume name comes right after `set`, and the full option name is performance.readdir-ahead).

[root@Glusternode1 First2]# gluster vol set vol1 performance.readdir-ahead off
volume set: success

After that I was able to mount it manually using glusterfs fuse 🙂 and then unmounted it.
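The manual mount test from this step can be sketched as below (the mount point /mnt/glustertest is an arbitrary choice of mine, not from the post):

```shell
# Manual fuse-mount test against the gluster volume, then clean up.
mkdir -p /mnt/glustertest
mount -t glusterfs 192.168.111.84:/vol1 /mnt/glustertest
df -h /mnt/glustertest      # should show 192.168.111.84:/vol1
umount /mnt/glustertest
```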

Step 6 : I tried creating the cinder volume again from the command line.

[root@node1 ~(keystone_admin)]# cinder create --display-name vikrant1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-02-28T12:54:22.417710      |
| display_description |                 None                 |
|     display_name    |               vikrant1               |
|      encrypted      |                False                 |
|          id         | fe4868fb-e7e5-44be-84aa-e67c0b205fd5 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

This time the volume showed up in "available" status.

[root@node1 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| fe4868fb-e7e5-44be-84aa-e67c0b205fd5 | available |   vikrant1   |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Step 7 : I checked the mounted filesystem and the sparse volume file created in it on the openstack node.

[root@node1 ~(keystone_admin)]# df -h /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513
Filesystem            Size  Used Avail Use% Mounted on
192.168.111.84:/vol1  1.5G   33M  1.5G   3% /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513

[root@node1 ~(keystone_admin)]# cd /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513

[root@node1 c2fca7e531a7b6e265900cd0c4494513(keystone_admin)]# ls -lsh
total 0
0 -rw-rw-rw- 1 root root 1.0G Feb 28 02:24 volume-fe4868fb-e7e5-44be-84aa-e67c0b205fd5

Step 8 : I verified on both gluster nodes that the cinder volume file was present on the glusterfs bricks.
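That check can be sketched as below, using the brick paths from Step 1 of the gluster setup; run the matching command on each node:

```shell
# On 192.168.111.84:
ls -lsh /Brick1node1/First1/
# On 192.168.111.85:
ls -lsh /Brick1node2/First2/
# Since vol1 is replicated, both bricks should contain the
# volume-<id> file for the cinder volume.
```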