
What are the new features available in RHS 3.0.4 ?

RHS 3.0.4 has been released and adds several new features. In this article I am going to list the newly available features. I would suggest referring to the Red Hat documentation to learn more about them, as well as about the bugs taken care of in this version and the technology previews.

Here is the list of newly added features.

1) Three-way replication : In my opinion this is the major addition in this version of RHS. Previously replicated volumes were limited to a replica count of two, but from this version a replica count of 3 is supported, which means we can create a replicated volume across three nodes. Each node then holds one copy of the data, giving us more redundancy (see the sketch after this list).

2) Gluster command log file : This file records the gluster commands executed on a node. In earlier versions of RHS it was a hidden file; from this version it is no longer hidden.

3) Small file performance improvement : When accessing a volume over CIFS from a Windows machine, performance with small files was very poor. This version includes improvements for small-file performance.

4) Performance enhancement options : We now have two more tuning options to improve the performance of RHS (the event-thread options are also shown in the sketch after this list).

a) Event threads  b) Virtual memory
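To make these two features concrete, here is a minimal sketch. The volume name, hostnames and brick paths are placeholders rather than taken from a real setup, and the event-thread values are only example figures.

# Three-way replication: one copy of the data on each of the three nodes
gluster volume create repvol replica 3 server1:/bricks/repvol server2:/bricks/repvol server3:/bricks/repvol
gluster volume start repvol

# New event-thread tuning options (example values, tune per workload)
gluster volume set repvol client.event-threads 4
gluster volume set repvol server.event-threads 4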

Some of the Technology Preview features:

1) gstatus utility

2) Striped Volumes

3) nfs-ganesha

4) Non-uniform file allocation.

I would suggest referring to the Red Hat documentation mentioned above to learn more about these features.


How to integrate glusterfs with openstack glance ?

In this article I am going to show you the integration of glusterfs with Openstack glance. The setup is quite easy and similar to my previous article on glusterfs integration with cinder.

********************************************************Gluster Setup****************************************************

Step 1 : I have created one volume on gluster with the name vol2. This volume is replicated across two gluster nodes (a sketch of the create command is below).
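For reference, creating such a two-node replicated volume looks roughly like the sketch below; the hostnames and brick paths are assumptions, not the exact ones used in this setup.

gluster volume create vol2 replica 2 Glusternode1:/bricks/vol2 Glusternode2:/bricks/vol2
gluster volume start vol2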

Step 2 : I checked the UID and GID of the glance user on the openstack node using the “id glance” command, and set the same owner UID and GID on the volume from the gluster node.
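The check on the openstack node is just the id command; on this setup both the UID and GID of glance are expected to be 161, matching the storage.owner settings below.

# Run on the openstack node; uid and gid should both come back as 161 here
id glance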

[root@Glusternode1 ~]# gluster vol set vol2  storage.owner-uid 161
volume set: success
[root@Glusternode1 ~]# gluster vol set vol2  storage.owner-gid 161
volume set: success

I turned off the readdir-ahead option to avoid any issues while mounting using FUSE.

[root@Glusternode1 ~]# gluster vol set vol2 readdir-ahead off
volume set: success

Step 3 : We are done with the gluster part; the next steps will be performed on the openstack node.

*********************************************Openstack setup**************************************************************

Step 1 : We need to edit the glance configuration file so that glance uses glusterfs as its backend.

I took a backup of the configuration file.

[root@node1 ~(keystone_admin)]# cp -p /etc/glance/glance-api.conf /var/tmp/

Step 2 : I edited the configuration file to use glusterfs as the backend for storing the glance images.

I have made the below modification in the configuration file.

[root@node1 ~(keystone_admin)]# diff /etc/glance/glance-api.conf /var/tmp/glance-api.conf
297c297
< filesystem_store_datadir=/mnt/gluster/glance/images/          <<<<<<<<<<<<<
---
> filesystem_store_datadir=/var/lib/glance/images/

Create the directory and change its ownership.

[root@node1 ~(keystone_admin)]# mkdir -p /mnt/gluster/glance/images
[root@node1 ~(keystone_admin)]# chown -R glance:glance /mnt/gluster/glance/

Step 3 : After that, restart the glance service for the changes to take effect.

[root@node1 ~(keystone_admin)]# systemctl restart openstack-glance-api

Step 4 : Mount the glusterfs volume using FUSE. We can also add an entry in /etc/fstab to make the mount persistent (a sketch is shown after the mount command).

[root@node1 ~(keystone_admin)]# mount -t glusterfs 192.168.111.84:/vol2 /mnt/gluster/
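For the persistent mount mentioned in this step, the /etc/fstab entry would look roughly like the line below; the _netdev option is my own suggestion so the mount waits for networking, so verify the options for your environment.

192.168.111.84:/vol2  /mnt/gluster  glusterfs  defaults,_netdev  0 0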

Step 5 : I uploaded two images using the openstack GUI and checked the status from the CLI.

[root@node1 ~(keystone_admin)]# glance image-list
+————————————–+————-+————-+——————+———+——–+
| ID                                   | Name        | Disk Format | Container Format | Size    | Status |
+————————————–+————-+————-+——————+———+——–+
| 3a446007-8db5-427b-9177-51e1192a7b67 | FirstImage  | vmdk        | bare             | 4824    | active |
| 3d844393-fc43-44f0-aff1-3fa48e669c00 | SecondImage | vmdk        | bare             | 7499776 | active |
+————————————–+————-+————-+——————+———+——–+

Step 6 : I checked the bricks on the gluster nodes; the uploaded images can be seen there as well.

How to integrate Glusterfs with openstack cinder ?

In this article I am going to show you the integration of glusterfs with openstack cinder. The procedure is quite simple, but you may face a lot of issues while doing it.

********************************************************Gluster Setup****************************************************

Step 1 : I have created a two node gluster setup and created one volume (vol1) of replicated type.

[root@Glusternode ~]# gluster vol status vol1
Status of volume: vol1
Gluster process                                         Port    Online  Pid
——————————————————————————
Brick 192.168.111.84:/Brick1node1/First1                49152   Y       3086
Brick 192.168.111.85:/Brick1node2/First2                49152   Y       2851
NFS Server on localhost                                 2049    Y       3018
Self-heal Daemon on localhost                           N/A     Y       3025
NFS Server on 192.168.111.84                            2049    Y       3054
Self-heal Daemon on 192.168.111.84                      N/A     Y       3061

Task Status of Volume vol1
——————————————————————————
There are no active volume tasks

Step 2 : Set the permissions on the volume so that it can be used as a backend for openstack cinder.

I checked the UID and GID of the cinder user in my openstack setup. I have set up my openstack using the all-in-one formula, with packstack used for the installation.
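The UID/GID check on the openstack node is again just the id command; on this packstack install both values should come back as 165, which is why 165 is used in the storage.owner settings below.

# Run on the openstack node; expect uid=165 and gid=165 for the cinder user
id cinder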

[root@Glusternode ~]# gluster vol set vol1 group virt
volume set: success
[root@Glusternode ~]# gluster vol set vol1 storage.owner-uid 165
volume set: success
[root@Glusternode ~]# gluster vol set vol1 storage.owner-gid 165
volume set: success

Step 3 : We need to add the below entry in the /etc/glusterfs/glusterd.vol file.

option rpc-auth-allow-insecure on

After adding the above entry, restart the glusterd service using the below command.

/etc/init.d/glusterd restart

Step 4 : After setting the required permissions, we will now start working on the openstack node.

********************************************************Openstack Setup****************************************************

Step 1 : As I mentioned earlier, I have used packstack --allinone to install the openstack components on a single node.

Step 2 : We need to create a file (/etc/cinder/glusterfs) on the openstack node.

[root@node1 ~]# source /root/keystonerc_admin

In the below file I have added the gluster volume along with the IP address of the gluster node. We can specify mount options here as well.

[root@node1 ~(keystone_admin)]# cat /etc/cinder/glusterfs
192.168.111.84:/vol1
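If mount options are needed, they can be appended per line with -o. A hedged example, where pointing backup-volfile-servers at the second gluster node is only an illustration:

192.168.111.84:/vol1 -o backup-volfile-servers=192.168.111.85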

Step 3 : I modified the main /etc/cinder/cinder.conf file to add the below entries; if they are already present, you can modify them accordingly.

volume_backend_name=GLUSTER
glusterfs_sparsed_volumes=true
glusterfs_mount_point_base=/var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs
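After changing cinder.conf, the cinder volume service normally needs a restart to pick up the new backend; on a packstack/systemd setup like this one the command would be along these lines.

systemctl restart openstack-cinder-volume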

Step 4 : I tried creating a cinder volume using the CLI. It was showing me the status as “error”.

[root@node1 ~(keystone_admin)]# cinder create --display-name vikrant 1
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-02-28T12:44:47.597406      |
| display_description |                 None                 |
|     display_name    |               vikrant                |
|      encrypted      |                False                 |
|          id         | cf94fe2a-231a-4ef1-99c1-d89d9f5c216e |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@node1 ~(keystone_admin)]# cinder list
+————————————–+——–+————–+——+————-+———-+————-+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+——–+————–+——+————-+———-+————-+
| cf94fe2a-231a-4ef1-99c1-d89d9f5c216e | error  |   vikrant    |  1   |     None    |  false   |             |
+————————————–+——–+————–+——+————-+———-+————-+

I checked the volume.log file and found that the glusterfs volume was not getting mounted using glusterfs FUSE.

[root@node1 ~(keystone_admin)]# tailf /var/log/cinder/volume.log
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.111.84:/vol1 /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513
Exit code: 1

I deleted the volume using the below command.

[root@node1 ~(keystone_admin)]# cinder delete vikrant

Step 5 : I tried to mount it manually using glusterfs FUSE. It was giving me an error, and when I checked the gluster log file I found the below messages.

[2015-02-28 11:58:14.411118] E [glusterfsd-mgmt.c:1369:mgmt_getspec_cbk] 0-glusterfs: failed to get the ‘volume file’ from server
[2015-02-28 11:58:14.411173] E [glusterfsd-mgmt.c:1460:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported

I checked the glusterfs-fuse version on the glusterfs nodes (trusted storage pool) and on the openstack node, and found that the versions were different.
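A quick way to compare the client bits on both sides is to list the installed glusterfs packages on each node and compare the versions (outputs omitted here).

# Run on the gluster nodes and on the openstack node, then compare
rpm -qa | grep glusterfs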

I set the below option on the volume from the gluster node.

[root@Glusternode1 First2]# gluster vol set vol1 readdir-ahead off
volume set: success

After that I tried to mount it manually using glusterfs FUSE and was able to do it successfully 🙂 I then unmounted it.

Step 6 : I tried to create the cinder volume again using the command line.

[root@node1 ~(keystone_admin)]# cinder create --display-name vikrant1 1
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-02-28T12:54:22.417710      |
| display_description |                 None                 |
|     display_name    |               vikrant1               |
|      encrypted      |                False                 |
|          id         | fe4868fb-e7e5-44be-84aa-e67c0b205fd5 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

This time it was showing the “available” status.

[root@node1 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| fe4868fb-e7e5-44be-84aa-e67c0b205fd5 | available |   vikrant1   |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Step 7 : I checked the mounted filesystem and the sparse volume created in it on the openstack node.

[root@node1 ~(keystone_admin)]# df -h /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513
Filesystem            Size  Used Avail Use% Mounted on
192.168.111.84:/vol1  1.5G   33M  1.5G   3% /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513

[root@node1 ~(keystone_admin)]# cd /var/lib/cinder/volumes/c2fca7e531a7b6e265900cd0c4494513

[root@node1 c2fca7e531a7b6e265900cd0c4494513(keystone_admin)]# ls -lsh
total 0
0 -rw-rw-rw- 1 root root 1.0G Feb 28 02:24 volume-fe4868fb-e7e5-44be-84aa-e67c0b205fd5

Step 8 : I checked on both gluster nodes that the cinder volume was present on the glusterfs bricks.

Why am I getting “0-management: readv on /var/run/” messages in the vol log file ?

Today while working on a customer issue I found that the below messages were appearing continuously in the vol.log file.

[2015-02-11 12:10:08.574421] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/fcc8c003f45e9ea3ce0caef63d4f7dff.socket failed (Invalid argument)
[2015-02-11 12:10:11.578722] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/fcc8c003f45e9ea3ce0caef63d4f7dff.socket failed (Invalid argument)
[2015-02-11 12:10:14.583767] W [socket.c:529:__socket_rwv] 0-management: readv on /var/run/fcc8c003f45e9ea3ce0caef63d4f7dff.socket failed (Invalid argument)

I checked with “ss -x” and did not find any problem with the socket.

After googling for a few minutes I found that bugs are already open for this. It will probably get fixed in a future version of RHS, 3.0.4.

As a workaround we can follow these steps to get rid of these messages, which were appearing at an interval of 3 seconds.

1) Open the /etc/init.d/glusterd file and search for LOG_LEVEL parameter.

Before Change :

LOG_LEVEL=''

After the change it should look like :

LOG_LEVEL='ERROR'

2) Restart the glusterd service.

/etc/init.d/glusterd restart
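If you prefer to script the change from step 1, a one-liner along these lines works, assuming the LOG_LEVEL line in /etc/init.d/glusterd is exactly as shown above; restart glusterd afterwards as in step 2.

sed -i "s/LOG_LEVEL=''/LOG_LEVEL='ERROR'/" /etc/init.d/glusterd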

The warning messages should stop appearing after that.

Note : This stops the logging of all warning messages; it is your choice whether to live with them or not 🙂

How to add glusterfs storage to RHEVM ?

In this article I am going to show how we can use gluster as a backend for storing virtual machine disks.

1) I have a three node trusted storage pool with one existing distributed volume named vol1, which is distributed across three nodes. This is not the recommended way to do it; I suggest you create a replicated volume. Currently Red Hat supports only a two node replica count.

[root@Node1 ~]# gluster vol status
Status of volume: vol1
Gluster process                                         Port    Online  Pid
——————————————————————————
Brick 192.168.111.9:/VolBrick1/node1                    49152   Y       1464
Brick 192.168.111.10:/VolBrick1n2/node2                 49152   Y       1449
Brick 192.168.111.11:/VolBrick1n3/node3                 49152   Y       1813
NFS Server on localhost                                 2049    Y       1472
NFS Server on 192.168.111.11                            2049    Y       1824
NFS Server on 192.168.111.10                            2049    Y       1457

Task Status of Volume vol1
——————————————————————————
There are no active volume tasks

2) While I was trying to create a new storage domain in the RHEVM environment, I was getting the below error.

“Error while executing action Add Storage Connection: Permission settings on the specified path do not allow access to the storage.Verify permission settings on the specified storage path.”

3) I googled this error and found that we need to set a couple of parameters on the gluster volume to make it usable in RHEVM.

[root@Node1 ~]# gluster volume set vol1 group virt
volume set: success

[root@Node1 ~]# gluster volume set vol1 storage.owner-uid 36
volume set: success

[root@Node1 ~]# gluster volume set vol1 storage.owner-gid 36
volume set: success

4) After making the above changes I was able to scan the storage volume vol1 inside RHEVM. I created one VM using glusterfs as the storage.

5) I went to gluster node Node1 to look at the virtual disks.

[root@Node1 ~]# mount.glusterfs 192.168.111.9:/vol1 /mnt
[root@Node1 ~]# cd /mnt
[root@Node1 mnt]# cd 05096fa3-2d61-4ca2-b1dc-82ead6ad0dcc
[root@Node1 05096fa3-2d61-4ca2-b1dc-82ead6ad0dcc]# ll
total 0
drwxr-xr-x 2 vdsm kvm  96 Feb 10  2015 dom_md
drwxr-xr-x 3 vdsm kvm 147 Feb 10  2015 images
drwxr-xr-x 4 vdsm kvm  84 Feb 10  2015 master

Now I am good to create VMs using gluster storage.

How to install RHSC (Redhat Storage Console) ?

In this article I am going to show you the installation of RHSC, which is used to manage gluster nodes. It is a GUI console similar to RHEVM, with which you can create volumes and perform other important operations.

Step 1 : I have installed the RHSC console on an RHS node which is not part of the trusted storage pool.

[root@RHSM1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

[root@RHSM1 ~]# uname -a
Linux RHSM1 2.6.32-431.29.2.el6.x86_64 #1 SMP Sun Jul 27 15:55:46 EDT 2014 x86_64 x86_64 x86_64 GNU

Step 2 : Register your server with the appropriate channels.

[root@RHSM1 ~]# rhn-channel -l
jbappplatform-6-x86_64-server-6-rpm
rhel-x86_64-server-6
rhel-x86_64-server-6-rhs-nagios-3
rhel-x86_64-server-6-rhs-rhsc-3

Step 3 : After that, issue the below command to install the RHSC packages.

[root@RHSM1 ~]# yum install -y rhsc

The above command will take some time to install all the packages. The below packages were installed for me.

[root@RHSM1 ~]# rpm -qa | grep -i rhsc
rhsc-setup-plugins-3.0.3-1.1.el6rhs.noarch
rhsc-cli-3.0.0.0-0.2.el6rhs.noarch
rhsc-webadmin-portal-3.0.3-1.20.el6rhs.noarch
rhsc-sdk-python-3.0.0.0-0.2.el6rhs.noarch
rhsc-dbscripts-3.0.3-1.20.el6rhs.noarch
rhsc-setup-base-3.0.3-1.20.el6rhs.noarch
rhsc-branding-rhs-3.0.0-2.el6rhs.noarch
rhsc-log-collector-3.0.0-4.0.el6rhs.noarch
rhsc-setup-plugin-ovirt-engine-common-3.0.3-1.20.el6rhs.noarch
rhsc-setup-3.0.3-1.20.el6rhs.noarch
redhat-access-plugin-rhsc-3.0.0-1.el6rhs.noarch
rhsc-3.0.3-1.20.el6rhs.noarch
rhsc-lib-3.0.3-1.20.el6rhs.noarch
rhsc-doc-3.0.0-7.el6rhs.noarch
rhsc-monitoring-uiplugin-0.1.3-1.el6rhs.noarch
rhsc-setup-plugin-ovirt-engine-3.0.3-1.20.el6rhs.noarch
rhsc-tools-3.0.3-1.20.el6rhs.noarch
rhsc-restapi-3.0.3-1.20.el6rhs.noarch
rhsc-backend-3.0.3-1.20.el6rhs.noarch

Step 4 : After the packages are installed, you may run the below command to check whether any package is available for upgrade.

[root@RHSM1 ~]# rhsc-upgrade-check
VERB: queue package rhsc-setup for update
VERB: Building transaction
VERB: Empty transaction
VERB: Transaction Summary:
No upgrade

Step 5 : Finally, issue the below command to set up RHSC. It will ask for a couple of options; you may choose to go with the defaults if they satisfy your requirements.

[root@RHSM1 ~]# rhsc-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: [‘/etc/ovirt-engine-setup.conf.d/10-packaging.conf’, ‘/etc/ovirt-engine-setup.conf.d/20-rhsc-packaging.conf’]
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20150210070344-2j75tb.log
Version: otopi-1.2.2 (otopi-1.2.2-1.el6ev)
[ INFO  ] Stage: Environment packages setup

The above output is truncated.

Step 6 : Once the above step is completed, you will be able to access the GUI from a browser using the IP address of the RHSC node.

Step 7 : Create a cluster in RHSC and add the gluster nodes to it. Enjoy using the GUI to work with gluster.

How to use gstatus utility in RHS 3.0.3 ?

As I have recently upgraded from 3.0.2 to 3.0.3, I was searching for new features in the newer version of RHS. I came across the wonderful gstatus utility, which is used to check the health of the cluster and volumes.

I installed this utility on one of the gluster nodes.

Step 1 : As my system is registered to the Red Hat channels, I issued a yum command to install gstatus.

[root@Node1 ~]# rhn-channel -l
rhel-x86_64-server-6
rhel-x86_64-server-6-rhs-3
rhel-x86_64-server-sfs-6

[root@Node1 ~]# yum install gstatus

Step 2 : Let's see the various usages of this utility.

a) I have installed the below version of gstatus.

[root@Node1 ~]# gstatus --version
gstatus 0.62

b) Checking the health of cluster.

[root@Node1 ~]# gstatus -s

Product: RHSS v3            Capacity:  60.00 GiB(raw bricks)
Status: HEALTHY                       10.00 GiB(raw used)
Glusterfs: 3.6.0.42                      60.00 GiB(usable from volumes)
OverCommit: No                Snapshots:   0

Nodes    :  3/ 3             Volumes:  1 Up
Self Heal:  0/ 0                       0 Up(Degraded)
Bricks   :  3/ 3                       0 Up(Partial)
Clients  :     0                       0 Down

Status Messages
– Cluster is HEALTHY, all checks successful

c) Checking the information of all volumes.

[root@Node1 ~]# gstatus -v

Product: RHSS v3            Capacity:  60.00 GiB(raw bricks)
Status: HEALTHY                       10.00 GiB(raw used)
Glusterfs: 3.6.0.42                      60.00 GiB(usable from volumes)
OverCommit: No                Snapshots:   0

Volume Information
vol1             UP – 3/3 bricks up – Distribute
Capacity: (16% used) 10.00 GiB/60.00 GiB (used/total)
Snapshots: 0
Self Heal: N/A
Tasks Active: None
Protocols: glusterfs:on  NFS:on  SMB:on
Gluster Clients : 0

If we want to check a particular volume: gstatus -v <VOL NAME>

d) To take a look at the self-heal state.

[root@Node1 ~]# gstatus -b

Product: RHSS v3            Capacity:  60.00 GiB(raw bricks)
Status: HEALTHY                       10.00 GiB(raw used)
Glusterfs: 3.6.0.42                      60.00 GiB(usable from volumes)
OverCommit: No                Snapshots:   0

The OverCommit parameter will be helpful in the case of snapshots.

e) We can use gstatus -a if we want to see the whole cluster information in detail.

f) For troubleshooting, I guess this would be the best option to check the layout of a volume.

[root@Node1 ~]# gstatus -lv vol1

Product: RHSS v3            Capacity:  60.00 GiB(raw bricks)
Status: HEALTHY                       10.00 GiB(raw used)
Glusterfs: 3.6.0.42                      60.00 GiB(usable from volumes)
OverCommit: No                Snapshots:   0

Volume Information
vol1             UP – 3/3 bricks up – Distribute
Capacity: (16% used) 10.00 GiB/60.00 GiB (used/total)
Snapshots: 0
Self Heal: N/A
Tasks Active: None
Protocols: glusterfs:on  NFS:on  SMB:on
Gluster Clients : 0

vol1———— +
|
Distribute (dht)
|
+–192.168.111.9:/VolBrick1/node1(UP) 3.00 GiB/20.00 GiB
|
+–192.168.111.10:/VolBrick1n2/node2(UP) 3.00 GiB/20.00 GiB
|
+–192.168.111.11:/VolBrick1n3/node3(UP) 4.00 GiB/20.00 GiB

g) We can change the output format to JSON or key-value pairs.

[root@Node1 ~]# gstatus -v vol1 -o json
2015-02-08 12:00:16.007722 {"brick_count": 3, "bricks_active": 3, "client_count": 0, "glfs_version": "3.6.0.42", "node_count": 3, "nodes_active": 3, "over_commit": "No", "product_name": "Red Hat Storage Server 3.0 Update 3", "raw_capacity": 64078479360, "sh_active": 0, "sh_enabled": 0, "snapshot_count": 0, "status": "healthy", "usable_capacity": 64078479360, "used_capacity": 10587312128, "volume_count": 1, "volume_summary": [{"snapshot_count": 0, "state": "up", "usable_capacity": 64078479360, "used_capacity": 10587312128, "volume_name": "vol1"}]}
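If you want to script against or pretty-print this output, note that gstatus prefixes a timestamp before the JSON here, so it has to be stripped first; a sketch assuming the timestamp occupies the first two space-separated fields as in the output above.

gstatus -v vol1 -o json | cut -d' ' -f3- | python -m json.tool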

“gstatus” is really a wonderful utility that provides information about gluster cluster health with the help of a single command.