Configuration of GlusterFS on RHEL (Distributed Volume)

In this article series I will explain the configuration of GlusterFS, which is at the heart of Red Hat's Red Hat Storage offering. I am using an evaluation copy of Red Hat Storage version 3.0.2.

Step 1 : I downloaded the evaluation version from the Red Hat portal, created two VMs on VMware Workstation and installed it on both. Below is the version information of the product.

[root@Node1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

[root@Node1 ~]# glusterfsd --version
glusterfs 3.6.0.29 built on Oct 18 2014 01:35:04
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
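
Before going further it is worth confirming that the glusterd management daemon is running and enabled on both nodes (on a Red Hat Storage ISO install it normally is out of the box):

[root@Node1 ~]# service glusterd status
[root@Node1 ~]# chkconfig glusterd on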

Step 2 : After installation, I added entries to the /etc/hosts file on both nodes. Node1 and Node2 will act as the Gluster nodes.

[root@Node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.129 Node1
192.168.111.130 Node2
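
The peer probe in the next steps needs the nodes to reach each other on TCP port 24007 (glusterd); the bricks later listen on ports from 49152 upwards and the built-in NFS server uses 2049 and 38465-38467, as the volume status and mount output further below show. If iptables is active on the nodes, rules along these lines should open what is needed (the exact ranges are my assumption based on the output in this article, so adjust them to your setup):

[root@Node1 ~]# iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
[root@Node1 ~]# iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
[root@Node1 ~]# iptables -I INPUT -p tcp -m multiport --dports 111,2049,38465:38467 -j ACCEPT
[root@Node1 ~]# service iptables save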

Step 3 : After completing the basic setup, we can check the peer status from both nodes.

[root@Node1 ~]# gluster peer status
Number of Peers: 0

[root@Node2 ~]# gluster peer status
Number of Peers: 0

Step 4 : Currently no peer is listed. Let's probe the other node. This needs to be done from one node only.

[root@Node1 ~]# gluster peer probe Node2
peer probe: success.

[root@Node1 ~]# gluster peer status
Number of Peers: 1

Hostname: Node2
Uuid: 5093bf67-ff30-4865-8f84-3734c2c4b752
State: Peer in Cluster (Connected)

[root@Node2 ~]# gluster peer status
Number of Peers: 1

Hostname: Node1
Uuid: a7c5d771-d07a-4875-8bd6-f7f5cae9bb53
State: Peer in Cluster (Connected)

Step 5 : If you have probed the wrong node, or you want to detach a node that was probed earlier, you can follow the procedure below.

[root@Node1 ~]# gluster peer detach Node2
peer detach: success
[root@Node1 ~]# gluster peer status
Number of Peers: 0

[root@Node2 ~]# gluster peer status
Number of Peers: 0

Step 6 : I have attached a 10G disk to each machine to create the bricks, and on top of the bricks the volume. I am not going to create a thin pool in this article; I will cover it later. A thin pool is a feature of newer versions that is needed for snapshots.
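
The vgcreate command below expects a partition (/dev/sdb1) on the newly attached disk. If the disk is still blank, a quick preparation sketch would be as follows (the device name matches my setup, adjust it to your environment):

[root@Node2 ~]# fdisk /dev/sdb          (create a single primary partition, /dev/sdb1)
[root@Node2 ~]# pvcreate /dev/sdb1
[root@Node2 ~]# pvs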

I issued the commands below on both nodes.

[root@Node2 ~]# vgcreate FirstVG /dev/sdb1
[root@Node2 ~]# vgs
[root@Node2 ~]# lvcreate --size 5G -n FirstLV1 FirstVG
[root@Node2 ~]# mkfs.xfs -i size=512 /dev/FirstVG/FirstLV1
[root@Node2 ~]# mkdir /Brick1
[root@Node2 ~]# mount /dev/FirstVG/FirstLV1 /Brick1
Add the entry in /etc/fstab on both servers (a sample entry is shown below).
[root@Node2 ~]# mkdir /Brick1/BrickNode2

We need to create a directory beneath the actual mount point, in this case /Brick1/BrickNode2, and use that as the brick. This protects the root filesystem: if the brick filesystem ever fails to mount, the subdirectory will not exist, so Gluster cannot silently start writing into the empty mount point and fill up the root filesystem.
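
A sample /etc/fstab entry for the brick filesystem, matching the LV and mount point used above, would look like this:

/dev/FirstVG/FirstLV1   /Brick1   xfs   defaults   0 0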

Step 7 : After successful creation of the filesystems, let's create the volume. This step needs to be done on one node only. I am creating a distributed volume here; it uses a hashing algorithm to spread the files across the nodes.

[root@Node1 ~]# gluster vol create vol1 Node1:/Brick1/BrickNode1 Node2:/Brick1/BrickNode2
volume create: vol1: success: please start the volume to access data

[root@Node1 ~]# gluster vol info

Volume Name: vol1
Type: Distribute
Volume ID: e06990b5-07f2-4d4d-b7fc-65277bae6093
Status: Created
Snap Volume: no
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: Node1:/Brick1/BrickNode1
Brick2: Node2:/Brick1/BrickNode2
Options Reconfigured:
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

Step 8 : The volume has been created successfully. We can now start it and check its status.

[root@Node1 ~]# gluster vol start vol1
volume start: vol1: success

[root@Node1 ~]# gluster vol status vol1
Status of volume: vol1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick Node1:/Brick1/BrickNode1                          49152   Y       1526
Brick Node2:/Brick1/BrickNode2                          49152   Y       2068
NFS Server on localhost                                 2049    Y       1530
NFS Server on Node2                                     2049    Y       2073

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

Step 9 : I am on the client side now. There are a couple of methods to mount the volume on a client: NFS, CIFS and the GlusterFS native (FUSE) client. I am using NFS here, but the native client is recommended as it provides more features. If you want to use the FUSE method, you need to install the glusterfs-fuse package.

[root@client1 ~]# yum install glusterfs-fuse
[root@client1 ~]# cat /etc/mtab | grep -i fusectl
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
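
For reference, the native FUSE mount (which I am not using in this article) would be along these lines, with any node of the trusted pool acting as the mount server:

[root@client1 ~]# mount -t glusterfs 192.168.111.129:/vol1 /GlusterMnt-1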

Step 10 : Mounting the volume on the client using NFS.

[root@client1 ~]# mount -v -t nfs -o vers=3 192.168.111.129:/vol1 /GlusterMnt-1
mount.nfs: timeout set for Wed Dec 24 01:58:15 2014
mount.nfs: trying text-based options 'vers=3,addr=192.168.111.129'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.111.129 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 192.168.111.129 prog 100005 vers 3 prot TCP port 38465

[root@client1 ~]#  df -h /GlusterMnt-1
Filesystem             Size  Used Avail Use% Mounted on
192.168.111.129:/vol1   10G   64M   10G   1% /GlusterMnt-1
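
To make this NFS mount survive client reboots, an /etc/fstab entry along these lines should work; the _netdev option delays the mount until the network is up (the options are my assumption, tune them to your environment):

192.168.111.129:/vol1   /GlusterMnt-1   nfs   vers=3,tcp,_netdev   0 0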

Step 11 : I am creating 10 files in it from the client side. Remember, our volume is of the distributed type, so the files should be spread roughly evenly between the two Gluster nodes (Node1 and Node2).

client1# cd /GlusterMnt-1/
client1# for i in {1..10}
> do
> touch file$i
> done
client1# ll
total 0
-rw-r--r-- 1 root root 0 Dec 24 02:01 file1
-rw-r--r-- 1 root root 0 Dec 24 02:01 file10
-rw-r--r-- 1 root root 0 Dec 24 02:01 file2
-rw-r--r-- 1 root root 0 Dec 24 02:01 file3
-rw-r--r-- 1 root root 0 Dec 24 02:01 file4
-rw-r--r-- 1 root root 0 Dec 24 02:01 file5
-rw-r--r-- 1 root root 0 Dec 24 02:01 file6
-rw-r--r-- 1 root root 0 Dec 24 02:01 file7
-rw-r--r-- 1 root root 0 Dec 24 02:01 file8
-rw-r--r-- 1 root root 0 Dec 24 02:01 file9

Step 12 : I logged in to the Gluster nodes and went to the brick paths. Each node holds 5 of the files.

[root@Node1 ~]# cd /Brick1/BrickNode1/
[root@Node1 BrickNode1]# ll
total 0
-rw-r--r-- 2 root root 0 Dec 24 04:01 file10
-rw-r--r-- 2 root root 0 Dec 24 04:01 file3
-rw-r--r-- 2 root root 0 Dec 24 04:01 file4
-rw-r--r-- 2 root root 0 Dec 24 04:01 file7
-rw-r--r-- 2 root root 0 Dec 24 04:01 file9

[root@Node2 ~]# cd /Brick1/BrickNode2/
[root@Node2 BrickNode2]# ll
total 0
-rw-r--r-- 2 root root 0 Dec 24 04:01 file1
-rw-r--r-- 2 root root 0 Dec 24 04:01 file2
-rw-r--r-- 2 root root 0 Dec 24 04:01 file5
-rw-r--r-- 2 root root 0 Dec 24 04:01 file6
-rw-r--r-- 2 root root 0 Dec 24 04:01 file8
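
This placement is not random: the distribute (DHT) translator assigns each brick a range of the file-name hash space, stored as an extended attribute on the brick directory, and the hash of a file's name decides which brick receives it. The layout can be inspected directly on a brick, for example:

[root@Node1 ~]# getfattr -n trusted.glusterfs.dht -e hex /Brick1/BrickNode1

The returned hex value encodes the hash range assigned to that brick.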

In this article we configured a distributed volume, which spreads files across the trusted pool nodes based on a hashing algorithm.
