In this article I am going to show you how to configure CTDB to provide failover capability for clients that mount Gluster volumes using NFS or CIFS.
When clients mount Gluster volumes over NFS or CIFS, we need to configure CTDB on the Gluster nodes so that clients keep access to the volume without downtime if a node goes down. This is not needed if the client mounts the volume with the native GlusterFS FUSE client, i.e. using the mount type glusterfs.
Step 1 : We need to install the CTDB package on all the nodes of the trusted storage pool.
[root@Node2 ~]# yum info ctdb*
Loaded plugins: aliases, changelog, downloadonly, product-id, security, subscription-manager,
: tmprepo, verify, versionlock
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Name : ctdb2.5
Arch : x86_64
Version : 2.5.3
Release : 6.el6rhs
Size : 1.2 M
Repo : installed
From repo : RHEL6
Summary : A Clustered Database based on Samba’s Trivial Database (TDB)
URL : http://ctdb.samba.org/
License : GPLv3+
Description : CTDB is a cluster implementation of the TDB database used by Samba and other
: projects to store temporary data. If an application is already using TDB for
: temporary data it is very easy to convert that application to be cluster aware
: and use CTDB instead.
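If the package is not installed yet, it can be pulled in with yum; the package name below matches the yum info output above (ctdb2.5), but on other distributions it may simply be ctdb, so adjust accordingly. This assumes the repository providing the package is already configured.
[root@Node2 ~]# yum install -y ctdb2.5
Run the same command on every node of the trusted storage pool.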
Step 2 : After installing the package, we need to run the below commands on all nodes of the trusted storage pool.
Here I have used RepVol1, which is the name of my replicated volume.
[root@Node2 ~]# sed -i 's/all/RepVol1/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
[root@Node2 ~]# sed -i 's/all/RepVol1/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
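A quick sanity check on each node is to grep for the volume name in both hook scripts; the exact line that matches depends on the script version, but after the substitution it should reference RepVol1 instead of all.
[root@Node2 ~]# grep RepVol1 /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh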
Step 3 : We can stop and start the volume to make the changes take effect.
[root@Node2 ~]# gluster vol stop RepVol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: RepVol1: success
[root@Node2 ~]# gluster vol start RepVol1
volume start: RepVol1: success
Step 4 : You will see a new mountpoint in the output of "df -h" on both nodes; it will be mounted at /gluster/lock/.
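You can confirm this with df on each node; the command below only checks that the mount exists, and the source shown in the first column of the output will differ per node.
[root@Node2 ~]# df -h /gluster/lock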
Step 5 : We can check the parameters of the configuration file, which gets created automatically.
[root@Node2 ~]# cat /etc/sysconfig/ctdb
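The parameters worth confirming in this file are the recovery lock (which should point into the /gluster/lock mount) and the paths to the nodes and public address files. The snippet below is a typical layout for this kind of setup and is meant as a reference only, not the exact contents generated on your system:
CTDB_RECOVERY_LOCK=/gluster/lock/lockfile
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes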
Step 6 : We need to create the nodes file under /gluster/lock; if it is already present, modify it instead. The IPs below are my management IPs, which are assigned to eth0. This needs to be done on one node only.
[root@Node2 ~]# cat /gluster/lock/nodes
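The nodes file holds one management IP per line, one entry per node in the pool. For this two-node setup it contains the same node IPs that show up in the ctdb status output later on:
192.168.111.129
192.168.111.130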
Then link this file to the CTDB configuration path; this needs to be done on both nodes.
[root@Node2 lock]# ln -s /gluster/lock/nodes /etc/ctdb/nodes
Step 7 : We need one floating IP which will move between the nodes to provide continuous availability of the volume on the client side.
[root@Node2 lock]# cat /etc/ctdb/public_addresses
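Each line of this file contains the floating IP in CIDR notation followed by the interface it should be brought up on. For this setup it would look like the line below; the /24 netmask is an assumption, so use whatever matches your management network:
192.168.111.12/24 eth0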
The link needs to be created on both nodes.
[root@Node2 lock]# ln -s /gluster/lock/public_addresses /etc/ctdb/public_addresses
Step 8 : After that, check the status of CTDB and start the service.
[root@Node2 ~]# service ctdb status
ctdbd is stopped
[root@Node2 ~]# service ctdb start
Starting ctdbd service: [ OK ]
[root@Node1 ~]# ctdb ip
Public IPs on node 0
[root@Node1 ~]# ctdb status
Number of nodes:2
pnn:0 192.168.111.129 OK (THIS NODE)
pnn:1 192.168.111.130 OK
Recovery mode:NORMAL (0)
Step 9 : We can check the floating (public) IP using the below command.
[root@Node1 ~]# ip addr show
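On the node that currently holds the public IP, it appears as an additional address on eth0; filtering the output makes it easier to spot:
[root@Node1 ~]# ip addr show eth0 | grep 192.168.111.12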
Step 10 : We can now mount the volume on the client using the public IP, in this case 192.168.111.12.
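A minimal sketch of the client-side mounts, one for NFS and one for CIFS, is shown below. The mount directory, the CIFS username, and the CIFS share name (gluster-RepVol1, following the naming used by the default Samba hook script) are assumptions, so adjust them for your environment. Gluster's built-in NFS server speaks NFSv3, hence the vers=3 option.
[root@Client ~]# mount -t nfs -o vers=3 192.168.111.12:/RepVol1 /mnt/repvol1
[root@Client ~]# mount -t cifs -o username=smbuser //192.168.111.12/gluster-RepVol1 /mnt/repvol1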
Now suppose this IP is assigned to eth0 on Node1 and we have mounted the volume on the client using it. If we bring down Node1, the IP will move to eth0 on Node2, which we can verify with the "ip addr show" command.
On the client side you will see a glitch for a few seconds, after which you will be served by Node2. The client remains completely unaware of which Gluster node is serving it.