How to configure CTDB in GlusterFS?

In this article I am going to show you how to configure CTDB to provide failover capability for clients that mount Gluster volumes using NFS or CIFS.

If we are mounting Gluster volumes on clients using NFS or CIFS, we need to configure CTDB on the Gluster nodes so that the clients can keep accessing the volume without downtime. There is no need to do this if you are mounting the volume on the client using the native GlusterFS FUSE client, i.e. with mount type glusterfs.

Step 1 : We need to install the CTDB package on all the nodes of the trusted storage pool.

[root@Node2 ~]# yum info ctdb*
Loaded plugins: aliases, changelog, downloadonly, product-id, security, subscription-manager,
: tmprepo, verify, versionlock
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Installed Packages
Name        : ctdb2.5
Arch        : x86_64
Version     : 2.5.3
Release     : 6.el6rhs
Size        : 1.2 M
Repo        : installed
From repo   : RHEL6
Summary     : A Clustered Database based on Samba's Trivial Database (TDB)
URL         : http://ctdb.samba.org/
License     : GPLv3+
Description : CTDB is a cluster implementation of the TDB database used by Samba and other
: projects to store temporary data. If an application is already using TDB for
: temporary data it is very easy to convert that application to be cluster aware
: and use CTDB instead.
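
If the package is not installed yet, it can be pulled in with yum. A quick sketch (the exact package name may be ctdb or ctdb2.5 depending on your repository; on my nodes it is ctdb2.5):

[root@Node2 ~]# yum install -y ctdb2.5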

Step 2 : After installing the package, we need to perform the below step on all nodes of the trusted storage pool.

Here I have used RepVol1, which is the name of my replicated volume.

[root@Node2 ~]# sed -i 's/all/RepVol1/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
[root@Node2 ~]# sed -i 's/all/RepVol1/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
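
To confirm the substitution landed, we can grep the hook scripts; they should now reference RepVol1 instead of all (this assumes the stock hook scripts, which keep the volume name in a META variable):

[root@Node2 ~]# grep META= /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
[root@Node2 ~]# grep META= /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh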

Step 3 : We need to stop and start the volume for the changes to take effect.

[root@Node2 ~]# gluster vol stop RepVol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: RepVol1: success
[root@Node2 ~]# gluster vol start RepVol1
volume start: RepVol1: success
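
Optionally, we can confirm that the bricks came back online after the restart (just a quick check, output not shown here):

[root@Node2 ~]# gluster vol status RepVol1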

Step 4 : You will see a new mountpoint in the output of "df -h" on both nodes.

The volume will be mounted at /gluster/lock/.
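
A quick sanity check on each node:

[root@Node2 ~]# df -h /gluster/lock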

Step 5 : We can check the parameters in the configuration file which gets created automatically.

[root@Node2 ~]# cat /etc/sysconfig/ctdb
CTDB_RECOVERY_LOCK=/gluster/lock/lockfile
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
CTDB_SAMBA_SKIP_SHARE_CHECK=yes

Step 6 : We need to create the nodes file under /gluster/lock; if it is already present, we need to modify it. The IPs below are my management IPs, which are assigned to eth0. This needs to be done on one node only.

[root@Node2 ~]# cat /gluster/lock/nodes
192.168.111.129
192.168.111.130
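
If the file does not exist yet, it can be created like this (a minimal sketch; substitute the management IPs of your own nodes):

[root@Node2 ~]# echo "192.168.111.129" > /gluster/lock/nodes
[root@Node2 ~]# echo "192.168.111.130" >> /gluster/lock/nodes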

Link this file to the path CTDB expects; this needs to be done on both nodes.

[root@Node2 lock]# ln -s /gluster/lock/nodes /etc/ctdb/nodes

Step 7 : We need one floating IP which will move between the nodes to provide continuous availability of the volume on the client side.

[root@Node2 lock]# cat /etc/ctdb/public_addresses
192.168.111.12/24 eth0
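
I keep this file on the shared lock volume as well, so that both nodes see the same content through the symlink created below. A sketch of creating it there (the floating IP, prefix and interface are from my lab and will differ in your setup):

[root@Node2 lock]# echo "192.168.111.12/24 eth0" > /gluster/lock/public_addresses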

The link needs to be created on both nodes.

[root@Node2 lock]# ln -s /gluster/lock/public_addresses /etc/ctdb/public_addresses

Step 8 : After that, check the status of CTDB and start the service.

[root@Node2 ~]# service ctdb status
ctdbd is stopped

[root@Node2 ~]# service ctdb start
Starting ctdbd service:                                    [  OK  ]
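
Optionally, enable the service at boot as well (this assumes the SysV init scripts used on RHEL 6):

[root@Node2 ~]# chkconfig ctdb on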

[root@Node1 ~]# ctdb ip
Public IPs on node 0
192.168.111.12 1

[root@Node1 ~]# ctdb status
Number of nodes:2
pnn:0 192.168.111.129  OK (THIS NODE)
pnn:1 192.168.111.130  OK
Generation:498454919
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

Step 9 : We can check the floating (public) IP using the below command.

[root@Node1 ~]#  ip addr show
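
For example, to look only at the interface which carries the floating IP (just a convenience filter):

[root@Node1 ~]# ip addr show eth0 | grep 192.168.111.12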

Step 10 : We can mount the volume on the client using the public IP, in this case 192.168.111.12.
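
For example, the client-side mount commands could look like the sketch below; the mount points are illustrative and the CIFS share name assumes the default gluster-<volname> share created by the Samba hook scripts:

[root@Client ~]# mount -t nfs -o vers=3 192.168.111.12:/RepVol1 /mnt/repvol1
[root@Client ~]# mount -t cifs //192.168.111.12/gluster-RepVol1 /mnt/repvol1_cifs -o username=testuser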

Now suppose this IP is assigned to eth0 on Node1 and we have mounted the volume on the client using this IP. If we bring down Node1, the IP will move to eth0 on Node2. We can verify this with the "ip addr show" command.

On the client side you will see a glitch for a few seconds, after which you will get the service from Node2. The client is completely unaware of which Gluster node is serving it.
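
One way to test the failover (a sketch; stopping the CTDB service simulates the node going down):

[root@Node1 ~]# service ctdb stop
[root@Node2 ~]# ctdb ip
[root@Node2 ~]# ip addr show eth0

After the stop, "ctdb ip" on Node2 should show 192.168.111.12 hosted by the remaining node, and the client mount should keep working after the short pause.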


2 thoughts on "How to configure CTDB in GlusterFS?"

  1. Ryan Mills

    2016/03/22 14:25:00.014545 [10884]: client/ctdb_client.c:267 Failed to connect client socket to daemon. Errno:No such file or directory(2)
    common/cmdline.c:156 Failed to connect to daemon
    2016/03/22 14:25:00.014609 [10884]: Failed to init ctdb

    Receiving this error when I try ctdb status.
    Logs show:

    2016/03/22 14:24:49.297720 [10883]: Unable to bind on ctdb socket '/var/lib/run/ctdb/ctdbd.socket'
    2016/03/22 14:24:49.297781 [10883]: Cannot continue. Exiting!

    Any idea what I can do to fix this?

    1. Vikrant (post author)

      Ryan, sorry, I am not sure about this error. If you are using Red Hat Gluster, I suggest you open a support case with them. On Google I have seen a couple of threads about this bind socket error; maybe one of them can help you.

