How to set client-side quorum in Gluster?

As Gluster is a distributed file system, split-brain conditions cannot be ruled out. To deal with split-brain we can enforce quorum at the server level and also at the client level. In this article I am going to show how to set quorum at the client level so that when quorum is lost the volume is brought into RO (read-only) mode instead of staying RW (read-write), preventing a split-brain.

Client-side quorum can be set in two ways: fixed (a manually specified count) or auto (an automatic majority).

Step 1 : Client-side quorum setting in fixed mode. I have a replicated volume (RepVol2) spread across two nodes. From a Gluster node I am setting the client quorum using the fixed option. With fixed we also need to specify the minimum number of bricks that must be up for the volume to stay in RW mode on the client; if that number is not met, the volume on the client goes into RO mode.

[root@Node2 ~]# gluster vol set RepVol2 cluster.quorum-type fixed
volume set: success

In my case I want both of my nodes to be up, so I am setting the count to 2.

[root@Node2 RepBrck2Node2]# gluster vol set RepVol2 cluster.quorum-count 2
volume set: success
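
The fixed rule configured above can be sketched as a tiny shell helper (a sketch for illustration only, with names of my own choosing; the real decision is made inside the Gluster client):

```shell
# fixed_quorum_met: with cluster.quorum-type=fixed, writes are allowed
# only while at least quorum-count bricks are up (a sketch of the rule,
# not Gluster's actual implementation).
fixed_quorum_met() {
    up_bricks="$1"
    quorum_count="$2"
    [ "$up_bricks" -ge "$quorum_count" ]
}

fixed_quorum_met 2 2 && echo RW || echo RO   # both bricks up -> RW
fixed_quorum_met 1 2 && echo RW || echo RO   # one brick down -> RO
```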

We can verify the same settings in the volume information as well.
[root@Node2 RepBrck2Node2]# gluster vol info RepVol2

Volume Name: RepVol2
Type: Replicate
Volume ID: edaf9eb7-74fc-4b48-91c0-1f2a12bb90e0
Status: Started
Snap Volume: no
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: Node3:/Replicated2/RepBrck2Node3
Brick2: Node2:/Replicated2/RepBrck2Node2
Options Reconfigured:
cluster.quorum-count: 2
cluster.quorum-type: fixed
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Step 2 : To verify the setting, I intentionally brought down one node (Node3) in the trusted storage pool and then tried to touch a file in the volume from the client (Node1). The volume is mounted at /mnt on the client side.

[root@Node1 mnt]# touch checkfile2
touch: cannot touch `checkfile2': Read-only file system

We get a read-only file system error, which is expected.
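
On the client, a script can detect this state before attempting writes by probing the mount with touch, the same way we just did by hand (a hypothetical helper of my own, not part of Gluster):

```shell
# is_writable: succeed if the given directory accepts new files, fail
# otherwise -- e.g. when the Gluster client has dropped the volume to
# read-only after losing quorum.
is_writable() {
    probe="$1/.quorum_probe_$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        return 0
    fi
    return 1
}

# Example: report the mount state before writing (/mnt as used in this
# article; any directory behaves the same).
if is_writable /mnt; then
    echo "RW"
else
    echo "RO"
fi
```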

Step 3 : As soon as I bring the interface of Node3 back up, I am able to create files again and the volume is back in RW mode.

Step 4 : I am going to reset the settings made in Step 1.

[root@Node2 ~]# gluster vol reset RepVol2 cluster.quorum-count
volume reset: success: reset volume successful
[root@Node2 ~]# gluster vol reset RepVol2 cluster.quorum-type
volume reset: success: reset volume successful

Step 5 : I am now setting the auto option on the volume. With auto, the mounted volume stays in RW mode on the client only while more than half of the bricks in the replica set are up (Gluster also treats quorum as met when exactly half the bricks are up and the first brick is among them). For a 3-brick replica that means 2 bricks; for a 2-brick replica it effectively means both, because losing one node drops the live count to 50%, which does not meet the criterion.
[root@Node2 ~]# gluster vol set RepVol2 cluster.quorum-type auto
volume set: success
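
The majority rule can likewise be sketched as a helper that computes the minimum live-brick count for auto mode (a sketch with a name of my own; it ignores the first-brick tie-break mentioned above for simplicity):

```shell
# auto_quorum_count: minimum number of bricks that must be up under
# cluster.quorum-type=auto -- strictly more than half of the replica
# count (first-brick tie-break not modelled here).
auto_quorum_count() {
    echo $(( $1 / 2 + 1 ))
}

auto_quorum_count 2   # -> 2 (both nodes, as in this article)
auto_quorum_count 3   # -> 2
auto_quorum_count 4   # -> 3
```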

When I brought down one node, the volume on the client side went into read-only mode.

[root@Node1 mnt]# touch autofile1
touch: cannot touch `autofile1': Read-only file system

If you set the cluster.quorum-count property while cluster.quorum-type is set to auto, the auto option will override the cluster.quorum-count value.
