How to Perform RHEL 6 Cluster Operations Using ccs

In this article I will demonstrate the various operations that can be performed on a cluster with the ccs command. If you are not comfortable with the command line, I recommend using the Conga interface to make cluster changes.

1) CLUSTER OPERATIONS

Case 1 : Various ways of finding the cluster configuration file version.

Method 1 : Using ccs command

[root@Node3 ~]# ccs -f /etc/cluster/cluster.conf --getversion
17

Method 2 : Check the configuration file.

[root@Node1 log]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="17" name="Shiv">

Method 3 : Using cman_tool command.

[root@Node3 ~]# cman_tool status | grep -i "Config Version"
Config Version: 17
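The three methods above can also be scripted. Here is a minimal sketch that pulls config_version out of the file with sed; a sample file stands in for /etc/cluster/cluster.conf so the snippet is self-contained:

```shell
# Write a sample cluster.conf mirroring the file shown above
# (on a real node you would read /etc/cluster/cluster.conf directly).
cat > /tmp/cluster.conf.sample <<'EOF'
<?xml version="1.0"?>
<cluster config_version="17" name="Shiv">
</cluster>
EOF

# Extract just the numeric config_version attribute.
version=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' /tmp/cluster.conf.sample)
echo "$version"
# prints: 17
```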

Case 2 : Stopping the cluster services on node.

Stop the cluster services on Node2 using the below command.

[root@Node3 log]# ccs -h Node2 --stop

Node2 will be in the offline state and will no longer be part of the cluster.


[root@Node3 log]# clustat -l
Cluster Status for Shiv @ Sun Oct 5 04:01:37 2014
Member Status: Quorate

Member Name                       ID   Status
------ ----                       ---- ------
192.168.111.150                   1    Online, rgmanager
192.168.111.151                   2    Offline
192.168.111.152                   3    Online, Local, rgmanager

Service Information
------- -----------

Service Name : service:IP_sg1
Current State : started (112)
Flags : none (0)
Owner : 192.168.111.150
Last Owner : 192.168.111.151
Last Transition : Sun Oct 5 03:32:22 2014

Node2 itself will not reboot; only the cluster services are stopped.

[root@Node2 log]# clustat -l
Could not connect to CMAN: No such file or directory

[root@Node2 log]# uptime
04:02:30 up 30 min, 1 user, load average: 0.00, 0.00, 0.00
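For monitoring scripts it can be handy to pull a single node's state out of the clustat member table. A minimal sketch, using a here-string that mimics the member table shown above (on a live node you would pipe the real `clustat -l` output instead):

```shell
# Sample member table mirroring the clustat output above.
clustat_sample='192.168.111.150 1 Online, rgmanager
192.168.111.151 2 Offline
192.168.111.152 3 Online, Local, rgmanager'

# Match the node name in column 1 and print its status column.
node=192.168.111.151
state=$(echo "$clustat_sample" | awk -v n="$node" '$1 == n { print $3 }')
echo "$node is $state"
# prints: 192.168.111.151 is Offline
```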

Case 3 : Starting the cluster services on node.

[root@Node3 log]# ccs -h Node2 --start


[root@Node3 log]# clustat -l
Cluster Status for Shiv @ Sun Oct 5 04:04:10 2014
Member Status: Quorate

Member Name                       ID   Status
------ ----                       ---- ------
192.168.111.150                   1    Online, rgmanager
192.168.111.151                   2    Online, rgmanager
192.168.111.152                   3    Online, Local, rgmanager

Service Information
------- -----------

Service Name : service:IP_sg1
Current State : started (112)
Flags : none (0)
Owner : 192.168.111.150
Last Owner : 192.168.111.151
Last Transition : Sun Oct 5 03:32:22 2014

2) NODE OPERATIONS

Case 1 : How to check which nodes are part of the cluster

Method 1 : Using ccs command.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsnodes
192.168.111.150: nodeid=1
192.168.111.151: nodeid=2
192.168.111.152: nodeid=3

Method 2 : The most robust method. It needs to be issued on each cluster node.

[root@Node3 log]# cman_tool nodes

Method 3 : Using clustat.

[root@Node3 log]# clustat -l

Case 2 : Removing a node from the cluster

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --rmnode 192.168.111.151

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsnodes
192.168.111.150: nodeid=1
192.168.111.152: nodeid=3

In the case of node removal, a simple reload such as "cman_tool version -r" will not work.

The removal is reflected in the output above, but the output below still shows that node as part of the cluster.

[root@Node3 log]# clustat -l
Cluster Status for Shiv @ Sun Oct 5 04:11:45 2014
Member Status: Quorate

Member Name                       ID   Status
------ ----                       ---- ------
192.168.111.150                   1    Online, rgmanager
192.168.111.151                   2    Online, rgmanager
192.168.111.152                   3    Online, Local, rgmanager

Service Information
------- -----------

Service Name : service:IP_sg1
Current State : started (112)
Flags : none (0)
Owner : 192.168.111.150
Last Owner : 192.168.111.151
Last Transition : Sun Oct 5 03:32:22 2014

We need to restart the whole cluster for the change to take effect.

Stop all nodes using the below command.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --stopall
Stopped 192.168.111.152
Stopped 192.168.111.150


Start all nodes again with the below command.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --startall
Started 192.168.111.152
Started 192.168.111.150
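A related point: ccs bumps config_version for you whenever it edits the file, but if cluster.conf is ever edited by hand, the version must be incremented before the new file is propagated (for example with "cman_tool version -r"), otherwise the running cluster ignores it. A minimal sketch on a throwaway copy (the file path here is illustrative):

```shell
# Throwaway copy standing in for a hand-edited cluster.conf.
cat > /tmp/cluster.conf.copy <<'EOF'
<?xml version="1.0"?>
<cluster config_version="17" name="Shiv">
</cluster>
EOF

# Read the current version, add one, and rewrite the attribute in place
# (sed -i as found on RHEL's GNU sed).
cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' /tmp/cluster.conf.copy)
new=$((cur + 1))
sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" /tmp/cluster.conf.copy
grep config_version /tmp/cluster.conf.copy
# prints: <cluster config_version="18" name="Shiv">
```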


3) FENCING OPERATIONS

Case 1 : How to check available fence options.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfenceopts
fence_rps10 - RPS10 Serial Switch
fence_vixel - No description available
fence_egenera - No description available
fence_xcat - No description available
fence_na - Node Assassin

Case 2 : How to check the fence devices currently configured on the cluster.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfencedev
vmwarefence: passwd=root123, login=root, ipaddr=192.168.111.130, agent=fence_vmware_soap
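To confirm in a script which fence agent a device uses, the agent= field can be extracted from the --lsfencedev line. A minimal sketch, using a sample line that mirrors the output above:

```shell
# Sample --lsfencedev line (mirrors the output above).
fencedev='vmwarefence: passwd=root123, login=root, ipaddr=192.168.111.130, agent=fence_vmware_soap'

# Pull out just the agent attribute.
agent=$(echo "$fencedev" | sed -n 's/.*agent=\([^,]*\).*/\1/p')
echo "$agent"
# prints: fence_vmware_soap
```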

Case 3 : How to check the fence instances configured for a node.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfenceinst Node1
192.168.111.152
vmwarefence-1
vmwarefence: ssl=on, uuid=42132ed4-a929-c17a-ce5a-6a61e0df1b8a, port=Red-Linux-3
192.168.111.151

4) FAILOVER DOMAIN OPERATIONS

Case 1 : How to check the current failover domains

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfailoverdomain
Random1: restricted=0, ordered=0, nofailback=0
192.168.111.150:
192.168.111.151:
192.168.111.152:
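The flags in the first line of that output follow the usual RHEL 6 failover-domain semantics: restricted=1 confines the service to the listed nodes, ordered=1 applies node priorities, and nofailback=1 keeps a service from migrating back when a preferred node rejoins. A minimal sketch that decodes them from a sample line mirroring the output above:

```shell
# Sample --lsfailoverdomain line (mirrors the output above).
line='Random1: restricted=0, ordered=0, nofailback=0'

# Extract each flag's 0/1 value.
for flag in restricted ordered nofailback; do
    val=$(echo "$line" | sed -n "s/.*$flag=\([01]\).*/\1/p")
    echo "$flag is $val"
done
# prints one "<flag> is 0" line per flag
```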

Case 2 : How to remove the existing failover domain

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --rmfailoverdomain Random1

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfailoverdomain

[root@Node3 log]# cman_tool version -r

Case 3 : How to add the failover domain

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --addfailoverdomain Random2

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfailoverdomain
Random2: restricted=0, ordered=0, nofailback=0

Case 4 : How to add the node to failover domain

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode Random2 Node1

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode Random2 Node2

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode Random2 Node3

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfailoverdomain
Random2: restricted=0, ordered=0, nofailback=0
Node1:
Node2:
Node3:

If you have mistakenly added the wrong node, you can remove it.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --rmfailoverdomainnode Random2 Node3

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsfailoverdomain
Random2: restricted=0, ordered=0, nofailback=0
Node1:
Node2:

5) SERVICE OPERATIONS

Case 1 : How to check the currently supported services.

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsserviceopts
service – Defines a service (resource group).
ASEHAagent – Sybase ASE Failover Instance
SAPDatabase – SAP database resource agent
SAPInstance – SAP instance resource agent
apache – Defines an Apache web server
clusterfs – Defines a cluster file system mount.

Case 2 : How to check the options for a particular service type, for example vm

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsserviceopts vm

Case 3 : How to check the currently configured services

[root@Node3 log]# ccs -f /etc/cluster/cluster.conf --lsservices
service: name=IP_sg1, autostart=0, recovery=relocate
ip: ref=192.168.111.160
resources:
ip: monitor_link=on, sleeptime=10, address=192.168.111.160
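For scripting, just the service names can be extracted from the --lsservices output, since each service entry starts with "service: name=". A minimal sketch using a sample that mirrors the listing above:

```shell
# Sample --lsservices output (mirrors the listing above).
lsservices='service: name=IP_sg1, autostart=0, recovery=relocate
ip: ref=192.168.111.160'

# Print the value of name= from each "service:" line.
names=$(echo "$lsservices" | sed -n 's/^service: name=\([^,]*\),.*/\1/p')
echo "$names"
# prints: IP_sg1
```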
