How to take a backup of GFS2 in Red Hat?

As we know, it is not possible to take a snapshot of a clustered file system while it is active on multiple nodes; to take a snapshot, the logical volume has to be activated exclusively on a single node.

To show how to do that, I have created a two-node Red Hat cluster and mounted one GFS2 file system on the mount point /mygfs2 on both nodes.

My Test Lab OS : Red Hat Enterprise Linux 6.2, 64-bit

[root@Node1 mygfs2]# clustat -l
Cluster Status for Shiv @ Tue Sep 30 03:08:18 2014
Member Status: Quorate

Member Name                             ID   Status
------ ----                             ---- ------
192.168.56.10                               1 Online, Local
192.168.56.11                               2 Online

[root@Node1 ~]# df -h /mygfs2/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/gfs_vg1-gfs_lv1
700M  281M  420M  41% /mygfs2

[root@Node2 ~]# df -h /mygfs2/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/gfs_vg1-gfs_lv1
700M  281M  420M  41% /mygfs2
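For reference, here is a minimal sketch of how such a file system could have been created, assuming the cluster name Shiv (seen in the clustat output above), the lock_dlm protocol, and two journals for the two nodes; the lock table suffix mygfs2 is just an example:

[root@Node1 ~]# mkfs.gfs2 -p lock_dlm -t Shiv:mygfs2 -j 2 /dev/gfs_vg1/gfs_lv1
[root@Node1 ~]# mount /dev/gfs_vg1/gfs_lv1 /mygfs2    # repeat the mount on Node2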

Step 1 : I have unmounted the GFS2 file system on both cluster nodes.

[root@Node1 ~]# umount /mygfs2
[root@Node1 ~]#

[root@Node2 ~]# umount /mygfs2
[root@Node2 ~]#

[root@Node1 ~]# lvs gfs_vg1/gfs_lv1
LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1 -wi-a- 700.00m

[root@Node2 ~]# lvs gfs_vg1/gfs_lv1
LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1 -wi-a- 700.00m

Step 2 : Deactivate the logical volume on both nodes using the commands below.

[root@Node1 ~]# lvchange -an gfs_vg1/gfs_lv1

[root@Node1 ~]# lvs gfs_vg1/gfs_lv1
LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1 -wi--- 700.00m

[root@Node2 ~]# lvchange -an gfs_vg1/gfs_lv1

[root@Node2 ~]# lvs gfs_vg1/gfs_lv1
LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1 -wi--- 700.00m

Step 3 : Activate the logical volume exclusively on a single node using the command below. This needs to be done on only one node.

[root@Node1 ~]# lvchange -aey gfs_vg1/gfs_lv1

[root@Node1 ~]# lvs gfs_vg1/gfs_lv1
LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1 -wi-a- 700.00m

Step 4 : Create the snapshot volume from the original volume. The snapshot volume is named "snap" and is 100MB in size.

[root@Node1 ~]# lvcreate --size 100M --snapshot --name snap /dev/gfs_vg1/gfs_lv1
Logical volume "snap" created

[root@Node1 ~]# lvs
LV      VG       Attr   LSize   Origin  Snap%  Move Log Copy%  Convert
gfs_lv1 gfs_vg1  owi-a- 700.00m
snap    gfs_vg1  swi-a- 100.00m gfs_lv1   0.00

Step 5 : Mount the snapshot volume on a temporary mount point to access the data inside it. Note the lockproto=lock_nolock mount option: it overrides the on-disk lock_dlm protocol so the snapshot can be mounted on a single node without going through the cluster lock manager.

[root@Node1 ~]# mount -o lockproto=lock_nolock /dev/gfs_vg1/snap /mnt

[root@Node1 ~]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/gfs_vg1-snap
700M  281M  420M  41% /mnt
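Now the actual backup can be taken from the snapshot with any standard tool while the original file system stays untouched. A minimal sketch using tar, assuming a hypothetical local backup directory /backup:

[root@Node1 ~]# tar -czf /backup/mygfs2-backup.tar.gz -C /mnt .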

Step 6 : After that I tried to deactivate the original volume, but I was getting an error because my snapshot was still mounted. I unmounted the snapshot volume, after which the deactivation succeeded, and then I activated the volume again. Note that it is activated exclusively (-aey) again here, because a volume that has a snapshot in a clustered volume group cannot be activated in clustered mode; we can switch back to clustered mode after removing the snapshot in Step 7.

[root@Node1 ~]# lvchange -aen gfs_vg1/gfs_lv1
LV gfs_vg1/gfs_lv1 has open snapshot snap: not deactivating

[root@Node1 ~]# umount /mnt

[root@Node1 ~]# lvchange -aen gfs_vg1/gfs_lv1
[root@Node1 ~]#

[root@Node1 ~]# lvchange -aey gfs_vg1/gfs_lv1

Step 7 : Remove the snapshot volume. Otherwise, once you start using the original volume again, the snapshot will keep filling up, and when it reaches 100% usage it becomes invalid and will create trouble.

[root@Node1 ~]# lvremove /dev/gfs_vg1/snap
Do you really want to remove active logical volume snap? [y/n]: y
Logical volume “snap” successfully removed
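After removing the snapshot, the volume can be switched back to clustered mode and the file system mounted on both nodes again. A sketch, assuming clvmd is running and the volume group is clustered:

[root@Node1 ~]# lvchange -an gfs_vg1/gfs_lv1
[root@Node1 ~]# lvchange -ay gfs_vg1/gfs_lv1    # clustered activation on all nodes via clvmd
[root@Node1 ~]# mount /dev/gfs_vg1/gfs_lv1 /mygfs2
[root@Node2 ~]# mount /dev/gfs_vg1/gfs_lv1 /mygfs2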

References :

How to backup GFS2 file system ?
https://access.redhat.com/solutions/307333
https://access.redhat.com/solutions/19283
How to change cluster Logical volume to non-clustered ?
https://access.redhat.com/solutions/3618
Why backup using rsync is very slow in GFS2 ?
https://access.redhat.com/solutions

Red Hat Cluster Cheat Sheet – Part 1

It's a two-node cluster without any fencing device. Here I am trying to show the basic commands we can use to check the status of a cluster; these commands are also helpful from an interview perspective.

How to check the version of the cluster software?

[root@Node2 ~]# cman_tool -V
cman_tool 3.0.12.1 (built May 8 2012 12:22:25)
Copyright (C) Red Hat, Inc. 2004-2010 All rights reserved.
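The versions of the installed cluster packages can also be checked directly with rpm (assuming the usual RHEL 6 package names):

[root@Node2 ~]# rpm -q cman rgmanager gfs2-utils lvm2-cluster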

How to list the GFS2 file systems present on the server?

A GFS2 file system can be mounted simultaneously on multiple nodes. The command below lists the mount groups and shows on how many nodes the file system is mounted.

[root@Node2 ~]# gfs2_control ls
gfs mountgroups
name gfs2
id 0x7474a276
flags 0x00000008 mounted
change member 2 joined 1 remove 0 failed 0 seq 1,1
members 1 2

How to list the fence devices configured for the cluster nodes?

A fencing device is used in a cluster to make sure that a node which gets separated from the cluster cannot write to the shared storage at the same time as the remaining nodes.

[root@Node2 ~]# ccs_tool lsfence
ccs_tool: Can't find "fencedevices" in /etc/cluster/cluster.conf
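The error above just means no fence devices are defined. On a cluster with fencing configured, the fencedevices section of /etc/cluster/cluster.conf would contain entries roughly like the following (a hypothetical IPMI example; the device name and addresses are made up):

[root@Node2 ~]# grep -A 2 "<fencedevices>" /etc/cluster/cluster.conf
<fencedevices>
        <fencedevice agent="fence_ipmilan" name="ipmi_node1" ipaddr="192.168.56.50" login="admin" passwd="secret"/>
</fencedevices>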

How to see whether any node is fenced from the cluster?

This shows whether any node in the cluster has been fenced. As mentioned earlier, I do not have a fencing device configured here.

[root@Node2 ~]# fence_tool ls
fence domain
member count 2
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2
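On a cluster that does have fencing configured, a node can also be fenced manually with fence_node, which is handy for testing the fencing setup (a sketch; do not run this against a production node):

[root@Node1 ~]# fence_node Node2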

How to list the journals of the file system?

Journals hold the file system's metadata updates and help to achieve better performance. The journal count should be at least equal to the number of nodes that will mount the file system.

[root@Node2 ~]# gfs2_tool journals /dev/mapper/cluster_vg-cluster_lv
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
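If you add a node to the cluster, you will need at least one more journal. Journals can be added to a mounted GFS2 file system with gfs2_jadd; a sketch, assuming the file system is mounted at a hypothetical mount point /cluster_fs:

[root@Node2 ~]# gfs2_jadd -j 1 /cluster_fs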

How to list the tuning parameters of a GFS2 file system?

We can adjust these parameters to improve the performance of the file system.

[root@Node2 ~]# gfs2_tool gettune /dev/mapper/cluster_vg-cluster_lv
incore_log_blocks = 1024
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000 (1, 1)
new_files_jdata = 0
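A tunable can be changed at runtime with gfs2_tool settune; the change does not persist across remounts. A sketch, again assuming a hypothetical mount point /cluster_fs:

[root@Node2 ~]# gfs2_tool settune /cluster_fs max_readahead 524288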

How to display the superblock of a GFS2 file system?

The superblock contains information about the whole file system. Use the command below to retrieve the superblock information of a GFS2 file system.

[root@Node2 ~]# gfs2_tool sb /dev/mapper/cluster_vg-cluster_lv all
mh_magic = 0x01161970
mh_type = 1
mh_format = 100
sb_fs_format = 1801
sb_multihost_format = 1900
sb_bsize = 4096
sb_bsize_shift = 12
no_formal_ino = 2
no_addr = 23
no_formal_ino = 1
no_addr = 22
sb_lockproto = lock_dlm
sb_locktable = MysqlCluster:gfs2
uuid = f5a6304d-d176-a83d-75db-b3c9f8739ed3
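The sb_lockproto and sb_locktable fields can be changed with gfs2_tool sb, but only while the file system is unmounted on all nodes. For example, to rename the lock table after a cluster rename (the new cluster name here is made up):

[root@Node2 ~]# gfs2_tool sb /dev/mapper/cluster_vg-cluster_lv table NewCluster:gfs2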