How to use snapshots in Gluster?

Snapshot is a new feature added in Red Hat Storage. In this article I am going to show the usage of the snapshot feature.

A snapshot, as we know, is a point-in-time copy of data. To use the snapshot feature, the bricks must reside on LVM logical volumes (LVs) created on a thin pool.

Step 1 : I ran the below commands on all nodes of the trusted storage pool that are going to provide bricks for the volume.

# Create the physical volume and volume group that will hold the thin pool
pvcreate /dev/sdc1
vgcreate dummyvg1 /dev/sdc1
# Create a thin pool and a thinly provisioned LV on top of it
lvcreate -L 950M -T dummyvg1/dummypool1 -c 256K --poolmetadatasize 5M
lvcreate -V 1G -T dummyvg1/dummypool1 -n dummylv1
# Format the thin LV with XFS for use as a brick
mkfs.xfs -f -i size=512 -n size=8192 /dev/dummyvg1/dummylv1
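
Creating the Gluster volume itself is not covered above; a minimal sketch, assuming the thin LV is mounted at /Replicatedthin1 on each node (matching the brick paths shown later), would be:

# On each node: mount the thin LV and create the brick directory (mount point is an assumption)
mkdir -p /Replicatedthin1
mount /dev/dummyvg1/dummylv1 /Replicatedthin1
mkdir /Replicatedthin1/DirNode1    # DirNode2 on Node2

# On one node: create and start the volume across both bricks
gluster volume create RepThinvol1 Node1:/Replicatedthin1/DirNode1 Node2:/Replicatedthin1/DirNode2
gluster volume start RepThinvol1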

Step 2 : After creating the volume, I created some files in it so that I could check the functionality of snapshots.
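
For example, the files could be created through a client mount (the mount point /mnt/RepThinvol1 below is only an assumed example):

# Mount the volume on a client and create a few empty test files
mkdir -p /mnt/RepThinvol1
mount -t glusterfs Node1:/RepThinvol1 /mnt/RepThinvol1
touch /mnt/RepThinvol1/file{1..10}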

[root@Node1 ~]# gluster vol info RepThinvol1

Volume Name: RepThinvol1
Type: Distribute
Volume ID: 8a284406-702a-4fdc-82bd-02c6df9eec2e
Status: Started
Snap Volume: no
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: Node1:/Replicatedthin1/DirNode1
Brick2: Node2:/Replicatedthin1/DirNode2
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[root@Node1 ~]# lvs dummyvg1
LV                                 VG       Attr       LSize   Pool       Origin   Data%  Meta%  Move Log Cpy%Sync Convert
80e60851afe543da983a34ada54b5e58_0 dummyvg1 Vwi-a-tz--   1.00g dummypool1 dummylv1 1.42
dummylv1                           dummyvg1 Vwi-aotz--   1.00g dummypool1          1.42
dummypool1                         dummyvg1 twi-a-tz-- 952.00m                     2.57   0.54

[root@Node1 ~]# cd /Replicatedthin1/DirNode1/
[root@Node1 DirNode1]# ll
total 0
-rw-r--r-- 2 root root 0 Dec 27 07:58 file10
-rw-r--r-- 2 root root 0 Dec 27 07:58 file3
-rw-r--r-- 2 root root 0 Dec 27 07:58 file4
-rw-r--r-- 2 root root 0 Dec 27 07:58 file7
-rw-r--r-- 2 root root 0 Dec 27 07:58 file9

Step 3 : Create the snapshot of the volume with the name snap1.

[root@Node1 DirNode1]# gluster snapshot create snap1 RepThinvol1
snapshot create: success: Snap snap1 created successfully

[root@Node1 DirNode1]# gluster snapshot list RepThinvol1
snap1
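
A snapshot can also be created with an optional description, and removed with snapshot delete when it is no longer needed (snap2 below is only an example name):

gluster snapshot create snap2 RepThinvol1 description "test snapshot before changes"
gluster snapshot delete snap2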

Step 4 : We can check the status and information of the snapshot.

[root@Node1 DirNode1]# gluster snapshot info snap1
Snapshot                  : snap1
Snap UUID                 : 4300908a-f91c-45a6-a98f-3bb09e9056e2
Created                   : 2014-12-27 08:00:17
Snap Volumes:

Snap Volume Name          : 80e60851afe543da983a34ada54b5e58
Origin Volume name        : RepThinvol1
Snaps taken for RepThinvol1      : 1
Snaps available for RepThinvol1  : 255
Status                    : Started

[root@Node1 DirNode1]# gluster snapshot status snap1

Snap Name : snap1
Snap UUID : 4300908a-f91c-45a6-a98f-3bb09e9056e2

Brick Path        :   Node1:/var/run/gluster/snaps/80e60851afe543da983a34ada54b5e58/brick1/DirNode1
Volume Group      :   dummyvg1
Brick Running     :   Yes
Brick PID         :   5795
Data Percentage   :   1.42
LV Size           :   1.00g

Brick Path        :   Node2:/var/run/gluster/snaps/80e60851afe543da983a34ada54b5e58/brick2/DirNode2
Volume Group      :   dummyvg1
Brick Running     :   Yes
Brick PID         :   4946
Data Percentage   :   1.42
LV Size           :   1.00g
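
The snapshot limits shown earlier in the volume info (snap-max-hard-limit, snap-max-soft-limit, auto-delete) can be displayed and tuned with gluster snapshot config; the values below are only examples:

gluster snapshot config RepThinvol1
gluster snapshot config RepThinvol1 snap-max-hard-limit 100
gluster snapshot config snap-max-soft-limit 80
gluster snapshot config auto-delete enable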

After the snapshot is created successfully, a new file system gets mounted on the node, as shown below. It is mounted on the other node as well, under the name brick2.

/dev/mapper/dummyvg1-f1d53cc0fed54cdb8fe85d1a064741fd_0
1012M   33M  980M   4% /var/run/gluster/snaps/f1d53cc0fed54cdb8fe85d1a064741fd/brick1

I then removed the file file3 from the brick on Node1.

[root@Node1 DirNode1]# ll
total 40960
-rw-r--r-- 2 root root 10485760 Dec 27 09:44 file10
-rw-r--r-- 2 root root        0 Dec 27  2014 file4
-rw-r--r-- 2 root root        0 Dec 27  2014 file7
-rw-r--r-- 2 root root        0 Dec 27  2014 file9

[root@Node2 DirNode2]# ll
total 20480
-rw-r--r-- 2 root root        0 Dec 27 09:46 file1
-rw-r--r-- 2 root root        0 Dec 27 09:46 file2
-rw-r--r-- 2 root root        0 Dec 27 09:46 file5
-rw-r--r-- 2 root root        0 Dec 27 09:46 file6
-rw-r--r-- 2 root root        0 Dec 27 09:46 file8

Step 5 : We can mount the snapshot on a client using the below command. Note : Only the glusterfs mount type can be used to mount it.

[root@Node3 ~]# mount -t glusterfs Node1:/snaps/snap1/RepThinvol1 /snapmnt/
[root@Node3 ~]# df -h /snapmnt/
Filesystem            Size  Used Avail Use% Mounted on
Node1:/snaps/snap1/RepThinvol1
2.0G   66M  2.0G   4% /snapmnt

It contains the files that are present on both bricks, and it is mounted in read-only (RO) mode.

[root@Node3 ~]# cd /snapmnt/
[root@Node3 snapmnt]# ll
total 0
-rw-r--r-- 1 root root 0 Dec 27 07:58 file1
-rw-r--r-- 1 root root 0 Dec 27 07:58 file10
-rw-r--r-- 1 root root 0 Dec 27 07:58 file2
-rw-r--r-- 1 root root 0 Dec 27 07:58 file3
-rw-r--r-- 1 root root 0 Dec 27 07:58 file4
-rw-r--r-- 1 root root 0 Dec 27 07:58 file5
-rw-r--r-- 1 root root 0 Dec 27 07:58 file6
-rw-r--r-- 1 root root 0 Dec 27 07:58 file7
-rw-r--r-- 1 root root 0 Dec 27 07:58 file8
-rw-r--r-- 1 root root 0 Dec 27 07:58 file9
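
Depending on the Gluster version, a snapshot may need to be activated before it can be mounted. If the mount fails for that reason, it can be activated (and later deactivated again) as follows:

gluster snapshot activate snap1
gluster snapshot deactivate snap1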

Step 6 : To restore the snapshot, we first need to stop the volume.

[root@Node1 DirNode1]# gluster vol stop RepThinvol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: RepThinvol1: success

[root@Node1 DirNode1]# gluster snapshot restore snap1
Snapshot restore: snap1: Snap restored successfully

After the restore, the snapshot is no longer present.

[root@Node1 DirNode1]# gluster snap list
No snapshots present
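
Since the volume was stopped for the restore, start it again before clients can access the data:

[root@Node1 DirNode1]# gluster vol start RepThinvol1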

You will see that file3, which was deleted earlier, is back in your brick on Node1.
