How to set up geo-replication in GlusterFS?

In this article I am going to show how to create geo-replication in GlusterFS version 2.0. I have four nodes (node1, node2, node3, node4) in the trusted storage pool, which acts as the master, and on the slave side I have one node (slave).
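For reference, the trusted storage pool as seen from Node1 would look something like this (Uuid lines trimmed from the output):

Node1# gluster peer status
Number of Peers: 3

Hostname: node2
State: Peer in Cluster (Connected)

Hostname: node3
State: Peer in Cluster (Connected)

Hostname: node4
State: Peer in Cluster (Connected)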

Step 1 : Created one distributed-replicated volume in the trusted storage pool, which we are going to use for geo-replication.

Node1# gluster vol list
repvol1
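If you are following along and do not have the volume yet, a distributed-replicated volume across the four nodes can be created along these lines. The brick path /bricks/repvol1 is just an assumption for illustration, and gluster may ask for force if the bricks sit on the root partition:

Node1# gluster volume create repvol1 replica 2 node1:/bricks/repvol1 node2:/bricks/repvol1 node3:/bricks/repvol1 node4:/bricks/repvol1
Node1# gluster volume start repvol1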

Step 2 : Generated an SSH key on Node1 of the trusted storage pool, saving it as secret.pem.

Node1# ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/glusterd/geo-replication/secret.pem.
Your public key has been saved in /var/lib/glusterd/geo-replication/secret.pem.pub.
The key fingerprint is:
d9:63:e5:c7:06:6a:90:b6:8b:a2:88:5c:5e:8e:33:84 root@Node1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|         .       |
|        +   o    |
|       . = + o   |
|  .     S * . +  |
| E .   . + . o   |
|  ..... .        |
|o.o++.           |
|o..oo.           |
+-----------------+

Step 3 : After the key generation, add the below command= prefix to the public key file. It restricts the key to running gsyncd, which is the command used for syncing the data between the sites.

Node1# cat /var/lib/glusterd/geo-replication/secret.pem.pub
command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvJvH4MRSSBe4kMKtWaS93k7Z52OHpuP+mdlAp5rzVm8uJ+L3uMqef3wTmHnW5N6Zlb90uS4Q42VwzhVi1i5sHwLzFvaXC+esYe+IRkv0sB9z1iI9tj9DIC+/bnrfAiwHdijYZgP+Ie8rFzB5y70AdK01YOxzuv7ynY6WXIzrBa21uFJmP5Xz6P4FQqo42vJlQsIr4O+9jaYsDMqPc8CrZ44Frg18Upbt3fl4ku6crP/rOOJ9SYssbSy82HeTDtCu6HEjpxxua7Oq/+5/BLeO0RzgbhLTOPLnBBEXeJWh9RCvpTvf9iG9msW+S4cYS5FCDEYSo87lffVhbtuU1N09Tw== root@Node1
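One way to prepend that command= prefix without editing the file by hand is a quick sed one-liner. It assumes gsyncd lives at /usr/libexec/glusterfs/gsyncd; adjust the path if your distribution puts it elsewhere:

Node1# sed -i 's|^ssh-rsa|command="/usr/libexec/glusterfs/gsyncd" ssh-rsa|' /var/lib/glusterd/geo-replication/secret.pem.pub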

Step 4 : Coming to the slave site, I have created the user and group on the destination node.

Slave# groupadd group1
Slave# useradd -g group1 user1
Slave# passwd user1
Changing password for user user1.
New password:
BAD PASSWORD: it is based on your username
Retype new password:
passwd: all authentication tokens updated successfully.

Step 5 : Created the slavevol1 volume on the slave node and added the below mountbroker entries to the glusterd configuration file.

Slave# gluster vol list
slavevol1
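If you are recreating this setup and slavevol1 does not exist yet, a simple single-brick volume is enough for this demo; something like the below works (the brick path is an assumption, and gluster may ask for force if the brick is on the root partition):

Slave# gluster volume create slavevol1 slave:/bricks/slavevol1
Slave# gluster volume start slavevol1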

Slave# mkdir /var/mountbroker-root
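The mountbroker root also needs to be owned by root and not writable by group or others, so tighten its permissions right after creating it:

Slave# chmod 0711 /var/mountbroker-root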

Slave# cat /etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option mountbroker-root /var/mountbroker-root
option geo-replication-log-group group1
option mountbroker-geo-replication.user1 slavevol1
end-volume

Step 6 : Restarted the glusterd service on the slave node.

Slave# service glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]

Step 7 : Coming back to the master node (Node1), copy the SSH key to the slave node using user1.

Node1# ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub user1@192.168.111.144     
The authenticity of host '192.168.111.144 (192.168.111.144)' can't be established.
RSA key fingerprint is 87:cd:36:71:38:96:11:2d:1f:31:87:97:ff:08:63:af.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.111.144' (RSA) to the list of known hosts.
user1@192.168.111.144's password:
Now try logging into the machine, with "ssh 'user1@192.168.111.144'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
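Before starting the session, it is worth confirming that the key actually works, using the same key file that gsyncd itself will use (see the ssh_command entry in the config output later in this article):

Node1# ssh -i /var/lib/glusterd/geo-replication/secret.pem user1@192.168.111.144

If this still prompts for a password, geo-replication will fail to establish the session (the comment thread at the end of this article shows exactly that symptom).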

Step 8 : Start the geo-replication session between the nodes.

Node1# gluster vol geo-replication repvol1 user1@192.168.111.144::slavevol1 start
Starting geo-replication session between repvol1 & user1@192.168.111.144::slavevol1 has been successful

Node1# gluster vol geo-replication repvol1 user1@192.168.111.144::slavevol1 status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
repvol1              user1@192.168.111.144::slavevol1                   starting

Node1# gluster vol geo-replication repvol1 user1@192.168.111.144::slavevol1 status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
repvol1              user1@192.168.111.144::slavevol1                   OK
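A quick way to sanity-check the session is to write a file on a mount of the master volume and look for it on a mount of the slave volume. The mount points below are just examples, and since geo-replication is asynchronous the file may take a little while to appear on the slave:

Node1# mkdir -p /mnt/repvol1
Node1# mount -t glusterfs node1:/repvol1 /mnt/repvol1
Node1# touch /mnt/repvol1/sync-test

Slave# mkdir -p /mnt/slavevol1
Slave# mount -t glusterfs slave:/slavevol1 /mnt/slavevol1
Slave# ls /mnt/slavevol1
sync-test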

Step 9 : Suppose that after the replication has been set up, you later plan to create one more volume. As I have done here, I created the volume repvol2 on the master node.

Node1# gluster vol list
repvol1
repvol2

We need to make the below modification on the slave side so that the newly created volume can also be replicated. I am using the same user (user1) created earlier for replication.

Slave# cat /etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option mountbroker-root /var/mountbroker-root
option geo-replication-log-group group1
option mountbroker-geo-replication.user1 slavevol1,slavevol2
end-volume
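Keep in mind that slavevol2 itself must also exist and be started on the slave node before the session is created; if it is not there yet, something along these lines works (brick path again an assumption):

Slave# gluster volume create slavevol2 slave:/bricks/slavevol2
Slave# gluster volume start slavevol2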

After adding the name of the new volume to the above configuration file, restart the glusterd service on the slave side.

Slave# service glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]

Step 10 : Start the geo-replication between the nodes for the repvol2 volume using user1.

Node1#  gluster vol geo-replication repvol2 user1@192.168.111.144::slavevol2 start
Node1#  gluster vol geo-replication repvol2 user1@192.168.111.144::slavevol2 status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
repvol2              user1@192.168.111.144::slavevol2                   starting...
Node1#  gluster vol geo-replication repvol2 user1@192.168.111.144::slavevol2 status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
repvol2              user1@192.168.111.144::slavevol2                   OK

Step 11 : To check the various log files and other configuration settings associated with the volume, use the config command.

Node1#  gluster vol geo-replication repvol2 user1@192.168.111.144::slavevol2 config
gluster_log_file: /var/log/glusterfs/geo-replication/repvol2/ssh%3A%2F%2Fuser1%40192.168.111.144%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol2.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
session_owner: ccd6dfce-09ae-478c-b22a-5deecc3022b7
remote_gsyncd: /usr/libexec/glusterfs/gsyncd
socketdir: /var/run
state_file: /var/lib/glusterd/geo-replication/repvol2/ssh%3A%2F%2Fuser1%40192.168.111.144%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol2.status
state_socket_unencoded: /var/lib/glusterd/geo-replication/repvol2/ssh%3A%2F%2Fuser1%40192.168.111.144%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol2.socket
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/repvol2/ssh%3A%2F%2Fuser1%40192.168.111.144%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol2.pid
log_file: /var/log/glusterfs/geo-replication/repvol2/ssh%3A%2F%2Fuser1%40192.168.111.144%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol2.log
gluster_params: xlator-option=*-dht.assert-no-child-down=true
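Individual options can also be queried by naming them after config; for example, to show just the log file location for this session:

Node1# gluster vol geo-replication repvol2 user1@192.168.111.144::slavevol2 config log_file

Appending a value instead sets the option (for example, config log-level DEBUG to raise log verbosity). The exact option names vary a little between GlusterFS versions, so check the gluster help output for yours.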

4 thoughts on "How to set up geo-replication in GlusterFS?"

  1. Sachikanta Mishra

Thanks for this user guide, but I am getting an error while trying to initiate geo-replication:

    [root@gfs-adm04 geo-replication]# gluster volume geo-replication Data01 georep-user@gfs-adm01::Volume01 create push-pem force
    Passwordless ssh login has not been setup with gfs-adm01 for user georep-user.
    geo-replication command failed
    [root@gfs-adm04 geo-replication]#

    Any help on this would be really great.

    Thanks,
    Sachikanta

  2. Sachikanta Mishra

When I try the command 'ssh georep-user@gfs-adm01' from gfs-adm04 to gfs-adm01, it asks me to enter a password, but if I use the command 'ssh -i /var/lib/glusterd/geo-replication/secret.pem georep-user@gfs-adm01' it works fine without a password.

However, I have followed all the steps that you have given on your blog. Am I doing anything wrong here? Please advise.

    Regards,
    Sachikanta

  3. Vikrant (Post author)

Have you issued this command, as mentioned in the article? It should copy your key to the secondary host. Once that is done, you should be able to make a passwordless connection without specifying secret.pem in the ssh command.
    Node1# ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub user1@192.168.111.144

