How to configure a pNFS server in Fedora 22?

In this article I am going to show how to configure a pNFS server in Fedora. After exporting the share, we will mount the filesystem on another Fedora node.

I downloaded the Fedora 22 ISO image from the Fedora website and ran dnf update after installing it.

Step 1: The export is similar to a plain NFS export on RHEL; the only option I have added is pnfs.

pnfsserver# cat /etc/exports
/test1 *(rw,no_root_squash,pnfs)
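
After editing /etc/exports, the export table has to be reloaded before clients can mount the share. Below is a minimal sketch; the server-side commands are shown as comments because they need root and a running nfs-server service, which are assumptions about your setup:

```shell
# Sanity-check that the pnfs option made it into the exports line
# (the line below is the one from this article's /etc/exports):
exports_line='/test1 *(rw,no_root_squash,pnfs)'
case "$exports_line" in
  *pnfs*) pnfs_present=yes ;;
  *)      pnfs_present=no ;;
esac
echo "pnfs option present: $pnfs_present"

# On the server, after editing /etc/exports, you would then run:
#   exportfs -ra    # re-read /etc/exports and re-export everything
#   exportfs -v     # verify /test1 is exported with the pnfs option
```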

Step 2: I mounted the exported share on another Fedora node using the command below. Before mounting the share, I checked the supported NFS versions, even though I am using the same version of Fedora on the client.

[root@pnfsclient ~]# cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
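
Each token in that file is a protocol version prefixed with "+" (enabled) or "-" (disabled), so the line above says NFSv2 is disabled while v3, v4, v4.1 and v4.2 are enabled. A small sketch that parses the line shown above:

```shell
# Parse the /proc/fs/nfsd/versions output captured in this article:
# "+" marks an enabled version, "-" a disabled one.
versions='-2 +3 +4 +4.1 +4.2'
enabled=''
for v in $versions; do
  case "$v" in
    +*) enabled="$enabled ${v#+}" ;;
  esac
done
echo "enabled versions:$enabled"
```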

I ran the command below in the background to capture the traffic with tcpdump.

[root@pnfsclient ~]# tcpdump -s 0 -i ens33 host 192.168.111.163 -w /tmp/fedmountv4.1pcap &
[1] 2598

Finally, I mounted the filesystem and killed the tcpdump background process.

[root@pnfsclient ~]# mount.nfs -o vers=4.1 192.168.111.163:/test1 /mnt

[root@pnfsclient ~]# kill 2598
[root@pnfsclient ~]# 58 packets captured
58 packets received by filter
0 packets dropped by kernel

[1]+  Done                    tcpdump -s 0 -i ens33 host 192.168.111.163 -w /tmp/fedmountv4.1pcap

Step 3: Let's check the options of the mounted filesystem.

[root@pnfsclient ~]# grep -w nfs4 /proc/mounts
192.168.111.163:/test1 /mnt nfs4 rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.111.164,local_lock=none,addr=192.168.111.163 0 0
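
The interesting field is vers=4.1, confirming the minor version we asked for. A small sketch that pulls the negotiated version out of a /proc/mounts entry (using the exact entry captured above):

```shell
# Extract the negotiated NFS version from a /proc/mounts line.
entry='192.168.111.163:/test1 /mnt nfs4 rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.111.164,local_lock=none,addr=192.168.111.163 0 0'
opts=$(echo "$entry" | awk '{print $4}')          # 4th field = mount options
vers=$(echo "$opts" | tr ',' '\n' | grep '^vers=') # isolate the vers= option
echo "$vers"
```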

But the mount options alone do not tell us whether the client is actually using pNFS. Use the command below to check the layout type the client negotiated.

[root@pnfsclient ~]# grep LAYOUT /proc/self/mountstats
nfsv4:  bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=LAYOUT_BLOCK_VOLUME
LAYOUTGET: 0 0 0 0 0 0 0 0
LAYOUTCOMMIT: 0 0 0 0 0 0 0 0
LAYOUTRETURN: 0 0 0 0 0 0 0 0
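
The pnfs= field in that line is what confirms the layout type; the LAYOUTGET/LAYOUTCOMMIT/LAYOUTRETURN counters track layout operations and are still zero here because no I/O has happened yet. A sketch that extracts the layout type from the mountstats line captured above:

```shell
# Pull the pNFS layout type out of the /proc/self/mountstats line.
line='nfsv4:  bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=LAYOUT_BLOCK_VOLUME'
layout=$(echo "$line" | sed -n 's/.*pnfs=\([A-Z_]*\).*/\1/p')
echo "client layout type: $layout"
```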

At the time of writing this article, the in-kernel pNFS server supported only the block layout.

Step 4: Let's have a look at the tcpdump output.

a) I am reading the capture from the command line instead of Wireshark. I am interested only in the EXCHANGE_ID call and reply, hence the grep filter.

[root@pnfsclient ~]# tshark -tad -n -r /tmp/fedmountv4.1pcap | grep -i exchange
Running as user "root" and group "root". This could be dangerous.
12 2015-07-05 21:42:23.183471 192.168.111.164 -> 192.168.111.163 NFS 310 V4 Call EXCHANGE_ID
13 2015-07-05 21:42:23.184525 192.168.111.163 -> 192.168.111.164 NFS 202 V4 Reply (Call In 12) EXCHANGE_ID

b) Frame 12 is the call from the client to the pNFS server. The output below is truncated to only the NFS part for the sake of brevity. I have indented the EXCHGID4_FLAG_USE_PNFS_DS and EXCHGID4_FLAG_USE_PNFS_MDS flag lines in the output below to highlight them.

[root@pnfsclient ~]# tshark -V -tad -n -r /tmp/fedmountv4.1pcap 'frame.number == 12'

Network File System, Ops(1): EXCHANGE_ID
[Program Version: 4]
[V4 Procedure: COMPOUND (1)]
Tag: <EMPTY>
length: 0
contents: <EMPTY>
minorversion: 1
Operations (count: 1): EXCHANGE_ID
Opcode: EXCHANGE_ID (42)
eia_clientowner
verifier: 0x5598b942166d2919
Data: <DATA>
length: 24
contents: <DATA>
flags: 0x00000101
0... .... .... .... .... .... .... .... = EXCHGID4_FLAG_CONFIRMED_R: Not set
.0.. .... .... .... .... .... .... .... = EXCHGID4_FLAG_UPD_CONFIRMED_REC_A: Not set
                .... .... .... .0.. .... .... .... .... = EXCHGID4_FLAG_USE_PNFS_DS: Not set
                .... .... .... ..0. .... .... .... .... = EXCHGID4_FLAG_USE_PNFS_MDS: Not set
.... .... .... ...0 .... .... .... .... = EXCHGID4_FLAG_USE_NON_PNFS: Not set
.... .... .... .... .... ...1 .... .... = EXCHGID4_FLAG_BIND_PRINC_STATEID: Set
.... .... .... .... .... .... .... ..0. = EXCHGID4_FLAG_SUPP_MOVED_MIGR: Not set
.... .... .... .... .... .... .... ...1 = EXCHGID4_FLAG_SUPP_MOVED_REFER: Set
eia_state_protect: SP4_NONE (0)
eia_client_impl_id
Implementor DNS domain name(nii_domain): kernel.org
length: 10
contents: kernel.org
fill bytes: opaque data
Implementation product name(nii_name): Linux 4.0.4-301.fc22.x86_64 #1 SMP Thu May 21 13:10:33 UTC 2015 x86_64
length: 70
contents: Linux 4.0.4-301.fc22.x86_64 #1 SMP Thu May 21 13:10:33 UTC 2015 x86_64
fill bytes: opaque data
Build timestamp(nii_date)
seconds: 0
nseconds: 0
[Main Opcode: EXCHANGE_ID (42)]

c) Checking the reply to the previous call, i.e. from the pNFS server to the client. The reply is in frame 13. Once again, the output is truncated.

[root@pnfsclient ~]# tshark -V -tad -n -r /tmp/fedmountv4.1pcap 'frame.number == 13'

Network File System, Ops(1): EXCHANGE_ID
[Program Version: 4]
[V4 Procedure: COMPOUND (1)]
Status: NFS4_OK (0)
Tag: <EMPTY>
length: 0
contents: <EMPTY>
Operations (count: 1)
Opcode: EXCHANGE_ID (42)
Status: NFS4_OK (0)
clientid: 0x6bb7985501000000
seqid: 0x00000001
flags: 0x00020001
0... .... .... .... .... .... .... .... = EXCHGID4_FLAG_CONFIRMED_R: Not set
.0.. .... .... .... .... .... .... .... = EXCHGID4_FLAG_UPD_CONFIRMED_REC_A: Not set
                .... .... .... .0.. .... .... .... .... = EXCHGID4_FLAG_USE_PNFS_DS: Not set
                .... .... .... ..1. .... .... .... .... = EXCHGID4_FLAG_USE_PNFS_MDS: Set
.... .... .... ...0 .... .... .... .... = EXCHGID4_FLAG_USE_NON_PNFS: Not set
.... .... .... .... .... ...0 .... .... = EXCHGID4_FLAG_BIND_PRINC_STATEID: Not set
.... .... .... .... .... .... .... ..0. = EXCHGID4_FLAG_SUPP_MOVED_MIGR: Not set
.... .... .... .... .... .... .... ...1 = EXCHGID4_FLAG_SUPP_MOVED_REFER: Set
eia_state_protect: SP4_NONE (0)
eir_server_owner
minor ID: 0
major ID: <DATA>
length: 21
contents: <DATA>
fill bytes: opaque data
server scope: <DATA>
length: 21
contents: <DATA>
fill bytes: opaque data
eir_server_impl_id
[Main Opcode: EXCHANGE_ID (42)]

I am not using any data server; that is why the EXCHGID4_FLAG_USE_PNFS_DS flag is unset. EXCHGID4_FLAG_USE_PNFS_MDS is set because our pNFS server is acting as a metadata server.
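
The same flag decoding tshark does above can be reproduced by masking the raw flag words against the constants defined in RFC 5661 (EXCHGID4_FLAG_USE_PNFS_DS = 0x00040000, EXCHGID4_FLAG_USE_PNFS_MDS = 0x00020000). A small sketch, using the flag words seen in frames 12 and 13:

```shell
# Decode the pNFS role bits of an EXCHANGE_ID flags word (RFC 5661).
decode() {
  flags=$(($1))
  [ $((flags & 0x00040000)) -ne 0 ] && ds=set || ds="not set"
  [ $((flags & 0x00020000)) -ne 0 ] && mds=set || mds="not set"
  echo "flags=$1 USE_PNFS_DS=$ds USE_PNFS_MDS=$mds"
}
decode 0x00000101   # client call (frame 12): no pNFS role bits set
decode 0x00020001   # server reply (frame 13): MDS role advertised
```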

I will come up with more articles on pNFS.

