Category Archives: RHEL 7

Difference between virtio-blk and virtio-scsi ?

Both virtio-blk and virtio-scsi are paravirtualized I/O devices, so what exactly is the difference between them? I had this question in my mind for some time. As readers may already know, by default OpenStack assigns a virtio-blk disk to an instance, which is why it shows up inside the instance as vd*. We also have the option to assign a SCSI disk to an instance by setting metadata properties on the Glance image used to spawn it; once the instance is spawned, the disk name shows as sd*.

The major advantage of virtio-scsi over virtio-blk is support for multiple block devices per virtual SCSI adapter. That said, virtio-scsi is not a replacement for virtio-blk; development work on virtio-blk is still ongoing.
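As a concrete example of the Glance metadata mentioned above, the bus can be switched per image from the CLI. This is a sketch assuming an OpenStack setup with the glance client; hw_disk_bus and hw_scsi_model are the standard property names (verify against your release), and IMAGE_ID is a placeholder.

```shell
# Mark a Glance image so instances spawned from it get a virtio-scsi
# controller (disks then appear in the guest as sd* instead of vd*).
# IMAGE_ID is a placeholder for your image's UUID.
glance image-update \
  --property hw_disk_bus=scsi \
  --property hw_scsi_model=virtio-scsi \
  IMAGE_ID
```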

virtio-blk

  • Three types of storage can be attached to a guest machine using virtio-blk.
    • File
    • Disk
    • LUN

Let’s look at the I/O path for virtio-blk and the improvements coming to it in the near future.

Guest :

App –> VFS/Filesystem –> Generic Block Layer –> IO scheduler –> virtio-blk.ko

Host :

QEMU (user space) –> VFS/Filesystem –> Generic Block Layer –> IO Scheduler –> Block Device Layer –> Hard Disk.

We can see that in the above flow two I/O schedulers come into the picture, which doesn’t make sense for every I/O pattern. Hence, in the guest flow, the scheduler-based path is being complemented by a BIO-based virtio-blk path. The scheduler option will still be available in case some applications take advantage of a scheduler.

Eventually it would be like :

  • struct request based [Using IO scheduler in guest]
  • struct bio based [Not using IO scheduler in guest]

This was merged in kernel 3.7.

Add ‘virtio_blk.use_bio=1’ to the kernel command line of the guest; no change is needed on the host machine. It is not enabled by default.
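Assuming the module parameter is exposed under /sys (as in mainline kernels that carry the bio-based path), the setting can be checked inside the guest:

```shell
# Inside the guest: check whether the bio-based path is enabled
# (prints N/0 when off, which is the default).
cat /sys/module/virtio_blk/parameters/use_bio

# Confirm the boot parameter was actually passed:
grep -o 'virtio_blk.use_bio=[01]' /proc/cmdline
```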

Kernel developers plan to make this feature smarter by enabling it automatically depending on the underlying device and choosing the best I/O path according to the workload.

Host:

The host-side virtio-blk implementations include:

  1. QEMU (current) : a global mutex is the main source of bottleneck, because only one thread can submit an I/O at a time.
  2. QEMU data plane : each virtio-blk device has a dedicated thread to handle requests. Requests are processed without going through the QEMU block layer, using Linux AIO directly.
  3. vhost-blk : an in-kernel virtio-blk device accelerator, similar to vhost-net. It skips host user-space involvement, which helps avoid context switches.

Because it is not based on the SCSI model, virtio-blk mainly lacks the following capabilities:

  • Thin-provisioned Storage for manageability
  • HA cluster on virtual machines.
  • Backup server for reliability.

As it is not based on the SCSI protocol, virtio-blk lacks capabilities like SCSI Persistent Reservations, which are required when disks attached to VMs are shared in a cluster environment; persistent reservations help avoid data corruption on shared devices.

Exception : with virtio-blk, SCSI commands do work when storage is attached to the guest as a LUN.

virtio-scsi

  • It has mainly three kind of configurations.
    • Qemu [User space target]
      • File
      • Disk
      • LUN   << SCSI command compatible.
    • LIO target [Kernel space target]
    • libiscsi [User Space iscsi initiator]  << SCSI command compatible.

It can support thousands of disks per PCI device, and these are true SCSI devices. Since the naming convention in the guest is sd*, it is also good for p2v/v2v migration.

How to determine from tcpdump which sec krb5, krb5i or krb5p option is used ?

In this article I am going to show how we can determine from a tcpdump which security mode an NFS share is mounted with.

Generally, on the client it can be easily identified from the /proc/mounts output, but you can confirm the same from a tcpdump as well.
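For example, the sec= option can be pulled straight out of /proc/mounts; the sample line below is illustrative:

```shell
# On a live client: grep -o 'sec=[^,]*' /proc/mounts
# Demonstrated here on a sample /proc/mounts entry:
line='server:/share /mnt nfs4 rw,vers=4.0,sec=krb5p,addr=192.0.2.1 0 0'
echo "$line" | grep -o 'sec=[^,]*'
```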

I have collected the tcpdump while mounting the share with krb5, krb5i and krb5p.

First of all, some information about these security modes, taken from the man page of nfs:

sec=krb5        provides cryptographic proof of a user’s identity in each RPC request.  This provides strong  verification of  the  identity  of  users  accessing data on the server.
sec=krb5i       security flavor  provides  a cryptographically  strong  guarantee that the data in each RPC request has not been tampered with.
sec=krb5p       security flavor encrypts every RPC request to prevent data exposure during network transit; however, expect  some  performance  impact  when  using  integrity  checking or encryption.

You need to check this option on NFS calls; don’t check it on NULL procedure calls.
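A capture like the ones below can be taken while mounting the share. The interface name is an assumption (adjust to your setup), and the display filter field comes from Wireshark's RPCSEC_GSS dissector (verify with your tshark version):

```shell
# Capture NFS traffic (port 2049) with full payloads so the RPC
# credentials are visible in the frames:
tcpdump -s0 -i eth0 -w /tmp/nfs-krb5.pcap port 2049

# Afterwards, list only the frames that carry an RPCSEC_GSS service field:
tshark -r /tmp/nfs-krb5.pcap -Y 'rpc.authgss.service' \
       -T fields -e rpc.authgss.service
```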

a) Identifying the type of security: here we can see "GSS Service: rpcsec_gss_svc_none (1)", which indicates that the krb5 option is in use.

~~~

Remote Procedure Call, Type:Call XID:0x998e7aaa
Fragment header: Last fragment, 128 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1000 0000 = Fragment Length: 128
XID: 0x998e7aaa (2576251562)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 1
GSS Service: rpcsec_gss_svc_none (1)
GSS Context
GSS Context Length: 4
GSS Context: 03000000
[Created in frame: 15]
[Destroyed in frame: 17]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff0000000029621b307b46a22f…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff0000000029621b307b46a22f2416a199…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 694295344
krb5_sgn_cksum: 7b46a22f2416a1998189d4f3
Network File System, Ops(3): PUTROOTFH, GETFH, GETATTR

~~~

b) Here we can see "GSS Service: rpcsec_gss_svc_integrity (2)", which indicates that the krb5i mount option is in use.

~~~

Remote Procedure Call, Type:Call XID:0x9000c99d
Fragment header: Last fragment, 168 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1010 1000 = Fragment Length: 168
XID: 0x9000c99d (2415970717)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 1
GSS Service: rpcsec_gss_svc_integrity (2)
GSS Context
GSS Context Length: 4
GSS Context: 18000000
[Created in frame: 13]
[Destroyed in frame: 15]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff00000000048c66c21f96b420…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff00000000048c66c21f96b4205aa1df73…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 76310210
krb5_sgn_cksum: 1f96b4205aa1df7338ecf03f
Network File System

~~~

c) If we see "GSS Service: rpcsec_gss_svc_privacy (3)", it means the krb5p option was used to mount the filesystem.

~~~

Remote Procedure Call, Type:Call XID:0xd66bae50
Fragment header: Last fragment, 204 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1100 1100 = Fragment Length: 204
XID: 0xd66bae50 (3597381200)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 3
GSS Service: rpcsec_gss_svc_privacy (3)
GSS Context
GSS Context Length: 4
GSS Context: 1a000000
[Created in frame: 13]
[Destroyed in frame: 16]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff000000002e82de2d0c6204fb…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff000000002e82de2d0c6204fbd16f6ad2…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 780328493
krb5_sgn_cksum: 0c6204fbd16f6ad270552024
GSS-Wrap
Network File System

~~~

Tip : when the krb5p option is in use, you will not be able to see the content of the NFS portion of the frame because it is encrypted; a new GSS-Wrap layer is introduced.

Summary :

GSS Service: rpcsec_gss_svc_none (1)      == krb5

GSS Service: rpcsec_gss_svc_integrity (2) == krb5i

GSS Service: rpcsec_gss_svc_privacy (3)   == krb5p
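The summary above can be expressed as a small helper; gss_to_sec is a hypothetical function name, not part of any tool:

```shell
# Map the rpcsec_gss service number seen in a capture to the sec= option.
gss_to_sec() {
  case "$1" in
    1) echo krb5  ;;   # rpcsec_gss_svc_none
    2) echo krb5i ;;   # rpcsec_gss_svc_integrity
    3) echo krb5p ;;   # rpcsec_gss_svc_privacy
    *) echo unknown ;;
  esac
}

gss_to_sec 2
```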

Reference :

https://tools.ietf.org/html/rfc2203#section-5.3.2

How to check the supported encryption types in nfs ?

In this article I am going to show how we can find the supported encryption types on the NFS server.

You need to read the below file to see the supported encryption types:

cat /proc/fs/nfsd/supported_krb5_enctypes
enctypes=18,17,16,23,3,1,2

It shows only the numbers; kindly find the mapping to encryption types below:

18 -- aes256-cts-hmac-sha1-96
17 -- aes128-cts-hmac-sha1-96
16 -- des3-cbc-sha1-kd
23 -- rc4-hmac
3  -- des-cbc-md5 
1 -- des-cbc-crc 
2 -- des-cbc-md4
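The mapping can also be scripted; enctype_name is a hypothetical helper written for this post:

```shell
# Translate the numeric enctypes reported by nfsd into names.
enctype_name() {
  case "$1" in
    18) echo aes256-cts-hmac-sha1-96 ;;
    17) echo aes128-cts-hmac-sha1-96 ;;
    16) echo des3-cbc-sha1-kd ;;
    23) echo rc4-hmac ;;
     3) echo des-cbc-md5 ;;
     1) echo des-cbc-crc ;;
     2) echo des-cbc-md4 ;;
     *) echo "unknown($1)" ;;
  esac
}

# Decode a sample line (on a live server read
# /proc/fs/nfsd/supported_krb5_enctypes instead):
for n in $(echo 'enctypes=18,17,16,23,3,1,2' | sed 's/^enctypes=//; s/,/ /g'); do
  printf '%s -- %s\n' "$n" "$(enctype_name "$n")"
done
```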

You can find which encryption type your NFS client and server are actually using from a tcpdump, or from the Kerberos logs when the ticket is generated.

Below is an example from an NFS tcpdump identifying which "Encryption type" is in use.

Network File System
    [Program Version: 4]
    [V4 Procedure: NULL (0)]
    GSS Context
        GSS Context Length: 4
        GSS Context: 18000000
        [Created in frame: 13]
    GSS Major Status: 0
    GSS Minor Status: 0
    GSS Sequence Window: 128
    GSS Token: 0000009c60819906092a864886f71201020202006f818930...
        GSS Token Length: 156
        GSS-API Generic Security Service Application Program Interface
            OID: 1.2.840.113554.1.2.2 (KRB5 - Kerberos 5)
            krb5_blob: 02006f8189308186a003020105a10302010fa27a3078a003...
                krb5_tok_id: KRB5_AP_REP (0x0002)
                Kerberos AP-REP
                    Pvno: 5
                    MSG Type: AP-REP (15)
                    enc-part aes256-cts-hmac-sha1-96
                        Encryption type: aes256-cts-hmac-sha1-96 (18)   <<<<<<<<<
                        enc-part: 164eac87c8137e058a30c26f87d4020f13b34621b048b9b4...


If you are facing issues while doing a Kerberos mount of an NFS share, it's good practice to check whether you are able to mount the share using a keytab generated with "all" encryption types instead of any specific one.

How to use Vagrant to create VM — Part 1

In this article I am going to show the usage of Vagrant. I found it to be the best tool for building a test environment, especially when you are working on clusters and need to set up a number of nodes to complete the cluster setup. Manual configuration may lead to issues; with Vagrant, a single command like "vagrant up" can bring up the whole cluster.

I have used VirtualBox as the provider in this article. Vagrant also supports VMware Workstation, VMware Fusion and AWS.

Step 1 : I have downloaded the CentOS 6.6 .box image, which is the compatible format for bringing up VMs using Vagrant. Along with the box image I have downloaded the Vagrant rpm. I already have VirtualBox installed on my machine. Everything is available for free download.

[root@test /vagrantwork]# ll
total 427500
-rw-r--r-- 1 root root 364979712 May 30 20:27 centos-6.6-x86_64.box
-rw-r--r-- 1 root root  72766101 Jul 11 04:18 vagrant_1.7.3_x86_64.rpm

Step 2 : I installed the Vagrant rpm using a simple "rpm -ivh" command; there were no dependency issues. Checking the version after the installation.

[root@test /vagrantwork]# vagrant --version
Vagrant 1.7.3

Step 3 : Creating a Vagrantfile using the below command.

[root@test /vagrantwork]# vagrant init centosfile1 centos-6.6-x86_64.box
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Step 4 : After issuing the above command, a new file named "Vagrantfile" is created in the current directory.

[root@test /vagrantwork]# ll Vagrantfile
-rw-r--r-- 1 root root 2963 Aug 26 19:12 Vagrantfile

Step 5 : Let's check the content of that file. As we passed the CentOS box image to the init command, the file references it in the output below.

[root@test /vagrantwork]# awk '{$1=$1;print}' Vagrantfile | egrep -v "^(#|$)"
Vagrant.configure(2) do |config|
config.vm.box = "centosfile1"
config.vm.box_url = "centos-6.6-x86_64.box"
end
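Since the point of using Vagrant here is cluster setups, the generated file can be replaced with a multi-machine definition. This is a sketch; the node count and names are arbitrary:

```shell
# Write a three-node Vagrantfile; a single `vagrant up` then boots all nodes.
cat > Vagrantfile <<'EOF'
Vagrant.configure(2) do |config|
  config.vm.box = "centosfile1"
  config.vm.box_url = "centos-6.6-x86_64.box"
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}"
    end
  end
end
EOF
```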

Step 6 : Let's start the virtual machine in one go. I have shown all the messages I got on the console while creating the new VM. It took only a few minutes to bring it up.

[root@test /vagrantwork]# vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'centosfile1' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'centosfile1' (v0) for provider: virtualbox
default: Unpacking necessary files from: file:///vagrantwork/centos-6.6-x86_64.box
==> default: Successfully added box 'centosfile1' (v0) for 'virtualbox'!
==> default: Importing base box 'centosfile1'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: vagrantwork_default_1440596686223_32361
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 4.3.28
default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
default: /vagrant => /vagrantwork

Step 7 : I checked the status of the running VM using the below command.

[root@test /vagrantwork]# vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

Step 8 : If you want information about the provider (in this case, VirtualBox), you may issue the below command.

[root@test /vagrantwork]# vagrant global-status
id       name    provider   state   directory
------------------------------------------------------------------------
444be5a  default virtualbox running /vagrantwork

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

Step 9 : As our VM is up and running fine, we can ssh into it without any password.

[root@test /vagrantwork]# vagrant ssh
Last login: Sat May 30 12:27:44 2015 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.

Step 10 : After logging into the VM, I am checking the shared filesystem. By default, Vagrant shares the directory in which the Vagrantfile was created.

[vagrant@localhost ~]$ df -h /vagrant/
Filesystem      Size  Used Avail Use% Mounted on
vagrant          92G   32G   61G  35% /vagrant
[vagrant@localhost ~]$ cd /vagrant/
[vagrant@localhost vagrant]$ ll
total 427500
-rw-r--r--. 1 vagrant vagrant 364979712 May 30 16:57 centos-6.6-x86_64.box
-rw-r--r--. 1 vagrant vagrant  72766101 Jul 11 00:48 vagrant_1.7.3_x86_64.rpm
-rw-r--r--. 1 vagrant vagrant      2963 Aug 26 15:42 Vagrantfile

Step 11 : If I create a file in the shared filesystem, the change is reflected in both directions.
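For example (the first command runs inside the guest, the second on the host; paths as in the steps above):

```shell
# Inside the guest: create a file under the synced folder.
touch /vagrant/created-in-guest

# Back on the host, the same file appears in the project directory;
# files created on the host show up under /vagrant in the guest too.
ls -l /vagrantwork/created-in-guest
```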

How to enable NFS on a NetApp filer ?

In this article, I am going to show how to enable the NFS service on a NetApp filer.

Step 1 : First verify whether a license is installed for NFS or not.

In my case I have added licenses for both NFS and CIFS.

filer1> license
Serial Number: 4082368-50-8
Owner: filer1
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
NFS               license NFS License           -
CIFS              license CIFS License          -

If it’s not already added, you need to add the license using the below command.

filer > license add LICENSE-NUMBER

Step 2 : Next, verify whether NFS is running or not. I found it was not running, so I started it; after that it shows as "running".

filer1> nfs status
NFS server is NOT running.

filer1> nfs on
NFS server is running.

filer1> nfs status
NFS server is running.

Step 3 : After enabling the NFS service, I was able to mount the NFS share on the client with NFSv3. It was not getting mounted with NFSv4.

If you look carefully at the output below, I used the nfs4 filesystem type to mount it with NFSv4, but it got mounted with NFSv3.

[root@nfsclient ~]# mount -t nfs4 192.168.111.150:/vol/nfsstore2 /mnt

[root@nfsclient ~]# df -h /mnt
Filesystem                      Size  Used Avail Use% Mounted on
192.168.111.150:/vol/nfsstore2   95M   64K   95M   1% /mnt

[root@nfsclient ~]# nfsstat -m
/mnt from 192.168.111.150:/vol/nfsstore2
Flags: rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.111.150,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=192.168.111.150

Step 4 : To mount it with NFSv4 we need to enable one nfs option on filer.

a) After issuing the below command you will find nfs.v4.enable in the off state. Note: the output below is truncated to show only this option.

filer1> options nfs

nfs.v4.enable                off

b) We need to enable this to support NFSv4.

filer1> options nfs.v4.enable on

Step 5 : Now, I am able to mount it using NFSv4 option.

[root@nfsclient ~]# mount -t nfs4 192.168.111.150:/vol/nfsstore2 /mnt

[root@nfsclient ~]# nfsstat -m
/mnt from 192.168.111.150:/vol/nfsstore2
Flags: rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.111.163,local_lock=none,addr=192.168.111.150

How to use referrals in NFS ?

In this article, I am going to show you the usage of referrals in NFS. For theoretical information about referrals, I suggest you refer to the NFSv4 RFC (https://tools.ietf.org/html/rfc7530#section-4).

My lab setup :

-> NFS servers (Fedora 22)

-> NFS client (RHEL 7)

Step 1 : I have exported an xfs filesystem from the NFS server named server1.

[root@server1 AWK]# df -h /share2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        5.0G   33M  5.0G   1% /share2

[root@server1 AWK]# cat /etc/exports
/share2 *(rw,no_root_squash)

Step 2 : I am creating a referral to the exported filesystem on another NFS server, server2.

[root@server2 ~]# cat /etc/exports
/share2 *(no_root_squash,no_subtree_check,refer=/share2@192.168.111.163,sync)

Make sure that the /share2 directory is present locally on server2.

Step 3 : Mount the filesystem on client and capture the tcpdump while mounting.

[root@rheldesk ~]# tcpdump -s0 -i ens33 -w /tmp/referral1.pcap &
[1] 2244

In below command 192.168.111.164 is the IP address of server2.

[root@rheldesk ~]# mount -t nfs -o vers=4 192.168.111.164:/share2 /referal1/

In the df output on the client you will see that the filesystem is mounted from 192.168.111.163, which is the IP address of server1, the actual NFS server from which the filesystem is exported.

Step 4 : Analyzing the tcpdump.

As per the RFC we should get an error like NFS4ERR_MOVED, and we see the same in the tcpdump.

Just a quick precheck to see how many IPs appear in the capture: three in total, one for the client and two for the NFS servers.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -R nfs -T fields -e ip.src -e ip.dst | sort | uniq -c
tshark: -R without -2 is deprecated. For single-pass filtering use -Y.
Running as user "root" and group "root". This could be dangerous.
21 192.168.111.123 192.168.111.163
18 192.168.111.123 192.168.111.164
21 192.168.111.163 192.168.111.123
18 192.168.111.164 192.168.111.123

a) Let’s check the NFS operations ending with error.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -R 'nfs.status != 0'
tshark: -R without -2 is deprecated. For single-pass filtering use -Y.
Running as user "root" and group "root". This could be dangerous.
144 2015-07-18 22:45:23.462655 192.168.111.164 -> 192.168.111.123 NFS 130 V4 Reply (Call In 143) LOOKUP | GETFH Status: NFS4ERR_MOVED
148 2015-07-18 22:45:23.464255 192.168.111.164 -> 192.168.111.123 NFS 130 V4 Reply (Call In 147) LOOKUP | GETFH Status: NFS4ERR_MOVED

b) As per the RFC, if the client encounters NFS4ERR_MOVED, it should send a GETATTR requesting FS_LOCATIONS and MOUNTED_ON_FILEID.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -Y 'rpc.xid == 0x479edcd4'
Running as user "root" and group "root". This could be dangerous.
145 2015-07-18 22:45:23.462924 192.168.111.123 -> 192.168.111.164 NFS 202 V4 Call LOOKUP DH: 0x62d40c52/share2
146 2015-07-18 22:45:23.463513 192.168.111.164 -> 192.168.111.123 NFS 230 V4 Reply (Call In 145) LOOKUP

The same thing happens in call 145: it asks for the filesystem location and mounted_on_fileid. Outputs are truncated.

In frame 145, the GETATTR call goes out with FSID and FS_LOCATIONS because of the error in frame 144.

Opcode: GETATTR (9)
Attr mask[0]: 0x01000100 (FSID, FS_LOCATIONS)
reqd_attr: FSID (8)
reco_attr: FS_LOCATIONS (24)
Attr mask[1]: 0x00800000 (MOUNTED_ON_FILEID)
reco_attr: MOUNTED_ON_FILEID (55)

In frame 146, we can see the reply coming back with the NFS server information, i.e. 192.168.111.163.

Opcode: GETATTR (9)
Status: NFS4_OK (0)
Attr mask[0]: 0x01000100 (FSID, FS_LOCATIONS)
reqd_attr: FSID (8)
fattr4_fsid
fsid4.major: 134217728
fsid4.minor: 134217728
reco_attr: FS_LOCATIONS (24)
fattr4_fs_locations
pathname components (1)
Name: share2
length: 6
contents: share2
fill bytes: opaque data
fs_location4:
num: 1
fs_location4
server:
num: 1
server: 192.168.111.163
length: 15
contents: 192.168.111.163
fill bytes: opaque data
pathname components (1)
Name: share2
length: 6
contents: share2
fill bytes: opaque data
Attr mask[1]: 0x00800000 (MOUNTED_ON_FILEID)
reco_attr: MOUNTED_ON_FILEID (55)
fileid: 0x000000000106bfe6

c) Now the client starts communicating with NFS server 192.168.111.163 to mount the filesystem. In subsequent calls it checks the file handle type, whether it is volatile or persistent.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -Y 'nfs && frame.number ge 160'
Running as user "root" and group "root". This could be dangerous.
160 2015-07-18 22:45:23.467782 192.168.111.123 -> 192.168.111.163 NFS 262 V4 Call SETCLIENTID
161 2015-07-18 22:45:23.468550 192.168.111.163 -> 192.168.111.123 NFS 130 V4 Reply (Call In 160) SETCLIENTID
162 2015-07-18 22:45:23.468669 192.168.111.123 -> 192.168.111.163 NFS 170 V4 Call SETCLIENTID_CONFIRM
165 2015-07-18 22:45:23.470028 192.168.111.163 -> 192.168.111.123 NFS 114 V4 Reply (Call In 162) SETCLIENTID_CONFIRM
167 2015-07-18 22:45:23.470693 192.168.111.163 -> 192.168.111.123 NFS 142 V1 CB_NULL Call
169 2015-07-18 22:45:23.471066 192.168.111.123 -> 192.168.111.163 NFS 94 V1 CB_NULL Reply (Call In 167)

Why nfstest is not generating the traces (packet captures) ?

If you are running tests with nfstest, you may have noticed that it does not generate packet captures for every test. Nothing to worry about: a new option (--createtraces) has been added to create a packet capture for each test when it is specified.

You may use the below link to check the recent developments regarding nfstest.

http://git.linux-nfs.org/?p=mora/nfstest.git;a=summary

I have cloned the git repository from the link above. If you don't have the git package installed on your system, you need to install it.

# git clone git://git.linux-nfs.org/projects/mora/nfstest.git

It will download a directory named "nfstest". cd into that directory and run the below command to install the tool.

# python setup.py install

Now just add the --createtraces option to each test run and you will get the packet capture.
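For example, with one of the bundled test suites (the server address is a placeholder; check `nfstest_posix --help` for the full option list):

```shell
# Run the POSIX test suite and keep a packet trace for every test:
nfstest_posix --server 192.168.111.150 --createtraces
```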

Happy NFS Testing!!!