Category Archives: RHEL 6

How to determine from tcpdump which sec option (krb5, krb5i or krb5p) is used?

In this article I am going to show how we can determine from a tcpdump which security mode is being used with an NFS share.

Generally, on the system it can be easily identified from the /proc/mounts output, but you can confirm the same from a tcpdump as well.
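
For example, on the client the mount options can be checked like this (a quick sketch; /mnt is a placeholder mount point and the sample entry is only illustrative):

~~~

# /mnt is a placeholder mount point; look for the sec= option in its entry
grep /mnt /proc/mounts
# sample entry (values will differ): server:/export /mnt nfs4 rw,vers=4,sec=krb5,... 0 0

~~~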

I have collected tcpdump captures while mounting the share with krb5, krb5i and krb5p.

First of all, some information about these security modes, taken from the nfs man page:

sec=krb5        provides cryptographic proof of a user's identity in each RPC request. This provides strong verification of the identity of users accessing data on the server.
sec=krb5i       security flavor provides a cryptographically strong guarantee that the data in each RPC request has not been tampered with.
sec=krb5p       security flavor encrypts every RPC request to prevent data exposure during network transit; however, expect some performance impact when using integrity checking or encryption.

You need to check this field on the NFS (COMPOUND) calls; don't check it on NULL procedure calls.

a) Identifying the type of security: here we can see "GSS Service: rpcsec_gss_svc_none (1)", which indicates that we are using the krb5 option.

~~~

Remote Procedure Call, Type:Call XID:0x998e7aaa
Fragment header: Last fragment, 128 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1000 0000 = Fragment Length: 128
XID: 0x998e7aaa (2576251562)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 1
GSS Service: rpcsec_gss_svc_none (1)
GSS Context
GSS Context Length: 4
GSS Context: 03000000
[Created in frame: 15]
[Destroyed in frame: 17]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff0000000029621b307b46a22f…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff0000000029621b307b46a22f2416a199…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 694295344
krb5_sgn_cksum: 7b46a22f2416a1998189d4f3
Network File System, Ops(3): PUTROOTFH, GETFH, GETATTR

~~~

b) Here we can see "GSS Service: rpcsec_gss_svc_integrity (2)", which indicates that we are using the krb5i mount option.

~~~

Remote Procedure Call, Type:Call XID:0x9000c99d
Fragment header: Last fragment, 168 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1010 1000 = Fragment Length: 168
XID: 0x9000c99d (2415970717)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 1
GSS Service: rpcsec_gss_svc_integrity (2)
GSS Context
GSS Context Length: 4
GSS Context: 18000000
[Created in frame: 13]
[Destroyed in frame: 15]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff00000000048c66c21f96b420…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff00000000048c66c21f96b4205aa1df73…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 76310210
krb5_sgn_cksum: 1f96b4205aa1df7338ecf03f
Network File System

~~~

c) If we see "GSS Service: rpcsec_gss_svc_privacy (3)", it means the krb5p option was used to mount the filesystem.

~~~

Remote Procedure Call, Type:Call XID:0xd66bae50
Fragment header: Last fragment, 204 bytes
1... .... .... .... .... .... .... .... = Last Fragment: Yes
.000 0000 0000 0000 0000 0000 1100 1100 = Fragment Length: 204
XID: 0xd66bae50 (3597381200)
Message Type: Call (0)
RPC Version: 2
Program: NFS (100003)
Program Version: 4
Procedure: COMPOUND (1)
Credentials
Flavor: RPCSEC_GSS (6)
Length: 24
GSS Version: 1
GSS Procedure: RPCSEC_GSS_DATA (0)
GSS Sequence Number: 3
GSS Service: rpcsec_gss_svc_privacy (3)
GSS Context
GSS Context Length: 4
GSS Context: 1a000000
[Created in frame: 13]
[Destroyed in frame: 16]
Verifier
Flavor: RPCSEC_GSS (6)
GSS Token: 0000001c040404ffffffffff000000002e82de2d0c6204fb…
GSS Token Length: 28
GSS-API Generic Security Service Application Program Interface
krb5_blob: 040404ffffffffff000000002e82de2d0c6204fbd16f6ad2…
krb5_tok_id: KRB_TOKEN_CFX_GetMic (0x0404)
krb5_cfx_flags: 0x04
.... .1.. = AcceptorSubkey: Set
.... ..0. = Sealed: Not set
.... ...0 = SendByAcceptor: Not set
krb5_filler: ffffffffff
krb5_cfx_seq: 780328493
krb5_sgn_cksum: 0c6204fbd16f6ad270552024
GSS-Wrap
Network File System

~~~

Tip : When the krb5p option is used you will not be able to see the content of the NFS portion of the frame because the payload is encrypted; a new GSS-Wrap layer is introduced.

Summary :

GSS Service: rpcsec_gss_svc_none (1)      == krb5

GSS Service: rpcsec_gss_svc_integrity (2) == krb5i

GSS Service: rpcsec_gss_svc_privacy (3)   == krb5p
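
If you do not want to scroll through the decode manually, these lines can be pulled out of the capture with tshark (a quick sketch; nfs_mount.pcap is a placeholder file name, and remember the tip above about ignoring NULL procedure calls):

~~~

# nfs_mount.pcap is a placeholder for your capture file
tshark -r nfs_mount.pcap -V -Y 'nfs' | grep 'GSS Service' | sort | uniq -c

~~~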

Reference :

https://tools.ietf.org/html/rfc2203#section-5.3.2

How to check the supported encryption types in NFS?

In this article I am going to show how we can find the supported encryption types from the NFS server.

You need to read the below file to see the supported encryption types:

cat /proc/fs/nfsd/supported_krb5_enctypes
enctypes=18,17,16,23,3,1,2

It will show you only the numbers; the mapping of numbers to encryption types is below:

18 -- aes256-cts-hmac-sha1-96
17 -- aes128-cts-hmac-sha1-96
16 -- des3-cbc-sha1-kd
23 -- rc4-hmac
3  -- des-cbc-md5 
1 -- des-cbc-crc 
2 -- des-cbc-md4

You can find which encryption type your NFS client and server are using from a tcpdump or from the Kerberos logs when the ticket is generated.
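
You can also list the encryption types of the keys stored in a keytab with klist (the default keytab path is assumed here):

# -k reads the keytab, -e prints the encryption type of each key
klist -e -k /etc/krb5.keytab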

Below is an example from an NFS tcpdump to identify which "Encryption type" we are using.

Network File System
    [Program Version: 4]
    [V4 Procedure: NULL (0)]
    GSS Context
        GSS Context Length: 4
        GSS Context: 18000000
        [Created in frame: 13]
    GSS Major Status: 0
    GSS Minor Status: 0
    GSS Sequence Window: 128
    GSS Token: 0000009c60819906092a864886f71201020202006f818930...
        GSS Token Length: 156
        GSS-API Generic Security Service Application Program Interface
            OID: 1.2.840.113554.1.2.2 (KRB5 - Kerberos 5)
            krb5_blob: 02006f8189308186a003020105a10302010fa27a3078a003...
                krb5_tok_id: KRB5_AP_REP (0x0002)
                Kerberos AP-REP
                    Pvno: 5
                    MSG Type: AP-REP (15)
                    enc-part aes256-cts-hmac-sha1-96
                        Encryption type: aes256-cts-hmac-sha1-96 (18)   <<<<<<<<<
                        enc-part: 164eac87c8137e058a30c26f87d4020f13b34621b048b9b4...


If you are facing an issue while doing a Kerberos mount of an NFS share, it is good practice to check whether you are able to mount the share using a keytab generated with "all" encryption types instead of any specific one.

How to use Vagrant to create a VM — Part 1

In this article I am going to show the usage of Vagrant. I found it to be the best tool to build a test environment, especially when you are working on clusters and need to set up a number of nodes to complete the cluster setup. Manual configuration may lead to issues; with Vagrant a single command like "vagrant up" brings up the whole cluster.

I have used VirtualBox as the provider in this article. Vagrant also supports VMware Workstation, VMware Fusion and AWS.

Step 1 : I have downloaded the CentOS 6.6 .box image, which is the format Vagrant uses to bring up VMs. Along with the box image I have downloaded the vagrant rpm. I already have VirtualBox installed on my machine. Everything is available for free download.

[root@test /vagrantwork]# ll
total 427500
-rw-r--r-- 1 root root 364979712 May 30 20:27 centos-6.6-x86_64.box
-rw-r--r-- 1 root root  72766101 Jul 11 04:18 vagrant_1.7.3_x86_64.rpm

Step 2 : I installed the vagrant rpm using a simple "rpm -ivh" command; there were no dependency issues. Checking the version after the installation:

[root@test /vagrantwork]# vagrant --version
Vagrant 1.7.3

Step 3 : Creating a Vagrantfile using the below command.

[root@test /vagrantwork]# vagrant init centosfile1 centos-6.6-x86_64.box
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Step 4 : After issuing the above command, a new file named "Vagrantfile" is created in the current directory by default.

[root@test /vagrantwork]# ll Vagrantfile
-rw-r--r-- 1 root root 2963 Aug 26 19:12 Vagrantfile

Step 5 : Let's check the content of that file. As we passed the CentOS box image to the init command, the file references it in the output below.

[root@test /vagrantwork]# awk '{$1=$1;print}' Vagrantfile | egrep -v "^(#|$)"
Vagrant.configure(2) do |config|
config.vm.box = "centosfile1"
config.vm.box_url = "centos-6.6-x86_64.box"
end

Step 6 : Let's start the virtual machine in one go. I have shown all the messages I got on the console while creating the new VM. It took only a few minutes to bring up the new VM.

[root@test /vagrantwork]# vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'centosfile1' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'centosfile1' (v0) for provider: virtualbox
default: Unpacking necessary files from: file:///vagrantwork/centos-6.6-x86_64.box
==> default: Successfully added box 'centosfile1' (v0) for 'virtualbox'!
==> default: Importing base box 'centosfile1'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: vagrantwork_default_1440596686223_32361
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 4.3.28
default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
default: /vagrant => /vagrantwork

Step 7 : I checked the status of the running VM using the below command.

[root@test /vagrantwork]# vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.
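
To stop or remove this VM later, the corresponding commands would be (shown here for reference only, not run in this article):

[root@test /vagrantwork]# vagrant halt       # graceful shutdown of the VM
[root@test /vagrantwork]# vagrant destroy    # delete the VM completely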

Step 8 : If you want to know about the provider (in this case VirtualBox), you may issue the below command.

[root@test /vagrantwork]# vagrant global-status
id       name    provider   state   directory
------------------------------------------------------------------------
444be5a  default virtualbox running /vagrantwork

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

Step 9 : As our VM is up and running fine, we can SSH to the VM without using any password.

[root@test /vagrantwork]# vagrant ssh
Last login: Sat May 30 12:27:44 2015 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.

Step 10 : After logging into the VM, I am checking the shared filesystem. By default Vagrant shares the directory in which we created the Vagrantfile.

[vagrant@localhost ~]$ df -h /vagrant/
Filesystem      Size  Used Avail Use% Mounted on
vagrant          92G   32G   61G  35% /vagrant
[vagrant@localhost ~]$ cd /vagrant/
[vagrant@localhost vagrant]$ ll
total 427500
-rw-r--r--. 1 vagrant vagrant 364979712 May 30 16:57 centos-6.6-x86_64.box
-rw-r--r--. 1 vagrant vagrant  72766101 Jul 11 00:48 vagrant_1.7.3_x86_64.rpm
-rw-r--r--. 1 vagrant vagrant      2963 Aug 26 15:42 Vagrantfile

Step 11 : If I create a file in the shared filesystem, the change is reflected in both directions (host and guest), as the quick test below shows.
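
A small sketch of that test (testfile is just a throwaway name):

On the guest:
[vagrant@localhost vagrant]$ touch /vagrant/testfile

On the host:
[root@test /vagrantwork]# ls testfile
testfile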

How to use referrals in NFS?

In this article, I am going to show you the usage of referrals in NFS. For theoretical information about referrals, I suggest you refer to the NFSv4 RFC (https://tools.ietf.org/html/rfc7530#section-4).

My lab setup :

--> NFS servers (Fedora 22).

--> NFS client (RHEL 7).

Step 1 : I have exported an XFS filesystem from the NFS server named server1.

[root@server1 AWK]# df -h /share2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        5.0G   33M  5.0G   1% /share2

[root@server1 AWK]# cat /etc/exports
/share2 *(rw,no_root_squash)

Step 2 : I am adding a referral, pointing to the exported filesystem, on another NFS server, server2.

[root@server2 ~]# cat /etc/exports
/share2 *(no_root_squash,no_subtree_check,refer=/share2@192.168.111.163,sync)

Make sure that the /share2 directory is present locally on server2, as shown below.
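
For example (assuming the directory does not exist yet):

[root@server2 ~]# mkdir -p /share2
[root@server2 ~]# exportfs -rav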

Step 3 : Mount the filesystem on client and capture the tcpdump while mounting.

[root@rheldesk ~]# tcpdump -s0 -i ens33 -w /tmp/referral1.pcap &
[1] 2244

In the below command, 192.168.111.164 is the IP address of server2.

[root@rheldesk ~]# mount -t nfs -o vers=4 192.168.111.164:/share2 /referal1/

In the output of df on the client you will see that the filesystem is mounted from 192.168.111.163, which is the IP address of server1, the actual NFS server from which we exported the filesystem.
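
A quick check on the client (a sketch; the sizes are the ones server1 reported earlier, your values may differ):

[root@rheldesk ~]# df -h /referal1/
Filesystem               Size  Used Avail Use% Mounted on
192.168.111.163:/share2  5.0G   33M  5.0G   1% /referal1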

Step 4 : Analyzing the tcpdump.

As per the RFC we should get an NFS4ERR_MOVED error, and we see exactly that in the tcpdump.

Just a precheck to see how many IPs sending packets were captured in the tcpdump: three in total, one for the client and two for the NFS servers.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -R nfs -T fields -e ip.src -e ip.dst | sort | uniq -c
tshark: -R without -2 is deprecated. For single-pass filtering use -Y.
Running as user "root" and group "root". This could be dangerous.
21 192.168.111.123 192.168.111.163
18 192.168.111.123 192.168.111.164
21 192.168.111.163 192.168.111.123
18 192.168.111.164 192.168.111.123

a) Let's check the NFS operations ending with an error.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -R 'nfs.status != 0'
tshark: -R without -2 is deprecated. For single-pass filtering use -Y.
Running as user "root" and group "root". This could be dangerous.
144 2015-07-18 22:45:23.462655 192.168.111.164 -> 192.168.111.123 NFS 130 V4 Reply (Call In 143) LOOKUP | GETFH Status: NFS4ERR_MOVED
148 2015-07-18 22:45:23.464255 192.168.111.164 -> 192.168.111.123 NFS 130 V4 Reply (Call In 147) LOOKUP | GETFH Status: NFS4ERR_MOVED

b) As per the RFC, if the client encounters NFS4ERR_MOVED then it should send a GETATTR with FS_LOCATIONS and MOUNTED_ON_FILEID.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -Y 'rpc.xid == 0x479edcd4'
Running as user "root" and group "root". This could be dangerous.
145 2015-07-18 22:45:23.462924 192.168.111.123 -> 192.168.111.164 NFS 202 V4 Call LOOKUP DH: 0x62d40c52/share2
146 2015-07-18 22:45:23.463513 192.168.111.164 -> 192.168.111.123 NFS 230 V4 Reply (Call In 145) LOOKUP

The same thing is happening in call 145: it asks for the filesystem location and mounted_on_fileid. The outputs below are truncated.

In frame 145 the GETATTR call goes out with FSID and FS_LOCATIONS because of the error in frame 144.

Opcode: GETATTR (9)
Attr mask[0]: 0x01000100 (FSID, FS_LOCATIONS)
reqd_attr: FSID (8)
reco_attr: FS_LOCATIONS (24)
Attr mask[1]: 0x00800000 (MOUNTED_ON_FILEID)
reco_attr: MOUNTED_ON_FILEID (55)

In frame 146, we can see the reply coming back with the referred NFS server's IP, i.e. 192.168.111.163.

Opcode: GETATTR (9)
Status: NFS4_OK (0)
Attr mask[0]: 0x01000100 (FSID, FS_LOCATIONS)
reqd_attr: FSID (8)
fattr4_fsid
fsid4.major: 134217728
fsid4.minor: 134217728
reco_attr: FS_LOCATIONS (24)
fattr4_fs_locations
pathname components (1)
Name: share2
length: 6
contents: share2
fill bytes: opaque data
fs_location4:
num: 1
fs_location4
server:
num: 1
server: 192.168.111.163
length: 15
contents: 192.168.111.163
fill bytes: opaque data
pathname components (1)
Name: share2
length: 6
contents: share2
fill bytes: opaque data
Attr mask[1]: 0x00800000 (MOUNTED_ON_FILEID)
reco_attr: MOUNTED_ON_FILEID (55)
fileid: 0x000000000106bfe6

c) Now the client starts communicating with NFS server 192.168.111.163 to mount the filesystem. In subsequent calls it checks the filehandle type, whether it is volatile or persistent.

[root@rheldesk tmp]# tshark -tad -n -r referral2.pcap -Y 'nfs && frame.number ge 160'
Running as user "root" and group "root". This could be dangerous.
160 2015-07-18 22:45:23.467782 192.168.111.123 -> 192.168.111.163 NFS 262 V4 Call SETCLIENTID
161 2015-07-18 22:45:23.468550 192.168.111.163 -> 192.168.111.123 NFS 130 V4 Reply (Call In 160) SETCLIENTID
162 2015-07-18 22:45:23.468669 192.168.111.123 -> 192.168.111.163 NFS 170 V4 Call SETCLIENTID_CONFIRM
165 2015-07-18 22:45:23.470028 192.168.111.163 -> 192.168.111.123 NFS 114 V4 Reply (Call In 162) SETCLIENTID_CONFIRM
167 2015-07-18 22:45:23.470693 192.168.111.163 -> 192.168.111.123 NFS 142 V1 CB_NULL Call
169 2015-07-18 22:45:23.471066 192.168.111.123 -> 192.168.111.163 NFS 94 V1 CB_NULL Reply (Call In 167)

How to check the source code in RHEL?

If you like to dig deeper into RHEL issues, Red Hat now provides a wonderful utility to browse the source code.

The below link is one destination for most of the answers 🙂

https://access.redhat.com/labs/

Search for "RED HAT CODE BROWSER" and click on "Go to App"

Or

Use the below link directly.

https://access.redhat.com/labs/psb/

Select the kernel version and then fs (if you want to check the code for a filesystem); all the filesystems are present inside fs. In the below link I am at the CIFS readdir code.

https://access.redhat.com/labs/psb/versions/kernel-3.10.0-229.4.2.el7/fs/cifs/readdir.c

It will show you all the structs and functions on the left-hand side.

How does the chown command work in case of NFSv4?

NFSv4 handles user identities differently than NFSv3. In NFSv3, an NFS client would simply pass a UID number in chown (and other requests) and the NFS server would accept it (even if the NFS server did not know of an account with that UID number). However, v4 was designed to pass identities in the form of user@domain.

Now, if the chown command is failing when issued from the NFS client, there are two possible reasons (a quick check for the first one is shown after this list):

--> Domain name mismatch between the NFS server and client.

--> The same user is not present on both the NFS server and client.
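
For the domain check, you can compare the idmapd setting on both nodes (the domain value shown is only an example; what matters is that it matches on server and client):

[root@nfsclient ~]# grep -i '^Domain' /etc/idmapd.conf
Domain = example.com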

I have created some scenarios in my test lab. I have used RHEL as the NFS server and another RHEL VM as the NFS client. The results of some tests may vary if you are using a different NAS server like EMC or NetApp.

Setup Info : Some basic information about my setup. /vicky is exported from the NFS server and has ownership user3:user3. Notably, user3 is present on both the NFS server and client with the same UID.

[root@nfsserver ~]# df -h /vicky
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/nfsvg-nfslv1  485M  6.3M  454M   2% /vicky

[root@nfsserver ~]# cat /etc/exports | egrep -v "^#|^$"
/vicky *(rw,sync,no_root_squash)

[root@nfsserver ~]# ls -ld /vicky
drwxrwxrwx. 2 user3 user3 1024 Jun  3 07:02 /vicky

[root@nfsclient ~]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
10.65.210.252:/vicky  485M  6.3M  454M   2% /mnt

[root@nfsclient ~]# ls -ld /mnt
drwxrwxrwx. 2 user3 user3 1024 Jun  3 07:02 /mnt

Test 1 : I deleted user3 from the NFS client. Now the ownership shows up numerically because the user is not present on the client, but on the server it still shows as user3:user3.

[root@nfsclient ~]# userdel -r user3

[root@nfsclient ~]# ls -ld /mnt
drwxrwxrwx. 2 502 504 1024 Jun  3 07:02 /mnt

Test 2 : I am changing the ownership to user2:user2. **Notably, the same user is present on the NFS server with the same UID and GID.**

On NFS client :
[root@nfsclient ~]# id user2
uid=501(user2) gid=501(user2) groups=501(user2),502(group2)

[root@nfsclient ~]# chown user2:user2 /mnt

[root@nfsclient ~]# ls -ld /mnt
drwxrwxrwx. 2 user2 user2 1024 Jun  3 07:02 /mnt

On NFS server : Checking the ownership on the NFS server, it shows as below, which is expected because we changed it to user2:user2 from the client.

[root@nfsserver ~]# id user2
uid=501(user2) gid=501(user2) groups=501(user2),502(group2)

[root@nfsserver ~]# ls -ld /vicky
drwxrwxrwx. 2 user2 user2 1024 Jun  3 07:02 /vicky

Test 3 : I deleted the user user2 from the NFS client and added the same user with a different UID. Now user2 has a different UID on the NFS server and client.

[root@nfsclient ~]# userdel -r user2

[root@nfsclient ~]# useradd -u 510 user2

[root@nfsclient ~]# chown user2:user2 /mnt

[root@nfsclient ~]# ls -ld /mnt
drwxrwxrwx. 2 nobody nobody 1024 Jun  3 07:02 /mnt

[root@nfsserver ~]# ls -ld /vicky
drwxrwxrwx. 2 510 510 1024 Jun  3 07:02 /vicky

Note : I have noticed that with an EMC NAS server I was not able to issue the chown command over NFSv4; if I mounted the same share with the NFSv3 option, the chown command worked.

I will add more test results if I encounter more issues.

How to set up a secure (Kerberos) NFS share?

In this article I am going to show the steps to configure a secure NFS server.

Below are the setup details.

* Two RHEL 6.5 machines (dns1 and dns2).

* dns1 is the DNS server, IPA server and NFS server.

* dns2 plays the role of IPA client and NFS client.

Step 1 : I have configured the IPA server on node dns1 following the Red Hat documentation. Trust me, it is a very easy setup. Before setting up IPA I manually configured DNS, making dns1 the server and dns2 the client.

Step 2 : After installing the IPA server, I added NFS service principals on the IPA server.

[root@dns1 ~]# ipa service-add nfs/dns1.abc.com
----------------------------------------
Added service "nfs/dns1.abc.com@ABC.COM"
----------------------------------------
Principal: nfs/dns1.abc.com@ABC.COM
Managed by: dns1.abc.com

[root@dns1 ~]# ipa service-add nfs/dns2.abc.com
----------------------------------------
Added service "nfs/dns2.abc.com@ABC.COM"
----------------------------------------
Principal: nfs/dns2.abc.com@ABC.COM
Managed by: dns2.abc.com

Step 3 : We need to retrieve the keytab for the NFS principal.

[root@dns1 ~]# ipa-getkeytab -s dns1.abc.com -p nfs/dns1.abc.com -k /etc/krb5.keytab
Keytab successfully retrieved and stored in: /etc/krb5.keytab

Step 4 : Configure the NFS configuration file to enable SECURE_NFS.

[root@dns1 ~]# cat /etc/sysconfig/nfs |grep SECURE
SECURE_NFS="yes"

Step 5 : I have exported the filesystem with the Kerberos security options from the NFS server, i.e. dns1.

[root@dns1 ~]# cat /etc/exports
/vicky *(rw,sync,sec=sys:krb5:krb5i:krb5p)

[root@dns1 ~]# /etc/init.d/nfs status
rpc.svcgssd (pid 2710) is running…
rpc.mountd (pid 2720) is running…
nfsd (pid 2736 2735 2734 2733 2732 2731 2730 2729) is running…
rpc.rquotad (pid 2716) is running…

On the NFS client (dns2), enable SECURE_NFS as well.

[root@dns2 ~]# cat /etc/sysconfig/nfs | grep SECURE
#SECURE_NFS="yes"
SECURE_NFS="yes"

Make sure that idmapd.conf has the domain name set correctly.

[root@dns2 ~]# cat /etc/idmapd.conf | grep -v ^# | grep -i domain
Domain = abc.com

[root@dns2 ~]# /etc/init.d/rpcidmapd start
Starting RPC idmapd:                                       [  OK  ]

Step 6 : Obtain a Kerberos ticket as the admin user on the NFS client.

[root@dns2 ~]# kinit admin
Password for admin@ABC.COM:

Step 7 : Retrieve the keytab for the client's NFS principal and start the RPC GSS services.

[root@dns2 ~]# ipa-getkeytab -s dns1.abc.com -p nfs/dns2.abc.com -k /etc/krb5.keytab
Keytab successfully retrieved and stored in: /etc/krb5.keytab

[root@dns2 ~]# /etc/init.d/rpcsvcgssd start
Starting RPC svcgssd:                                      [  OK  ]

[root@dns2 ~]# /etc/init.d/rpcgssd start
Starting RPC gssd:                                         [  OK  ]

Step 8 : Mount the NFSv4 share on the client using the Kerberos option.

[root@dns2 ~]# mount -t nfs4 -o sec=krb5 192.168.111.149:/vicky /mnt

[root@dns2 ~]# df -h /mnt
Filesystem              Size  Used Avail Use% Mounted on
192.168.111.149:/vicky   97M  5.5M   87M   6% /mnt

[root@dns2 ~]# cat /etc/mtab | grep -i nfs4
192.168.111.149:/vicky /mnt nfs4 rw,sec=krb5,addr=192.168.111.149,clientaddr=192.168.111.150 0 0

[root@dns2 ~]# umount /mnt
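
The "mount -a" below works because the share also has a matching entry in /etc/fstab; a minimal sketch of such an entry (same server, mount point and option as used above):

192.168.111.149:/vicky  /mnt  nfs4  sec=krb5  0 0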

[root@dns2 ~]# mount -a

[root@dns2 ~]# df -h /mnt
Filesystem              Size  Used Avail Use% Mounted on
192.168.111.149:/vicky   97M  5.5M   87M   6% /mnt