How to configure NFS server and client in Solaris 11

Today I was reading about NFS in Solaris 11.1. I tried a basic setup in my lab, which is shared here; I will cover more advanced NFS and autofs topics in a coming article.

My server is solaris11 (192.168.120.150) and my client is client1 (192.168.120.160).

  • Server-side configuration

Step 1: On the server, check whether any NFS mount point is shared. Currently nothing is shared, so the command returns no output.

root@solaris11:~# share -F nfs

Step 2: Create a directory and share it in read/write mode. Re-running the command from Step 1 now lists the share.

root@solaris11:~# mkdir /export/home/user1
root@solaris11:~# share -F nfs -o rw /export/home/user1
root@solaris11:~# share -F nfs
export_home_user1 /export/home/user1 sec=sys,rw
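
As a side note, on Solaris 11.1 a directory that is its own ZFS dataset can also be shared by setting the share.nfs property on that dataset instead of running share. This is only a sketch, and it assumes a hypothetical dataset rpool/export/home/user1 backs the directory:

root@solaris11:~# zfs set share.nfs=on rpool/export/home/user1
root@solaris11:~# zfs get share.nfs rpool/export/home/user1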

Step 3: Check the status of the NFS services.
root@solaris11:~# svcs -a | grep -i nfs
disabled 22:53:07 svc:/network/nfs/cbd:default
disabled 22:53:08 svc:/network/nfs/client:default
online 22:53:34 svc:/network/nfs/fedfs-client:default
online 23:06:17 svc:/network/nfs/status:default
online 23:06:17 svc:/network/nfs/mapid:default
online 23:06:17 svc:/network/nfs/nlockmgr:default
online 23:06:18 svc:/network/nfs/rquota:default
online 23:06:18 svc:/network/nfs/server:default
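
If svc:/network/nfs/server:default had shown up as disabled instead, it could be enabled together with its dependencies using svcadm; a minimal sketch:

root@solaris11:~# svcadm enable -r svc:/network/nfs/server:default
root@solaris11:~# svcs nfs/server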

The share created in Step 2 will persist across reboots.

  • Client-side configuration

Step 4: Check the shares exported by the NFS server with IP address 192.168.120.150. Before mounting the file system, I also checked the status of the NFS services on the client.

root@client1:~# dfshares 192.168.120.150
RESOURCE SERVER ACCESS TRANSPORT
192.168.120.150:/export/home/user1 192.168.120.150 - -

root@client1:~# svcs -a | grep nfs
disabled 22:53:06 svc:/network/nfs/mapid:default
disabled 22:53:06 svc:/network/nfs/status:default
disabled 22:53:06 svc:/network/nfs/nlockmgr:default
disabled 22:53:06 svc:/network/nfs/cbd:default
disabled 22:53:06 svc:/network/nfs/client:default
disabled 22:53:07 svc:/network/nfs/server:default
disabled 22:53:42 svc:/network/nfs/rquota:default
online 22:53:37 svc:/network/nfs/fedfs-client:default
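
Note that svc:/network/nfs/client:default is disabled on the client here; the manual mount in Step 5 still works. If you want NFS mounts handled at boot, or a mount complains about disabled NFS services, the client service and its dependencies can be enabled the same way; a quick sketch:

root@client1:~# svcadm enable -r svc:/network/nfs/client:default
root@client1:~# svcs nfs/client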

Step 5: Mount the NFS file system on the client, verify it, and then unmount it.

root@client1:~# mount -F nfs 192.168.120.150:/export/home/user1 /mnt
root@client1:~# df -h /mnt
Filesystem Size Used Available Capacity Mounted on
192.168.120.150:/export/home/user1
3.6G 32K 3.6G 1% /mnt

root@client1:~# umount /mnt
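
To re-establish this mount automatically at boot instead of mounting it by hand, an entry can be added to /etc/vfstab on the client. A sketch of such an entry (the fields are: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options):

192.168.120.150:/export/home/user1  -  /mnt  nfs  -  yes  rw

With yes in the mount-at-boot column, the nfs/client service should take care of mounting it during boot.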

To change the share options of an NFS mount point on the server, simply issue the share command again with the new options for the same file system. Here I have changed the settings so that client 192.168.120.160 can no longer mount the NFS file system (the '-' prefix in the access list excludes that client). Verify the setting with share -F nfs.

root@solaris11:~# share -F nfs -o ro=-192.168.120.160 /export/home/user1

root@solaris11:~# share -F nfs
export_home_user1 /export/home/user1 sec=sys,ro=-192.168.120.160

On client1, dfshares still lists the NFS mount point as shared, but we are no longer able to mount it.

root@client1:~# dfshares 192.168.120.150
RESOURCE SERVER ACCESS TRANSPORT
192.168.120.150:/export/home/user1 192.168.120.150 - -
root@client1:~# mount 192.168.120.150:/export/home/user1 /mnt
nfs mount: mount: /mnt: Permission denied

I reverted the NFS share settings back to those from Step 2 to continue with the article; the revert is simply the Step 2 share command re-issued, as shown below. The next part of the article gets more of a grip on NFS.
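
Re-issuing share with the original options replaces the restricted entry; share -F nfs afterwards should again show sec=sys,rw:

root@solaris11:~# share -F nfs -o rw /export/home/user1
root@solaris11:~# share -F nfs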

  • How to change the NFS version settings between server and client

Step 6: Check the status and properties of NFS on the server using the sharectl command.

root@solaris11:~# sharectl status
nfs online
autofs online client

root@solaris11:~# sharectl get nfs
servers=1024
lockd_listen_backlog=32
lockd_servers=1024
lockd_retransmit_timeout=5
grace_period=90
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=4
server_delegation=on
nfsmapid_domain=
max_connections=-1
protocol=ALL
listen_backlog=32
device=
showmount_info=full
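
A single property can also be queried with the -p option rather than listing everything; for example:

root@solaris11:~# sharectl get -p server_versmax nfs
server_versmax=4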

Similarly, we can do the same on client1. Currently client_versmax on the client side is 4; I am changing it to 3. Now the server supports up to NFS v4 while the client only goes up to NFS v3. Let's see how the negotiation takes place.

root@client1:~# sharectl set -p client_versmax=3 nfs

root@client1:~# sharectl get nfs
servers=1024
lockd_listen_backlog=32
lockd_servers=1024
lockd_retransmit_timeout=5
grace_period=90
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=3
server_delegation=on
nfsmapid_domain=
max_connections=-1
protocol=ALL
listen_backlog=32
device=
showmount_info=full

Now I mount the NFS file system shared from the server (192.168.120.150) on the client and check the negotiated NFS version. It shows version 3: the client's maximum is NFS v3 while the server's maximum is NFS v4, and the mount uses the highest version supported by both sides, which in this case is NFS v3.

root@client1:~# df -h /mnt
Filesystem Size Used Available Capacity Mounted on
192.168.120.150:/export/home/user1 3.6G 32K 3.6G 1% /mnt

root@client1:~# nfsstat -m
/mnt from 192.168.120.150:/export/home/user1
Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=32768,wsize=32768,retrans=5,timeo=600
Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60

root@client1:~# umount /mnt

I unmounted the file system and reverted client_versmax to 4. Mounting the NFS file system again now shows version 4, as expected. 🙂

root@client1:~# sharectl set -p client_versmax=4 nfs
root@client1:~# mount -F nfs 192.168.120.150:/export/home/user1 /mnt
root@client1:~# nfsstat -m
/mnt from 192.168.120.150:/export/home/user1
Flags: vers=4,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=1048576,wsize=1048576,retrans=5,timeo=600
Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60
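
As an alternative to changing client_versmax globally with sharectl, a specific NFS version can also be requested for an individual mount through the vers mount option; a small sketch, not used elsewhere in this article:

root@client1:~# mount -F nfs -o vers=3 192.168.120.150:/export/home/user1 /mnt
root@client1:~# nfsstat -m /mnt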

 


6 thoughts on “How to configure NFS server and client in Solaris 11”

  • Vikrant (post author)

    It's been a while since I played with NFS on Solaris, but if I am recalling correctly, yes, the NFS configuration on the NFS server survives reboots.

  • sam

    I have multiple NFS shares mounted on my client server; is there any file where I can see the list? Also, df -h is hanging because one of the NFS server hosts is not reachable right now. Will a reboot be an issue? I want to make sure the server won't check for those mount points while booting, so that it does not get stuck.

  • Vikrant (post author)

    A reboot should not cause an issue if you comment out all the NFS mounts in /etc/fstab (in the case of Linux). On Linux you can see the list of mounted filesystems in /etc/mtab and /proc/mounts; you may need to find the Solaris equivalent. Sorry, I don't remember the Solaris paths.
