
How to create a new volume in NetApp?

While playing with the NetApp simulator, I checked the minimum size that can be used to create a volume.

Step 1 : I tried to create a new volume named nfsstore1 on aggr0 with a size of 10MB. It throws an error because the minimum size is 20MB.

filer1*> vol create nfsstore1 aggr0 10M
vol create: Volume size is too small; minimum is 20m

Step 2 : I tried the same command with the minimum size reported in the error message from the previous step. As expected, the volume is created successfully.

filer1*> vol create nfsstore1 aggr0 20M
Fri Jul 31 12:47:45 GMT [filer1:wafl.vol.runningOutOfInodes:warning]: The file system on Volume nfsstore1 is using 80 percent or more of the files that can be contained on the volume.
Creation of volume 'nfsstore1' with size 20m on containing aggregate
'aggr0' has completed.
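
To double-check the result, the new volume can be queried like any other. The two commands below are from my notes rather than from the captured session, so treat them as a sketch:

filer1*> vol status nfsstore1
filer1*> df -h /vol/nfsstore1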

How to check volume and aggregate space in NetApp?

In this article, I am going to show the various commands that are used to check space utilization in NetApp.

1) Checking the volume and aggregate space using the df command. It works the same way as on Linux/Unix boxes.

a) Checking volume space.

filer1*> df -h
Filesystem               total       used      avail capacity  Mounted on
/vol/vol0/               808MB      224MB      584MB      28%  /vol/vol0/
/vol/vol0/.snapshot       42MB     1304KB       41MB       3%  /vol/vol0/.snapshot

b) Checking aggregate space.

filer1*> df -Ah
Aggregate                total       used      avail capacity
aggr0                    900MB      856MB       43MB      95%
aggr0/.snapshot            0TB        0TB        0TB       0%
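
Both commands also accept an argument to limit the output: df takes a volume path, and df -A takes an aggregate name. The commands below are a sketch and were not part of the captured session:

filer1*> df -h /vol/vol0
filer1*> df -Ah aggr0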

2) More information can be seen using the below command. It includes the snapshot reserve in the total used calculation.

filer1*> vol status -S
Volume : vol0

Feature                                       Used      Used%
--------------------------------  ----------------      -----
User Data                                    222MB        26%
Filesystem Metadata                          220KB         0%
Inodes                                      2.40MB         0%
Snapshot Reserve                            42.5MB         5%

Total                                        267MB        31%

3) Another way is to use the vol size command.

filer1*> vol size vol0
vol size: Flexible volume 'vol0' has size 871916k.
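
As a side note, the same vol size command can grow or shrink a flexible volume when a delta prefixed with + or - is given. The example below is a sketch and assumes the containing aggregate has enough free space:

filer1*> vol size vol0 +100m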

4) In a similar way, aggregate space utilization can be checked using the below commands.

filer1*> aggr status -S
Aggregate : aggr0

Feature                                       Used      Used%
--------------------------------  ----------------      -----
Volume Footprints                            856MB        95%
Aggregate Metadata                           212KB         0%

Total Used                                   856MB        95%

filer1*> aggr show_space aggr0
Aggregate 'aggr0'

Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape
1024000KB        102400KB             0KB        921600KB             0KB             0KB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                             876760KB        235100KB          volume

Aggregate                       Allocated            Used           Avail
Total space                      876760KB        235100KB         44628KB
Snap reserve                             0KB          5052KB             0KB
WAFL reserve                  102400KB          1596KB        100804KB

5) If we want to see the underlying disks of the aggregate, we can use the below command. Currently it is using three disks, and the RAID group name is rg0.

a) Checking the underlying disks.

filer1*> aggr status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal, block checksums)

RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
--------- ------  ------------- ---- ---- ---- ----- --------------    --------------
dparity   v5.16   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
parity    v5.17   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
data      v5.18   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448

Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block checksum
spare           v5.19   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.20   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.21   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.22   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.24   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.25   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.26   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.27   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.28   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.29   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.32   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448

b) Adding one disk to RAID group rg0 in aggr0.

filer1*> aggr add aggr0 -g rg0 -d v5.19
Fri Jul 31 12:51:38 GMT [filer1:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/v5.19 Shelf ? Bay ? [NETAPP   VD-1000MB-FZ-520 0042] S/N [08193103] to aggregate aggr0 has completed successfully
Addition of 1 disk to the aggregate has completed.

c) Okay, the disk was added successfully. Compare the output below with the output shown in a). Now rg0 (RAID group 0) has two data disks.

filer1*> aggr status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal, block checksums)

RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
--------- ------  ------------- ---- ---- ---- ----- --------------    --------------
dparity   v5.16   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
parity    v5.17   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
data      v5.18   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
data      v5.19   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448

Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block checksum
spare           v5.20   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.21   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.22   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.24   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.25   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.26   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.27   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.28   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.29   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.32   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
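
As a side note, instead of naming a specific spare with -d, aggr add also accepts a disk count and lets Data ONTAP pick suitable spares on its own. The command below is a sketch with an assumed count of 2; it was not run in this session:

filer1*> aggr add aggr0 -g rg0 2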

6) If you want to see on which aggregate your volume is created, you may use the below command for detailed information.

filer1*> vol status -v
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, diskroot, nosnap=off, nosnapdir=off,
                                64-bit                minra=off, no_atime_update=off, nvfail=off,
                                                      ignore_inconsistent=off, snapmirrored=off,
                                                      create_ucode=off, convert_ucode=off,
                                                      maxdirsize=41861, schedsnapname=ordinal,
                                                      fs_size_fixed=off, guarantee=volume,
                                                      svo_enable=off, svo_checksum=off,
                                                      svo_allow_rman=off, svo_reject_errors=off,
                                                      no_i2p=off, fractional_reserve=100, extent=off,
                                                      try_first=volume_grow, read_realloc=off,
                                                      snapshot_clone_dependency=off,
                                                      dlog_hole_reserve=off, nbu_archival_snap=off
                Volume UUID: 343319dd-7536-4a1e-b43b-d6ae1e07c653
                Containing aggregate: 'aggr0'

Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal, block checksums

Snapshot autodelete settings for vol0:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
mode=off
Hybrid Cache:
Eligibility=read-write

7) Another way is to check how many volumes are created on the aggregate.

filer1*> aggr status aggr0
 Aggr State           Status                Options
aggr0 online          raid_dp, aggr         root
                      64-bit

Volumes: vol0

Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal, block checksums
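
If you only need the volume-to-aggregate mapping and nothing else, the vol container command gives a one-line answer. This is a sketch and was not captured in this session:

filer1*> vol container vol0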

How to enable NFS on a NetApp filer?

In this article, I am going to show how to enable the NFS service on NetApp.

Step 1 : First, verify whether a license is installed for NFS or not.

In my case, I have added licenses for both NFS and CIFS.

filer1> license
Serial Number: 4082368-50-8
Owner: filer1
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
NFS               license NFS License           -
CIFS              license CIFS License          -

If it's not already added, you need to add the license using the below command.

filer> license add LICENSE-NUMBER

Step 2 : Next, verify whether NFS is running or not. I verified that it was not running, started it, and after that it shows in the "running" state.

filer1> nfs status
NFS server is NOT running.

filer1> nfs on
NFS server is running.

filer1> nfs status
NFS server is running.
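
Starting the service is only half of the story; the volume also has to be exported to the clients. On my simulator the export for /vol/nfsstore2 was already in place, since new volumes are normally added to /etc/exports automatically, but for reference the current exports can be listed and a rule persisted as shown below. This is a sketch, and the client network 192.168.111.0/24 is an assumption:

filer1> exportfs
filer1> exportfs -p rw=192.168.111.0/24 /vol/nfsstore2
filer1> rdfile /etc/exports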

Step 3 : After enabling the NFS service, I was able to mount the NFS share on the client with NFSv3. It was not getting mounted with NFSv4.

If you look carefully at the output below, I used the nfs4 filesystem type to mount it with NFSv4, but it got mounted with NFSv3.

[root@nfsclient ~]# mount -t nfs4 192.168.111.150:/vol/nfsstore2 /mnt

[root@nfsclient ~]# df -h /mnt
Filesystem                      Size  Used Avail Use% Mounted on
192.168.111.150:/vol/nfsstore2   95M   64K   95M   1% /mnt

[root@nfsclient ~]# nfsstat -m
/mnt from 192.168.111.150:/vol/nfsstore2
Flags: rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.111.150,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=192.168.111.150

Step 4 : To mount it with NFSv4, we need to enable one NFS option on the filer.

a) After issuing the below command, you will find that nfs.v4.enable is set to off. Note : The output below is truncated to show only this one option.

filer1> options nfs

nfs.v4.enable                off

b) We need to enable this to support NFSv4.

filer1> options nfs.v4.enable on
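
Querying the option again should now report it as on; the output shown here is the expected result rather than captured output:

filer1> options nfs.v4.enable
nfs.v4.enable                on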

Step 5 : Now, I am able to mount it using the NFSv4 option.

[root@nfsclient ~]# mount -t nfs4 192.168.111.150:/vol/nfsstore2 /mnt

[root@nfsclient ~]# nfsstat -m
/mnt from 192.168.111.150:/vol/nfsstore2
Flags: rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.111.163,local_lock=none,addr=192.168.111.150
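
To make this mount persistent across client reboots, a standard /etc/fstab entry on the Linux client is enough. This is generic Linux configuration rather than anything NetApp-specific, and the mount point /mnt is simply the one used above:

# /etc/fstab entry on the NFS client (sketch)
192.168.111.150:/vol/nfsstore2  /mnt  nfs4  defaults,_netdev  0 0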