Glimpse of btrfs File System, Part 2

Continuing from the previous post.

Case 7 : To increase the size of the file system, we can add new disks to it and then initiate a balance operation.

In the output of case 6 only sdh1 and sdf1 were present in the configuration. Now I have added two more disk partitions, each of the same 512MB size as before.

[root@localhost btrfs2]# btrfs device add /dev/sdi1 /btrfs2/
[root@localhost btrfs2]# btrfs device add /dev/sdg1 /btrfs2/

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 4 FS bytes used 168.00KB
devid    4 size 512.00MB used 240.00MB path /dev/sdi1
devid    2 size 512.00MB used 0.00 path /dev/sdh1
devid    3 size 512.00MB used 0.00 path /dev/sdg1
devid    1 size 512.00MB used 240.00MB path /dev/sdf1

Then I started balancing the data across the disks.

[root@localhost btrfs2]# btrfs balance start /btrfs2/
Done, had to relocate 2 out of 2 chunks

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 4 FS bytes used 104.00KB
devid    4 size 512.00MB used 32.00MB path /dev/sdi1
devid    2 size 512.00MB used 208.00MB path /dev/sdh1
devid    3 size 512.00MB used 208.00MB path /dev/sdg1
devid    1 size 512.00MB used 32.00MB path /dev/sdf1

Btrfs v0.20-rc1
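
A full balance rewrites every allocated chunk, so on a large file system it can take a long time. The balance command also accepts filters to limit the work; for example, a command like the following (shown here only as an illustration, not run in this session) would relocate just the data chunks that are at most 50% full:

[root@localhost btrfs2]# btrfs balance start -dusage=50 /btrfs2/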

Now we can see the increased space in the df output. The available space shown here is misleading, though: our configuration is RAID 1, so only 1GB of it is actually usable.

[root@localhost btrfs2]# df -h /btrfs2/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf1       2.0G  208K  2.0G   1% /btrfs2
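
df only sees the aggregate raw size of all devices. For RAID-aware accounting we can ask btrfs itself: btrfs fi df prints total and used space per block-group profile (lines of the form "Data, RAID1: total=..., used=..."), which makes the RAID 1 duplication visible:

[root@localhost btrfs2]# btrfs fi df /btrfs2/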

We can verify this by trying to create a 2GB file.

[root@localhost btrfs2]# dd if=/dev/zero of=/btrfs2/testfile1 bs=1M count=2000
dd: error writing ‘/btrfs2/testfile1’: No space left on device
949+0 records in
948+0 records out
994623488 bytes (995 MB) copied, 3.03338 s, 328 MB/s

Only about 949MB could be written: with RAID 1 every block is stored twice, so the 2GB of raw space holds roughly 1GB of data, part of which goes to metadata.

[root@localhost btrfs2]# du -sh *
949M    testfile1

[root@localhost btrfs2]# df -h /btrfs2/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf1       2.0G  1.9G   80M  96% /btrfs2

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 4 FS bytes used 950.04MB
devid    4 size 512.00MB used 479.00MB path /dev/sdi1
devid    2 size 512.00MB used 511.00MB path /dev/sdh1
devid    3 size 512.00MB used 511.00MB path /dev/sdg1
devid    1 size 512.00MB used 479.00MB path /dev/sdf1

Btrfs v0.20-rc1

Case 8 : Increasing the space in the file system again, since in the previous case all of it was used up.

I added two new 512MB partitions to the existing setup.

[root@localhost btrfs2]# btrfs device add /dev/sdf2 /btrfs2/
[root@localhost btrfs2]# btrfs device add /dev/sdg2 /btrfs2/

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 6 FS bytes used 950.04MB
devid    4 size 512.00MB used 479.00MB path /dev/sdi1
devid    2 size 512.00MB used 511.00MB path /dev/sdh1
devid    6 size 512.00MB used 0.00 path /dev/sdg2
devid    3 size 512.00MB used 511.00MB path /dev/sdg1
devid    5 size 512.00MB used 0.00 path /dev/sdf2
devid    1 size 512.00MB used 479.00MB path /dev/sdf1

Btrfs v0.20-rc1

[root@localhost btrfs2]# btrfs balance start /btrfs2/
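
On a larger file system a balance can run for quite a while; while it is running, its progress can be checked from another shell:

[root@localhost ~]# btrfs balance status /btrfs2/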

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 6 FS bytes used 950.11MB
devid    4 size 512.00MB used 479.00MB path /dev/sdi1
devid    2 size 512.00MB used 208.00MB path /dev/sdh1
devid    6 size 512.00MB used 511.00MB path /dev/sdg2
devid    3 size 512.00MB used 208.00MB path /dev/sdg1
devid    5 size 512.00MB used 511.00MB path /dev/sdf2
devid    1 size 512.00MB used 479.00MB path /dev/sdf1

Btrfs v0.20-rc1
[root@localhost btrfs2]# df -h /btrfs2/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf1       3.0G  1.9G  1.1G  64% /btrfs2

Case 9 : Reducing the file system size by removing the disks that we added for expansion.

We need to issue the balance command a couple of times, until the output shows the used space sitting on two disks only. And if we want to end up with only two disks, we must make sure each remaining disk has enough space to hold a complete copy of the data; remember, this is RAID 1.

[root@localhost btrfs2]# btrfs balance start /btrfs2/
Done, had to relocate 5 out of 5 chunks

[root@localhost btrfs2]# btrfs fi show
failed to open /dev/fd0: No such device or address
failed to open /dev/sr0: No medium found
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 6 FS bytes used 160.00KB
devid    4 size 512.00MB used 0.00 path /dev/sdi1
devid    2 size 512.00MB used 32.00MB path /dev/sdh1
devid    6 size 512.00MB used 320.00MB path /dev/sdg2
devid    3 size 512.00MB used 32.00MB path /dev/sdg1
devid    5 size 512.00MB used 320.00MB path /dev/sdf2
devid    1 size 512.00MB used 0.00 path /dev/sdf1

I issued the balance command two more times, after which the final output became:

[root@localhost btrfs2]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 6 FS bytes used 168.00KB
devid    4 size 512.00MB used 0.00 path /dev/sdi1
devid    2 size 512.00MB used 352.00MB path /dev/sdh1
devid    6 size 512.00MB used 0.00 path /dev/sdg2
devid    3 size 512.00MB used 352.00MB path /dev/sdg1
devid    5 size 512.00MB used 0.00 path /dev/sdf2
devid    1 size 512.00MB used 0.00 path /dev/sdf1

Now I removed the disks that showed 0 usage.

[root@localhost ~]# btrfs device delete /dev/sdi1 /btrfs2
[root@localhost ~]# btrfs device delete /dev/sdg2 /btrfs2
[root@localhost ~]# btrfs device delete /dev/sdf2 /btrfs2
[root@localhost ~]# btrfs device delete /dev/sdf1 /btrfs2

[root@localhost ~]# btrfs fi show
Label: none  uuid: cf73a753-f541-4abb-a519-989d011671fd
Total devices 3 FS bytes used 104.00KB
devid    2 size 512.00MB used 352.00MB path /dev/sdh1
devid    3 size 512.00MB used 352.00MB path /dev/sdg1
*** Some devices missing

Btrfs v0.20-rc1

Finally we are back to our original size 🙂

[root@localhost ~]# df -h /btrfs2/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdh1       1.0G  208K  958M   1% /btrfs2
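
Removing devices is not the only way to shrink a btrfs file system; it can also be resized in place with the resize subcommand. As an illustration (not run in this session), the following would shrink the slice of the file system living on devid 2 by 256MB; the devid: prefix selects which device to act on when the file system spans several:

[root@localhost ~]# btrfs filesystem resize 2:-256m /btrfs2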

Case 10 : Running a scrub, just like zpool scrub in ZFS, and monitoring its status.

[root@localhost btrfs1]# btrfs scrub start /btrfs1/
scrub started on /btrfs1/, fsid cf73a753-f541-4abb-a519-989d011671fd (pid=2904)

[root@localhost btrfs1]# btrfs scrub status /btrfs1/
scrub status for cf73a753-f541-4abb-a519-989d011671fd
scrub started at Sat Nov 22 08:19:53 2014 and finished after 0 seconds
total bytes scrubbed: 64.00KB with 0 errors
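
By default the scrub runs in the background (btrfs scrub start -B would keep it in the foreground and print the statistics on completion). A long-running scrub can also be interrupted and picked up again later:

[root@localhost btrfs1]# btrfs scrub cancel /btrfs1/
[root@localhost btrfs1]# btrfs scrub resume /btrfs1/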

Case 11 : Creating subvolumes in the file system.

[root@localhost ~]# btrfs subvolume create /btrfs2/subvol1
Create subvolume '/btrfs2/subvol1'

[root@localhost subvol1]#  btrfs subvolume list -put /btrfs2/
ID      gen     parent  top level       uuid    path
--      ---     ------  ---------       ----    ----
273     225     5       5               7a6c7490-a70b-894e-b1a0-eb1adf297146    subvol1

Another way to inspect the subvolume:

[root@localhost ~]#  btrfs subvolume show /btrfs2/subvol1/
/btrfs2/subvol1
Name:                   subvol1
uuid:                   7a6c7490-a70b-894e-b1a0-eb1adf297146
Parent uuid:            -
Creation time:          2014-11-22 09:01:13
Object ID:              273
Generation (Gen):       225
Gen at creation:        224
Parent:                 5
Top Level:              5
Flags:                  -
Snapshot(s):
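
A subvolume can also be mounted directly, as if it were a file system of its own, via the subvol mount option. A minimal sketch, assuming the file system lives on /dev/sdh1 and the mount point /mnt/sub already exists:

[root@localhost ~]# mount -o subvol=subvol1 /dev/sdh1 /mnt/sub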

Subvolumes look like directories, but their biggest advantage is that we can take a snapshot of a subvolume, which is not possible with a plain directory.

[root@localhost ~]# btrfs subvolume snapshot /btrfs2/subvol1 /btrfs2/vol1snap1
Create a snapshot of '/btrfs2/subvol1' in '/btrfs2/vol1snap1'
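
Snapshots created this way are writable. For backup-style use, a read-only snapshot can be taken with the -r flag (the name vol1snap1-ro here is just an example):

[root@localhost ~]# btrfs subvolume snapshot -r /btrfs2/subvol1 /btrfs2/vol1snap1-ro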

The snapshot vol1snap1 sees the same contents as subvol1; since btrfs snapshots are copy-on-write, no data is actually duplicated at creation time. In the output below we can see that vol1snap1 is listed as a snapshot of the subvolume.

[root@localhost vol1snap1]# btrfs subvol show /btrfs2/subvol1/
/btrfs2/subvol1
Name:                   subvol1
uuid:                   7a6c7490-a70b-894e-b1a0-eb1adf297146
Parent uuid:            -
Creation time:          2014-11-22 09:01:13
Object ID:              273
Generation (Gen):       228
Gen at creation:        224
Parent:                 5
Top Level:              5
Flags:                  -
Snapshot(s):
vol1snap1

[root@localhost ~]# btrfs subvolume list -put /btrfs2/
ID      gen     parent  top level       uuid    path
--      ---     ------  ---------       ----    ----
273     228     5       5               7a6c7490-a70b-894e-b1a0-eb1adf297146    subvol1
274     228     5       5               dfb3ed6a-4601-2b4e-9d4d-8232550d3a2a    vol1snap1
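
Once a snapshot or subvolume is no longer needed, it can be removed with subvolume delete; for example, to drop the snapshot created above:

[root@localhost ~]# btrfs subvolume delete /btrfs2/vol1snap1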
