
What's the use of temporary filesystems in RHEL 7?

After installing RHEL 7 we can see several temporary filesystems in the output of df. In this article I will explain the purpose of these filesystems.
Okay, here is the output of df from RHEL 7.1, which is the same for RHEL 7 as well. The filesystems in question are /dev, /dev/shm, /run and /sys/fs/cgroup.

[root@Node71 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   11G  3.3G  7.1G  32% /
devtmpfs               908M     0  908M   0% /dev
tmpfs                  917M   92K  917M   1% /dev/shm
tmpfs                  917M  8.9M  908M   1% /run
tmpfs                  917M     0  917M   0% /sys/fs/cgroup
/dev/sda1              497M  124M  373M  25% /boot

If you look closely, three of them are the same size, while devtmpfs is a slightly different size.

The RAM assigned to my test VM is 2 GB. As all of these are temporary filesystems, they take their space from RAM. If we add them all up, the total comes to 3659 MB (approximately 3.5 GB).

Now the question arises: how is this possible?

The answer is that these are virtual filesystems, and the sizes shown are only upper limits rather than reserved memory. That is why they can advertise about 3.5 GB in total even though the system has only 2 GB of RAM.
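To see that the reported size is only an upper limit rather than a reservation, you can mount a small tmpfs yourself with an explicit size cap. A minimal sketch (the /mnt/scratch mount point and the 512m cap are just example values):

mkdir -p /mnt/scratch
mount -t tmpfs -o size=512m tmpfs /mnt/scratch
df -h /mnt/scratch     # shows a 512M filesystem, but no RAM is used until files are written into it
umount /mnt/scratch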

Another question may arise: what will happen if all of these filesystems fill up?

In practice this condition can't really occur, because the system would run into problems much before that. Let's see how much RAM they are currently using, in blocks.

Here is how much memory is used by each filesystem (in 1k blocks):

Filesystem            1K-blocks    Used Available Use% Mounted on
devtmpfs                 928872       0    928872   0% /dev
tmpfs                    938832      92    938740   1% /dev/shm
tmpfs                    938832    9096    929736   1% /run
tmpfs                    938832       0    938832   0% /sys/fs/cgroup

Total Used is :        9182

So, although all of those filesystems together could grow to about 3.5 GB, they are currently using only about 9 MB.
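If you want to recompute that total yourself, the Used column can be summed with a quick one-liner (a small sketch; the numbers will of course differ from system to system):

df -k -t devtmpfs -t tmpfs | awk 'NR>1 {sum+=$3} END {print sum " KB used in total"}'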

/dev is used to store device nodes.  It should never claim to use memory.
/sys/fs/cgroup is used for cgroups.  It will use very little memory.
/run is used for system logs and other daemon-related files.  It is unlikely to grow to anywhere near the maximum size of the filesystem.
/dev/shm is used for temporary files.  It is intended to be used for shared memory allocations.

Of all of these file systems, /dev/shm is the only one that normal users can write to, so it is the only one that may end up using a significant amount of memory.
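You can watch this happen by writing a file into /dev/shm and checking the usage before and after (an illustrative example; the file name testfile is arbitrary, and the file keeps occupying RAM until it is removed):

dd if=/dev/zero of=/dev/shm/testfile bs=1M count=100    # write a 100 MB file into /dev/shm
df -h /dev/shm                                          # Used grows by roughly 100 MB
free -m                                                 # the same amount shows up as used memory
rm /dev/shm/testfile                                    # removing the file releases the RAM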


Why is an ext3 file system not getting mounted in RHEL 5?

Today I ran into an issue where, after a server reboot, one of the ext3 file systems was not getting mounted on RHEL 5.10. It was not the root file system.

Below is the error I was getting while mounting that file system.

[root@Node2 ~]# mount -t ext3 /dev/vg1/lv1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-lv1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

When I skipped the file system type option ("-t ext3"), I was able to mount the file system. Before doing that I had tried fsck as well, but that didn't work.

[root@Node2 ~]# mount /dev/vg1/lv1 /mnt

I issued the mount command to see the type of the file system and found that it was mounted as ext2.

[root@Node2 ~]# mount | grep -w '/mnt'
/dev/mapper/vg1-lv1 on /mnt type ext2 (rw)
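Besides the mount output, the on-disk type can be confirmed independently. For example (a small sketch using the same device as above), blkid reports the detected type and dumpe2fs shows whether the has_journal feature is present, which is what distinguishes ext3 from ext2:

blkid /dev/vg1/lv1                               # TYPE="ext2" before the conversion, TYPE="ext3" after
dumpe2fs -h /dev/vg1/lv1 | grep -i features      # has_journal appears only on ext3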

I used the tune2fs utility to convert it into an ext3 file system. Note: it is strongly recommended to keep a backup of the file system before doing this, to avoid any unforeseen issues.

[root@Node2 ~]# tune2fs -j /dev/vg1/lv1
tune2fs 1.41.12 (17-May-2010)
Creating journal inode: done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

After that I was able to mount the file system as ext3 🙂 Luckily in this case my data was intact 🙂

[root@Node2 ~]# mount -t ext3 /dev/vg1/lv1 /mnt

[root@Node2 ~]# mount | grep /mnt
/dev/mapper/vg1-lv1 on /mnt type ext3 (rw)
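Since the file system originally failed to mount during boot, it is also worth making sure that the /etc/fstab entry matches the file system type. A minimal illustrative entry, assuming the /mnt mount point used above (adjust the mount point and options to your environment):

/dev/vg1/lv1    /mnt    ext3    defaults    1 2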

How to Fix an Issue with the File System Superblock in Red Hat Linux

Sometimes the file system superblock gets corrupted, and in the worst case we are not even able to run a file system check. By default the file system check is performed on the primary superblock located at block 0.
But there is nothing to worry about, because the superblock has multiple backup copies saved at different locations in the file system. We can use those copies to run e2fsck on the file system or to mount the file system.

Now the following questions arise:

  • How would we come to know about the location of copies of superblock?
  • How can we run the file system check using another copy of superblock?
  • How can we mount the file system using another copy of superblock?

In this article I am going to address all these questions.

I have one file system of 19 GB on which I am going to perform all of the actions.

Question 1 : How would we come to know about the location of copies of superblock?

Answer 1 : The dumpe2fs and mke2fs commands can be used to find out where the superblock copies live.

dumpe2fs : This command can also be used on a mounted file system, i.e. a file system that is working absolutely fine.

Running this command prints each and every parameter associated with the file system; from that plethora of information we only need to capture the superblock locations.

[root@node1 ~]# dumpe2fs /dev/mapper/VolGroup01-VolLV01 | grep -i superblock
dumpe2fs 1.41.12 (17-May-2010)
Primary superblock at 0, Group descriptors at 1-2
Backup superblock at 32768, Group descriptors at 32769-32770
Backup superblock at 98304, Group descriptors at 98305-98306
Backup superblock at 163840, Group descriptors at 163841-163842
Backup superblock at 229376, Group descriptors at 229377-229378
Backup superblock at 294912, Group descriptors at 294913-294914
Backup superblock at 819200, Group descriptors at 819201-819202
Backup superblock at 884736, Group descriptors at 884737-884738
Backup superblock at 1605632, Group descriptors at 1605633-1605634
Backup superblock at 2654208, Group descriptors at 2654209-2654210
Backup superblock at 4096000, Group descriptors at 4096001-4096002

mke2fs : This command should be run on an unmounted file system. NOTE : Do not run this command without the -n option; without -n it will actually create a new file system and destroy the existing data, whereas with -n it only prints what it would do.

[root@node1 ~]# mke2fs -n /dev/mapper/VolGroup01-VolLV01
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1245184 inodes, 4980736 blocks
249036 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
152 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

So the first and most important question is answered.

Question 2 : How can we run the file system check using another copy of superblock?

Answer 2 : Since we were not able to run the file system check using the primary superblock at location 0, we proceed with a file system check using the backup superblock at 32768 (the value taken from the dumpe2fs output above).

We are able to run the file system check on the backup copy of the superblock.

[root@node1 ~]# e2fsck -b 32768 /dev/mapper/VolGroup01-VolLV01
e2fsck 1.41.12 (17-May-2010)
One or more block group descriptor checksums are invalid. Fix<y>? yes

Group descriptor 0 checksum is invalid. FIXED.
Group descriptor 1 checksum is invalid. FIXED.
Group descriptor 2 checksum is invalid. FIXED.
Group descriptor 3 checksum is invalid. FIXED.
Group descriptor 4 checksum is invalid. FIXED.

Question 3 : How can we mount the file system using another copy of superblock?

Answer 3 : As we are not able to mount the file system using the default superblock at location 0, we will try to mount it using a backup copy of the superblock.

[root@node1 ~]# mount -o sb=98304 /dev/mapper/VolGroup01-VolLV01 /var/crash1
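When e2fsck is run against a backup superblock on a writable file system, it also restores the primary superblock at the end of the check. So after unmounting again, a normal check and a plain mount should both work without any sb= option (a quick sketch to confirm):

umount /var/crash1
e2fsck -f /dev/mapper/VolGroup01-VolLV01    # forced check using the default primary superblock
mount /dev/mapper/VolGroup01-VolLV01 /var/crash1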

Tip : A useful command to determine when the file system was created and when it was last mounted:

[root@node1 ~]# dumpe2fs -h /dev/mapper/vg_node1-lv_root | grep -E "Filesystem created|Last mount time"
dumpe2fs 1.41.12 (17-May-2010)
Filesystem created: Fri May 9 15:05:30 2014
Last mount time: Thu Jun 26 22:37:06 2014


How to Convert a Normal LV to a Thin Pool

In this post, I am going to explain the steps to convert a normal LV to a thin pool.

Step 1 : Here I have created a new logical volume named testlv4-nt of size 100M in the volume group testvg.

[root@localhost ~]# lvcreate -n testlv4-nt -L 100M testvg
Logical volume "testlv4-nt" created

Step 2 : I created an XFS file system on the newly created volume.

[root@localhost ~]# mkfs.xfs /dev/testvg/testlv4-nt
meta-data=/dev/testvg/testlv4-nt isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=4265, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Step 3 : I created a volume for the metadata, which is required for the conversion of a logical volume to a thin pool, and created a file system on that volume as well.

[root@localhost ~]# lvcreate -n testlv4md-nt -L 50M testvg
Rounding up size to full physical extent 52.00 MiB
Logical volume "testlv4md-nt" created

[root@localhost ~]# mkfs.xfs /dev/testvg/testlv4md-nt
meta-data=/dev/testvg/testlv4md-nt isize=256    agcount=2, agsize=6656 blks
         =                         sectsz=512   attr=2, projid32bit=1
         =                         crc=0
data     =                         bsize=4096   blocks=13312, imaxpct=25
         =                         sunit=0      swidth=0 blks
naming   =version 2                bsize=4096   ascii-ci=0
log      =internal log             bsize=4096   blocks=4265, version=2
         =                         sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                     extsz=4096   blocks=0, rtextents=0

The formula for calculating the metadata size is Pool_LV_size / Pool_LV_chunk_size * 64. Here I have just taken an arbitrary value for the creation.
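As a worked example of that formula (the values are purely illustrative and not taken from this test setup): a 100 GiB pool with the default 64 KiB chunk size needs roughly (100 GiB / 64 KiB) * 64 bytes = 100 MiB of metadata, which can be checked with shell arithmetic:

echo $(( (100 * 1024 * 1024 * 1024) / (64 * 1024) * 64 ))    # 104857600 bytes = 100 MiB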

Currently there is no flexibility to change the metadata size at a later point in time.

Step 4 : Convert the volume to a thin pool. The name of the volume will not change, but it is now a thin pool; you can check the attributes in the output below to confirm this.

[root@localhost ~]# lvconvert --thinpool testvg/testlv4-nt --poolmetadata testvg/testlv4md-nt
Converted testvg/testlv4-nt to thin pool.

[root@localhost ~]# lvs | egrep “LV|testlv4-nt”
  LV         VG     Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  testlv4-nt testvg twi-a-tz-- 100.00m             0.00

Step 5 : Now we can use the thin pool to create thin volumes on top of it.

[root@localhost ~]# lvcreate -V200M -T testvg/testlv4-nt -n testlv4-tp
Logical volume "testlv4-tp" created
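To confirm that the new volume is actually thin-provisioned out of the pool, lvs can be asked to print the pool column as well (output omitted here; the Pool field of testlv4-tp should show testlv4-nt):

lvs -o lv_name,lv_size,lv_attr,pool_lv testvg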