How to use sed and awk one-liners on Ceph?

In this article I am going to show you how to use basic awk and sed one-liners on Ceph to get some useful outputs. I put these together while practising in my test lab.

–> A Ceph cluster holds a large number of objects, and checking the mapping of each and every one of them by hand is a tedious task. I used the one-liner below to walk through every pool and print the OSD mapping for every image inside it.

[root@ceph-m2 ~]# for i in `rados lspools`; do for j in `rbd -p $i ls`; do ceph osd map $i $j; done; done
osdmap e81 pool 'rbd' (2) object 'foo_cp' -> pg 2.85a7b54d (2.d) -> up ([2,5,4], p2) acting ([2,5,4], p2)
osdmap e81 pool 'rbd' (2) object 'image1' -> pg 2.571c6242 (2.2) -> up ([4,5,2], p4) acting ([4,5,2], p4)
osdmap e81 pool 'pool1' (4) object 'foo' -> pg 4.7fc1f406 (4.6) -> up ([4,5,2], p4) acting ([4,5,2], p4)
osdmap e81 pool 'pool1' (4) object 'foo1' -> pg 4.be9754b3 (4.13) -> up ([5,2,4], p5) acting ([5,2,4], p5)
osdmap e81 pool 'pool1' (4) object 'foo_cp' -> pg 4.85a7b54d (4.d) -> up ([2,4,5], p2) acting ([2,4,5], p2)
osdmap e81 pool 'pool2' (5) object 'foo' -> pg 5.7fc1f406 (5.6) -> up ([5,2,4], p5) acting ([5,2,4], p5)

The above output is truncated for the sake of brevity.
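
Note that rbd -p $i ls only lists RBD image names, and ceph osd map simply computes where an object with that name would be placed. If you want to map the actual RADOS objects in a pool, a similar loop can be built around rados ls. This is just a sketch along the same lines as the one-liner above, and on a busy pool it will print a lot of lines:

[root@ceph-m2 ~]# for i in `rados lspools`; do for j in `rados -p $i ls`; do ceph osd map $i $j; done; done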

–> Next, we want to count the number of objects present on each OSD in the cluster. I used the one-liner below to do that, after capturing the output of the previous one-liner in the file text2.txt.

[root@ceph-m2 ~]# awk -F "[][]" '{print $2}' text2.txt | sed 's/,/\n/g' | awk '{a[$1]++} END { for (i in a) print "Number of objects on OSD." i " is " a[i] }'
Number of objects on OSD.4 is 10
Number of objects on OSD.5 is 10
Number of objects on OSD.2 is 10
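
The same counts can be produced without the intermediate sed step, by letting awk split the bracketed up set itself. This is just an equivalent sketch of the pipeline above:

[root@ceph-m2 ~]# awk -F "[][]" '{n=split($2,a,","); for (k=1; k<=n; k++) c[a[k]]++} END { for (i in c) print "Number of objects on OSD." i " is " c[i] }' text2.txt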

–> I have set up passwordless root SSH from one MON node to all of the servers that contribute OSDs to the cluster. With that in place, the command below shows the underlying disk of every OSD. In the output below, ceph-4 represents OSD.4.

[root@ceph-m2 ~]# for i in `ceph osd tree | grep -w host | awk '{print $4}'`
> do
> ssh $i df -h | grep -i osd
> done
/dev/sdc1               19G  124M   19G   1% /var/lib/ceph/osd/ceph-4
/dev/sdb1              2.0G   35M  2.0G   2% /var/lib/ceph/osd/ceph-0
/dev/sdb1              2.0G   34M  2.0G   2% /var/lib/ceph/osd/ceph-1
/dev/sdc1               19G  124M   19G   1% /var/lib/ceph/osd/ceph-5
/dev/sdc1               19G  188M   19G   1% /var/lib/ceph/osd/ceph-2
/dev/sdb1              2.0G   34M  2.0G   2% /var/lib/ceph/osd/ceph-3
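
The df output by itself does not tell you which host each line came from. A small variation of the same loop can prefix every line with the host name; it is only a sketch and assumes the same passwordless SSH setup described above:

[root@ceph-m2 ~]# for i in `ceph osd tree | grep -w host | awk '{print $4}'`; do ssh $i df -h | grep -i osd | sed "s/^/$i: /"; done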

–> If you want to see the number of PGs (placement groups) present in each pool, use the one-liner below.

[root@ceph-m2 ~]# ceph pg dump | grep 'active+clean' | awk '{print $1}' | awk -F. '{count[$1]++} END { for (i in count) print "Pool Number " i, "has ", count[i], " PGs" }'
dumped all in format plain
Pool Number 4 has  32  PGs
Pool Number 5 has  16  PGs
Pool Number 6 has  8  PGs
Pool Number 0 has  64  PGs
Pool Number 1 has  64  PGs
Pool Number 2 has  64  PGs
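
The configured PG count of a pool can also be read straight from the pool settings with ceph osd pool get, instead of being derived from ceph pg dump. The loop below is a sketch; keep in mind that it reports the configured pg_num, which only matches the counts above when all PGs are active+clean:

[root@ceph-m2 ~]# for p in `rados lspools`; do echo -n "$p "; ceph osd pool get $p pg_num; done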

In the above output we get only the pool number, not the pool name. We can get the pool names from the command below.

[root@ceph-m2 ~]# ceph osd lspools | sed 's/,/\n/g'
0 data
1 metadata
2 rbd
4 pool1
5 pool2
6 images
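
If you want the pool name to show up directly in the PG counts, the two commands can be combined. The snippet below is a rough sketch: pools.txt is just a temporary file holding the number-to-name mapping, and the while loop labels each count with the matching name:

[root@ceph-m2 ~]# ceph osd lspools | sed 's/,/\n/g' > pools.txt
[root@ceph-m2 ~]# ceph pg dump | grep 'active+clean' | awk '{print $1}' | awk -F. '{count[$1]++} END { for (i in count) print i, count[i] }' | while read num cnt; do name=`awk -v p=$num '$1 == p {print $2}' pools.txt`; echo "Pool Number $num ($name) has $cnt PGs"; done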

I am still working on improving the hacks above, and on some new ones as well 🙂
