How to map objects to OSDs in Ceph?

After successfully installing my Ceph cluster, I used the rados utility to put objects into it. In this article I am going to show how to map those objects to OSDs.

Step 1: I have created one pool with the name pool1, with a placement group count (pg_num) of 16; the placement group count used for placement (pgp_num) is kept the same.

[root@mon1 ~]# ceph osd pool create pool1 16 16

By default, Ceph has three pools (data, metadata and rbd). As we have added a fourth pool manually, it shows up in the output below, where 3 is the pool number of pool1.

[root@mon1 ~]# ceph osd lspools
0 data,1 metadata,2 rbd,3 pool1,

Step 2: After creating the pool, I create a test file and put it into the pool as an object.

[root@mon1 ~]# dd if=/dev/zero of=/tmp/test bs=10M count=1

[root@mon1 ~]# rados -p pool1 put object1 /tmp/test

We can see the number of objects present in each pool using the command below.

[root@mon1 ~]# rados df
pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data            -                          0            0            0            0           0            0            0            0            0
metadata        -                          0            0            0            0           0            0            0            0            0
pool1           -                      10240            1            0            0           0            1            0            3        10240
rbd             -                          0            0            0            0           0            0            0            0            0
total used          137988            1
total avail       59670720
total space       59808708
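As a quick sanity check, the 10240 KB reported for pool1 is exactly what the dd parameters above predict. This is plain arithmetic, not a Ceph call:

```python
# dd was run with bs=10M count=1, so the object is 10 MiB.
bs = 10 * 1024 * 1024   # dd's "10M" block size, in bytes
count = 1
object_bytes = bs * count

print(object_bytes)          # 10485760 bytes
print(object_bytes // 1024)  # 10240 KB, matching the rados df column above
```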

Step 3: We can list the objects present in the pool using the command below.

[root@mon1 ~]# rados -p pool1 ls
object1

Step 4: The object is stored in the pool, but we don't yet know its location. We can use the command below to locate the object in the pool.

[root@mon1 ~]# ceph osd map pool1 object1
osdmap e15 pool 'pool1' (3) object 'object1' -> pg 3.bac5debc (3.c) -> up ([2,1,0], p2) acting ([2,1,0], p2)

Understanding the above output:

e15 -> the epoch number. It is like a version number that increments with every change to the cluster map.

pool 'pool1' (3) -> pool1 is the pool into which we put the object; its pool number (3) is shown alongside.

pg 3.bac5debc (3.c) -> the placement group information: the pool number followed by the hash of the object name. The value in brackets is the actual placement group ID we need to check.
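To illustrate how the bracketed value relates to the full PG name: Ceph hashes the object name to 0xbac5debc (the hex value after the pool number) and reduces it modulo the pool's pg_num. Below is a minimal sketch using the values from the output above; note that the real reduction uses Ceph's stable_mod, which is equivalent to a plain modulo when pg_num is a power of two, as with our 16:

```python
# Values taken from the `ceph osd map pool1 object1` output above.
pool_id = 3
name_hash = 0xbac5debc  # hash of the object name, as printed by Ceph
pg_num = 16             # placement group count chosen at pool creation

# With pg_num a power of two, stable_mod reduces to a simple modulo.
pg_id = name_hash % pg_num

print(f"{pool_id}.{pg_id:x}")  # -> 3.c, the value shown in brackets
```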

You can grep for the placement group ID in the output of the command below to get more information about it.

[root@mon1 ~]# ceph pg dump | grep -i 3.c
dumped all in format plain
3.c    1    0    0    0    10485760    3    3    active+clean    2015-04-21 11:51:11.250871    15'3    15:11    [2,1,0]    2    [2,1,0]    2    0'0    2015-04-21 11:51:10.194493    0'0    2015-04-21 11:51:10.194493

up ([2,1,0], p2) -> acting ([2,1,0], p2) -> By default, Ceph stores three copies of each object. 0, 1 and 2 represent the OSDs in the cluster that hold those copies. p2 indicates the primary OSD; in our case the primary OSD is number 2.

We can find which host is hosting a given OSD using the command below. Replace the OSD number according to your output.

The output below shows that OSD 0 is present on the server with IP 10.65.211.69.

[root@mon1 ~]# ceph osd find 0
{ "osd": 0,
"ip": "10.65.211.69:6800\/20628",
"crush_location": { "host": "osd1",
"root": "default"}}

[root@mon1 ~]# ceph osd find 1
{ "osd": 1,
"ip": "10.65.211.170:6800\/20616",
"crush_location": { "host": "osd2",
"root": "default"}}

[root@mon1 ~]# ceph osd find 2
{ "osd": 2,
"ip": "10.65.234.184:6800\/20611",
"crush_location": { "host": "osd3",
"root": "default"}}
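Since `ceph osd find` prints JSON, mapping OSD IDs to hosts is easy to script. Here is a small sketch that parses the sample output for OSD 2 shown above; the JSON literal is copied from this article rather than fetched from a live cluster:

```python
import json

# Sample output of `ceph osd find 2`, copied from above.
raw = '''{ "osd": 2,
"ip": "10.65.234.184:6800\\/20611",
"crush_location": { "host": "osd3",
"root": "default"}}'''

info = json.loads(raw)
host = info["crush_location"]["host"]
ip = info["ip"].split(":")[0]  # drop the ":port/nonce" suffix

print(f"osd.{info['osd']} is hosted on {host} ({ip})")
# -> osd.2 is hosted on osd3 (10.65.234.184)
```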

Our primary OSD, OSD 2, is present on 10.65.234.184.
