How to deploy a Red Hat Ceph Storage cluster?

Red Hat has launched the official version of Ceph as part of Red Hat Ceph Storage 1.2.3. Ceph is a massively scalable distributed storage system. In this article I am going to show the installation of the official Red Hat Ceph Storage 1.2.3.

My Lab Setup :

I have installed six VMs in a VMware environment, all running RHEL 7.1.

Admin Node:

192.168.111.140 ceph-admin

Monitor Nodes :

192.168.111.141 ceph-m1
192.168.111.142 ceph-m2
192.168.111.143 ceph-m3   << Will also host OSD.

OSD Nodes :

192.168.111.144 ceph-osd1
192.168.111.145 ceph-osd2

Step 1 : I added the hostname and IP address entries to the /etc/hosts file on all six nodes.

[root@ceph-admin ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.140 ceph-admin
192.168.111.141 ceph-m1
192.168.111.142 ceph-m2
192.168.111.143 ceph-m3
192.168.111.144 ceph-osd1
192.168.111.145 ceph-osd2
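
Typing the same six entries on every node is error-prone. A small helper like the one below (the function name is mine; the addresses and hostnames are the ones used in this lab) keeps the block in one place so it can be appended on each node:

```shell
# Hypothetical helper: emit the host entries for this lab in one place.
hosts_block() {
  cat <<'EOF'
192.168.111.140 ceph-admin
192.168.111.141 ceph-m1
192.168.111.142 ceph-m2
192.168.111.143 ceph-m3
192.168.111.144 ceph-osd1
192.168.111.145 ceph-osd2
EOF
}

# On each node (needs root):
# hosts_block | sudo tee -a /etc/hosts
```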

Step 2 : I stopped and disabled the firewall (firewalld) on all nodes for a smooth installation.

[root@ceph-admin ~]# systemctl stop firewalld
[root@ceph-admin ~]# systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

Step 3 : The official Red Hat documentation recommends disabling SELinux as well, so I did the same.

[root@ceph-admin ~]# setenforce 0
[root@ceph-admin ~]# cat /etc/sysconfig/selinux | egrep -v "^#|^$"
SELINUX=disabled
SELINUXTYPE=targeted
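
Note that setenforce 0 only lasts until the next reboot; the config file has to change too. A minimal sketch of scripting that edit (the function name is mine; point it at /etc/selinux/config as root):

```shell
# Hypothetical helper: switch SELINUX=... to disabled in a given config file,
# keeping a .bak copy. setenforce 0 handles the running system; this edit
# handles reboots.
disable_selinux_in() {
  sed -i.bak 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# setenforce 0                              # runtime, as root
# disable_selinux_in /etc/selinux/config    # persistent, as root
```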

Change the entry below in the /etc/sudoers file on all nodes so that the ceph user can run sudo without a TTY.

From :

Defaults    requiretty

To :
Defaults:ceph    !requiretty
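
The same edit can be scripted per node. This sketch (the function name is mine) makes the substitution and confirms it landed; always validate with visudo -c afterwards:

```shell
# Hypothetical helper: replace "Defaults requiretty" with a ceph-only
# exception in a sudoers-style file, then confirm the new line is present.
relax_requiretty() {
  sed -i 's/^Defaults[[:space:]]\{1,\}requiretty/Defaults:ceph    !requiretty/' "$1" &&
    grep -q '^Defaults:ceph' "$1"
}

# As root, on each node:
# relax_requiretty /etc/sudoers && visudo -c
```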

Step 4 : I registered all VMs with the Red Hat channels to get the packages. You need to use your Red Hat credentials to do the same.

[root@ceph-admin ~]# subscription-manager register
[root@ceph-admin ~]# subscription-manager attach --auto
[root@ceph-admin ~]# subscription-manager repos --disable="*"
[root@ceph-admin ~]# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhceph-1.2-calamari-rpms --enable=rhel-7-server-rhceph-1.2-installer-rpms --enable=rhel-7-server-rhceph-1.2-mon-rpms --enable=rhel-7-server-rhceph-1.2-osd-rpms

Step 5 : Added a ceph user and gave it sudo permission on all nodes.

[root@ceph-admin ~]# useradd ceph
[root@ceph-admin ~]# echo root123 | passwd --stdin ceph
[root@ceph-admin ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[root@ceph-admin ~]# chmod 0440 /etc/sudoers.d/ceph

Step 6 : Downloaded the official Red Hat Ceph Storage ISO from the Red Hat website and mounted it on the admin node. Note : From now on I am working as the ceph user.

[ceph@ceph-admin ~]$ sudo mount /dev/cdrom /mnt
mount: /dev/sr0 is write-protected, mounting read-only

Step 7 : Copied the necessary product certificates from the mounted ISO.

[ceph@ceph-admin ~]$ sudo cp /mnt/RHCeph-Calamari-1.2-x86_64-c1e8ca3b6c57-285.pem /etc/pki/product/285.pem
[ceph@ceph-admin ~]$ sudo cp /mnt/RHCeph-Installer-1.2-x86_64-8ad6befe003d-281.pem /etc/pki/product/281.pem
[ceph@ceph-admin ~]$ sudo cp /mnt/RHCeph-MON-1.2-x86_64-d8afd76a547b-286.pem /etc/pki/product/286.pem
[ceph@ceph-admin ~]$ sudo cp /mnt/RHCeph-OSD-1.2-x86_64-25019bf09fe9-288.pem /etc/pki/product/288.pem

Step 8 : Generate an SSH key and copy the public key to all other nodes for passwordless connections.

[ceph@ceph-admin ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
======Output truncated=======

Copy the key to the other nodes and verify your passwordless connection.

[ceph@ceph-admin ~]$ ssh-copy-id ceph@ceph-m1
The authenticity of host 'ceph-m1 (192.168.111.141)' can't be established.
ECDSA key fingerprint is 42:8a:5e:7d:37:e4:45:a6:f8:40:f6:1f:60:6a:26:6c.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@ceph-m1's password:
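
Repeating ssh-copy-id for each node is mechanical, and pinning the login user in ~/.ssh/config saves passing a username to ceph-deploy later. A sketch (the helper name is mine; hostnames are from this lab):

```shell
# Hypothetical helper: emit an ssh_config stanza that makes "ceph" the
# default login user for every cluster node.
make_ssh_config() {
  cat <<'EOF'
Host ceph-m1 ceph-m2 ceph-m3 ceph-osd1 ceph-osd2
    User ceph
EOF
}

# Run interactively (each node prompts for the ceph password once):
# for n in ceph-m2 ceph-m3 ceph-osd1 ceph-osd2; do ssh-copy-id ceph@"$n"; done
# make_ssh_config >> ~/.ssh/config && chmod 600 ~/.ssh/config
```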

Step 9 : Install the ice_setup rpm from the mounted ISO.

[ceph@ceph-admin ~]$ sudo yum install /mnt/ice_setup-*.rpm
Loaded plugins: product-id, subscription-manager

Step 10 : Create a directory and issue all cluster commands from it, so that all the required configuration files stay in one place.

[ceph@ceph-admin ~]$ mkdir ceph-config
[ceph@ceph-admin ~]$ cd ceph-config/
[ceph@ceph-admin ceph-config]$ sudo ice_setup -d /mnt
-->
--> ==== interactive mode ====
-->
--> follow the prompts to complete the interactive mode
--> if specific actions are required (e.g. just install Calamari)
--> cancel this script with Ctrl-C, and see the help menu for details
--> default values are presented in brackets
--> press Enter to accept a default value, if one is provided
--> do you want to continue?
--> this script will setup Calamari, package repo, and ceph-deploy
--> with the following steps:
--> 1. Configure the ICE Node (current host) as a repository Host
--> 2. Install Calamari web application on the ICE Node (current host)
--> 3. Install ceph-deploy on the ICE Node (current host)
--> 4. Configure host as a ceph and calamari minion repository for remote hosts
--> provide the path to packages to place in the repo [/mnt]
-->
--> ==== Step 1: Calamari & ceph-deploy repo setup ====

Step 11 : After performing the above step we can see the cephdeploy.conf file in the directory. It is a repo file for the package installation.

[ceph@ceph-admin ceph-config]$ ll
total 4
-rw-r--r-- 1 root root 635 Apr 17 11:07 cephdeploy.conf

[ceph@ceph-admin ceph-config]$ cat cephdeploy.conf | egrep -v "^#|^$"
[ceph-deploy-calamari]
master = ceph-admin
[calamari-minion]
name=Calamari
baseurl=http://ceph-admin/static/calamari-minions
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck=1
enabled=1
priority=1
proxy=_none_
[ceph]
name=Ceph
baseurl=http://ceph-admin/static/ceph/0.80.8
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck=1
default=true
priority=1
proxy=_none_

Step 12 : Start the cluster configuration by declaring the monitor nodes.

[ceph@ceph-admin ceph-config]$ ceph-deploy new ceph-m1 ceph-m2 ceph-m3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/ceph-config/cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22-rc1): /bin/ceph-deploy new ceph-m1 ceph-m2 ceph-m3
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-m1][DEBUG ] connected to host: ceph-admin
[ceph-m1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-m1

It will create a couple more files inside the directory.

[ceph@ceph-admin ceph-config]$ ll
total 16
-rw-rw-r-- 1 ceph ceph  282 Apr 17 11:09 ceph.conf
-rw-r--r-- 1 root root  635 Apr 17 11:07 cephdeploy.conf
-rw-rw-r-- 1 ceph ceph 2810 Apr 17 11:09 ceph.log
-rw-rw-r-- 1 ceph ceph   73 Apr 17 11:09 ceph.mon.keyring

ceph.conf is the main configuration file. I made one change to it: I reduced the journal size to 1 GB (by default it is 5 GB).

[ceph@ceph-admin ceph-config]$ cat ceph.conf
[global]
fsid = bf2e4e65-265b-46ad-87af-847f00a5533f
mon_initial_members = ceph-m1, ceph-m2, ceph-m3
mon_host = 192.168.111.141,192.168.111.142,192.168.111.143
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

[osd]
osd_journal_size = 1024

Step 13 : Start the deployment on all nodes.

[ceph@ceph-admin ceph-config]$ ceph-deploy install ceph-m1 ceph-m2 ceph-m3 ceph-osd1 ceph-osd2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/ceph-config/cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22-rc1): /bin/ceph-deploy install ceph-m1 ceph-m2 ceph-m3 ceph-osd1 ceph-osd2
[ceph_deploy.install][DEBUG ] Installing stable version giant on cluster ceph hosts ceph-m1 ceph-m2 ceph-m3 ceph-osd1 ceph-osd2

[ceph@ceph-admin ceph-config]$ ceph-deploy mon create-initial

[ceph@ceph-admin ceph-config]$ ll
total 280
-rw-rw-r-- 1 ceph ceph     71 Apr 17 12:04 ceph.bootstrap-mds.keyring
-rw-rw-r-- 1 ceph ceph     71 Apr 17 12:04 ceph.bootstrap-osd.keyring
-rw-rw-r-- 1 ceph ceph     63 Apr 17 12:04 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph ceph    368 Apr 17 11:13 ceph.conf
-rw-r--r-- 1 root root    635 Apr 17 11:07 cephdeploy.conf
-rw-rw-r-- 1 ceph ceph 199580 Apr 17 12:11 ceph.log
-rw-rw-r-- 1 ceph ceph     73 Apr 17 11:09 ceph.mon.keyring
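
With monitors up, the usual next step in this ceph-deploy generation is to zap and create the OSD disks on the OSD hosts. The dry run below only prints the commands, because the device name (sdb) is an assumption on my part; verify the real device with lsblk on each node before zapping anything.

```shell
# Dry run: print the OSD commands that would typically follow. "sdb" is an
# assumed device name, not taken from the article.
for host in ceph-osd1 ceph-osd2 ceph-m3; do
  echo "ceph-deploy disk zap ${host}:sdb"
  echo "ceph-deploy osd create ${host}:sdb"
done
```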

Step 14 : Finally, connect the monitor nodes to Calamari and initialize Calamari. Once initialization completes, the Calamari dashboard is served from the admin node (http://ceph-admin/ in this lab) using the account created during initialization.

[ceph@ceph-admin ceph-config]$ ceph-deploy calamari connect ceph-m1 ceph-m2 ceph-m3

[ceph@ceph-admin ceph-config]$ sudo calamari-ctl initialize
[INFO] Loading configuration...
[INFO] Starting/enabling salt...
[INFO] Starting/enabling postgres...
[INFO] Initializing database...
[INFO] Initializing web interface...
[INFO] You will now be prompted for login details for the administrative user account. This is the account you will use to log into the web interface once setup is complete.
Username (leave blank to use 'root'):
Email address:
Password:
Password (again):
Superuser created successfully.
[INFO] Starting/enabling services...
[INFO] Restarting services...
[INFO] Complete.
