
How to build a Ceph Distributed Storage Cluster on CentOS 7

Ceph is a widely used open source storage platform. It provides high performance, reliability, and scalability. The free distributed Ceph storage system provides interfaces for object, block, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure.

In this tutorial, I will guide you through installing and building a Ceph cluster on CentOS 7. A Ceph cluster requires these Ceph components:

Ceph OSDs (ceph-osd) - Handle the data store, data replication, and recovery. A Ceph cluster needs at least two Ceph OSD servers; I will use three CentOS 7 OSD servers here.

Ceph Monitor (ceph-mon) - Monitors the cluster state, the OSD map, and the CRUSH map. I will use one server.

Ceph Meta Data Server (ceph-mds) - This is needed to use Ceph as a file system.

Prerequisites

6 server nodes, all with CentOS 7 installed.

Root privileges on all nodes.

The servers in this tutorial will use the following hostnames and IP addresses.

hostname      IP address
ceph-admin    10.0.15.10
mon1          10.0.15.11
osd1          10.0.15.21
osd2          10.0.15.22
osd3          10.0.15.23
client        10.0.15.15

All OSD nodes need two partitions: one root (/) partition and an empty partition that will be used as Ceph data storage later.
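You can verify the disk layout on each OSD node with the lsblk command; the empty disk (which will appear as /dev/sdb later in this tutorial) should be listed without a mount point:

lsblk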

Step 1 - Configure All Nodes

In this step, we will configure all 6 nodes to prepare them for the installation of the Ceph cluster. Follow and run all commands below on every node, and make sure an SSH server is installed on all nodes.
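If the SSH server is not installed yet, you can add and enable it with yum (assuming the stock openssh-server package):

yum install -y openssh-server

systemctl enable sshd.service

systemctl start sshd.service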

Create a Ceph User

Create a new user named 'cephuser' on all nodes.

useradd -d /home/cephuser -m cephuser

passwd cephuser

After creating the new user, we need to configure sudo for 'cephuser'. The user must be able to run commands as root and to get root privileges without a password.

Run the command below to create a sudoers file for the user and edit the /etc/sudoers file with sed.

echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser

chmod 0440 /etc/sudoers.d/cephuser

sed -i "s/Defaults requiretty/#Defaults requiretty/g" /etc/sudoers
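To verify that the sudo configuration works, you can switch to the new user and run a command as root; it should print 'root' without asking for a password. Type exit afterwards to return to your previous shell.

su - cephuser

sudo whoami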

Install and Configure NTP

Install NTP to synchronize date and time on all nodes. Run the ntpdate command to set the date and time via the NTP protocol; we will use the US pool NTP servers. Then enable and start the NTP daemon so it runs at boot time.

yum install -y ntp ntpdate ntp-doc

ntpdate 0.us.pool.ntp.org

hwclock --systohc

systemctl enable ntpd.service

systemctl start ntpd.service
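You can check that the NTP daemon is reaching its time servers with the ntpq command, which lists the configured peers and their status:

ntpq -p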

Install Open-vm-tools

If you are running all nodes inside VMware, you need to install this virtualization utility. Otherwise, skip this step.

yum install -y open-vm-tools
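If you are not sure whether a node runs on VMware, the systemd-detect-virt tool can tell you; it prints 'vmware' on VMware guests and 'none' on bare metal:

systemd-detect-virt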

Disable SELinux

Disable SELinux on all nodes by editing the SELinux configuration file with the sed stream editor.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
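The change in /etc/selinux/config only takes effect after a reboot. To switch SELinux to permissive mode immediately without rebooting, you can additionally run:

setenforce 0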

Configure Hosts File

Edit the /etc/hosts file on all nodes with the vim editor and add lines with the IP addresses and hostnames of all cluster nodes.

vim /etc/hosts

Paste the configuration below:

10.0.15.10 ceph-admin
10.0.15.11 mon1
10.0.15.21 osd1
10.0.15.22 osd2
10.0.15.23 osd3
10.0.15.15 client

Save the file and exit vim.

Now you can try to ping between the servers with their hostname to test the network connectivity. Example:

ping -c 5 mon1



Step 2 - Configure the SSH Server

In this step, I will configure the ceph-admin node. The admin node is used for configuring the monitor node and the OSD nodes. Log in to the ceph-admin node and become the 'cephuser' user.

ssh root@ceph-admin

su - cephuser

The admin node is used for installing and configuring all cluster nodes, so the user on the ceph-admin node must have privileges to connect to all nodes without a password. We have to configure password-less SSH access for 'cephuser' on 'ceph-admin' node.

Generate the SSH keys for 'cephuser'.

ssh-keygen

Leave the passphrase blank/empty.

Next, create the SSH configuration file.

vim ~/.ssh/config

Paste configuration below:

Host ceph-admin
    Hostname ceph-admin
    User cephuser

Host mon1
    Hostname mon1
    User cephuser

Host osd1
    Hostname osd1
    User cephuser

Host osd2
    Hostname osd2
    User cephuser

Host osd3
    Hostname osd3
    User cephuser

Host client
    Hostname client
    User cephuser

Save the file.



Change the permission of the config file.

chmod 644 ~/.ssh/config

Now add the SSH key to all nodes with the ssh-copy-id command.

ssh-keyscan osd1 osd2 osd3 mon1 client >> ~/.ssh/known_hosts

ssh-copy-id osd1

ssh-copy-id osd2

ssh-copy-id osd3

ssh-copy-id mon1

ssh-copy-id client

Type in your 'cephuser' password when requested.



When you are finished, try to access the osd1 server from the ceph-admin node.

ssh osd1

Step 3 - Configure Firewalld

We will use Firewalld to protect the system. In this step, we will enable firewalld on all nodes, then open the ports needed by ceph-admin, ceph-mon, and ceph-osd.

Log in to the ceph-admin node and start firewalld.

ssh root@ceph-admin

systemctl start firewalld

systemctl enable firewalld

Open ports 80, 2003, and 4505-4506, and then reload the firewall.

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent

sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent

sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent

sudo firewall-cmd --reload

From the ceph-admin node, log in to the monitor node 'mon1' and start firewalld.

ssh mon1

sudo systemctl start firewalld

sudo systemctl enable firewalld

Open the Ceph monitor port (6789) on the Ceph monitor node and reload the firewall.

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent

sudo firewall-cmd --reload

Finally, open ports 6800-7300 on each of the OSD nodes - osd1, osd2, and osd3.

Log in to each OSD node from the ceph-admin node.

ssh osd1

sudo systemctl start firewalld

sudo systemctl enable firewalld

Open the ports and reload the firewall.

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

sudo firewall-cmd --reload

Firewalld configuration is done.
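To verify the configuration, you can list the ports that are now open on any of the nodes, for example on an OSD node:

sudo firewall-cmd --zone=public --list-ports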

Step 4 - Configure the Ceph OSD Nodes

In this tutorial, we have 3 OSD nodes, and each node has two partitions.

/dev/sda for the root partition. /dev/sdb is an empty partition - 30GB in my case.

We will use /dev/sdb for the Ceph disk. From the ceph-admin node, log in to all OSD nodes and format the /dev/sdb partition with XFS.

ssh osd1

ssh osd2

ssh osd3

Check the partition with the fdisk command.

sudo fdisk -l /dev/sdb

Format the /dev/sdb partition with an XFS filesystem.
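A minimal sketch of this formatting step, assuming a GPT partition table with a single XFS partition spanning the whole disk (adjust the device name if yours differs); run it on every OSD node:

sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%

sudo mkfs.xfs -f /dev/sdb1

You can then re-run fdisk -l /dev/sdb to confirm that the new XFS partition exists.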
