How To Set Up a Ceph Cluster within Kubernetes Using Rook

October 3, 2022

Kubernetes workloads are stateless by default: containers have no built-in persistent storage, so any data a container writes is lost when its pod restarts.

Rook is a storage orchestrator for Kubernetes: it automates the deployment and management of storage systems inside the cluster. Ceph is a distributed storage platform that provides block, object, and file storage. Ceph clusters can run on almost any hardware thanks to the CRUSH (Controlled Replication Under Scalable Hashing) algorithm at their core, which determines how data is distributed and replicated.

Running a Ceph cluster in Kubernetes with Rook gives you scalable, self-managing storage without manual setup, administered with nothing more than a command-line tool.

In this article, you will create a Ceph cluster with Rook as the storage orchestrator and use it to store data for a MongoDB database.

Requirements

  • A Kubernetes cluster on DigitalOcean with 3 nodes, each with at least 2 vCPUs and 4 GB of memory. The Kubernetes Quickstart guide explains how to create such a cluster.
  • The kubectl command-line tool, configured to connect to your cluster.
  • A DigitalOcean block storage volume of at least 100 GB for each node in the cluster. Each node needs its own separate volume, which you can attach manually to the Droplets in the node pool.

Steps to Set Up a Ceph Cluster within Kubernetes Using Rook

Step 1: Setting up Rook

Once you have a Kubernetes cluster with 3 nodes, you can set up Rook.

In this step you will install the Rook operator on the Kubernetes cluster and check that it deployed properly. The operator attaches itself to the cluster and manages the storage daemons needed for a healthy, active cluster.

Ceph requires the lvm2 package on every node, so start by installing it everywhere with a DaemonSet.

Create YAML file:

nano lvm.yaml

A DaemonSet runs a copy of a pod on every node in the cluster. In this example, the DaemonSet runs a Debian container on each node, installs lvm2 with apt, and mounts the host's directories through volumeMounts so the installation lands on the node itself:

lvm.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lvm
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: lvm
  template:
    metadata:
      labels:
        name: lvm
    spec:
      containers:
        - args:
            - apt -y update; apt -y install lvm2
          command:
            - /bin/sh
            - -c
          image: debian:10
          imagePullPolicy: IfNotPresent
          name: lvm
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /etc
              name: etc
            - mountPath: /sbin
              name: sbin
            - mountPath: /usr
              name: usr
            - mountPath: /lib
              name: lib
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
        - hostPath:
            path: /etc
            type: Directory
          name: etc
        - hostPath:
            path: /sbin
            type: Directory
          name: sbin
        - hostPath:
            path: /usr
            type: Directory
          name: usr
        - hostPath:
            path: /lib
            type: Directory
          name: lib

Once the file is saved, apply it with the following command:

kubectl apply -f lvm.yaml

Next, clone the Rook repository:

git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git

This command checks out the release-1.3 branch of the Rook repository from GitHub.

Change into the examples directory:

cd rook/cluster/examples/kubernetes/ceph

Create the common resources needed by Rook:

kubectl create -f common.yaml

Note: This file creates the namespace and the common resources (such as roles and service accounts) shared by the Rook operator and the Ceph daemons.

Now, create the Rook operator.

Before the setup, edit the operator.yaml file to enable the CSI_RBD_GRPC_METRICS_PORT variable. Open the file:

nano operator.yaml

Find the CSI_RBD_GRPC_METRICS_PORT variable, uncomment it by removing the leading #, and change the port if you need to:

operator.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "true"
  CSI_ENABLE_SNAPSHOTTER: "true"
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # Configure CSI CephFS grpc and liveness metrics port
  # CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
  # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
  # Configure CSI RBD grpc and liveness metrics port
  CSI_RBD_GRPC_METRICS_PORT: "9093"
  # CSI_RBD_LIVENESS_METRICS_PORT: "9080"

Save the file and exit.

Set up the operator:

kubectl create -f operator.yaml

Output:

configmap/rook-ceph-operator-config created

deployment.apps/rook-ceph-operator created

Check status with the following command:

kubectl get pod -n rook-ceph

The -n flag queries a specific Kubernetes namespace. Along with the operator, you should see the rook-discover agents, which the operator deploys as a DaemonSet:

Output:

NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-599765ff49-fhbz9   1/1     Running   0          92s
rook-discover-6fhlb                   1/1     Running   0          55s
rook-discover-97kmz                   1/1     Running   0          55s
rook-discover-z5k2z                   1/1     Running   0          55s

Next you will create a Ceph cluster.

Step 2: Creating a Ceph Cluster

Now that the Rook operator is installed, follow the steps below to create the Ceph cluster.

Ceph has several unique components:

  • Ceph Monitors (MONs) store the cluster maps that the Ceph daemons and clients use to connect to the cluster. More than one MON is needed for the storage service to stay available.
  • Ceph Managers (MGRs) are daemons that track the runtime metrics and state of the Ceph cluster. They run alongside the monitors.
  • Ceph Object Storage Daemons (OSDs) store the data itself and make it accessible over the network.

To access Ceph storage, clients first contact a Ceph Monitor for the current cluster map, which tells them where data is stored and how the cluster is laid out. They then communicate directly with the appropriate OSDs to read and write data.

With Rook, agents handle this wiring for you, keeping the Ceph components and storage maps hidden from view during configuration.

Next, you will configure the Ceph cluster itself.

You could finish the setup with the example configuration file in the Rook project's examples directory, but writing your own configuration helps you understand each option and doubles as documentation of your use case.

To create a Ceph cluster Kubernetes Object, start by creating a YAML file with the help of the following command:

nano cephcluster.yaml

This file defines how the Ceph cluster will be set up. Here you will deploy three Ceph Monitors (MONs) and enable the Ceph dashboard, which gives you information on the status of your Ceph cluster.

Start by declaring the apiVersion, the Kubernetes object kind, and the name and namespace:

cephcluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph

The spec key defines the model Kubernetes uses to create the Ceph cluster. Begin with the Ceph image version and whether unsupported versions are allowed:

cephcluster.yaml

spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false

Set up the data directory using the dataDirHostPath key:

cephcluster.yaml

  dataDirHostPath: /var/lib/rook

The next two keys control upgrade behavior; keep the checks enabled by leaving both set to false:

cephcluster.yaml

  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false

Set the number of MONs and whether more than one MON may run on a single node:

cephcluster.yaml

  mon:
    count: 3
    allowMultiplePerNode: false

Enable the dashboard. You can optionally change its port, serve it under a subpath behind a reverse proxy, or turn on SSL:

cephcluster.yaml

  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false

Configure monitoring. Enabling it requires Prometheus to be installed beforehand, so leave it disabled here:

cephcluster.yaml

  monitoring:
    enabled: false
    rulesNamespace: rook-ceph

RBD images (Ceph block device images) can be mirrored asynchronously between two Ceph clusters by enabling rbdMirroring. Since this article deploys a single cluster, set the number of mirroring workers to 0:

cephcluster.yaml

  rbdMirroring:
    workers: 0

Enable the crash collector, which captures crash reports from the Ceph daemons:

cephcluster.yaml

  crashCollector:
    disable: false

The cleanupPolicy settings only take effect when you delete the cluster; keep the safe defaults:

cephcluster.yaml

  cleanupPolicy:
    deleteDataDirOnHosts: ""
    removeOSDsIfOutAndSafeToRemove: false

Use the storage key to define which nodes and devices the cluster uses for data storage:

cephcluster.yaml

  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024" # uncomment if the disks are 20 GB or smaller
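If you do not want Rook to claim every node and disk, the storage section can instead list nodes and devices explicitly. A hedged sketch (the node and device names below are placeholders for illustration, not values from this guide):

```yaml
# fragment of a CephCluster spec -- node and device names are placeholders
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "your-node-name"   # a node in your cluster
      devices:
        - name: "sdb"          # a raw block device on that node
```

This form is useful when some nodes carry disks you want to keep out of Ceph.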

The disruptionManagement key defines how daemon disruptions are managed, for example during node maintenance:

cephcluster.yaml

  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

The finished cephcluster.yaml looks like this:

cephcluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  rbdMirroring:
    workers: 0
  crashCollector:
    disable: false
  cleanupPolicy:
    deleteDataDirOnHosts: ""
    removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024" # uncomment if the disks are 20 GB or smaller
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

You can customize the installation further by choosing from the range of options documented in the Rook repository. Save the file and apply the configuration:

kubectl apply -f cephcluster.yaml

Check that the pods are running:

kubectl get pod -n rook-ceph

Output:

NAME                                                   READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-lz6dn                                 3/3     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-4j9jw          5/5     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-h2cgl          5/5     Running   0          3m54s
csi-cephfsplugin-qbpnq                                 3/3     Running   0          3m54s
csi-cephfsplugin-qzsvr                                 3/3     Running   0          3m54s
csi-rbdplugin-kk9sw                                    3/3     Running   0          3m55s
csi-rbdplugin-l95f8                                    3/3     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-8gjwv             6/6     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-dhpwt             6/6     Running   0          3m55s
csi-rbdplugin-v4hk6                                    3/3     Running   0          3m55s
rook-ceph-crashcollector-pool-33zy7-68cdfb6bcf-9cfkn   1/1     Running   0          109s
rook-ceph-crashcollector-pool-33zyc-565559f7-7r6rt     1/1     Running   0          53s
rook-ceph-crashcollector-pool-33zym-749dcdc9df-w4xzl   1/1     Running   0          78s
rook-ceph-mgr-a-7fdf77cf8d-ppkwl                       1/1     Running   0          53s
rook-ceph-mon-a-97d9767c6-5ftfm                        1/1     Running   0          109s
rook-ceph-mon-b-9cb7bdb54-lhfkj                        1/1     Running   0          96s
rook-ceph-mon-c-786b9f7f4b-jdls4                       1/1     Running   0          78s
rook-ceph-operator-599765ff49-fhbz9                    1/1     Running   0          6m58s
rook-ceph-osd-prepare-pool-33zy7-c2hww                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zyc-szwsc                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zym-2p68b                 1/1     Running   0          21s
rook-discover-6fhlb                                    1/1     Running   0          6m21s
rook-discover-97kmz                                    1/1     Running   0          6m21s
rook-discover-z5k2z                                    1/1     Running   0          6m21s

The Ceph cluster is created and now you can add storage blocks.

Step 3: Add Storage

Start by creating a storage class and a CephBlockPool; together they let Kubernetes provision volumes through Rook:

kubectl apply -f ./csi/rbd/storageclass.yaml

Output

cephblockpool.ceph.rook.io/replicapool created

storageclass.storage.k8s.io/rook-ceph-block created

Note: If you deployed the Rook operator in a namespace other than rook-ceph, you have to edit the provisioner field to use the matching namespace prefix.
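For example, assuming a hypothetical operator namespace of my-rook (illustrative only), the provisioner field in storageclass.yaml would change as follows:

```yaml
# fragment of storageclass.yaml -- "my-rook" is a hypothetical namespace
provisioner: my-rook.rbd.csi.ceph.com   # default prefix is rook-ceph
```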

With the storage class and CephBlockPool in place, you can create a PersistentVolumeClaim (PVC) for your application.

First, create a YAML file:

nano pvc-rook-ceph-block.yaml

Add the following PersistentVolumeClaim:

pvc-rook-ceph-block.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

The apiVersion is the stable v1, and the kind key declares the resource as a PersistentVolumeClaim.

The spec key defines the claim itself: the rook-ceph-block storage class created earlier, the access mode, and the requested resources. ReadWriteOnce means the volume can be mounted read-write by a single node at a time.
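If multiple pods on different nodes had to share a volume, ReadWriteOnce would not be enough; Ceph provides ReadWriteMany through CephFS rather than RBD. A hedged sketch, assuming a CephFS-backed storage class (not created in this guide; rook-cephfs is the name used in the Rook examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc                  # hypothetical claim, for illustration
spec:
  storageClassName: rook-cephfs     # assumes a CephFS storage class exists
  accessModes:
    - ReadWriteMany                 # mountable read-write by many nodes
  resources:
    requests:
      storage: 5Gi
```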

Deploy the PVC:

kubectl apply -f pvc-rook-ceph-block.yaml

Output

persistentvolumeclaim/mongo-pvc created

Check the PVC status:

kubectl get pvc

Once the PVC shows a Bound status, it is ready to use:

Output

NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mongo-pvc   Bound    pvc-ec1ca7d1-d069-4d2a-9281-3d22c10b6570   5Gi        RWO            rook-ceph-block   16s
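In a script you can check for the Bound status by parsing the second column of that output. A minimal sketch; the sample line below stands in for the live output of kubectl get pvc mongo-pvc --no-headers:

```shell
# sample line copied from the output above; in a real script you would use:
#   out=$(kubectl get pvc mongo-pvc --no-headers)
out='mongo-pvc   Bound   pvc-ec1ca7d1-d069-4d2a-9281-3d22c10b6570   5Gi   RWO   rook-ceph-block   16s'

# the STATUS column is the second whitespace-separated field
status=$(echo "$out" | awk '{print $2}')
echo "PVC status: $status"   # prints: PVC status: Bound
```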

In the next step, you will mount this to an application for data persistence.

Step 4: MongoDB Deployment

Now that storage and PVC are available, you can set up the MongoDB application.

To do so, the deployment needs to meet the following requirements:

  • Use the latest mongo image in a single-replica deployment
  • Store the data of the MongoDB database on the PVC created in the previous step
  • Expose MongoDB on port 31017 of every node

Create and open the mongo.yaml configuration file:

nano mongo.yaml

Start with the Deployment resource:

mongo.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo:latest
          name: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          persistentVolumeClaim:
            claimName: mongo-pvc

Every manifest needs an apiVersion; for Deployments, use the stable apps/v1. The kind key defines the Kubernetes resource, and metadata.name gives it a name.

The spec section describes the desired state of the deployment. The labels under metadata.labels are cross-referenced by the selector.

In spec.template you define the blueprint of the pod itself: the image to run, the container port, and the volume mounted at /data/db, backed by the mongo-pvc claim. Kubernetes uses this template to create the pod.

Next, add the Kubernetes Service that makes MongoDB accessible on port 31017 of every node:

mongo.yaml

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    app: mongo
  type: NodePort
  ports:
    - port: 27017
      nodePort: 31017

This NodePort service accepts connections on port 31017 of any node and forwards them to port 27017 of the MongoDB pod, making the application reachable from outside the cluster.

Set this up with:

kubectl apply -f mongo.yaml

Output

deployment.apps/mongo created

service/mongo created

Check the status:

kubectl get svc,deployments

Output

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP           33m
service/mongo        NodePort    10.245.124.118   <none>        27017:31017/TCP   4m50s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongo   1/1     1            1           4m50s

Next, open the MongoDB shell inside the pod using kubectl. First, get the pod name:

kubectl get pods

Output

NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m

Then open the shell using the pod name (replace your_pod_name with the name from the output):

kubectl exec -it your_pod_name mongo

Switch to a new database called test (MongoDB creates it when you first write to it):

use test

Output

switched to db test

Add test data with insertOne(), which inserts a new document:

db.test.insertOne( {name: "test", number: 10 })

Output

{
  "acknowledged" : true,
  "insertedId" : ObjectId("5f22dd521ba9331d1a145a58")
}

Retrieve the data:

db.getCollection("test").find()

Output

{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }

To check data persistence, delete the MongoDB pod so the Deployment recreates it:

kubectl delete pod -l app=mongo

Once the new pod is running, get its name so you can connect to the MongoDB shell again:

kubectl get pods

Output

NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m

Execute with the pod name:

kubectl exec -it your_pod_name mongo

Retrieve the data again:

use test
db.getCollection("test").find()

Output

{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }

Rook and Ceph are now fully deployed, so you can take a look at the Rook Toolbox.

Step 5: Rook Toolbox

In this step you will use the Rook Toolbox to check the status of your Ceph deployment and to adjust Ceph configuration.

Deploy the toolbox.yaml file from the examples/kubernetes/ceph directory:

kubectl apply -f toolbox.yaml

Output

deployment.apps/rook-ceph-tools created

Check that the pod is active:

kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"

Output

NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-7c5bf67444-bmpxc   1/1     Running   0          9s

Connect to the pod with kubectl exec:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

  1. The kubectl exec command runs a command inside a running pod; here it opens a bash shell.
  2. The -n flag selects the Kubernetes namespace the pod is running in.
  3. The -i (interactive) and -t (TTY) flags attach your terminal to the shell you open.
  4. The $() construct is command substitution: the inner kubectl command looks up the name of the toolbox pod, and its output is spliced into the outer command.
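The command substitution itself works the same outside Kubernetes; a minimal illustration with a stand-in function (no cluster required; the pod name is the one from the earlier output):

```shell
# stand-in for: kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}'
lookup_pod() { echo "rook-ceph-tools-7c5bf67444-bmpxc"; }

# $() runs lookup_pod first and splices its output into the outer command
pod=$(lookup_pod)
echo "exec into pod: $pod"   # prints: exec into pod: rook-ceph-tools-7c5bf67444-bmpxc
```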

Once connected, you can run Ceph commands to check on the latest status or investigate error alerts. Run ceph status to see the current health of the cluster:

ceph status

Output

cluster:
  id:     71522dde-064d-4cf8-baec-2f19b6ae89bf
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum a,b,c (age 23h)
  mgr: a(active, since 23h)
  osd: 3 osds: 3 up (since 23h), 3 in (since 23h)

data:
  pools:   1 pools, 32 pgs
  objects: 61 objects, 157 MiB
  usage:   3.4 GiB used, 297 GiB / 300 GiB avail
  pgs:     32 active+clean

io:
  client: 5.3 KiB/s wr, 0 op/s rd, 0 op/s wr

For the OSD status:

ceph osd status

Output

+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | node-3jis6 | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | node-3jisa | 1165M | 98.8G |    0   |  5734   |    0   |     0   | exists,up |
| 2  | node-3jise | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+

Conclusion

In this article you installed a Rook Ceph cluster on Kubernetes and used it to provide persistent storage that can be shared among pods, backing a MongoDB database. You also used the Rook Toolbox, which helps you debug and troubleshoot your Ceph deployment. To learn more, check out the Rook documentation and explore its configuration samples and parameters.
