How to deploy Rook with Ceph in Kubernetes

In the following article I describe all the configuration steps I came across while creating and destroying a Rook.io cluster with Ceph.

(Images from sakuragawa.moe and rook.io)

Block Storage

Object Storage

Shared Filesystem

Procedure

NOTE: I assume you have a running Kubernetes cluster with three or more nodes and full access via a kubeconfig.

  1. Prepare and deploy the Ceph-cluster
  2. Setup the tools and do basic health-checks

Getting started

Getting the repository:

$ git clone https://github.com/rook/rook.git
$ cd rook/cluster/examples/kubernetes/ceph

Let's get it started

Prepare the Ceph-Cluster

NOTE: No changes are required to the following files.

common.yaml provides:

  • Namespace
  • CustomResourceDefinition
  • Role
  • ServiceAccount
  • RoleBinding
  • ClusterRole
  • ClusterRoleBinding
$ kubectl create -f common.yaml

operator.yaml provides:

  • Deployment
$ kubectl create -f operator.yaml

cluster.yaml provides:

  • CephCluster
$ kubectl create -f cluster.yaml
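
For orientation, the relevant part of cluster.yaml looks roughly like this (a sketch only; the image tag and mon count are example values, take the ones shipped in your checkout):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4 # example tag, use the one from your checkout
  dataDirHostPath: /var/lib/rook # Ceph state on each node, relevant for teardown
  mon:
    count: 3 # odd number; 3 fits a cluster with 3 or more nodes
  storage:
    useAllNodes: true
    useAllDevices: true # consume every empty disk Rook finds
```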

toolbox.yaml provides:

  • Deployment
$ kubectl create -f toolbox.yaml

Deploy the Storage Provider (BS)

Block Storage

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
[...]
spec:
  [...]
  replicated:
    size: 3 # if you have a 3 or more node cluster

Then we will apply it with:

$ kubectl create -f csi/rbd/storageclass.yaml
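
Besides the CephBlockPool shown above, that file also defines the StorageClass the PVC below refers to. A trimmed-down sketch (the parameter values assume the default rook-ceph namespace and the pool name replicapool from the same file):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph # namespace the Rook cluster runs in
  pool: replicapool    # the CephBlockPool defined in the same file
  imageFormat: "2"
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```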

Example PVC (BS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
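
To consume the claim, mount it in a pod like any other PVC. A minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-demo # placeholder name
spec:
  containers:
    - name: app
      image: nginx # any image works
      volumeMounts:
        - name: data
          mountPath: /data # the RBD volume appears here, formatted and mounted
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rook-ceph-block-pvc
```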

Deploy the Storage Provider (SFS)

Shared-File-System

  • The filesystem name myfs must match in both files, so if you rename it, change it in both filesystem.yaml and storageclass.yaml!

filesystem.yaml provides:

  • CephFilesystem
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
[...]
spec:
  metadataPool:
    replicated:
      size: 3 # if you have a 3 or more node cluster
  dataPools:
    - replicated:
        size: 3 # if you have a 3 or more node cluster
[...]

Apply using:

$ kubectl create -f filesystem.yaml

storageclass.yaml provides:

  • StorageClass
$ kubectl create -f csi/cephfs/storageclass.yaml

Verify a successful deployment:

$ kubectl exec -n rook-ceph rook-ceph-tools-<randomID> -- ceph status
[...]
services:
  [...]
  mds: myfs:2 {0=myfs-a=up:active,1=myfs-b=up:active} 1 up:standby-replay

Example PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
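
Because the CephFS StorageClass supports ReadWriteMany, several pods can mount the same claim at once. A sketch with two replicas sharing the volume (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-demo # placeholder name
spec:
  replicas: 2 # RWX: both pods mount the same volume simultaneously
  selector:
    matchLabels:
      app: cephfs-demo
  template:
    metadata:
      labels:
        app: cephfs-demo
    spec:
      containers:
        - name: app
          image: nginx # any image works
          volumeMounts:
            - name: shared
              mountPath: /shared # same files visible in both replicas
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: cephfs-pvc
```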

Deploy the Storage Provider (OBS)

Object Storage

object.yaml provides:

  • CephObjectStore
spec:
  [...]
  replicated:
    size: 3 # if you have a 3 or more node cluster

Apply using:

$ kubectl create -f object.yaml

storageclass-bucket-delete.yaml provides:

  • StorageClass
$ kubectl create -f storageclass-bucket-(delete|retain).yaml
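
The two variants differ only in the reclaim policy of the bucket StorageClass; roughly (objectStoreName must match the metadata.name in object.yaml, my-store here is a placeholder):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete # the -retain variant uses Retain instead
parameters:
  objectStoreName: my-store # must match the CephObjectStore name in object.yaml
  objectStoreNamespace: rook-ceph
```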

Example bucket creation:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-obs-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
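
Once the claim is provisioned, Rook creates a ConfigMap and a Secret with the same name as the ObjectBucketClaim, carrying the endpoint (BUCKET_HOST, BUCKET_NAME, BUCKET_PORT) and the S3 credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). A sketch of a pod consuming them (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: obc-demo # placeholder name
spec:
  containers:
    - name: app
      image: amazon/aws-cli # any S3-capable client works
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: ceph-obs-bucket # injects BUCKET_HOST, BUCKET_NAME, BUCKET_PORT
        - secretRef:
            name: ceph-obs-bucket # injects AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
```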

Teardown of the Rook-Ceph Cluster

NOTE:

  • The following will delete all PVs and PVCs created with one of the Rook-Ceph storage classes!
  • The order of execution is important; otherwise you may end up with lost or stuck resources.
$ kubectl delete -f csi/cephfs/
$ kubectl delete -f csi/rbd/
$ kubectl delete -f cluster.yaml
$ kubectl delete -f operator.yaml
$ kubectl delete -f common.yaml
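
One step the manifests cannot do for you: Rook keeps its state in dataDirHostPath on every node, and a leftover directory will block a fresh deployment later. Assuming the default path from cluster.yaml, run on each node:

```shell
# Remove leftover Rook/Ceph state on a node after the teardown above.
# Assumes the default dataDirHostPath (/var/lib/rook) from cluster.yaml.
rm -rf /var/lib/rook
```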


Working as an IT operations engineer at NeXenio, a spin-off of the Hasso Plattner Institute building products around a digital workspace.
