How to deploy Rook with Ceph in Kubernetes


This guide shows how to deploy Rook with Ceph on Kubernetes and provision three kinds of storage: block storage, which a single pod can mount; a shared filesystem, which multiple pods can mount with read/write permission; and object storage, which exposes an S3 API for applications to put and get data.

Procedure

Getting started

$ git clone https://github.com/rook/rook.git
$ cd rook/cluster/examples/kubernetes/ceph

Let's get started. The following manifests create the common resources (namespaces and RBAC), the Rook operator, the Ceph cluster, and the toolbox pod:

$ kubectl create -f common.yaml
$ kubectl create -f operator.yaml
$ kubectl create -f cluster.yaml
$ kubectl create -f toolbox.yaml

Deploy the Storage Provider (Block Storage)

Block storage allows a single pod to mount storage. This guide shows how to create a simple, multi-tier web application on Kubernetes using persistent volumes enabled by Rook.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
[...]
spec:
  [...]
  replicated:
    size: 3 # if you have a 3 or more node cluster
$ kubectl create -f csi/rbd/storageclass.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
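To verify that the claim binds and provisions a volume, a pod can mount it. A minimal sketch (the pod name and image are illustrative, not part of the Rook examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-test-pod        # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data    # the RBD volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rook-ceph-block-pvc   # the PVC created above
```

Once the pod is Running, `kubectl get pvc rook-ceph-block-pvc` should show the claim as Bound.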

Deploy the Storage Provider (Shared Filesystem)

A shared filesystem can be mounted with read/write permission from multiple pods. This may be useful for applications which can be clustered using a shared filesystem.

Create the filesystem by specifying the desired settings for the metadata pool, data pools, and metadata server in the CephFilesystem CRD. In this example we create the metadata pool with replication of three and a single data pool with replication of three. For more options, see the documentation on creating shared filesystems.

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
[...]
spec:
  metadataPool:
    replicated:
      size: 3 # if you have a 3 or more node cluster
  dataPools:
    - replicated:
        size: 3 # if you have a 3 or more node cluster
[...]
$ kubectl create -f csi/cephfs/filesystem.yaml

Before Rook can start provisioning storage, a StorageClass needs to be created based on the filesystem. This is needed for Kubernetes to interoperate with the CSI driver to create persistent volumes.

$ kubectl create -f csi/cephfs/storageclass.yaml

To see the detailed status of the filesystem, start and connect to the Rook toolbox. The mds service will now appear in the output of ceph status. In this example there are two active MDS instances, with one more instance in standby-replay mode in case of failover.

$ kubectl exec -n rook-ceph rook-ceph-tools-<randomID> -- ceph status
[...]
  services:
    [...]
    mds: myfs:2 {0=myfs-a=up:active,1=ros-netcupfs-b=up:active} 1 up:standby-replay

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
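Because the claim is ReadWriteMany, several pods can mount the same volume simultaneously. A minimal sketch of a two-replica Deployment sharing the filesystem (the Deployment name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-demo           # illustrative name
spec:
  replicas: 2                 # both pods mount the same volume read/write
  selector:
    matchLabels:
      app: cephfs-demo
  template:
    metadata:
      labels:
        app: cephfs-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: shared
              mountPath: /shared    # same CephFS path in every replica
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: cephfs-pvc   # the ReadWriteMany PVC from above
```

A file written to /shared in one replica is immediately visible in the other.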

Deploy the Storage Provider (Object Storage)

Object storage exposes an S3 API to the storage cluster for applications to put and get data.

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
[...]
spec:
  [...]
  replicated:
    size: 3 # if you have a 3 or more node cluster
$ kubectl create -f object.yaml
$ kubectl create -f storageclass-bucket-(delete|retain).yaml
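The two variants differ only in their reclaim policy: delete removes the underlying bucket when the claim is deleted, retain keeps it. A sketch of what such a bucket StorageClass looks like (the objectStoreName value is an assumption; use the name of your CephObjectStore):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete              # or Retain
parameters:
  objectStoreName: my-store        # assumed object store name
  objectStoreNamespace: rook-ceph
```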
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-obs-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
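Once the claim is bound, Rook creates a ConfigMap and a Secret with the same name as the claim, carrying the S3 endpoint (BUCKET_HOST, BUCKET_NAME, ...) and the credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). A pod can consume them via envFrom; a minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-client               # illustrative name
spec:
  containers:
    - name: app
      image: amazon/aws-cli
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: ceph-obs-bucket   # endpoint and bucket name
        - secretRef:
            name: ceph-obs-bucket   # access key and secret key
```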

Tear Down the Rook-Ceph Cluster

$ kubectl delete -f csi/cephfs/
$ kubectl delete -f csi/rbd/
$ kubectl delete -f cluster.yaml
$ kubectl delete -f operator.yaml
$ kubectl delete -f common.yaml

Note: Rook keeps state under dataDirHostPath (/var/lib/rook by default) on each node; delete that directory on every host before deploying a new cluster.
