Rook-Ceph Deployment Using Helm
This guide explains how to deploy Rook-Ceph on a Kubernetes cluster at Voltage Park.
By default, Voltage Park On-Demand bare metal servers have one NVMe disk hosting the root filesystem and six (6) additional NVMe disks. The six additional disks are left unmounted, offering high flexibility for custom HPC workloads.
This procedure assumes that Kubernetes is already up and running, that Helm is installed, and that each node has six (6) clean 2.9 TB NVMe disks.
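If you want to confirm the data disks are clean and unmounted before deploying, a quick check on each node looks like this (a sketch; device names such as nvme1n1 vary per server):

```bash
# List whole disks with size and mountpoint; the six data NVMe
# drives should show an empty MOUNTPOINT column.
lsblk -d -e7 -o NAME,SIZE,TYPE,MOUNTPOINT

# Without -a, wipefs only reports existing filesystem/partition
# signatures; a clean disk prints nothing. (nvme1n1 is illustrative.)
wipefs /dev/nvme1n1
```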
IMPORTANT: You need at least three nodes to deploy a healthy Ceph cluster.
| Use Case | Nodes | Description |
|---|---|---|
| Minimum functional | 3 | Bare minimum to enable 3x replication and data durability. |
| Recommended for HA + performance | 5 | Provides better performance, improved failure tolerance, and smoother recovery. |
| High-performance scalable cluster | 6+ | Maximizes NVMe utilization and scales bandwidth and I/O performance. |
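Before proceeding, you can quickly confirm how many nodes your cluster actually has:

```bash
# This guide requires at least three schedulable nodes.
kubectl get nodes -o wide
```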
1. Add Helm Repo & Update

```bash
helm repo add rook-release https://charts.rook.io/release
helm repo update
```
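To confirm the chart is now visible in your local repo cache:

```bash
# Should list the rook-ceph chart and its current version.
helm search repo rook-release/rook-ceph
```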
2. Create Namespace

```bash
kubectl create namespace rook-ceph
```
3. Install Rook Operator (Helm-managed)

```bash
helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph
```
Verify it's running:

```bash
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
```
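If the operator pod does not reach Running, its logs usually point to the cause (RBAC, image pulls, and so on):

```bash
# Inspect the operator's startup logs for errors.
kubectl -n rook-ceph logs deploy/rook-ceph-operator
```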
4. Create CephCluster Custom Resource
This example targets three nodes and uses all available disks on each server.
ceph-cluster.yaml
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.7
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    modules:
      - name: prometheus
        enabled: true
  dashboard:
    enabled: true
  network:
    provider: host
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
```
Apply it:
```bash
kubectl apply -f ceph-cluster.yaml
```
5. Monitor Cluster Startup

```bash
kubectl -n rook-ceph get pods -w
kubectl -n rook-ceph get cephcluster
```
Look for:
- Pods named `rook-ceph-mon-*`, `rook-ceph-mgr-*`, and `rook-ceph-osd-*`
- `PHASE: Ready` on the CephCluster resource
- `HEALTH_OK` as the reported health
This may take a few minutes as all OSDs initialize.
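Instead of watching pods by hand, you can block until the CephCluster reports Ready. This is a sketch using `kubectl wait` with a JSONPath condition, which requires kubectl v1.23 or newer:

```bash
# Wait up to 15 minutes for the cluster to reach phase Ready.
kubectl -n rook-ceph wait cephcluster/rook-ceph \
  --for=jsonpath='{.status.phase}'=Ready --timeout=15m
```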
6. Deploy Toolbox Pod
To run ceph CLI commands:
```bash
kubectl apply -f https://raw.githubusercontent.com/rook/rook/v1.13.8/deploy/examples/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
```
Test:
```bash
ceph status
ceph osd tree
ceph df
```
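If `ceph status` reports anything other than HEALTH_OK, start with the detailed health output:

```bash
# Shows per-check detail for any warnings or errors.
ceph health detail
```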
Confirm All NVMe Disks Are Used
Run this in the toolbox pod:
```bash
ceph orch device ls
```
You should see six disks per node listed and marked as "in use" by OSDs.
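As a cross-check from outside the toolbox, count the OSD pods; with `osdsPerDevice: "1"`, three nodes with six disks each should yield 18 OSDs (3 × 6):

```bash
# One OSD pod is created per disk.
kubectl -n rook-ceph get pods -l app=rook-ceph-osd --no-headers | wc -l
```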
7. Create Block StorageClass (RBD)
Apply the example StorageClass (use the manifest tag that matches your installed Rook release):

```bash
kubectl apply -f https://raw.githubusercontent.com/rook/rook/v1.17.7/deploy/examples/csi/rbd/storageclass.yaml
```
Set default (optional):
```bash
kubectl patch storageclass rook-ceph-block \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
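Confirm the StorageClass exists and is flagged as default:

```bash
# rook-ceph-block should be listed with "(default)" after its name.
kubectl get storageclass
```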
8. Test PVC with Ubuntu Pod
Create a test PVC requesting 3 TiB (3072Gi).

test-pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3072Gi
  storageClassName: rook-ceph-block
```
Apply:
```bash
kubectl apply -f test-pvc.yaml
```
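The PVC should bind within a few seconds:

```bash
# STATUS should read Bound, with capacity 3Ti from rook-ceph-block.
kubectl get pvc test-ceph-pvc
```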
Create Ubuntu Pod
ubuntu-ceph-test.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-ceph-test
spec:
  containers:
    - name: ubuntu
      image: ubuntu:22.04
      command: ["/bin/bash", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
      volumeMounts:
        - mountPath: /mnt/ceph
          name: ceph-vol
  volumes:
    - name: ceph-vol
      persistentVolumeClaim:
        claimName: test-ceph-pvc
  restartPolicy: Never
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
```
Deploy:
```bash
kubectl apply -f ubuntu-ceph-test.yaml
kubectl exec -it ubuntu-ceph-test -- bash
```
The Ceph-backed block volume is mounted at /mnt/ceph.
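From inside the pod you can run a quick write test, then clean up the test resources when done (the file name below is arbitrary):

```bash
# Inside the pod: write 1 GiB and confirm the mount's capacity.
dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=1024 oflag=direct
df -h /mnt/ceph

# Back on the host: remove the test pod and PVC.
kubectl delete -f ubuntu-ceph-test.yaml
kubectl delete -f test-pvc.yaml
```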
9. Conclusion
Congratulations! You have successfully deployed Rook-Ceph on your Kubernetes cluster, putting all six previously unmounted NVMe disks on each node to work.
You can now use this storage for high-performance workloads.
If you encounter issues, reach out to support@voltagepark.com for assistance.