Ceph on Minikube

TL;DR Install, Uninstall

At home I enjoy running a bare-metal Kubernetes cluster that I use as a lab for various experiments.  One particular challenge with this setup is that when I wanted to take a hard look at container storage interface (CSI) tools, my hardware couldn't support multiple disks.  So I turned to Minikube, which let me create a Kubernetes cluster comprised of virtualized nodes.  In the past I had used Minikube only to test application configuration in a Kubernetes environment, but as of version 1.10.1, Minikube can create a cluster with multiple nodes.  This is awesome!  And what is more awesome is that I can now add a virtualized disk to each virtual machine so that it can be consumed by Ceph through Rook!

Environment details:

I will omit the hardware-specific details, mainly because most modern CPUs are multicore.  And if you are reading this, I will assume that you know the limits of your machine.

With that said, this lab was set up on Ubuntu 20.04 LTS.  The hypervisor is qemu-kvm version 4.2.1, Minikube is version 1.21.0 with the KVM2 driver, kubectl is version 1.21.1, and Rook is version 1.6.5.

Continuing with the assumption I made at the beginning of this section, I will also assume that you know how to install the above software for your operating system.

Step One - Start Minikube with multiple nodes (in this case 5)

$ minikube start -n 5
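Once the start command finishes, it is worth confirming that all five nodes actually registered with the cluster before moving on (this assumes the default profile name, minikube):

```shell
# All five nodes should report Ready after a minute or so.
kubectl get nodes

# Minikube's own view of the nodes in the current profile.
minikube node list
```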

Step Two - Create qcow images to be used as block devices

In this particular case we will create a total of four block devices.  The reason is that when Minikube deploys a multi-node cluster, only the first virtual machine acts as the leader node, and we want the leader node leading, undistracted by writing data to a file system.  The follower machines are suffixed with -m0X, so we create disks that match that naming cadence for identification purposes.  You will also notice that the disk size in this example is roughly 25GB; adjust to your needs.

$ sudo qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/minikube-m02.qcow2 25000M
$ sudo qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/minikube-m03.qcow2 25000M
$ sudo qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/minikube-m04.qcow2 25000M
$ sudo qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/minikube-m05.qcow2 25000M
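Since the four image-creation commands differ only in the -m0X suffix, they can be collapsed into a loop.  This is just a sketch using the same path, size, and preallocation settings as above:

```shell
# Create one 25GB fully preallocated qcow2 image per follower node.
for i in 02 03 04 05; do
  sudo qemu-img create -f qcow2 -o preallocation=full \
    "/var/lib/libvirt/images/minikube-m${i}.qcow2" 25000M
done
```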

Step Three - Make the qcow images available as disks:

We will need to attach the qcow images to their respective virtual machine so that the operating system can detect them on boot.

$ virsh attach-disk --domain minikube-m02 /var/lib/libvirt/images/minikube-m02.qcow2 --target vdb --persistent
$ virsh attach-disk --domain minikube-m03 /var/lib/libvirt/images/minikube-m03.qcow2 --target vdb --persistent
$ virsh attach-disk --domain minikube-m04 /var/lib/libvirt/images/minikube-m04.qcow2 --target vdb --persistent
$ virsh attach-disk --domain minikube-m05 /var/lib/libvirt/images/minikube-m05.qcow2 --target vdb --persistent
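You can confirm that each domain picked up its new disk before restarting anything.  A quick loop over the same node names:

```shell
# Each domain's block device table should now list a vdb entry
# pointing at the qcow2 image we created for it.
for i in 02 03 04 05; do
  virsh domblklist "minikube-m${i}"
done
```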

These new disks should be visible to the guest operating system right away, but I have found that their availability is hit and miss.  So now that the disks are attached to the Minikube virtual machines, stop and restart them to be sure the disks are visible to the operating system.

$ minikube stop
$ minikube start
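After the restart, the raw, unformatted disk should show up inside each follower node.  A spot check on one node (again assuming the default profile, so node names follow the minikube-m0X pattern):

```shell
# /dev/vdb should appear with no partitions and no filesystem,
# which is exactly what Rook wants for an OSD.
minikube ssh -n minikube-m02 "lsblk"
```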

Step Four - Install Rook:

Get the specific version of Rook used in this example.

$ git clone https://github.com/rook/rook
$ cd rook
$ git checkout v1.6.5

With the above done, you now have the proper version of the Rook release.  Now to install the operator and let it deploy Ceph.

$ kubectl create -f cluster/examples/kubernetes/ceph/crds.yaml
$ kubectl create -f cluster/examples/kubernetes/ceph/common.yaml
$ kubectl create -f cluster/examples/kubernetes/ceph/operator.yaml
$ kubectl create -f cluster/examples/kubernetes/ceph/cluster.yaml
$ kubectl create -f cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml

Once the above commands have been run, take a brief walk and get a drink, as this will take a few minutes.  The install happens within the rook-ceph namespace, so that is where you can check on the status of your deployment.
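To watch the rollout, and to ask Ceph itself how it is feeling, you can use the toolbox pod that ships with the same Rook release (the toolbox.yaml manifest and the rook-ceph-tools deployment name are from the v1.6 examples directory):

```shell
# Watch the operator bring up the mons, mgr, and one OSD per attached disk.
kubectl -n rook-ceph get pods --watch

# Deploy the Rook toolbox and ask Ceph for its cluster status;
# a healthy cluster reports HEALTH_OK with all OSDs up and in.
kubectl create -f cluster/examples/kubernetes/ceph/toolbox.yaml
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```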

Last but not least, you can remove the default annotation from the standard storage class and make your rook-ceph storage class the default.

$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
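A quick way to confirm the two patches took effect:

```shell
# rook-ceph-block should now carry the "(default)" marker
# and standard should no longer have it.
kubectl get storageclass
```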

Step Five - Enjoy your new persistent volume layer:

Do all of the things you want with persistent volumes and persistent volume claims.  Rook will handle the orchestration for you.
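As a minimal smoke test, you can create a small claim against the new default class and watch it bind.  The claim name test-pvc here is made up; the storage class name comes from the step above:

```shell
# Create a 1Gi claim; the CSI provisioner should carve an RBD image
# out of the Ceph pool and the claim should move to Bound.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc test-pvc
```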

Step Six - It's the end of the day and it is time to go home:

Delete your Kubernetes cluster!

$ minikube delete

Then delete the qcow images you used as disks.

$ sudo rm /var/lib/libvirt/images/minikube-m02.qcow2
$ sudo rm /var/lib/libvirt/images/minikube-m03.qcow2
$ sudo rm /var/lib/libvirt/images/minikube-m04.qcow2
$ sudo rm /var/lib/libvirt/images/minikube-m05.qcow2

And time for family!