[TOC]

This article describes how to use RBD for dynamic PV provisioning in a Kubernetes cluster.

### **Prepare the Ceph cluster**

First, we need a Ceph cluster with a pool created in advance for the Kubernetes cluster to use; let's assume the pool is named kube. Also install ceph-common on the nodes of the Kubernetes cluster.

### **Create a Secret to hold the keyring**

Since accessing the Ceph cluster requires authentication (in Kubernetes we will use Ceph's admin user directly), we first store the admin user's keyring in a Kubernetes Secret.

We cannot put the keyring content into the Secret as-is; it must be base64-encoded first. Suppose the `ceph.client.admin.keyring` file (under `/etc/ceph/` on the Ceph cluster's monitor node) has the following content:

```
[client.admin]
	key = AQBRCtFci+UPKxAAvifYirfhtgEMGP46mkre+A==
```

We base64-encode the key and get:

```
$ echo -n "AQBRCtFci+UPKxAAvifYirfhtgEMGP46mkre+A==" | base64
QVFCUkN0RmNpK1VQS3hBQXZpZllpcmZodGdFTUdQNDZta3JlK0E9PQ==
```

Of course, we can also obtain it with the following commands (run on the Ceph cluster's monitor node):

```
$ grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFCUkN0RmNpK1VQS3hBQXZpZllpcmZodGdFTUdQNDZta3JlK0E9PQ==
$ ceph auth get-key client.admin | base64
QVFCUkN0RmNpK1VQS3hBQXZpZllpcmZodGdFTUdQNDZta3JlK0E9PQ==
```

Next, we create this Secret in the Kubernetes cluster, in the kube-system namespace, with the name `ceph-admin-secret`. Note that the type field below must be `kubernetes.io/rbd`:

```
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd
data:
  key: QVFCUkN0RmNpK1VQS3hBQXZpZllpcmZodGdFTUdQNDZta3JlK0E9PQ==
```

### **Install RBD-Provisioner**

Note: if your Kubernetes cluster was installed from binaries, this step can be skipped.

In this article we use a Kubernetes cluster installed with kubeadm. Since the kube-controller-manager image does not ship the ceph-common package (there is no rbd executable), the controller-manager cannot create images in Ceph. We therefore need the third-party plugin RBD-Provisioner.

There are six files under https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd/deploy/rbac; download them:

* clusterrole.yaml
* clusterrolebinding.yaml
* deployment.yaml
* role.yaml
* rolebinding.yaml
* serviceaccount.yaml

Then change the namespace value in clusterrolebinding.yaml and rolebinding.yaml to kube-system, the namespace used in the previous step:

```
$ sed -r -i "s/namespace: [^ ]+/namespace: kube-system/g" clusterrolebinding.yaml rolebinding.yaml
```

We create these resources in the kube-system namespace:

```
$ kubectl -n kube-system apply -f ./
```

Then verify that the six resources above were created successfully:

```
$ kubectl get clusterrole rbd-provisioner
NAME              AGE
rbd-provisioner   2m51s
$ kubectl get clusterrolebinding rbd-provisioner -n kube-system
NAME              AGE
rbd-provisioner   3m38s
$ kubectl get deployment rbd-provisioner -n kube-system
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
rbd-provisioner   1/1     1            1           7m46s
$ kubectl get role rbd-provisioner -n kube-system
NAME              AGE
rbd-provisioner   8m33s
$ kubectl get rolebinding rbd-provisioner -n kube-system
NAME              AGE
rbd-provisioner   9m6s
$ kubectl get serviceaccount rbd-provisioner -n kube-system
NAME              SECRETS   AGE
rbd-provisioner   1         9m43s
```

### **Create a StorageClass**

Then create a StorageClass with provisioner `ceph.com/rbd` in the Kubernetes cluster (if your cluster was installed from binaries, the provisioner should be `kubernetes.io/rbd` instead):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: ceph.com/rbd
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  monitors: 192.168.2.100:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: admin
  userSecretName: ceph-admin-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFeatures: layering
  imageFormat: "2"
```

* monitors: the Ceph cluster's monitors, separated by commas
* adminId: the Ceph user that creates images in the pool, default admin; here we use Ceph's admin user
* adminSecretName: the name of the Secret for adminId, i.e. the ceph-admin-secret we created in the previous step
* adminSecretNamespace: the namespace of adminSecretName; in the previous step we put it in kube-system
* pool: Kubernetes will create images in this pool; the pool we prepared in the first step is named kube
* userId: the Ceph user used to map images in Ceph onto Kubernetes nodes, default the same as adminId; here we explicitly set it to the same value as adminId
* userSecretName: the name of the Secret for userId
* userSecretNamespace: the namespace of userSecretName
* fsType: a file system type supported by Kubernetes, default ext4; when a Ceph image is attached to a Kubernetes node, Kubernetes formats it as fsType
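A minimal sketch of applying and checking the StorageClass, assuming the manifest above is saved as `storageclass.yaml` (a filename chosen here for illustration). If your Ceph cluster has more than one monitor, list them comma-separated in the monitors parameter, e.g. `192.168.2.100:6789,192.168.2.101:6789`:

```
# Apply the StorageClass manifest (the filename is an assumption)
$ kubectl apply -f storageclass.yaml
# Confirm the StorageClass exists and is backed by the ceph.com/rbd provisioner
$ kubectl get storageclass rbd
```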
### **Create a PVC**

Next, we create a PVC whose storageClassName is rbd; the YAML file is as follows:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rbd
spec:
  storageClassName: rbd
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
```

Then we check this PVC and find that a PV named `pvc-5d26aba6-7169-11e9-98c4-000c29483500` has already been created for it automatically:

```
$ kubectl get pvc pvc-rbd
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-rbd   Bound    pvc-5d26aba6-7169-11e9-98c4-000c29483500   1Gi        RWO            rbd            9s
```

Then we inspect the automatically created PV and its details. From the PV's YAML we can see that this PV corresponds to the image `kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e` in the kube pool of the Ceph cluster:

```
$ kubectl get pv pvc-5d26aba6-7169-11e9-98c4-000c29483500
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-5d26aba6-7169-11e9-98c4-000c29483500   1Gi        RWO            Delete           Bound    default/pvc-rbd   rbd                     2m35s
$ kubectl get pv pvc-5d26aba6-7169-11e9-98c4-000c29483500 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: "2019-05-08T08:15:02Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-5d26aba6-7169-11e9-98c4-000c29483500
  resourceVersion: "131376"
  selfLink: /api/v1/persistentvolumes/pvc-5d26aba6-7169-11e9-98c4-000c29483500
  uid: 61548449-7169-11e9-98c4-000c29483500
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-rbd
    namespace: default
    resourceVersion: "131368"
    uid: 5d26aba6-7169-11e9-98c4-000c29483500
  persistentVolumeReclaimPolicy: Delete
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e
    keyring: /etc/ceph/keyring
    monitors:
    - 192.168.2.100:6789
    pool: kube
    secretRef:
      name: ceph-admin-secret
      namespace: kube-system
    user: admin
  storageClassName: rbd
  volumeMode: Filesystem
status:
  phase: Bound
```

We go to the Ceph cluster and check whether this image exists:

```
$ rbd ls kube
kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e
$ rbd info kube/kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e
rbd image 'kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e':
	size 1GiB in 256 objects
	order 22 (4MiB objects)
	block_name_prefix: rbd_data.853f6b8b4567
	format: 2
	features: layering
	flags:
	create_timestamp: Wed May  8 16:15:02 2019
```

### **Create a Pod**

Next, we create a Pod that uses this PVC, as sketched below.
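A minimal Pod manifest might look like the following sketch; the Pod name matches the tomcat Pod queried below, while the container image and the mount path are assumptions:

```
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
  - name: tomcat
    image: tomcat          # assumed image; any long-running image works
    volumeMounts:
    - name: data
      mountPath: /data     # assumed mount path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-rbd   # the PVC created in the previous step
```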
Wait for the Pod to reach the Running state:

```
$ kubectl get pod tomcat -o wide
NAME     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
tomcat   1/1     Running   0          105s   172.26.0.27   peng01   <none>           <none>
```

Then we check the block devices on the node where this Pod runs: the image in Ceph has been attached to this node, formatted with an ext4 file system, and mounted at `/var/lib/kubelet/pods/eeb62f01-716c-11e9-98c4-000c29483500/volumes/kubernetes.io~rbd/pvc-5d26aba6-7169-11e9-98c4-000c29483500`:

```
$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
rbd0            252:0    0    1G  0 disk /var/lib/kubelet/pods/eeb62f01-716c-11e9-98c4-000c29483500/volumes/kubernetes.io~rbd/pvc-5d26aba6-7169-11e9-98c4-000c29483500
sr0              11:0    1  4.2G  0 rom
sda               8:0    0   20G  0 disk
├─sda2            8:2    0   19G  0 part
│ ├─centos-swap 253:1    0    2G  0 lvm
│ └─centos-root 253:0    0   17G  0 lvm  /
└─sda1            8:1    0    1G  0 part /boot
$ df -hT /dev/rbd0
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      ext4  976M  2.6M  958M   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-image-kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e
$ rbd showmapped
2019-05-08 16:46:46.779781 7fb5b7fb4d80 -1 did not load config file, using default settings.
id pool image                                                        snap device
0  kube kubernetes-dynamic-pvc-608f2028-7169-11e9-81a0-2ec1066e948e -    /dev/rbd0
```

### **Delete the Pod**

Next, we delete this Pod and check the block devices on its node again; the image has been detached:

```
$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1  4.2G  0 rom
sda               8:0    0   20G  0 disk
├─sda2            8:2    0   19G  0 part
│ ├─centos-swap 253:1    0    2G  0 lvm
│ └─centos-root 253:0    0   17G  0 lvm  /
└─sda1            8:1    0    1G  0 part /boot
$ rbd showmapped
2019-05-08 16:54:35.857699 7fe2e41dad80 -1 did not load config file, using default settings.
```

### **Delete the PVC**

Delete the PVC, then check the PVs; the PV has been deleted as well:

```
$ kubectl delete pvc pvc-rbd
persistentvolumeclaim "pvc-rbd" deleted
$ kubectl get pv
No resources found.
```

Then check the images in Ceph: the image has also been deleted (`rbd ls kube` no longer lists it).

### **Reference**

* http://liupeng0518.github.io/2018/12/29/k8s/%E6%8C%81%E4%B9%85%E5%8C%96%E5%AD%98%E5%82%A8/kubernetes%E5%AF%B9%E6%8E%A5ceph/
* https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd/deploy