[TOC]

## Version notes

| CSI version | Kubernetes version |
| :------: | :------------: |
| 3.5.1 | 1.18.18 |

> For the detailed version compatibility matrix, see the reference articles below.

## Steps on the Ceph side

1. Create a `kubernetes` storage pool:

```shell
$ ceph osd pool create kubernetes 128 128
pool 'kubernetes' created
```

2. Initialize the pool:

```shell
$ rbd pool init kubernetes
```

3. Create a new user for Kubernetes and ceph-csi:

```shell
$ sudo ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes' -o /etc/ceph/ceph.client.kubernetes.keyring
```

4. Collect the cluster information:

```shell
$ ceph mon dump
epoch 2
fsid b87d2535-406b-442d-8de2-49d86f7dc599
last_changed 2022-06-15T17:35:37.096336+0800
created 2022-06-15T17:35:05.828763+0800
min_mon_release 15 (octopus)
0: [v2:192.168.31.69:3300/0,v1:192.168.31.69:6789/0] mon.ceph01
1: [v2:192.168.31.102:3300/0,v1:192.168.31.102:6789/0] mon.ceph02
2: [v2:192.168.31.165:3300/0,v1:192.168.31.165:6789/0] mon.ceph03
dumped monmap epoch 2
```

## Deploying ceph-csi on Kubernetes

> In this article ceph-csi is deployed in the `csi-system` namespace.

1. Create the namespace:

```shell
$ sudo mkdir -p /opt/addons/cephrbd && \
  sudo chown -R ${USER}. /opt/addons && cd /opt/addons/cephrbd
$ cat << EOF | tee 0.namespace.yml >> /dev/null
apiVersion: v1
kind: Namespace
metadata:
  name: csi-system
EOF
$ kubectl apply -f 0.namespace.yml
namespace/csi-system created
```

2. Generate a csi-config-map.yaml file similar to the example below, substituting your fsid for "clusterID" and your monitor addresses for "monitors":

```shell
$ cat << EOF | tee 1.csi-config-map.yml >> /dev/null
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "b87d2535-406b-442d-8de2-49d86f7dc599",
        "monitors": [
          "192.168.31.69:6789",
          "192.168.31.102:6789",
          "192.168.31.165:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: csi-system
EOF
$ kubectl apply -f 1.csi-config-map.yml
configmap/ceph-csi-config created
```

> Fill in the values from the output of the Ceph-side commands above.
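The `clusterID` and `monitors` values come straight from the `ceph mon dump` output. If you keep a saved copy of that output, the v1 monitor endpoints can be extracted mechanically instead of copied by hand; `mon_dump.txt` below is a hypothetical local copy, and the `sed` expression is only a sketch (GNU sed assumed):

```shell
# mon_dump.txt: an assumed saved copy of the `ceph mon dump` output above.
cat > mon_dump.txt <<'EOF'
0: [v2:192.168.31.69:3300/0,v1:192.168.31.69:6789/0] mon.ceph01
1: [v2:192.168.31.102:3300/0,v1:192.168.31.102:6789/0] mon.ceph02
2: [v2:192.168.31.165:3300/0,v1:192.168.31.165:6789/0] mon.ceph03
EOF
# Keep only the v1 (port 6789) endpoint of each monitor line; these
# host:port pairs are the entries of the "monitors" array in the ConfigMap.
sed -nE 's/.*v1:([0-9.]+:[0-9]+).*/\1/p' mon_dump.txt
```

Each printed `host:port` pair corresponds to one entry of the `monitors` array.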
3. Create the CSI KMS ConfigMap (left empty here, since volume encryption is not used):

```shell
$ cat << EOF | tee 2.csi-kms-config-map.yml >> /dev/null
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: csi-system
EOF
$ kubectl apply -f 2.csi-kms-config-map.yml
configmap/ceph-csi-encryption-kms-config created
```

4. Create the RBD access credentials:

```shell
$ cat << EOF | tee 3.csi-rbd-secret.yml >> /dev/null
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: csi-system
stringData:
  userID: kubernetes
  # Get the key with `ceph auth get-key client.kubernetes`; do not base64-encode it.
  userKey: AQCfkKpidBhVHBAAJTzhkRKlSMuWDDibrlbPDA==
EOF
$ kubectl apply -f 3.csi-rbd-secret.yml
secret/csi-rbd-secret created
```

5. Create the Ceph configuration and keyring ConfigMap:

```shell
$ kubectl -n csi-system create configmap ceph-config --from-file=/etc/ceph/ceph.conf --from-file=keyring=/etc/ceph/ceph.client.kubernetes.keyring
configmap/ceph-config created
```

6. Create the related RBAC objects:

```shell
$ cat << EOF | tee 4.csi-provisioner-rbac.yaml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  # replace with non-default namespace name
  namespace: csi-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: csi-system
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: csi-system
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: csi-system
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: csi-system
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
EOF
$ cat << EOF | tee 5.csi-nodeplugin-rbac.yaml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  # replace with non-default namespace name
  namespace: csi-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: csi-system
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f 4.csi-provisioner-rbac.yaml
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
$ kubectl apply -f 5.csi-nodeplugin-rbac.yaml
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
```
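A side note on the heredocs used throughout: the manifests above use `cat << EOF`, while the provisioner and node-plugin manifests below use `cat << 'EOF'`. The quoting of the delimiter matters, because those manifests contain strings such as `$(ADDRESS)` that the shell would otherwise expand before writing the file. A minimal demonstration (the file names here are illustrative only):

```shell
GREETING=hello
# Unquoted delimiter: the shell expands $GREETING inside the heredoc body.
cat << EOF > expanded.txt
value: $GREETING
EOF
# Quoted delimiter: the body is taken literally, with no expansion.
cat << 'EOF' > literal.txt
value: $GREETING
EOF
cat expanded.txt   # value: hello
cat literal.txt    # value: $GREETING
```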
7. Create the ceph-csi provisioner:

```shell
$ cat << 'EOF' | tee 6.csi-rbdplugin-provisioner.yml >> /dev/null
---
kind: Service
apiVersion: v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: csi-system
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-rbdplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: csi-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-rbdplugin-provisioner
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: rbd-csi-provisioner
      priorityClassName: system-cluster-critical
      containers:
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--retry-interval-start=500ms"
            - "--leader-election=true"
            # set it to true to use topology based provisioning
            - "--feature-gates=Topology=false"
            - "--feature-gates=HonorPVReclaimPolicy=true"
            - "--prevent-volume-mode-conversion=true"
            # if fstype is not specified in storageclass, ext4 is default
            - "--default-fstype=ext4"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election=true"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: registry.k8s.io/sig-storage/csi-attacher:v4.3.0
          args:
            - "--v=1"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
            - "--default-fstype=ext4"
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: registry.k8s.io/sig-storage/csi-resizer:v1.8.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
            - "--feature-gates=RecoverVolumeExpansionFailure=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-rbdplugin
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--rbdhardmaxclonedepth=8"
            - "--rbdsoftmaxclonedepth=4"
            - "--enableprofiling=false"
            - "--setmetadata=true"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: oidc-token
              mountPath: /run/secrets/tokens
              readOnly: true
        - name: csi-rbdplugin-controller
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--type=controller"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--drivernamespace=$(DRIVER_NAMESPACE)"
            - "--setmetadata=true"
          env:
            - name: DRIVER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: socket-dir
          emptyDir: { medium: "Memory" }
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: { medium: "Memory" }
        - name: oidc-token
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 3600
                  audience: ceph-csi-kms
EOF
$ kubectl apply -f 6.csi-rbdplugin-provisioner.yml
service/csi-rbdplugin-provisioner created
deployment.apps/csi-rbdplugin-provisioner created
$ kubectl -n csi-system get pod -l app=csi-rbdplugin-provisioner
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbdplugin-provisioner-6bd5bd5fd9-psp58   7/7     Running   0          19m
csi-rbdplugin-provisioner-6bd5bd5fd9-sl4kq   7/7     Running   0          19m
csi-rbdplugin-provisioner-6bd5bd5fd9-wwzzp   7/7     Running   0          19m
```

If you mirror the images in a private registry (`172.139.20.170:5000` in this example), rewrite the image references first:

> `sed -ri 's@registry.k8s.io/sig-storage@172.139.20.170:5000/csi@g' 6.csi-rbdplugin-provisioner.yml`
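The registry rewrite can be sanity-checked offline on a one-line sample before touching the real manifest (the sample file name is illustrative; `172.139.20.170:5000` is this article's private mirror, so substitute your own):

```shell
# A one-line stand-in for the manifest's image reference.
printf 'image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0\n' > sample.yml
# Rewrite the upstream registry prefix to the private mirror.
sed -ri 's@registry.k8s.io/sig-storage@172.139.20.170:5000/csi@g' sample.yml
cat sample.yml   # image: 172.139.20.170:5000/csi/csi-provisioner:v3.5.0
```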
> `sed -ri 's@quay.io/cephcsi@172.139.20.170:5000/csi@g' 6.csi-rbdplugin-provisioner.yml`

8. Create the ceph-csi node plugin:

```shell
$ cat << 'EOF' | tee 7.csi-rbdplugin.yml >> /dev/null
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin
  # replace with non-default namespace name
  namespace: csi-system
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
          args:
            - "--v=1"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--nodeid=$(NODE_ID)"
            - "--pluginpath=/var/lib/kubelet/plugins"
            - "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--enableprofiling=false"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
            #
            # Options to enable read affinity.
            # If enabled Ceph CSI will fetch labels from kubernetes node and
            # pass `read_from_replica=localize,crush_location=type:value` during
            # rbd map command. refer:
            # https://docs.ceph.com/en/latest/man/8/rbd/#kernel-rbd-krbd-options
            # for more details.
            # - "--enable-read-affinity=true"
            # - "--crush-location-labels=topology.io/zone,topology.io/rack"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /etc/selinux
              name: etc-selinux
              readOnly: true
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-logdir
              mountPath: /var/log/ceph
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: oidc-token
              mountPath: /run/secrets/tokens
              readOnly: true
        - name: liveness-prometheus
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: quay.io/cephcsi/cephcsi:v3.9.0
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: ceph-logdir
          hostPath:
            path: /var/log/ceph
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: etc-selinux
          hostPath:
            path: /etc/selinux
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: { medium: "Memory" }
        - name: oidc-token
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 3600
                  audience: ceph-csi-kms
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  # replace with non-default namespace name
  namespace: csi-system
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
  selector:
    app: csi-rbdplugin
EOF
$ kubectl apply -f 7.csi-rbdplugin.yml
daemonset.apps/csi-rbdplugin created
service/csi-metrics-rbdplugin created
$ kubectl -n csi-system get pod -l app=csi-rbdplugin
NAME                  READY   STATUS    RESTARTS   AGE
csi-rbdplugin-747x8   3/3     Running   0          7m38s
csi-rbdplugin-8l5pj   3/3     Running   0          7m38s
csi-rbdplugin-d9pnv   3/3     Running   0          7m38s
csi-rbdplugin-rslnz   3/3     Running   0          7m38s
csi-rbdplugin-tcrs4   3/3     Running   0          7m38s
```

If the kubelet data directory has been changed, update the related paths accordingly. For example, if the kubelet data directory is `/data/k8s/data/kubelet`, run `sed -ri 's#/var/lib/kubelet#/data/k8s/data/kubelet#g' 7.csi-rbdplugin.yml` to update the manifest.

> `sed -ri 's@registry.k8s.io/sig-storage@172.139.20.170:5000/csi@g' 7.csi-rbdplugin.yml`
> `sed -ri 's@quay.io/cephcsi@172.139.20.170:5000/csi@g' 7.csi-rbdplugin.yml`

9. Create the StorageClass for dynamic provisioning:

```shell
$ cat << EOF | tee 8.csi-rbd-sc.yaml >> /dev/null
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: b87d2535-406b-442d-8de2-49d86f7dc599
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: csi-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: csi-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
$ kubectl apply -f 8.csi-rbd-sc.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
```

> Remember to change the `clusterID` field to your own cluster's fsid.

## Verification

Create a 1Gi raw-block PVC:

```shell
$ cat << EOF | tee 9.raw-block-pvc.yaml >> /dev/null
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f 9.raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created
$ kubectl get pvc raw-block-pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-b89dd991-4b74-432c-bebf-97098f9b8740   1Gi        RWO            csi-rbd-sc     25s
```

## Attachments

Link: https://pan.baidu.com/s/1wXSMC8-yoJfCKjcTLR-AbQ
Extraction code: gcuj

## References

- Ceph official documentation: https://docs.ceph.com/en/octopus/rbd/rbd-kubernetes/
- Kubernetes CSI developer documentation: https://kubernetes-csi.github.io/docs/drivers.html
- ceph-csi documentation: https://github.com/ceph/ceph-csi/tree/v3.5.1