Using Ceph RBD block storage with a Kubernetes 1.13.1 cluster

References

https://github.com/kubernetes/examples/tree/master/staging/volumes/rbd
http://docs.ceph.com/docs/mimic/rados/operations/pools/
https://blog.csdn.net/aixiaoyang168/article/details/78999851 
https://www.cnblogs.com/keithtt/p/6410302.html
https://kubernetes.io/docs/concepts/storage/volumes/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
https://blog.csdn.net/wenwenxiong/article/details/78406136
http://www.mamicode.com/info-detail-1701743.html

Introduction

Ceph provides object storage, a file system (CephFS), and block storage (RBD) in a single system. The Kubernetes examples repository covers both the CephFS and RBD usage patterns: CephFS requires ceph to be installed on the nodes, while RBD only requires ceph-common on the nodes.
The differences in supported access modes are as follows:

Volume Plugin   ReadWriteOnce   ReadOnlyMany    ReadWriteMany
CephFS              ✓               ✓               ✓
RBD                 ✓               ✓               -

Environment

Kubernetes cluster, version 1.13.1

[root@elasticsearch01 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   24d   v1.13.1
10.2.8.65   Ready    <none>   24d   v1.13.1

Ceph cluster, Luminous release

[root@ceph01 ~]# ceph -s
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active), standbys: ceph02, ceph01
    osd: 24 osds: 24 up, 24 in
    rgw: 3 daemons active

Procedure

I. Create the pool and images on the Ceph cluster

[root@ceph01 ~]# ceph osd pool create rbd-k8s 1024 1024 
For better initial performance on pools expected to store a large number of objects, consider supplying the expected_num_objects parameter when creating the pool.

[root@ceph01 ~]# ceph osd lspools 
1 rbd-es,2 .rgw.root,3 default.rgw.control,4 default.rgw.meta,5 default.rgw.log,6 default.rgw.buckets.index,7 default.rgw.buckets.data,8 default.rgw.buckets.non-ec,9 rbd-k8s,

[root@ceph01 ~]# rbd create rbd-k8s/cephimage1 --size 10240
[root@ceph01 ~]# rbd create rbd-k8s/cephimage2 --size 20480
[root@ceph01 ~]# rbd create rbd-k8s/cephimage3 --size 40960
[root@ceph01 ~]# rbd list rbd-k8s
cephimage1
cephimage2
cephimage3
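
On Luminous it is also good practice to tag the new pool with the rbd application and sanity-check the images before handing them to Kubernetes; a minimal sketch (the application-enable step is an assumption about how this pool was created):

# tag the pool for RBD use (Luminous warns about pools with no application set)
ceph osd pool application enable rbd-k8s rbd
# inspect one of the images: size, object layout and enabled features
rbd info rbd-k8s/cephimage1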

II. Use Ceph RBD block storage in the Kubernetes cluster

1. Download the examples

[root@elasticsearch01 ~]# git clone https://github.com/kubernetes/examples.git
Cloning into 'examples'...
remote: Enumerating objects: 11475, done.
remote: Total 11475 (delta 0), reused 0 (delta 0), pack-reused 11475
Receiving objects: 100% (11475/11475), 16.94 MiB | 6.00 MiB/s, done.
Resolving deltas: 100% (6122/6122), done.

[root@elasticsearch01 ~]# cd examples/staging/volumes/rbd
[root@elasticsearch01 rbd]# ls
rbd-with-secret.yaml  rbd.yaml  README.md  secret
[root@elasticsearch01 rbd]# cp -a ./rbd /k8s/yaml/volumes/

2. Install the Ceph client on the Kubernetes nodes

[root@elasticsearch01 ceph]# yum install ceph-common
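
The ceph-common version should ideally match the cluster release (Luminous here), which usually means pointing yum at the upstream Ceph repository first; a minimal sketch, assuming the default download.ceph.com mirror:

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph Luminous packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
yum install -y ceph-common
# verify the client version
rbd --version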

3. Modify the rbd-with-secret.yaml configuration file
The modified configuration is as follows:

[root@elasticsearch01 rbd]# cat rbd-with-secret.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
    - image: kubernetes/pause
      name: rbd-rw
      volumeMounts:
      - name: rbdpd
        mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
        - '10.0.4.10:6789'
        - '10.0.4.13:6789'
        - '10.0.4.15:6789'
        pool: rbd-k8s
        image: cephimage1
        fsType: ext4
        readOnly: true
        user: admin
        secretRef:
          name: ceph-secret

Adjust the following parameters to your environment:
monitors: the Ceph cluster monitors; a cluster can run several monitors, and three are configured here
pool: the Ceph pool used to group the stored data; rbd-k8s is used here
image: the RBD disk image in the pool; cephimage1 is used here
fsType: the file system type; the default ext4 is fine
readOnly: whether the volume is mounted read-only; read-only is fine for this test
user: the Ceph client user used to access the cluster; admin is used here
keyring: the keyring required for Ceph authentication, i.e. the ceph.client.admin.keyring generated when the Ceph cluster was deployed
imageformat: the disk image format, 2 or the older 1; use 1 on older kernels
imagefeatures: the disk image features; check which ones the node kernel supports (uname -r). CentOS 7.4 with kernel 3.10.0-693.el7.x86_64 only supports layering; see the sketch below
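
If the node kernel only supports layering, the feature mismatch can be avoided up front by creating images with only that feature enabled; a minimal sketch (cephimage4 is a hypothetical image name, not one of the images created above):

# check the kernel on the Kubernetes node
uname -r
# create an image with format 2 and only the layering feature
rbd create rbd-k8s/cephimage4 --size 10240 --image-format 2 --image-feature layering
rbd info rbd-k8s/cephimage4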

4. Use the Ceph authentication key
Using a Kubernetes Secret in the cluster is more convenient, easier to extend, and more secure

[root@ceph01 ~]# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
    key = AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw==

[root@ceph01 ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
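
The same base64 value can also be produced straight from the Ceph auth database, without parsing the keyring file; a sketch:

ceph auth get-key client.admin | base64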

5. Create the ceph-secret

[root@elasticsearch01 rbd]# cat secret/ceph-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==

[root@elasticsearch01 rbd]# kubectl create -f secret/ceph-secret.yaml 
secret/ceph-secret created
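
Equivalently, instead of writing the YAML, the Secret could be created from the raw (un-encoded) key with kubectl, which does the base64 encoding itself; a sketch using the key shown above:

kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw=='
kubectl get secret ceph-secret -o yaml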

6. Create a pod to test RBD
Simply create it as in the upstream example

[root@elasticsearch01 rbd]# kubectl create -f rbd-with-secret.yaml 

In production, however, volumes are not used directly like this: such a volume is created with the pod and removed when the pod is deleted, so the data is not preserved. To keep data across pod lifecycles, use a PV and PVC instead.

7. Create the Ceph PV
Note that RBD supports ReadWriteOnce and ReadOnlyMany but not ReadWriteMany; when mapping RBD images day to day, an image is likewise mounted on only one client at a time. CephFS does support ReadWriteMany.

[root@elasticsearch01 rbd]# cat rbd-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '10.0.4.10:6789'
      - '10.0.4.13:6789'
      - '10.0.4.15:6789'
    pool: rbd-k8s
    image: cephimage2
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv.yaml 
persistentvolume/ceph-rbd-pv created

[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Available  

8. Create the Ceph PVC

[root@elasticsearch01 rbd]# cat rbd-pv-claim.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-claim.yaml 
persistentvolumeclaim/ceph-rbd-pv-claim created

[root@elasticsearch01 rbd]# kubectl get pvc
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-pv-claim   Bound    ceph-rbd-pv   20Gi       RWO                           6s

[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Bound    default/ceph-rbd-pv-claim                           5m28s

9. Create a pod that tests RBD through the PV and PVC
Because the RBD image has to be formatted before it is mounted and the image is fairly large, this takes a while, roughly a few minutes

[root@elasticsearch01 rbd]# cat rbd-pv-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv-pod1
spec:
  containers:
  - name: ceph-rbd-pv-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt/ceph-rbd-pvc/busybox
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: ceph-rbd-pv-claim

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-pod.yaml 
pod/ceph-rbd-pv-pod1 created

[root@elasticsearch01 rbd]# kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
busybox            1/1     Running             432        18d
ceph-rbd-pv-pod1   0/1     ContainerCreating   0          19s
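
If the pod stays in ContainerCreating, the reason shows up in its events; a sketch of how to inspect them:

kubectl describe pod ceph-rbd-pv-pod1
kubectl get events --sort-by='.metadata.creationTimestamp'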

In this case the following error was reported:
MountVolume.WaitForAttach failed for volume "ceph-rbd-pv" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable". In some cases useful info is found in syslog - try "dmesg | tail". rbd: map failed: (6) No such device or address
Solution
Disable the image features that the CentOS 7.4 kernel does not support. For production it is best to run Kubernetes and the Ceph clients on an operating system with a newer kernel.
rbd feature disable rbd-k8s/cephimage2 exclusive-lock object-map fast-diff deep-flatten

[root@ceph01 ~]# rbd feature disable rbd-k8s/cephimage2 exclusive-lock object-map fast-diff deep-flatten
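
The result can be checked on the image, and for new images the problem can be avoided by defaulting to layering only in ceph.conf; a sketch (the ceph.conf change is optional and an assumption about your deployment):

# confirm that only layering is left enabled
rbd info rbd-k8s/cephimage2 | grep features
# optional: in /etc/ceph/ceph.conf on the client, default new images to layering only
#   [client]
#   rbd_default_features = 1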

III. Verify the results

1. Verification on the Kubernetes side

[root@elasticsearch01 rbd]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
busybox            1/1     Running   432        18d     10.254.35.3   10.2.8.65   <none>           <none>
ceph-rbd-pv-pod1   1/1     Running   0          3m39s   10.254.35.8   10.2.8.65   <none>           <none>

[root@elasticsearch02 ceph]# df -h |grep rbd
/dev/rbd0                  493G  162G  306G  35% /data
/dev/rbd1                   20G   45M   20G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
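
The mapping itself can also be confirmed on the node, since the kubelet uses the kernel RBD client; a sketch:

rbd showmapped
lsblk /dev/rbd1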

[root@elasticsearch01 rbd]# kubectl exec -ti ceph-rbd-pv-pod1 sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.1G      7.4G     39.1G  16% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/vda1                49.1G      7.4G     39.1G  16% /dev/termination-log
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/resolv.conf
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hostname
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/rbd1                19.6G     44.0M     19.5G   0% /mnt/ceph-rbd-pvc/busybox
tmpfs                     7.8G     12.0K      7.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     7.8G         0      7.8G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/timer_stats
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     7.8G         0      7.8G   0% /proc/scsi
tmpfs                     7.8G         0      7.8G   0% /sys/firmware
/ # cd /mnt/ceph-rbd-pvc/busybox/
/mnt/ceph-rbd-pvc/busybox # ls
lost+found
/mnt/ceph-rbd-pvc/busybox # touch ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # ls
ceph-rbd-pods  lost+found
/mnt/ceph-rbd-pvc/busybox # echo busbox>ceph-rbd-pods 
/mnt/ceph-rbd-pvc/busybox # cat ceph-rbd-pods 
busbox
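
To confirm the data really outlives the pod, the pod can be deleted and recreated against the same PVC; a sketch:

kubectl delete pod ceph-rbd-pv-pod1
kubectl create -f rbd-pv-pod.yaml
# the file written earlier should still be present
kubectl exec ceph-rbd-pv-pod1 -- cat /mnt/ceph-rbd-pvc/busybox/ceph-rbd-pods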

2. Verification on the Ceph side

[root@ceph01 ~]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED 
    65.9TiB     58.3TiB      7.53TiB         11.43 
POOLS:
    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS 
    rbd-es                         1      1.38TiB      7.08       18.1TiB      362911 
    .rgw.root                      2      1.14KiB         0       18.1TiB           4 
    default.rgw.control            3           0B         0       18.1TiB           8 
    default.rgw.meta               4      46.9KiB         0        104GiB         157 
    default.rgw.log                5           0B         0       18.1TiB         345 
    default.rgw.buckets.index      6           0B         0        104GiB        2012 
    default.rgw.buckets.data       7      1.01TiB      5.30       18.1TiB     2090721 
    default.rgw.buckets.non-ec     8           0B         0       18.1TiB           0 
    rbd-k8s                        9       137MiB         0       18.1TiB          67 
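
The client mapping and the objects backing the image can also be inspected from the Ceph side; a sketch:

# the Kubernetes node should show up as a watcher on the image
rbd status rbd-k8s/cephimage2
# list a few of the RADOS objects created for the image
rados -p rbd-k8s ls | head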