Deploying Applications on Kubernetes with Ceph Static Volumes
1. Storage in Kubernetes
For stateful services, storage is a crucial concern. Kubernetes provides a rich set of components to support storage; here is a rough overview:
- volume: the component that is mounted directly into a pod; every other storage component in Kubernetes reaches a pod through a volume. A volume has a type attribute that determines what kind of storage is mounted, e.g. emptyDir, hostPath, nfs, rbd, and the persistentVolumeClaim discussed below. Unlike in Docker, where a volume's lifecycle is tightly bound to the container, here the lifecycle depends on the type: an emptyDir volume behaves like Docker's and disappears when its pod dies, while the other types persist independently. See Volumes [1] for details.
- Persistent Volumes: as the name suggests, this component supports persistent storage. It abstracts both the backend storage provider (the volume type above) and the consumer (the pod that uses it) through two concepts: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage offered by the backend; for Ceph RBD, concretely, it is an image. A PersistentVolumeClaim (PVC) can be seen as a user's request for a PV: the PVC binds to some PV, and a pod then mounts the PVC in its volumes section, thereby mounting the bound PV. For more detail, such as the PV and PVC lifecycle, see Persistent Volumes [2].
- Dynamic Volume Provisioning: with Persistent Volumes as described above, a storage block (e.g. a Ceph image) must be created first and bound to a PV before it can be used. This static binding is rigid: every storage request means asking the storage provider for another storage block by hand. Dynamic Volume Provisioning solves this by introducing the StorageClass, which abstracts the storage provider: a PVC only names a StorageClass and a size, and the provider creates the required storage block on demand. One can even mark a StorageClass as the default, so that creating a PVC alone is enough. A sketch follows below.
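For comparison with the static approach used in the rest of this post, a dynamic-provisioning sketch using the in-tree kubernetes.io/rbd provisioner might look like the following. The names (ceph-rbd, rbd-dynamic-claim) are hypothetical, the pool and secrets must match your cluster, and note that this provisioner expects the user secret to be of type kubernetes.io/rbd rather than a generic one:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd                   # hypothetical name
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.105.92:6789,192.168.105.93:6789,192.168.105.94:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: default
  pool: kube
  userId: admin
  userSecretName: ceph-admin-secret  # must be of type kubernetes.io/rbd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-dynamic-claim          # hypothetical name
spec:
  storageClassName: ceph-rbd       # the provisioner creates the image on demand
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi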
2. Environment Preparation
- A working Kubernetes cluster
- A working Ceph cluster
- Ceph monitor nodes: lab1, lab2, lab3
# k8s
192.168.105.92 lab1 # master1
192.168.105.93 lab2 # master2
192.168.105.94 lab3 # master3
192.168.105.95 lab4 # node4
192.168.105.96 lab5 # node5
192.168.105.97 lab6 # node6
192.168.105.98 lab7 # node7
On every Kubernetes node, install the Ceph client packages: yum install -y ceph-common
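ceph-common ships the binaries the kubelet shells out to when attaching Ceph volumes; a quick sanity check on a node might look like this:
ceph --version
rbd --version                      # CLI used by the in-tree rbd plugin
modprobe rbd && lsmod | grep rbd   # kernel module needed to map rbd images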
3. Deploying a Container on CephFS
3.1 Create the Ceph admin secret
ceph auth get-key client.admin > /tmp/secret
kubectl create namespace cephfs
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret
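As an alternative to the two-step flow above, the key can be piped straight into the secret and the result inspected (this stores the value under the key name key instead of secret; the in-tree cephfs/rbd plugins read the secret value regardless of the key name):
kubectl create secret generic ceph-admin-secret \
  --from-literal=key="$(ceph auth get-key client.admin)"
kubectl get secret ceph-admin-secret -o jsonpath='{.data}'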
3.2 Create the PV
vim cephfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 192.168.105.92:6789
      - 192.168.105.93:6789
      - 192.168.105.94:6789
    user: admin
    secretRef:
      name: ceph-admin-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
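As the df output in 3.4 will show, this PV mounts the root of the CephFS filesystem, so every PV defined this way sees the same data. To give each PV its own subdirectory, the cephfs volume source also takes a path field; a variation (the directory /kube/pv1 is hypothetical and must already exist in CephFS):
  cephfs:
    monitors:
      - 192.168.105.92:6789
      - 192.168.105.93:6789
      - 192.168.105.94:6789
    path: /kube/pv1          # mount this subdirectory instead of the filesystem root
    user: admin
    secretRef:
      name: ceph-admin-secret
    readOnly: false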
3.3 Create the PVC
vim cephfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pv-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
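No storageClassName is set here, so the claim binds to any available PV whose capacity and access modes fit; that matching is also how the unrelated test-pvc/test-pv pair in the output below got bound. To pin this claim to the PV from 3.2, spec.volumeName can be set explicitly:
spec:
  volumeName: cephfs-pv1     # bind to exactly this PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi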
3.4 Deploy and verify
vim cephfs-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-cephfs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-cephfs-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-cephfs-volume
          persistentVolumeClaim:
            claimName: cephfs-pv-claim1
kubectl create -f cephfs-pv.yaml
kubectl create -f cephfs-pvc.yaml
kubectl create -f cephfs-nginx.yaml
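It can help to wait for the rollout to finish before checking the objects:
kubectl rollout status deployment/nginx-cephfs
kubectl get pod -l name=nginx -o wide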
Verification:
[root@lab1 cephfs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
cephfs-pv1 1Gi RWX Recycle Bound default/cephfs-pv-claim1 1h
[root@lab1 cephfs]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-pv-claim1 Bound cephfs-pv1 1Gi RWX 1h
test-pvc Bound test-pv 1Gi RWO 32m
[root@lab1 cephfs]# kubectl get pod |grep nginx-cephfs
nginx-cephfs-7777495b9b-29vtw 1/1 Running 0 13m
[root@lab1 cephfs]# kubectl exec -it nginx-cephfs-7777495b9b-29vtw -- df -h|grep nginx
192.168.105.92:6789:/ 1.6T 4.1G 1.6T 1% /usr/share/nginx/html
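To confirm that writes really land in CephFS, push a file through the mount and read it back (the pod name comes from the output above and will differ in your cluster; with the RWX access mode, a second pod mounting the same claim would see the same file):
kubectl exec nginx-cephfs-7777495b9b-29vtw -- \
  sh -c 'echo hello-cephfs > /usr/share/nginx/html/index.html'
kubectl exec nginx-cephfs-7777495b9b-29vtw -- cat /usr/share/nginx/html/index.html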
4. Deploying a Container on RBD
4.1 Create the Ceph admin secret (same as 3.1; skip if the secret already exists)
ceph auth get-key client.admin > /tmp/secret
kubectl create namespace cephfs
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret
4.2 Create the Ceph pool and image
ceph osd pool create kube 128 128
rbd create kube/foo -s 10G --image-feature layering
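Restricting the image to the layering feature matters: older kernels (e.g. the stock CentOS 7 kernel) cannot map images with newer features such as exclusive-lock or object-map enabled. The result can be checked with:
ceph osd pool ls detail | grep kube
rbd info kube/foo            # features should list layering only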
4.3 Create the PV
vim rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.105.92:6789
      - 192.168.105.93:6789
      - 192.168.105.94:6789
    pool: kube
    image: foo
    user: admin
    secretRef:
      name: ceph-admin-secret
  persistentVolumeReclaimPolicy: Recycle
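When a pod first consumes this PV, the kubelet formats the unformatted image, by default with ext4 (which is roughly why df later reports ~9.8G usable out of the 10G image). The rbd volume source accepts an optional fsType to choose a different filesystem, e.g.:
  rbd:
    pool: kube
    image: foo
    fsType: xfs              # optional; ext4 is used when unset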
4.4 Create the PVC
vim rbd-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pv-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
4.5 Deploy and verify
vim rbd-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-rbd-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-rbd-volume
          persistentVolumeClaim:
            claimName: rbd-pv-claim1
kubectl create -f rbd-pv.yaml
kubectl create -f rbd-pvc.yaml
kubectl create -f rbd-nginx.yaml
Verification:
[root@lab1 rbd]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
cephfs-pv1 1Gi RWX Recycle Bound default/cephfs-pv-claim1 2h
rbd-pv1 5Gi RWO Recycle Bound default/rbd-pv-claim1 8m
[root@lab1 rbd]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-pv-claim1 Bound cephfs-pv1 1Gi RWX 2h
claim2 Pending rbd 2h
claim3 Pending rbd 2h
rbd-pv-claim1 Bound rbd-pv1 5Gi RWO 8m
[root@lab1 rbd]# kubectl exec -it nginx-rbd-6b555f58c9-7k2k9 -- df -h|grep nginx
/dev/rbd0 9.8G 37M 9.7G 1% /usr/share/nginx/html
Running a dd write test inside the container shows that the container crashes easily once the filesystem fills up. It also confirms that the size of the mounted directory is determined by the size of the rbd image (the 10G image appears as a ~9.8G filesystem), not by the capacity declared on the PV or PVC:
dd if=/dev/zero of=/usr/share/nginx/html/test.data bs=1G count=8 &
root@nginx-rbd-6b555f58c9-7k2k9:/usr/share/nginx/html# error: Internal error occurred: error executing command in container: Error response from daemon: Container 12f9c29c03082d27c7ed4327536626189d02be451029f7385765d3c2e1451062 is not running: Exited (0) Less than a second ago
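After a test like this, the image usage and pod state can be inspected from outside the container (the pod name is the one from above):
rbd du kube/foo                                 # provisioned vs. used space in the image
kubectl get pod nginx-rbd-6b555f58c9-7k2k9      # check whether the pod was restarted
kubectl describe pod nginx-rbd-6b555f58c9-7k2k9 | tail -n 20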
References:
[1] https://kubernetes.io/docs/concepts/storage/volumes/
[2] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[3] https://zhangchenchen.github.io/2017/11/17/kubernetes-integrate-with-ceph/