k8s practice 17: integrating Kubernetes with NFS storage for dynamic, on-demand PV provisioning and PVC binding

1.
Thoughts before starting
Earlier posts tested deploying and configuring PVs and PVCs so that pod application data is stored in a PVC and decoupled from the pod.
Those steps were entirely manual: every PV and every PVC was created by hand. With only a few pods in a cluster, that is workable.
But if a cluster runs 1000+ pods, each needing a PVC for its data, creating PVs and PVCs one by one is an unthinkable workload.
Ideally, the user defines a PVC at pod-creation time and the cluster creates a matching PV on demand, i.e. dynamic PV/PVC provisioning and binding.
Kubernetes supports exactly this through integration with storage backends.
That is the goal of this test.

2.
Test environment

For this experiment, storage is a simple NFS deployment.

3.
NFS deployment


See the earlier article:
Decoupling pod application data storage with PV && PVC
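
For reference, a minimal sketch of the NFS server side used in this post (assuming the server 192.168.32.130 exports /mnt/k8s, as configured later; package and service names here are for CentOS/RHEL):

# create and export the directory that will back the dynamic PVs
mkdir -p /mnt/k8s
echo "/mnt/k8s *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server
exportfs -ar
# verify the export is visible
showmount -e localhost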

4.
storage classes

Official documentation:
https://kubernetes.io/docs/concepts/storage/storage-classes/
Kubernetes uses storage classes to integrate with storage backends and provision PVs for PVCs dynamically.
Many storage types have built-in provisioners, e.g. CephFS and GlusterFS; see the official documentation for the full list.
NFS has no built-in provisioner, so an external plugin is required.
External plugin reference:
https://github.com/kubernetes-incubator/external-storage
NFS plugin configuration docs:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
nfs-client-provisioner is a simple external provisioner for Kubernetes backed by NFS. It does not provide NFS itself; an existing NFS server must supply the storage.
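
The five manifests used below come from the nfs-client/deploy directory of that repository. A sketch of fetching them (assuming the repository layout at the time of writing):

# clone the external-storage repo and enter the nfs-client deploy directory
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/nfs-client/deploy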

5.
NFS storage configuration files

[root@k8s-master1 nfs]# ls
class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml

5.1
class.yaml

[root@k8s-master1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

This creates a StorageClass:
kind: StorageClass

The new StorageClass is named managed-nfs-storage:
name: managed-nfs-storage
 
provisioner literally means "supplier"; in practice it names the provisioner program backing this StorageClass (my understanding). The value must match the PROVISIONER_NAME environment variable in deployment.yaml:
provisioner: fuseim.pri/ifs                                         

[root@k8s-master1 nfs]# kubectl apply -f class.yaml
storageclass.storage.k8s.io "managed-nfs-storage" created
[root@k8s-master1 nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   7s
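
Optionally, a StorageClass can be marked as the cluster default, so that PVCs omitting a class use it automatically. A sketch using the standard default-class annotation (not done in this test; the describe output later shows IsDefaultClass: No):

kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'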

5.2
deployment.yaml

[root@k8s-master1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
[root@k8s-master1 nfs]#

Create a ServiceAccount (sa) named nfs-client-provisioner:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

The container name and the image used:

containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest

The mount path inside the pod:

          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes

Environment variables read by the pod; change these to your local NFS server address and export path:

          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes

The NFS server address and path for the volume; change these to the same local values:

      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes

The modified deployment.yaml, with only the NFS address and directory changed. (Note: this manifest uses apiVersion: extensions/v1beta1, which matches the cluster version used here; on Kubernetes 1.16+ Deployments live under apps/v1 and also require a spec.selector.)

[root@k8s-master1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.32.130
            - name: NFS_PATH
              value: /mnt/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.32.130
            path: /mnt/k8s
[root@k8s-master1 nfs]# kubectl apply -f deployment.yaml
serviceaccount "nfs-client-provisioner" created
deployment.extensions "nfs-client-provisioner" created
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl get pod
NAME                                      READY     STATUS    RESTARTS   AGE
nfs-client-provisioner-65bf6bd464-qdzcj   1/1       Running   0          1m
[root@k8s-master1 nfs]# kubectl describe pod nfs-client-provisioner-65bf6bd464-qdzcj
Name:               nfs-client-provisioner-65bf6bd464-qdzcj
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master3/192.168.32.130
Start Time:         Wed, 24 Jul 2019 14:44:11 +0800
Labels:             app=nfs-client-provisioner
                    pod-template-hash=65bf6bd464
Annotations:        <none>
Status:             Running
IP:                 172.30.35.3
Controlled By:      ReplicaSet/nfs-client-provisioner-65bf6bd464
Containers:
  nfs-client-provisioner:
    Container ID:   docker://67329cd9ca608223cda961a1bfe11524f2586e8e1ccba45ad57b292b1508b575
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 24 Jul 2019 14:45:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.32.130
      NFS_PATH:          /mnt/k8s
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-4n4jn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.32.130
    Path:      /mnt/k8s
    ReadOnly:  false
  nfs-client-provisioner-token-4n4jn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-4n4jn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                  Message
  ----    ------     ----  ----                  -------
  Normal  Scheduled  2m    default-scheduler     Successfully assigned default/nfs-client-provisioner-65bf6bd464-qdzcj to k8s-master3
  Normal  Pulling    2m    kubelet, k8s-master3  pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Pulled     54s   kubelet, k8s-master3  Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Created    54s   kubelet, k8s-master3  Created container
  Normal  Started    54s   kubelet, k8s-master3  Started container
[root@k8s-master1 nfs]#
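
One prerequisite worth calling out: any node that may run a pod mounting an NFS volume (including this provisioner pod) needs NFS client utilities installed, otherwise the mount fails at pod startup. A sketch, with package names depending on the distribution:

# CentOS/RHEL
yum install -y nfs-utils
# Debian/Ubuntu
apt-get install -y nfs-common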

5.3
rbac.yaml
rbac.yaml grants permissions to the ServiceAccount nfs-client-provisioner.
The ServiceAccount itself was already created during the deployment step, so re-applying it below reports "unchanged".

[root@k8s-master1 nfs]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl apply -f rbac.yaml
serviceaccount "nfs-client-provisioner" unchanged
clusterrole.rbac.authorization.k8s.io "nfs-client-provisioner-runner" created
clusterrolebinding.rbac.authorization.k8s.io "run-nfs-client-provisioner" created
role.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
rolebinding.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
[root@k8s-master1 nfs]#

Check the results:

[root@k8s-master1 nfs]# kubectl get clusterrole |grep nfs
nfs-client-provisioner-runner                                          2m
[root@k8s-master1 nfs]# kubectl get role |grep nfs
leader-locking-nfs-client-provisioner   2m
[root@k8s-master1 nfs]# kubectl get rolebinding |grep nfs
leader-locking-nfs-client-provisioner   2m
[root@k8s-master1 nfs]# kubectl get clusterrolebinding |grep nfs
run-nfs-client-provisioner                             2m
[root@k8s-master1 nfs]#

6.
Testing

Test with the official test-claim.yaml:

[root@k8s-master1 nfs]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
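
The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a class; on newer clusters the same claim would normally use the storageClassName field instead. A sketch of the equivalent spec:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi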

The PV and PVC state before applying test-claim.yaml:

[root@k8s-master1 nfs]# kubectl get pv
No resources found.
[root@k8s-master1 nfs]# kubectl get pvc
No resources found.
[root@k8s-master1 nfs]#

Apply the file:

[root@k8s-master1 nfs]# kubectl apply -f test-claim.yaml
persistentvolumeclaim "test-claim" created

The PV and PVC state after applying:

[root@k8s-master1 nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS          REASON    AGE
pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound     default/test-claim   managed-nfs-storage             6s
[root@k8s-master1 nfs]# kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound     pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   8s
[root@k8s-master1 nfs]#

Success. With the NFS storage class in place, a user only has to request a PVC; the system automatically creates a PV and binds it.
Check the storage directory on the NFS server; the provisioner creates one subdirectory per volume, named ${namespace}-${pvcName}-${pvName}:

[root@k8s-master3 k8s]# pwd
/mnt/k8s
[root@k8s-master3 k8s]# ls
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master3 k8s]#

Check the mounted directory inside the provisioner pod:

[root@k8s-master1 nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master1 nfs]#

7.
Test with the official test-pod.yaml

[root@k8s-master1 nfs]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl apply -f test-pod.yaml
pod "test-pod" created
[root@k8s-master1 nfs]# kubectl get pod
NAME                                      READY     STATUS      RESTARTS   AGE
test-pod                                  0/1       Completed   0          1m

After the pod runs, it creates the file SUCCESS under /mnt, the directory where the PVC is mounted in the pod.
The SUCCESS file created by test-pod is then visible in the NFS server directory:

[root@k8s-master3 default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# pwd
/mnt/k8s/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master3 default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# ls
SUCCESS

Checked from inside the nfs-client-provisioner pod:

[root@k8s-master1 nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
SUCCESS

8.
A question after testing

Deleting a pod leaves the data stored in the PVC intact, but deleting the PVC removes both the backing directory and its data.
To guard against user mistakes, can a copy be kept as a backup?
Yes, it can.

The original class.yaml:

[root@k8s-master1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

The modified class.yaml, with archiveOnDelete switched to "true":

[root@k8s-master1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"

archiveOnDelete: "false"   
這個參數可以設置爲false和true.
archiveOnDelete字面意思爲刪除時是否存檔,false表示不存檔,即刪除數據,true表示存檔,即重命名路徑.
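
Note that the parameters of an existing StorageClass are immutable, so kubectl apply on the edited file would be rejected; the class has to be deleted and recreated. A sketch:

# after editing class.yaml to archiveOnDelete: "true"
kubectl delete -f class.yaml
kubectl apply -f class.yaml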

Modify and test:

[root@k8s-master1 nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   1m
[root@k8s-master1 nfs]# kubectl describe storageclass
Name:                  managed-nfs-storage
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage","namespace":""},"parameters":{"archiveOnDelete":"true"},"provisioner":"fuseim.pri/ifs"}
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Delete the pod and the PVC:

[root@k8s-master1 nfs]# kubectl get pod
NAME       READY     STATUS      RESTARTS   AGE
test-pod   0/1       Completed   0          6s
[root@k8s-master1 nfs]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS          REASON    AGE
persistentvolume/pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound     default/test-claim   managed-nfs-storage             17s

NAME                               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound     pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   17s
[root@k8s-master1 nfs]# kubectl delete -f test-pod.yaml
pod "test-pod" deleted
[root@k8s-master1 nfs]# kubectl delete -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted
[root@k8s-master1 nfs]# kubectl get pv,pvc
No resources found.
[root@k8s-master1 nfs]#

Check the NFS server storage path: the directory was automatically archived (renamed with the archived- prefix):

[root@k8s-master3 archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# pwd
/mnt/k8s/archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89
[root@k8s-master3 archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# ls
SUCCESS

Remember to set archiveOnDelete: "true" if you want this protection.

9.
Summary

With the NFS storage class deployed, users can request PVCs by themselves; there is no longer any need to hand-create a PV for every PVC.
One inconvenience remains, though: could the PVC be requested automatically when the pod is created, instead of having to create the PVC first and then mount it into the pod?
That is exactly what volumeClaimTemplates in a StatefulSet provides; a preview sketch follows.
It will be tested in the next post.
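
As a preview, a minimal sketch of a StatefulSet with volumeClaimTemplates (the names and size here are illustrative, not from this test):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Mi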
