Dynamically Provisioning Kubernetes Storage Backends with NFS


This article shows how to use nfs-client-provisioner to turn an NFS server into a persistent-storage backend for Kubernetes, with PVs provisioned dynamically.

Installing NFS

I set up a single NFS server here, which is straightforward. The other nodes only need nfs-utils.

yum -y install nfs-utils rpcbind
find /etc/ -name '*rpcbind.socket*'
vim /etc/systemd/system/sockets.target.wants/rpcbind.socket   # path taken from the find result above
# Make sure the file looks like the following
[Unit]
Description=RPCbind Server Activation Socket
[Socket]
ListenStream=/var/run/rpcbind.sock
# RPC netconfig can't handle ipv6/ipv4 dual sockets
BindIPv6Only=ipv6-only
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
#ListenStream=[::]:111
#ListenDatagram=[::]:111
[Install]
WantedBy=sockets.target

Enable the services to start on boot:

systemctl enable rpcbind.service && systemctl start rpcbind.service
systemctl enable nfs.service && systemctl start nfs.service

Configure the shared directory:

# Create the shared directory (any path you like)
mkdir -p /usr/share/k8s
# Set permissions as needed (use 777 rather than 666: directories need the execute bit to be traversed)
chmod -R 777 /usr/share/k8s
# Edit the export settings
vi /etc/exports
/usr/share/k8s *(insecure,rw,no_root_squash)
systemctl restart nfs

Test that the NFS service works:
Pick another host for the test; it needs nfs-utils as well. If it is not installed yet, run:

# Install nfs-utils for testing
yum -y install nfs-utils rpcbind
# List the shares exported by the NFS host
showmount -e 192.168.161.180
Export list for 192.168.161.180:
/usr/share/k8s *

# Try mounting: mount -t nfs <server>:<shared dir> <local mount point>
mount -t nfs 192.168.161.180:/usr/share/k8s /usr/share/k8s

# Check whether the mount succeeded
df -Th
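Beyond checking df, a quick write-through test confirms the share is actually writable, not just mounted. A minimal sketch: set MNT to your client-side mount point (it falls back to a throwaway temp directory here so the snippet is safe to dry-run on any machine):

```shell
# Write a file through the mount point and read it back.
MNT=${MNT:-$(mktemp -d)}            # on a real client: MNT=/usr/share/k8s
echo "nfs write test" > "$MNT/nfs-test.txt"
cat "$MNT/nfs-test.txt"
```

On the server side the same file should appear under /usr/share/k8s. If the write fails with "Permission denied", recheck the export options and the directory permissions set earlier.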

Using NFS as a storage backend in k8s

1. Configure RBAC:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

2. StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: zy-test                # must match the value of the PROVISIONER_NAME env var in the nfs-client-provisioner Deployment
reclaimPolicy: Retain               # reclaim policy Retain (released manually)

3. PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: stateful-nfs            # must match the name of the StorageClass above
  accessModes:
    - ReadWriteMany                         # access mode RWX
  resources:
    requests:
      storage: 100Mi
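Once this PVC is bound, any pod can consume it like an ordinary volume. A minimal sketch of a consumer (the pod name busybox-pvc-test and the busybox image are illustrative, not part of the original setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc        # the PVC defined above
```

After the pod starts, hello.txt should show up in the PVC's backing directory on the NFS server.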

4. The nfs-client-provisioner Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1                              # run a single replica
  strategy:
    type: Recreate                         # kill the old pod before starting a new one
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner            # the ServiceAccount created in the RBAC yaml above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner     # image to use
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes          # mount path inside the container
          env:
            - name: PROVISIONER_NAME                 # name of this provisioner, referenced by the StorageClass
              value: zy-test
            - name: NFS_SERVER                       # IP address of the NFS server
              value: 192.168.161.180
            - name: NFS_PATH                         # exported directory on the NFS server
              value: /usr/share/k8s
      volumes:                                       # the NFS server and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.161.180
            path: /usr/share/k8s

Check whether a PV has been created and the PVC is bound (the provisioner pod must be Running, of course). If so, it works:

[root@master1 nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e   100Mi      RWX            Delete           Bound    default/test-pvc   stateful-nfs            26m
[root@master1 nfs]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e   100Mi      RWX            stateful-nfs   61m
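On the NFS server, nfs-client-provisioner backs each dynamically provisioned PV with a subdirectory of the export, named ${namespace}-${pvcName}-${pvName}. A quick sketch of where the volume above lives, using the names from the kubectl output:

```shell
# Compose the backing directory path for the bound PVC shown above.
NFS_ROOT=/usr/share/k8s
NAMESPACE=default
PVC_NAME=test-pvc
PV_NAME=pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e
echo "${NFS_ROOT}/${NAMESPACE}-${PVC_NAME}-${PV_NAME}"
# prints: /usr/share/k8s/default-test-pvc-pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e
```

When the PVC is deleted, this provisioner renames the directory with an archived- prefix rather than deleting the data outright.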

