Why won't a Kubernetes PV bind to its PVC?
I deployed a stateful application with a StatefulSet, and the pods stayed stuck in Pending. Before digging in, a quick primer on what a StatefulSet is: in Kubernetes, a Deployment is generally used to manage stateless applications, while a StatefulSet manages stateful ones such as Redis, MySQL, and ZooKeeper — distributed applications whose startup and shutdown must follow a strict order.
1. StatefulSet
headless Service: has no clusterIP; it is a stable resource identifier used to generate resolvable DNS records for each pod
StatefulSet: manages the pod resources
volumeClaimTemplates: provides a dedicated storage claim for each pod
2. Deploying the StatefulSet
Use NFS as the network storage backend. The steps:
Set up NFS
Configure the shared directories
Create the PVs
Write the YAML manifests
Set up NFS
```shell
yum install nfs-utils -y
mkdir -p /usr/local/k8s/redis/pv{7..12}   # create the export directories

cat /etc/exports
/usr/local/k8s/redis/pv7  172.16.0.0/16(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv8  172.16.0.0/16(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv9  172.16.0.0/16(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv10 172.16.0.0/16(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv11 172.16.0.0/16(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv12 172.16.0.0/16(rw,sync,no_root_squash)

exportfs -avr
```
Create the PVs
cat nfs_pv2.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv7"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv8"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv9
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv9"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv10
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv10"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv11
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv11"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv12
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv12"
```
kubectl apply -f nfs_pv2.yaml
Verify with `kubectl get pv` — the PVs were created successfully.
Write the YAML to orchestrate the application
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "slow"
      resources:
        requests:
          storage: 400Mi
```
kubectl create -f new-stateful.yaml
Check that the headless Service was created successfully.
Check whether the pods were created successfully.
Check whether the PVCs were created successfully.
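Each replica gets its own PVC: StatefulSet pods are named `<statefulset>-<ordinal>`, and each pod's claim is named `<volumeClaimTemplate>-<pod>`, which is why a claim like `myappdata-myapp-0` shows up. A minimal Python sketch of that naming scheme (an illustrative model, not Kubernetes code):

```python
def expected_pvc_names(sts_name: str, template_name: str, replicas: int) -> list:
    """Predict the PVC names a StatefulSet's volumeClaimTemplates produce."""
    pods = ["{}-{}".format(sts_name, i) for i in range(replicas)]
    return ["{}-{}".format(template_name, pod) for pod in pods]

print(expected_pvc_names("myapp", "myappdata", 3))
# ['myappdata-myapp-0', 'myappdata-myapp-1', 'myappdata-myapp-2']
```

These are the names you would look for in `kubectl get pvc`.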
The pods failed to start because they depend on the PVCs. Looking at the PVC events, no matching PersistentVolume was found — even though the PVs had clearly been written.
Checking the related fields, both sides carry this attribute:
storageClassName: "slow"
3. Troubleshooting the StatefulSet
The PVCs could not be bound, so the pods could not start. I re-checked the YAML file several times.
Line of thought: a PVC binds to a PV through storageClassName. The PVs had been created successfully and carried the attribute storageClassName: slow, yet the PVC still could not find a match.
...
Later I checked whether the access modes of the PV and the PVC were consistent.
The PVs were created with accessModes: [ReadWriteMany],
while volumeClaimTemplates declared accessModes: [ReadWriteOnce] for the PVCs.
The two sides did not match — a PVC can only bind to a PV whose access modes include every mode the PVC requests.
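The binding rule that caused this can be sketched in a few lines of Python — a simplified model of the matching logic, not the real controller code: a PV is a candidate only if its storageClassName matches, its capacity is sufficient, and its accessModes offer every mode the PVC requests.

```python
def pv_matches(pv: dict, pvc: dict) -> bool:
    """Simplified PV/PVC matching: storage class, capacity, access modes."""
    if pv["storageClassName"] != pvc["storageClassName"]:
        return False
    if pv["capacity_bytes"] < pvc["request_bytes"]:
        return False
    # Every mode the PVC asks for must be offered by the PV.
    return set(pvc["accessModes"]) <= set(pv["accessModes"])

pv = {"storageClassName": "slow", "capacity_bytes": 500_000_000,  # 500M
      "accessModes": ["ReadWriteMany"]}

# The original claim template asked for ReadWriteOnce -> no match:
pvc_rwo = {"storageClassName": "slow", "request_bytes": 400 * 2**20,  # 400Mi
           "accessModes": ["ReadWriteOnce"]}
print(pv_matches(pv, pvc_rwo))  # False

# After changing the template to ReadWriteMany -> match:
pvc_rwx = dict(pvc_rwo, accessModes=["ReadWriteMany"])
print(pv_matches(pv, pvc_rwx))  # True
```

This is exactly the failure here: class and capacity were fine, but ReadWriteOnce was not among the PV's offered modes.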
The fix:
Delete the PVC: kubectl delete pvc myappdata-myapp-0 -n daemon
Delete the resources defined in the YAML file: kubectl delete -f new-stateful.yaml -n daemon
Change the claim template to accessModes: ["ReadWriteMany"] and re-apply.
Check again — this time the PVCs bind and the pods start.
Tip: make sure the access modes declared on the PV and the PVC are consistent.
4. Testing the StatefulSet: DNS resolution
kubectl exec -it myapp-0 sh -n daemon
nslookup myapp-0.myapp.daemon.svc.cluster.local
The resolution rule is as follows:
myapp-0 is the pod name, myapp the headless Service, daemon the namespace.
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
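The naming rule above is simple enough to express as a one-line formatter (an illustrative sketch; `cluster.local` is the default cluster domain and may differ in your cluster):

```python
def pod_fqdn(pod: str, service: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    # $(podname).$(headless service name).$(namespace).svc.$(cluster domain)
    return "{}.{}.{}.svc.{}".format(pod, service, namespace, cluster_domain)

print(pod_fqdn("myapp-0", "myapp", "daemon"))
# myapp-0.myapp.daemon.svc.cluster.local
```

This is the name passed to nslookup in the test above.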
If the container image does not include nslookup, install the corresponding package; a busybox image provides the same functionality.
The YAML file:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: daemon
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
    - sleep
    - "7600"
    resources:
      requests:
        memory: "200Mi"
        cpu: "250m"
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
```
5. Scaling the StatefulSet
Scaling up:
Scaling a StatefulSet works much like scaling a Deployment: change the replica count. Expansion follows the same ordered process as the initial creation, with the ordinal index of each new pod increasing one at a time.
Either kubectl scale
or kubectl patch can be used.
In practice: kubectl scale statefulset myapp --replicas=4
Scaling down:
Just lower the pod replica count:
kubectl patch statefulset myapp -p '{"spec":{"replicas":3}}' -n daemon
Tip: scaling up requires new PVC-to-PV bindings. Since NFS is used for persistence here and the PVs are created in advance, make sure enough PVs exist before scaling up.
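The ordinal behavior during scaling can be sketched as follows (an illustrative model, not the controller's actual code): a StatefulSet at N replicas keeps ordinals 0..N-1, so scaling from 3 to 4 creates myapp-3, and scaling back down removes the highest ordinal first.

```python
def pods_after_scale(sts_name: str, replicas: int) -> list:
    """Pods a StatefulSet keeps at a given replica count: ordinals 0..N-1."""
    return ["{}-{}".format(sts_name, i) for i in range(replicas)]

before = pods_after_scale("myapp", 3)
after_up = pods_after_scale("myapp", 4)
print(sorted(set(after_up) - set(before)))   # ['myapp-3'] is created on scale-up

after_down = pods_after_scale("myapp", 3)
print(sorted(set(after_up) - set(after_down)))  # ['myapp-3'] is removed on scale-down
```

Note that the PVC for a removed pod is kept, so scaling back up reattaches the same storage.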
6. StatefulSet rolling updates
Rolling update
Canary release
Rolling update
A rolling update starts from the pod with the highest ordinal: each pod is fully terminated and recreated before the next one begins. RollingUpdate is the default update strategy for StatefulSets.
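The update order (and the partition field that enables the canary release mentioned above) can be sketched as follows — an illustrative model, where `partition` stands for the StatefulSet's `spec.updateStrategy.rollingUpdate.partition` field; ordinals below it stay on the old revision:

```python
def rolling_update_order(replicas: int, partition: int = 0) -> list:
    """Ordinals in the order a RollingUpdate touches them: highest first.
    Ordinals below `partition` are left on the old revision (canary knob)."""
    return [i for i in range(replicas - 1, -1, -1) if i >= partition]

print(rolling_update_order(3))               # [2, 1, 0]
print(rolling_update_order(3, partition=2))  # [2] -> canary only the top pod
```

Setting the partition back to 0 rolls the remaining pods forward.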
kubectl set image statefulset/myapp myapp=ikubernetes/myapp:v2 -n daemon
During the upgrade:
Watch the pod status:
kubectl get pods -n daemon
Check whether the image was updated after the upgrade:
kubectl describe pod myapp-0 -n daemon