Kubernetes Storage -- Volume Management (3): StatefulSet Controller; Deploying a MySQL Master-Slave Cluster with StatefulSet

The StatefulSet Controller

StatefulSet can maintain the topology state of Pods through a Headless Service.

  • StatefulSet abstracts application state into two cases:
    • Topology state: application instances must start in a certain order, and a newly created Pod must have the same network identity as the Pod it replaces.
    • Storage state: the multiple instances of an application are each bound to their own storage data.
  • StatefulSet numbers every Pod; the naming rule is $(statefulset name)-$(ordinal), starting from 0.
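The numbering rule above can be sketched as a tiny shell helper (the name web and the replica count are illustrative values matching the example that follows, not output from a live cluster):

```shell
# Sketch of StatefulSet pod naming: $(statefulset name)-$(ordinal), from 0.
sts_pod_names() {
  local name=$1 replicas=$2 i
  for i in $(seq 0 $((replicas - 1))); do
    echo "${name}-${i}"
  done
}

sts_pod_names web 3    # web-0, web-1, web-2
```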
[root@server2 vol]# mkdir nginx		/create a directory
[root@server2 vol]# cd nginx/

Create a headless service:

[root@server2 nginx]# vim service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None		/no cluster IP
  selector:
    app: nginx			/label selector

Create the StatefulSet controller:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web		/note the name here is web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
[root@server2 nginx]# kubectl apply -f  statefulset.yml 
statefulset.apps/web created

[root@server2 nginx]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-576d464467-9qs9h   1/1     Running   0          132m
web-0                                     1/1     Running   0          56s
web-1                                     1/1     Running   0          52s

The Pods it creates take the name web plus an ordinal, so they are clearly created in order.
We can move the NFS dynamic provisioner entirely into another namespace to keep this experiment clean:

[root@server2 nginx]# kubectl create namespace nfs-client-provisioner
namespace/nfs-client-provisioner created
[root@server2 nfs-client]# vim rbac.yml 
[root@server2 nfs-client]# vim deployment.yml
:%s/default/nfs-client-provisioner/g 			/replace the namespace throughout both files
[root@server2 nfs-client]# kubectl apply -f . -n nfs-client-provisioner 

[root@server2 nfs-client]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          8m48s
web-1   1/1     Running   0          8m44s		/only the StatefulSet Pods remain
[root@server2 nginx]# kubectl describe svc nginx 
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       Selector:  app=nginx
Type:              ClusterIP
IP:                None
Port:              web  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.141.234:80,10.244.22.31:80
Session Affinity:  None
Events:            <none>

We can use dig to resolve directly to the IPs of our two Pods:

[root@server2 nginx]# dig nginx.default.svc.cluster.local @10.244.179.75		/the address after @ is the cluster DNS

;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN	A	10.244.22.31
nginx.default.svc.cluster.local. 30 IN	A	10.244.141.234

[root@server2 nginx]# dig web-0.nginx.default.svc.cluster.local @10.244.179.75	/a single Pod can also be resolved directly

;; ANSWER SECTION:
web-0.nginx.default.svc.cluster.local. 30 IN A	10.244.141.234
//When accessing, we can simply use these domain names; that is what a headless service provides.
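The names used in the dig queries follow a fixed pattern; here is a minimal sketch of how a Pod's FQDN is composed (service nginx and namespace default, as in this walkthrough):

```shell
# Headless-service DNS record: $(pod).$(service).$(namespace).svc.cluster.local
pod_fqdn() {
  local pod=$1 svc=$2 ns=$3
  echo "${pod}.${svc}.${ns}.svc.cluster.local"
}

pod_fqdn web-0 nginx default    # web-0.nginx.default.svc.cluster.local
```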

Scaling the Pods:

Scale up to 5:
[root@server2 nginx]# vim statefulset.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 5
[root@server2 nginx]# kubectl apply -f statefulset.yml
[root@server2 nginx]# kubectl  get pod
NAME    READY   STATUS              RESTARTS   AGE
web-0   1/1     Running             0          9s
web-1   1/1     Running             0          5s
web-2   0/1     ContainerCreating   0          2s
[root@server2 nginx]# kubectl  get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          37s
web-1   1/1     Running   0          33s
web-2   1/1     Running   0          30s
web-3   1/1     Running   0          27s
web-4   1/1     Running   0          25s
As you can see, a Pod is created only after the previous one is up.

Scale down:
[root@server2 nginx]# vim statefulset.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 0			/0 replicas

[root@server2 nginx]# kubectl apply -f statefulset.yml 
statefulset.apps/web configured

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS        RESTARTS   AGE
web-0   1/1     Running       0          3m16s
web-1   1/1     Running       0          3m12s
web-2   1/1     Running       0          3m9s
web-3   1/1     Running       0          3m6s
web-4   0/1     Terminating   0          3m4s

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS        RESTARTS   AGE
web-0   1/1     Running       0          3m19s
web-1   1/1     Running       0          3m15s
web-2   1/1     Running       0          3m12s
web-3   0/1     Terminating   0          3m9s

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS        RESTARTS   AGE
web-0   1/1     Running       0          3m25s
web-1   1/1     Running       0          3m21s
web-2   1/1     Terminating   0          3m18s

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS        RESTARTS   AGE
web-0   1/1     Running       0          3m31s
web-1   0/1     Terminating   0          3m27s

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS        RESTARTS   AGE
web-0   0/1     Terminating   0          3m38s
Pods are reclaimed one at a time, from the last ordinal back to the first.

Adding PV storage:

  • The PV and PVC design is what makes it possible for StatefulSet to manage storage state:
[root@server2 nginx]# vi  statefulset.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:			/everything from here down is newly added
        - name: www
          mountPath: /usr/share/nginx/html		/mount point
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: managed-nfs-storage		/storage class name; optional here, since we set a default storage class
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
[root@server2 nginx]# kubectl apply -f statefulset.yml 
statefulset.apps/web created

[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21s
web-1   1/1     Running   0          18s
web-2   1/1     Running   0          14s

[root@server2 nginx]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-364e56c9-729a-4955-90b9-edc892e95474   1Gi        RWO            managed-nfs-storage   24s
www-web-1   Bound    pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2   1Gi        RWO            managed-nfs-storage   21s
www-web-2   Bound    pvc-63b16bd5-7554-4426-84f2-f29757189571   1Gi        RWO            managed-nfs-storage   17s

[root@server2 nginx]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-364e56c9-729a-4955-90b9-edc892e95474   1Gi        RWO            Delete           Bound    default/www-web-0   managed-nfs-storage            26s
pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2   1Gi        RWO            Delete           Bound    default/www-web-1   managed-nfs-storage            23s
pvc-63b16bd5-7554-4426-84f2-f29757189571   1Gi        RWO            Delete           Bound    default/www-web-2   managed-nfs-storage            18s
  • StatefulSet allocates and creates a PVC with the same ordinal for each Pod. Kubernetes can then bind a matching PV to that PVC through the Persistent Volume mechanism, guaranteeing that every Pod owns an independent Volume.
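The PVC names in the listing above follow that convention; sketched as a helper (template www and StatefulSet web, as in this example):

```shell
# PVC naming: $(volumeClaimTemplate name)-$(statefulset name)-$(ordinal).
# This is why a re-created Pod re-binds exactly its old claim.
pvc_name() {
  local template=$1 sts=$2 ordinal=$3
  echo "${template}-${sts}-${ordinal}"
}

for i in 0 1 2; do pvc_name www web "$i"; done
```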

Check on the NFS host:

[root@server1 nfsdata]# ll
total 0
drwxrwxrwx 2 root root 6 Jul  6 17:01 default-www-web-0-pvc-364e56c9-729a-4955-90b9-edc892e95474
drwxrwxrwx 2 root root 6 Jul  6 17:01 default-www-web-1-pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2
drwxrwxrwx 2 root root 6 Jul  6 17:01 default-www-web-2-pvc-63b16bd5-7554-4426-84f2-f29757189571

# Create the published pages
[root@server1 nfsdata]# echo web-0 >default-www-web-0-pvc-364e56c9-729a-4955-90b9-edc892e95474/index.html
[root@server1 nfsdata]# echo web-1 > default-www-web-1-pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2/index.html
[root@server1 nfsdata]# echo web-2 > default-www-web-2-pvc-63b16bd5-7554-4426-84f2-f29757189571/index.html

We can write the cluster's DNS service address directly into the resolver config, and names will then resolve automatically:
[root@server2 nginx]# vim /etc/resolv.conf

nameserver 10.244.179.75
nameserver 114.114.114.114

[root@server2 nginx]# curl web-1.nginx.default.svc.cluster.local
web-1
[root@server2 nginx]# curl web-0.nginx.default.svc.cluster.local
web-0			/resolves like this
[root@server2 nginx]# curl nginx.default.svc.cluster.local
web-0			/the Pod name can also be omitted
[root@server2 nginx]# curl nginx.default.svc.cluster.local
web-2
[root@server2 nginx]# curl nginx.default.svc.cluster.local
web-1
[root@server2 nginx]# curl nginx.default.svc.cluster.local
web-2
//and it load-balances as well.
  • After a Pod is deleted and rebuilt, the rebuilt Pod's network identity does not change. The Pod's topology state is pinned down by the "name + ordinal" scheme, and every Pod gets a fixed and unique access point: the Pod's own DNS record.
[root@server2 nginx]# vim statefulset.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 0			/change the replica count to 0
[root@server2 nginx]# kubectl apply -f statefulset.yml 
statefulset.apps/web configured
[root@server2 nginx]# kubectl get pod
No resources found in default namespace.			/no Pods left
[root@server2 nginx]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-364e56c9-729a-4955-90b9-edc892e95474   1Gi        RWO            managed-nfs-storage   19m
www-web-1   Bound    pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2   1Gi        RWO            managed-nfs-storage   19m
www-web-2   Bound    pvc-63b16bd5-7554-4426-84f2-f29757189571   1Gi        RWO            managed-nfs-storage   19m
//but the PVCs and PVs still exist
[root@server2 nginx]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-364e56c9-729a-4955-90b9-edc892e95474   1Gi        RWO            Delete           Bound    default/www-web-0   managed-nfs-storage            20m
pvc-4b02cc74-ea4f-4673-8e6f-dd2660b05aa2   1Gi        RWO            Delete           Bound    default/www-web-1   managed-nfs-storage            20m
pvc-63b16bd5-7554-4426-84f2-f29757189571   1Gi        RWO            Delete           Bound    default/www-web-2   managed-nfs-storage            20m
[root@server2 nginx]# vim statefulset.yml 

spec:
  serviceName: "nginx"
  replicas: 3   /change back to 3
[root@server2 nginx]# kubectl apply -f statefulset.yml 
statefulset.apps/web configured
[root@server2 nginx]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          13s
web-1   1/1     Running   0          10s
web-2   1/1     Running   0          7s			/the Pod names are unchanged
[root@server2 nginx]# curl web-0.nginx.default.svc.cluster.local
web-0
[root@server2 nginx]# curl web-1.nginx.default.svc.cluster.local
web-1
[root@server2 nginx]# curl web-2.nginx.default.svc.cluster.local
web-2

//and the data inside is unchanged as well

Deploying a MySQL Master-Slave Cluster with StatefulSet

Reference: https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/


  1. First, split the master and slave config files apart via a ConfigMap (cm).
  2. Start the service with a StatefulSet controller; ordinal 0 is the master, and all later ordinals are slave nodes.
  3. The headless service gives every node its own network identity; we use it to reach the master host for write operations (only the master can perform writes).
  4. A ClusterIP service provides a VIP for reaching the nodes for read operations.
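The access convention in steps 3 and 4 can be sketched as follows (the hostnames assume the mysql headless service and mysql-read ClusterIP service created below):

```shell
# Writes must go to the master via its stable DNS name;
# reads go through the ClusterIP VIP, which balances across replicas.
mysql_endpoint() {
  case $1 in
    write) echo "mysql-0.mysql" ;;
    read)  echo "mysql-read" ;;
  esac
}

mysql_endpoint write    # mysql-0.mysql
mysql_endpoint read     # mysql-read
```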

Create a ConfigMap from the config files:

[root@server2 mysql]# vim cm.yml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only

[root@server2 mysql]# kubectl apply -f cm.yml 
configmap/mysql created
[root@server2 mysql]# kubectl get cm
NAME    DATA   AGE
mysql   2      2s

Create the Services, for read/write splitting:

[root@server2 mysql]# vim svc.yml 
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None		/headless service
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306				/ClusterIP is used by default
  selector:
    app: mysql
[root@server2 mysql]# kubectl apply -f svc.yml 
service/mysql created
service/mysql-read created
[root@server2 mysql]# kubectl get all
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    18d
service/mysql        ClusterIP   None            <none>        3306/TCP   4s
service/mysql-read   ClusterIP   10.105.219.33   <none>        3306/TCP   4s

Create the StatefulSet controller to deploy the MySQL master/slave setup.
It needs the mysql:5.7 and xtrabackup:1.0 images; we download them and push them to the harbor registry in advance.

To make this easier to follow, we first generate only one replica, i.e. the master node.

[root@server2 mysql]# vim statefulsrt.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql			/match the label
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:			/init containers: set up data before the main containers are created
      - name: init-mysql			
        image: mysql:5.7
        command:
        - bash			/run a shell script
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1			/grab the hostname; exit if it does not match
          ordinal=${BASH_REMATCH[1]}					/extract the Pod ordinal
          echo [mysqld] > /mnt/conf.d/server-id.cnf		/create the file
          # Add an offset to avoid the reserved server-id=0 value; with 0, no slave node is allowed to replicate.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy the appropriate conf.d file from the config-map to the emptyDir.
          if [[ $ordinal -eq 0 ]]; then		/check whether the ordinal is 0
            cp /mnt/config-map/master.cnf /mnt/conf.d/			/if so, copy the master config file over; the file comes from the cm
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/			/otherwise copy the slave config file
          fi
        volumeMounts:			/mount the cm
        - name: conf			/matches the emptyDir volume at the bottom of the manifest
          mountPath: /mnt/conf.d		
        - name: config-map				/matches the cm holding the config files
          mountPath: /mnt/config-map
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data						/the PVC mount
          mountPath: /var/lib/mysql
          subPath: mysql			/data goes into a mysql subdirectory
        - name: conf  /after init, the conf volume is mounted on /etc/mysql/conf.d/, already holding the data generated during init
          mountPath: /etc/mysql/conf.d		
        resources:
          requests:
            cpu: 300m
            memory: 1Gi
        livenessProbe:				/liveness probe
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:				/readiness probe
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      volumes:				/volumes
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:			/PVC template
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

No storage class is defined here, so the default one can be used.

[root@server2 mysql]# kubectl apply -f  statefulsrt.yml 
statefulset.apps/mysql created
[root@server2 mysql]# kubectl get pod -w
NAME      READY   STATUS     RESTARTS   AGE
mysql-0   0/1     Init:0/1   0          5s
mysql-0   0/1     PodInitializing   0          25s		/initializing
mysql-0   0/1     Running           0          26s		/liveness check
mysql-0   1/1     Running           0          85s		/readiness check
 
[root@server2 mysql]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-mysql-0   Bound    pvc-e5c41a78-ec82-4fa0-aa84-c96cde5798ce   10Gi       RWO            managed-nfs-storage   2m18s
/a PVC was generated, mounted at /var/lib/mysql inside the container
[root@server2 mysql]# kubectl describe pod mysql-0
      /etc/mysql/conf.d from conf (rw)
      /var/lib/mysql from data (rw,path="mysql")

During initialization we placed the master.cnf and server-id.cnf files into /mnt/conf.d. That data is shared between the containers: as long as the Pod is not deleted, the volume stays around, so the regular containers can also see what the init container produced.
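The ordinal-to-server-id step of that init script can be exercised on its own. A POSIX-portable rendering of the same logic (the manifest uses a bash regex; the sed expression here is an equivalent substitute):

```shell
# Extract the ordinal from a StatefulSet hostname and offset it by 100,
# avoiding the reserved server-id=0 (which would forbid slave replication).
server_id() {
  ordinal=$(echo "$1" | sed -n 's/.*-\([0-9][0-9]*\)$/\1/p')
  [ -n "$ordinal" ] || return 1
  echo $((100 + ordinal))
}

server_id mysql-0    # 100, the master
server_id mysql-1    # 101, the first slave
```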

[root@server2 mysql]# kubectl logs mysql-0 -c init-mysql 
++ hostname
+ [[ mysql-0 =~ -([0-9]+)$ ]]
+ ordinal=0
+ echo '[mysqld]'
+ echo server-id=100
+ [[ 0 -eq 0 ]]
+ cp /mnt/config-map/master.cnf /mnt/conf.d/

[root@server2 mysql]# kubectl exec mysql-0 -c mysql -- ls /etc/mysql/conf.d
master.cnf
server-id.cnf
[root@server2 mysql]# kubectl exec mysql-0 -c mysql -- cat /etc/mysql/conf.d/master.cnf
# Apply this config only on the master.
[mysqld]
log-bin			/this is the content defined in the cm.
[root@server2 mysql]# kubectl exec mysql-0 -c mysql -- cat /etc/mysql/conf.d/server-id.cnf
[mysqld]
server-id=100			/this is the content generated by the init container.

We can see that the server-id.cnf and master.cnf files were generated during initialization. Once initialization finished, the container mounted the conf volume onto the /etc/mysql/conf.d/ directory, so these two files were shared across as well.
Access it:

[root@server2 mysql]# yum install -y mysql			/install a client

[root@server2 mysql]# cat /etc/resolv.conf
nameserver 10.244.179.75			/the k8s DNS resolver
nameserver 114.114.114.114

[root@server2 mysql]# mysql -h mysql-0.mysql.default.svc.cluster.local
				We connect to the database remotely through the headless service.
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+

The master is now configured.

Then we add the slave nodes:
Besides the config file, the main problem for a slave node is: how does it copy the data from its upstream node? We could run another container in the Pod, but an init container cannot coexist with the regular containers; it exits as soon as it finishes running.

So we start an additional xtrabackup container that opens port 3307, dedicated to data transfer; other Pods can connect to this port 3307 to copy the data under /var/lib/mysql.

[root@server2 mysql]# vim statefulsrt.yml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 2		/create 2 replicas: one master and one slave
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql			/clones the data; the master does not need this, only slave nodes do
        image: xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if the data directory already exists; only the master has it, so the master skips this step.
          [[ -d /var/lib/mysql/mysql ]] && exit 0			/exit cleanly if it exists
          # Skip the clone on the master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Receive the data from the previous node's port 3307: mysql-1 clones from mysql-0, mysql-2 clones from mysql-1.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql		/mysql-$(($ordinal-1)).mysql resolves via the headless service
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d

      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 300m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup			/start one more container
        image: xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307			/open port 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql		/enter the database data directory

          # Determine the binlog position of the cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then			
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we cloned from an existing slave. (The trailing semicolon must be stripped!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in			/data cloned from a slave arrives in this form; generate the file directly
            # Ignore xtrabackup_binlog_info in this case (it is useless here).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then				/data cloned from the master arrives in this form:
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in			/write this info into change_master_to.sql.in
          fi

          # Check whether we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \			/connection info for the master database
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig			/rename the file to avoid re-initializing
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \		/listen on port 3307, waiting for external connections
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data			/note this container also mounts the PVC holding the mysql data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf        
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

In other words, the newly added init container clone-mysql copies the data from the previous node (the master skips this step), while the newly added xtrabackup container opens port 3307 and waits for the next node to connect.

  1. The master node copies no data during initialization, because it is the master. After init it creates the mysql container, then the xtrabackup container, opens port 3307, and waits for the first slave node to connect.
  2. During initialization, the first slave node's clone-mysql init container connects to the master's port 3307 (mysql-$(($ordinal-1)).mysql 3307) to copy the data, then creates the mysql container and the xtrabackup container, opens port 3307, and waits for the second slave node to connect.
  3. During initialization, the second slave node's init container copies the data from the first slave's port 3307, then creates its xtrabackup container, opens port 3307, and waits for a third slave node to connect.
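The clone chain described in these steps can be sketched as a helper that reports each replica's clone source (mirroring the ordinal arithmetic in clone-mysql; the hostnames are the ones from this deployment):

```shell
# Ordinal 0 is the master and skips cloning; ordinal N streams a backup
# from mysql-(N-1) on port 3307 via the headless service.
clone_source() {
  ordinal=$(echo "$1" | sed -n 's/.*-\([0-9][0-9]*\)$/\1/p')
  [ -n "$ordinal" ] || return 1
  if [ "$ordinal" -eq 0 ]; then
    echo "none (master)"
  else
    echo "mysql-$((ordinal - 1)).mysql:3307"
  fi
}

clone_source mysql-0    # none (master)
clone_source mysql-2    # mysql-1.mysql:3307
```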

We first scale down to 0 ( replicas: 0) and apply, deleting the master node created earlier, because it never opened port 3307.

Then we raise the replica count back to two ( replicas: 2) and apply again:

[root@server2 mysql]# kubectl apply -f statefulsrt.yml 
statefulset.apps/mysql configured

NAME      READY   STATUS            RESTARTS   AGE
mysql-0   0/2     PodInitializing   0          9s
mysql-0   1/2     Running           0          21s
mysql-0   2/2     Running           0          22s			/the master node is running
mysql-1   0/2     Pending           0          0s
mysql-1   0/2     Pending           0          0s
mysql-1   0/2     Pending           0          2s
mysql-1   0/2     Init:0/1          0          2s
mysql-1   0/2     Init:0/1          0          6s
mysql-1   0/2     PodInitializing   0          27s
mysql-1   1/2     Running           0          55s
mysql-1   1/2     Error             0          82s
mysql-1   1/2     Running           1          83s
mysql-1   2/2     Running           1          87s			/the slave1 node is running
[root@server2 mysql]# kubectl logs  mysql-0 -c init-mysql 
++ hostname
+ [[ mysql-0 =~ -([0-9]+)$ ]]
+ ordinal=0
+ echo '[mysqld]'
+ echo server-id=100
+ [[ 0 -eq 0 ]]
+ cp /mnt/config-map/master.cnf /mnt/conf.d/		/the init container copies the config file
[root@server2 mysql]# kubectl logs  mysql-0 -c clone-mysql 
+ [[ -d /var/lib/mysql/mysql ]]		/the directory already exists, so it exits automatically
+ exit 0

[root@server2 mysql]# kubectl logs  mysql-1 -c init-mysql 
++ hostname
+ [[ mysql-1 =~ -([0-9]+)$ ]]
+ ordinal=1
+ echo '[mysqld]'
+ echo server-id=101
+ [[ 1 -eq 0 ]]
+ cp /mnt/config-map/slave.cnf /mnt/conf.d/				/the slave node copies the slave config file

[root@server2 mysql]# kubectl logs  mysql-1 -c clone-mysql 
+ [[ -d /var/lib/mysql/mysql ]]		/the directory does not exist, so it continues and copies the data
++ hostname
+ [[ mysql-1 =~ -([0-9]+)$ ]]
+ ordinal=1
+ [[ 1 -eq 0 ]]
+ ncat --recv-only mysql-0.mysql 3307		/receive from the master node's port 3307
+ xbstream -x -C /var/lib/mysql
+ xtrabackup --prepare --target-dir=/var/lib/mysql		/start restoring the data
[root@server1 nfsdata]# ll
total 0
drwxrwxrwx 3 root root 19 Jul  6 20:02 default-data-mysql-0-pvc-e5c41a78-ec82-4fa0-aa84-c96cde5798ce
drwxrwxrwx 3 root root 19 Jul  6 21:56 default-data-mysql-1-pvc-5ff29fa6-56e0-4021-9f4d-d989116bebff
[root@server1 nfsdata]# 

The NFS host has recorded it as well.
Now we scale up to three ( replicas: 3):

[root@server2 mysql]# kubectl logs  mysql-2 -c clone-mysql 
+ [[ -d /var/lib/mysql/mysql ]]		/the directory does not exist, so it continues and copies the data
++ hostname
+ [[ mysql-2 =~ -([0-9]+)$ ]]
+ ordinal=2
+ [[ 2 -eq 0 ]]
+ ncat --recv-only mysql-1.mysql 3307		/receive from slave1's port 3307
+ xbstream -x -C /var/lib/mysql
+ xtrabackup --prepare --target-dir=/var/lib/mysql		/start restoring the data

MySQL replication is quite resource-hungry, so we scale the nodes back down ( replicas: 2):

[root@server2 mysql]# kubectl get pod
NAME      READY   STATUS        RESTARTS   AGE
mysql-0   2/2     Running       0          3m53s
mysql-1   2/2     Running       0          4m38s
mysql-2   2/2     Terminating   0          6m3s			/reclaimed starting from the last one

But its PVC still exists at this point:

[root@server1 nfsdata]# ls
 default-data-mysql-0-pvc-e5c41a78-ec82-4fa0-aa84-c96cde5798ce 
 default-data-mysql-1-pvc-5ff29fa6-56e0-4021-9f4d-d989116bebff  
 default-data-mysql-2-pvc-ad0de3c0-508d-4b3a-9d57-852824fc2b4b

When we create this Pod again, it will mount straight from the original PVC, so none of that initialization needs to be repeated.

Test master-slave replication:

[root@server2 mysql]# mysql -h mysql-0.mysql.default.svc.cluster.local
Create a database on the master:


MySQL [(none)]> create database linux;
Query OK, 1 row affected (0.19 sec)

MySQL [(none)]> show databases;
+------------------------+
| Database               |
+------------------------+
| information_schema     |
| linux                  |
| mysql                  |
| performance_schema     |
| sys                    |
| xtrabackup_backupfiles |
+------------------------+
6 rows in set (0.01 sec)

[root@server2 mysql]# mysql -h mysql-1.mysql.default.svc.cluster.local

MySQL [(none)]> show databases;
+------------------------+
| Database               |
+------------------------+
| information_schema     |
| linux                  |				/the slave nodes copy it over directly
| mysql                  |
| performance_schema     |
| sys                    |
| xtrabackup_backupfiles |
+------------------------+
6 rows in set (0.23 sec)

[root@server2 mysql]# mysql -h mysql-2.mysql.default.svc.cluster.local


MySQL [(none)]> show databases;
+------------------------+
| Database               |
+------------------------+
| information_schema     |
| linux                  |
| mysql                  |
| performance_schema     |
| sys                    |
| xtrabackup_backupfiles |
+------------------------+
6 rows in set (0.08 sec)

MySQL [(none)]> create database woaini;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
/but the slave nodes cannot write, since they are read-only.