Kubernetes Shared Storage with GlusterFS + Heketi

   Kubernetes persistent storage comes in two flavors: static and dynamic. With static provisioning — commonly hostPath local storage, NFS, GlusterFS, and so on — the storage volumes (PVs) must be created in advance, and workloads then obtain storage by binding PVCs to those PVs. With dynamic provisioning, a GlusterFS cluster and Heketi are deployed ahead of time; the two cooperate so that simply creating a PVC allocates storage on demand, eliminating the manual creation of the underlying volumes and PVs.

   Prepare three virtual machines with 2 CPU cores and 4 GB of RAM each, running a highly available Kubernetes cluster installed with kubeadm:

   172.30.0.74   k8smaster1    hostname: k8smaster1

   172.30.0.82   k8smaster2    hostname: k8smaster2

   172.30.0.90   k8snode       hostname: k8snode

  Creating the GlusterFS cluster

  Configure the GlusterFS yum repository

      # CentOS-Gluster-4.1.repo
      #
      # Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
      # information

      [centos-gluster41]
      name=CentOS-$releasever - Gluster 4.1 (Long Term Maintenance)
      baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-4.1/
      gpgcheck=0
      enabled=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

      [centos-gluster41-test]
      name=CentOS-$releasever - Gluster 4.1 Testing (Long Term Maintenance)
      baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-4.1/
      gpgcheck=0
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
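
  Save the repository definition above on every node as /etc/yum.repos.d/CentOS-Gluster-4.1.repo (the name from its header comment), then refresh the yum metadata:

   # yum clean all
   # yum makecache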

  Install GlusterFS

   # yum install glusterfs-server

   Start the services:

   # systemctl start glusterd

   # systemctl start glusterfsd 

  Install and start these on every GlusterFS node.
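
  Optionally, enable the services at boot and verify the installed version:

   # systemctl enable glusterd glusterfsd
   # glusterfs --version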


  Using GlusterFS

  # gluster peer probe k8smaster2

  # gluster peer probe k8snode

  This joins the GlusterFS nodes into the trusted storage pool; run these from k8smaster1.
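
  Verify the pool from any node; peer status should report the other two nodes as connected:

   # gluster peer status
   # gluster pool list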


  Set up a GlusterFS volume:

   # mkdir -p /data/brick1/gv2

   Run this on all three machines.


   Create a replicated volume (force is required here because the bricks live on the root filesystem):

   # gluster volume create gv2 replica 2  172.30.0.74:/data/brick1/gv2 172.30.0.82:/data/brick1/gv2 force


   Start the volume:

   # gluster volume start gv2


   # gluster volume info 

   This shows the volume's status and details.
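
   As a quick sanity check, the volume can be mounted with the GlusterFS FUSE client from any node and written to (assuming /mnt is unused here):

   # mount -t glusterfs 172.30.0.74:/gv2 /mnt
   # echo hello > /mnt/test.txt
   # umount /mnt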

   The GlusterFS cluster is now up.


  Configuring Heketi

  Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, platforms such as OpenStack Manila, Kubernetes, and OpenShift can request dynamically provisioned GlusterFS volumes. Heketi automatically chooses bricks across the cluster when building volumes, ensuring that data replicas are spread across different failure domains. Heketi also supports any number of GlusterFS clusters, so the cloud servers it serves are not tied to a single GlusterFS cluster.

  Download the release package

   # wget https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

   # tar xf heketi-v5.0.1.linux.amd64.tar.gz

   # ln -s /root/heketi/heketi /bin/heketi

   # ln -s /root/heketi/heketi-cli /bin/heketi-cli

  Edit the Heketi configuration file

  Modify the heketi configuration file /etc/heketi/heketi.json as follows:

  ......
  # change the port to avoid conflicts with other services
    "port": "28080",
  ......
  # enable authentication
    "use_auth": true,
  ......
  # set the admin user's key to adminkey
        "key": "adminkey"
  ......
  # switch the executor to ssh and configure the SSH credentials; heketi must be
  # able to log in to every machine in the cluster without a password, so copy
  # the public key to each GlusterFS server with ssh-copy-id first
      "executor": "ssh",
      "sshexec": {
        "keyfile": "/root/.ssh/id_rsa",
        "user": "root",
        "port": "22",
        "fstab": "/etc/fstab"
      },
  ......
  # location of the heketi database file
      "db": "/var/lib/heketi/heketi.db"
  ......
  # adjust the log level
      "loglevel" : "warning"


  Note: heketi provides three executors — mock, ssh, and kubernetes. mock is recommended for test environments and ssh for production; kubernetes is used only when GlusterFS itself is deployed as containers on Kubernetes. Since we deploy GlusterFS and heketi independently here, we use ssh.
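
  With the ssh executor, the key referenced in keyfile must allow passwordless root login to every node. A minimal sketch, run on the Heketi host before starting the service:

    # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    # ssh-copy-id root@k8smaster1
    # ssh-copy-id root@k8smaster2
    # ssh-copy-id root@k8snode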


  Start Heketi:

    nohup heketi --config=/etc/heketi/heketi.json & 
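
  Heketi exposes a /hello endpoint that can be used to confirm the daemon is listening on the configured port; a short greeting in the response means the service is up:

    # curl http://172.30.0.74:28080/hello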

 

  Adding GlusterFS to Heketi

  Create a cluster

     # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json  cluster create

     {"id":"7e320f3f04068c0564eb92e865263bd4","nodes":[],"volumes":[]}

   Use the returned unique cluster id to add the three nodes to the cluster:

   # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json  node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.74 --storage-host-name 172.30.0.74 --zone 1

    # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json  node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.82 --storage-host-name 172.30.0.82 --zone 1

    # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json  node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.90 --storage-host-name 172.30.0.90 --zone 1
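
    The registered nodes and the overall cluster layout can then be checked with:

    # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey node list
    # heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey cluster info 7e320f3f04068c0564eb92e865263bd4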

  

    Create a logical volume on each of the three nodes to serve as Heketi's device; this makes later expansion easier. Note that Heketi only accepts raw partitions or raw disks — do not create a filesystem on the device.
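
    For reference, a sketch of creating such a logical volume with LVM, run on each node. The underlying disk /dev/vdb is an assumption — substitute your own spare device, and leave the LV unformatted:

    # pvcreate /dev/vdb                 # /dev/vdb is an assumed spare disk
    # vgcreate myvg /dev/vdb
    # lvcreate -n mylv -l 100%FREE myvg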

    # heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "78850cf6d4a44964b1fdf09970feb0"

    # heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "560f238695f64479298429c062dc4c"

    # heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "4e3e965421d26e6858d18e6ccaf19f"


  Real-world production configuration

  The steps above showed how to create a cluster, add nodes, and add devices one by one by hand. In actual production setups, all of this can be done directly from a configuration file.

  Create a file /etc/heketi/topology-sample.json with the following content:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.175"
                            ],
                            "storage": [
                                "192.168.75.175"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.176"
                            ],
                            "storage": [
                                "192.168.75.176"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.177"
                            ],
                            "storage": [
                                "192.168.75.177"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.178"
                            ],
                            "storage": [
                                "192.168.75.178"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                }               
            ]
        }
    ]
}

  Create it:

  heketi-cli  topology load --json topology-sample.json
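
  Because authentication is enabled, heketi-cli also needs the server address and credentials here; instead of repeating the flags on every command, they can be exported once via heketi-cli's standard environment variables:

  # export HEKETI_CLI_SERVER=http://172.30.0.74:28080
  # export HEKETI_CLI_USER=admin
  # export HEKETI_CLI_KEY=adminkey
  # heketi-cli topology load --json=/etc/heketi/topology-sample.json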


  At this point, the preparatory GlusterFS and Heketi work is complete.

  Create a Kubernetes StorageClass so that Kubernetes can call Heketi to provision the underlying PVs

  Create the StorageClass

  [root@consolefan-1 yaml]# cat glusterfs/storageclass-glusterfs.yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: glusterfs
  provisioner: kubernetes.io/glusterfs
  parameters:
    resturl: "http://172.30.0.74:28080"
    clusterid: "7e320f3f04068c0564eb92e865263bd4"
    restauthenabled: "true"
    restuser: "admin"
    restuserkey: "adminkey"
    gidMin: "40000"
    gidMax: "50000"
    volumetype: "replicate:2"


  # kubectl apply -f storageclass-glusterfs.yaml
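
  Confirm that the StorageClass is registered:

  # kubectl get storageclass glusterfs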

 

  Create a PVC to verify:

  [root@consolefan-1 yaml]# cat glusterfs/pvc-glusterfs.yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: glusterfs-test1
    namespace: default
    annotations:
      volume.beta.kubernetes.io/storage-class: "glusterfs"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi

 

  # kubectl apply -f pvc-glusterfs.yaml

  The PVC is provisioned successfully.
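
  The claim and the dynamically created PV can be inspected, and the claim mounted into a pod to prove the volume works end to end (a minimal sketch; the pod name and busybox image are illustrative):

  # kubectl get pvc glusterfs-test1
  # kubectl get pv
  # cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: glusterfs-test1
EOF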


 
