Rook tutorial: quickly orchestrating Ceph

Prerequisite: a running Kubernetes cluster (see the three-step Kubernetes cluster installation tutorial).

Installation

git clone https://github.com/rook/rook

cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
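If you are scripting this, you can block until the operator deployment is ready before moving on (the deployment name is inferred from the pod listing below):

kubectl -n rook-ceph-system rollout status deploy/rook-ceph-operator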

Check that the operator pods are running:

[root@dev-86-201 ~]# kubectl get pod -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5z6p7                 1/1     Running   0          88m
rook-ceph-agent-6rj7l                 1/1     Running   0          88m
rook-ceph-agent-8qfpj                 1/1     Running   0          88m
rook-ceph-agent-xbhzh                 1/1     Running   0          88m
rook-ceph-operator-67f4b8f67d-tsnf2   1/1     Running   0          88m
rook-discover-5wghx                   1/1     Running   0          88m
rook-discover-lhwvf                   1/1     Running   0          88m
rook-discover-nl5m2                   1/1     Running   0          88m
rook-discover-qmbx7                   1/1     Running   0          88m
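The operator also registers the Rook CRDs; a quick, rough way to confirm they are in place (exact CRD names vary slightly between Rook versions):

kubectl get crd | grep rook.io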

Then create the Ceph cluster:

kubectl create -f cluster.yaml

Check the Ceph cluster:

[root@dev-86-201 ~]# kubectl get pod -n rook-ceph
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-8649f78d9b-jklbv   1/1     Running   0          64m
rook-ceph-mon-a-5d7fcfb6ff-2wq9l   1/1     Running   0          81m
rook-ceph-mon-b-7cfcd567d8-lkqff   1/1     Running   0          80m
rook-ceph-mon-d-65cd79df44-66rgz   1/1     Running   0          79m
rook-ceph-osd-0-56bd7545bd-5k9xk   1/1     Running   0          63m
rook-ceph-osd-1-77f56cd549-7rm4l   1/1     Running   0          63m
rook-ceph-osd-2-6cf58ddb6f-wkwp6   1/1     Running   0          63m
rook-ceph-osd-3-6f8b78c647-8xjzv   1/1     Running   0          63m
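Beyond checking that the pods are up, you can ask Ceph itself how it is doing via the toolbox pod that ships in the same examples directory (the label selector below is an assumption based on that toolbox.yaml manifest):

kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- ceph status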

Notes on the cluster.yaml parameters:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook # host directory where Rook stores Ceph data
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
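With useAllNodes: true and useAllDevices: false, storage is configured on every node but no raw disk is claimed automatically. If you prefer to pick nodes and disks explicitly, the storage section can list them instead; a minimal sketch based on the Rook cluster CRD, where the node and device names are placeholders:

  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "dev-86-201"   # must match the Kubernetes node name
      devices:
      - name: "sdb"        # raw device on that node, without the /dev/ prefix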

Access the Ceph dashboard:

[root@dev-86-201 ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr             ClusterIP   10.98.183.33     <none>        9283/TCP         66m
rook-ceph-mgr-dashboard   NodePort    10.103.84.48     <none>        8443:31631/TCP   66m  # change this service to NodePort
rook-ceph-mon-a           ClusterIP   10.99.71.227     <none>        6790/TCP         83m
rook-ceph-mon-b           ClusterIP   10.110.245.119   <none>        6790/TCP         82m
rook-ceph-mon-d           ClusterIP   10.101.79.159    <none>        6790/TCP         81m
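The dashboard service is created as ClusterIP by default; one way to flip it to NodePort without opening an editor is a JSON patch that only changes spec.type (the NodePort number itself is picked by Kubernetes):

kubectl -n rook-ceph patch svc rook-ceph-mgr-dashboard -p '{"spec": {"type": "NodePort"}}'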

Then open https://10.1.86.201:31631 in a browser (substitute your own node IP and NodePort).

The management account is admin; get its login password with:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
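Equivalently, you can pull just the password field with jsonpath instead of grepping the YAML (this assumes the secret's data key is password, as the grep above suggests):

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 --decode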

Usage

Create a pool

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool   # the operator watches this CR and creates the pool; it also shows up in the dashboard
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block    # a StorageClass; reference it from a PVC to get dynamically provisioned PVs
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same namespace in which your Rook cluster exists
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
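Save the two manifests above to a file (storageclass.yaml here is just an assumed name) and apply it, then confirm both objects exist:

kubectl create -f storageclass.yaml
kubectl -n rook-ceph get cephblockpool
kubectl get storageclass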

Create a PVC

In the cluster/examples/kubernetes directory, the project ships a WordPress example that you can run directly:

kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml
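Each manifest creates a Deployment plus a PVC; you can watch until the mysql and wordpress pods are Running before checking the volumes:

kubectl get pod -w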

Check the PVs and PVCs:

[root@dev-86-201 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
wp-pv-claim      Bound    pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m

[root@dev-86-201 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   rook-ceph-block            145m
pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/wp-pv-claim      rook-ceph-block            145m

Take a look at the YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block   # reference the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi  # request a 20Gi volume

...

        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim  # reference the PVC defined above

Pretty simple, isn't it?
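Using the same StorageClass for your own workload follows the same pattern: a PVC that names rook-ceph-block plus a pod that mounts it. A minimal sketch, where the PVC name, pod name, and size are all placeholders:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                  # placeholder name
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # any size you need
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                  # placeholder name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data            # the provisioned volume is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc
EOF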

To access WordPress, change its Service to NodePort; the official example uses the LoadBalancer type:

kubectl edit svc wordpress

[root@dev-86-201 kubernetes]# kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress         NodePort    10.109.30.99   <none>        80:30130/TCP   148m
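The same patch trick shown for the dashboard works here too if you prefer not to open an editor; it only changes spec.type and lets Kubernetes pick the NodePort:

kubectl patch svc wordpress -p '{"spec": {"type": "NodePort"}}'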

Summary

Distributed storage plays a very important role in a container cluster. A core idea of running one is to treat the whole cluster as a single machine; if you still care about individual hosts, for example scheduling a workload onto a particular node or mounting a particular node's directory, you will never get the full power of the cloud. Once compute and storage are separated, workloads can truly float freely across nodes, which is a huge win for cluster maintenance.

For example, when machines go out of warranty and need to be decommissioned, a cloud-style architecture with no single point of failure means you simply drain the node and take it offline, without caring what runs on it; both stateful and stateless workloads recover automatically. The biggest remaining challenge is probably the performance of distributed storage, but where performance requirements are not demanding I strongly recommend this compute-storage separation architecture.

For discussion, join the QQ group: 98488045

Official WeChat account: sealyun

