Two ways to deploy a Ceph cluster

Ceph-ansible deployment steps:

    1.  git clone -b stable-3.0 https://github.com/ceph/ceph-ansible.git ; cd ceph-ansible

    2. Generate the inventory:
cat > inventory <<EOF
[mons]
ceph1
ceph2
ceph3

[mgrs]
ceph1
ceph2
ceph3

[osds]
ceph1
ceph2
ceph3
EOF
    3. mv site.yml.sample site.yml

    4. cd group_vars; 
cat > all.yml <<EOF
---
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
public_network: "172.17.0.0/20"
cluster_network: "{{ public_network }}"
monitor_interface: eth0
devices:
  - '/dev/vdb'
osd_scenario: collocated
EOF

    5. ansible-playbook -i inventory site.yml

    6.  Enable the built-in mgr dashboard:
        ceph mgr module enable  dashboard
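
        The dashboard URL (served on port 7000 by default in Luminous) can then be looked up with:
            ceph mgr services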

Steps for using Ceph from k8s:

    doc:
        https://docs.openshift.org/3.6/install_config/storage_examples/ceph_rbd_dynamic_example.html

    1.  On the Ceph cluster, create the pool that k8s will use to store data:
        ceph osd pool create kube 128

    Note:
        # 128 is the pg count; the usual sizing guidance is:

        Fewer than 5 OSDs: set pg_num to 128.
        5 to 10 OSDs: set pg_num to 512.
        10 to 50 OSDs: set pg_num to 4096.
        More than 50 OSDs: use the formula below.

        http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/

        pg count = OSD count * 100 / pool replica count / pool count
        (a worked example follows this note)

        # Check the pool replica count, i.e. the osd_pool_default_size set in ceph.conf
           ceph osd dump |grep size|grep rbd

        # When the OSD count, pool replica count or pool count changes, recalculate and update the pg count
        # When changing pg_num, change pgp_num along with it, otherwise the cluster reports a warning
           ceph osd pool set rbd pg_num 256
           ceph osd pool set rbd pgp_num 256
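
        For example: with 3 OSDs, a replica count of 3 and a single pool,
        3 * 100 / 3 / 1 = 100; rounding up to the next power of two gives
        pg_num = 128, the value used above.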

    2.  Create the Secret used for authentication:
        1.  Create a client.kube user:
            ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

        2.  Convert the client.kube key to base64:
            ceph auth get-key client.kube | base64
            (outputs: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==)

        3.  Create ceph-secret.yaml:
            cat > ceph-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
    name: ceph-secret
    namespace: kube-system
type: kubernetes.io/rbd
data:
    key: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==
EOF
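
            Apply it (assuming kubectl is pointed at the target cluster):
                kubectl create -f ceph-secret.yaml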

    3.  Create the rbd-provisioner (a sketch follows):
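
        No manifest is included for this step. A minimal sketch, assuming the
        kubernetes-incubator/external-storage rbd-provisioner image (the ServiceAccount
        and RBAC objects it needs are omitted); note that with this external provisioner
        the StorageClass would use provisioner: ceph.com/rbd rather than the in-tree
        kubernetes.io/rbd:
        cat > rbd-provisioner.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner
      containers:
      - name: rbd-provisioner
        # image/tag are an assumption; adjust to whatever registry is actually used
        image: quay.io/external_storage/rbd-provisioner:latest
        env:
        # provisioner name that StorageClasses must reference when using this provisioner
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
EOF
        kubectl create -f rbd-provisioner.yaml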
    4.  Create the Ceph StorageClass in k8s:
        1. Option 1: as the default StorageClass for k8s:
            cat > ceph-storage.yaml <<EOF
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/rbd
parameters:
    # IP:PORT of the Ceph cluster's monitor nodes
    monitors: 172.17.0.49:6789,172.17.0.44:6789,172.17.0.28:6789
    # user allowed to create images in the Ceph pool
    adminId: kube
    # name of the secret holding adminId's key
    adminSecretName: ceph-secret
    adminSecretNamespace: kube-system
    # storage pool
    pool: kube
    # user used to map the rbd image; defaults to the same value as adminId
    userId: kube
    # secret for userId; it must be in the same namespace as the PVC
    userSecretName: ceph-secret-user
EOF

        2. Option 2: as a non-default StorageClass:
            cat > ceph-storage.yaml <<EOF
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-storageclass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.17.0.49:6789,172.17.0.44:6789,172.17.0.28:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"

EOF
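
    Note: both StorageClasses reference a ceph-secret-user secret that is not created
    anywhere above; it must exist in every namespace where PVCs are created. A minimal
    sketch, assuming the default namespace and re-using the client.kube key from step 2:
        cat > ceph-secret-user.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
  namespace: default
type: kubernetes.io/rbd
data:
  # output of: ceph auth get-key client.kube | base64
  key: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==
EOF
        kubectl create -f ceph-secret-user.yaml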

Create the rbd image used to store the MSP:

    rbd create kube/fabric --size 512 --image-feature=layering

Map kube/fabric to a local block device:

    rbd map kube/fabric
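
    After mapping, the image appears as a local block device that can be formatted and
    mounted; for example, assuming the map command returned /dev/rbd0:

    rbd showmapped
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt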

Create CephFS in the Ceph cluster:

    ceph osd pool create cephfs_data 64

    ceph osd pool create cephfs_metadata 64

    ceph fs new fabric_fs cephfs_metadata cephfs_data
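
    A quick sanity check that the filesystem and an active MDS exist (an MDS is required
    before the filesystem can be mounted):

    ceph fs ls

    ceph mds stat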

Disable / enable the Ceph dashboard:

    ceph mgr module disable dashboard 

    ceph mgr module enable dashboard

Ceph cluster installation, method 2 (ceph-deploy):

    1.  mkdir myceph; cd myceph

    2. ceph-deploy new node1 node2 …..

    3. cat >> ceph.conf <<EOF
public network = 172.17.0.0/20
mon allow pool delete = true
mon_max_pg_per_osd = 300
mgr initial modules = dashboard prometheus

osd pool default size = 1
osd pool default min size = 1

mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
EOF

    4. ceph-deploy install  --repo-url https://registry.umarkcloud.com:8080/repository/yum-ceph-proxy/ --release=luminous node1 node2 …

        ceph-deploy install --release=luminous  --nogpgcheck --no-adjust-repos  k8s7 k8s8 k8s9

            ceph-deploy install --repo-url https://mirrors.aliyun.com/ceph/rpm-mimic/el7 --gpg-url https://mirrors.aliyun.com/ceph/keys/release.asc ceph1 ceph2 ceph3

    5. ceph-deploy mon create-initial

    6. ceph-deploy admin k8s7 k8s8 k8s9

    7. ceph-deploy mgr  create k8s7 k8s8 k8s9

    8. ceph-deploy osd create --data /dev/vdb (or a vg/lv) node1
        ceph-deploy osd create --data /dev/vdb node2
        ceph-deploy osd create --data /dev/vdb node3
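
        To confirm the OSDs joined the cluster:
            ceph osd tree
            ceph -s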

    9. ceph-deploy  mds create k8s7 k8s8

    10. ceph osd pool create kube 128; ceph osd pool application enable kube rbd

    11. ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

    12. ceph osd pool create cephfs_data 64

         ceph osd pool create cephfs_metadata 64

        ceph fs new data cephfs_metadata cephfs_data

    13. ceph auth get-key client.admin | base64

    14. ceph auth get-key client.kube | base64 

    15. helm install -n cephpr --namespace ceph .

    16. Install RGW to provide S3 API access:
        a. Install the packages:
            ceph-deploy install --rgw umark-poweredge-r540
        b. Deploy the service:
            ceph-deploy rgw create umark-poweredge-r540
        c. Edit ceph.conf:
            [client.rgw.umark-poweredge-r540]
            rgw_frontends = "civetweb port=80"
        d.  Restart the service:
             sudo systemctl restart ceph-radosgw.target

        e.  Create a user:
            sudo radosgw-admin user create --uid='testuser' --display-name='First User'
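
            The create command prints an access_key and secret_key; S3 access can be
            verified with them, assuming the AWS CLI is installed and RGW listens on
            port 80 as configured above:
                export AWS_ACCESS_KEY_ID=<access_key from the output>
                export AWS_SECRET_ACCESS_KEY=<secret_key from the output>
                aws --endpoint-url http://umark-poweredge-r540:80 s3 mb s3://test-bucket
                aws --endpoint-url http://umark-poweredge-r540:80 s3 ls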

StatefulSet spec snippet (volumeClaimTemplates using the ceph-rbd and cephfs StorageClasses):

spec:
  volumeClaimTemplates:
  - metadata:
      name: orderer-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ceph-rbd"
      resources:
        requests:
          storage: 2Gi
  - metadata:
      name: orderer-block
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "cephfs"
      resources:
        requests:
          storage: 100Mi

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: {{ .Values.global.fabricMspPvcName }}
    namespace: {{ .Release.Namespace }}
    annotations:
      "helm.sh/created": {{.Release.Time.Seconds | quote }}
      "helm.sh/hook": pre-install
      "helm.sh/resource-policy": keep
spec:
    accessModes:
      - ReadOnlyMany
    resources:
      requests:
        storage: 1Gi
{{- if eq .Values.storage.type "gluster" }}
    volumeName: {{ .Values.persistence.fabricMspPvName }}
{{- else if eq .Values.storage.type "ceph" }}
    storageClassName: {{ .Values.storage.className }}
{{- end }}

{{- if eq .Values.storage.type "gluster" }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.persistence.fabricMspPvName}}
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: gluster-endpoints
    path: {{ .Values.persistence.fabricGlusterVolumeName }}
  persistentVolumeReclaimPolicy: Retain
{{- end }}

Ceph upgrade to Mimic:

A. Upgrade the packages:
    ceph-deploy install --release=mimic  --nogpgcheck --no-adjust-repos  pk8s1 pk8s2 pk8s3 
B. Restart the daemons (and upgrade clients):
    1. Restart the monitor service:
        systemctl restart ceph-mon.target
    2. Restart the osd service:
        systemctl restart ceph-osd.target
    3. Restart the mds service:
        systemctl restart ceph-mds.target
    4. Upgrade the clients:
        yum -y install ceph-common
    5. Restart the mgr service:
        systemctl restart ceph-mgr.target
C. Check that everything is healthy:
    1.  ceph mon stat
    2. ceph osd stat
    3. ceph mds stat

D. Enable the new dashboard:
    1. ceph dashboard create-self-signed-cert
    2. ceph config set mgr mgr/dashboard/server_addr $IP
    3. ceph config set mgr mgr/dashboard/pk8s1/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s2/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s3/server_addr $IP
    4. ceph dashboard set-login-credentials <username> <password>

ceph mgr module enable dashboard

How to mount CephFS inside a container:

    mount.ceph 172.17.32.2:/ /mnt -o name=admin,secret=AQA2wjBbMljPKBAAID24oKDVT9NGuUxHzpo+1w==
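
    To keep the key off the command line, the same mount can read the key from a file
    via the secretfile option (the file path here is arbitrary):

    echo 'AQA2wjBbMljPKBAAID24oKDVT9NGuUxHzpo+1w==' > /etc/ceph/admin.secret
    mount.ceph 172.17.32.2:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret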

Ceph backup:

    Two cases:
        1.  A pvc.yaml you create yourself:
              annotations:
                "helm.sh/hook": pre-install
                "helm.sh/hook-delete-policy": "before-hook-creation"   (with this delete policy, neither helm upgrade nor helm install produces the "resources exists ..." message)

Several scenarios when mounting a ceph-rbd image:
    1.  The image is already mounted in a pod, and can then be mounted a second time outside the k8s cluster (i.e. locally);

    2.  If the image is already mounted locally, then when the pod restarts the image can no longer be mounted, and the following error is reported:


        Once the locally mounted image is unmounted, the pod recovers on its own.
    3.  Because rbd in k8s only supports the ReadWriteOnce and ReadOnlyMany modes, and data needs to be written into the rbd image, only ReadWriteOnce can be used; if the same pvc is referenced by two different pods, the following error is reported:

    4.  Use a python script to mount the original rbd image used by a pod: