Preparation
1. Add hostname resolution on all nodes, and set up passwordless SSH login for root.
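Name resolution can be handled with /etc/hosts entries on every node; the addresses below are placeholders assumed for illustration, substitute your own:

```
192.168.0.11  pikachu1
192.168.0.13  pikachu3
192.168.0.14  pikachu4
192.168.0.20  controller
```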
2. Create the cent user on all nodes (including the client):
#useradd cent && echo "123" | passwd --stdin cent
Grant passwordless sudo privileges:
#echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
#chmod 440 /etc/sudoers.d/ceph
3. On the deploy node, switch to the cent user and set up passwordless SSH login to every node, including the client node.
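A minimal sketch of the keyless-login setup, run as cent on the deploy node (the host list matches this guide; ssh-copy-id prompts for cent's password once per node):

```shell
# Run on the deploy node as the cent user.
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Generate a key pair with an empty passphrase (skipped if one already
# exists) so ceph-deploy can log in non-interactively later.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to every node, including the client.
for host in pikachu1 pikachu3 pikachu4; do
    ssh-copy-id "cent@$host"
done
```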
4. On the deploy node, as the cent user, create the following file in cent's home directory (define all nodes and users):
$vi ~/.ssh/config
Host pikachu4
    Hostname pikachu4
    User cent
Host pikachu1
    Hostname pikachu1
    User cent
Host pikachu3
    Hostname pikachu3
    User cent
Fix the permissions:
$chmod 600 ~/.ssh/config
Configure a China-mirror ceph repository on all nodes
1. All nodes (including the client):
#vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph-install
baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
2. Download the rpm packages listed below from https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/ (friendly tip: take a VM snapshot before installing the packages):
ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm
On the deploy node, additionally install the ceph-deploy rpm; find and download the latest matching ceph-deploy-xxxxx.noarch.rpm from https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/:
ceph-deploy-1.5.39-0.noarch.rpm
3. Copy the downloaded rpms to all nodes and install them. Note that only the deploy node needs ceph-deploy-xxxxx.noarch.rpm; the other nodes do not. The deploy node also needs all of the other rpms.
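The copy-and-install step can be sketched as below; ~/ceph-rpms is an assumed staging directory for the downloaded packages, not a path from this guide:

```shell
# Assumed staging directory holding the downloaded rpm packages.
RPM_DIR=~/ceph-rpms

# Push the directory to each node and install everything there;
# sudo works without a password because cent was granted NOPASSWD above.
for host in pikachu1 pikachu3 pikachu4; do
    scp -r "$RPM_DIR" "cent@$host:~/"
    ssh "cent@$host" "sudo yum localinstall -y ~/ceph-rpms/*.rpm"
done
```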
4. On the deploy node, install ceph-deploy: as root, change into the directory containing the downloaded rpm packages and run:
#yum localinstall -y ./*
#ceph-deploy --version
If the installation fails, it is enough to make sure the installed python-setuptools version is 0.9.8-7.el7:
#rpm -q python-setuptools
Prepare one disk each on pikachu1 and pikachu3 (each with a single partition already created) and format it as xfs:
#mkfs -t xfs /dev/sdb1
#mkfs -t xfs /dev/sdc1
Deployment
On the deploy node:
#su - cent
$mkdir ceph
Create the new cluster configuration:
$cd ceph
$ceph-deploy new pikachu1 pikachu3
$ls
$vim ceph.conf
Add (option meanings as comments):
osd_pool_default_size = 2            # default replica count
osd_pool_default_min_size = 1        # minimum replica count
osd_pool_default_pg_num = 128        # default number of PGs per pool
osd_pool_default_pgp_num = 128       # default PGP count; keep equal to pg_num
osd_crush_chooseleaf_type = 1        # CRUSH failure domain (1 = host)
Run on the deploy node to install the ceph packages on all nodes:
$ceph-deploy install pikachu4 pikachu1 pikachu3
Initialize the cluster monitors from the deploy node:
$ceph-deploy mon create-initial
List a node's disks:
$ceph-deploy disk list pikachu1
Zap (wipe and reformat) the node disks:
$ceph-deploy disk zap pikachu1:/dev/sdb1
$ceph-deploy disk zap pikachu3:/dev/sdc1
Deploy the OSDs (prepare the Object Storage Daemons):
$ceph-deploy osd prepare pikachu1:/dev/sdb1
$ceph-deploy osd prepare pikachu3:/dev/sdc1
Activate the Object Storage Daemons:
$ceph-deploy osd activate pikachu1:/dev/sdb1
$ceph-deploy osd activate pikachu3:/dev/sdc1
Push the ceph configuration file (and admin keyring) to every node:
$ceph-deploy admin pikachu4 pikachu1 pikachu3
Check from any node in the ceph cluster:
#ceph -s
Watch the cluster in real time:
#ceph -w
#ceph osd tree
#systemctl status [email protected]
Client setup
The client also needs the cent user:
#useradd cent && echo "123" | passwd --stdin cent
#echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
#chmod 440 /etc/sudoers.d/ceph
Run from the deploy node to install ceph on the client and push the configuration (the client host here is controller):
$ceph-deploy install controller
$ceph-deploy admin controller
On the client:
#sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
On the client, configure an rbd block device.
Create an rbd image:
#rbd create disk01 --size 5G --image-feature layering
List the rbd images:
#rbd ls -l
Map the rbd image:
#rbd map disk01
Show the mapping:
#rbd showmapped
Format disk01 with an xfs filesystem:
#mkfs.xfs /dev/rbd0
Mount the device:
#mount /dev/rbd0 /mnt
Verify the mount succeeded:
#df -hT
Client access via the filesystem (CephFS)
On the deploy node, choose a node on which to create the MDS:
$su - cent
$cd ceph
$ceph-deploy mds create pikachu3
On pikachu3:
#chmod 644 /etc/ceph/ceph.client.admin.keyring
On the MDS node pikachu3, create the cephfs_data and cephfs_metadata pools:
#ceph osd pool create cephfs_data 128
#ceph osd pool create cephfs_metadata 128
128 is the number of placement groups (pg_num).
Create the filesystem on the pools:
$ceph fs new cephfs cephfs_metadata cephfs_data
Show the ceph filesystems:
$ceph fs ls
$ceph mds stat
On the client:
Install ceph-fuse:
$sudo yum -y install ceph-fuse
Fetch the admin key:
$ssh cent@pikachu3 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
Mount cephfs:
$sudo mount -t ceph pikachu3:6789:/ /media -o name=admin,secretfile=admin.key
Stop the ceph-mds service and remove the filesystem. First, unmount on the client:
#umount /media/
On the MDS node pikachu3:
#systemctl stop ceph-mds@pikachu3
#ceph mds fail 0
#ceph fs rm cephfs --yes-i-really-mean-it
List the pools:
#ceph osd lspools
#ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
#ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
Tearing down the environment:
On the deploy node:
$ceph-deploy purge pikachu4 pikachu1 pikachu3 controller
$ceph-deploy purgedata pikachu4 pikachu1 pikachu3 controller
$ceph-deploy forgetkeys
$rm -rf ceph*