The core function of Cinder is volume management: it handles volumes, volume types, volume snapshots, and volume backups. It presents a uniform interface in front of heterogeneous backend storage devices; block-storage vendors implement their drivers inside Cinder so that OpenStack can manage their products, much the way nova integrates hypervisor drivers. Many backends are supported, including LVM, NFS, Ceph, and commercial storage products and solutions from vendors such as EMC and IBM.
Further reading:
- A detailed introduction to how Cinder works
- The block-storage world from the OpenStack point of view
- Distributed storage: an introduction to Ceph and its architecture (part 1)
- Distributed storage: an introduction to Ceph and its architecture (part 2)
- DAS, NAS and SAN storage solutions for database workloads
- DAS, SAN and NAS: concepts and applications
Cinder components and their roles
Cinder-api is the endpoint of the cinder service. It exposes the REST interface, handles client requests, and forwards them as RPC requests to the cinder-scheduler component.
Cinder-scheduler schedules cinder requests. Its core is the scheduler_driver, which acts as the driver of the scheduler manager, performs the actual scheduling across cinder-volume backends, and sends the cinder RPC request to the selected cinder-volume.
Cinder-volume handles the actual volume requests, with the storage space provided by the various backend storage systems. The major storage vendors now actively contribute drivers for their products to the cinder community.
XVI. Cinder Control-Node Cluster Deployment
https://docs.openstack.org/cinder/train/install/
1. Create the cinder database
Create the database on any control node; the cluster replicates the data to the other nodes automatically;
mysql -u root -pZxzn@2020
create database cinder;
grant all privileges on cinder.* to 'cinder'@'%' identified by 'Zxzn@2020';
grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'Zxzn@2020';
flush privileges;
2. Create the cinder service credentials
Run on any control node; controller01 is used as the example;
2.1 Create the cinder service user
source admin-openrc
openstack user create --domain default --password Zxzn@2020 cinder
2.2 Grant the admin role to the cinder user
openstack role add --project service --user cinder admin
2.3 Create the cinderv2 and cinderv3 service entities
#The cinder service entity types are "volumev2" and "volumev3"
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
2.4 Create the Block Storage service API endpoints
- The Block Storage service requires an endpoint for each service entity
- The suffix of the cinder API URL is the project ID of the requesting project; existing projects can be listed with openstack project list
#v2
openstack endpoint create --region RegionOne volumev2 public http://10.15.253.88:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.15.253.88:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.15.253.88:8776/v2/%\(project_id\)s
#v3
openstack endpoint create --region RegionOne volumev3 public http://10.15.253.88:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://10.15.253.88:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://10.15.253.88:8776/v3/%\(project_id\)s
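The `%\(project_id\)s` at the end of each URL is escaped only for the shell; what Keystone stores is the literal template `%(project_id)s`, which is filled in with the caller's project ID per request. A minimal sketch of that substitution (the project ID below is made up for illustration):

```shell
# Endpoint URL template exactly as stored in the service catalog
url='http://10.15.253.88:8776/v3/%(project_id)s'
# Hypothetical project ID, e.g. one returned by `openstack project list`
project_id='a1b2c3d4e5f64f0b9c8d7e6f5a4b3c2d'
# Substitute the placeholder the way the service does per request
echo "$url" | sed "s/%(project_id)s/$project_id/"
```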
3. Deploy and configure cinder
3.1 Install cinder
Install the cinder service on all control nodes; controller01 is used as the example
yum install openstack-cinder -y
3.2 Configure cinder.conf
Run on all control nodes, with controller01 as the example; note the my_ip parameter, which must be adjusted per node;
#Back up the configuration file /etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
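The `grep -Ev '^$|#'` step keeps only the effective settings by dropping blank lines and lines containing `#`. A quick demonstration on a scratch file (not the real cinder.conf):

```shell
tmp=$(mktemp)
printf '%s\n' '# a comment' '' '[DEFAULT]' 'my_ip = 10.0.0.1' > "$tmp"
# Keep only non-blank lines that contain no '#'
grep -Ev '^$|#' "$tmp"
rm -f "$tmp"
```

Note that because the pattern matches `#` anywhere in a line, a line with an inline comment would be dropped entirely; the stock cinder.conf has none, so the filter is safe here.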
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.15.253.163
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.15.253.88:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen '$my_ip'
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen_port 8776
openstack-config --set /etc/cinder/cinder.conf DEFAULT log_dir /var/log/cinder
#Connect directly to the rabbitmq cluster
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:Zxzn@[email protected]/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password Zxzn@2020
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
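Putting the openstack-config calls above together, the resulting /etc/cinder/cinder.conf on controller01 should look roughly like this excerpt (values exactly as set above):

```ini
[DEFAULT]
my_ip = 10.15.253.163
auth_strategy = keystone
glance_api_servers = http://10.15.253.88:9292
osapi_volume_listen = $my_ip
osapi_volume_listen_port = 8776
log_dir = /var/log/cinder
transport_url = rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672

[database]
connection = mysql+pymysql://cinder:Zxzn@[email protected]/cinder

[keystone_authtoken]
www_authenticate_uri = http://10.15.253.88:5000
auth_url = http://10.15.253.88:5000
memcached_servers = controller01:11211,controller02:11211,controller03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = Zxzn@2020

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```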
Copy the cinder configuration file to the other two control nodes:
scp -rp /etc/cinder/cinder.conf controller02:/etc/cinder/
scp -rp /etc/cinder/cinder.conf controller03:/etc/cinder/
##On controller02
sed -i "s#10.15.253.163#10.15.253.195#g" /etc/cinder/cinder.conf
##On controller03
sed -i "s#10.15.253.163#10.15.253.227#g" /etc/cinder/cinder.conf
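The sed calls above rewrite only the one value that differs per node, my_ip. The same substitution shown on a scratch file (addresses taken from this guide):

```shell
tmp=$(mktemp)
echo 'my_ip = 10.15.253.163' > "$tmp"
# Replace the controller01 address with the controller02 address, as done on controller02
sed -i 's#10.15.253.163#10.15.253.195#g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```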
3.3 Configure nova.conf to use block storage
Run on all control nodes, with controller01 as the example; only the [cinder] section of nova.conf is involved;
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
3.4 Sync the cinder database
Run on any control node;
su -s /bin/sh -c "cinder-manage db sync" cinder
#Verify
mysql -ucinder -pZxzn@2020 -e "use cinder;show tables;"
3.5 Start the services and enable them at boot
Run on all control nodes; since the nova configuration file was changed, restart the nova service first
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
3.6 Verify on the control nodes
openstack volume service list
#cinder service-list works as well
4. Configure pcs resources
Run on any control node; add the cinder-api and cinder-scheduler resources
- cinder-api and cinder-scheduler run in active/active mode
- cinder-volume (openstack-cinder-volume) runs in active/passive mode
pcs resource create openstack-cinder-api systemd:openstack-cinder-api clone interleave=true
pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler clone interleave=true
View the resources
pcs resource
XVII. Cinder Storage-Node Cluster Deployment
- Resources are limited here, so the cinder storage role is deployed on the three compute nodes for the time being
- Ceph is used as the backend storage
- When using Ceph or another commercial/non-commercial backend, it is recommended to deploy the cinder-volume service on the control nodes and run it in active/passive mode via pacemaker
⑨ OpenStack HA cluster deployment (Train): installing and configuring a Ceph cluster on CentOS 8
The storage challenges OpenStack faces
https://docs.openstack.org/arch-design/
When putting OpenStack into production, an enterprise has to think through and solve three hard problems:
1. High availability and load balancing of the control cluster, so that it has no single point of failure and remains continuously available;
2. Network planning, plus high availability and load balancing of neutron L3;
3. High availability and performance of the storage.
Storage is one of the pain points of OpenStack and deserves careful planning during rollout and operations. OpenStack supports many kinds of storage: distributed file systems, commonly ceph, glusterfs and sheepdog, as well as commercial FC storage such as the dedicated arrays of IBM, EMC, NetApp and huawei, which also lets enterprises reuse existing hardware and manage resources in one place.
Ceph overview
As the most talked-about unified storage of recent years, ceph came of age in the cloud era. It has underpinned open platforms such as openstack and cloudstack, and the rapid growth of openstack has in turn drawn more and more people into ceph development. The ceph community is increasingly active, and more and more enterprises use ceph as the storage backend for openstack's glance, nova and cinder.
ceph is a unified distributed storage system that exposes three common interfaces:
1. An object storage interface, S3-compatible, for unstructured data such as images, video and audio; other object stores include S3, Swift and FastDFS;
2. A file system interface, provided by cephfs, which allows an NFS-like mount and requires an MDS; comparable file storage systems include nfs, samba and glusterfs;
3. Block storage, provided by rbd, dedicated to block devices in cloud environments, such as openstack cinder volumes; this is currently the most widely used part of ceph.
1. Deploy and configure cinder
1.1 Install cinder
Install on all compute nodes; the compute nodes already have the openstack repository configured. A standalone cinder node would need the basic system preparation done first
yum install openstack-cinder targetcli python3-keystone -y
1.2 Configure cinder.conf
Configure on all compute nodes; note the my_ip parameter, which must be adjusted per node;
#Back up the configuration file /etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.15.253.162
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.15.253.88:9292
#The backend on these nodes is ceph (the lvm backend is not used in this deployment)
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:Zxzn@[email protected]/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password Zxzn@2020
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
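enabled_backends = ceph points at a [ceph] backend section that is not configured yet at this stage (the Ceph integration comes later in the series). For orientation only, a typical [ceph] RBD backend section looks roughly like the sketch below; every value here is an illustrative assumption, not part of this deployment yet:

```ini
[ceph]
# Standard Cinder RBD driver
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
# Pool name, ceph.conf path and cephx user are assumed values for illustration
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```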
Copy the cinder configuration file to the other two compute nodes:
scp -rp /etc/cinder/cinder.conf compute02:/etc/cinder/
scp -rp /etc/cinder/cinder.conf compute03:/etc/cinder/
##On compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/cinder/cinder.conf
##On compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/cinder/cinder.conf
1.3 Start the services and enable them at boot
Run on all compute nodes;
systemctl restart openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
1.4 Verify on a control node
Check the service status; the backend is ceph, but the ceph services have not yet been enabled and integrated with cinder-volume, so the cinder-volume services are expected to show as down
openstack volume service list