一. Integrating Ceph with OpenStack (Kilo)
Note: the configuration files differ slightly between versions; see the official documentation:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#any-openstack-version
Environment:
192.168.10.95 glance (controller node: glance, cinder)
192.168.10.99 network01
192.168.10.101 compute01
1. Install the Ceph client
apt-get install python-ceph -y
apt-get install ceph-common -y
2. Create the pool:
ceph osd pool create datastore 512
rados lspools
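The Cinder and Nova sections below also use a second pool, datastore2; if it does not exist yet, create it the same way (a sketch, assuming the same PG count of 512):
ceph osd pool create datastore2 512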
3. Create users:
ceph auth get-or-create client.kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore'
ceph auth get-or-create client.kilo | ssh 192.168.10.95 sudo tee /etc/ceph/ceph.client.kilo.keyring
ceph auth get-or-create client.kilo2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore2'
ceph auth get-or-create client.kilo2 | ssh 192.168.10.95 sudo tee /etc/ceph/ceph.client.kilo2.keyring
ssh 192.168.10.95 sudo chmod +r /etc/ceph/ceph.client.kilo.keyring
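The kilo2 keyring copied above needs the same read permission (assuming the cinder services on 192.168.10.95 will read it):
ssh 192.168.10.95 sudo chmod +r /etc/ceph/ceph.client.kilo2.keyring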
Copy the /etc/ceph/ceph.conf file to the OpenStack node:
ssh 192.168.10.95 sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Ceph and Glance
Configure the glance-api.conf file.
Look up the exact settings in the Ceph documentation; the option locations differ between OpenStack versions:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph
[DEFAULT]
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = kilo (the Ceph user created above)
rbd_store_pool = datastore (the pool created above)
Restart the Glance services:
service glance-api restart
service glance-registry restart
Upload an image to test that Ceph is configured correctly as the Glance backend (see the Glance usage section above).
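A minimal sketch with the Kilo-era glance client, assuming a locally downloaded CirrOS image (the filename is illustrative):
glance image-create --name cirros --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True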
Then list the objects in Ceph's datastore pool:
rados --pool=datastore ls
###############################################
vi /etc/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = kilo
rbd_store_pool = datastore
Make these changes on the glance node.
Or via script:
crudini --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
crudini --set /etc/glance/glance-api.conf glance_store stores rbd
crudini --set /etc/glance/glance-api.conf glance_store default_store rbd
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user kilo
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool datastore
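crudini --get reads a value back, which is a quick way to confirm the settings landed:
crudini --get /etc/glance/glance-api.conf glance_store rbd_store_pool
# should print: datastore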
###############################################
Create the secret used by Nova and Cinder
1. Generate a UUID on the OpenStack compute node. The same UUID must be used in secret.xml below and as rbd_secret_uuid in cinder.conf and nova.conf:
root@ubuntu:~# uuidgen
cf133036-7099-43e3-b60f-dc487c72d3d0
Create a temporary file:
vi secret.xml
<secret ephemeral='no' private='no'>
<uuid>cf133036-7099-43e3-b60f-dc487c72d3d0</uuid>
<usage type='ceph'>
<name>client.kilo2 secret</name>
</usage>
</secret>
On the compute node:
Define the libvirt secret from the secret.xml file just created:
virsh secret-define --file secret.xml
Secret cf133036-7099-43e3-b60f-dc487c72d3d0 created
Install the Ceph client:
apt-get install python-ceph -y
apt-get install ceph-common -y
mkdir /etc/ceph
On the Ceph server:
ceph auth get-or-create client.kilo2 | ssh 192.168.10.101 sudo tee /etc/ceph/ceph.client.kilo2.keyring
Set the secret value so libvirt can authenticate with this key:
virsh secret-set-value --secret cf133036-7099-43e3-b60f-dc487c72d3d0 --base64 AQBmpBBXWIB0FxAAPWDi60w6jImcwuzWcZAvbQ== && rm ceph.client.kilo2.keyring secret.xml
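The base64 string passed to --base64 is the kilo2 key itself; it can be printed on the Ceph server with (your key will differ from the one shown above):
ceph auth get-key client.kilo2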
View the secret:
virsh secret-list
Ceph and Cinder
Configure the cinder.conf file. Note that rbd_secret_uuid must reference a libvirt secret holding the key of the rbd_user configured here; the secret created above holds the kilo2 key, so the kilo user below would need its own secret.
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = datastore
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = kilo
glance_api_version = 2
rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
Restart the services:
service cinder-api restart
service cinder-scheduler restart
service cinder-volume restart
Verify that Cinder uses Ceph
Create a 1 GB volume named cephVolume:
cinder create --display-name cephVolume 1
Verify with cinder list and rados --pool=datastore ls that cephVolume was placed in Ceph.
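rbd ls shows the volume more directly; the RBD driver names each image volume-<UUID> (the UUID below is a placeholder):
rbd --pool datastore ls
# volume-<uuid>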
############################################
vi /etc/cinder/cinder.conf
[DEFAULT]
.......
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = datastore2
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = kilo2
glance_api_version = 2
rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
Via script:
On the glance (controller) node:
crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
crudini --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
crudini --set /etc/cinder/cinder.conf ceph rbd_pool datastore2
crudini --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
crudini --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
crudini --set /etc/cinder/cinder.conf ceph rbd_user kilo2
crudini --set /etc/cinder/cinder.conf ceph glance_api_version 2
crudini --set /etc/cinder/cinder.conf ceph rbd_secret_uuid cf133036-7099-43e3-b60f-dc487c72d3d0
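With enabled_backends set, volumes are usually routed to a backend through a volume type. A sketch, assuming volume_backend_name = ceph is also added to the [ceph] section (it is not in the file above):
crudini --set /etc/cinder/cinder.conf ceph volume_backend_name ceph
cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph
cinder create --volume-type ceph --display-name cephVolume2 1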
############################################
Ceph and Nova
Edit nova.conf on the compute node (rbd_user must be the user whose key is stored in the libvirt secret; that is kilo2 here):
[libvirt]
images_type = rbd
images_rbd_pool = datastore2
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = kilo2
rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
inject_password = false
inject_key = false
inject_partition = -2
#################################################################
On the compute node:
crudini --set /etc/nova/nova.conf libvirt images_type rbd
crudini --set /etc/nova/nova.conf libvirt images_rbd_pool datastore2
crudini --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/nova/nova.conf libvirt rbd_user kilo2
crudini --set /etc/nova/nova.conf libvirt rbd_secret_uuid cf133036-7099-43e3-b60f-dc487c72d3d0
crudini --set /etc/nova/nova.conf libvirt inject_password false
crudini --set /etc/nova/nova.conf libvirt inject_key false
crudini --set /etc/nova/nova.conf libvirt inject_partition -2
#################################################################
Restart Nova:
service nova-compute restart
Verify that Nova uses Ceph:
Create an instance from the dashboard, then check:
nova list
rados --pool=datastore2 ls
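With images_type = rbd, Nova names each instance's ephemeral disk <instance-uuid>_disk, so a booted instance should show up like this (UUID illustrative):
rbd --pool datastore2 ls
# <instance-uuid>_disk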