OpenStack Operations in Practice Series (17): Integrating Glance with Ceph

1. Requirements

   As the image service in OpenStack, Glance supports multiple storage backends: images can be kept on the local filesystem, an HTTP server, the Ceph distributed storage system, or other open-source distributed filesystems such as GlusterFS and Sheepdog. This article describes how to integrate Glance with Ceph.

   At the moment Glance stores images on the local filesystem, under the default path /var/lib/glance/images. Once the backend is switched from the local filesystem to the distributed Ceph store, the images that already exist locally can no longer be used, so it is best to delete the current images and re-upload everything to Ceph after the Ceph backend has been deployed.
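   A minimal sketch of that cleanup and re-upload might look like the following (the image ID below is a placeholder, not taken from this environment):

# list the images currently registered in glance
glance image-list
# delete an image whose data still lives in the old local filesystem store
glance image-delete <image-id>
# once the backend has been switched to ceph, upload the image again
glance image-create --name cirros-0.3.3-x86_64 --disk-format qcow2 \
    --container-format bare --file cirros-0.3.3-x86_64-disk.img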

2. How It Works

   Ceph's RBD interface is consumed through libvirt, so libvirt and qemu must be installed on the client machines. The structure of the Ceph and OpenStack integration is shown below. Within OpenStack there are three places that need storage: 1. Glance images, stored locally by default under /var/lib/glance/images; 2. Nova instance disks, local by default under /var/lib/nova/instances; 3. Cinder volumes, which use LVM by default.

(Architecture diagram: Ceph providing storage for Glance images, Nova instances and Cinder volumes)
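   Package-wise, a node that talks to Ceph over RBD needs the Ceph client libraries, and compute nodes additionally need libvirt and qemu. A rough sketch for the CentOS/RHEL environment assumed in this series:

# Ceph CLI tools and cluster configuration support
yum install ceph-common -y
# python bindings glance-api uses to talk to RBD (installed again in section 3 below)
yum install python-rbd -y
# on compute nodes only: libvirt and qemu so instances can attach RBD devices
yum install libvirt qemu-kvm -y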

3. Integrating Glance with Ceph

1. Create a storage pool

1. Ceph creates a pool named rbd by default
[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,

[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

2. Create a pool with pg_num set to 128 (a note on sizing follows at the end of this subsection)
[root@controller_10_1_2_230 ~]# ceph osd pool create images 128
pool 'images' created

3. Check the pool's pg_num and pgp_num
[root@controller_10_1_2_230 ~]# ceph osd pool get images pg_num
pg_num: 128
[root@controller_10_1_2_230 ~]# ceph osd pool get images pgp_num
pgp_num: 128

4. List the pools in Ceph
[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,1 images,            
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

pool images id 1                # a new pool has been added, with id 1
  nothing is going on
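
A note on the pg_num value used in step 2: 128 is a common choice for a small cluster. A rough rule of thumb is (number of OSDs x 100) / replica count, rounded up to a power of two; with the two OSDs visible later in ceph auth list, 128 is more than enough. Should the pool ever need more placement groups, pg_num and pgp_num can be raised (they cannot be lowered), for example:

# hypothetical resize, only needed if the cluster grows
ceph osd pool set images pg_num 256
ceph osd pool set images pgp_num 256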

2. Configure the Ceph client

1. Glance, specifically glance-api, acts as a Ceph client and therefore needs the Ceph configuration file; simply copy it over from a Ceph monitor node. In my environment the controller node and the Ceph monitor are the same machine, so nothing needs to be done.

# if the controller node and the Ceph monitor node are separate machines, the file must be copied over
[root@controller_10_1_2_230 ~]# scp /etc/ceph/ceph.conf root@controller_10_1_2_230:/etc/ceph/
ceph.conf  

2. Install the client RPM package

[root@controller_10_1_2_230 ~]# yum install python-rbd -y

3. Configure Ceph authentication

1. Create the authentication key
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'class-read object_prefix rbd_children,allow rwx pool=images'   
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

2. View the list of auth entries
[root@controller_10_1_2_230 ~]# ceph auth list
installed auth entries:

osd.0
        key: AQDsx6lWYGehDxAAGwcYP9jDvH2Zaa8JlGwj1Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQD1x6lWQCYBERAAjIKO1LVpj8FvVefDvNQZSA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCexqlWQL6OGBAA2v5LsYEB5VgLyq/K2huY3A==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQCexqlWUMNRMRAAZEp/UlhQuaixMcNy5d5pPw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQCexqlWQFfpJBAAfPCx4sTLNztBESyFKys9LQ==
        caps: [mon] allow profile bootstrap-osd
client.glance                                             # the auth entry glance uses to connect to Ceph
        key: AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
        caps: [mon] allow r
        caps: [osd] class-read object_prefix rbd_children,allow rwx pool=images 
 
3. Copy the key generated for glance to the client's keyring file
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

# export the key to the client
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
[root@controller_10_1_2_230 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring 
[root@controller_10_1_2_230 ~]# ll /etc/ceph/ceph.client.glance.keyring 
-rw-r--r-- 1 glance glance 64 Jan 28 17:17 /etc/ceph/ceph.client.glance.keyring

4. Configure Glance to use Ceph as the backend store

1. Back up the glance-api configuration file so it can be restored if needed
[root@controller_10_1_2_230 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

2. Modify the Glance configuration file to connect to Ceph
[root@controller_10_1_2_230 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messaging
rabbit_hosts = 10.1.2.230:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = True
rabbit_durable_queues = False
rabbit_userid = glance
rabbit_password = GLANCE_MQPASS
rabbit_virtual_host = /glance

default_store=rbd             # backend store glance will use
known_stores=glance.store.rbd.Store      # enable the rbd store driver

rbd_store_ceph_conf=/etc/ceph/ceph.conf    # Ceph configuration file; it contains the monitor addresses, through which the client locates the cluster and authenticates
rbd_store_user=glance                      # auth user, i.e. the client.glance user created above
rbd_store_pool=images                      # storage pool to write images into
rbd_store_chunk_size=8                     # chunk size, i.e. the size (in MB) images are striped into
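
These option names match the Glance release used in this series, where the RBD settings live in [DEFAULT]. On newer releases the same settings move to a [glance_store] section; a sketch of the equivalent configuration, to the best of my knowledge, would be:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8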

3. Restart the Glance services
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-api restart                 
Stopping openstack-glance-api:                             [  OK  ]
Starting openstack-glance-api:                             [  OK  ]
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-registry restart
Stopping openstack-glance-registry:                        [  OK  ]
Starting openstack-glance-registry:                        [  OK  ]
[root@controller_10_1_2_230 ~]# tail -2 /etc/glance/glance-api.conf
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
[root@controller_10_1_2_230 ~]# tail -2 /var/log/glance/registry.log
2016-01-28 18:40:25.231 21890 INFO glance.wsgi.server [-] Started child 21896
2016-01-28 18:40:25.232 21896 INFO glance.wsgi.server [-] (21896) wsgi starting up on http://0.0.0.0:9191/
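
Besides the registry log, it is worth confirming that glance-api restarted cleanly and initialised the rbd store without errors; assuming the default log location of this packaging, something like:

# look for rbd or store errors after the restart
grep -iE 'error|rbd' /var/log/glance/api.log | tail -20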

5. Test the Glance and Ceph integration

[root@controller_10_1_2_230 ~]# glance --debug image-create --name glance_ceph_test --disk-format qcow2  --container-format bare  --file  cirros-0.3.3-x86_64-disk.img    
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 13200896' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: 062af9027a85487997d176c9f1e963f2' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: glance_ceph_test' -d '<open file u'cirros-0.3.3-x86_64-disk.img', mode 'rb' at 0x1ba24b0>' http://controller:9292/v1/images

HTTP/1.1 201 Created
content-length: 489
etag: 133eae9fb1c98f45894a4e60d8736619
location: http://controller:9292/v1/images/348a90e8-3631-4a66-a45d-590ec6413e7d
date: Thu, 28 Jan 2016 10:42:06 GMT
content-type: application/json
x-openstack-request-id: req-b993bc0b-447e-49b4-a8ce-bd7765199d5a

{"image": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2016-01-28T10:42:06", "owner": "ef4b83a909dc4689b663ff2c70022478", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "348a90e8-3631-4a66-a45d-590ec6413e7d", "size": 13200896, "virtual_size": null, "name": "glance_ceph_test", "checksum": "133eae9fb1c98f45894a4e60d8736619", "created_at": "2016-01-28T10:42:04", "disk_format": "qcow2", "properties": {}, "protected": false}}

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2016-01-28T10:42:04                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 348a90e8-3631-4a66-a45d-590ec6413e7d |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | glance_ceph_test                     |
| owner            | ef4b83a909dc4689b663ff2c70022478     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2016-01-28T10:42:06                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

[root@controller_10_1_2_230 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 56e96957-1308-45c7-9c66-1afff680b217 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
| 348a90e8-3631-4a66-a45d-590ec6413e7d | glance_ceph_test    | qcow2       | bare             | 13200896 | active |    # upload succeeded
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
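
As a further sanity check, the new image should not show up in the old local store directory, since its data now lives in Ceph:

# the old filesystem store should receive no new files
ls -l /var/lib/glance/images/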

6. View the data in the Ceph pool

[root@controller_10_1_2_230 ~]# rados -p images ls
rbd_directory
rbd_header.10d7caaf292
rbd_data.10dd1fd73446.0000000000000001
rbd_id.348a90e8-3631-4a66-a45d-590ec6413e7d
rbd_header.10dd1fd73446
rbd_data.10d7caaf292.0000000000000000
rbd_data.10dd1fd73446.0000000000000000
rbd_id.8a09b280-5916-44c6-9ce8-33bb57a09dad    # the Glance image data is now stored in the Ceph cluster
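
The same objects can be viewed through the rbd tool, which presents each Glance image as an RBD image named after its Glance UUID (the UUID below is the one from the test upload above):

# list RBD images in the images pool
rbd -p images ls
# show size, object order and prefix for the test image
rbd -p images info 348a90e8-3631-4a66-a45d-590ec6413e7d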

4. Summary

   Storing Glance image data in Ceph is an excellent solution: it protects the image data, and when Glance and Nova share the same Ceph cluster, virtual machines can be cloned copy-on-write from the images, bringing VM creation down to a matter of seconds.
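
   Two extra pieces, not covered in this article, are normally needed before that copy-on-write path actually kicks in: glance-api has to expose image locations to Nova/Cinder, and the images have to be uploaded in raw format (qcow2 images are copied in full rather than cloned). A hedged sketch:

# /etc/glance/glance-api.conf: let nova/cinder see the rbd location of each image
show_image_direct_url = True

# convert a qcow2 image to raw before uploading so RBD clones can be used
qemu-img convert -f qcow2 -O raw cirros-0.3.3-x86_64-disk.img cirros-0.3.3-x86_64-disk.raw
glance image-create --name cirros-0.3.3-raw --disk-format raw \
    --container-format bare --file cirros-0.3.3-x86_64-disk.raw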





   
