⑤ OpenStack High-Availability Cluster Deployment Guide (Train): Nova

Nova provides the following functionality:
1 Instance lifecycle management
2 Compute resource management
3 Networking and authorization
4 REST-based API
5 Asynchronous, eventually consistent communication
6 Hypervisor agnostic: supports Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V

XIII. Nova Controller Node Cluster Deployment

https://docs.openstack.org/nova/train/install/

1. Create the Nova databases

Create the databases on any controller node (they replicate automatically across the cluster); controller01 is used as the example.

# Create the nova_api, nova and nova_cell0 databases and grant privileges
mysql -uroot -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'Zxzn@2020';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'Zxzn@2020';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Zxzn@2020';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Zxzn@2020';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'Zxzn@2020';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'Zxzn@2020';
flush privileges;
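The six GRANT statements differ only in database name and host, so when scripting the deployment they can be generated with a short loop instead of being typed out (a sketch; the password is this guide's example value, substitute your own):

```shell
# Emit the GRANT statements for all three Nova databases.
# PASS is the example password used throughout this guide.
PASS='Zxzn@2020'
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '${PASS}';"
  done
done
```

The output, followed by `flush privileges;`, can be piped straight into `mysql -uroot -p`.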

2. Create the Nova service credentials

Run on any controller node; controller01 is used as the example.

2.1 Create the nova user

source admin-openrc
openstack user create --domain default --password Zxzn@2020 nova

2.2 Grant the admin role to the nova user

openstack role add --project service --user nova admin

2.3 Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

2.4 Create the Compute API service endpoints

The API addresses all use the VIP; if public/internal/admin are designed to use different VIPs, be careful to distinguish them.

--region must match the region created when the admin user was initialized.

openstack endpoint create --region RegionOne compute public http://10.15.253.88:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://10.15.253.88:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://10.15.253.88:8774/v2.1
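Since all three endpoint interfaces share one VIP here, the commands above can also be driven by a loop; the sketch below only echoes each command (remove the `echo` to actually run them against a live Keystone):

```shell
# Print the endpoint-creation commands for all three interfaces.
# 10.15.253.88 is the VIP used throughout this guide.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne compute "$iface" http://10.15.253.88:8774/v2.1
done
```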

3. Install the Nova packages

Install the Nova services on all controller nodes; controller01 is used as the example.

  • nova-api (the main Nova API service)
  • nova-scheduler (the scheduling service)
  • nova-conductor (the database access service)
  • nova-novncproxy (the VNC proxy, which provides the instance console)
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

4. Deployment and configuration

https://docs.openstack.org/nova/train/install/controller-install-rdo.html

Configure the Nova services on all controller nodes; controller01 is used as the example.

Note the my_ip parameter, which must be changed per node; note also that nova.conf must be owned root:nova.

# Back up the configuration file /etc/nova/nova.conf
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip  10.15.253.163
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
# '@' in the password must be percent-encoded as %40 when it appears inside a URL
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:Zxzn%[email protected]:5672
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 8774
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen '$my_ip'

openstack-config --set /etc/nova/nova.conf api auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf api_database  connection  mysql+pymysql://nova:Zxzn%[email protected]/nova_api

openstack-config --set /etc/nova/nova.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/nova/nova.conf cache enabled True
openstack-config --set /etc/nova/nova.conf cache memcache_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf database connection  mysql+pymysql://nova:Zxzn%[email protected]/nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri  http://10.15.253.88:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url  http://10.15.253.88:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username  nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password  Zxzn@2020

openstack-config --set /etc/nova/nova.conf vnc enabled  true
openstack-config --set /etc/nova/nova.conf vnc server_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_port  6080

openstack-config --set /etc/nova/nova.conf glance  api_servers  http://10.15.253.88:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement project_name  service
openstack-config --set /etc/nova/nova.conf placement auth_type  password
openstack-config --set /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement auth_url  http://10.15.253.88:5000/v3
openstack-config --set /etc/nova/nova.conf placement username  placement
openstack-config --set /etc/nova/nova.conf placement password  Zxzn@2020
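After the openstack-config run it is worth spot-checking a few of the resulting keys. `openstack-config --get` works where openstack-utils is installed; as a fallback, a small awk helper reads a key from any INI-style file (a sketch, demonstrated here against a hypothetical fragment under /tmp rather than the live nova.conf):

```shell
# ini_get FILE SECTION KEY — print the value of KEY in [SECTION]
ini_get() {
  awk -F ' *= *' -v sec="[$2]" -v key="$3" '
    $0 == sec           { in_sec = 1; next }  # entered the target section
    /^\[/               { in_sec = 0 }        # any other header ends it
    in_sec && $1 == key { print $2 }
  ' "$1"
}

# Demonstrate against a minimal config fragment:
cat > /tmp/nova-check.conf <<'EOF'
[DEFAULT]
my_ip = 10.15.253.163
[vnc]
enabled = true
EOF
ini_get /tmp/nova-check.conf vnc enabled   # prints: true
```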

Note:

# When fronted by haproxy, services connecting to rabbitmq may hit connection timeouts and reconnects; check the individual service logs and the rabbitmq logs for this.
# transport_url=rabbit://openstack:Zxzn%[email protected]:5673
# rabbitmq has its own clustering; the official documentation recommends connecting to the rabbitmq cluster directly. Services occasionally fail to start when configured this way, for reasons unknown; if you do not see that problem, prefer connecting to the rabbitmq cluster directly rather than going through the haproxy VIP:
transport_url=rabbit://openstack:Zxzn%402020@controller01:5672,openstack:Zxzn%402020@controller02:5672,openstack:Zxzn%402020@controller03:5672
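Because the '@' inside the password must be percent-encoded as %40 in a URL (otherwise the userinfo part cannot be parsed), assembling the clustered transport_url by hand is error-prone. A loop can build it instead (a sketch using this guide's example hosts and password):

```shell
# Build a clustered rabbit:// transport_url with a percent-encoded password.
PASS='Zxzn%402020'          # 'Zxzn@2020' with '@' encoded as %40
url='' ; sep='rabbit://'
for node in controller01 controller02 controller03; do
  url="${url}${sep}openstack:${PASS}@${node}:5672"
  sep=','
done
echo "$url"
```

The printed value can then be passed to `openstack-config --set ... DEFAULT transport_url`.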

Copy the Nova configuration file to the other controller nodes:

scp -rp /etc/nova/nova.conf controller02:/etc/nova/
scp -rp /etc/nova/nova.conf controller03:/etc/nova/

## on controller02
sed -i "s#10.15.253.163#10.15.253.195#g" /etc/nova/nova.conf

## on controller03
sed -i "s#10.15.253.163#10.15.253.227#g" /etc/nova/nova.conf
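A quick way to confirm the substitution took effect is to check that no occurrence of the old address survives; the sketch below demonstrates the same rewrite on a scratch file rather than the live config:

```shell
# Demonstrate the per-node my_ip rewrite and verify no old address remains.
printf 'my_ip = 10.15.253.163\nmetadata_listen = $my_ip\n' > /tmp/nova-ip-test.conf
sed -i 's#10.15.253.163#10.15.253.195#g' /tmp/nova-ip-test.conf
if grep -q '10\.15\.253\.163' /tmp/nova-ip-test.conf; then
  echo 'old address still present' >&2
else
  echo 'rewrite complete'   # prints: rewrite complete
fi
```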

5. Sync the Nova databases and verify

Run on any controller node.

# Populate the nova-api database (no output)
su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database (no output)
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Verify that the Nova databases were populated

mysql -h controller01 -u nova -pZxzn@2020 -e "use nova_api;show tables;"
mysql -h controller01 -u nova -pZxzn@2020 -e "use nova;show tables;"
mysql -h controller01 -u nova -pZxzn@2020 -e "use nova_cell0;show tables;"

6. Start the Nova services and enable them at boot

Run on all controller nodes; controller01 is used as the example.

systemctl enable openstack-nova-api.service 
systemctl enable openstack-nova-scheduler.service 
systemctl enable openstack-nova-conductor.service 
systemctl enable openstack-nova-novncproxy.service

systemctl restart openstack-nova-api.service 
systemctl restart openstack-nova-scheduler.service 
systemctl restart openstack-nova-conductor.service 
systemctl restart openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service 
systemctl status openstack-nova-scheduler.service 
systemctl status openstack-nova-conductor.service 
systemctl status openstack-nova-novncproxy.service


netstat -tunlp | egrep '8774|8775|8778|6080'
curl http://myvip:8774

7. Verification

List the service components and check their state;

[root@controller01 ~]# openstack compute service list

Show the API endpoints;

[root@controller01 ~]# openstack catalog list

Check the cells and the placement API; all checks reporting Success is normal

[root@controller01 ~]# nova-status upgrade check

8. Configure pcs resources

Run on any controller node; add the openstack-nova-api, openstack-nova-scheduler, openstack-nova-conductor and openstack-nova-novncproxy resources

pcs resource create openstack-nova-api systemd:openstack-nova-api clone interleave=true
pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler clone interleave=true
pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor clone interleave=true
pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy clone interleave=true

# Stateless services such as openstack-nova-api, openstack-nova-conductor and openstack-nova-novncproxy are best run active/active;
# services such as openstack-nova-scheduler are best run active/passive

Check the pcs resources

[root@controller01 ~]# pcs resource 
  * vip (ocf::heartbeat:IPaddr2):   Started controller03
  * Clone Set: openstack-keystone-clone [openstack-keystone]:
    * Started: [ controller01 controller02 controller03 ]
  * Clone Set: lb-haproxy-clone [lb-haproxy]:
    * Started: [ controller03 ]
    * Stopped: [ controller01 controller02 ]
  * Clone Set: openstack-glance-api-clone [openstack-glance-api]:
    * Started: [ controller01 controller02 controller03 ]
  * Clone Set: openstack-nova-api-clone [openstack-nova-api]:
    * Started: [ controller01 controller02 controller03 ]
  * Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
    * Started: [ controller01 controller02 controller03 ]
  * Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
    * Started: [ controller01 controller02 controller03 ]
  * Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
    * Started: [ controller01 controller02 controller03 ]

Log in to the haproxy web UI to confirm that the Nova services were added successfully


XIV. Nova Compute Node Cluster Deployment

10.15.253.162 c2m16h600 compute01
10.15.253.194 c2m16h600 compute02
10.15.253.226 c2m16h600 compute03

1. Install nova-compute

Install the nova-compute service on all compute nodes; compute01 is used as the example.

# The OpenStack repository and required dependencies were set up during base configuration, so the needed components can be installed directly
yum install openstack-nova-compute -y
yum install openstack-utils -y

2. Deployment and configuration

Configure the nova-compute service on all compute nodes; compute01 is used as the example.

Note the my_ip parameter, which must be changed per node; note also that nova.conf must be owned root:nova.

# Back up the configuration file /etc/nova/nova.conf
cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

2.1 Check whether the compute node supports hardware virtualization acceleration

[root@compute01 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
# If this command returns a value other than 0, the compute node supports hardware acceleration and the configuration below is not needed.
# If it returns 0, the node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM,
# by editing the [libvirt] section of /etc/nova/nova.conf. These test hosts are virtual machines, so virt_type is set to qemu.
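The check can be wrapped in a tiny helper that prints the virt_type to configure (a sketch; `virt_type_for` is a hypothetical name, and it reads /proc/cpuinfo unless given another file):

```shell
# Print the libvirt virt_type appropriate for this host's CPU flags.
virt_type_for() {                 # usage: virt_type_for [cpuinfo-file]
  local cpuinfo="${1:-/proc/cpuinfo}"
  if [ "$(grep -Ec '(vmx|svm)' "$cpuinfo")" -gt 0 ]; then
    echo kvm                      # vmx/svm present: hardware acceleration
  else
    echo qemu                     # no flags: fall back to full emulation
  fi
}
```

It can then feed the configuration step directly, e.g. `openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(virt_type_for)"`.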

2.2 Edit the nova.conf configuration file

openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:Zxzn%[email protected]
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 10.15.253.162
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver

openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone

openstack-config --set /etc/nova/nova.conf  keystone_authtoken www_authenticate_uri  http://10.15.253.88:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://10.15.253.88:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  Zxzn@2020

openstack-config --set /etc/nova/nova.conf libvirt virt_type  qemu

openstack-config --set  /etc/nova/nova.conf vnc enabled  true
openstack-config --set  /etc/nova/nova.conf vnc server_listen  0.0.0.0
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url http://10.15.253.88:6080/vnc_auto.html

openstack-config --set  /etc/nova/nova.conf glance api_servers  http://10.15.253.88:9292

openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp

openstack-config --set  /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement project_name  service
openstack-config --set  /etc/nova/nova.conf placement auth_type  password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement auth_url  http://10.15.253.88:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username  placement
openstack-config --set  /etc/nova/nova.conf placement password  Zxzn@2020

Copy the Nova configuration file to the other compute nodes:

scp -rp /etc/nova/nova.conf compute02:/etc/nova/
scp -rp /etc/nova/nova.conf compute03:/etc/nova/

## on compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/nova/nova.conf

## on compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/nova/nova.conf

3. Start the Nova services on the compute nodes

Run on all compute nodes;

systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

4. Add the compute nodes to the cell database

Run on any controller node; list the compute nodes

[root@controller01 ~]# openstack compute service list --service nova-compute

5. Discover compute hosts from the controller nodes

This must be run on a controller node each time a new compute node is added

5.1 Discover compute nodes manually

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

5.2 Discover compute nodes automatically

To avoid running nova-manage cell_v2 discover_hosts by hand every time a compute node is added, the controller nodes can be configured to discover hosts periodically; this is the [scheduler] section of the controllers' nova.conf.
Run on all controller nodes; the discovery interval below is 10 minutes (600 seconds) and can be adjusted to suit the environment

openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600
systemctl restart openstack-nova-scheduler.service

6. Verification

List the service components to verify that each process started and registered successfully

openstack compute service list

List the API endpoints in the Identity service to verify connectivity to the Identity service

openstack catalog list

List the images in the Image service to verify connectivity to the Image service

openstack image list

Check that the cells and the placement API are working

nova-status upgrade check