Manually installing OpenStack and integrating VMware virtualization

The meaning of cloud computing:

Elasticity

Memory, CPU, and disk can be added at will

Transparent to the user

Data is deduplicated


Application, data, runtime, middleware, operating system

Virtual machines, servers, storage, networking


Computing is layered

Cloud computing is by no means the same thing as virtualization

Cloud computing merely makes use of virtualization technology


nova handles the compute nodes

quantum handles virtual networking

swift handles cloud storage

libvirt handles virtual machine management and virtual device management over remote procedure calls


Contents:

Prerequisites

1. Environment

2. Name resolution and disabling the firewall

3. Configure the time synchronization server (NTP)

4. Install packages

5. Install the database

6. Verify the database

7. Install the rabbitmq service

8. Install Memcached


Installation outline:

1. Install and configure the keystone identity service

2. Image service

3. Install and configure Compute (nova)

4. Install and configure the neutron network service server

5. Install the dashboard component

6. Install cinder

7. ESXi integration with VM virtualization

8. Deploy the VMware vCenter Appliance OVF

9. Integrate vmware


1. Environment

Manual openstack installation

openstack Newton release

192.168.2.34 controller

192.168.2.35 compute1

2 machines running CentOS 7.3

controller acts both as the control node and as a compute node.

compute1 is a compute node only.

Topology diagram (figure omitted)

Creating virtual switches in ESXi

Now we start deploying the network on ESXi.

Create two virtual switches in ESXi, one named fuel_pxe and one named fuel_storage; together with the existing vm network that makes three switches in total.



Create the virtual machine with at least 8 GB of RAM and at least 100 GB of disk, and give it three NICs.



The control node drives the compute nodes; virtual machines are created on the compute nodes.

controller   192.168.2.34  NIC: NAT, ens160

(ens160 is the internal NIC; it is referenced in the neutron configuration file later)

compute1   192.168.2.35  NIC: NAT, ens160


2. Name resolution and disabling the firewall (do this on both the controller and compute nodes)

/etc/hosts  # set the hostname correctly from the start;

it must not be changed afterwards, or problems will follow. Map the IPs to the hostnames here:

192.168.2.34 controller

192.168.2.35 compute1

setenforce 0
systemctl start firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
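
Optionally confirm the result (getenforce should report Permissive, and firewalld should be disabled):

getenforce
systemctl is-enabled firewalld.service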

3. Configure the time synchronization server (NTP)

Controller node

yum install chrony -y

# install the service

sed -i 's/#allow 192.168\/2/allow 192.168\/2/g' /etc/chrony.conf
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# change the time zone
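
The sed above is meant to let chrony serve the 192.168.x clients; if the allow pattern in your chrony.conf differs and the sed does not match, an equivalent manual edit (assuming this lab's 192.168.2.0/24 management subnet) is to add the line:

allow 192.168.2.0/24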

Start the NTP service

systemctl enable chronyd.service

systemctl start chronyd.service

Configure the compute node

yum install chrony -y

sed -i 's/^server.*$//g' /etc/chrony.conf

sed -i "N;2aserver controller iburst" /etc/chrony.conf

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime   

# change the time zone

systemctl enable chronyd.service

systemctl start chronyd.service
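
To check that synchronization is working, query chrony on either node; on the compute node, controller should be listed as a source:

chronyc sources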


4. Install packages

sudo yum install -y centos-release-openstack-newton

sudo yum update -y

sudo yum install -y openstack-packstack

yum install python-openstackclient openstack-selinux -y


5. Install the database

Run the following on the controller node

yum install mariadb mariadb-server python2-PyMySQL -y

[root@controller openstack]# vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 192.168.2.34           # this server's IP
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8


Start the database

systemctl enable mariadb.service

systemctl start mariadb.service

Initialize the database

# mysql_secure_installation

Set the database password to 123456

6. Verify the database

[root@controller openstack]# mysql -uroot -p123456

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 8

Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>



7. Message queue

OpenStack uses a message queue to coordinate operations and status information between services. The message queue service typically runs on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ.

8. Install rabbitmq

yum install rabbitmq-server -y


9. Start the rabbitmq service

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service


10. Create the openstack user; RABBIT_PASS is used here as the openstack user's password

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS


Creating user "openstack" ...

11. Allow configuration, write, and read access for the openstack user


rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...
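
A quick sanity check that the user and its permissions took effect (optional):

rabbitmqctl list_users
rabbitmqctl list_permissions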


12. Install Memcached

yum install memcached python-memcached -y


13. Start the service

systemctl enable memcached.service

systemctl start memcached.service
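
Optionally confirm memcached is listening; the services configured later reach it as controller:11211, so it must not be bound to loopback only:

netstat -lntp | grep 11211

If it is bound only to 127.0.0.1, extend the listen list in /etc/sysconfig/memcached (for example OPTIONS="-l 127.0.0.1,::1,controller") and restart the service.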


Installing and configuring the keystone identity service

[root@controller ~]# mysql -uroot -p123456

# create the keystone database

MariaDB [(none)]> CREATE DATABASE keystone;

Query OK, 1 row affected (0.00 sec)

# grant access privileges on the database

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';


Query OK, 0 rows affected (0.00 sec)

# Replace KEYSTONE_DBPASS with a suitable password; 123456 is used here.


2. Install the packages

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y


3. Edit the /etc/keystone/keystone.conf file

[root@controller ~]# cd /etc/keystone/

[root@controller keystone]# cp keystone.conf keystone.conf.bak

[root@controller keystone]# egrep -v "^#|^$" keystone.conf.bak > keystone.conf

[root@controller keystone]# vim keystone.conf

Add the following:

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet


4. Populate the database

su -s /bin/sh -c "keystone-manage db_sync" keystone


5. Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone


6. Bootstrap the identity service

keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:35357/v3/   --bootstrap-internal-url http://controller:35357/v3/   --bootstrap-public-url http://controller:5000/v3/   --bootstrap-region-id RegionOne


7. Configure httpd

[root@controller ~]# sed -i 's/#ServerName www.example.com:80/ServerName controller/g' /etc/httpd/conf/httpd.conf 

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/


8. Start the httpd service

systemctl enable httpd.service

systemctl start httpd.service

netstat -lntp |grep http


Oct 11 22:26:28 controller systemd[1]: Failed to start The Apache HTTP Server.

Oct 11 22:26:28 controller systemd[1]: Unit httpd.service entered failed state.

Oct 11 22:26:28 controller systemd[1]: httpd.service failed.


[root@localhost conf.d]# vi /etc/httpd/conf/httpd.conf

Line 353 is the line shown below; comment it out for now.

353 IncludeOptional conf.d/*.conf

systemctl start httpd.service

9. Configure the admin user environment variables

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3


10. Create users, domains, and roles

openstack project create --domain default --description "Service Project" service

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.

Unable to establish connection to http://controller:35357/v3/auth/tokens: HTTPConnectionPool(host='controller', port=35357): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x3381150>: Failed to establish a new connection: [Errno 111] Connection refused',))

[root@controller ~]# vi /etc/profile

[root@controller ~]# curl -I http://controller:35357/v3

curl: (7) Failed connect to controller:35357; Connection refused


[root@localhost conf.d]# vi /etc/httpd/conf/httpd.conf

Line 353 is that same line; now remove the comment to re-enable it.

353 IncludeOptional conf.d/*.conf

systemctl restart httpd.service

[root@controller ~]# curl -I http://controller:35357/v3

HTTP/1.1 200 OK

Date: Wed, 11 Oct 2017 14:51:23 GMT

Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5

Vary: X-Auth-Token

x-openstack-request-id: req-f9541e9b-255e-4979-99b5-ebb2292ab555

Content-Length: 250

Content-Type: application/json



[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 93645ea6bee14d9f8c019ad1e5ac9f51 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+

[root@controller ~]# 

openstack project create --domain default  --description "Demo Project" demo

openstack user create --domain default  --password-prompt demo

Password: 123456

 openstack role create user

 openstack role add --project demo --user demo user

Verification scripts

 [root@controller openstack]# vi  admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2


[root@controller openstack]# vim demo-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2


[root@controller openstack]# openstack token issue

[root@controller openstack]# 
+------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                 |
+------------+---------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-10-11 15:58:45+00:00                                                                                                             |
| id         | gAAAAABZ3jGlmx9cC0uHRqFZCaIYAeWkdQxkmsZJ2IgOzjs1z2WKIiXK814IYHmOz8EhLbgE5IHz-                                                         |
|            | rJrxNN97c3FkDnZYb6JNnI5N4aJVpi0veEEr0qDguF62AgBLEbx0OL9_n7Q3tK7AkCJdihUfT33DVwGPxDusgCqi28xbUptNi7v3F7FqOc                            |
| project_id | 7fb02c054c7343b29e1f55bdc4b7081f                                                                                                      |
| user_id    | 3f567d42de46423388b2f20c4ec9dd6d                                                                                                      |
+------------+---------------------------------------------------------------------------------------------------------------------------------------+

Image service

The Image service (glance) lets users discover, register, and retrieve virtual machine images. It provides a REST API for querying virtual machine image metadata and retrieving the actual image. Images made available through the Image service can be stored in a variety of locations, from a simple filesystem to an object storage system such as OpenStack Object Storage. By default, images are kept in /var/lib/glance/images/.

The following covers installing and configuring the glance service.

1. Create the database
mysql -uroot -p123456
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
# GLANCE_DBPASS is 123456 here
2. Create the glance user
[root@controller openstack]# . admin-openrc
[root@controller openstack]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e9f633cf1a8b4de39939ec3c1677a4d8 |
| name                | glance                           |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller openstack]# 
openstack role add --project service --user glance admin
Add the admin role to the glance user and the service project.


3. Create the glance service entity

openstack service create --name glance --description "OpenStack Image" image


4. Create the Image service API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292

[root@controller openstack]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8ee8707b390847308e31f13b096d381b |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6a2785ef1e954a819b6adf00f52d7bff |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292


5. Install the service

yum install openstack-glance -y


6. Configure the service

Edit /etc/glance/glance-api.conf


[root@controller ~]# cd /etc/glance/

[root@controller glance]# cp glance-api.conf glance-api.conf.bak

[root@controller glance]# egrep -v "^#|^$" glance-api.conf.bak > glance-api.conf

[root@controller glance]# vim glance-api.conf

[DEFAULT]
[cors]
[cors.subdomain]
[database] 
connection = mysql+pymysql://glance:123456@controller/glance

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]

[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

[root@controller ~]# cd /etc/glance/

[root@controller glance]# cp glance-registry.conf glance-registry.conf.bak

[root@controller glance]# egrep -v "^#|^$" glance-registry.conf.bak > glance-registry.conf

vim glance-registry.conf

[DEFAULT]
[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]

7. Populate the database

[root@controller glance]# su -s /bin/sh -c "glance-manage db_sync" glance

Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1171: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)


8. Start the services

[root@controller glance]# systemctl enable openstack-glance-api.service   openstack-glance-registry.service

[root@controller glance]#  systemctl restart openstack-glance-api.service   openstack-glance-registry.service
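
Optionally confirm the glance API is listening; an unauthenticated request to the root URL should return a JSON list of the supported API versions:

curl http://controller:9292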


9. Download images

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

scp -r [email protected]:/root/CentOS-7-x86_64-GenericCloud.qcow2 /opt/openstack/
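
Since the upload below declares --disk-format qcow2, it can be worth confirming the file format first; if qemu-img is installed:

qemu-img info /opt/openstack/CentOS-7-x86_64-GenericCloud.qcow2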
[root@controller openstack]# glance image-create --name "CentOS7" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 90956b2310c742b42e80c5eee9e6efb4     |
| container_format | bare                                 |
| created_at       | 2017-10-11T15:23:20Z                 |
| disk_format      | qcow2                                |
| id               | c3265e82-3245-46eb-b9b4-cdce4440d660 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | CentOS6_5                            |
| owner            | 249493ec92ae4f569f65b9bfb40ca371     |
| protected        | False                                |
| size             | 854851584                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2017-10-11T15:23:24Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
[root@controller openstack]# 
[root@controller openstack]# openstack image list
+--------------------------------------+-----------+--------+
| ID                                   | Name      | Status |
+--------------------------------------+-----------+--------+
| a4afd509-e63f-42dc-af05-08e80e530680 | CentOS7   | active |
| c3265e82-3245-46eb-b9b4-cdce4440d660 | CentOS6_5 | active |
+--------------------------------------+-----------+--------+



Installing and configuring Compute (nova)

Part 1: Install and configure the controller node

1. Configure the database

mysql -uroot -p123456

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';


2. Get admin credentials

# . admin-openrc

3. Create the service

Create the nova user

[root@controller openstack]# openstack user create --domain default --password-prompt nova

User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b0c274d46433459f8048cf5383d81e91 |
| name                | nova                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the nova user

openstack role add --project service --user nova admin

Create the nova service entity

openstack service create --name nova  --description "OpenStack Compute" compute

4. Create the Compute API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

[root@controller openstack]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | c73837b6665a4583a1885b70e2727a2e          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | e511923b761e4103ac6f2ff693c68639          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller openstack]# 
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s


5. Install the packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y

6. Edit /etc/nova/nova.conf

[root@controller ~]# cd /etc/nova/

[root@controller nova]# cp nova.conf nova.conf.bak

[root@controller nova]# egrep -v "^$|^#" nova.conf.bak > nova.conf

[root@controller nova]# vim nova.conf

Add the following:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.2.34
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp


7. Populate the databases

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

Ignore any deprecation messages in this output.


8. Start the services

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service


Verify

[root@controller nova]# nova service-list

+----+------------------+------------+----------+---------+-------+------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+------------+-----------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | -          | -               |
| 2  | nova-conductor   | controller | internal | enabled | up    | -          | -               |
| 5  | nova-scheduler   | controller | internal | enabled | up    | -          | -               |
+----+------------------+------------+----------+---------+-------+------------+-----------------+




Installing the nova service on the compute node

1. Install and configure the compute node

yum install openstack-nova-compute -y

2. Edit /etc/nova/nova.conf

[root@compute1 ~]# cd /etc/nova/

[root@compute1 nova]# cp nova.conf nova.conf.bak

[root@compute1 nova]# egrep -v "^#|^$" nova.conf.bak > nova.conf

[root@compute1 nova]# vim nova.conf

[DEFAULT]
...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.2.35
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
...
api_servers = http://controller:9292
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp


3. Determine whether the compute node supports hardware virtualization.

# egrep -c '(vmx|svm)' /proc/cpuinfo

If this returns one or greater, the CPU supports hardware acceleration of virtualization.

Here it returned 0, so it does not. In that case, change /etc/nova/nova.conf:

[libvirt]

...

virt_type = qemu

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service


Startup error

2017-10-11 16:13:04.611 15074 ERROR nova AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.


rabbitmqctl add_user openstack RABBIT_PASS

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

systemctl restart rabbitmq-server.service

Restart all nova services on the controller node


[root@controller openstack]# systemctl stop openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@controller openstack]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@controller openstack]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2017-10-11T08:27:57.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2017-10-11T08:27:57.000000 |
|  5 | nova-scheduler   | controller | internal | enabled | up    | 2017-10-11T08:27:57.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller openstack]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2017-10-11T08:28:57.000000 | -               |
| 2  | nova-conductor   | controller | internal | enabled | up    | 2017-10-11T08:28:57.000000 | -               |
| 5  | nova-scheduler   | controller | internal | enabled | up    | 2017-10-11T08:28:57.000000 | -               |
| 9  | nova-compute     | compute1   | nova     | enabled | up    | 2017-10-11T08:29:03.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
[root@controller openstack]#


Installing and configuring the neutron network service server

1. Create the database

mysql -u root -p123456

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

2. Get admin credentials

# . admin-openrc

3. Create the neutron service

# openstack user create --domain default --password-prompt neutron

[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 6d0d3ffc8b3247d5b2e35ccd93cb5fb6 |
| name                | neutron                          |
| password_expires_at | None                             |
+---------------------+----------------------------------+
openstack role add --project service --user neutron admin
openstack service create --name neutron  --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 41251fae3b584d96a5485739033a700e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+


4. Create the network service API endpoints

openstack endpoint create --region RegionOne  network public http://controller:9696

openstack endpoint create --region RegionOne  network internal http://controller:9696

openstack endpoint create --region RegionOne  network admin http://controller:9696

[root@controller openstack]# openstack endpoint create --region RegionOne  network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 91146c03d2084c89bbbd211faa35868f |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 41251fae3b584d96a5485739033a700e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller openstack]# openstack endpoint create --region RegionOne  network internal http://controller:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 190cb2b5ad914451933c2931a34028a8 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 41251fae3b584d96a5485739033a700e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller openstack]# openstack endpoint create --region RegionOne  network admin http://controller:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c9e4a664a2eb49b7bf9670e53ee834f8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 41251fae3b584d96a5485739033a700e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+


For networking, choose networking option 1: provider networks.

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

5. Edit /etc/neutron/neutron.conf

[root@controller ~]# cd /etc/neutron/

[root@controller neutron]# cp neutron.conf neutron.conf.bak

[root@controller neutron]# egrep -v "^$|^#" neutron.conf.bak > neutron.conf

[root@controller neutron]# vim neutron.conf

[database]
...
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 123456
[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

6. Edit /etc/neutron/plugins/ml2/ml2_conf.ini


[root@controller ~]# cd /etc/neutron/plugins/ml2/

[root@controller ml2]# cp ml2_conf.ini ml2_conf.ini.bak

[root@controller ml2]# egrep -v "^$|^#" ml2_conf.ini.bak > ml2_conf.ini

[root@controller ml2]# vim ml2_conf.ini

[ml2]
...
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
...
flat_networks = provider
[securitygroup]
...
enable_ipset = True


7. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@controller ~]# cd /etc/neutron/plugins/ml2/

[root@controller ml2]# cp linuxbridge_agent.ini linuxbridge_agent.ini.bak

[root@controller ml2]# egrep -v "^$|^#" linuxbridge_agent.ini.bak >linuxbridge_agent.ini

[root@controller ml2]# vim linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = False
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

8. Edit /etc/neutron/dhcp_agent.ini

[root@controller ~]# cd /etc/neutron/

[root@controller neutron]# cp dhcp_agent.ini dhcp_agent.ini.bak

[root@controller neutron]# egrep -v "^$|^#" dhcp_agent.ini.bak > dhcp_agent.ini

[root@controller neutron]# vim dhcp_agent.ini

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True


9. Edit /etc/neutron/metadata_agent.ini

[root@controller ~]# cd /etc/neutron/

[root@controller neutron]# cp metadata_agent.ini metadata_agent.ini.bak

[root@controller neutron]# egrep -v "^$|^#" metadata_agent.ini.bak > metadata_agent.ini

[root@controller neutron]# vim metadata_agent.ini

[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = mate


10. Edit /etc/nova/nova.conf

[root@controller ~]# cd /etc/nova/

[root@controller nova]# cp nova.conf nova.conf.nova

[root@controller nova]# vim nova.conf

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
# must match the neutron user's password created earlier
password = 123456
service_metadata_proxy = True
metadata_proxy_shared_secret = mate


11. Create a symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

12. Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

INFO  [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90, Add segment_id to subnet
INFO  [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4, Add segment_host_mapping table.
INFO  [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426, Rename ml2_dvr_port_bindings
INFO  [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524, Remove mtu column from networks.
INFO  [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a, migrate dns name from port
INFO  [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad, rename tenant to project
INFO  [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab, Add routerport bindings for L3 HA
INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0, migrate to pluggable ipam
INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62, add standardattr to qos policies
INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353, Add Name and Description to the networksegments table
INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges.
INFO  [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc, uniq_floatingips0floating_network_id0fixed_port_id0fixed_ip_addr
INFO  [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d, Add ip_allocation to port
  OK


13. Restart the nova-api service

systemctl restart openstack-nova-api.service

14. Start the services

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service


Verify:

[root@controller ~]# openstack network agent list

+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+

| ID                                   | Agent Type     | Host       | Availability Zone | Alive | State | Binary                 |

+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+

| 5d6eb39f-5a04-41e3-9403-20319ef8c816 | DHCP agent     | controller | nova              | True  | UP    | neutron-dhcp-agent     |

| 70fbc282-4dd1-4cd5-9af5-2434e8de9285 | Metadata agent | controller | None              | True  | UP    | neutron-metadata-agent |

+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+


[root@controller ~]# 

The NIC name in the configuration was wrong:

Failed to start OpenStack Neutron Linux Bridge Agent.

Fix:

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

In the linux_bridge section, change physical_interface_mappings to physnet1:ens160 and restart the neutron-linuxbridge-agent service.

[linux_bridge]
...
physical_interface_mappings = physnet1:ens160
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0207fffa-ae6b-4e06-a901-cc6ccb6c1404 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 5d6eb39f-5a04-41e3-9403-20319ef8c816 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| 70fbc282-4dd1-4cd5-9af5-2434e8de9285 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+


[root@controller ~]#

Installing and configuring the network service on the compute node

1. Install the service

# yum install openstack-neutron-linuxbridge ebtables ipset -y

2. Edit /etc/neutron/neutron.conf

[database]
...
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 123456
[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True


3. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens160
[vxlan]
enable_vxlan = False
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver


4. Edit /etc/nova/nova.conf

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

5. Restart the nova-compute service

systemctl restart openstack-nova-compute.service

6. Start the service

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

7. Verify

openstack network agent list

[root@controller ~]# openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0207fffa-ae6b-4e06-a901-cc6ccb6c1404 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 5d6eb39f-5a04-41e3-9403-20319ef8c816 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| 70fbc282-4dd1-4cd5-9af5-2434e8de9285 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| ab7efa23-4957-475a-8c80-58b786e96cdd | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

Installing the dashboard component

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

(The dashboard is served under WEBROOT = '/dashboard/', set in /etc/openstack-dashboard/local_settings.)

Configure the dashboard

Edit the file /etc/openstack-dashboard/local_settings and complete the following.

a. Configure the dashboard to use the OpenStack services on the controller node:

OPENSTACK_HOST = "192.168.2.34"

b. Allow all hosts to access the dashboard:

ALLOWED_HOSTS = ['*']

c. Configure the memcached session storage service:

CACHES = {

   'default': {

       'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

       'LOCATION': '192.168.2.34:11211',

   }

}


Note:

Comment out any other session storage configuration.
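
The session engine setting that pairs with the memcached cache (from the installation guide; add it if not already present):

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'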

d. Configure user as the default role:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

e. Configure the time zone:

TIME_ZONE = "Asia/Shanghai"

Finish the installation


1. On CentOS, configure SELinux to allow the web server to connect to OpenStack services:

setsebool -P httpd_can_network_connect on

2. A packaging bug can leave the dashboard CSS failing to load; run the following command to fix it:

chown -R apache:apache /usr/share/openstack-dashboard/static

3. Start the services and enable them at boot:

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service


http://controller/dashboard/

Log in with the admin username and password.


[root@compute1 ~]# curl -I http://controller/dashboard/

HTTP/1.1 500 Internal Server Error

Date: Thu, 12 Oct 2017 12:16:13 GMT

Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5

Connection: close

Content-Type: text/html; charset=iso-8859-1

vim /etc/httpd/conf.d/openstack-dashboard.conf

Add WSGIApplicationGroup %{GLOBAL}


Not Found

The requested URL /auth/login/ was not found on this server.

[:error] [pid 16188] WARNING:py.warnings:RemovedInDjango19Warning: The use of the language code 'zh-cn' is deprecated. Please use the 'zh-hans' translation instead.

Fix: make sure the web root is set consistently, i.e. WEBROOT = '/dashboard/' in /etc/openstack-dashboard/local_settings matches the aliases in /etc/httpd/conf.d/openstack-dashboard.conf, then restart httpd.


Installing cinder

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456';


[root@controller neutron]# openstack user create --domain default --password-prompt cinder
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | cb24c3d7a80647d79fa851c8680e4880 |
| name                | cinder                           |
| password_expires_at | None                             |
+---------------------+----------------------------------+
openstack role add --project service --user cinder admin


[root@controller neutron]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 686dd8eed2d44e4081513673e76e8060 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller neutron]# openstack service create --name admin --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 635e003e1979452e8f7f63c70b999fb2 |
| name        | admin                            |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller neutron]# openstack service create --name public --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 87fd1a9b8b3240458ac3b5fa8925b79b |
| name        | public                           |
| type        | volume                           |
+-------------+----------------------------------+


[root@controller neutron]# openstack service create --name internal --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1ff8370f7d7c4da4b6a2567c9fe83254 |
| name        | internal                         |
| type        | volume                           |
+-------------+----------------------------------+


[root@controller neutron]# 

[root@controller ~]# openstack service create --name admin --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 12eaeb14115548b09688bd64bfee0af2 |
| name        | admin                            |
| type        | volumev2                         |
+-------------+----------------------------------+
You have new mail in /var/spool/mail/root
[root@controller ~]# openstack service create --name public --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | b1dfc4b8c3084d9789c0b0731e0f4a2a |
| name        | public                           |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name internal  --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | b8d90f1e658e402f9871a2fbad1d21b0 |
| name        | internal                         |
| type        | volumev2                         |
Note: only the cinder and cinderv2 service entities are required; the services named admin, public, and internal created above were a mistake (those names are endpoint interfaces, not services) and can be removed with openstack service delete.

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
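
Optionally list what is registered now; this also makes the stray admin/public/internal services mentioned above easy to spot:

openstack service list
openstack endpoint list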


Install

yum install openstack-cinder -y

cd /etc/cinder/

cp cinder.conf cinder.conf.bak

egrep -v "^$|^#" cinder.conf.bak > cinder.conf

vim cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.2.34
[database]
connection = mysql://cinder:123456@controller:3306/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[cinder]
os_region_name = RegionOne

Populate the database:

su -s /bin/sh -c "cinder-manage db sync" cinder



[root@controller cinder]# systemctl restart openstack-nova-api.service
[root@controller cinder]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@controller cinder]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service


Installing the cinder node (my physical machine cannot support three nodes, so the cinder role is installed on the compute node)

compute

Install LVM as the backend storage

compute#

yum install lvm2 -y

Configure LVM (LVM must be configured on every node that uses it, so that its devices are visible and can be scanned)

(My compute node already uses LVM itself, so the pre-existing LVM device sda has to be added to the filter)

compute#

vi /etc/lvm/lvm.conf

devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}

If the storage node itself runs its OS on LVM (on sda), the sda entry must be included:

cinder#
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]

If a compute node itself runs its OS on LVM, configure sda there as well:

compute#
filter = [ "a/sda/", "r/.*/" ]


systemctl enable lvm2-lvmetad.service

systemctl restart lvm2-lvmetad.service

yum install openstack-cinder targetcli python-keystone -y


Create the physical volume and the volume group

compute#
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
[root@compute1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@compute1 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
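
An optional check that LVM now sees the physical volume and volume group:

pvs
vgs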


Configure the cinder component configuration files (back up the file, strip the commented defaults, and use the configuration provided below):

vi /etc/cinder/cinder.conf

[database]
connection = mysql+pymysql://cinder:123456@controller:3306/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.2.35
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
# must match the cinder user's password created earlier
password = 123456
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

[root@controller cinder]# openstack volume service list

+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2017-10-12T10:02:31.000000 |
| cinder-volume    | compute1@lvm | nova | enabled | up    | 2017-10-12T10:03:32.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
[root@controller cinder]#


Create a volume

controller#

Command: openstack volume create --size [size in GB] [volume name]

Example: openstack volume create --size 1 volume1

[root@controller cinder]# openstack volume create --size 50 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2017-10-12T10:03:51.987431           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 51596429-b877-4f24-9574-dc266b3f4451 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 50                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 01f1a4a6e97244ec8a915cb120caa564     |
+---------------------+--------------------------------------+


View volume details

controller#

openstack volume list

[root@controller cinder]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 51596429-b877-4f24-9574-dc266b3f4451 | volume1      | available |   50 |             |
+--------------------------------------+--------------+-----------+------+-------------+




Perform the following on the VMware ESXi host at 192.168.2.53

Integrating VMware virtualization

# vi /etc/vmware/config

libdir = "/usr/lib/vmware"
authd.proxy.vim = "vmware-hostd:hostd-vmdb"
authd.proxy.nfc = "vmware-hostd:ha-nfc"
authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"
authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"
authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"
authd.fullpath = "/sbin/authd"
authd.soapServer = "TRUE"
vmauthd.server.alwaysProxy = "TRUE"
vhv.enable = "TRUE"

Deploying the VMware vCenter Appliance OVF

Click into the login console and configure the NIC


service network restart

Set up VMware vCenter

In a browser, open https://192.168.2.38:5480 and log in as root with the password vmware


Integrating vmware

(1) Install nova-compute on the controller node.

yum install openstack-nova-compute python-suds

(2) Back up nova.conf

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

(3) Edit nova.conf to integrate VMware vCenter.

vi /etc/nova/nova.conf

Add the following configuration.

In the [DEFAULT] section add:
compute_driver = vmwareapi.VMwareVCDriver
In the [vmware] section add:
vif_driver = nova.virt.baremetal.vif_driver.BareMetalVIFDriver
host_ip = 192.168.2.38  # vCenter IP address
host_username = root  # vCenter username
host_password = vmware  # vCenter password
datastore_regex = datastore5  # ESXi datastore
cluster_name = openstack  # vCenter cluster
api_retry_count = 10
integration_bridge = VM Network  # ESXi VM network port group
vlan_interface = vmnic5  # ESXi NIC


(4) Set the nova-compute service to start automatically, then reboot the controller node.

chkconfig openstack-nova-compute on

shutdown -r now

Or restart the following services:

service openstack-nova-api restart

service openstack-nova-cert restart

service openstack-nova-consoleauth restart

service openstack-nova-scheduler restart

service openstack-nova-conductor restart

service openstack-nova-novncproxy restart

service openstack-nova-compute restart

(5) Check whether OpenStack has integrated with VMware vCenter.

nova connectionError: HTTPSConnectionPool(host='192.168.2.38', port=443): Max retries exceeded with url: /sdk/vimService.wsdl (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x567f090>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH',))

If SSL was not configured beforehand (by default it is not), the code has to be modified to stop VMwareVCDriver from using SSL verification; otherwise creating an instance fails with "error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed".

Edit /usr/lib/python2.7/site-packages/oslo_vmware/service.py, comment out the following line:

#self.verify = cacert if cacert else not insecure

and set self.verify to False, as shown below:

class RequestsTransport(transport.Transport):
    def __init__(self, cacert=None, insecure=True, pool_maxsize=10):
        transport.Transport.__init__(self)
        # insecure flag is used only if cacert is not
        # specified.
        #self.verify = cacert if cacert else not insecure
        self.verify = False

        self.session = requests.Session()
        self.session.mount('file:///',
                           LocalFileAdapter(pool_maxsize=pool_maxsize))
        self.cookiejar = self.session.cookies

[root@controller nova]# service openstack-nova-compute restart

Redirecting to /bin/systemctl restart openstack-nova-compute.service

[root@controller nova]# nova hypervisor-list

+----+------------------------------------------------+-------+---------+

| ID | Hypervisor hostname                            | State | Status  |

+----+------------------------------------------------+-------+---------+

| 1  | compute1                                       | up    | enabled |

| 2  | domain-c7.21FC92E0-0460-40C7-9DE1-05536B3F9F2C | up    | enabled |

+----+------------------------------------------------+-------+---------+

[root@controller nova]# 


This concludes the lab; now on to learning cloud computing.

These are my installation notes.

