Preface:
Deploying OpenStack is actually simple, but that simplicity rests on a solid theoretical foundation. I have always held that in this field theory must guide practice. The web is full of assorted build guides that can all produce a basic private cloud, but have you noticed how much of their configuration is redundant? Why the repetition? What actually needs to be set where, and to what value? Many of those authors cannot say themselves; this article aims to set the record straight.
If anything is unclear, email me: [email protected]
Overview: this walkthrough is a basic three-node deployment; a clustered setup will be written up later, time permitting.
I. Networks:
1. Management network: 172.16.209.0/24
2. Data network: 1.1.1.0/24
II. Operating system: CentOS Linux release 7.2.1511 (Core)
III. Kernel: 3.10.0-327.el7.x86_64
IV. OpenStack release: Mitaka
Topology:
OpenStack Mitaka deployment
Conventions:
1. When editing configuration, never append a comment to the end of a setting; put comments on the line above or below it.
2. Always add settings right after the section header they belong to; do not modify values inside existing commented-out lines.
PART1: Environment preparation (run on all nodes)
I:
Give every machine a static IP, add /etc/hosts entries on every machine, set each machine's hostname, and disable firewalld and SELinux.
Optional: set up SSH key login from the controller to the other nodes (it makes the later steps easier, and in production a dedicated management host is well worth having), then edit /etc/hosts on the controller and scp it to the other nodes.
/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.209.115 controller01
172.16.209.117 compute01
172.16.209.119 network02
II: Configure the package repositories (on all nodes), i.e. set up the yum repo. Choose one of the two methods below; method 1 is recommended.
Method 1: a custom yum repository
I built a custom repo from packages downloaded from the official site. The advantage of a custom repo is strict control over package versions, keeping every host in the platform consistent and predictable. Concretely:
Pick a server to act as the yum repo (it can double as a cobbler or PXE host).
Upload openstack-mitaka-rpms.tar.gz
tar xvf openstack-mitaka-rpms.tar.gz -C /
Install httpd on this machine, start it, and enable it at boot.
ln -s /mitaka-rpms /var/www/html/
Then configure the repo on every machine:
[mitaka]
name=mitaka repo
baseurl=http://172.16.209.100/mitaka-rpms/
enabled=1
gpgcheck=0
Method 2: install the official repository
The packages in the custom repo come from the official site anyway; the catch is that upstream updates packages often, and a single package bump can introduce version-compatibility problems, hence the recommendation for method 1. For a test (rather than production) environment, method 2 is slightly more convenient.
On CentOS, run on all nodes:
yum install centos-release-openstack-mitaka -y
On RHEL, run on all nodes:
yum install https://rdoproject.org/repos/rdo-release.rpm -y #on RHEL remove the EPEL repo first, or it will conflict with this one
III. Build the yum cache and update the system (on all nodes)
yum makecache && yum install vim net-tools -y && yum update -y
Side note:
yum -y update
Upgrades all packages and may change software and system settings; both the release and the kernel are upgraded.
yum -y upgrade
Upgrades all packages without changing software or system settings; the release is upgraded but the kernel is left alone.
IV. Disable automatic yum updates (on all nodes)
After a CentOS 7 minimal install, yum-cron may download updates automatically. Many production systems do not want this; it can be switched off by hand:
[root@engine cron.weekly]# cd /etc/yum
[root@engine yum]# ls
fssnap.d pluginconf.d protected.d vars version-groups.conf yum-cron.conf yum-cron-hourly.conf
Edit yum-cron.conf and change download_updates = yes to no.
PS: yum install yum-plugin-priorities -y #if you would rather keep auto-updates, this plugin lets you prioritize the official repo over random third-party ones
V. Pre-install packages (on all nodes)
yum install python-openstackclient -y
yum install openstack-selinux -y
VI. Deploy the time service
yum install chrony -y #(on all nodes)
Controller node:
Edit the configuration:
/etc/chrony.conf
server ntp.staging.kycloud.lan iburst
allow <management network segment>/24
Start the service:
systemctl enable chronyd.service
systemctl start chronyd.service
All other nodes:
Edit the configuration:
/etc/chrony.conf
server <controller IP> iburst
Start the service:
systemctl enable chronyd.service
systemctl start chronyd.service
If the timezone is not Asia/Shanghai, change it:
# timedatectl set-local-rtc 1 # keep the hardware clock on local time; 0 keeps it on UTC
# timedatectl set-timezone Asia/Shanghai # set the system timezone to Shanghai
Setting distro differences aside, at a lower level changing the timezone is simpler than it looks:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Verify:
Run on every machine:
chronyc sources
A * in the S column means synchronization succeeded (it may take a few minutes; the clocks must be in sync before you continue).
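That check can be scripted; this sketch greps captured `chronyc sources` output for the `*` state marker (the sample lines below are fabricated stand-ins, not real output from this cluster):

```shell
# a synchronized source is flagged with '*' in the state column, right after the
# mode character at the start of its line; grep captured output for that marker
sample_output='^* controller01    2   6   377    23   +123us[ +456us] +/-   12ms
^- pool.example     3   6   377    24  -1052us[-1052us] +/-   45ms'
if printf '%s\n' "$sample_output" | grep -q '^.\*'; then
  echo "time synchronized"
else
  echo "NOT synchronized yet"
fi
```

On a live node you would pipe `chronyc sources` itself into the same grep instead of using a captured sample.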
VII: Deploy the MariaDB database
Controller node:
yum install mariadb mariadb-server python2-PyMySQL -y
Edit:
/etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = <controller management network IP>
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation
VIII: Deploy MongoDB for the Telemetry service
Controller node:
yum install mongodb-server mongodb -y
Edit: /etc/mongod.conf
bind_ip = <controller management network IP>
smallfiles = true
Start the service:
systemctl enable mongod.service
systemctl start mongod.service
IX: Deploy the RabbitMQ message queue (to verify: http://172.16.209.104:15672/ user: guest password: guest; note the web UI on port 15672 requires the management plugin, enabled with rabbitmq-plugins enable rabbitmq_management)
Controller node:
yum install rabbitmq-server -y
Start the service:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Create a rabbitmq user and password:
rabbitmqctl add_user openstack che001
Grant the new openstack user configure/write/read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
X: Deploy the memcached cache (it caches tokens for the keystone service)
Controller node:
yum install memcached python-memcached -y
Start the service:
systemctl enable memcached.service
systemctl start memcached.service
PART2: Deploying the identity service, keystone
I: Install and configure the service
1. Create the database and user
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. yum install openstack-keystone httpd mod_wsgi -y
3. Edit /etc/keystone/keystone.conf
[DEFAULT]
#better to generate the token with a command: openssl rand -hex 10
admin_token = che001
[database]
connection = mysql+pymysql://keystone:che001@controller01/keystone
[token]
provider = fernet
#Token providers: UUID, PKI, PKIZ, or Fernet #http://blog.csdn.net/miss_yang_cloud/article/details/49633719
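Rather than hard-coding che001, the comment above suggests generating admin_token. A sketch that does the equivalent of `openssl rand -hex 10` with coreutils only, in case openssl is not handy:

```shell
# produce a random 20-hex-character token (10 random bytes, hex-encoded),
# equivalent to: openssl rand -hex 10
token=$(od -vN10 -An -tx1 /dev/urandom | tr -d ' \n')
echo "admin_token = $token"
```

Paste the resulting value into keystone.conf in place of che001.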
4. Populate the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the fernet keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
6. Configure the Apache service
Edit /etc/httpd/conf/httpd.conf:
ServerName controller01
Edit /etc/httpd/conf.d/wsgi-keystone.conf:
Add the following:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
7. Start the service:
systemctl enable httpd.service
systemctl restart httpd.service #restart rather than start: httpd was already started earlier when setting up the HTTP-based yum repo
II: Create the service entity and API endpoints
1. Export temporary admin environment variables; they grant the rights for everything created below
export OS_TOKEN=che001
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
2. With those rights, create the identity service entity (the service catalog entry)
openstack service create \
--name keystone --description "OpenStack Identity" identity
3. For the service entity just created, create its three API endpoints
openstack endpoint create --region RegionOne \
identity public http://controller01:5000/v3
openstack endpoint create --region RegionOne \
identity internal http://controller01:5000/v3
openstack endpoint create --region RegionOne \
identity admin http://controller01:35357/v3
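Every service we add later needs the same three endpoints, so the repetition adds up. A helper that merely prints the `openstack endpoint create` commands (so you can review before running them; the function name and argument order are my own invention, not an OpenStack tool):

```shell
# print endpoint-create commands for the public/internal/admin interfaces of a
# service; the admin interface may use a different URL (keystone: port 35357)
gen_endpoints() {
  service=$1; public_url=$2; admin_url=${3:-$2}
  for iface in public internal admin; do
    url=$public_url
    [ "$iface" = admin ] && url=$admin_url
    echo "openstack endpoint create --region RegionOne $service $iface $url"
  done
}
gen_endpoints identity http://controller01:5000/v3 http://controller01:35357/v3
```

Piping the output through `sh` (after inspection) would execute the three commands above verbatim.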
III: Create a domain, project (tenant), user, and role, and associate the four
Create a common domain:
openstack domain create --description "Default Domain" default
Administrator: admin
openstack project create --domain default \
--description "Admin Project" admin
openstack user create --domain default \
--password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
Regular user: demo
openstack project create --domain default \
--description "Demo Project" demo
openstack user create --domain default \
--password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
Create the shared service project for the services that follow
Explanation: every new service added later needs four kinds of keystone operations: 1. create a project 2. create a user 3. create a role 4. associate them.
All later services share the single project service and the admin role, so in practice only
operations 2 and 4 remain for each service install.
openstack project create --domain default \
--description "Service Project" service
IV: Verification:
Edit /etc/keystone/keystone-paste.ini:
from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] pipelines,
remove admin_token_auth
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller01:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2016-08-17T08:29:18.528637Z |
| id | gAAAAABXtBJO-mItMcPR15TSELJVB2iwelryjAGGpaCaWTW3YuEnPpUeg799klo0DaTfhFBq69AiFB2CbFF4CE6qgIKnTauOXhkUkoQBL6iwJkpmwneMo5csTBRLAieomo4z2vvvoXfuxg2FhPUTDEbw-DPgponQO-9FY1IAEJv_QV1qRaCRAY0 |
| project_id | 9783750c34914c04900b606ddaa62920 |
| user_id | 8bc9b323a3b948758697cb17da304035 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
V: Create client environment scripts
Administrator: admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=che001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Regular user demo: demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=che001
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Result:
source admin-openrc
[root@controller01 ~]# openstack token issue
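A quick sanity check that sourcing an openrc file exports what you expect; this writes a cut-down admin-openrc to /tmp purely for illustration (the real file holds the full variable set listed above):

```shell
# write a minimal admin-openrc, source it, and confirm the variables took effect
cat > /tmp/admin-openrc <<'EOF'
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller01:35357/v3
EOF
. /tmp/admin-openrc
echo "$OS_USERNAME @ $OS_AUTH_URL"
```

If the echo shows empty values, the file was not sourced in the current shell (e.g. it was executed instead of sourced).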
part3: Deploying the image service
I: Install and configure the service
1. Create the database and user
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. keystone operations:
As noted above, every later service lives in the shared service project; for each one we create a user, grant the admin role, and associate them
. admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
Create the service entity
openstack service create --name glance \
--description "OpenStack Image" image
Create the endpoints
openstack endpoint create --region RegionOne \
image public http://controller01:9292
openstack endpoint create --region RegionOne \
image internal http://controller01:9292
openstack endpoint create --region RegionOne \
image admin http://controller01:9292
3. Install the packages
yum install openstack-glance -y
4. Initialize the image store. We use local storage here, but whichever backend you pick must exist before glance starts: glance probes the store through its driver at startup, so a store created after glance is running will not be seen until glance restarts. That is why this step is pulled forward.
Create the directory:
mkdir -p /var/lib/glance/images/
chown glance. /var/lib/glance/images/
5. Edit the configuration:
Edit /etc/glance/glance-api.conf:
[database]
#this database connection is what db_sync uses to create the schema; without it the tables cannot be generated
#glance-api works for VM creation without [database], but image metadata definitions break,
#logging: ERROR glance.api.v2.metadef_namespaces
connection = mysql+pymysql://glance:che001@controller01/glance
[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = che001
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf:
[database]
#glance-registry uses this database connection to look up image metadata
connection = mysql+pymysql://glance:che001@controller01/glance
Populate the database (warnings about "future" can be ignored):
su -s /bin/sh -c "glance-manage db_sync" glance
Start the services:
systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
systemctl start openstack-glance-api.service \
openstack-glance-registry.service
II: Verification:
. admin-openrc
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
(local mirror: wget http://172.16.209.100/cirros-0.3.4-x86_64-disk.img)
openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
openstack image list
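Before uploading with --disk-format qcow2, you can sanity-check that the file really is qcow2: the format's first bytes are the magic 'Q' 'F' 'I' 0xfb. The sketch below tests against a fabricated header rather than the real cirros image:

```shell
# qcow2 files begin with the magic bytes 'Q' 'F' 'I' 0xfb; checking the
# printable 3-byte prefix is enough for a quick pre-upload sanity test
is_qcow2() { [ "$(head -c 3 "$1")" = "QFI" ]; }
printf 'QFI\373fake-image-body' > /tmp/fake.qcow2
if is_qcow2 /tmp/fake.qcow2; then echo "looks like qcow2"; fi
```

Run `is_qcow2 cirros-0.3.4-x86_64-disk.img` against the downloaded file; `qemu-img info` gives a more thorough answer if it is installed.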
part4: Deploying the compute service
I: Controller node configuration
1. Create the databases and user
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. keystone operations
. admin-openrc
openstack user create --domain default \
--password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
--description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
compute public http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
compute internal http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
compute admin http://controller01:8774/v2.1/%\(tenant_id\)s
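The %\(tenant_id\)s in those URLs is a template that keystone fills in with the caller's project ID when building the service catalog. A quick illustration of the expansion (the project ID below is just a sample value):

```shell
# show how the compute endpoint template expands for a given project id
template='http://controller01:8774/v2.1/%(tenant_id)s'
tenant_id='9783750c34914c04900b606ddaa62920'
echo "$template" | sed "s/%(tenant_id)s/$tenant_id/"
```

This is why the compute endpoint differs per project while the catalog only stores one URL.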
3. Install the packages:
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler -y
4. Edit the configuration:
Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
#management network IP below
my_ip = 172.16.209.115
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:che001@controller01/nova_api
[database]
connection = mysql+pymysql://nova:che001@controller01/nova
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = che001
[vnc]
#management network IP below
vncserver_listen = 172.16.209.115
#management network IP below
vncserver_proxyclient_address = 172.16.209.115
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
5. Populate the databases (warnings about "future" can be ignored):
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
6. Start the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
II: Compute node configuration
1. Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y
2. Edit the configuration:
Edit /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
#compute node management network IP
my_ip = 172.16.209.117
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
#compute node management network IP
vncserver_proxyclient_address = 172.16.209.117
#controller management network IP
novncproxy_base_url = http://172.16.209.115:6080/vnc_auto.html
[glance]
api_servers = http://controller01:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
3. If deploying nova on a machine without hardware virtualization support, confirm that
egrep -c '(vmx|svm)' /proc/cpuinfo returns 0;
if it does, edit /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
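That decision can be scripted. The sketch below reads any cpuinfo-format file so the logic can be exercised off-box with fabricated samples; on a real compute node you would pass /proc/cpuinfo:

```shell
# choose virt_type from CPU flags: vmx (Intel VT-x) or svm (AMD-V) means KVM
# hardware acceleration is available, otherwise fall back to plain qemu
pick_virt_type() {
  if grep -Eq 'vmx|svm' "$1"; then echo kvm; else echo qemu; fi
}
# fabricated sample with a vmx flag, standing in for /proc/cpuinfo
printf 'flags\t\t: fpu vme de pse vmx ssse3\n' > /tmp/cpuinfo.sample
pick_virt_type /tmp/cpuinfo.sample
```

Usage on a compute node: `pick_virt_type /proc/cpuinfo`, then set the result as virt_type under [libvirt].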
4. Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
III: Verification
On the controller:
[root@controller01 ~]# source admin-openrc
[root@controller01 ~]# openstack compute service list
+----+------------------+--------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+--------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller01 | internal | enabled | up | 2016-08-17T08:51:37.000000 |
| 2 | nova-conductor | controller01 | internal | enabled | up | 2016-08-17T08:51:29.000000 |
| 8 | nova-scheduler | controller01 | internal | enabled | up | 2016-08-17T08:51:38.000000 |
| 12 | nova-compute | compute01 | nova | enabled | up | 2016-08-17T08:51:30.000000 |
part5: Deploying the networking service
I: Controller node configuration
1. Create the database and user
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'che001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'che001';
flush privileges;
2. keystone operations
. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
network public http://controller01:9696
openstack endpoint create --region RegionOne \
network internal http://controller01:9696
openstack endpoint create --region RegionOne \
network admin http://controller01:9696
3. Install the packages
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y
4. Configure the server component
Edit /etc/neutron/neutron.conf and complete the following sections, including the database access in [database]:
[DEFAULT]
core_plugin = ml2
service_plugins = router
#the setting below enables overlapping IP address ranges between tenants
allow_overlapping_ips = True
rpc_backend = rabbit
#auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[database]
connection = mysql+pymysql://neutron:che001@controller01/neutron
[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = che001
[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = che001
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True
Edit /etc/nova/nova.conf:
[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = che001
service_metadata_proxy = True
#must match the metadata agent's metadata_proxy_shared_secret (set to che001 on the network node), or metadata requests are rejected
metadata_proxy_shared_secret = che001
5. Create the symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
6. Populate the database (warnings about "future" can be ignored):
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
7. Restart the nova service
systemctl restart openstack-nova-api.service
8. Start the neutron service
systemctl enable neutron-server.service
systemctl start neutron-server.service
II: Network node configuration
1. Edit /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
2. Apply the settings immediately:
sysctl -p
3. Install the packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Configure the components
Edit /etc/neutron/neutron.conf:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
[database]
connection = mysql+pymysql://neutron:che001@controller01/neutron
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[ovs]
#IP below is the network node's data network IP
local_ip=1.1.1.119
bridge_mappings=external:br-ex
[agent]
tunnel_types=gre,vxlan
#l2_population=True
prevent_arp_spoofing=True
6. Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge=br-ex
7. Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=True
8. Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_ip=controller01
metadata_proxy_shared_secret=che001
9. Start the services (start them before creating the br-ex bridge)
Network node:
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
10. Create the bridge
Note: if NICs are scarce and you want the network node's management NIC to be the physical NIC bound to br-ex,
remove the IP from the management NIC and create a config file for br-ex that carries the original management IP:
[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
NM_CONTROLLED=no
[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
#HWADDR=bc:ee:7b:78:7b:a7
IPADDR=172.16.209.10
GATEWAY=172.16.209.1
NETMASK=255.255.255.0
DNS1=202.106.0.20
DNS2=8.8.8.8
#be sure to include NM_CONTROLLED=no or the interface may fail to come up
NM_CONTROLLED=no
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2 #add the physical port (eth2 here; use your external NIC) to br-ex before restarting the network service
systemctl restart network #before restarting networking, make sure the bridged NIC (eth2) has no IP or is simply down, and that NM_CONTROLLED=no is set, or the service will fail to start
III: Compute node configuration
1. Edit /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
2. sysctl -p
3. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
4. Edit /etc/neutron/neutron.conf:
[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = che001
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
#IP below is the compute node's data network IP
local_ip = 1.1.1.117
#bridge_mappings = vlan:br-vlan
[agent]
tunnel_types = gre,vxlan
#l2_population receives ARP entries for newly created VMs pushed from the control side, distributing
#ARP information to every tunnel endpoint (compute node) and eliminating most of the initial ARP
#broadcast traffic ("initial" because even without l2pop, a VM caches another VM's address locally
#after the first broadcast); see https://assafmuller.com/2014/05/21/ovs-arp-responder-theory-and-practice/
l2_population = True
#arp_responder turns br-tun into an ARP proxy: ARP requests from this node for other VMs (as opposed
#to physical hosts) are answered locally by br-tun
arp_responder = True
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
6. Edit /etc/nova/nova.conf
[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = che001
7. Start the services
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
part6: Deploying the dashboard
On the controller node
1. Install the package
yum install openstack-dashboard -y
2. Configure /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller01:11211',
}
}
#note: it must be v3, not v3.0
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"
3. Start the services
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
4. Verify:
http://172.16.209.115/dashboard
Summary:
Only each service's API layer talks to keystone, so do not scatter auth configuration everywhere.
When an instance is built, nova-compute is what calls the other services' APIs, so there is no call-out configuration to add on the controller for that.
ml2 is neutron's core plugin and only needs configuring on the controller.
The network node only needs its agents configured.
Each component's API does more than accept requests; among other things it validates them. The controller's nova.conf needs neutron's API and credentials because nova boot must verify that the network the user submitted is valid; the controller's neutron.conf needs nova's API and credentials because deleting a network port requires asking nova-api whether an instance is still using it. The compute node's nova.conf needs the neutron settings because nova-compute asks neutron-server to create ports. "Port" here means a port on the virtual switch.
If this is unclear, study how the OpenStack components communicate and how an instance is created.
Network troubleshooting:
Network node:
[root@network02 ~]# ip netns show
qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8
qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec
qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83
[root@network02 ~]# ip netns exec qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83 bash
[root@network02 ~]# ping -c2 www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=52 time=33.5 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=52 time=25.9 ms
If the ping fails, exit the namespace and rebuild the bridges:
ovs-vsctl del-br br-ex
ovs-vsctl del-br br-int
ovs-vsctl del-br br-tun
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0
systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
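To run the same connectivity check across every router, first pick the qrouter namespaces out of `ip netns show`. The parsing can be exercised against captured output (the IDs below echo the listing above); the ping loop is only printed here, since it needs root on a live network node:

```shell
# extract router namespaces from captured `ip netns show` output;
# on a live network node, replace $sample with the output of: ip netns show
sample='qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8
qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec
qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83'
routers=$(printf '%s\n' "$sample" | awk '/^qrouter-/ {print $1}')
for ns in $routers; do
  echo "would run: ip netns exec $ns ping -c2 www.baidu.com"
done
```

Dropping the `echo "would run: ..."` wrapper gives a real per-router connectivity sweep.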