OpenStack Stein Installation (9): Networking Option 2 (Self-service Networks)

Install and configure the Networking components on the controller node.

  1. Install the components
    # yum install openstack-neutron openstack-neutron-ml2 \
      openstack-neutron-linuxbridge ebtables
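    (Optional) To confirm the packages are in place before continuing, a simple RPM query such as the following can be used; this check is not part of the official steps:
      # rpm -q openstack-neutron openstack-neutron-ml2 \
        openstack-neutron-linuxbridge ebtables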
  2. Configure the server component
    
    Edit the /etc/neutron/neutron.conf file and complete the following actions:
    ○ In the [database] section, configure database access:
    [database]
    # ...
    connection = mysql+pymysql://neutron:[email protected]/neutron
    Note: Comment out or remove any other connection options in the [database] section.

○ In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

○ In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:[email protected]

○ In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://stack.flex.net:5000
auth_url = http://stack.flex.net:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
Note: Comment out or remove any other options in the [keystone_authtoken] section.

○ In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://stack.flex.net:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123

○ In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
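
If you prefer to apply these options from the shell rather than editing neutron.conf by hand, a tool such as crudini can set them. The snippet below is only a sketch, reusing the hostnames and passwords from this guide and assuming the crudini package is installed:

  # crudini --set /etc/neutron/neutron.conf database connection \
    "mysql+pymysql://neutron:[email protected]/neutron"
  # crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
  # crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
  # crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
  # crudini --set /etc/neutron/neutron.conf DEFAULT transport_url \
    "rabbit://openstack:[email protected]"
  # crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

The [keystone_authtoken] and [nova] options can be set the same way.
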
3. Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

○ In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan

○ In the [ml2] section, enable VXLAN self-service networks:
[ml2]
# ...
tenant_network_types = vxlan

○ In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
Note: After you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency. Also note that the Linux bridge agent supports only VXLAN overlay networks.

○ In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security

○ In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider

○ In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

○ In the [securitygroup] section, enable ipset to increase efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true
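
The same crudini approach (if you chose to use it for neutron.conf) also works for the plug-in file; for example, the first two options above would be:

  # crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
  # crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan

and so on for the remaining [ml2], [ml2_type_flat], [ml2_type_vxlan] and [securitygroup] options.
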
4. Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

○ In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:eth1

Here eth1 is the underlying provider physical network interface on this host; replace it with the interface name used in your environment.

○ In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node. See Host networking for more information.

○ In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

○ Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables

To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system’s documentation for additional details on enabling this module.
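
A minimal sketch of loading the module, making both settings persistent, and then verifying them on a systemd-based distribution such as CentOS 7 follows; the file names under /etc/modules-load.d and /etc/sysctl.d are only examples:

  # modprobe br_netfilter
  # echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
  # echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
  # echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.d/99-bridge-nf.conf
  # sysctl -p /etc/sysctl.d/99-bridge-nf.conf
  # sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

Both values should report 1 before the Linux bridge agent is started.
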
5. Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

○ In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
6. Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:

○ In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
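
A quick way to double-check what the L3 and DHCP agents will actually read is to strip comments and blank lines from the edited files, for example:

  # grep -Ev '^[[:space:]]*(#|$)' /etc/neutron/l3_agent.ini /etc/neutron/dhcp_agent.ini

This should print only the section headers and the options set above.
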
When finished, return to the Networking configuration and continue with the remaining steps.