Deploying Keystone and Swift on Ubuntu 16.04

1. Lab environment

Node        OS                Network interfaces            Disks                         Services
controller  Ubuntu 16.04 LTS  ens33 (192.168.1.130), ens34  /dev/sda                      identity, proxy-server
storage     Ubuntu 16.04 LTS  ens33 (192.168.1.140), ens34  /dev/sda, /dev/sdb, /dev/sdc  account-server, container-server, object-server

The ens33 interface serves as the management network interface and is configured in NAT mode in VMware; ens34 serves as the provider network interface and is configured in host-only mode.

This guide deploys the OpenStack Pike release.

2. Environment configuration

Official documentation: https://docs.openstack.org/install-guide/

2.1 Switch the package mirror

First, point Ubuntu's package sources at a mirror:

# cd /etc/apt
# cp sources.list sources.list.bak
# gedit sources.list

New contents of sources.list:
deb http://mirrors.aliyun.com/ubuntu/ xenial main
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial main

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main

deb http://mirrors.aliyun.com/ubuntu/ xenial universe
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main

deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
#deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe
# apt-get update && apt-get upgrade

2.2 Configure the network

Configure IP addresses

On the controller node, edit the /etc/network/interfaces file:

...

# The management network interface
auto ens33
iface ens33 inet static
address 192.168.1.130
netmask 255.255.255.0
gateway 192.168.1.2

# The provider network interface
auto ens34
iface ens34 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

On the storage node, edit the /etc/network/interfaces file:

...

# The management network interface
auto ens33
iface ens33 inet static
address 192.168.1.140
netmask 255.255.255.0
gateway 192.168.1.2

# The provider network interface
auto ens34
iface ens34 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

Reboot both the controller and storage nodes.

Configure name resolution

On both the controller and storage nodes, edit the /etc/hosts file:

...
# controller
192.168.1.130       controller

# object1
192.168.1.140       object1
...

Configure DNS

On both the controller and storage nodes, edit the /etc/resolv.conf file:

...
nameserver 192.168.1.2	# the gateway

This setting must be reapplied after every reboot.
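To avoid redoing this after every reboot, an alternative (assuming the default resolvconf setup on Ubuntu 16.04) is to declare the nameserver in the interface stanza of /etc/network/interfaces, e.g. on the controller:

```
# The management network interface
auto ens33
iface ens33 inet static
address 192.168.1.130
netmask 255.255.255.0
gateway 192.168.1.2
dns-nameservers 192.168.1.2
```

resolvconf then regenerates /etc/resolv.conf from this line whenever the interface comes up.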

2.3 Configure NTP

controller node

# apt install chrony

Edit the /etc/chrony/chrony.conf file:

server NTP_SERVER iburst
...
allow 192.168.1.0/24

Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP server. Here, NTP_SERVER is set to controller.

Restart the service and check its status:

# service chrony restart
# service chrony status -l

storage node

# apt install chrony

Edit the /etc/chrony/chrony.conf file:

server controller iburst

Restart the service and check its status:

# service chrony restart
# service chrony status -l

2.4 Install the OpenStack packages

Install on both the controller and storage nodes.

# apt install software-properties-common
# add-apt-repository cloud-archive:pike
# apt update && apt dist-upgrade
# apt install python-openstackclient

2.5 Install the database

Install on the controller node.

# apt install mariadb-server python-pymysql

Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file:

[mysqld]
bind-address = 192.168.1.130

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the service and check its status:

# service mysql restart
# service mysql status -l

Set a password for the root user:

# mysql_secure_installation

2.6 Install the message queue

Install on the controller node.

# apt install rabbitmq-server

Add the openstack user (password: RABBITMQ_PASS):

# rabbitmqctl add_user openstack RABBITMQ_PASS

Grant permissions (the three ".*" patterns grant configure, write, and read access, respectively, on all resources):

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

2.7 Install memcached

Install on the controller node.

# apt install memcached python-memcache

Edit the /etc/memcached.conf file:

-l 192.168.1.130

Restart the service and check its status:

# service memcached restart
# service memcached status -l

2.8 Install Etcd

Install on the controller node.

# apt install etcd

Edit the /etc/default/etcd file:

ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.130:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.130:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.130:2379"

Enable and restart the etcd service:

# systemctl enable etcd
# systemctl restart etcd
# service etcd status -l

3. Install Keystone

Official documentation: https://docs.openstack.org/keystone/pike/install/

Perform these steps on the controller node.

3.1 Install and configure

# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;

-- Grant privileges; the password is KEYSTONE_DBPASS
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

# apt install keystone apache2 libapache2-mod-wsgi

Edit the /etc/keystone/keystone.conf file:

[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone


[token]
# ...
provider = fernet

Populate the database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

Check whether the population succeeded:

MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;

If the table list is empty, the population failed.

Initialize Fernet key repositories:

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service (the admin user's password is ADMIN_PASS):

# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Edit the /etc/apache2/apache2.conf file:

ServerName controller

Restart the service and check its status:

# service apache2 restart
# service apache2 status -l

3.2 Create a project

Create an admin-openrc file:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Load the admin-openrc file:

$ . admin-openrc
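Note that the file is sourced (with `.` or `source`) rather than executed, so the exported variables land in the current shell. A minimal, self-contained illustration (the file path and values here are made-up examples, not part of the deployment):

```shell
# Write a minimal openrc-style file (illustrative values only)
cat > /tmp/example-openrc <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:35357/v3
EOF

# Source it: the exported variables become visible in this shell
. /tmp/example-openrc
echo "$OS_USERNAME"    # -> admin
```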

Create the service project:

$ openstack project create --domain default --description "Service Project" service

4. Install Swift

Official documentation: https://docs.openstack.org/swift/pike/install/

4.1 Configure the controller node

$ . admin-openrc

# Create the swift user. The swift user's password is SWIFT_PASS.
$ openstack user create --domain default --password-prompt swift

# Add the admin role to the swift user
$ openstack role add --project service --user swift admin

# Create the swift service entity
$ openstack service create --name swift \
  --description "OpenStack Object Storage" object-store
  
# Create the Object Storage service API endpoints
$ openstack endpoint create --region RegionOne \
  object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
  
$ openstack endpoint create --region RegionOne \
  object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
  
$ openstack endpoint create --region RegionOne \
  object-store admin http://controller:8080/v1
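The backslashes in `%\(project_id\)s` only protect the parentheses from the shell; the URL stored in the catalog contains the literal Python format placeholder `%(project_id)s`, which the proxy fills in with the caller's project ID. A quick sketch of that substitution (the project ID `abc123` is a made-up example):

```shell
# The endpoint URL as stored in the Keystone catalog
url='http://controller:8080/v1/AUTH_%(project_id)s'

# Simulate the substitution Swift performs at request time
python3 -c "print('$url' % {'project_id': 'abc123'})"
# -> http://controller:8080/v1/AUTH_abc123
```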

Install the packages:

# apt-get install swift swift-proxy python-swiftclient \
  python-keystoneclient python-keystonemiddleware \
  memcached
# mkdir /etc/swift

# curl -o /etc/swift/proxy-server.conf https://opendev.org/openstack/swift/raw/branch/stable/pike/etc/proxy-server.conf-sample

Edit the /etc/swift/proxy-server.conf file:

[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift


[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server


[app:proxy-server]
use = egg:swift#proxy
# ...
account_autocreate = true


[filter:keystoneauth]
use = egg:swift#keystoneauth
# ...
operator_roles = admin,user


[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True


[filter:cache]
use = egg:swift#memcache
# ...
memcache_servers = controller:11211

4.2 Configure the storage node

# apt-get install xfsprogs rsync
# mkfs.xfs /dev/sdb
# mkfs.xfs /dev/sdc
# mkdir -p /srv/node/sdb
# mkdir -p /srv/node/sdc

Edit the /etc/fstab file:

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

Mount the devices:

# mount /srv/node/sdb
# mount /srv/node/sdc

Create an /etc/rsyncd.conf file:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.140

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

Edit the /etc/default/rsync file:

RSYNC_ENABLE=true

Start the service and check its status:

# service rsync start
# service rsync status -l

Install the packages:

# apt-get install swift swift-account swift-container swift-object
# curl -o /etc/swift/account-server.conf https://opendev.org/openstack/swift/raw/branch/stable/pike/etc/account-server.conf-sample

# curl -o /etc/swift/container-server.conf https://opendev.org/openstack/swift/raw/branch/stable/pike/etc/container-server.conf-sample

# curl -o /etc/swift/object-server.conf https://opendev.org/openstack/swift/raw/branch/stable/pike/etc/object-server.conf-sample

Edit the /etc/swift/account-server.conf file:

[DEFAULT]
# ...
bind_ip = 192.168.1.140
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True


[pipeline:main]
pipeline = healthcheck recon account-server


[filter:recon]
use = egg:swift#recon
# ...
recon_cache_path = /var/cache/swift

Edit the /etc/swift/container-server.conf file:

[DEFAULT]
# ...
bind_ip = 192.168.1.140
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True


[pipeline:main]
pipeline = healthcheck recon container-server


[filter:recon]
use = egg:swift#recon
# ...
recon_cache_path = /var/cache/swift

Edit the /etc/swift/object-server.conf file:

[DEFAULT]
# ...
bind_ip = 192.168.1.140
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True


[pipeline:main]
pipeline = healthcheck recon object-server


[filter:recon]
use = egg:swift#recon
# ...
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

Ensure proper ownership of the mount point directory structure and create the recon cache directory:

# chown -R swift:swift /srv/node
# mkdir -p /var/cache/swift
# chown -R root:swift /var/cache/swift
# chmod -R 775 /var/cache/swift

4.3 Create and distribute the rings

Perform these steps on the controller node.

For simplicity, this guide uses one region and two zones with 2^10 (1024) maximum partitions, 2 replicas of each object, and 1 hour minimum time between moving a partition more than once. For Object Storage, a partition indicates a directory on a storage device rather than a conventional partition table.
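The three arguments passed to `create` below map directly onto that description: 10 is the partition power (2^10 partitions), 2 is the replica count, and 1 is the minimum number of hours between moves of a partition. A quick arithmetic check of the partition count:

```shell
# swift-ring-builder <builder-file> create <part_power> <replicas> <min_part_hours>
part_power=10
echo $((1 << part_power))   # number of ring partitions -> 1024
```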

# cd /etc/swift

account ring

# swift-ring-builder account.builder create 10 2 1

Add each storage node to the ring:

# swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6202 --device sdb --weight 100

# swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6202 --device sdc --weight 100

Verify the ring contents:

# swift-ring-builder account.builder

Rebalance the ring:

# swift-ring-builder account.builder rebalance

container ring

# swift-ring-builder container.builder create 10 2 1

# swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6201 --device sdb --weight 100

# swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6201 --device sdc --weight 100
  
# swift-ring-builder container.builder

# swift-ring-builder container.builder rebalance

object ring

# swift-ring-builder object.builder create 10 2 1

# swift-ring-builder object.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6200 --device sdb --weight 100

# swift-ring-builder object.builder add \
  --region 1 --zone 1 --ip 192.168.1.140 --port 6200 --device sdc --weight 100
  
# swift-ring-builder object.builder

# swift-ring-builder object.builder rebalance

Distribute the rings

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.
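One way to do the copy, assuming root SSH access to the storage node under the object1 name defined earlier in /etc/hosts:

```shell
# Copy the three ring files to the storage node (assumes root SSH access to object1)
cd /etc/swift
scp account.ring.gz container.ring.gz object.ring.gz root@object1:/etc/swift/
```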

4.4 Final steps

Perform these steps on the controller node.

# curl -o /etc/swift/swift.conf \
https://opendev.org/openstack/swift/raw/branch/stable/pike/etc/swift.conf-sample

Edit the /etc/swift/swift.conf file:

[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX


[storage-policy:0]
# ...
name = Policy-0
default = yes

Replace HASH_PATH_PREFIX and HASH_PATH_SUFFIX with unique values.
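One common way to generate such unique values (assuming openssl is installed):

```shell
# Each invocation prints 20 random hex characters; run once per value
openssl rand -hex 10   # use for HASH_PATH_SUFFIX
openssl rand -hex 10   # use for HASH_PATH_PREFIX
```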

Note: keep HASH_PATH_SUFFIX and HASH_PATH_PREFIX secret and do not change or lose them.

Copy the swift.conf file to the /etc/swift directory on each storage node and any additional nodes running the proxy service.

On all nodes:

# chown -R root:swift /etc/swift

On the controller node and any other nodes running the proxy service, restart the Object Storage proxy service including its dependencies:

# service memcached restart
# service swift-proxy restart
# service swift-proxy status -l

On the storage nodes, start the Object Storage services:

# swift-init all start

4.5 Verify operation

Perform these steps on the controller node.

$ . admin-openrc
# Show the service status
$ swift stat

# Create container1 container
$ openstack container create container1

# Upload a file (FILE) to the container1 container
$ openstack object create container1 FILE

# List files in the container1 container
$ openstack object list container1

# Download FILE from the container1 container
$ openstack object save container1 FILE