OpenStack Installation Walkthrough (Juno) - Adding the Object Storage Service (Swift) - Installation and Configuration

Install and configure on the controller node

Create the service credentials and API endpoints for Swift (unlike most other services, Object Storage does not use a SQL database on the controller node)

  1. Create the service credentials:

    Source the admin credentials to gain access to admin-only CLI commands:
    $ source admin-openrc.sh

    Create the swift user:
    <pre>$ keystone user-create --name swift --pass SWIFT_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | dcf5d53f027b44d38c205ad06717812c |
|   name   |              swift               |
| username |              swift               |
+----------+----------------------------------+</pre>
Replace SWIFT_PASS with a suitable password.

Add the admin role to the swift user:
$ keystone user-role-add --user swift --tenant service --role admin
This command produces no output.

Create the swift service entity:
<pre>$ keystone service-create --name swift --type object-store \
  --description "OpenStack Object Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Object Storage     |
|   enabled   |               True               |
|      id     | 11519978722e4fb4be75f086aca49334 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+</pre>

  2. Create the Object Storage service API endpoints (a quick verification sketch follows this list):
    <pre>$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ object-store / {print $2}') \
  --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://controller:8080 \
  --region regionOne
+-------------+----------------------------------------------+
|   Property  |                    Value                     |
+-------------+----------------------------------------------+
|   adminurl  |            http://controller:8080            |
|      id     |       ec003b88a6144afda3fc2b34acb93ded       |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
|  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s |
|    region   |                  regionOne                   |
|  service_id |       11519978722e4fb4be75f086aca49334       |
+-------------+----------------------------------------------+</pre>
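Before continuing, you can optionally confirm what was just registered with the Identity service. This is only a quick sanity check using standard keystone CLI commands; the IDs in your environment will differ from the ones shown above:
<pre>$ source admin-openrc.sh
$ keystone user-list | grep swift
$ keystone service-list | grep object-store
$ keystone endpoint-list | grep 8080</pre>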

Install and configure the components

  1. Install the packages:
    # apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
  2. Create the /etc/swift directory.
  3. Obtain the proxy service configuration file from the Object Storage source repository:
    # curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent...
  4. Edit the /etc/swift/proxy-server.conf file:

    In the [DEFAULT] section, configure the bind port, user, and configuration directory:
    <pre>[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift</pre>

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
</pre>

In the [app:proxy-server] section, enable account management:
<pre>[app:proxy-server]
...
allow_account_management = true
account_autocreate = true
</pre>

In the [filter:keystoneauth] section, configure the operator roles:
<pre>[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_
</pre>

In the [filter:authtoken] section, configure Identity service access:
<pre>[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS
delay_auth_decision = true
</pre>
Replace SWIFT_PASS with the password you chose for the swift user in the Identity service. Comment out any auth_host, auth_port, and auth_protocol options, because the identity_uri option replaces them.

In the [filter:cache] section, configure the memcached location (a quick memcached check follows this list):
<pre>[filter:cache]
...
memcache_servers = 127.0.0.1:11211
</pre>
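The proxy service itself is normally started only after the account, container, and object rings have been built (covered later in this walkthrough). At this point, a minimal sanity check is to confirm that memcached, which the [filter:cache] section points at, is running and listening on the loopback interface:
<pre># service memcached status
# netstat -lnpt | grep 11211</pre>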

Install and configure on the object storage nodes

This setup uses two object storage nodes, each with two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device.
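If you are unsure what is currently on the new disks, you can inspect them before proceeding; this assumes they appear as /dev/sdb and /dev/sdc, as in the rest of this section:
<pre># fdisk -l /dev/sdb /dev/sdc
# parted -l</pre>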

Configure storage

Add two virtual disks to each object node VM: Settings -> Storage -> Controller: SATA -> Add Hard Disk.
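The same disks can be attached from the command line instead of the GUI. The sketch below is only an illustration for the object1 VM: the VM name, the "SATA" controller name, the port numbers, the file names, and the 8 GB size are placeholders to adapt to your setup (repeat for object2):
<pre>VBoxManage createhd --filename object1-sdb.vdi --size 8192
VBoxManage createhd --filename object1-sdc.vdi --size 8192
VBoxManage storageattach object1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium object1-sdb.vdi
VBoxManage storageattach object1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium object1-sdc.vdi</pre>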

Basic environment configuration for the object nodes

Create two storage nodes, object1 and object2, from the virtual machine template described earlier. Configure their basic environment as follows:

Configure the network

Object node VM network settings (Settings -> Network):

  1. Adapter 1: Attached to -> Host-only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #2, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked;
  2. Adapter 2: Attached to -> NAT, Adapter Type -> Paravirtualized Network (virtio-net), Cable Connected -> checked.
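Equivalently, the two adapters can be configured with VBoxManage while the VM is powered off. This is a sketch for object1; the host-only adapter name is the one used above and may differ on your host:
<pre>VBoxManage modifyvm object1 --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter #2" \
  --nictype1 virtio --nicpromisc1 allow-all --cableconnected1 on
VBoxManage modifyvm object1 --nic2 nat --nictype2 virtio --cableconnected2 on</pre>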

After starting each virtual machine, configure its network by editing the /etc/network/interfaces file and adding the following:
object1:
<pre># The management network interface
auto eth0
iface eth0 inet static
    address 10.10.10.14
    netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>
object2:
<pre># The management network interface
auto eth0
iface eth0 inet static
    address 10.10.10.15
    netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>

Configure name resolution: edit the /etc/hostname file and change the hostname to object1 or object2 as appropriate, then edit the /etc/hosts file and add the following entries:
<pre>10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2
</pre>
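After applying the network and hostname changes (a reboot, or bringing the interfaces down and up again, makes them take effect), a quick connectivity check from each object node might look like this; the names and addresses assume the layout above:
<pre># hostname
# ip addr show eth0
# ping -c 3 controller
# getent hosts object1 object2</pre>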

Configure NTP

Edit the /etc/ntp.conf configuration file and add the following line:

<pre>server controller iburst</pre>
Comment out all other server entries. If the /var/lib/ntp/ntp.conf.dhcp file exists, delete it.

Restart the NTP service: # service ntp restart
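To confirm that each object node is actually synchronizing against the controller, query the NTP peers; an entry for controller should appear after a short delay:
<pre># ntpq -p</pre>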

Configure storage

  1. Install the supporting utility packages:

# apt-get install xfsprogs rsync

  2. Format the /dev/sdb and /dev/sdc devices as XFS:
    <pre># mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>
    <pre># mkfs.xfs /dev/sdc
meta-data=/dev/sdc               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>

  3. Create the mount point directory structure:

# mkdir -p /srv/node/sdb
# mkdir -p /srv/node/sdc

  4. Edit the /etc/fstab file and add the following entries:

    <pre>/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2</pre>

  5. Mount the devices (a verification sketch follows this list):

# mount /srv/node/sdb
# mount /srv/node/sdc

  6. Create the /etc/rsyncd.conf file with the following contents:
    <pre>uid = swift

gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

  7. Edit the /etc/default/rsync file and enable the rsync service:
    <pre>RSYNC_ENABLE=true</pre>
  8. Start the rsync service:

# service rsync start
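As mentioned in step 5, here is a minimal sketch for verifying the storage layout and the rsync modules on each object node. The module names match the /etc/rsyncd.conf above, and the address assumes object1 (use 10.10.10.15 for object2):
<pre># df -h /srv/node/sdb /srv/node/sdc
# rsync rsync://10.10.10.14/</pre>
The second command should list the account, container, and object modules.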

Install and configure the storage node components

  1. Install the packages:

# apt-get install swift swift-account swift-container swift-object

  2. Obtain the account, container, and object service configuration files from the Object Storage source repository:

# curl -o /etc/swift/account-server.conf https://raw.githubusercontent...
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent...
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent...

  3. Edit the /etc/swift/account-server.conf file:
    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]

...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon account-server
</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift
</pre>

  4. Edit the /etc/swift/container-server.conf file:
    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]

...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon container-server
</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift
</pre>

  5. Edit the /etc/swift/object-server.conf file:

    In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:
    <pre>[DEFAULT]

...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the object node on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.

In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon object-server
</pre>

In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift
</pre>

  6. Ensure proper ownership of the mount point directory structure:

# chown -R swift:swift /srv/node

  7. Create the recon directory and ensure proper ownership of it (a small sanity-check sketch follows this list):

# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift
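The account, container, and object services are normally started only after the rings built on the controller have been copied to /etc/swift on each storage node, which is covered in the next part of this walkthrough. For now, a small sanity check of the per-node settings and ownership might look like this:
<pre># grep -E 'bind_(ip|port)' /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf
# ls -ld /srv/node/sdb /srv/node/sdc /var/cache/swift</pre>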
