OpenStack is an open-source cloud computing management platform, a collection of open-source software projects jointly initiated and developed by NASA (the US National Aeronautics and Space Administration) and Rackspace, released under the Apache License.
OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized.
OpenStack covers networking, virtualization, operating systems, servers, and more. It is a platform under active development: its projects are grouped by maturity and importance into core projects, incubated projects, supporting projects, and related projects. Each project has its own committee and project technical lead, and the groupings are not fixed, so an incubated project can be promoted to a core project as it matures and grows in importance.
Core components
1. Compute (Nova): a set of controllers that manage the entire lifecycle of virtual machine instances for individual users or groups, provisioning virtual resources on demand. Nova handles instance creation, boot, shutdown, suspend, pause, resize, migration, reboot, and destruction, and sets specifications such as CPU and memory.
2. Object Storage (Swift): a system for storing objects in massively scalable clusters with built-in redundancy and fault tolerance. It stores and retrieves files, provides image storage for Glance, and provides volume backup for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images. It supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and offers basic functions for creating, uploading, and deleting images and editing image metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles.
5. Networking (Neutron): provides network virtualization for the cloud and network connectivity for the other OpenStack services. Through its API, users can define Networks, Subnets, and Routers and configure DHCP, DNS, load balancing, and L3 services. It supports GRE and VLAN networks, and its plugin architecture supports many mainstream networking vendors and technologies, such as Open vSwitch.
6. Block Storage (Cinder): provides persistent block storage for running instances. Its pluggable driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching volumes from instances.
7. Dashboard (Horizon): a web management portal for the various OpenStack services that simplifies operations such as launching instances, allocating IP addresses, and configuring access control.
8. Metering (Ceilometer): collects nearly all events that occur inside OpenStack and provides the data for billing, monitoring, and other services.
9. Orchestration (Heat): provides template-driven orchestration, automating the deployment of cloud infrastructure software environments (compute, storage, and network resources).
10. Database Service (Trove): provides scalable and reliable relational and non-relational database engine services in an OpenStack environment.
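Once the deployment below is complete, the service entities registered in Keystone can be listed from the controller as a quick sanity check (this assumes the admin-openrc credentials script created later in this guide has been sourced):

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack service list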
Prepare three CentOS 7 virtual machines: two of them with two NICs (NAT and host-only) and two of them with multiple additional disks. Configure IP addresses and hostnames, synchronize the system time, disable the firewall and SELinux, and map hostnames to IP addresses (see the sketch after the table below).
ip | hostname
---|---
ens33 (NAT): 192.168.29.145, ens37 (host-only): 192.168.31.135 | controller
ens33 (NAT): 192.168.29.146, ens37 (host-only): 192.168.31.136 | computer
ens33 (NAT): 192.168.29.147 | storager
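A minimal sketch of these preparation steps, run on every node (hostnames and addresses are taken from the table above; adjust the hostname per node):

# map hostnames to IP addresses
cat >> /etc/hosts <<EOF
192.168.29.145 controller
192.168.29.146 computer
192.168.29.147 storager
EOF
# set the hostname (use computer/storager on the other nodes)
hostnamectl set-hostname controller
# disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# synchronize the system time
yum install -y chrony
systemctl start chronyd && systemctl enable chronyd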
Install the EPEL repository
[root@controller ~]# yum install epel-release -y
[root@computer ~]# yum install epel-release -y
Install the OpenStack (Queens) repository
[root@controller ~]# yum install -y centos-release-openstack-queens
[root@computer ~]# yum install -y centos-release-openstack-queens
Install the OpenStack client and the openstack-selinux package
[root@controller ~]# yum install python-openstackclient openstack-selinux -y
[root@computer ~]# yum install python-openstackclient openstack-selinux -y
Install the MySQL database and memcached
[root@controller ~]# yum install mysql-server mysql memcached python2-PyMySQL -y
Install the message queue service
[root@controller ~]# yum install -y rabbitmq-server
Install the Keystone service
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
Install the Glance service
[root@controller ~]# yum install openstack-glance -y
Install the Nova services on controller
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
Install the Nova compute service on computer
[root@computer ~]# yum install openstack-nova-compute -y
Install the Neutron services on controller
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Install the Neutron agent on computer
[root@computer ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Install the Dashboard component
[root@controller ~]# yum install openstack-dashboard -y
Install the Swift proxy services on controller
[root@controller ~]# yum install openstack-swift openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware
Install the Swift storage services on computer and storager
[root@computer ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@storager ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y
Start the RabbitMQ service
[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service
Add the openstack user
[root@controller ~]# rabbitmqctl add_user openstack openstack
Grant permissions
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
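To confirm the user and its permissions, both standard rabbitmqctl subcommands can be used:

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions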
Edit the memcached configuration file
[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"
Start the service
[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl enable memcached.service
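A quick check that memcached is listening on port 11211 (ss is part of the iproute package shipped with CentOS 7):

[root@controller ~]# ss -tnlp | grep 11211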
Edit the MySQL configuration file
[root@controller ~]# vi /etc/my.cnf
default-time_zone = '+8:00'
bind-address = 192.168.29.145
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service
[root@controller ~]# systemctl start mysqld
[root@controller ~]# systemctl enable mysqld
Create the databases
mysql> create database keystone;
mysql> create database glance;
mysql> create database nova;
mysql> create database nova_api;
mysql> create database nova_cell0;
mysql> create database neutron;
Grant privileges to the service users
mysql> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'your_password';
mysql> grant all privileges on keystone.* to 'keystone'@'%' identified by 'your_password';
mysql> grant all privileges on glance.* to 'glance'@'localhost' identified by 'your_password';
mysql> grant all privileges on glance.* to 'glance'@'%' identified by 'your_password';
mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'your_password';
mysql> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_api.* to 'nova'@'%' identified by 'your_password';
mysql> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'your_password';
mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'your_password';
mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'your_password';
mysql> flush privileges;
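To verify that a service account can reach its database over the network (assuming your_password has been replaced with a real password):

[root@controller ~]# mysql -h controller -u keystone -p -e "SHOW DATABASES;"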
Edit the Keystone configuration file
[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet
Synchronize the database
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the httpd service
# Edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller
# Create a symbolic link
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# Start the service
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
Create the admin environment script
[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify the environment
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue
Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
Create the demo project
[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
Create the demo user
[root@controller ~]# openstack user create --domain default --password-prompt demo
Create the user role
[root@controller ~]# openstack role create user
Add the user role to the demo project and user
[root@controller ~]# openstack role add --project demo --user demo user
Create the demo environment script
[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
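The demo credentials can be verified the same way as the admin ones; the token request fails if the user, project, or password is wrong:

[root@controller ~]# source demo-openrc
[root@controller ~]# openstack token issue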
Create and configure the glance user
[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin
Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
Create the glance service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
Edit the configuration files
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[root@controller ~]# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
Synchronize the database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Start the services
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service
Upload an image
[root@controller ~]# glance image-create --name Centos7 --disk-format qcow2 --container-format bare --progress < CentOS-7-x86_64-GenericCloud-1907.qcow2
# Check the image
[root@controller ~]# openstack image list
Create and configure the nova user
[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin
Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
Create the nova service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Create and configure the placement user
[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin
Create the placement service entity
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
Create the placement service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
Edit the configuration files
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.29.145
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller
[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api
[database]
connection = mysql+pymysql://nova:your_password@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
Restart the httpd service
[root@controller ~]# systemctl restart httpd
Synchronize the databases
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Verify
[root@controller ~]# nova-manage cell_v2 list_cells
Start the services
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
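At this point the controller-side Nova services should all register with state up (nova-compute appears later, once the compute node is configured):

[root@controller ~]# openstack compute service list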
Edit the configuration file on computer
[root@computer ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.29.146
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host = True
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[libvirt]
virt_type = qemu
Start the services
[root@computer ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@computer ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Register computer in the cell database from controller
# Check the nova-compute node
[root@controller ~]# openstack compute service list --service nova-compute
# Add it to the cell database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
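Rather than running discover_hosts by hand every time a compute node is added, the official Queens install guide offers a periodic discovery interval on the controller:

[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300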
Create and configure the neutron user
[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin
Create the neutron service entity
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
Create the neutron service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
Edit the configuration files
[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = true
local_ip = 192.168.31.135
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
Create a symbolic link
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start the services
# Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Edit the configuration files on computer
[root@computer ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@computer ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = true
local_ip = 192.168.31.136
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@computer ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Start the services
# Restart the nova-compute service
[root@computer ~]# systemctl stop openstack-nova-compute.service
[root@computer ~]# systemctl start openstack-nova-compute.service
# Note: a plain restart here can fail; stop and start instead
[root@computer ~]# systemctl start neutron-linuxbridge-agent.service
[root@computer ~]# systemctl enable neutron-linuxbridge-agent.service
Verify
[root@controller ~]# openstack network agent list
# Check the log
[root@computer ~]# tail /var/log/nova/nova-compute.log
Edit the configuration files
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
Restart the services
[root@controller ~]# systemctl restart httpd.service memcached.service
Access the web interface
Browse to http://ip/dashboard, replacing ip with the controller's address.
Partition and format the additional disks
# computer node
[root@computer ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@computer ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@computer ~]# mkfs.xfs -f /dev/sdc1
# storager node
[root@storager ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@storager ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@storager ~]# mkfs.xfs -f /dev/sdc1
Mount the disks
# computer node
[root@computer ~]# mkdir -p /srv/node/sdc1
[root@computer ~]# vi /etc/fstab
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@computer ~]# mount /srv/node/sdc1/
# storager node
[root@storager ~]# mkdir -p /srv/node/sdc1
[root@storager ~]# vi /etc/fstab
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@storager ~]# mount /srv/node/sdc1/
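Confirm on both storage nodes that the mount is active before continuing:

[root@computer ~]# df -h /srv/node/sdc1
[root@storager ~]# df -h /srv/node/sdc1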
Configure the rsyncd service
[root@computer ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.146
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@storager ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.147
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Edit the configuration files (same content on both storage nodes)
[root@computer ~]# vi /etc/swift/account-server.conf
[root@storager ~]# vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6202
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = account-server
[root@computer ~]# vi /etc/swift/container-server.conf
[root@storager ~]# vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6201
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = container-server
[root@computer ~]# vi /etc/swift/object-server.conf
[root@storager ~]# vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6200
workers = 3
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = object-server
Fix ownership
[root@computer ~]# chown -R swift:swift /srv/node/
[root@storager ~]# chown -R swift:swift /srv/node/
Start the services
[root@computer ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service rsyncd.service
[root@computer ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@storager ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@storager ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
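With rsyncd running, the exported modules can be listed from the controller to confirm the configuration; each node should print account, container, and object (assuming the default rsync module listing is not disabled):

[root@controller ~]# rsync rsync://192.168.29.146/
[root@controller ~]# rsync rsync://192.168.29.147/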
Create and configure the swift user
[root@controller ~]# openstack user create --password-prompt swift
[root@controller ~]# openstack role add --project service --user swift admin
Create the swift service entity
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store
Create the swift service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
Edit the configuration files
[root@controller ~]# vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 8080
workers = 2
user = swift
swift_dir = /etc/swift
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin,SwiftOperator,user
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = swift
delay_auth_decision = true
[root@controller ~]# vi /etc/swift/swift.conf
[swift-hash]
# use a unique random value here, e.g. the output of: od -t x8 -N 8 -A n < /dev/random
swift_hash_path_suffix = your_unique_suffix
[storage-policy:0]
name = Policy-0
default = yes
Create the rings
[root@controller ~]# cd /etc/swift
# Account ring
# Create the ring
[root@controller ~]# swift-ring-builder account.builder create 18 2 1
# Parameter meanings:
#   18: the ring is divided into 2^18 partitions
#   2: each object is stored as 2 replicas
#   1: a partition can be moved at most once per hour
# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.146:6202/sdc1 100
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.147:6202/sdc1 100
# Rebalance the ring
[root@controller ~]# swift-ring-builder account.builder rebalance
# Container ring
# Create the ring
[root@controller ~]# swift-ring-builder container.builder create 18 2 1
# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.146:6201/sdc1 100
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.147:6201/sdc1 100
# Rebalance the ring
[root@controller ~]# swift-ring-builder container.builder rebalance
# Object ring
# Create the ring
[root@controller ~]# swift-ring-builder object.builder create 18 2 1
# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.146:6200/sdc1 100
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.147:6200/sdc1 100
# Rebalance the ring
[root@controller ~]# swift-ring-builder object.builder rebalance
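Running swift-ring-builder with only the builder file prints the ring layout, a quick way to confirm that both devices were added with the expected weight:

[root@controller ~]# swift-ring-builder account.builder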
Distribute the configuration files
[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.146:/etc/swift/
[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.147:/etc/swift/
Set directory ownership
[root@controller ~]# chown -R swift:swift /etc/swift/
Restart the services
[root@controller ~]# systemctl restart memcached.service
[root@controller ~]# systemctl start openstack-swift-proxy.service
Restart the services on the storage nodes
[root@computer ~]# swift-init all start
[root@storager ~]# swift-init all start
Verify the status
[root@controller ~]# swift stat
Account: AUTH_97e5c629da9944c5ad960e5c171dac68
Containers: 1
Objects: 1
Bytes: 13287936
Containers in policy "policy-0": 1
Objects in policy "policy-0": 1
Bytes in policy "policy-0": 13287936
X-Openstack-Request-Id: tx19a4687c644645708525e-005f30ef14
X-Timestamp: 1597039034.68479
X-Trans-Id: tx19a4687c644645708525e-005f30ef14
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes
Upload a file
[root@controller ~]# swift upload demo_container cirros-0.3.4-x86_64-disk.img
List the files in the container
[root@controller ~]# swift list demo_container
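Downloading the object back serves as an end-to-end check (swift download is the counterpart of swift upload):

[root@controller ~]# swift download demo_container cirros-0.3.4-x86_64-disk.img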
For the steps to launch a cloud instance, see: http://www.javashuo.com/article/p-nwfrkbdv-kc.html