A minimal installation is recommended; this guide uses CentOS-7-x86_64-Minimal-1511.
This guide covers one controller node (controller3), one compute node (compute11), and one storage node (cinder); all passwords are pass123456. Any additional compute nodes are configured in essentially the same way, but each compute node must have a unique hostname and IP address.
Each node has two NICs: one on the 192.168.32.0/24 segment with access to the external network, and one on the 172.16.1.0/24 segment used for internal management traffic.
Configure the NICs according to your environment; the exact steps differ between virtual and physical machines, so consult the documentation for your platform. A sample static configuration is sketched below.
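For reference, on CentOS 7 a static configuration for the two NICs of this guide's controller node might look like the following. The interface names ens33/ens34 and the gateway and DNS addresses are only assumptions; substitute the values from your own environment.

# /etc/sysconfig/network-scripts/ifcfg-ens33  (provider/external NIC, assumed name)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.32.134
NETMASK=255.255.255.0
GATEWAY=192.168.32.1
DNS1=114.114.114.114

# /etc/sysconfig/network-scripts/ifcfg-ens34  (management NIC, assumed name)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=172.16.1.136
NETMASK=255.255.255.0

# apply the changes
# systemctl restart network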
The IP addresses of the nodes as configured in this guide are as follows:
Node name | Provider network | Management network |
---|---|---|
controller3 | 192.168.32.134 | 172.16.1.136 |
compute11 | 192.168.32.129 | 172.16.1.130 |
cinder | 192.168.32.139 | 172.16.1.138 |
On all nodes:
# yum install -y wget
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -P /etc/yum.repos.d/ http://mirrors.163.com/.help/CentOS7-Base-163.repo
# yum clean all
# yum makecache
On all nodes:
# yum install -y vim net-tools epel-release python-pip
On all nodes:
Edit the /etc/selinux/config file:
SELINUX=disabled
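The change in /etc/selinux/config only takes effect after a reboot; to stop SELinux enforcing on the running system right away, you can additionally run:

# setenforce 0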
On all nodes:
Edit /etc/hosts:
# controller3
192.168.32.134 controller3
# compute11
192.168.32.129 compute11
# cinder
192.168.32.139 cinder
Change the hostname: run the following on each host, replacing servername with that node's name (controller3, compute11, or cinder):
# hostnamectl set-hostname servername
# systemctl restart systemd-hostnamed
Verify: from each node, ping every hostname to confirm connectivity.
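For example, from any one of the nodes:

# ping -c 3 controller3
# ping -c 3 compute11
# ping -c 3 cinder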
On the controller node:
# yum install -y chrony
Edit the file /etc/chrony.conf and add:
allow 192.168.32.0/24
Start the NTP service and enable it to start at boot:
# systemctl enable chronyd.service
# systemctl start chronyd.service
On all nodes other than the controller:
# yum install -y chrony
Edit the file /etc/chrony.conf, comment out all other server options, and add:
server controller3 iburst
Change the time zone:
# timedatectl set-timezone Asia/Shanghai
Start the NTP service and enable it to start at boot:
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verify: on all nodes, run chronyc sources; a * in the MS column of the output means the time is synchronized with the corresponding Name/IP address.
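For example, on any node:

$ chronyc sources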
If the time is not synchronized, restart the service:
# systemctl restart chronyd.service
On all nodes:
# yum install -y centos-release-openstack-ocata
# yum install -y https://rdoproject.org/repos/rdo-release.rpm
# yum install -y python-openstackclient
On the controller node:
# yum install -y mariadb mariadb-server python2-PyMySQL
Create and edit the /etc/my.cnf.d/openstack.cnf file, with the bind-address line commented out:
[mysqld]
#bind-address = 127.0.0.1
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database service and enable it to start at boot:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Run the database secure-installation script to set a password for the database root user; the password is empty by default the first time you log in:
mysql_secure_installation
On the controller node:
# yum install -y rabbitmq-server
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
# rabbitmqctl add_user openstack pass123456
Creating user "openstack" ...
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
On the controller node:
# yum install -y memcached python-memcached
Edit the file /etc/sysconfig/memcached:
OPTIONS="-l 127.0.0.1,::1,controller3"
Start the memcached service and enable it to start at boot:
# systemctl enable memcached.service
# systemctl start memcached.service
On the controller node:
First, create the database for the Identity service. Log in to the database server as the root user:
$ mysql -u root -p
Create the database and grant the user the appropriate privileges:
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
# yum install -y openstack-keystone httpd mod_wsgi
Edit the configuration file /etc/keystone/keystone.conf:
Configure database access:
[database]
# ...
connection = mysql+pymysql://keystone:pass123456@controller3/keystone
Configure the Fernet token provider:
[token]
# ...
provider = fernet
Initialize the Identity service database and the Fernet key repositories:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password pass123456 \
  --bootstrap-admin-url http://controller3:35357/v3/ \
  --bootstrap-internal-url http://controller3:5000/v3/ \
  --bootstrap-public-url http://controller3:5000/v3/ \
  --bootstrap-region-id RegionOne
Edit /etc/httpd/conf/httpd.conf and set ServerName to the controller node:
ServerName controller3
Create a symbolic link:
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the Apache HTTP service and enable it to start at boot:
# systemctl enable httpd.service
# systemctl start httpd.service
The Identity service is driven by a combination of environment variables and commands. To make this more efficient and convenient, create client environment scripts for the admin and demo projects and users; these scripts load the appropriate credentials for client operations.
Create and edit the admin-openrc file, adding the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create and edit the demo-openrc file, adding the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Run the admin credentials script, . admin-openrc, to load the environment variables:
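$ . admin-openrc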
This guide uses a service project that contains a unique user for each service you add. Create the service project:
$ openstack project create --domain default \
  --description "Service Project" service
Regular (non-administrative) tasks should use an unprivileged project and user. As an example, this guide creates the demo project and user:
$ openstack project create --domain default \
  --description "Demo Project" demo
Note: do not repeat this step when creating additional users for this project.
Create the demo user and the user role:
$ openstack user create --domain default \
  --password-prompt demo
User Password:
Repeat User Password:
$ openstack role create user
Add the user role to the demo user in the demo project:
$ openstack role add --project demo --user demo user
For security reasons, disable the temporary authentication token mechanism.
Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
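If you prefer a single command to hand-editing, something along these lines should work (a sketch: it strips admin_token_auth from every pipeline line, so back up the file and review the result afterwards):

# cp /etc/keystone/keystone-paste.ini /etc/keystone/keystone-paste.ini.bak
# sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini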
As the admin user, request an authentication token:
$ openstack --os-auth-url http://controller3:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
As the demo user, request an authentication token:
$ openstack --os-auth-url http://controller3:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
Request an authentication token:
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:44:35.659723Z                                     |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+
On the controller node:
Before installing and configuring the Image service, you must create its database, service credentials, and API endpoints.
Connect to the database server as root, create the glance database, and grant the appropriate privileges:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
$ openstack role add --project service --user glance admin
$ openstack service create --name glance \
  --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
  image public http://controller3:9292
$ openstack endpoint create --region RegionOne \
  image internal http://controller3:9292
$ openstack endpoint create --region RegionOne \
  image admin http://controller3:9292
Install the package:
# yum install -y openstack-glance
Edit the file /etc/glance/glance-api.conf:
[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Note: comment out or remove any other content in the [keystone_authtoken] section.
Edit the file /etc/glance/glance-registry.conf:
[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone
Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
Start the Image services and enable them to start at boot:
# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verify operation by using CirrOS, a small Linux image, to test the OpenStack deployment.
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
$ openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
On the controller node:
Before installing and configuring the Compute service, you must create its databases, service credentials, and API endpoints.
Connect to the database server as root, create the following databases, and grant the appropriate privileges:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
Create the Compute service credentials:
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the Placement service credentials:
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
Create the Compute API endpoints:
$ openstack endpoint create --region RegionOne \
  compute public http://controller3:8774/v2.1
$ openstack endpoint create --region RegionOne \
  compute internal http://controller3:8774/v2.1
$ openstack endpoint create --region RegionOne \
  compute admin http://controller3:8774/v2.1
Create the Placement API endpoints:
$ openstack endpoint create --region RegionOne placement public http://controller3:8778
$ openstack endpoint create --region RegionOne placement internal http://controller3:8778
$ openstack endpoint create --region RegionOne placement admin http://controller3:8778
Install the packages:
# yum install -y openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
Edit the /etc/nova/nova.conf file:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pass123456@controller3
my_ip = 172.16.1.136
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova_api

[database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456
Edit the /etc/httpd/conf.d/00-nova-placement-api.conf file and add:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service:
# systemctl restart httpd
Populate the nova-api database:
# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
Populate the nova database; any warning messages can be ignored:
# su -s /bin/sh -c "nova-manage db sync" nova
Verify that the nova cell0 and cell1 cells are registered correctly:
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name  | UUID                                 |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
Start the Compute services and enable them to start at boot:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
On all compute nodes:
Install the package:
# yum install -y openstack-nova-compute
Edit the /etc/nova/nova.conf file:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pass123456@controller3
my_ip = 172.16.1.130
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller3:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456
Check whether your compute node supports hardware virtualization:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of 1 or greater, no further configuration is needed. Otherwise, configure libvirt to use QEMU instead of KVM.
Edit the /etc/nova/nova.conf file:
[libvirt]
# ...
virt_type = qemu
Start the Compute service and its dependencies and enable them to start at boot:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Note: run the following commands on the controller node.
Confirm which compute hosts are registered in the database:
$ . admin-openrc
$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute1            | QEMU            | 10.0.0.31 | up    |
+----+---------------------+-----------------+-----------+-------+
Discover the compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Note: whenever you add a new compute node, you must run nova-manage cell_v2 discover_hosts on the controller node to register it, or set the following in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
On the controller node:
$ . admin-openrc
$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
| 4  | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
$ openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| keystone  | identity  | RegionOne                                |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:35357/v3/    |
|           |           |                                          |
| glance    | image     | RegionOne                                |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                                |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:9292      |
|           |           |                                          |
| nova      | compute   | RegionOne                                |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                                |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                          |
| placement | placement | RegionOne                                |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:8778      |
|           |           |                                          |
+-----------+-----------+------------------------------------------+
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros | active |
+--------------------------------------+--------+--------+
# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
On the controller node:
Before configuring the OpenStack Networking service, you must create its database, service credentials, and API endpoints.
Connect to the database server as root, create the neutron database, and grant the appropriate privileges:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
Create the neutron service credentials and service entity:
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the Networking service API endpoints:
$ openstack endpoint create --region RegionOne \
  network public http://controller3:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller3:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller3:9696
This guide uses the self-service networks option.
# yum install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Edit the configuration file /etc/neutron/neutron.conf:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# ...
connection = mysql+pymysql://neutron:pass123456@controller3/neutron

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[nova]
# ...
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build the layer-2 virtual networking infrastructure for instances.
Edit the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true
Warning: after the ML2 plug-in has been configured, removing values from the type_drivers option can lead to database inconsistency.
The Linux bridge agent builds the layer-2 virtual networks for instances and handles security group rules.
Edit the configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.136
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface.
The address 172.16.1.136 is the IP address of the management network interface on the controller node.
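For example, if the provider NIC on the controller were named ens33 (a hypothetical name; use the actual interface from your environment), the mapping would read:

[linux_bridge]
physical_interface_mappings = provider:ens33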
Edit the configuration file /etc/neutron/l3_agent.ini:
[DEFAULT]
# ...
interface_driver = linuxbridge
The DHCP agent provides DHCP services for virtual networks.
Edit the configuration file /etc/neutron/dhcp_agent.ini:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Edit the configuration file /etc/neutron/metadata_agent.ini:
[DEFAULT]
# ...
nova_metadata_ip = controller3
metadata_proxy_shared_secret = pass123456
Edit the configuration file /etc/nova/nova.conf:
[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456
service_metadata_proxy = true
metadata_proxy_shared_secret = pass123456
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# systemctl restart openstack-nova-api.service
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
On the compute nodes:
# yum install -y openstack-neutron-linuxbridge ebtables ipset
Configuration of the common networking components includes the authentication mechanism, message queue, and plug-in.
Edit the configuration file /etc/neutron/neutron.conf:
In the [database] section, comment out any connection options, because compute nodes do not access the database directly.
[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
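For reference, after commenting, the [database] section on a compute node would contain nothing but comments, for example (the connection string is shown purely as an illustration):

[database]
# connection = mysql+pymysql://neutron:pass123456@controller3/neutron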
Matching the controller node, the self-service network option is used here as well.
7.2.3.1 Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 virtual networks for instances and handles security group rules.
Edit the configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.130
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface.
The address 172.16.1.130 is the IP address of the management network interface on the compute node.
Edit the configuration file /etc/nova/nova.conf:
[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456
Restart the Compute service, then start the Linux bridge agent and enable it to start at boot:
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
On the controller node:
$ . admin-openrc
$ openstack extension list --network
$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
On the controller node:
Install the package:
# yum install -y openstack-dashboard
Edit the configuration file /etc/openstack-dashboard/local_settings:
OPENSTACK_HOST = "controller3"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller3:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Restart the web server and the session storage service:
# systemctl restart httpd.service memcached.service
Access the dashboard in a browser at http://192.168.32.134/dashboard.
Verify by logging in with the admin or demo user credentials and the default domain.
On the controller node:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
$ openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller3:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller3:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller3:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller3:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller3:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller3:8776/v3/%\(project_id\)s
# yum install -y openstack-cinder
Edit the configuration file /etc/cinder/cinder.conf:
[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = 172.16.1.136

[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
# su -s /bin/sh -c "cinder-manage db sync" cinder
Edit the configuration file /etc/nova/nova.conf:
[cinder]
os_region_name = RegionOne
# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
On the storage node:
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
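A note that is easy to miss: if other devices on the storage node also use LVM (for example the operating system disk), the upstream installation guide recommends restricting the LVM scan filter in /etc/lvm/lvm.conf so that only the Cinder volume device (and, if needed, the system disk) is scanned. The device names sda/sdb below are assumptions matching the pvcreate command above:

devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}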
# yum install openstack-cinder targetcli python-keystone
Edit the configuration file /etc/cinder/cinder.conf:
[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller3:9292

[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the management network IP address of the storage node (172.16.1.138 in this guide).
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
$ . admin-openrc
$ openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2016-09-30T02:27:41.000000 |
| cinder-volume    | block@lvm  | nova | enabled | up    | 2016-09-30T02:27:46.000000 |
+------------------+------------+------+---------+-------+----------------------------+