Three-node OpenStack deployment (Mitaka release)

Preface:

  Cloud platforms such as Alibaba Cloud and Baidu Cloud are now both popular and very stable. In this post we build our own cloud platform on top of OpenStack.

Note:

  This is a three-node deployment (there is no external storage node; all storage is local).

 

I. Networks:

  1. Management network: 192.168.1.0/24

  2. Data network: 1.1.1.0/24

 

II. Operating system

  CentOS Linux release 7.3.1611 (Core)

III. Kernel

  3.10.0-514.el7.x86_64

 

IV. Version information:

  OpenStack release: Mitaka

Note:

  When editing configuration files, never append a comment on the same line after a configuration option; put comments on their own line above or below.

  Add new settings right after the relevant section header instead of modifying the existing lines in place.

 

Topology diagram:

  

This post covers the initial environment setup; the follow-up operations inside the cloud will be covered in later posts.

 

Preparing the environment:

  Add hosts entries on all three machines, set a hostname on each, disable firewalld and SELinux, and configure static IPs (a minimal sketch follows below)

  Add two NICs to the compute node and two NICs to the network node
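  A minimal sketch of these preparation steps, to be run on each node; the hostnames and addresses are assumptions that simply match the ones used later in this post (controller01 = 192.168.1.142, compute01 = 192.168.1.141, network01 = 192.168.1.140):

    echo "192.168.1.142 controller01" >> /etc/hosts
    echo "192.168.1.141 compute01"    >> /etc/hosts
    echo "192.168.1.140 network01"    >> /etc/hosts
    hostnamectl set-hostname controller01   # use the matching name on each node
    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0                             # and set SELINUX=disabled in /etc/selinux/config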

  

  Custom yum repository

    Run on all nodes:

    yum makecache && yum install vim net-tools -y && yum update -y

      Disable automatic yum updates

    Edit /etc/yum/yum-cron.conf and change download_updates = yes to no.

  

  Alternatively, use the online yum repository:

  Install the online repository (run on all nodes)

    yum install centos-release-openstack-mitaka -y

     Rebuild the yum cache and update the system (see the one-liner below)
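     These are the same commands already used for the custom-repository path above:

    yum makecache && yum update -y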

 

  Pre-install packages (run on all nodes)

    yum install python-openstackclient -y

    yum install openstack-selinux -y

 

  Deploy the time service (run on all nodes)

    yum install chrony -y 

 

Controller node:

    Edit the configuration:

    /etc/chrony.conf

    server ntp.staging.kycloud.lan iburst

    allow 192.168.1.0/24

 

Start the service:

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Other nodes:

  

    Edit the configuration:

    /etc/chrony.conf

    server 192.168.1.142 iburst

 

Start the service

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Verification:

    Run on every machine:

    chronyc sources

    A * in the S column means that source is synchronized (it may take a few minutes; make sure the time is in sync before continuing)

 

Controller node operations

 

Install the database

  yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/openstack.cnf:

[mysqld]
# controller node IP
bind-address = 192.168.1.142
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
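
Not part of the original walkthrough, but the upstream Mitaka install guide secures the fresh MariaDB installation at this point; if you want to follow it, run the interactive script:

    mysql_secure_installation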

  

Install MongoDB

  yum install mongodb-server mongodb -y

 

 

Edit /etc/mongod.conf:

bind_ip = 192.168.1.142
smallfiles = true

 

Save and exit, then start the service and enable it at boot
systemctl enable mongod.service
systemctl start mongod.service

 

Deploy the message queue
  yum install rabbitmq-server -y

 

Start the service and enable it at boot:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a RabbitMQ user and password (I use lhc001 for every password in this post):
rabbitmqctl add_user openstack lhc001
Grant permissions to the new openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
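Optionally, verify the new account and its permissions (both commands ship with rabbitmq-server):

rabbitmqctl list_users
rabbitmqctl list_permissions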

 

Install memcached to cache tokens
  yum install memcached python-memcached -y

Start and enable at boot
systemctl enable memcached.service
systemctl start memcached.service

 

Deploy the Keystone service

Database operations:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'lhc001';
# the grant below is needed for remote logins, otherwise they will fail
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

Install Keystone
  yum install openstack-keystone httpd mod_wsgi -y

 

Edit /etc/keystone/keystone.conf:

[DEFAULT]
admin_token = lhc001

[database]
connection = mysql+pymysql://keystone:lhc001@controller01/keystone

[token]
provider = fernet

 

Sync the configuration into the database
  su -s /bin/sh -c "keystone-manage db_sync" keystone

 

Initialize the fernet keys
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

Configure the Apache service

Edit /etc/httpd/conf/httpd.conf:
ServerName controller01

 

Symlink /usr/share/keystone/wsgi-keystone.conf into /etc/httpd/conf.d/

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/wsgi-keystone.conf

 

Restart httpd

systemctl restart httpd

 

 

Create the service entity and API endpoints

Set the administrator environment variables; these grant the permissions used by the creation steps that follow:

export OS_TOKEN=lhc001
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3

 

Using the permissions from the previous step, create the identity service entity (the service catalog):

openstack service create --name keystone \
  --description "OpenStack Identity" identity

 

For the service entity just created, create its three API endpoints:

openstack endpoint create --region RegionOne \
  identity public http://controller01:5000/v3
openstack endpoint create --region RegionOne \
  identity internal http://controller01:5000/v3
openstack endpoint create --region RegionOne \
  identity admin http://controller01:35357/v3
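
As a quick optional check (still authenticated via the OS_TOKEN/OS_URL variables exported above), list the registered endpoints:

openstack endpoint list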
    

 

 

 

 Create a domain, projects (tenants), users, and roles, and tie the four together

 Create a common domain:

openstack domain create --description "Default Domain" default    
Administrator admin:
openstack project create --domain default \
  --description "Admin Project" admin
openstack user create --domain default \
  --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
Regular user demo:
openstack project create --domain default \
  --description "Demo Project" demo
openstack user create --domain default \
  --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
Create the shared service project used by all later services:
openstack project create --domain default \
  --description "Service Project" service
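
A simple way to confirm the objects were created is to list them:

openstack project list
openstack user list
openstack role list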

 

 

Verify the associations

 

Verification steps

Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections (see the sketch below).
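One possible non-interactive way to do this, as a sketch only; editing the file by hand works just as well, and the three pipeline lines should be double-checked afterwards:

sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini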

 

Create client environment scripts

Administrator: admin-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

Regular user demo: demo-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

 

Log out of the shell and log back in (so the OS_TOKEN/OS_URL variables are cleared), then:
source admin-openrc
openstack token issue

 

Deploy the image service (Glance)

Database operations

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'lhc001';
# same pattern as the keystone database operations above
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations:
As mentioned above, all subsequent services live in the shared service project; for each service we create a user, give it the admin role, and associate the two.

openstack user create --domain default --password-prompt glance

 

 

Associate the role

openstack role add --project service --user glance admin
Create the service entity:
openstack service create --name glance \
  --description "OpenStack Image" image

Create the endpoints:
openstack endpoint create --region RegionOne \
  image public http://controller01:9292
openstack endpoint create --region RegionOne \
  image internal http://controller01:9292
openstack endpoint create --region RegionOne \
  image admin http://controller01:9292

 

 

Install the Glance packages
  yum install openstack-glance -y

This environment uses local storage everywhere. Whatever storage backend you use, create it before starting Glance; otherwise Glance will not find it at startup. That does not raise an error immediately, but later operations will fail, so save yourself the trouble and create it up front.

mkdir /var/lib/glance/images/
chown glance. /var/lib/glance/images/

 

Edit /etc/glance/glance-api.conf:
[database]
connection = mysql+pymysql://glance:lhc001@controller01/glance

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lhc001

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

 

Edit /etc/glance/glance-registry.conf

The database configured for the registry is used to look up image metadata:

[database]
connection = mysql+pymysql://glance:lhc001@controller01/glance

In earlier releases glance-registry had to be configured the same way as glance-api. That is no longer necessary in this release: a single database connection entry in glance-registry is enough, nothing else is needed.

 

 

Sync the database (the output it prints is not an error):
su -s /bin/sh -c "glance-manage db_sync" glance

 

Start and enable at boot:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

 

 

Verification

Run openstack image list; the output should be empty. Then upload an image (a sketch for downloading the cirros file follows below):

openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
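
The cirros-0.3.4-x86_64-disk.img file is not produced by any earlier step; assuming the controller has internet access and wget installed, it can be fetched from the usual location first:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img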

 

 Deploy the Compute service (Nova)

Every component we deploy involves adding a user to the Keystone database; in other words, all components share the same database service.

Compute database operations:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone-related operations

 

Create a nova user (enter the password at the prompt), associate it with the service project and the admin role, and create the service entity:

openstack user create --domain default \
  --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
  --description "OpenStack Compute" compute

 

 

Create the three endpoints:

openstack endpoint create --region RegionOne \
  compute public http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute internal http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute admin http://controller01:8774/v2.1/%\(tenant_id\)s

 

 

Install the packages:

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler -y

Edit /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
# management IP of the controller node
my_ip = 192.168.1.142
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:lhc001@controller01/nova_api

[database]
connection = mysql+pymysql://nova:lhc001@controller01/nova

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lhc001

[vnc]
# management IP of the controller node
vncserver_listen = 192.168.1.142
# management IP of the controller node
vncserver_proxyclient_address = 192.168.1.142

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

 

 

Sync the databases (there will be output; it is not an error)

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

 

 

Start and enable at boot:

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

 Next we configure the compute node; we will come back to the controller later.

 

Configure the compute node

Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y

Edit the configuration file /etc/nova/nova.conf:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
# compute node management network IP
my_ip = 192.168.1.141
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
# compute node management network IP
vncserver_proxyclient_address = 192.168.1.141
# controller node management network IP
novncproxy_base_url = http://192.168.1.142:6080/vnc_auto.html

[glance]
api_servers = http://controller01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Start the services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

 

 

 Verification:

 Test from the controller node (see the sketch below)
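
The post leaves the test itself implicit; a minimal check from the controller, with admin-openrc sourced, is to list the compute services and confirm that nova-compute on compute01 shows up and is up:

source admin-openrc
openstack compute service list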

 

Deploy networking (Neutron)

Again on the controller node, perform the database operations:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations for Neutron

Create the user and associate the role:
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

 

 Create the service entity and its three endpoints

openstack service create --name neutron \
  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
  network public http://controller01:9696
openstack endpoint create --region RegionOne \
  network internal http://controller01:9696
openstack endpoint create --region RegionOne \
  network admin http://controller01:9696

 

 Install the Neutron components

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

 

Configure the server component. Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
# enable overlapping IP addresses
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[database]
connection = mysql+pymysql://neutron:lhc001@controller01/neutron

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = lhc001

[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

 

 Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001
service_metadata_proxy = True

 

 

Create the symlink
  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 Sync the database (it will print output; this is not an error):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

 Restart nova

   systemctl restart openstack-nova-api.service

Start Neutron and enable it at boot
  systemctl enable neutron-server.service
  systemctl start neutron-server.service
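
A quick, optional sanity check that neutron-server is answering (run with admin-openrc sourced) is to list the loaded extensions:

  neutron ext-list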

 

Configure the network node

Edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

 

 

Apply the settings immediately with:
  sysctl -p

 

Install the packages
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

Configure the components. Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# the IP below is the network node's data-network IP
local_ip = 1.1.1.119
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre,vxlan
#l2_population = True
prevent_arp_spoofing = True

 

 Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

 

Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

 

Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_ip = controller01
metadata_proxy_shared_secret = lhc001

 

Start the services on the network node:

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

Note: when you check the status of the neutron-openvswitch-agent service it will report a startup failure. Don't worry: the data-network IP and the bridge referenced in the configuration above have not been created yet, so the log simply says it cannot find them. They will be created next.

 

Create the data-network IP on the network node

[root@network01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.119
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Do the same on the compute node: it also needs a data-network IP

cd /etc/sysconfig/network-scripts
cp ifcfg-ens33 ifcfg-ens37

[root@compute01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.117
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Restart the network service on both nodes

  systemctl restart network

Ping between the two nodes to test connectivity

 

Back on the network node, create the bridge

Create the br-ex bridge and bind it to the external NIC. We added three NICs before the experiment, but this machine has only one IP address with internet access, so only two networks are actually usable here: a data network and a management network. In a real environment you should definitely use three NICs.

cp ifcfg-ens33 ifcfg-br-ex

[root@network01 network-scripts]# cat ifcfg-br-ex
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.140
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
NAME="br-ex"
DEVICE="br-ex"
ONBOOT="yes"
# the next line must be added, otherwise the bridge will fail to come up
NM_CONTROLLED=no

 

Likewise, now that the bridge holds the address, remove the IP configuration from ens33:

[root@network01 network-scripts]# cat ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="e82bc9e4-d28e-4363-aab4-89fda28da938"
DEVICE="ens33"
ONBOOT="yes"
# also required on the physical NIC
NM_CONTROLLED=no

 

Restart the network service on the network node

  systemctl restart network

Now check the OVS agent status again (systemctl status neutron-openvswitch-agent.service) and you will see that the service is running.
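
This is not part of the original walkthrough, but if the agent still reports that br-ex is missing, the bridge can also be created manually with the Open vSwitch CLI (ens33 is the external NIC name used above; adjust it to your hardware):

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens33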

 

That wraps up the network node for now.

 

Configure the compute node

Edit /etc/sysctl.conf:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Then apply them: sysctl -p

 

 

Install OVS and the related components on the compute node
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

 

Edit /etc/neutron/neutron.conf:

[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# the IP below is the compute node's data-network IP
local_ip = 1.1.1.117

[agent]
tunnel_types = gre,vxlan
l2_population = True
arp_responder = True
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

 

 

Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001

 

Start the OVS agent and restart nova-compute:
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
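
At this point every agent should have registered with neutron-server. A quick check from the controller (with admin-openrc sourced): the list should show the Open vSwitch agents on both the network and compute nodes, plus the L3, DHCP, and metadata agents on the network node:

neutron agent-list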

 

 

OK, the rest is much simpler: deploy the dashboard service on the controller node.

 

Controller node operations

Install the package
  yum install openstack-dashboard -y

 

Configure /etc/openstack-dashboard/local_settings:

OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# SESSION_ENGINE = 'django.contrib.sessions.backends.file'
# The SESSION_ENGINE setting is not in the file by default; append it at the end.
# If the cache backend above produces an error on the login page, switch to the
# commented-out file backend and the problem goes away.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller01:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"

 

Restart the services
  systemctl enable httpd.service memcached.service
  systemctl restart httpd.service memcached.service

 

Test from a browser

 http://192.168.1.142/dashboard/

 
