Now let's take a look; there should be no more problems.
[root@linux-node2 ~]# /etc/init.d/openstack-nova-compute start
Starting openstack-nova-compute: [OK]
[root@linux-node2 ~]# ps aux | grep python
root 1179 4.9 2.8 1108796 54304 pts/0 Sl 18:05 0:01 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1216 0.0 0.0 103248 836 pts/0 S+ 18:06 0:00 grep python
[root@linux-node2 ~]# ps -ef|grep nova
root 1179 1 0 18:05 pts/0 00:00:03 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1233 1634 0 18:16 pts/0 00:00:00 grep nova
Now let's check whether the linuxbridge agent is working properly.
Compute node: compute + network.
In a production environment, it is best to have two control nodes.
[root@linux-node2 ~]# neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[root@linux-node2 ~]# /etc/init.d/openstack-neutron-linuxbridge-agent start
Starting openstack-neutron-linuxbridge-agent: [OK]
[root@linux-node2 ~]# ps aux |grep python
root 1179 0.4 3.3 1109592 64120 pts/0 Sl 18:05 0:04 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1249 1.2 1.5 254912 29616 pts/0 S 18:21 0:00 /usr/bin/python /usr/bin/neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini --verbose
root 1258 0.0 0.0 103248 836 pts/0 S+ 18:21 0:00 grep python
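Both services are up. To have them start automatically after a reboot as well, you can register them with chkconfig (a quick sketch; run chkconfig --add first if a script is not registered yet):

chkconfig openstack-nova-compute on
chkconfig openstack-neutron-linuxbridge-agent on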
Check on the control node:
[root@linux-node1 ~]# nova host-list
+---------------------------+-------------+----------+
| host_name | service | zone |
+---------------------------+-------------+----------+
| linux-node1.openstack.com | consoleauth | internal |
| linux-node1.openstack.com | scheduler | internal |
| linux-node1.openstack.com | cert | internal |
| linux-node1.openstack.com | conductor | internal |
| linux-node2.openstack.com | compute | nova |
+---------------------------+-------------+----------+
Any node will do, as long as you have the environment variables set:
[root@linux-node2 ~]# nova host-list
ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
[root@linux-node2 ~]# export OS_TENANT_NAME=admin
[root@linux-node2 ~]# export OS_USERNAME=admin
[root@linux-node2 ~]# export OS_PASSWORD=admin
[root@linux-node2 ~]# export OS_AUTH_URL=http://192.168.33.11:35357/v2.0
[root@linux-node2 ~]# nova host-list
+---------------------------+-------------+----------+
| host_name | service | zone |
+---------------------------+-------------+----------+
| linux-node1.openstack.com | consoleauth | internal |
| linux-node1.openstack.com | scheduler | internal |
| linux-node1.openstack.com | cert | internal |
| linux-node1.openstack.com | conductor | internal |
| linux-node2.openstack.com | compute | nova |
+---------------------------+-------------+----------+
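To avoid retyping these exports in every new shell, you can save them to a small credentials file and source it (a sketch; the file name keystonerc_admin is just a convention I am assuming, not something created earlier in this article):

# ~/keystonerc_admin (hypothetical file name)
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.33.11:35357/v2.0

Then load it in any new shell with: source ~/keystonerc_admin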
The host-list output shows that the compute service is up. Next, check the network agents:
[root@linux-node1 ~]# neutron agent-list
Log in as the demo user.
To create a virtual machine, we need to make sure an image is available.
Next, the Filter Scheduler concept.
In the screenshot, the filters marked with the red box are the defaults; the others were added by me manually.
Common causes of errors: the scheduler cannot find a valid host, or the host machine does not have enough memory.
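For reference, the scheduler filters are configured in nova.conf on the control node. A minimal sketch (the exact default filter list varies by OpenStack release, so treat the names below as illustrative):

# /etc/nova/nova.conf on the control node
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
# RamFilter is the one that rejects hosts with insufficient free memory,
# which corresponds to the second error cause mentioned above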
[root@linux-node1 ~]# iptables -vnL
Chain INPUT (policy ACCEPT 71230 packets, 24M bytes)
pkts bytes target prot opt in out source destination
70570 24M nova-api-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 nova-filter-top all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 nova-api-FORWARD all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 state RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 68959 packets, 23M bytes)
pkts bytes target prot opt in out source destination
68309 23M nova-filter-top all -- * * 0.0.0.0/0 0.0.0.0/0
68309 23M nova-api-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain nova-api-FORWARD (1 references)
pkts bytes target prot opt in out source destination
Chain nova-api-INPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 192.168.33.11 tcp dpt:8775
Chain nova-api-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
Chain nova-api-local (1 references)
pkts bytes target prot opt in out source destination
Chain nova-filter-top (2 references)
pkts bytes target prot opt in out source destination
68309 23M nova-api-local all -- * * 0.0.0.0/0 0.0.0.0/0
Nova's scheduling service: when you create a virtual machine, which physical machine should it land on? That is what nova-scheduler decides.
Of course, our lab environment only has one or two machines.
When something goes wrong, go look at the logs.
[root@linux-node1 ~]# ll /var/log/nova/
total 12708
-rw-r--r-- 1 root root 7187241 Aug 22 13:00 api.log
-rw-r--r-- 1 root root 1220479 Aug 22 13:13 cert.log
-rw-r--r-- 1 root root 1226101 Aug 22 13:14 conductor.log
-rw-r--r-- 1 root root 1224671 Aug 22 13:13 consoleauth.log
-rw-r--r-- 1 root root 2129478 Aug 22 13:13 scheduler.log
When troubleshooting, create an instance while keeping the logs open, and watch for errors.
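For example, tail the scheduler log on the control node while launching an instance, and the compute log on the compute node (a simple sketch):

# control node: watch the scheduler pick (or fail to find) a host
tail -f /var/log/nova/scheduler.log
# compute node: watch the instance actually being built
tail -f /var/log/nova/compute.log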
Modify the compute node:
Since virtual machines are only created on the compute node, changing this on the control node would be pointless.
[root@linux-node2 ~]# vim /etc/nova/nova.conf
virt_type=kvm
libvirt supports quite a few virtualization types; some laptops do not support hardware-assisted virtualization, so change it to qemu.
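Before falling back to qemu, you can check whether the CPU exposes hardware virtualization at all (a quick sketch; 0 means no vmx/svm flags, so KVM will not work and virt_type=qemu is the safe choice):

egrep -c '(vmx|svm)' /proc/cpuinfo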
[root@linux-node2 ~]# /etc/init.d/openstack-nova-compute restart
Stopping openstack-nova-compute: [OK]
Starting openstack-nova-compute: [OK]
Once creation is done, check it, as follows:
Sometimes OpenStack runs into bizarre problems: at first the "Usage" page simply would not display, and it only came back after I restarted each of the OpenStack services.
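If you need to bounce everything in one go, a loop over the init scripts saves typing (a sketch, assuming all OpenStack services use the openstack-* init scripts listed later in this article):

for s in /etc/init.d/openstack-*; do $s restart; done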
Let's turn on DHCP first, because the virtual machines cannot obtain an IP address; Neutron will not add the required rules to iptables automatically.
Now let me explain DHCP a bit. In my production environment we do not use Neutron's DHCP; we rely on the routing function of the physical switches. If the production network already ran its own DHCP, the two would conflict.
Let me configure it here; we are on the control node.
[root@linux-node1 ~]# vim /etc/neutron/dhcp_agent.ini
debug = False
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = false
dhcp_confs = $state_path/dhcp
[root@linux-node1 ~]# grep "^[a-z]" /etc/neutron/dhcp_agent.ini
debug = true
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = false
dhcp_confs = $state_path/dhcp
[root@linux-node1 ~]# cd init.d
[root@linux-node1 init.d]# ls
openstack-cinder-api openstack-glance-api openstack-keystone openstack-neutron-server openstack-nova-compute openstack-nova-novncproxy
openstack-cinder-scheduler openstack-glance-registry openstack-neutron-dhcp-agent openstack-nova-api openstack-nova-conductor openstack-nova-scheduler
openstack-cinder-volume openstack-glance-scrubber openstack-neutron-linuxbridge-agent openstack-nova-cert openstack-nova-consoleauth openstack-nova-spicehtml5proxy
[root@linux-node1 init.d]# cp openstack-neutron-dhcp-agent /etc/init.d/
[root@linux-node1 init.d]# chmod +x /etc/init.d/openstack-neutron-dhcp-agent
[root@linux-node1 init.d]# chkconfig --add openstack-neutron-dhcp-agent
[root@linux-node1 init.d]# chkconfig openstack-neutron-dhcp-agent on
[root@linux-node1 init.d]# /etc/init.d/openstack-neutron-dhcp-agent start
Starting openstack-neutron-dhcp-agent: [OK]
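To confirm the agent came up, you can list the Neutron agents from the control node and look for the dnsmasq process it spawns (a sketch; dnsmasq only appears once a subnet with DHCP enabled actually exists):

neutron agent-list
ps aux | grep dnsmasq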
[root@linux-node1 ~]# virsh net-list
Name State Autostart Persistent
--------------------------------------------------
default active yes yes
[root@linux-node1 ~]# virsh net-destroy default
Network default destroyed
[root@linux-node1 ~]# virsh net-undefine default
Network default has been undefined
[root@linux-node1 ~]# service libvirtd restart
Stopping libvirtd daemon: [OK]
Starting libvirtd daemon: [OK]
[root@linux-node1 ~]# virsh net-list
Name State Autostart Persistent
--------------------------------------------------
[root@linux-node1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:3B:15:9F
inet addr:192.168.33.11 Bcast:192.168.33.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe3b:159f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4664 errors:0 dropped:0 overruns:0 frame:0
TX packets:4630 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:843472 (823.7 KiB) TX bytes:2029897 (1.9 MiB)
eth1 Link encap:Ethernet HWaddr 00:0C:29:3B:15:A9
inet6 addr: fe80::20c:29ff:fe3b:15a9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3234 (3.1 KiB) TX bytes:2700 (2.6 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:48874 errors:0 dropped:0 overruns:0 frame:0
TX packets:48874 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15529706 (14.8 MiB) TX bytes:15529706 (14.8 MiB)