Check whether the CPU supports hardware virtualization. The CPU must support full virtualization and be 64-bit.
Intel: cat /proc/cpuinfo | grep --color vmx
AMD:   cat /proc/cpuinfo | grep --color svm
Check whether the flags field contains vmx (or svm); if it does, the CPU supports full virtualization.
cat /proc/cpuinfo | grep --color lm      # check whether the CPU is 64-bit
[root@sxooky ~]# cat /proc/cpuinfo | grep --color lm
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ida arat epb xsaveopt pln pts dts tpr_shadow vnmi ept vpid fsgsbase bmi1 avx2 smep bmi2 invpcid
[root@sxooky ~]# cat /proc/cpuinfo | grep --color vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ida arat epb xsaveopt pln pts dts tpr_shadow vnmi ept vpid fsgsbase bmi1 avx2 smep bmi2 invpcid
(in both cases the identical flags line is printed once per logical CPU; the duplicate lines are omitted here)
If vmx does not show up, the VM's CPU does not have VT (hardware virtualization) enabled.
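As a quick combined check (a minimal sketch that only restates the grep commands above), the following one-liner counts the logical CPUs whose flags contain vmx or svm; a result of 0 means hardware virtualization is either unsupported or disabled in the BIOS/VM settings:
egrep -c '(vmx|svm)' /proc/cpuinfo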
kvm: the KVM virtualization (kernel) module
virt-manager: the graphical KVM management tool
libvirt: the virtualization service (daemon and API)
[root@sxooky ~]# yum install -y kvm virt-manager libvirt
...... (output omitted)
Installed:
  libvirt.x86_64 0:0.10.2-29.el6          qemu-kvm.x86_64 2:0.12.1.2-2.415.el6
  virt-manager.x86_64 0:0.9.0-19.el6
Dependency Installed:
  augeas-libs.x86_64 0:1.0.0-5.el6                 celt051.x86_64 0:0.5.1.3-0.el6
  cyrus-sasl-md5.x86_64 0:2.1.23-13.el6_3.1        ebtables.x86_64 0:2.0.9-6.el6
  glusterfs-api.x86_64 0:3.4.0.36rhs-1.el6         glusterfs-libs.x86_64 0:3.4.0.36rhs-1.el6
  gnutls-utils.x86_64 0:2.8.5-10.el6_4.2           gpxe-roms-qemu.noarch 0:0.9.7-6.10.el6
  gtk-vnc.x86_64 0:0.3.10-3.el6                    gtk-vnc-python.x86_64 0:0.3.10-3.el6
  iscsi-initiator-utils.x86_64 0:6.2.0.873-10.el6  libcacard.x86_64 0:0.15.0-2.el6
  libcgroup.x86_64 0:0.40.rc1-5.el6                libvirt-client.x86_64 0:0.10.2-29.el6
  libvirt-python.x86_64 0:0.10.2-29.el6            lzop.x86_64 0:1.02-0.9.rc1.el6
  nc.x86_64 0:1.84-22.el6                          netcf-libs.x86_64 0:0.1.9-4.el6
  numad.x86_64 0:0.5-9.20130814git.el6             python-virtinst.noarch 0:0.600.0-18.el6
  qemu-img.x86_64 2:0.12.1.2-2.415.el6             radvd.x86_64 0:1.6-1.el6
  seabios.x86_64 0:0.6.1.2-28.el6                  sgabios-bin.noarch 0:0-0.3.20110621svn.el6
  spice-glib.x86_64 0:0.20-11.el6                  spice-gtk.x86_64 0:0.20-11.el6
  spice-gtk-python.x86_64 0:0.20-11.el6            spice-server.x86_64 0:0.12.4-6.el6
  usbredir.x86_64 0:0.5.1-1.el6                    vgabios.noarch 0:0.6b-3.7.el6
  yajl.x86_64 0:1.0.7-3.el6
Complete!
Note: this uses the system installation image, so configure the local yum repository first.
[root@sxooky ~]# service libvirtd start
Starting libvirtd daemon: 2017-02-21 19:57:58.972+0000: 37536: info : libvirt version: 0.10.2, package: 29.el6 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2013-10-09-06:25:35, x86-026.build.eng.bos.redhat.com)
2017-02-21 19:57:58.972+0000: 37536: warning : virGetHostname:2294 : getaddrinfo failed for 'sxooky': Name or service not known
                                                           [  OK  ]
[root@sxooky ~]# chkconfig libvirtd on
[root@sxooky ~]# lsmod | grep kvm
kvm_intel              54285  0
kvm                   333172  1 kvm_intel
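If lsmod shows nothing, the modules can usually be loaded by hand before checking again (a sketch; use kvm_amd instead of kvm_intel on AMD hosts):
modprobe kvm
modprobe kvm_intel      # on AMD CPUs: modprobe kvm_amd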
To verify that KVM installed successfully, use the virsh command to check the state of virtual machines:
[root@sxooky ~]# virsh list
 Id    Name                           State
----------------------------------------------------
Virtual machines are created with the virt-manager command. To switch the KVM management tool from the English interface to the Chinese interface:
[root@sxooky ~]# echo $LANG
en_US.UTF-8
[root@sxooky ~]# LANG='zh_CN.UTF-8'
[root@sxooky ~]# echo $LANG
zh_CN.UTF-8
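Setting LANG this way only affects the current shell session. To keep the Chinese interface after a reboot on RHEL/CentOS 6, the locale is normally set in /etc/sysconfig/i18n (a sketch):
# /etc/sysconfig/i18n
LANG="zh_CN.UTF-8"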
After running virt-manager, the following window pops up.
Right-click localhost (QEMU), then click "New"; you can now follow the wizard to install a new virtual machine.
Note: do not install the Linux virtual machine just yet.
Extras:
Download the Linux version of VMware Workstation:
https://download3.vmware.com/software/wkst/file/VMware-Workstation-Full-11.1.2-2780323.x86_64.bundle
Installation:
[root@sxooky ~]# chmod 755 VMware-Workstation-Full-11.1.2-2780323.x86_64.bundle
[root@sxooky ~]# ./VMware-Workstation-Full-11.1.2-2780323.x86_64.bundle
Bridge devices explained: what we usually call a bridge device is essentially a layer-2 switch that connects all machines in the same network segment. Our goal is therefore to add the network device eth0 to br0; br0 then acts as the "switch", and the physical machine's eth0 is plugged into it.
Add the bridge device br0 (equivalent to a layer-2 switch):
[root@sxooky ~]# rpm -ivh /mnt/Packages/bridge-utils-1.2-10.el6.x86_64.rpm
warning: /mnt/Packages/bridge-utils-1.2-10.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
[root@sxooky ~]# cd /etc/sysconfig/network-scripts/
[root@sxooky network-scripts]# vim ifcfg-eth0
[root@sxooky network-scripts]# cat !$
cat ifcfg-eth0
DEVICE=eth0
HWADDR=00:0c:29:34:3b:d3
TYPE=Ethernet
UUID=48a80b39-84e0-4cbf-bedc-c97dd1340048
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
#IPADDR=192.168.1.51
#NETMASK=255.255.255.0
#GATEWAY=192.168.1.1
#DNS1=8.8.8.8
BRIDGE="br0"
[root@sxooky network-scripts]# cp ifcfg-eth0 ifcfg-br0
[root@sxooky network-scripts]# vim ifcfg-br0
[root@sxooky network-scripts]# cat !$
cat ifcfg-br0
DEVICE=br0
TYPE="Bridge"
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
IPADDR=192.168.1.51
GATEWAY=192.168.1.1
PREFIX=24
DNS1=8.8.8.8
Note: in TYPE="Bridge" the B must be capitalized.
[root@sxooky network-scripts]# /etc/init.d/network restart
Shutting down interface br0:                               [  OK  ]
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:                                [  OK  ]
Bringing up interface br0:
Determining if ip address 192.168.1.51 is already in use for device br0...
                                                           [  OK  ]
[root@sxooky network-scripts]# ifconfig
br0       Link encap:Ethernet  HWaddr 00:0C:29:34:3B:D3
          inet addr:192.168.1.51  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe34:3bd3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:984 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:48543 (47.4 KiB)  TX bytes:3446 (3.3 KiB)
eth0      Link encap:Ethernet  HWaddr 00:0C:29:34:3B:D3
          inet6 addr: fe80::20c:29ff:fe34:3bd3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:224243 errors:0 dropped:0 overruns:0 frame:0
          TX packets:162118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:26322149 (25.1 MiB)  TX bytes:37133229 (35.4 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:52913 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52913 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:31947192 (30.4 MiB)  TX bytes:31947192 (30.4 MiB)
virbr0    Link encap:Ethernet  HWaddr 52:54:00:C5:29:6C
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
[root@sxooky network-scripts]# /etc/init.d/NetworkManager status
NetworkManager is stopped
[root@sxooky network-scripts]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29343bd3       no              eth0
virbr0          8000.525400c5296c       yes             virbr0-nic
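Once br0 exists, a KVM guest is attached to it through the <interface> element of its libvirt XML. A minimal sketch follows; the MAC address is only a placeholder and the virtio model is an optional assumption:
<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>      <!-- placeholder MAC -->
  <source bridge='br0'/>
  <model type='virtio'/>                  <!-- optional paravirtual NIC model -->
</interface>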
[root@sxooky ~]# lvs
  LV   VG      Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  root vg_rhel -wi-ao---- 10.00g
  swap vg_rhel -wi-ao----  1.00g
[root@sxooky ~]# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  vg_rhel   1   2   0 wz--n- 19.56g 8.56g
[root@sxooky ~]# lvcreate -n kvm -L 8G vg_rhel
  Logical volume "kvm" created
[root@sxooky ~]# mkfs.ext4 /dev/mapper/vg_rhel-kvm
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 20 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@sxooky ~]# mount /dev/mapper/vg_rhel-kvm /var/lib/libvirt/images/    # default path where VM disk images are stored
[root@sxooky ~]# df -h | tail -1
/dev/mapper/vg_rhel-kvm  7.9G  146M  7.4G   2%  /var/lib/libvirt/images
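The mount above does not survive a reboot; to have the volume mounted automatically again, an /etc/fstab entry along these lines can be added (a sketch):
/dev/mapper/vg_rhel-kvm  /var/lib/libvirt/images  ext4  defaults  0 0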
[root@sxooky ~]# virt-manager
Prepare the system installation image: here we simply use the ISO attached to the VMware virtual CD-ROM drive.
Click "Finish"; the new KVM virtual machine has now been created.
Install and start the acpid service inside the newly installed KVM Linux guest. The virsh shutdown command controls a guest's power state by sending ACPI events, but a Linux guest installed under KVM does not run the acpid service by default, so it simply ignores the request.
Solution: install and start the acpid service inside the guest with the following commands:
[root@kvmsxooky ~]# yum install acpid -y
[root@kvmsxooky ~]# service acpid start      # start the acpid service
[root@localhost yum.repos.d]# chkconfig --list acpid
acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off  # it is added to the startup services automatically after installation
Configure the basic environment, then shut down the KVM guest:
[root@kvmsxooky ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
HWADDR="52:54:00:95:89:DE"
IPADDR=192.168.1.201
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
NM_CONTROLLED="yes"
ONBOOT="yes"
[root@kvmsxooky ~]# service network restart      # restart the network
[root@kvmsxooky ~]# cat /etc/yum.repos.d/rhel.repo
[rhel-source]
name=rhel6.5 cdrom
baseurl=file:///mnt
enabled=1
gpgcheck=0
[root@kvmsxooky ~]# sed -n '/^SELINUX=/p' /etc/sysconfig/selinux
SELINUX=disabled
[root@kvmsxooky ~]# getenforce
Disabled
[root@kvmsxooky ~]# chkconfig --list iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@sxooky ~]# init 0
On the KVM host sxooky, modify the KVM guest's configuration file:
[root@sxooky qemu]# vim kvm_shenxiang01.xml
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source dev="/dev/sr0"/>                  # add this line
  <target dev='hdc' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='1' target='0' unit='0'/>
Reload the configuration file, then log in to the KVM guest to check:
[root@sxooky qemu]# virsh create kvm_shenxiang01.xml
Domain kvm_shenxiang01 created from kvm_shenxiang01.xml
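Note that virsh create boots a domain directly from an XML file; if you also want libvirt's registered definition to match the edited file, the usual pattern (a sketch) is:
virsh define /etc/libvirt/qemu/kvm_shenxiang01.xml    # register (or re-register) the definition
virsh start kvm_shenxiang01                           # then start the domain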
[root@sxooky ~]# yum install httpd -y
[root@sxooky ~]# service httpd start
[root@sxooky ~]# mount /dev/cdrom /var/www/html/
Configure the yum repository for the KVM guest rhel6-71:
[root@kvmsxooky ~]# cat /etc/yum.repos.d/rhel.repo
[rhel-source]
name=rhel6.5 cdrom
#baseurl=file:///mnt                    # method 1
baseurl=http://192.168.1.51/            # method 2: change it to this
enabled=1
gpgcheck=0
[root@kvmsxooky ~]# yum install -y acpid
[root@kvmsxooky ~]# service acpid start      # start the acpid service
[root@localhost yum.repos.d]# chkconfig --list acpid
acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off  # added to the startup services automatically after installation
[root@sxooky ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 19    kvm_shenxiang01                running
[root@sxooky ~]# virsh shutdown kvm_shenxiang01     # shut down the kvm_shenxiang01 guest
Domain kvm_shenxiang01 is being shutdown
[root@sxooky ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     kvm_shenxiang01                shut off
[root@sxooky ~]# virsh autostart kvm_shenxiang01    # make this VM start automatically when the physical host boots
Domain kvm_shenxiang01 marked as autostarted
Shut down the virtual machine you want to adjust, then edit its configuration file:
# virsh edit 'your vm name'
Find the following line in the configuration file:
<graphics type='vnc' port='-1'/>
After adding the keyboard layout it looks like this:
<graphics type='vnc' port='-1' keymap='en-us'/>
After saving and exiting, reload the virtual machine's configuration file:
# virsh create /etc/libvirt/qemu/'your vm name'.xml
To avoid this situation in the first place, add the keyboard layout option when installing with virt-install, for example:
--keymap=en-us
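A minimal virt-install invocation with the keymap option might look like the sketch below; the VM name, memory size, disk path, ISO path, and bridge name are all placeholder assumptions:
virt-install --name=testvm --ram=1024 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
  --cdrom /var/lib/libvirt/images/rhel6.iso \
  --network bridge=br0 \
  --vnc --keymap=en-us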
Alternatively, with the virtual machine shut down, select it in the virt-manager interface and go to:
Open --> View --> Details --> Display VNC --> keymap --> en-us
Save, then start the virtual machine again.
Note on installing from CD-ROM:
Open --> View --> Details --> IDE CDROM 1 --> connect/disconnect (connection state)
Troubleshooting: the guest fails to boot with "Kernel panic - not syncing: Attempted to kill init".
Enter single-user mode (to disable SELinux):
When the Linux boot screen appears, press F2 to enter the boot menu.
Type "e".
Select the line kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/luks-e885a2 and type "e" again.
Append selinux=0 after rhgb quiet, then press Enter.
Type "b" to boot the system again, and everything is fine.
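The selinux=0 parameter only applies to that single boot. To disable SELinux permanently inside the guest, the usual approach (a sketch) is to edit /etc/selinux/config and reboot:
# /etc/selinux/config
SELINUX=disabled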
Swap partition: I had not created a swap partition. Installing with SELinux enabled consumes a lot of memory, and when memory runs short the system falls back on swap; without a swap partition (or with one that is too small) the panic shown above appears. A swap partition is therefore still well worth having; it at least solved my problem.
[root@sxooky ~]# virsh list            # show only running virtual machines (Id and name)
 Id    Name                           State
----------------------------------------------------
 12    kvm_shenxiang01                running
[root@sxooky ~]# virsh list --all      # show all virtual machines, including those that are shut off
 Id    Name                           State
----------------------------------------------------
 12    kvm_shenxiang01                running
[root@sxooky ~]# virsh shutdown kvm_shenxiang01     # shut down the KVM guest kvm_shenxiang01
Domain kvm_shenxiang01 is being shutdown