OpenStack Virtual Machine Disk File Types and Storage Methods
NOTE: The context of this article is limited to the native OpenStack Libvirt driver (QEMU-KVM hypervisor).
Depending on how the virtual machine is booted and what type of disk it uses, instances can be divided into:
Boot from image
Boot from volume
NOTE: For more on virtual machine disk file types and storage methods, see the earlier article 《OpenStack 虚拟机的磁盘文件类型与存储方式》.
Typical file storage is NAS (Network Attached Storage) together with NFS (Network File System). Multiple compute nodes can share instances_path over NFS to store instance disk files, and Cinder also supports an NFS backend that places volumes in a shared directory. Clearly, in a shared-storage scenario only the virtual machine's memory data needs to be migrated.
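A minimal sketch of the shared instances_path case (the NFS server address and export path are illustrative; the mount must be made on every compute node that takes part in migration):

mount -t nfs 192.168.0.10:/export/nova_instances /var/lib/nova/instances
chown nova:nova /var/lib/nova/instances
# Make the mount persistent across reboots
echo "192.168.0.10:/export/nova_instances /var/lib/nova/instances nfs defaults 0 0" >> /etc/fstab

On the Cinder side, the NFS driver (cinder.volume.drivers.nfs.NfsDriver) reads its list of shares from the file pointed to by the nfs_shares_config option.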
Typical block storage (Block Storage) is SAN (Storage Area Network). Block storage attaches block devices to application servers over protocols such as iSCSI and FC, where they are taken over by the file system layer, and data is stored in volumes as blocks. Likewise, Nova, Glance, and Cinder can all use a block storage backend, i.e. the virtual machine uses shared system and data disks, while the local instances_path only holds the ephemeral file, swap file, console.log, disk.info, and other disk and configuration files. If the virtual machine uses no ephemeral or swap disks, then once again only the memory data needs to be migrated.
At some level we may as well group file storage and block storage together as shared storage. Note, however, that strictly speaking pure block storage is not shared storage, because a block device cannot be written from multiple attachment points at the same time; that requires support from the business layer, e.g. Cinder Multi-Attach. This is also why migration treats file storage and block storage differently: with block storage, the block device has to be detached first and then re-attached toward the destination, while with file storage the destination node only needs to access the mounted directory directly.
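A minimal sketch of Cinder multi-attach (requires a release and a backend driver that support it; the type, volume, and server names are illustrative):

openstack volume type create multiattach-type --property multiattach="<is> True"
openstack volume create --type multiattach-type --size 10 shared-vol
openstack server add volume VM-A shared-vol
openstack server add volume VM-B shared-vol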
It is worth mentioning that NAS and SAN, as traditional storage solutions, provide file storage and block storage respectively, whereas Ceph, as a unified storage solution, can provide file, block, and object storage at the same time and is already widely used in production.
In a non-shared-storage scenario the virtual machine's disk files are stored locally, so both the memory data and the local disk files have to be migrated. The migration is a block-level copy of the data, block migration for short. Obviously this scenario is not friendly to live migration, because the long copy time raises the risk of data loss (e.g. network problems during the copy).
The best-known migration types are cold migration and live migration. The two are easy to tell apart by whether the guest has to be shut down during the migration.
Cold migration: the virtual machine is shut down and its data is then migrated. Only the system disk and data disk data need to be migrated, not the memory, using block migration.
Live migration: also called dynamic or online migration, a migration that is transparent to the user. The virtual machine does not need to be shut down and the workload is not interrupted, but it is correspondingly a more complex migration method.
Non-live migration, also known as cold migration or simply migration. The instance is shut down, then moved to another hypervisor and restarted. The instance recognizes that it was rebooted, and the application running on the instance is disrupted.
Live migration. The instance keeps running throughout the migration. This is useful when it is not possible or desirable to stop the application running on the instance. Live migrations can be classified further by the way they treat instance storage:
- Shared storage-based live migration. The instance has ephemeral disks that are located on storage shared between the source and destination hosts.
- Block live migration, or simply block migration. The instance has ephemeral disks that are not shared between the source and destination hosts. Block migration is incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive).
- Volume-backed live migration. Instances use volumes rather than ephemeral disks.
Depending on the type of virtual machine data and on the storage scenario, migrations fall into the two categories below (a brief command sketch follows the list):
Cold migration
Live migration
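A minimal CLI sketch of the two, using the same python-openstackclient as in the examples below (the cold-migration confirm syntax differs across client versions):

# Cold migration: the scheduler picks a destination; confirm (or revert) afterwards, as with a resize
openstack server migrate <server>
openstack server resize --confirm <server>
# Live migration with block migration and an explicit destination, as used later in this article
openstack server migrate --block-migration --live <destination-host> --wait <server>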
Migration scenario information:
Step 1. Make sure the nova user can SSH between the source and destination compute nodes without a password. nova-compute.service runs as the nova user by default, and the service uses scp between the source and destination compute nodes to copy data. Otherwise you will hit the following error (a key-distribution sketch follows the error output):
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command.
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Command: ssh -o BatchMode=yes 172.17.1.16 mkdir -p /var/lib/nova/instances/1365380a-a532-4811-8784-57f507acac46
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Exit code: 255
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Stdout: u''
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Stderr: u'Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n'
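A minimal sketch of setting up the passwordless SSH trust (it assumes the nova user's home directory is /var/lib/nova; the destination IP is the one from the error above):

# On the source compute node, as root
usermod -s /bin/bash nova
su - nova -c "ssh-keygen -t rsa -N '' -f /var/lib/nova/.ssh/id_rsa"
su - nova -c "cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys"
# Distribute the same .ssh directory to the destination compute node
scp -r /var/lib/nova/.ssh root@172.17.1.16:/var/lib/nova/
ssh root@172.17.1.16 "chown -R nova:nova /var/lib/nova/.ssh && chmod 700 /var/lib/nova/.ssh && chmod 600 /var/lib/nova/.ssh/authorized_keys"
# Verify non-interactive login as the nova user
su - nova -c "ssh -o BatchMode=yes 172.17.1.16 true" && echo OK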
Step 2. Disable SELinux, or run the following command so that the SSH authorized_keys file can be accessed:
chcon -R -t ssh_home_t /var/lib/nova/.ssh/authorized_keys
NOTE: SELinux-related logs are in /var/log/audit/audit.log.
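If SELinux stays enabled, denials can be confirmed before and after applying the chcon fix, for example:

grep denied /var/log/audit/audit.log | grep ssh
ausearch -m avc -ts recent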
Step 3. Perform the migration
[stack@undercloud (overcloudrc) ~]$ openstack server create --image c9debff2-cd87-4688-b712-87a2948461ce --flavor a0dd32df-8c1b-47ed-9b7c-88612a5dd78d --nic net-id=11d8d379-dcd9-46ff-9cd1-25d2737affb4 tst-block-migrate-vm
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | WswHKEmcPnV3 |
| config_drive | |
| created | 2019-03-15T09:32:17Z |
| flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) |
| hostId | |
| id | 80996760-0c30-4e2a-847a-b9d882182df5 |
| image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) |
| key_name | None |
| name | tst-block-migrate-vm |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | a6c78435075246f3aa5ab946b87086c5 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2019-03-15T09:32:17Z |
| user_id | 4fe574569664493bbd660abfe762a630 |

[stack@undercloud (overcloudrc) ~]$ openstack server show tst-block-migrate-vm
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | ovs |
| OS-EXT-SRV-ATTR:host | overcloud-ovscompute-1.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-1.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-0000008e |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-03-15T09:32:31.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | net1=10.0.1.14 |
| config_drive | |
| created | 2019-03-15T09:32:17Z |
| flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) |
| hostId | 9f1230901ddf3fe0e1a41e1c650a784c122b791f89fdf66a40cff3d6 |
| id | 80996760-0c30-4e2a-847a-b9d882182df5 |
| image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) |
| key_name | None |
| name | tst-block-migrate-vm |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | a6c78435075246f3aa5ab946b87086c5 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2019-03-15T09:32:32Z |
| user_id | 4fe574569664493bbd660abfe762a630 |

[stack@undercloud (overcloudrc) ~]$ openstack server migrate --block-migration --wait tst-block-migrate-vm
Complete

[stack@undercloud (overcloudrc) ~]$ openstack server show tst-block-migrate-vm
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | ovs |
| OS-EXT-SRV-ATTR:host | overcloud-ovscompute-0.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-0.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-0000008e |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-03-15T09:33:52.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | net1=10.0.1.14 |
| config_drive | |
| created | 2019-03-15T09:32:17Z |
| flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) |
| hostId | 0f2ec590cd73fe0e9522f1ba715dae7a7d4b884e15aa8254defe85d0 |
| id | 80996760-0c30-4e2a-847a-b9d882182df5 |
| image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) |
| key_name | None |
| name | tst-block-migrate-vm |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | a6c78435075246f3aa5ab946b87086c5 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2019-03-15T09:34:53Z |
| user_id | 4fe574569664493bbd660abfe762a630 |
NOTE: No destination compute node was explicitly chosen in the migration above; the choice is left to nova-scheduler.service.
As the scenario above shows, Nova copies this virtual machine's disk files using block migration, which is also reflected in the operation logs.
Source host log analysis:
# Enter the migration logic.
Starting migrate_disk_and_power_off
# Try to create a tmp file on the destination compute node to determine whether the source and destination use shared storage
Creating file /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/8ac1bb9977bc4b4b948c4c8fdad9f1f6.tmp on remote host 172.17.1.16 create_file
# Creating the tmp file fails, which means shared storage is not in use, because the instance directory does not exist
'ssh -o BatchMode=yes 172.17.1.16 touch /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/8ac1bb9977bc4b4b948c4c8fdad9f1f6.tmp' failed. Not Retrying.
# Create the instance directory on the destination compute node
Creating directory /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5 on remote host 172.17.1.16
# Power off the virtual machine
Shutting down instance
Instance shutdown successfully after 35 seconds.
# Destroy the instance at the hypervisor layer
Instance destroyed successfully.
# Rename the instance directory
mv /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5 /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize
# Copy the instance's disk file and configuration file to the destination host
scp -r /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize/disk 172.17.1.16:/var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk
scp -r /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize/disk.info 172.17.1.16:/var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk.info
# The VM has stopped at the Nova layer
VM Stopped (Lifecycle Event)
# Now actually delete the instance directory on the source host
rm -rf /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize
# Remove the instance's network devices
Unplugging vif VIFBridge(active=True,address=fa:16:3e:d0:f6:a4,bridge_name='qbr15c7b577-89',has_traffic_filtering=True,id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1,network=Network(11d8d379-dcd9-46ff-9cd1-25d2737affb4),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap15c7b577-89')
brctl delif qbr15c7b577-89 qvb15c7b577-89
ip link set qbr15c7b577-89 down
brctl delbr qbr15c7b577-89
ovs-vsctl --timeout=120 -- --if-exists del-port br-int qvo15c7b577-89
ip link delete qvo15c7b577-89
Destination host log analysis:
# Claim the instance at the Nova layer, reserving resources for it
Claim successful
# Migrating
Migrating
# Update the Port binding:host_id attribute of the instance's vNIC
Updating port 15c7b577-89f5-46f6-8111-5f4e0c8ebaa1 with attributes {'binding:host_id': u'overcloud-ovscompute-0.localdomain'}
# Create the instance image
Creating image
# Check whether the instance's disk file can be resized
Checking if we can resize image /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk.
Cannot resize image /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk
# Make sure the instance's console.log file exists
Ensure instance console log exists: /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/console.log
# Assemble the GuestOS XML
End _get_guest_xml
# Plug the instance's virtual network devices
Plugging vif VIFBridge(active=False,address=fa:16:3e:d0:f6:a4,bridge_name='qbr15c7b577-89',has_traffic_filtering=True,id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1,network=Network(11d8d379-dcd9-46ff-9cd1-25d2737affb4),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap15c7b577-89')
brctl addbr qbr15c7b577-89
brctl setfd qbr15c7b577-89 0
brctl stp qbr15c7b577-89 off
brctl setageing qbr15c7b577-89 0
tee /sys/class/net/qbr15c7b577-89/bridge/multicast_snooping
tee /proc/sys/net/ipv6/conf/qbr15c7b577-89/disable_ipv6
ip link add qvb15c7b577-89 type veth peer name qvo15c7b577-89
ip link set qvb15c7b577-89 up
ip link set qvb15c7b577-89 promisc on
ip link set qvb15c7b577-89 mtu 1450
ip link set qvo15c7b577-89 up
ip link set qvo15c7b577-89 promisc on
ip link set qvo15c7b577-89 mtu 1450
ip link set qbr15c7b577-89 up
brctl addif qbr15c7b577-89 qvb15c7b577-89
ovs-vsctl -- --may-exist add-br br-int -- set Bridge br-int datapath_type=system
ovs-vsctl --timeout=120 -- --if-exists del-port qvo15c7b577-89 -- add-port br-int qvo15c7b577-89 -- set Interface qvo15c7b577-89 external-ids:iface-id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d0:f6:a4 external-ids:vm-uuid=80996760-0c30-4e2a-847a-b9d882182df
ip link set qvo15c7b577-89 mtu 1450
# The VM is running at the Nova layer
Instance running successfully.
VM Started (Lifecycle Event)
NOTE: Although the source and destination host logs are listed separately above, the two nova-compute.service processes actually interleave; the destination does not wait until the source has finished all of its migration handling before starting its own.
Migration scenario information:
As the scenario information shows, the virtual machine's local disk files and memory data are all migrated by Libvirt live migration; the shared block devices are migrated by re-directed attachment; and for the ports, Neutron handles creating and deleting the virtual network devices.
To ensure OpenStack live migration works correctly, a few prerequisites must be met:
Configure Libvirt to transfer data over the SSH protocol:
[libvirt]
...
live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity
- qemu+ssh: use the SSH protocol
- nova_migration: the user that the SSH connection runs as
- %s: the compute node hostname, e.g. nova_migration@cpu01
- keyfile: the SSH private key used for secure communication

Besides this, the TCP protocol can also be used for data transfer:
live_migration_uri=qemu+tcp://nova_migration@%s/system
If TCP is used, the Libvirt TCP remote listening service must also be enabled on both the source and destination nodes:
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/init/libvirt-bin.conf
# options passed to libvirtd, add "-l" to listen on tcp
env libvirtd_opts="-d -l"

# /etc/default/libvirt-bin
libvirtd_opts="-d -l"
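Either transport can be verified by hand with virsh before trusting Nova with it. A quick check, using an illustrative destination hostname:

# SSH transport, same URI template as live_migration_uri above
virsh -c 'qemu+ssh://nova_migration@overcloud-ovscompute-1.localdomain/system?keyfile=/etc/nova/migration/identity' list --all
# TCP transport
virsh -c qemu+tcp://overcloud-ovscompute-1.localdomain/system list --all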
NOTE: Block migration can also be used during a live migration, but it is not the recommended approach.
Block live migration requires copying disks from the source to the destination host. It takes more time and puts more load on the network. Shared-storage and volume-backed live migration does not copy disks.
[stack@undercloud (overcloudrc) ~]$ openstack server show VM1
| Field | Value |
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | ovs |
| OS-EXT-SRV-ATTR:host | overcloud-ovscompute-0.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-0.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-000000a0 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-03-19T08:04:50.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | net1=10.0.1.17, 10.0.1.8, 10.0.1.16, 10.0.1.10, 10.0.1.18, 10.0.1.19 |
| config_drive | |
| created | 2019-03-19T08:04:04Z |
| flavor | Flavor1 (2ff09ec5-19e4-40b9-a52e-6026652c0788) |
| hostId | 0f2ec590cd73fe0e9522f1ba715dae7a7d4b884e15aa8254defe85d0 |
| id | a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 |
| image | CentOS-7-x86_64-GenericCloud (0aff2888-47f8-4133-928a-9c54414b3afb) |
| key_name | stack |
| name | VM1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | a6c78435075246f3aa5ab946b87086c5 |
| properties | |
| security_groups | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}] |
| status | ACTIVE |
| updated | 2019-03-19T08:04:50Z |
| user_id | 4fe574569664493bbd660abfe762a630 |

[stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume1
[stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume2
[stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume3
[stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume4
[stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume5
[stack@undercloud (overcloudrc) ~]$ openstack server migrate --block-migration --live overcloud-ovscompute-1.localdomain --wait VM1
Complete

[stack@undercloud (overcloudrc) ~]$ openstack server show VM1
| Field | Value |
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | ovs |
| OS-EXT-SRV-ATTR:host | overcloud-ovscompute-1.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-1.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-000000a0 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-03-19T08:04:50.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | net1=10.0.1.17, 10.0.1.8, 10.0.1.16, 10.0.1.10, 10.0.1.18, 10.0.1.19 |
| config_drive | |
| created | 2019-03-19T08:04:04Z |
| flavor | Flavor1 (2ff09ec5-19e4-40b9-a52e-6026652c0788) |
| hostId | 9f1230901ddf3fe0e1a41e1c650a784c122b791f89fdf66a40cff3d6 |
| id | a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 |
| image | CentOS-7-x86_64-GenericCloud (0aff2888-47f8-4133-928a-9c54414b3afb) |
| key_name | stack |
| name | VM1 |
| os-extended-volumes:volumes_attached | [{u'id': u'afbe0783-50b8-4036-b59a-69b94dbdb630'}, {u'id': u'27fc8950-6e98-4ba7-9366-907e8fd2a90a'}, {u'id': u'df8b33a8-6d8c-4e0e-a742-869fec4ff923'}, {u'id': u'534bb675-4d8c-4380-8bd2-4aeaedbcda40'}, {u'id': u'623a513a-2cca-47e5-9426-71a154cbe0c0'}] |
| progress | 0 |
| project_id | a6c78435075246f3aa5ab946b87086c5 |
| properties | |
| security_groups | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}] |
| status | ACTIVE |
| updated | 2019-03-19T08:18:39Z |
| user_id | 4fe574569664493bbd660abfe762a630 |
NUMA affinity and CPU pinning after the migration (a quick XML check follows the output):
# Source host
[root@overcloud-ovscompute-0 nova]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8
node 0 size: 4095 MB
node 0 free: 1273 MB
node 1 cpus: 9 10 11 12 13 14 15
node 1 size: 4096 MB
node 1 free: 2410 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
[root@overcloud-ovscompute-0 nova]# virsh list
 Id    Name                           State
----------------------------------------------------
 11    instance-000000a0              running
[root@overcloud-ovscompute-0 nova]# virsh vcpuinfo instance-000000a0
VCPU:           0
CPU:            0
State:          running
CPU time:       50.1s
CPU Affinity:   y---------------

VCPU:           1
CPU:            1
State:          running
CPU time:       26.8s
CPU Affinity:   -y--------------

[root@overcloud-ovscompute-0 nova]# virsh vcpupin instance-000000a0
VCPU: CPU Affinity
----------------------------------
   0: 0
   1: 1

# Destination host
[root@overcloud-ovscompute-1 nova]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8
node 0 size: 4095 MB
node 0 free: 1420 MB
node 1 cpus: 9 10 11 12 13 14 15
node 1 size: 4096 MB
node 1 free: 2270 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
[root@overcloud-ovscompute-1 nova]# virsh vcpuinfo instance-000000a0
VCPU:           0
CPU:            0
State:          running
CPU time:       2.9s
CPU Affinity:   y---------------

VCPU:           1
CPU:            1
State:          running
CPU time:       1.2s
CPU Affinity:   -y--------------

[root@overcloud-ovscompute-1 nova]# virsh vcpupin instance-000000a0
VCPU: CPU Affinity
----------------------------------
   0: 0
   1: 1
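The pinning seen above comes from the guest XML that Nova regenerates on the destination; it can be checked directly there, for example:

virsh dumpxml instance-000000a0 | grep -A 4 '<cputune>'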
Source host log analysis (a progress-monitoring sketch follows the log):
# Create a tmpfile to detect whether shared storage is in use
Check if temp file /var/lib/nova/instances/tmpZ0Bj8s exists to indicate shared storage is being used for migration. Exists? False
# Start the live migration
Starting monitoring of live migration _live_migration /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6566
# Poll libvirtd for the live migration status, log the progress, and dynamically raise the downtime (maximum pause time)
# Because the instance's data keeps changing, the final migrated size is usually larger than data_gb
Current None elapsed 0 steps [(0, 46), (300, 47), (600, 48), (900, 51), (1200, 57), (1500, 66), (1800, 84), (2100, 117), (2400, 179), (2700, 291), (3000, 500)] update_downtime /usr/lib/python2.7/site-packages/nova/virt/libvirt/migration.py:348
Increasing downtime to 46 ms after 0 sec elapsed time
Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)
# The VM is paused at the Nova layer
VM Paused (Lifecycle Event)
# Data migration finished
Migration operation has completed
Migration operation thread has finished
# Detach the instance's shared block devices
calling os-brick to detach iSCSI Volume
disconnect_volume
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-afbe0783-50b8-4036-b59a-69b94dbdb630 -p 172.17.3.18:3260 --op delete
Checking to see if SCSI volumes sdc have been removed.
SCSI volumes sdc have been removed.
# The VM has stopped at the Nova layer
VM Stopped (Lifecycle Event)
# Unplug the instance's network
Unplugging vif VIFBridge
# Delete the instance's local disk files
mv /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del
Deleting instance files /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del
Deletion of /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del complete
# Migration finished
Migrating instance to overcloud-ovscompute-1.localdomain finished successfully.
Live migration monitoring is all done
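The progress that Nova logs here can also be watched by hand on the source host while the migration is in flight, for example:

# Memory/disk processed, remaining, and total for the running migration job
watch -n 1 virsh domjobinfo instance-000000a0
# Statistics of the completed job (supported by newer libvirt versions)
virsh domjobinfo instance-000000a0 --completed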
Destination host log analysis (a backing-chain check follows the log):
# Create a tmpfile to detect whether shared storage is in use
Creating tmpfile /var/lib/nova/instances/tmpZ0Bj8s to notify to other compute nodes that they should mount the same storage.
# Create the instance's local disk files
Creating instance directory: /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37
touch -c /var/lib/nova/instances/_base/ff34147b1062cd454ae2a8959f069e2e18691ec9
qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff34147b1062cd454ae2a8959f069e2e18691ec9 /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk
Creating disk.info with the contents: {u'/var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk': u'qcow2'}
Checking to make sure images and backing files are present before live migration.
# Check whether the disk file can be resized
Checking if we can resize image /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk. size=10737418240
qemu-img resize /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk 10737418240
# Attach the instance's shared block devices
Connecting volumes before live migration.
Calling os-brick to attach iSCSI Volume
connect_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py:63
Trying to connect to iSCSI portal 172.17.3.18:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-afbe0783-50b8-4036-b59a-69b94dbdb630 -p 172.17.3.18:3260
Attached iSCSI volume {'path': u'/dev/sda', 'scsi_wwn': '360014052de14ef00f124a939740ba645', 'type': 'block'}
# Plug the instance's network
Plugging VIFs before live migration.
# Update the Port information
Port 35f7ede8-2a78-44b6-8c65-108e6f1080aa updated with migration profile {'migrating_to': 'overcloud-ovscompute-1.localdomain'} successfully
# The VM has started at the Nova layer
VM Started (Lifecycle Event)
VM Resumed (Lifecycle Event)
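The destination pre-creates an empty qcow2 overlay on top of the shared _base image, and the block migration then fills it in. The resulting backing chain can be inspected on the destination host (paths as in this example):

qemu-img info /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk
# Newer qemu-img versions can print the whole chain at once
qemu-img info --backing-chain /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk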
As the logs above show, the key steps of a live migration are carried out at the hypervisor layer; Nova only wraps the hypervisor's live migration capability and handles scheduling. In this example, Libvirt live migration transferred the virtual machine's local disk files and memory data to the destination host over SSH.
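A rough hand-driven equivalent of what Nova asks libvirt to do in this example looks like the following (run on the source host; Nova itself calls the libvirt API rather than shelling out to virsh, and the exact flag combination here is an assumption for illustration):

virsh migrate --live --p2p --undefinesource --copy-storage-inc --verbose \
    instance-000000a0 \
    'qemu+ssh://nova_migration@overcloud-ovscompute-1.localdomain/system?keyfile=/etc/nova/migration/identity'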
https://developers.redhat.com/blog/2015/03/24/live-migrating-qemu-kvm-virtual-machines/
http://www.javashuo.com/article/p-scpajatm-mr.html
https://docs.openstack.org/nova/pike/admin/configuring-migrations.html
https://docs.openstack.org/nova/pike/admin/live-migration-usage.html
https://blog.csdn.net/lemontree1945/article/details/79901874
https://www.ibm.com/developerworks/cn/linux/l-cn-mgrtvm1/index.html
https://blog.csdn.net/hawkerou/article/details/53482268