Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines. It may be used independently or as part of an OpenStack Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova), Network (neutron), Image (glance), and Object (swift) services.
The Bare Metal service manages hardware through both common (e.g. PXE and IPMI) and vendor-specific remote management protocols. It provides the cloud operator with a unified interface to a heterogeneous fleet of servers while also providing the Compute service with an interface that allows physical servers to be managed as though they were virtual machines.
Official documentation: https://docs.openstack.org/ironic/latest/
A bare metal node is a physical server with no operating system deployed on it. Compared with virtual machines, bare metal nodes offer stronger compute power, exclusive use of resources, and better security isolation. Ironic aims to provide users with self-service bare metal management. It can be used standalone or integrated with OpenStack; here we focus on the latter.
Note: Bifrost is a collection of Ansible playbooks for automating the deployment of standalone Ironic.
Ironic provides the bare metal management service for OpenStack, letting users manage bare metal nodes just like virtual machines: deploying a physical server becomes as simple as deploying a VM, giving users a multi-tenant-networked bare metal cloud infrastructure. Ironic relies mainly on PXE and IPMI to batch-deploy bare metal nodes and control their power state, so most physical server models can have their OS installed and power managed through Ironic; for the remaining models, targeted management drivers can be developed quickly on top of Ironic's pluggable driver architecture. With a standard API, broad driver support, and a lightweight footprint, Ironic suits use cases from small edge deployments to large data centers, and provides an ideal environment for hosting high-performance cloud applications and architectures, including popular container orchestration platforms such as Kubernetes.
Problems that Ironic can solve:
How Ironic cooperates with other OpenStack projects:
Note: hardware management can be done out-of-band or in-band.
• node: basic information about a bare metal machine, including CPU and storage, as well as the driver type Ironic uses to manage it.
• chassis: bare metal template information, used to group and classify nodes.
• port: basic information about a node's network interfaces, including MAC address and LLDP data.
• portgroup: port-group configuration of the top-of-rack switch ports connected to the node's NICs.
• conductor: records the state of each ironic-conductor instance and the driver types it supports.
• volume connector/target: records block device attachment information for the node.
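To make the relationships between these resources concrete, here is a minimal, illustrative sketch of them as Python dataclasses. The field names are simplified assumptions for illustration; the real Ironic objects live in ironic.objects and carry many more attributes:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Port:
    mac_address: str                                           # NIC MAC, used for PXE booting
    local_link_connection: dict = field(default_factory=dict)  # LLDP-derived switch/port info


@dataclass
class Node:
    uuid: str
    driver: str                                      # e.g. "ipmi"
    properties: dict = field(default_factory=dict)   # cpus, memory_mb, local_gb, ...
    chassis_uuid: Optional[str] = None               # optional grouping via a chassis
    ports: List[Port] = field(default_factory=list)


# Enroll one node with a single PXE NIC (values are illustrative)
node = Node(uuid="adda54fb-1038-4634-8d82-53922e875a1f", driver="ipmi",
            properties={"cpus": 4, "memory_mb": 8192})
node.ports.append(Port(mac_address="52:54:00:12:34:56"))
```

The one-to-many shape (a node owning several ports, optionally grouped by a chassis) mirrors the resource list above.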
Official documentation: https://docs.openstack.org/ironic/rocky/contributor/states.html
Once a machine has been racked, cabled, and powered on, an administrator enrolls its information into Ironic so that all subsequent operations can be performed on it. At this stage, Ironic Inspector can optionally be used to automatically collect the machine's hardware configuration and top-of-rack switch information, i.e. bare metal introspection. Depending on the driver in use, however, Ironic Inspector may not be able to collect everything, and some data still has to be entered manually.
The inspection phase is carried out jointly by IPA and Ironic Inspector: the former collects and transmits the data, the latter processes it. The data collected by IPA includes CPU count, CPU feature flags, memory size, disk size, NIC IPs and MAC addresses; this information is later used by the Nova scheduler as scheduling factors. If an SDN network is attached, LLDP (Link Layer Discovery Protocol) frames received on the machine's tenant NICs must also be captured. These LLDP frames are emitted by the SDN controller and carry a Chassis ID and Port ID identifying the switch port, so that the controller can push forwarding rules to exactly the right switch port.
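To show what extracting the Chassis ID and Port ID from an LLDP frame involves, here is a small illustrative TLV parser. This is not Ironic Inspector's actual code, just a sketch of the 802.1AB TLV layout it has to deal with (7-bit type, 9-bit length per TLV header):

```python
def parse_lldp_tlvs(frame: bytes) -> dict:
    """Parse the TLV list of an LLDP payload (Ethernet header already stripped).

    Each TLV header is 2 bytes: 7 bits of type, 9 bits of length.
    Type 1 = Chassis ID, type 2 = Port ID, type 0 = end of LLDPDU.
    """
    tlvs = {}
    i = 0
    while i + 2 <= len(frame):
        header = int.from_bytes(frame[i:i + 2], "big")
        tlv_type, tlv_len = header >> 9, header & 0x1FF
        if tlv_type == 0:                      # End of LLDPDU
            break
        value = frame[i + 2:i + 2 + tlv_len]
        if tlv_type == 1:
            tlvs["chassis_id"] = value[1:]     # first value byte is the subtype
        elif tlv_type == 2:
            tlvs["port_id"] = value[1:]
        i += 2 + tlv_len
    return tlvs


# Hand-crafted payload: Chassis ID "sw1" (subtype 7), Port ID "Eth1/1" (subtype 5)
payload = b"\x02\x04\x07sw1" + b"\x04\x07\x05Eth1/1" + b"\x00\x00"
tlvs = parse_lldp_tlvs(payload)
# tlvs == {"chassis_id": b"sw1", "port_id": b"Eth1/1"}
```

These two values are exactly what lets the SDN controller correlate a node's NIC with a specific switch port.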
Inspection state machine: [data enrollment] -> [manageable] -> [inspection] -> [node available].
After the `manage` API request, Ironic validates the data the user enrolled, confirming whether it satisfies the requirements of the next operation. After the `provide` API request (this action tells Ironic that the node is ready to move on to operating system deployment), the node enters the cleaning state (the point of this phase is to hand the user a pristine bare metal node). Once cleaning completes, the node is formally marked available and can be used freely; by this point the node's basic information (e.g. CPU, RAM, disks, NICs) has all been recorded. Once a machine has finished enrollment and inspection and is in the available state, it can enter the provision phase: the user specifies an image, network, and other details to deploy a bare metal instance, and the cloud platform automates resource scheduling, OS installation, and network configuration. Once the instance is created, the user can run workloads on the physical server.
Provision state machine: [set deploy template] -> [pass deploy parameters] -> [boot into the deploy ramdisk] -> [ironic-python-agent takes over the node] -> [write the image] -> [boot the operating system] -> [node active].
After the `active` API request triggers OS deployment, the deployment details (e.g. user image, instance metadata, allocated network resources) are persisted in the node's database record. The clean phase guarantees that, in a multi-tenant environment, Ironic always hands different users pristine bare metal nodes (uniform configuration, no leftover data). It defines a single, extensible pipeline for configuring and wiping nodes; through it, users can specify the cleaning steps they need, such as disk erasure, RAID configuration, or BIOS settings, along with the priority order in which those steps run.
Download DevStack:
git clone https://git.openstack.org/openstack-dev/devstack.git -b stable/stein
sudo ./devstack/tools/create-stack-user.sh
sudo su - stack
Configure local.conf:
[[local|localrc]]
HOST_IP=192.168.1.100

# Use TryStack (99cloud) git mirror
GIT_BASE=http://git.trystack.cn
#GIT_BASE=https://git.openstack.org

# Reclone each time
RECLONE=no

# Enable Logging
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=$DEST/logs
LOGDAYS=1

# Define images to be automatically downloaded during the DevStack built process.
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"

# use TryStack git mirror
GIT_BASE=http://git.trystack.cn
NOVNC_REPO=http://git.trystack.cn/kanaka/noVNC.git
SPICE_REPO=http://git.trystack.cn/git/spice/spice-html5.git

# Apache Frontend
ENABLE_HTTPD_MOD_WSGI_SERVICES=False

# IP Version
IP_VERSION=4

# Credentials
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SWIFT_HASH=password
SWIFT_TEMPURL_KEY=password

# Enable Ironic plugin
enable_plugin ironic https://git.openstack.org/openstack/ironic stable/stein

# Disable nova novnc service, ironic does not support it anyway.
disable_service n-novnc

# Enable Swift for the direct deploy interface.
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Cinder
VOLUME_GROUP_NAME="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=100G

# Neutron
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta

# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256

# Swift temp URL's are required for the direct deploy interface
SWIFT_ENABLE_TEMPURLS=True

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_BAREMETAL_BASIC_OPS=True
DEFAULT_INSTANCE_TYPE=baremetal

# Enable additional hardware types, if needed.
#IRONIC_ENABLED_HARDWARE_TYPES=ipmi,fake-hardware
# Don't forget that many hardware types require enabling of additional
# interfaces, most often power and management:
#IRONIC_ENABLED_MANAGEMENT_INTERFACES=ipmitool,fake
#IRONIC_ENABLED_POWER_INTERFACES=ipmitool,fake

# The 'ipmi' hardware type's default deploy interface is 'iscsi'.
# This would change the default to 'direct':
#IRONIC_DEFAULT_DEPLOY_INTERFACE=direct

# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=ipmi

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1280
IRONIC_VM_SPECS_DISK=10

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0

# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False
VIRT_DRIVER=ironic

# Log all output to files
LOGFILE=/opt/stack/devstack.log
LOGDIR=/opt/stack/logs
IRONIC_VM_LOG_DIR=/opt/stack/ironic-bm-logs
Check the service status:
[root@localhost ~]# openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| 3  | nova-scheduler   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:18.000000 |
| 6  | nova-consoleauth | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:22.000000 |
| 7  | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:14.000000 |
| 1  | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:15.000000 |
| 3  | nova-compute     | localhost.localdomain | nova     | enabled | up    | 2019-05-03T18:56:18.000000 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+

[root@localhost ~]# openstack network agent list
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| 52f23bda-a645-4459-bcac-686d98d23345 | Open vSwitch agent | localhost.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| 7113312f-b0b7-4ce8-ab15-428768b30855 | L3 agent           | localhost.localdomain | nova              | :-)   | UP    | neutron-l3-agent          |
| a45fb074-3b24-4b9e-8c8a-43117f6195f2 | Metadata agent     | localhost.localdomain | None              | :-)   | UP    | neutron-metadata-agent    |
| f207648b-03f3-4161-872e-5210f29099c6 | DHCP agent         | localhost.localdomain | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+

[root@localhost ~]# openstack volume service list
+------------------+-----------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                              | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | localhost.localdomain             | nova | enabled | up    | 2019-05-03T18:56:54.000000 |
| cinder-volume    | localhost.localdomain@lvmdriver-1 | nova | enabled | up    | 2019-05-03T18:56:53.000000 |
+------------------+-----------------------------------+------+---------+-------+----------------------------+

[root@localhost ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| fake-hardware       | localhost      |
| ipmi                | localhost      |
+---------------------+----------------+

[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None          | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None          | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | None          | power off   | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
Deploy a bare metal instance:
[root@localhost ~]# openstack server create --flavor baremetal --image cirros-0.4.0-x86_64-disk --key-name default --nic net-id=5c86f931-64da-4c69-a0f1-e2da6d9dd082 VM1
+-------------------------------------+-----------------------------------------------------------------+
| Field                               | Value                                                           |
+-------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                          |
| OS-EXT-AZ:availability_zone         |                                                                 |
| OS-EXT-SRV-ATTR:host                | None                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
| OS-EXT-SRV-ATTR:instance_name       |                                                                 |
| OS-EXT-STS:power_state              | NOSTATE                                                         |
| OS-EXT-STS:task_state               | scheduling                                                      |
| OS-EXT-STS:vm_state                 | building                                                        |
| OS-SRV-USG:launched_at              | None                                                            |
| OS-SRV-USG:terminated_at            | None                                                            |
| accessIPv4                          |                                                                 |
| accessIPv6                          |                                                                 |
| addresses                           |                                                                 |
| adminPass                           | k3TgBf5Xjsqv                                                    |
| config_drive                        |                                                                 |
| created                             | 2019-05-03T20:26:28Z                                            |
| flavor                              | baremetal (8f6fd22b-9bec-4b4d-b427-7c333e47d2c2)                |
| hostId                              |                                                                 |
| id                                  | 70e9f2b1-a292-4e95-90d4-55864bb0a71d                            |
| image                               | cirros-0.4.0-x86_64-disk (4ff12aca-b762-436c-b98c-579ad2a21649) |
| key_name                            | default                                                         |
| name                                | VM1                                                             |
| progress                            | 0                                                               |
| project_id                          | cbf936fc5e9d4cfcaa1dbc06cd9d2e3e                                |
| properties                          |                                                                 |
| security_groups                     | name='default'                                                  |
| status                              | BUILD                                                           |
| updated                             | 2019-05-03T20:26:28Z                                            |
| user_id                             | 405fad83a4b3470faf7d6c616fe9f7f4                                |
| volumes_attached                    |                                                                 |
+-------------------------------------+-----------------------------------------------------------------+

[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None                                 | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None                                 | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | 70e9f2b1-a292-4e95-90d4-55864bb0a71d | power off   | deploying          | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

[root@localhost ~]# openstack server list --long
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks          | Image Name               | Image ID                             | Flavor Name | Flavor ID                            | Availability Zone | Host                  | Properties |
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| 70e9f2b1-a292-4e95-90d4-55864bb0a71d | VM1  | ACTIVE | None       | Running     | private=10.0.0.40 | cirros-0.4.0-x86_64-disk | 4ff12aca-b762-436c-b98c-579ad2a21649 | baremetal   | 8f6fd22b-9bec-4b4d-b427-7c333e47d2c2 | nova              | localhost.localdomain |            |
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+

[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None                                 | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None                                 | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | 70e9f2b1-a292-4e95-90d4-55864bb0a71d | power on    | deploying          | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

[root@localhost ~]# ssh cirros@10.0.0.40
$
In this setup Ironic acts as an OpenStack Nova compute driver:
# nova.conf
[DEFAULT]
...
compute_driver = ironic.IronicDriver
First, configure a physical network as the provisioning network, which provides DHCP and PXE, i.e. the network over which bare metal nodes are deployed.
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = public, physnet1

[ovs]
datapath_type = system
bridge_mappings = public:br-ex, physnet1:br-eth2
tunnel_bridge = br-tun
local_ip = 172.22.132.93
$ sudo ovs-vsctl add-br br-eth2
$ sudo ovs-vsctl add-port br-eth2 eth2
$ sudo systemctl restart devstack@q-svc.service
$ sudo systemctl restart devstack@q-agt.service
$ neutron net-create sharednet1 \
    --shared \
    --provider:network_type flat \
    --provider:physical_network physnet1
$ neutron subnet-create sharednet1 172.22.132.0/24 \
    --name sharedsubnet1 \
    --ip-version=4 --gateway=172.22.132.254 \
    --allocation-pool start=172.22.132.180,end=172.22.132.200 \
    --enable-dhcp
NOTE: be sure to enable DHCP so that the bare metal nodes' PXE NICs can obtain IP addresses.
When using Ironic's node cleaning feature, a cleaning network is required; here we merge the cleaning network with the provisioning network.
# /etc/ironic/ironic.conf
[neutron]
cleaning_network = sharednet1
$ sudo systemctl restart devstack@ir-api.service
$ sudo systemctl restart devstack@ir-cond.service
Both the deploy image and the user image can be built with the Disk Image Builder tool. The deploy image was already created during the DevStack deployment, so we won't repeat that here.
$ virtualenv dib
$ source dib/bin/activate
(dib) $ pip install diskimage-builder
$ cat <<EOF > k8s.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ DIB_YUM_REPO_CONF=k8s.repo \
  DIB_DEV_USER_USERNAME=kyle \
  DIB_DEV_USER_PWDLESS_SUDO=yes \
  DIB_DEV_USER_PASSWORD=r00tme \
  disk-image-create \
  centos7 \
  dhcp-all-interfaces \
  devuser \
  yum \
  epel \
  baremetal \
  -o k8s.qcow2 \
  -p vim,docker,kubelet,kubeadm,kubectl,kubernetes-cni
...
Converting image using qemu-img convert
Image file k8s.qcow2 created...

$ ls
dib  k8s.d  k8s.initrd  k8s.qcow2  k8s.repo  k8s.vmlinuz
# Kernel
$ openstack image create k8s.kernel \
    --public \
    --disk-format aki \
    --container-format aki < k8s.vmlinuz

# Initrd
$ openstack image create k8s.initrd \
    --public \
    --disk-format ari \
    --container-format ari < k8s.initrd

# Qcow2
$ export MY_VMLINUZ_UUID=$(openstack image list | awk '/k8s.kernel/ { print $2 }')
$ export MY_INITRD_UUID=$(openstack image list | awk '/k8s.initrd/ { print $2 }')
$ openstack image create k8s \
    --public \
    --disk-format qcow2 \
    --container-format bare \
    --property kernel_id=$MY_VMLINUZ_UUID \
    --property ramdisk_id=$MY_INITRD_UUID < k8s.qcow2
Note that "Ironic node" here does not mean the host running the Ironic daemons; it refers to the bare metal machine itself. This difference in naming convention is worth keeping in mind.
$ ironic driver-list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| agent_ipmitool      | ironic-dev     |
| fake                | ironic-dev     |
| ipmi                | ironic-dev     |
| pxe_ipmitool        | ironic-dev     |
+---------------------+----------------+
NOTE: if any drivers are missing, they can be added by following the instructions in "Set up the drivers for the Bare Metal service".
$ export DEPLOY_VMLINUZ_UUID=$(openstack image list | awk '/ipmitool.kernel/ { print $2 }')
$ export DEPLOY_INITRD_UUID=$(openstack image list | awk '/ipmitool.initramfs/ { print $2 }')
$ ironic node-create -d agent_ipmitool \
    -n bare-node-1 \
    -i ipmi_address=172.20.3.194 \
    -i ipmi_username=maas \
    -i ipmi_password=passwd \
    -i ipmi_port=623 \
    -i ipmi_terminal_port=9000 \
    -i deploy_kernel=$DEPLOY_VMLINUZ_UUID \
    -i deploy_ramdisk=$DEPLOY_INITRD_UUID
$ export NODE_UUID=$(ironic node-list | awk '/bare-node-1/ { print $2 }')
$ ironic node-update $NODE_UUID add \
    properties/cpus=4 \
    properties/memory_mb=8192 \
    properties/local_gb=100 \
    properties/root_gb=100 \
    properties/cpu_arch=x86_64
NOTE: the information above can also be collected automatically by Ironic Inspector, which requires some extra configuration, e.g.:
# /etc/ironic-inspector/dnsmasq.conf
no-daemon
port=0
interface=eth1
bind-interfaces
dhcp-range=172.22.132.200,172.22.132.210
dhcp-match=ipxe,175
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://172.22.132.93:3928/ironic-inspector.ipxe
dhcp-sequential-ip

$ sudo systemctl restart devstack@ironic-inspector-dhcp.service
$ sudo systemctl restart devstack@ironic-inspector.service
Inspection workflow:
$ ironic port-create -n $NODE_UUID -a NODE_PXE_NIC_MAC_ADDRESS
$ ironic node-validate $NODE_UUID
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Interface  | Result | Reason                                                                                                                                                                                                |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| boot       | False  | Cannot validate image information for node 8e6fd86a-8eed-4e24-a510-3f5ebb0a336a because one or more parameters are missing from its instance_info. Missing are: ['ramdisk', 'kernel', 'image_source'] |
| console    | False  | Missing 'ipmi_terminal_port' parameter in node's driver_info.                                                                                                                                         |
| deploy     | False  | Cannot validate image information for node 8e6fd86a-8eed-4e24-a510-3f5ebb0a336a because one or more parameters are missing from its instance_info. Missing are: ['ramdisk', 'kernel', 'image_source'] |
| inspect    | True   |                                                                                                                                                                                                       |
| management | True   |                                                                                                                                                                                                       |
| network    | True   |                                                                                                                                                                                                       |
| power      | True   |                                                                                                                                                                                                       |
| raid       | True   |                                                                                                                                                                                                       |
| storage    | True   |                                                                                                                                                                                                       |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
NOTE: in the scenario where Ironic is used as the Nova driver, boot and deploy showing False here is normal, since instance_info is only filled in at deploy time.
$ ironic --ironic-api-version 1.34 node-set-provision-state $NODE_UUID manage
$ ironic --ironic-api-version 1.34 node-set-provision-state $NODE_UUID provide
$ ironic node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| 0c20cf7d-0a36-46f4-ac38-721ff8bfb646 | bare-0 | None          | power off   | cleaning           | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
NOTE: only once the Ironic node's state moves from cleaning to available has it been successfully enrolled; you can then proceed with the bare metal instance deployment described earlier.
Each conductor instance registers itself in the database at startup, recording the list of drivers it supports, and periodically updates its timestamp so that Ironic knows which conductors and which drivers are available. When a user enrolls a bare metal node, a driver must be specified; based on that, Ironic assigns the node to a conductor instance that supports the driver.
To associate many bare metal nodes with multiple conductors while keeping management stateful and free of conflicts, Ironic uses a consistent hashing algorithm: nodes are mapped onto the set of conductors via a consistent hash ring. When a conductor instance joins or leaves the cluster, nodes are remapped to different conductors, which triggers various driver actions such as take-over or clean-up.
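The key property of consistent hashing is that when a conductor leaves the ring, only the nodes it owned get remapped; everyone else keeps their conductor. The following is a minimal, illustrative hash ring (not Ironic's actual implementation, which uses a shared hash-ring library with tunable partitions and replicas):

```python
import bisect
import hashlib


class HashRing:
    """A minimal consistent hash ring mapping node UUIDs to conductors."""

    def __init__(self, conductors, replicas=64):
        # Each conductor contributes `replicas` virtual points on the ring,
        # which evens out the distribution of nodes across conductors.
        self.ring = sorted((self._hash(f"{c}-{r}"), c)
                           for c in conductors for r in range(replicas))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_conductor(self, node_uuid: str) -> str:
        # A node is owned by the first ring point clockwise from its hash.
        idx = bisect.bisect(self.keys, self._hash(node_uuid)) % len(self.ring)
        return self.ring[idx][1]


ring = HashRing(["conductor-1", "conductor-2", "conductor-3"])
owner = ring.get_conductor("adda54fb-1038-4634-8d82-53922e875a1f")

# If conductor-3 leaves, only the nodes it owned move (triggering take-over
# on their new conductor); nodes owned by conductor-1/2 stay put.
smaller = HashRing(["conductor-1", "conductor-2"])
```

Removing a conductor only deletes its own virtual points, so any node whose clockwise successor belonged to a surviving conductor keeps exactly the same owner; that is the property that makes take-over traffic proportional to the departed conductor's share rather than to the whole fleet.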
Ironic's driver model was designed with high cohesion, loose coupling, reusability, and composability of functional modules in mind: a set of driver interfaces together manages a single bare metal node.
Driver types can be divided into:
The Ironic Python Agent (IPA) runs physically on the bare metal node itself, inside a ramdisk. It is used to control and configure the node and to carry out tasks such as disk erasure and image writing. This is achieved by booting a customized Linux kernel and an initramfs image that runs IPA and connects back to the Ironic conductor.
IPA feature list:
An IPA ramdisk can be built with the following command:
disk-image-create -c ironic-agent ubuntu dynamic-login stable-interface-names proliant-tools -o ironic-agent
Workflow: the machine PXE-boots into IPA; IPA sends the information it has collected to ironic-inspector; from that data the inspector derives the IPMI IP/MAC address and registers the node through ironic-api; ironic-api can then communicate with IPA and use an agent_* driver to enter the clean phase or perform a deployment.
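The data IPA reports can be pictured as a JSON inventory document. The sketch below assembles an illustrative payload; the field names follow the general shape of an introspection inventory but are simplified assumptions, and a real IPA collects far more:

```python
import json


def build_inventory(bmc_ip, macs, cpus, memory_mb, disks_gb):
    """Assemble a simplified introspection payload of the kind IPA
    reports back to ironic-inspector (illustrative, not the real schema)."""
    return {
        "inventory": {
            "bmc_address": bmc_ip,   # lets the inspector match the node by IPMI address
            "cpu": {"count": cpus, "architecture": "x86_64"},
            "memory": {"physical_mb": memory_mb},
            "disks": [{"name": f"/dev/sd{chr(97 + i)}", "size_gb": g}
                      for i, g in enumerate(disks_gb)],
            "interfaces": [{"mac_address": m} for m in macs],
        }
    }


payload = build_inventory("172.20.3.194", ["52:54:00:12:34:56"], 4, 8192, [100])
body = json.dumps(payload)   # what would be POSTed to the inspector endpoint
```

The bmc_address and interface MACs are exactly the fields that let the inspector correlate the callback with the enrolled node; the cpu/memory/disk figures feed the node properties used by the Nova scheduler.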
Ironic supports two console types:
Shellinabox turns terminal output into an Ajax-based HTTP service that can be accessed directly from a browser, presenting a terminal-like interface. Socat is similar in that both act as a pipe, except that Socat redirects the terminal stream to a TCP connection. Shellinabox is the older approach; its limitation is that the web service can only run on the Ironic conductor node, restricting access, so the community later implemented a Socat-based alternative. Socat exposes a TCP connection and can be wired up to Nova's serial console. To use either, the corresponding tool must be installed on the Ironic conductor node. Socat is available in the standard yum repositories; Shellinabox is not, and has to be fetched from EPEL, but it has no external dependencies, so installing the rpm package directly is enough. Among Ironic's drivers, those whose names end in _socat (e.g. agent_ipmitool_socat) support Socat; the others use Shellinabox. Which console type to use must be decided when the node is created. Both are bidirectional: you can view terminal output as well as type input.
Deploying a bare metal node with Ironic requires two kinds of images, both of which can be built with the Disk Image Builder tool:
The deploy image contains the most important deployment module, IPA. During the deployment phase it runs the tgt+iSCSI service on the target host, and during the discovery phase it collects and reports the target host's hardware information. Building a deploy image with the command below produces two files: ironic-deploy.vmlinuz (ironic-deploy.kernel) and ironic-deploy.initramfs.
disk-image-create ironic-agent centos7 -o ironic-deploy
The user image is the actual operating system image the user wants. Building a user image with the command below produces three files, used to boot and start the operating system: my-image.qcow2, my-image.vmlinuz, and my-image.initrd.
disk-image-create centos7 baremetal dhcp-all-interfaces grub2 -o my-image