Multipath simply means "multiple paths" and is a generic concept. What this post covers is the open-source storage multipathing technology, DM multipath. There are plenty of multipath introductions out there already; here I mainly record my own first questions about multipath, together with the answers:
Use a virtual machine and iSCSI. Install a VM, give it a block device and two NICs, and build an iSCSI target on top of that block device. Then, on the machine where you want to experiment with multipath, connect to the target with an iSCSI client (initiator) over both addresses. After that, lsblk shows two device nodes for what is really one block device.
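A sketch of the client side using open-iscsi; the portal addresses and the target IQN below are placeholders for whatever your VM actually exposes, and the commands need root plus a reachable target, so treat this as a recipe rather than a runnable script:

```
# Discover the target through both of the VM's IP addresses (placeholders)
iscsiadm -m discovery -t sendtargets -p 192.168.122.10
iscsiadm -m discovery -t sendtargets -p 192.168.122.11

# Log in; since the target was discovered via two portals,
# the same LUN shows up twice on the initiator
iscsiadm -m node -T iqn.2003-01.org.example:target1 --login

# Two SCSI disks (e.g. sda and sdc) now back the same LUN
lsblk
```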
Sometimes you see a string of hex digits (the WWID), sometimes a name prefixed with mpath (a user-friendly name), and sometimes an arbitrary string (an alias). multipath uses the WWID by default. Why not a memorable name? One scenario where friendly names cannot work: a root filesystem sitting on a multipath device. The mapping between friendly names and WWIDs is kept in /etc/multipath/bindings; to read that file the root filesystem must already be mounted, yet the multipath service has to start inside the initrd, before any root filesystem exists. Defaulting to the WWID is therefore the safe choice.
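For systems where root is not on multipath, friendly names and aliases are enabled in /etc/multipath.conf. A minimal sketch; the WWID is the one from the example output later in this post, and the alias is a made-up placeholder:

```
defaults {
    # mpathN names; mapping is persisted in /etc/multipath/bindings
    user_friendly_names yes
}

multipaths {
    multipath {
        wwid  14945540000000000ccb70d0ceeee4280f8450284d6298b59
        alias data_disk          # hypothetical alias, for illustration
    }
}
```

After editing the file, tell the daemon to re-read it, e.g. with `multipathd reconfigure`.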
2:0:0:1 is a device address; the four numbers correspond to Host:Bus:Target:Lun. For example, if we expose the iSCSI target over two IP addresses, the two paths to the same device differ only in the host field, e.g. 2:0:0:1 and 3:0:0:1.
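The four fields are easy to pick apart in the shell; the address below is just the example value from above:

```shell
# Split a SCSI address of the form Host:Bus:Target:Lun
addr="2:0:0:1"
IFS=: read -r host bus target lun <<< "$addr"
echo "host=$host bus=$bus target=$target lun=$lun"
# → host=2 bus=0 target=0 lun=1
```

The kernel uses exactly this notation in sysfs, e.g. each path appears under /sys/class/scsi_device/ as a directory named like 2:0:0:1.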
At first I had this concept confused: I assumed that all paths to one physical device form a single path group, i.e. that the following output shows one path group:
```
multipath-demo:~ # multipath -l
14945540000000000ccb70d0ceeee4280f8450284d6298b59 dm-0 IET,VIRTUAL-DISK
size=10G features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 2:0:0:0 sda 8:0  active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
  `- 3:0:0:0 sdc 8:32 active undef unknown
```
In fact the dm-0 device has two path groups, each containing only one path here (a real deployment would have several). The group with status active is the one currently carrying IO; the group with status enabled is on standby and receives no IO. I asked Martin, a colleague who works on multipath, about this:
Please have a look at http://christophe.varoqui.free.fr/refbook.html

Path groups are mainly used for active/passive setups, and for cases where some paths have a higher latency/lower bandwidth than others (imagine a mirrored storage with mirror legs in different physical locations, disaster avoidance: the local mirror will be much faster than remote mirrors). Only one path group is "active" at any given time. The others are serving as standby, for the case that all paths in the currently active group fail. Depending on the storage array, the host may need to take explicit action to switch from one path group to another (e.g. send a certain SCSI command that forces the storage to activate the stand-by ports).

If the active path group contains multiple paths, switching between these paths (more precisely: between those paths in the path group which are not in failed state) is controlled by the "path_selector" algorithm in the kernel. There are 3 algorithms: "round-robin", "queue-length", and "service-time". See multipath.conf(5). Switching of paths inside a path group, unlike switching between path groups, is assumed to be instantaneous, and to require no explicit action. Regardless which path selector is in use, every healthy path will receive IO sooner or later, unless the multipath device is completely idle.

How the paths are grouped into path groups at discovery time is determined by the "path_grouping_policy". It's "failover" by default, meaning that there's a dedicated path group for every path. But multipath's builtin hardware table sets different defaults for many real-world storage arrays. For modern setups, "group_by_prio" is often the best, combined with "detect_prio yes" or a "prio" setting that assigns different priority to paths with different quality (e.g. "alua", "rdac", or "path_latency"). Path groups are assigned a priority which is calculated as the average of all non-failed paths in the path group.

At startup, the path group with the highest prio is set as active PG. When all paths in this PG fail, the kernel will switch to the next-best PG. When paths in the best PG return to good state, the "failback" configuration determines if, and when, to switch back to the best PG.
The default path grouping policy is failover; as Martin says, storage vendors ship different defaults, and group_by_prio is the mainstream choice today. Its job is to divide the paths into groups. The default IO scheduling policy (the path selector) is service-time, which decides how IO is distributed among the paths within one PG. Martin's explanation above covers both in detail.
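As a concrete illustration, the knobs Martin mentions live in /etc/multipath.conf. This is a hedged sketch for an ALUA-capable array, not a recommendation for any particular hardware (vendor defaults from the builtin hardware table usually do the right thing):

```
defaults {
    # Group paths into PGs by priority instead of one PG per path
    path_grouping_policy  group_by_prio
    # ALUA reports which target ports are optimized vs. non-optimized
    prio                  alua
    # Within the active PG, send IO to the path with the shortest
    # estimated service time
    path_selector         "service-time 0"
    # Return to the highest-priority PG as soon as its paths recover
    failback              immediate
}
```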