Reference documentation: the official Ceph iSCSI gateway documentation.
There are two main ways to implement an iSCSI gateway: TGT and LIO.
TGT: the Linux target framework, which provides the infrastructure for building and maintaining SCSI target drivers (iSCSI, FC, SRP, etc.).
Before Ceph gained native iSCSI integration, TGT was usually implemented with the user-space "scsi-target-utils" package.
On CentOS 7.x, installing the "scsi-target-utils" package does not give iSCSI support for a Ceph RBD backing store (verify with "tgtadm --lld iscsi --mode system --op show"); the main reason is that Red Hat stripped the Ceph RBD backing-store code out of the package.
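A minimal check, assuming tgtd from scsi-target-utils is installed and running; on the stock CentOS build, "rbd" is absent from the backing-store list in the output:
[root@ceph01 ~]# systemctl start tgtd
# List the drivers and backing stores tgtd supports; no "rbd" entry appears here
[root@ceph01 ~]# tgtadm --lld iscsi --mode system --op show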
Workaround: map the Ceph RBD to the local host with "rbd map xxx", then export the block device through iSCSI TGT's "direct-store" mode.
Drawback: "rbd map xxx" attaches the RBD through the Ceph RBD kernel module, while TGT runs in user space, so I/O on the exported RBD keeps crossing between kernel space and user space, which hurts performance.
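A sketch of this workaround, assuming a hypothetical image rbd/disk01 that maps to /dev/rbd0 (the device name may differ on your host):
# Map the image via the kernel RBD module
[root@ceph01 ~]# rbd map rbd/disk01
/dev/rbd0
# Export the mapped device with direct-store in /etc/tgt/targets.conf, then restart tgtd:
#   <target iqn.2018-09.example:tgt-gw>
#       direct-store /dev/rbd0
#   </target>
[root@ceph01 ~]# systemctl restart tgtd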
LIO: Linux-IO Target, a software implementation of various SCSI targets.
This article focuses on the native Ceph iSCSI implementation based on LIO. LIO uses a user-space passthrough (TCMU) to interact with Ceph's librbd library and exposes RBD images to iSCSI clients. tcmu-runner is the daemon that handles the user-space side of LIO's TCM backing stores: it adds a user-space driver layer on top of the kernel, so a backend only needs to implement the TCMU interface instead of talking to the kernel directly.
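As a quick illustration of this layering: once tcmu-runner is installed (see below), the RBD backend is a user-space plugin rather than a kernel module; on CentOS it typically lands in the tcmu-runner handler directory:
# The librbd-backed handler is an ordinary shared object, loaded by tcmu-runner
[root@ceph01 ~]# ls /usr/lib64/tcmu-runner/
handler_rbd.so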
targetcli-2.1.fb47 or newer; this article uses 2.1.fb47;
python-rtslib-2.1.fb64 or newer; this article uses 2.1.fb64-3;
tcmu-runner-1.3.0 or newer; this article uses 1.3.0-0.4.2;
ceph-iscsi-config-2.4 or newer; this article uses 2.5-1;
ceph-iscsi-cli-2.5 or newer; this article uses 2.5-10.
These RPMs are not all directly downloadable; the packages used in this article are bundled at: https://pan.baidu.com/s/1i-0GLqxjMv3P3c3YYoyhiQ password: ncxv
Hostname | IP | Service | Remark
---------|----|---------|-------
ceph01 | public:172.30.200.57 cluster:192.30.200.57 | centos7.5 with kernel v4.18.7-1 |
ceph02 | public:172.30.200.58 cluster:192.30.200.58 | centos7.5 with kernel v4.18.7-1 |
ceph03 | public:172.30.200.59 cluster:192.30.200.59 | centos7.5 with kernel v4.18.7-1 |
ceph-client | 172.30.200.50 | iscsi-initiator-utils v6.2.0.874-7; device-mapper-multipath v0.4.9-119 |
There are no iSCSI-gateway-specific options for the ceph-mon or OSD nodes, but lowering several of the default OSD failure-detection timeouts effectively reduces initiator connection timeouts.
# Edit ceph.conf on the ceph-mon node and distribute it to all nodes, e.g.:
[root@ceph01 ~]# su - cephde
[cephde@ceph01 ~]$ cd cephcluster/
[cephde@ceph01 cephcluster]$ cat ceph.conf
# Newly added parameters
[osd]
osd client watch timeout = 15
osd heartbeat grace = 20
osd heartbeat interval = 5
# Distribute; the services need to be restarted
[cephde@ceph01 cephcluster]$ ceph-deploy admin ceph01 ceph02 ceph03
# Alternatively, change the parameters online from the ceph-deploy node:
[cephde@ceph01 cephcluster]$ sudo ceph tell osd.* config set osd_client_watch_timeout 15
[cephde@ceph01 cephcluster]$ sudo ceph tell osd.* config set osd_heartbeat_grace 20
[cephde@ceph01 cephcluster]$ sudo ceph tell osd.* config set osd_heartbeat_interval 5
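A quick verification sketch via the OSD admin socket (run on an OSD node; osd.0 is an assumed OSD id, substitute your own):
[root@ceph01 ~]# ceph daemon osd.0 config show | grep -E 'osd_client_watch_timeout|osd_heartbeat_grace|osd_heartbeat_interval'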
# For multipath HA, deploy the iSCSI gateway on multiple OSD nodes; ceph01 is used as the example below, the other nodes are similar (adjust per node where necessary);
# After downloading the required packages, install them in one go with "yum localinstall *", which resolves the dependencies
[root@ceph01 ~]# cd ~/ceph-iscsi/
[root@ceph01 ceph-iscsi]# yum localinstall * -y
# Create the iscsi-gateway.cfg file under /etc/ceph/ on the OSD nodes; the content is identical on all iscsi-gateway nodes, ceph01 shown as the example;
# In iscsi-gateway.cfg, only trusted_ip_list needs to be adapted to your environment: it is the list of IP addresses on each iSCSI gateway used for management operations such as target creation and LUN exporting;
# trusted_ip_list may use the same IPs as the iSCSI data traffic, but separate IPs are recommended where conditions allow
[root@ceph01 ~]# touch /etc/ceph/iscsi-gateway.cfg
[root@ceph01 ~]# vim /etc/ceph/iscsi-gateway.cfg
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph

# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.

# To support the API, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
trusted_ip_list = 172.30.200.57,172.30.200.58,172.30.200.59
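One way to copy the identical file to the other gateway nodes (a sketch, assuming root SSH access between the nodes):
[root@ceph01 ~]# for h in ceph02 ceph03; do scp /etc/ceph/iscsi-gateway.cfg $h:/etc/ceph/; done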
# rbd-target-api depends on rbd-target-gw, and rbd-target-gw requires the "rbd" pool to exist beforehand; the pool must be named "rbd";
[root@ceph01 ~]# ceph osd pool create rbd 256
# After creating the pool, enable the pool's application; for a block-storage pool such as "rbd", the application keyword is the last argument
[root@ceph01 ~]# ceph osd pool application enable rbd rbd
# Inspect the pool; alternatives include "ceph osd lspools", "ceph osd pool ls" and "rados df"
[root@ceph01 ~]# ceph osd pool get rbd all
# The services need to be started on all iscsi-gateway nodes, ceph01 shown as the example;
# Starting the "rbd-target-api" service also starts "rbd-target-gw";
# Note: create the "rbd" pool in advance; rbd-target-api depends on rbd-target-gw, which in turn depends on the "rbd" pool
[root@ceph01 ~]# systemctl daemon-reload
[root@ceph01 ~]# systemctl enable rbd-target-api
[root@ceph01 ~]# systemctl start rbd-target-api
[root@ceph01 ~]# systemctl status rbd-target-api ; systemctl status rbd-target-gw
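A quick way to confirm the management API is up (a sketch; 5001 is the default api_port from iscsi-gateway.cfg):
[root@ceph01 ~]# ss -tnlp | grep 5001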
The iscsi-gateway command-line tool gwcli is used to create and configure iSCSI targets and RBD images; lower-level command-line tools such as targetcli or rbd can be used to inspect the configuration, but must not be used to modify configuration made through gwcli.
Creating the iSCSI target and RBD image only needs to be done on one node; the operations below are performed on ceph01.
# After entering the gwcli shell, "ls" lists the directory tree and "cd" switches directories
[root@ceph01 ~]# gwcli
Warning: Could not load preferences file /root/.gwcli/prefs.bin.
/> ls
# Create the iSCSI target under the iscsi-target directory;
# iSCSI target naming convention: iqn.yyyy-mm.<reversed domain name>:identifier, i.e. iqn.<year-month>.<reversed domain>:<target-name>; there is no domain name here, so the IP address is used instead;
# Under the newly created target, the gateways, host-groups and hosts directories are generated automatically
/> cd /iscsi-target
/iscsi-target> create iqn.2018-09.172.30.200.5x:iscsi-gw
/iscsi-target> ls
# Create the iSCSI gateways in the gateways directory generated under the new target;
# For the gateway IPs, use the IPs carrying the iSCSI data; they may be the same as trusted_ip_list, but dedicated data IPs are preferred;
# Each gateway's name must match the host's hostname;
# For multipath HA, configure at least 2 gateways;
# If you are not running a supported OS or kernel, or are using a ceph-iscsi-test kernel, append "skipchecks=true" to the create command to skip the kernel checks
/iscsi-target> cd iqn.2018-09.172.30.200.5x:iscsi-gw/gateways
/iscsi-target...i-gw/gateways> create ceph01 172.30.200.57
/iscsi-target...i-gw/gateways> create ceph02 172.30.200.58
/iscsi-target...i-gw/gateways> create ceph03 172.30.200.59
/iscsi-target...i-gw/gateways> ls
# Create the image in the disks directory at the root of the gwcli tree;
# When creating an image, specify the pool, image name and size
/iscsi-target...i-gw/gateways> cd /disks
/disks> create pool=rbd image=disk01 size=10G
/disks> ls
# Register the initiator in the hosts directory generated under the new target;
# The initiator name follows a convention similar to the target name; if an initiator client already exists, use the client's default initiator name, which on CentOS can be read from "/etc/iscsi/initiatorname.iscsi";
# After creating the initiator name, gwcli automatically enters its directory
/disks> cd /iscsi-target/iqn.2018-09.172.30.200.5x:iscsi-gw/hosts
/iscsi-target...scsi-gw/hosts> create iqn.2018-09.172.30.200.50:iscsi-initiator
/iscsi-target...csi-initiator> ls
# Configure CHAP authentication (mandatory), otherwise the target will reject the initiator's login requests;
# Set the credentials inside the newly created initiator directory
/iscsi-target...csi-initiator> auth chap=iscsiname/iscsipassword
/iscsi-target...csi-initiator> ls
# Attach the image to the initiator inside its directory;
# Once added, the initiator has a LUN it can mount;
# At this point, TCP port 3260 is listening on the iSCSI gateway IP of every gateway host
/iscsi-target...csi-initiator> disk add rbd.disk01
/iscsi-target...csi-initiator> ls
# View the whole directory tree: the cluster directory covers the Ceph cluster;
# the disks and iscsi-target directories cover the creation and configuration of iSCSI targets and RBD images;
# the finished configuration can also be queried with the targetcli tool or Ceph's rbd command
/iscsi-target...csi-initiator> cd /
/> ls
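A read-only verification sketch with the lower-level tools (inspect only; do not modify gwcli-managed configuration with them):
[root@ceph01 ~]# targetcli ls
[root@ceph01 ~]# rbd ls -l
[root@ceph01 ~]# rbd info disk01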
According to the official documentation, iSCSI initiators are currently supported on Linux, Windows and VMware ESX; only Linux is verified here.
# iscsi-initiator-utils is the generic initiator package;
# device-mapper-multipath is the multipath tool
[root@ceph-client ~]# yum install iscsi-initiator-utils device-mapper-multipath -y
# Enable the multipath service, which generates the "/etc/multipath.conf" file
[root@ceph-client ~]# mpathconf --enable --with_multipathd y
# Add the following to "/etc/multipath.conf" to configure multipath HA for the LIO backing store
[root@ceph-client ~]# vim /etc/multipath.conf
devices {
    device {
        vendor                 "LIO-ORG"
        hardware_handler       "1 alua"
        path_grouping_policy   "failover"
        path_selector          "queue-length 0"
        failback               60
        path_checker           tur
        prio                   alua
        prio_args              exclusive_pref_bit
        fast_io_fail_tmo       25
        no_path_retry          queue
    }
}
# Reload the multipathd service
[root@ceph-client ~]# systemctl reload multipathd
# Enable CHAP on the initiator and set the username/password to match the target's settings;
# the relevant lines are 57/61/62 in the "CHAP Settings" section
[root@ceph-client ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiname
node.session.auth.password = iscsipassword
# Set the initiator name to match the one registered on the iSCSI target
[root@ceph-client ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2018-09.172.30.200.50:iscsi-initiator
# Discover the iSCSI storage: iscsiadm -m discovery -t st -p ISCSI_IP (port 3260 is used by default);
# Show discovery records: iscsiadm -m node
# Delete a discovery record: iscsiadm -m node -o delete -T LUN_NAME -p ISCSI_IP
[root@ceph-client ~]# iscsiadm -m discovery -t st -p 172.30.200.57
# Log in to the iSCSI storage: iscsiadm -m node -T LUN_NAME -p ISCSI_IP -l
# Log out of the iSCSI storage: iscsiadm -m node -T LUN_NAME -p ISCSI_IP -u
# Show sessions: iscsiadm -m session
[root@ceph-client ~]# iscsiadm -m node -T iqn.2018-09.172.30.200.5x:iscsi-gw -l
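For a more detailed per-path view after logging in ("-P 3" prints the attached SCSI devices for each session):
[root@ceph-client ~]# iscsiadm -m session -P 3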
# One backing store, reached through 3 paths
[root@ceph-client ~]# multipath -ll
# Connecting to the backing store over multiple paths produces multiple device nodes;
# the multipath service aggregates them into /dev/mapper/mpathX, which is the device to use when mounting;
# alternatively: lsscsi
[root@ceph-client ~]# fdisk -l
# Create a partition; the default partition type and size are fine;
# an error is reported after saving and exiting, which can be ignored
[root@ceph-client ~]# fdisk /dev/mapper/mpatha
Command (m for help): n
Select (default p):
Partition number (1-4, default 1):
First sector (8192-20971519, default 8192):
Last sector, +sectors or +size{K,M,G} (8192-20971519, default 20971519):
Command (m for help): w
# Format the partition
[root@ceph-client ~]# mkfs.xfs /dev/mapper/mpatha1
# Mount the partition
[root@ceph-client ~]# mount /dev/mapper/mpatha1 /mnt
# Check the mount
[root@ceph-client ~]# df -Th
# Set the mount options in the filesystem-parameters column;
# noatime: do not update inode access times for files and directories, for faster access;
# _netdev: marks the filesystem as residing on the network, preventing it from being mounted before the network is up
[root@ceph-client ~]# vim /etc/fstab
# rbd
/dev/mapper/mpatha1 /mnt xfs noatime,_netdev 0 0
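A quick sanity check that the fstab entry mounts cleanly (a sketch; assumes /mnt is unmounted first):
[root@ceph-client ~]# umount /mnt
[root@ceph-client ~]# mount -a
[root@ceph-client ~]# df -Th /mnt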
Ceph provides gwtop, a tool for monitoring the performance of the exported RBD images.
gwtop is a top-like tool that shows aggregated performance metrics for the RBD images exported to clients over iSCSI. The metrics are sourced from a Performance Metrics Domain Agent (PMDA); the Linux-IO target (LIO) PMDA lists each exported RBD image together with the connected client and the associated I/O metrics.
# Install on the nodes where the iSCSI gateway is deployed, ceph01 shown as the example;
# pcp is the performance collection toolkit, pcp-pmda-lio is the agent
[root@ceph01 ~]# yum install ceph-iscsi-tools pcp pcp-pmda-lio -y
# Start the service
[root@ceph01 ~]# systemctl enable pmcd
[root@ceph01 ~]# systemctl start pmcd
[root@ceph01 ~]# systemctl status pmcd
# Register the pcp-pmda-lio agent
[root@ceph01 ~]# cd /var/lib/pcp/pmdas/lio
[root@ceph01 lio]# ./Install
# In the client column, "(CON)" means the initiator is connected to the iSCSI gateway, and "-multi-" means multiple clients are connected to a single RBD image;
# write data from a client, e.g. with "dd", to watch gwtop's output change
[root@ceph01 lio]# gwtop
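A simple way to generate I/O so gwtop shows nonzero values (a sketch; /mnt is the mount point from the earlier client steps, and the filename is arbitrary):
[root@ceph-client ~]# dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct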