A Ceph cluster allows mixing disk types, for example some disks being SSD and others SATA (labelled "stat" in the examples below). If some workloads need fast SSD disks while others can live on SATA, a pool can be pinned to a specific set of OSDs when it is created.
There are eight basic steps.
The current cluster only has SATA disks and no SSDs, but that does not affect the result of the experiment.
1 Get the CRUSH map
[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482
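Before editing anything, it is prudent to keep a backup of the exported binary map and to note which rules already exist, so the cluster can be rolled back if the new map misbehaves. A minimal sketch (the .bak path is just an example):

# keep a copy of the original binary map so it can be re-injected later if needed
cp /opt/getcrushmap/crushmap /opt/getcrushmap/crushmap.bak
# list the CRUSH rules that already exist before adding new ones
ceph osd crush rule ls
ceph osd crush rule dump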
2 Decompile the CRUSH map
[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap
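The decompiled file is plain text made up of tunables, devices, types, buckets (root/host entries) and rules sections. A quick way to get oriented before editing is to list where each section starts; a sketch:

# print the line numbers where each section of the decompiled map begins
grep -nE '^(tunable|device|type|root|host|rule)' decrushmap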
3 Modify the CRUSH map
Add the following two buckets after root default:
root ssd {
    id -5
    alg straw
    hash 0
    item osd.0 weight 0.01
}
root stat {
    id -6
    alg straw
    hash 0
    item osd.1 weight 0.01
}
Add the following rules in the rules section:
rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type osd
    step emit
}
rule stat {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take stat
    step chooseleaf firstn 0 type osd
    step emit
}
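For reference, similar buckets and rules can also be created online with the ceph CLI, without decompiling and recompiling the map. A sketch using the same names and weights as above:

# create the two root buckets and place the OSDs under them
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket stat root
ceph osd crush set osd.0 0.01 root=ssd
ceph osd crush set osd.1 0.01 root=stat
# create a simple replicated rule for each root, choosing leaves of type osd
ceph osd crush rule create-simple ssd ssd osd firstn
ceph osd crush rule create-simple stat stat osd firstn

Note that ceph osd crush set moves an OSD to the given location rather than adding a second reference, so the resulting tree differs slightly from the hand-edited map above, where osd.0 and osd.1 appear both under their hosts and under the new roots.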
4 Compile the CRUSH map
[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap
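Before injecting the new map it can be simulated with crushtool to confirm that rule 1 only ever maps to osd.0 and rule 2 only to osd.1; a minimal sketch:

# simulate placements for rule 1 (ssd) with a single replica
crushtool -i newcrushmap --test --rule 1 --num-rep 1 --show-mappings
# simulate placements for rule 2 (stat)
crushtool -i newcrushmap --test --rule 2 --num-rep 1 --show-mappings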
5 Inject the CRUSH map
[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap
set crush map
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 0.00999 root stat
 1 0.00999     osd.1                 up  1.00000          1.00000
-5 0.00999 root ssd
 0 0.00999     osd.0                 up  1.00000          1.00000
-1 0.58498 root default
-2 0.19499     host ceph-admin
 2 0.19499         osd.2             up  1.00000          1.00000
-3 0.19499     host ceph-node1
 0 0.19499         osd.0             up  1.00000          1.00000
-4 0.19499     host ceph-node2
 1 0.19499         osd.1             up  1.00000          1.00000
# Looking at the osd tree again, the tree has changed: two new buckets named stat and ssd have been added.
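One caveat: by default an OSD updates its own CRUSH location under its host bucket when it starts, which can undo manually placed items after a restart. If that happens, the usual workaround is to disable the automatic update in ceph.conf; a sketch to be adapted to the local configuration:

[osd]
osd crush update on start = false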
6 Create the pools
[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0
Note: the crush_ruleset of both newly created pools, ssd_pool and stat_pool, is 0; it needs to be changed below.
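If the rule names are known in advance, the rule can instead be supplied when the pool is created, which makes the next step unnecessary. A minimal sketch assuming the ssd rule from step 3 (depending on the release, the final argument is the CRUSH rule or ruleset name):

# create the pool and bind it to the ssd rule in one step
ceph osd pool create ssd_pool 8 8 replicated ssd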
7 Change the pools' CRUSH rule
[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0
# In the Luminous release, the syntax for setting a pool's rule is:
[root@ceph-admin ceph]# ceph osd pool set ssd crush_rule ssd
set pool 2 crush_rule to ssd
[root@ceph-admin ceph]# ceph osd pool set stat crush_rule stat
set pool 1 crush_rule to stat
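To double-check which rule a pool is using without grepping the whole osd dump, the setting can also be read back directly; a sketch:

# pre-Luminous releases
ceph osd pool get ssd_pool crush_ruleset
ceph osd pool get stat_pool crush_ruleset
# Luminous and later
ceph osd pool get ssd_pool crush_rule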
8 Verify
Before verifying, first check whether ssd_pool and stat_pool contain any objects:
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# Neither pool contains any objects.
Use the rados command to add an object to each pool:
[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# The objects were added successfully.
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)
The verification output above shows that test_object1 is stored on osd.0 and test_object2 on osd.1, which achieves the expected goal.
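To make the check a little more convincing than two objects, a small loop can write several test objects (the test_obj_* names are just placeholders) and print their mappings; with the ssd rule working, every up set should contain only osd.0:

for i in $(seq 1 10); do
    rados -p ssd_pool put test_obj_$i /etc/hosts
    ceph osd map ssd_pool test_obj_$i
done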