Environment: two servers, ceph1 and ceph2 (ceph-deploy is already installed on ceph1).
Each server has one S3500 SSD and two Hitachi 1 TB HDDs.
The front-end (public), back-end (cluster) and management networks are combined on a single network; the node addresses are 128.128.128.9 and 128.128.128.10.
1. Add the following entries to /etc/hosts:
128.128.128.9  ceph1
128.128.128.10 ceph2
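As an optional sanity check, confirm on each node that both names resolve and the peer is reachable:
getent hosts ceph1 ceph2   # should print the two entries added above
ping -c 1 ceph2            # from ceph1, confirm ceph2 answers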
2. Use ssh-keygen and ssh-copy-id to enable passwordless SSH between the nodes.
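For example, on ceph1 (assuming the root account is used for deployment):
ssh-keygen -t rsa          # accept the defaults; leave the passphrase empty
ssh-copy-id root@ceph2     # install the public key on ceph2
ssh ceph2 hostname         # should print "ceph2" without asking for a password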
3. On ceph1, create a directory to hold the configuration files:
mkdir /root/mycluster
cd /root/mycluster
4. Install Ceph:
ceph-deploy install ceph1 ceph2
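To confirm the installation succeeded, the installed version can be checked on both nodes:
ceph --version             # on ceph1
ssh ceph2 ceph --version   # on ceph2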
5. Set up the mon (create the new cluster):
ceph-deploy new ceph1
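ceph-deploy new writes the initial cluster files into the working directory; /root/mycluster should now contain ceph.conf, ceph.mon.keyring and ceph-deploy-ceph.log:
ls /root/mycluster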
6. Edit the ceph.conf configuration file:
public_network = 128.128.128.0/24
cluster_network = 128.128.128.0/24
enable experimental unrecoverable data corrupting features = bluestore rocksdb debug_white_box_testing_ec_overwrites
bluestore block db size = 10737418240   # 10G
bluestore block wal size = 10737418240  # 10G
osd objectstore = bluestore
mon_allow_pool_delete = true
rbd_cache = false

[osd]
bluestore = true
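If ceph.conf is modified again later, it can be pushed from the admin directory to the nodes (a convenience step, not part of the original procedure):
ceph-deploy --overwrite-conf config push ceph1 ceph2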
7. Initialize the mon:
ceph-deploy mon create-initial
If another mon needs to be added:
ceph-deploy mon add ceph2
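To verify that the monitor(s) are up and in quorum:
ceph -s          # overall cluster status
ceph mon stat    # monitor quorum summary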
8. Zap (format) the disks:
ceph-deploy disk zap {ceph-node}:{dest-disk}
For example:
ceph-deploy disk zap ceph1:/dev/sdb
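Every data disk on both nodes needs to be zapped. A minimal sketch, assuming the SSD is /dev/sdb and the two HDDs are /dev/sdc and /dev/sdd on each node (adjust the device names to match the actual hardware):
for host in ceph1 ceph2; do
    for disk in /dev/sdb /dev/sdc /dev/sdd; do
        ceph-deploy disk zap ${host}:${disk}   # destroys all data on the disk
    done
done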
9. Copy ceph.bootstrap-osd.keyring to /var/lib/ceph/bootstrap-osd/ and rename it to ceph.keyring:
cp /root/mycluster/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
or, for ceph2:
scp /root/mycluster/ceph.bootstrap-osd.keyring ceph2:/var/lib/ceph/bootstrap-osd/ceph.keyring
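A quick check that the keyring is now in place on both nodes:
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring
ssh ceph2 ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring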
10. Add OSDs:
ceph-disk prepare --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdc
where /dev/sdb is the SSD and /dev/sdc is the HDD.
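The same command is repeated for each remaining HDD on each node; here the second HDD is assumed to be /dev/sdd. Each run carves additional db/wal partitions out of the SSD, sized by the bluestore block db/wal settings in ceph.conf. On most systems udev activates the OSD automatically after prepare; if not, ceph-disk activate can be run on the OSD's data partition. Afterwards the OSDs can be verified in the cluster:
ceph-disk prepare --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdd   # second HDD (assumed device name)
ceph osd tree    # the new OSDs should appear and eventually be marked "up"
ceph -s          # overall cluster health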