Ceph Deployment

1. Deployment preparation:

Prepare 5 machines running CentOS 7.6. You can of course get by with as few as 3 machines, letting the deployment node and the client double as Ceph nodes:
    1 deployment node (one disk, runs ceph-deploy)
    3 Ceph nodes (two disks each: the first is the system disk and runs a mon, the second serves as the OSD data disk)
    1 client (can use the file system, block storage and object storage that Ceph provides)
 
(1) Set up static name resolution on all Ceph cluster nodes (including the client);
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.135 controller
192.168.253.194 compute
192.168.253.15  storage
192.168.253.10  dlp
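To confirm that the static name resolution works, a quick optional check (assuming the host entries above) is to ping each name once from every node:

  for h in dlp controller compute storage; do ping -c1 $h >/dev/null && echo "$h resolves OK"; done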

 

 
(2) Create a cent user on all cluster nodes (including the client), set its password, and then run the following commands:
 
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
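A quick way to confirm the sudo rule took effect is the sketch below; it should print "root" without asking for a password:

  su - cent -c 'sudo whoami'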

 

 
(3) On the deployment node, switch to the cent user and set up passwordless SSH login to every node, including the client node
 
  su - cent
  ssh-keygen
  ssh-copy-id dlp
  ssh-copy-id controller
  ssh-copy-id compute
  ssh-copy-id storage
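Once the keys are copied, passwordless login can be verified from the deployment node (an optional check, assuming the hostnames above):

  for h in dlp controller compute storage; do ssh cent@$h hostname; done     # should print each hostname without prompting for a password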
 
 
 

(4) On the deployment node, switch to the cent user and, in cent's home directory, create the following file: vi ~/.ssh/config

Host dlp
    Hostname dlp
    User cent
Host controller
    Hostname controller
    User cent
Host compute
    Hostname compute
    User cent
Host storage
    Hostname storage
    User cent

Then set the following permissions:

   chmod 600 ~/.ssh/config
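With ~/.ssh/config in place, the user name no longer needs to be given on the command line; a quick test might look like:

  ssh controller hostname        # logs in as cent via the Host entry and prints "controller"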

2. Configure a domestic Ceph repository on all nodes:

(1) On every node, download the Aliyun mirror repository, and delete rdo-release-yunwei.repo or move it to another directory

  wget https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
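If you prefer a proper yum repo file instead of fetching the packages by hand, a minimal sketch pointing at the same Aliyun ceph-jewel mirror could look like this (the repo id and file name here are arbitrary choices, not from the original article):

  cat > /etc/yum.repos.d/ceph-jewel.repo <<'EOF'
  [ceph-jewel]
  name=Ceph Jewel packages from the Aliyun mirror
  baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
  enabled=1
  gpgcheck=0
  EOF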

 (2) Then run yum clean all and yum makecache to rebuild the repository metadata cache, and execute:
  
 
  wget  http://download2.yunwei.edu/shell/ceph-j.tar.gz
 
 
(3) Copy the downloaded rpm packages to every node and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only used on the deployment node; the other nodes do not need it, while the deployment node also needs the remaining rpm packages.
 
(4) On the deployment node: install ceph-deploy. As the root user, change into the directory containing the downloaded rpm packages and run:
 yum -y  localinstall ./*

 

Create a ceph working directory:
  mkdir ceph  && cd ceph
 
(5) On the deployment node (as the cent user, in the ceph directory): create the new cluster configuration
 
  ceph-deploy  new  controller compute storage
  vim ceph.conf
  Add: osd_pool_default_size = 2
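After ceph-deploy new, the generated ceph.conf in the working directory looks roughly like the sketch below (the fsid is generated automatically and will differ); the only line added by hand is osd_pool_default_size:

  [global]
  fsid = <generated by ceph-deploy new>
  mon_initial_members = controller, compute, storage
  mon_host = 192.168.253.135,192.168.253.194,192.168.253.15
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  osd_pool_default_size = 2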
 
(6) On the deployment node (in the ceph directory): install the Ceph software on all nodes
 
  ceph-deploy install dlp controller compute  storage
 
 
   (7) Initialize the cluster (in the ceph directory)
 
   ceph-deploy mon create-initial
 
Possible error 1:
When deploying the monitors, Ceph reports "monitor is not yet in quorum".

This is because the firewall has not been turned off; go to each node and disable all firewalls.
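On CentOS 7 the default firewall is firewalld, so disabling it on every node typically means (run as root on each node):

  systemctl stop firewalld
  systemctl disable firewalld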

Then run again:

  ceph-deploy --overwrite-conf  mon create-initial

 
Possible error 2:
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors

Cause: the ceph.conf in the working directory was edited, but the updated file was never pushed to the other nodes, so it has to be pushed out first.
Fix:
  ceph-deploy --overwrite-conf config push node1-4
  ceph-deploy --overwrite-conf mon create node1-4

 

 
  List a node's disks: ceph-deploy disk list node1
  Zap a node's disk:  ceph-deploy disk zap controller:/dev/sdb         # zap the whole disk, not a partition
 
 (8) Partition the data disk on each node
 
  fdisk /dev/sdb                 # make sure you partition the right disk, and remember to save with w
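If you prefer a non-interactive alternative to fdisk, parted can create the single data partition in one shot (a sketch, assuming the whole of /dev/sdb is to be used for the OSD):

  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart primary xfs 0% 100%
  lsblk /dev/sdb        # confirm that /dev/sdb1 now exists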
 
   (9) Prepare the OSDs (OSD: Object Storage Daemon)
 
ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1

 

 (10) Activate the OSDs
 
ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1

 

   (11) Transfer the config files from the deployment node
 
ceph-deploy admin dlp controller compute storage

  sudo chmod 644 /etc/ceph/ceph.client.admin.keyring    (on the other nodes)

 

   (12) Check the cluster from any node in the Ceph cluster:
 
[root@controller old]# ceph -s
    cluster 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
     health HEALTH_OK
     monmap e1: 3 mons at {compute=192.168.253.194:6789/0,controller=192.168.253.135:6789/0,storage=192.168.253.15:6789/0}
            election epoch 6, quorum 0,1,2 storage,controller,compute
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v2230: 64 pgs, 1 pools, 0 bytes data, 0 objects
            24995 MB used, 27186 MB / 52182 MB avail
                  64 active+clean
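Besides ceph -s, a few other read-only commands are handy for confirming the cluster state from any node:

  ceph health detail        # expands on HEALTH_OK / HEALTH_WARN
  ceph osd tree             # shows each OSD, its host and up/in state
  ceph df                   # per-pool and cluster-wide usage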

 

 

 3. RBD block device setup

 
 
  Create an RBD image: rbd create disk01 --size 10G --image-feature layering    Delete it: rbd rm disk01
 
List the RBD images: rbd ls -l
 
[cent@dlp ceph]$ rbd create disk01 --size 10G --image-feature layering
[cent@dlp ceph]$ rbd ls -l
NAME     SIZE PARENT FMT PROT LOCK
disk01 10240M          2

 

 
Map the RBD image: sudo rbd map disk01      Unmap it: sudo rbd unmap disk01
 
[root@controller ~]# rbd map disk01      # as root it can be run like this; from any other user prefix it with sudo
/dev/rbd0

rbd0 has now been mapped successfully under /dev/, but lsblk does not show a mount point for /dev/rbd0 yet, because it still has to be formatted and mounted.

 

Show the mappings: rbd showmapped
 
 
Format disk01 with an xfs file system: sudo mkfs.xfs /dev/rbd0
 
[root@controller ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=17, agsize=162816 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 

Mount the device: sudo mount /dev/rbd0 /mnt
[root@controller ~]# mount /dev/rbd0 /mnt
[root@controller ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
└─sdb1            8:17   0   20G  0 part /var/lib/ceph/osd/ceph-0
sdc               8:32   0   10G  0 disk
└─sdc1            8:33   0   10G  0 part /var/lib/ceph/osd/ceph-3
sr0              11:0    1  4.2G  0 rom  /mnt
rbd0            252:0    0   10G  0 disk /mnt
 
Verify the mount succeeded: df -hT
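If the image should be mapped again automatically after a reboot, the rbdmap service shipped with ceph-common can take care of it; a minimal sketch (assuming the default client.admin keyring) is one line per image in /etc/ceph/rbdmap, then enabling the service:

  # /etc/ceph/rbdmap  --  format: pool/image  options
  rbd/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

  sudo systemctl enable rbdmap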
 
 
 
Stop the ceph-mds service:
systemctl stop ceph-mds@node1
 
ceph mds fail 0
 
List the storage pools:
ceph osd lspools
Output: 0 rbd,
  
Delete a storage pool:
ceph osd pool rm rbd --yes-i-really-really-mean-it
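Note that the Jewel tooling expects the pool name to be typed twice as a safety check, so the delete form of the command is usually:

  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it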
 
 
 

4. Tearing down the environment:

 

   Purge the cluster information

   ceph-deploy purge dlp node1 node2 node3 controller

 ceph-deploy purgedata dlp node1 node2 node3 controller
   
  Forget the authentication keys
 ceph-deploy forgetkeys
 
 rm -rf ceph*