Single-node Ceph Jewel deployment on CentOS 7

Server / VM preparation

1. Configure DNS so the machine can reach public package mirrors.
2. Attach a few extra disks to use as data disks; here I added three 100 GB disks: /dev/vdb, /dev/vdc and /dev/vdd (a quick check follows below).
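
Before proceeding, it is worth confirming that the extra disks are visible and unused. A minimal check, using the device names from this setup (adjust to your own):

[root@dev179 ~] lsblk /dev/vdb /dev/vdc /dev/vdd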

Prepare the repositories

Add the CentOS and EPEL repositories; all repositories in this article use the Aliyun mirrors.

[root@dev179 ~] rm /etc/yum.repos.d/* -rf
[root@dev179 ~] curl http://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/Centos-7.repo 
[root@dev179 ~] curl http://mirrors.aliyun.com/repo/epel-7.repo > /etc/yum.repos.d/epel.repo

Add the Ceph repository: create a ceph.repo file under /etc/yum.repos.d/ and paste in the following content.

[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md

Refresh the repository cache

[root@dev179 ~] yum clean all
[root@dev179 ~] yum makecache
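
Optionally confirm that the Ceph repositories defined in the ceph.repo file above are now visible to yum:

[root@dev179 ~] yum repolist | grep -i ceph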

Server configuration

// Disable SELinux
[root@dev179 ~] sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@dev179 ~] setenforce 0

// Stop and disable the firewall
[root@dev179 ~] systemctl stop firewalld 
[root@dev179 ~] systemctl disable firewalld

// Add the server's IP to /etc/hosts, replacing 1.1.1.1 with the server's actual IP address
[root@dev179 ~] echo 1.1.1.1 $HOSTNAME >> /etc/hosts
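
A quick way to confirm the hostname now resolves to the address you just added rather than a loopback entry:

[root@dev179 ~] getent hosts $HOSTNAME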

// Prepare a temporary deployment directory
[root@dev179 ~] rm -rf /root/ceph-cluster && mkdir -p /root/ceph-cluster && cd /root/ceph-cluster

Deployment

// Install the deployment packages
[root@dev179 ceph-cluster] yum install ceph ceph-radosgw ceph-deploy -y

// Initialize the configuration; all subsequent steps must be run from inside the /root/ceph-cluster directory
[root@dev179 ceph-cluster] ceph-deploy new $HOSTNAME
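
ceph-deploy new writes an initial ceph.conf and a monitor keyring into the working directory; a quick listing (expect at least ceph.conf and ceph.mon.keyring) confirms they are there before editing:

[root@dev179 ceph-cluster] ls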

// Update the default configuration file
[root@dev179 ceph-cluster] echo osd pool default size = 1 >> ceph.conf
[root@dev179 ceph-cluster] echo osd crush chooseleaf type = 0 >> ceph.conf
[root@dev179 ceph-cluster] echo osd max object name len = 256 >> ceph.conf
[root@dev179 ceph-cluster] echo osd journal size = 128 >> ceph.conf
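
These overrides are what let a single-node cluster reach a healthy state: osd pool default size = 1 keeps only one replica of each object (there is only one host), and osd crush chooseleaf type = 0 tells CRUSH to separate replicas at the OSD level rather than the host level. The shorter object name limit is commonly added to accommodate filesystems with restrictive file name lengths (e.g. ext4), and the 128 MB journal keeps disk overhead small. You can confirm the lines were appended with:

[root@dev179 ceph-cluster] tail -n 4 ceph.conf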

// Initialize the monitor node
[root@dev179 ceph-cluster] ceph-deploy mon create-initial
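
At this point a single monitor should be up and in quorum; a quick check (overall health will still complain, since no OSDs exist yet):

[root@dev179 ceph-cluster] ceph mon stat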

// Prepare the disks; change /dev/vdb /dev/vdc /dev/vdd here to match your environment
[root@dev179 ceph-cluster] ceph-deploy osd prepare $HOSTNAME:/dev/vdb $HOSTNAME:/dev/vdc $HOSTNAME:/dev/vdd

After the prepare step above, the three disks are automatically formatted and mounted; the specific mount points can be seen with df.

// Check the mount points with df
[root@dev179 ceph-cluster]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.7G   49G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.4M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   36G   33M   36G   1% /home
tmpfs                    380M     0  380M   0% /run/user/0
/dev/vdb1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-0
/dev/vdc1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-1
/dev/vdd1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-2

// Activate the OSDs
[root@dev179 ceph-cluster] ceph-deploy osd activate $HOSTNAME:/var/lib/ceph/osd/ceph-0 $HOSTNAME:/var/lib/ceph/osd/ceph-1 $HOSTNAME:/var/lib/ceph/osd/ceph-2
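
After activation, all three OSDs should be reported as up and in; listing the OSD tree is a quick way to confirm this before checking overall health:

[root@dev179 ceph-cluster] ceph osd tree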

Verification

The first check shows the cluster in a HEALTH_WARN state.

[root@dev179 ceph-cluster]# ceph -s
    cluster 7775a3a4-7315-41fe-b192-2655b11a83a1
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {dev179=172.24.8.179:6789/0}
            election epoch 3, quorum 0 dev179
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 299 GB / 299 GB avail
                  64 active+clean

After adjusting the pg/pgp values, the cluster status changes to HEALTH_OK. With 64 PGs spread over 3 OSDs, each OSD holds about 64 / 3 ≈ 21 PGs, below the default minimum of 30 (hence the warning); raising pg_num to 128 gives roughly 128 / 3 ≈ 42 PGs per OSD.

[root@dev179 ceph-cluster]# ceph osd pool set rbd pg_num 128
[root@dev179 ceph-cluster]# ceph osd pool set rbd pgp_num 128
[root@dev179 ceph-cluster]# ceph -s
    cluster 7775a3a4-7315-41fe-b192-2655b11a83a1
     health HEALTH_OK
     monmap e1: 1 mons at {dev179=172.24.8.179:6789/0}
            election epoch 3, quorum 0 dev179
     osdmap e19: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v34: 128 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 299 GB / 299 GB avail
                 128 active+clean
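
As an optional smoke test, you can write and read back a small object in the default rbd pool with rados; the object and file names below are arbitrary examples:

[root@dev179 ceph-cluster] echo hello > /tmp/testfile
[root@dev179 ceph-cluster] rados -p rbd put test-object /tmp/testfile
[root@dev179 ceph-cluster] rados -p rbd get test-object /tmp/testfile.out && cat /tmp/testfile.out
[root@dev179 ceph-cluster] rados -p rbd rm test-object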