Ceph Deployment

1. Deployment preparation:

Prepare 5 virtual machines (running CentOS 7.6):

1 deploy node (one disk, runs ceph-deploy)

3 Ceph nodes (two disks each: the first is the system disk and runs a mon, the second serves as the OSD data disk)

1 client (can use the Ceph file system, block storage, and object storage)

(1) Configure static name resolution on all Ceph cluster nodes (including the client), e.g. in /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.24.11 dlp
192.168.24.8 controller
192.168.24.9 compute
192.168.24.10 storage
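The same hosts file can be pushed from the deploy node to the other machines in one pass; a minimal sketch, assuming root SSH access to the hostnames defined above:

for h in controller compute storage; do
  scp /etc/hosts root@$h:/etc/hosts
done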

(2) On all cluster nodes (including the client), create the cent user, set its password, and then run the following commands:

useradd cent && echo "123" | passwd --stdin cent

echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
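A quick way to verify that the sudo rule took effect (the cent user should be able to run commands as root without a password prompt):

su - cent -c 'sudo whoami'   # expected output: root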

(3) On the deploy node, switch to the cent user and set up passwordless SSH login to every node, including the client node:

su - cent

ceph@dlp15:17:01~# ssh-keygen
ceph@dlp15:17:01~# ssh-copy-id controller
ceph@dlp15:17:01~# ssh-copy-id compute
ceph@dlp15:17:01~# ssh-copy-id storage
ceph@dlp15:17:01~# ssh-copy-id dlp

(4) On the deploy node, switch to the cent user and, in the cent user's home directory, create the following file: vi ~/.ssh/config  # create new (define all nodes and users)

su - cent

cd .ssh
vim config
Host dlp
    Hostname dlp
    User cent
Host controller
    Hostname controller
    User cent
Host compute
    Hostname compute
    User cent
Host storage
    Hostname storage
    User cent

chmod 600 ~/.ssh/config
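With the config in place, logging in to each node by its short name should no longer prompt for a password; a quick test from the deploy node:

ssh controller hostname
ssh compute hostname
ssh storage hostname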

2. Configure a domestic (China-mirror) Ceph repository on all nodes:

(1) On all nodes (including the client), create ceph-yunwei.repo under /etc/yum.repos.d/:

cd /etc/yum.repos.d

vim ceph-yunwei.repo

[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
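After saving the repo file, refresh the yum metadata so the new repository is picked up; a routine check:

yum clean all && yum makecache
yum repolist | grep ceph-yunwei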

(2) From the mirror at https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/, download the rpm packages listed below. Note: the ceph-deploy rpm only needs to be installed on the deploy node; to download it, find the latest matching ceph-deploy-xxxxx.noarch.rpm at https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/.

ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm

(3) Copy the downloaded rpms to all nodes and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only needed on the deploy node; the other nodes do not need it, while the deploy node also needs to install all of the other rpms.
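A minimal sketch of copying the package directory out to the other nodes, assuming the rpms were downloaded to /root/cephjrpm on the deploy node and root SSH access works:

for h in controller compute storage; do
  scp -r /root/cephjrpm root@$h:/root/
done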

(4) On the deploy node, install ceph-deploy. As root, change into the directory containing the downloaded rpm packages and run:

yum localinstall -y ./*

Note: if the installation fails with a dependency error, handle it as follows:

# Install the missing dependency package
yum install -y python-distribute
# Move this repo file out of the way
mv /etc/yum.repos.d/rdo-release-yunwei.repo /tmp/
# Then install ceph-deploy-1.5.39-0.noarch.rpm
yum localinstall ceph-deploy-1.5.39-0.noarch.rpm -y
# Check the version:
ceph -v

(5) On the deploy node (run as the cent user): set up a new cluster

ceph-deploy new controller compute storage

vim ./ceph.conf

# Add:

osd_pool_default_size = 2
Optional parameters are as follows:
public_network = 192.168.254.0/24
cluster_network = 172.16.254.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
osd_crush_chooseleaf_type = 1
  
[mon]
mon_clock_drift_allowed = 0.5
  
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_max_sync_interval = 5
filestore_min_sync_interval = 0.1
filestore_fd_cache_size = 655350
filestore_omap_header_cache_size = 655350
filestore_fd_cache_random = true
osd op threads = 8
osd disk threads = 4
filestore op threads = 8
max_open_files = 655350
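For orientation, a minimal sketch of how the resulting ceph.conf might look after ceph-deploy new plus the additions above, assuming the 192.168.24.x addresses from step 1 (the fsid is generated by ceph-deploy and is only a placeholder here):

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = controller, compute, storage
mon_host = 192.168.24.8,192.168.24.9,192.168.24.10
public_network = 192.168.24.0/24
osd_pool_default_size = 2

[mon]
mon_clock_drift_allowed = 0.5

[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f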

(6) Install the Ceph software on all nodes.

All nodes should have the following packages:

root@rab116:13:59~/cephjrpm# ls
ceph-10.2.11-0.el7.x86_64.rpm               ceph-resource-agents-10.2.11-0.el7.x86_64.rpm    librbd1-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm          ceph-selinux-10.2.11-0.el7.x86_64.rpm            librbd1-devel-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm        ceph-test-10.2.11-0.el7.x86_64.rpm               librgw2-10.2.11-0.el7.x86_64.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm  libcephfs1-10.2.11-0.el7.x86_64.rpm              librgw2-devel-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm        libcephfs1-devel-10.2.11-0.el7.x86_64.rpm        python-ceph-compat-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm          libcephfs_jni1-10.2.11-0.el7.x86_64.rpm          python-cephfs-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm   libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm    python-rados-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm           librados2-10.2.11-0.el7.x86_64.rpm               python-rbd-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm           librados2-devel-10.2.11-0.el7.x86_64.rpm         rbd-fuse-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm           libradosstriper1-10.2.11-0.el7.x86_64.rpm        rbd-mirror-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm       libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm  rbd-nbd-10.2.11-0.el7.x86_64.rpm

Install the above packages on all nodes (including the client):

yum localinstall ./* -y

(7) Run on the deploy node to install the Ceph software on all nodes:

ceph-deploy install dlp controller compute storage

(8) On the deploy node, initialize the cluster (run as the cent user):

ceph-deploy mon create-initial

(9) On each node, partition the second disk (note: on the storage node the disk is sdc):

fdisk /dev/sdb
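fdisk is interactive; if a non-interactive alternative is preferred, parted can create a single partition spanning the whole disk. A sketch, assuming /dev/sdb is the empty data disk and may be wiped:

sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary xfs 0% 100%
lsblk /dev/sdb   # sdb1 should now be listed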

List a node's disks:

ceph-deploy disk list controller

Zap (wipe) a node's disk:

ceph-deploy disk zap controller:/dev/sdb1

(10) Prepare the Object Storage Daemons:

ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

(11) Activate the Object Storage Daemons:

ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

(12) On the deploy node, transfer the config files:

ceph-deploy admin dlp controller compute storage

(run on every node) sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
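This can also be done in one pass from the deploy node, assuming the ~/.ssh/config from step 1 is in place:

for h in controller compute storage; do
  ssh $h "sudo chmod 644 /etc/ceph/ceph.client.admin.keyring"
done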

(13) Check from any node in the Ceph cluster:

ceph -s

3. Client setup:

(1) The client also needs the cent user:

useradd cent && echo "123" | passwd --stdin cent

echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph

Run on the deploy node to install and configure the Ceph client:

ceph-deploy install controller

ceph-deploy admin controller

(2) Run on the client:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

(3) On the client, configure an RBD block device:

Create an rbd image: rbd create disk01 --size 10G --image-feature layering   (disk01 is the image name; --size sets the image size)
Delete an image: rbd rm disk01
List rbd images: rbd ls -l
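To check the size and features of an image after creating it, rbd info can be used, e.g. for the disk01 image above:

rbd info disk01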

At this point only a 10G image has been created; to actually use it, it still has to be mapped:

Map the rbd image: sudo rbd map disk01
Unmap it: sudo rbd unmap disk01
Show current mappings: rbd showmapped

Once mapped, the disk shows up in lsblk, but it still needs to be formatted and mounted before it can be used:

Format disk01 with an xfs file system: sudo mkfs.xfs /dev/rbd0
Mount the disk: sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT

If you no longer want this disk, how do you remove it?

sudo umount /mnt

sudo rbd unmap disk01

lsblk

rbd rm disk01

(4) File System (CephFS) configuration:

Run on the deploy node; choose a node on which to create the MDS:

ceph-deploy mds create node1

Run the following on node1:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

On the MDS node node1, create the cephfs_data and cephfs_metadata pools:

ceph osd pool create cephfs_data 128

ceph osd pool create cephfs_metadata 128

Create the file system on these pools:

ceph fs new cephfs cephfs_metadata cephfs_data
List the Ceph file systems:
ceph fs ls

ceph mds stat

Run the following on the client; install ceph-fuse:

yum -y install ceph-fuse

Get the admin key:

ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key

chmod 600 admin.key

Mount the CephFS:

mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key

df -h
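To make the CephFS mount persistent across reboots, an /etc/fstab entry along these lines can be used; a sketch, assuming the key file was saved as /root/admin.key on the client:

node1:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,_netdev,noatime  0  0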

Stop the ceph-mds service and remove the file system:

systemctl stop ceph-mds@node1

ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd lspools
Output:
0 rbd,1 cephfs_data,2 cephfs_metadata,
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

4. Tearing down the environment:

ceph-deploy purge dlp node1 node2 node3 controller

ceph-deploy purgedata dlp node1 node2 node3 controller
ceph-deploy forgetkeys
rm -rf ceph*