I. Environment Preparation:
1. Disable the firewall and SELinux (on all nodes):
# systemctl stop firewalld ; systemctl disable firewalld
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config //note: some online reports say that disabling SELinux this way can leave it unable to be toggled on temporarily; the more cautious method below can be used instead
# vim /etc/sysconfig/selinux //just set SELINUX=disabled, then reboot the VM
# init 6
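After the reboot, a quick sanity check (it should print Disabled):
# getenforce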
2. Set the hostname (on all nodes):
# hostnamectl set-hostname ceph01
# hostnamectl set-hostname ceph02
# hostnamectl set-hostname ceph03
Pick any one of the VMs, e.g. ceph01, and edit the hosts file:
# vim /etc/hosts
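A minimal sketch of the entries to add, assuming ceph01–03 map to 192.168.10.10–12 (the addresses used elsewhere in this walkthrough); the scp commands below then distribute the file:
192.168.10.10 ceph01
192.168.10.11 ceph02
192.168.10.12 ceph03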
# scp /etc/hosts 192.168.10.11:/etc/
# scp /etc/hosts 192.168.10.12:/etc/
3. Passwordless SSH login (from the ceph01 admin node):
# ssh-keygen //press Enter through all prompts to generate an RSA key pair
# ssh-copy-id ceph02
# ssh-copy-id ceph03
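To verify that passwordless login works, e.g.:
# ssh ceph02 hostname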
4. Configure YUM repositories (on all nodes):
CentOS 7 base repo:
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
EPEL repo:
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Create a new Ceph.repo:
# vim /etc/yum.repos.d/Ceph.repo
[ceph-nautilus]
name=ceph-nautilus
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0

[ceph-nautilus-noarch]
name=ceph-nautilus-noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
# yum clean all
# yum makecache
5. Install an NTP service (on all nodes):
# yum install -y chrony
On the ceph01 admin node:
# vim /etc/chrony.conf
Point the server directive at your NTP server (an AD domain controller in my case; any public NTP server found online will do), and allow the client subnet, as in the sketch below:
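The exact values depend on your environment; a minimal chrony.conf sketch assuming a hypothetical upstream server at 192.168.10.1 and clients on the 192.168.10.0/24 subnet:
server 192.168.10.1 iburst
allow 192.168.10.0/24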
# systemctl restart chronyd ; systemctl enable chronyd
# chronyc sources
On the ceph02 and ceph03 nodes:
# vim /etc/chrony.conf
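Here the clients would sync from ceph01; a one-line sketch, assuming the allow subnet configured above:
server ceph01 iburst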
# systemctl restart chronyd ; systemctl enable chronyd
# chronyc sources
II. Deploy Ceph:
1. Install ceph and ceph-deploy:
On all nodes:
# yum install -y ceph
On the ceph01 admin node:
# yum install -y ceph-deploy
2. Deploy the MONs:
On the ceph01 admin node:
# mkdir ceph ; cd ceph
# ceph-deploy new ceph01 ceph02 ceph03
# vim ceph.conf
Add the line overwrite_conf = true
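For reference, a sketch of what ceph.conf typically looks like after this edit; the fsid is a placeholder for the UUID generated by ceph-deploy new, and the mon addresses assume the 192.168.10.10–12 mapping above:
[global]
fsid = <generated-uuid>
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.10.10,192.168.10.11,192.168.10.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
overwrite_conf = true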
On all nodes:
# chown ceph:ceph -R /var/lib/ceph
Back on the ceph01 node (still working in the /root/ceph directory):
# ceph-deploy --overwrite-conf mon create-initial
On each node, restart the mon matching that node's hostname:
# systemctl restart ceph-mon@ceph01
# systemctl restart ceph-mon@ceph02
# systemctl restart ceph-mon@ceph03
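At this point the three monitors should have formed a quorum, which can be checked with:
# ceph -s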
3. Deploy the MGR:
On ceph01:
# ceph-deploy mgr create ceph01
# ps -ef | grep ceph
# systemctl restart ceph-mgr@ceph01
4. Deploy the OSDs:
In my setup the data disk on every node is /dev/sdb.
On the ceph01 node (still working in the /root/ceph directory):
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph01
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph02
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph03
Copy all the .keyring files to /etc/ceph on every node:
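A sketch of the copy, assuming the keyrings were generated in /root/ceph on ceph01:
# cp /root/ceph/*.keyring /etc/ceph/
# scp /root/ceph/*.keyring ceph02:/etc/ceph/
# scp /root/ceph/*.keyring ceph03:/etc/ceph/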
Then, on each node, restart its own OSD (here osd.0, osd.1, osd.2 landed on ceph01, ceph02, ceph03 respectively, in creation order):
# systemctl restart ceph-osd@0
# systemctl restart ceph-osd@1
# systemctl restart ceph-osd@2
On the ceph01 node:
# ceph osd tree
5. Deploy the RGW:
Here the object gateway is a single instance, installed only on the ceph01 node:
# yum install -y ceph-radosgw
# ceph-deploy --overwrite-conf rgw create ceph01
# ps aux | grep radosgw
# systemctl restart ceph-radosgw@rgw.ceph01
RGW bucket index sharding:
If each bucket holds relatively few objects (say, under 10,000), this step can be skipped; with more than 100,000 objects per bucket, the parameters below must be set.
If the design calls for tens of millions of objects in a single bucket, disable dynamic resharding and also set a maximum shard count.
# vim /etc/ceph/ceph.conf //add the following entries
Dynamic bucket resharding is on by default; disable it:
rgw_dynamic_resharding = false
Maximum number of bucket index shards:
rgw_override_bucket_index_max_shards = 16
# systemctl restart ceph-radosgw@rgw.ceph01 //restart the service
6. Create an S3 account:
# radosgw-admin user create --uid testid --display-name 'admin' --system
Save the access_key and secret_key from the command output.
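If you prefer to pull the keys out of the JSON programmatically, a sketch assuming jq is installed:
# radosgw-admin user info --uid testid | jq -r '.keys[0].access_key, .keys[0].secret_key'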
7. Deploy the Dashboard (on ceph01):
# yum install -y ceph-mgr-dashboard
# ceph mgr module enable dashboard
# ceph dashboard create-self-signed-cert
# ceph dashboard set-login-credentials admin admin //create the login user and set its password
At this point the Object Gateway pages in the dashboard show no bucket contents until the RGW keys are registered:
# ceph dashboard set-rgw-api-access-key <access_key> //register the access_key
# ceph dashboard set-rgw-api-secret-key <secret_key> //register the secret_key
Open https://192.168.10.10:8443 in a browser.
If you forget the S3 account's keys:
# radosgw-admin user info --uid=testid