1. Environment Preparation
(1) Node requirements
==> Minimum hardware requirements per role
Role        Component         Minimum                               Recommended
-----------------------------------------------------------------------------------------------------------------
ceph-osd    RAM               500 MB per daemon                     ~1 GB RAM per 1 TB of storage per daemon
            Volume Storage    1x storage drive per daemon           >1 TB storage drive per daemon
            Journal Storage   5 GB (default)                        SSD, >1 GB per 1 TB of storage per daemon
            Network           2x 1Gbps Ethernet NICs                2x 10Gbps Ethernet NICs
-----------------------------------------------------------------------------------------------------------------
ceph-mon    RAM               1 GB per daemon                       2 GB per daemon
            Disk Space        10 GB per daemon                      >20 GB per daemon
            Network           2x 1Gbps Ethernet NICs                2x 10Gbps Ethernet NICs
-----------------------------------------------------------------------------------------------------------------
ceph-mds    RAM               1 GB minimum per daemon               >2 GB per daemon
            Disk Space        1 MB per daemon                       >1 MB per daemon
            Network           2x 1Gbps Ethernet NICs                2x 10Gbps Ethernet NICs
-----------------------------------------------------------------------------------------------------------------
==> OS environment
CentOS 7
Kernel: 3.10.0-229.el7.x86_64
==> Lab environment
a) 1x PC (RAM > 6 GB, disk > 100 GB)
b) VirtualBox
c) CentOS 7.1 (3.10.0-229.el7.x86_64) installation ISO
==> Basic environment: node layout
Hostname      Role           OS                                     Disk
=====================================================================================================
a) admnode    deploy-node    CentOS7.1(3.10.0-229.el7.x86_64)
b) node1      mon,osd        CentOS7.1(3.10.0-229.el7.x86_64)       Disk(/dev/sdb capacity:10G)
c) node2      osd            CentOS7.1(3.10.0-229.el7.x86_64)       Disk(/dev/sdb capacity:10G)
d) node3      osd            CentOS7.1(3.10.0-229.el7.x86_64)       Disk(/dev/sdb capacity:10G)
==> Configure the yum repositories
cp -r /etc/yum.repos.d/ /etc/yum.repos.d.bak
rm -f /etc/yum.repos.d/CentOS-*
vim /etc/yum.repos.d/ceph.repo
Paste in the following (note: to install the hammer release instead, replace "rpm_infernalis" below with "rpm_hammer"):
[epel]
name=Ceph epel packages
baseurl=ftp://193.168.140.67/pub/ceph/epel/
enabled=1
priority=2
gpgcheck=0
[ceph]
name=Ceph packages
baseurl=ftp://193.168.140.67/pub/ceph/rpm_infernalis/
enabled=1
priority=2
gpgcheck=0
[update]
name=update
baseurl=ftp://193.168.140.67/pub/updates/
enabled=1
priority=2
gpgcheck=0
[base]
name=base
baseurl=ftp://193.168.140.67/pub/base/
enabled=1
priority=2
gpgcheck=0
Check the configured yum repositories:
yum repolist all
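If you later decide to install the hammer release instead, the repo file can be switched in place; a minimal sketch (it assumes the ftp mirror also hosts an rpm_hammer tree):
sudo sed -i 's/rpm_infernalis/rpm_hammer/' /etc/yum.repos.d/ceph.repo   # point the [ceph] repo at the hammer packages
sudo yum clean all && yum repolist   # refresh the metadata so the change takes effect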
2. Ceph Node Installation and Configuration
yum install -y yum-utils epel-release
Run only on the ceph-deploy node (start)
yum install -y ceph-deploy
Run only on the ceph-deploy node (end)
Install NTP and OpenSSH:
yum install -y ntp ntpdate ntp-doc openssh-server
Create the Ceph deploy user; pick your own username for the {username} placeholder and substitute it throughout:
# sudo useradd -d /home/{username} -m {username}
# sudo passwd {username}
Example:
useradd -d /home/cephadmin -m cephadmin
sudo passwd cephadmin
Make sure the new {username} has passwordless sudo privileges:
# echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
# sudo chmod 0440 /etc/sudoers.d/{username}
Example:
echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin
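To double-check the sudo rule just created (plain sudo tooling, nothing Ceph-specific):
sudo -l -U cephadmin   # should list "(root) NOPASSWD: ALL" for the cephadmin user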
Set the hostname on each node (e.g. admnode, node1, node2) and define the hostname-to-IP mapping for all nodes.
hostnamectl set-hostname admnode
vim /etc/hosts
Append the following at the end:
<admnode's IP> admnode
<node1's IP>   node1
<node2's IP>   node2
<node3's IP>   node3
Example:
10.167.225.111 admnode
10.167.225.114 node1
10.167.225.116 node2
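A quick way to verify the mappings, assuming the four hostnames used in this guide:
for h in admnode node1 node2 node3; do getent hosts $h; done   # each hostname should resolve to the IP from /etc/hosts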
Run only on the ceph-deploy node (start)
Set up passwordless SSH login
As the custom (non-root) user, run ssh-keygen and accept the defaults by pressing Enter at every prompt:
ssh-keygen
Copy the generated key to all the other nodes:
# ssh-copy-id {node1's hostname}
# ssh-copy-id {node2's hostname}
# ssh-copy-id {node3's hostname}
Example:
ssh-copy-id node1
ssh node1
exit
Run only on the ceph-deploy node (end)
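Optionally, so that ceph-deploy can log in to the other nodes as the deploy user without passing --username every time, the Ceph preflight guide suggests an ~/.ssh/config on the deploy node along these lines (user and hostnames are the ones used in this guide):
Host node1
   Hostname node1
   User cephadmin
Host node2
   Hostname node2
   User cephadmin
Host node3
   Hostname node3
   User cephadmin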
Enable the network interface at boot
vim /etc/sysconfig/network-scripts/ifcfg-{iface}
and make sure it contains:
ONBOOT="yes"
Either disable firewalld, or open the port the Ceph monitors need (6789):
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
firewall-cmd --zone=public --add-port=6789/tcp --permanent
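If firewalld stays enabled, the OSD and MDS daemons also need their port range opened (6800-7300 by default); a sketch:
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # default Ceph OSD/MDS port range
sudo firewall-cmd --reload   # apply the permanent rules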
sudo visudo
Locate the "Defaults requiretty" setting
and change "Defaults requiretty" to "Defaults:{username} !requiretty" (e.g. "Defaults:cephadmin !requiretty") so ceph-deploy can run sudo over ssh.
Disable SELinux (switch it to permissive mode)
sudo setenforce 0
vim /etc/selinux/config
and change "SELINUX=enforcing" to "SELINUX=permissive".
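To confirm the change took effect (standard SELinux tooling):
getenforce   # should print "Permissive" after setenforce 0
grep ^SELINUX= /etc/selinux/config   # should show SELINUX=permissive for the next boot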
sudo yum install -y yum-plugin-priorities
sudo vim /etc/yum/pluginconf.d/priorities.conf
and confirm it contains:
[main]
enabled = 1
If installing the infernalis release (start)
sudo yum install -y systemd
If installing the infernalis release (end)
If installing the hammer release (start)
yum install redhat-lsb
If installing the hammer release (end)
3. Minimal Ceph Storage Cluster Configuration (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/)
Basic setup: node layout
Hostname      Role       Disk
================================================================
a) admnode deploy-node
b) node1 mon1 Disk(/dev/sdb capacity:10G)
c) node2 osd.0 Disk(/dev/sdb capacity:10G)
d) node3 osd.1 Disk(/dev/sdb capacity:10G)
(1) On the admin node, switch to the custom cephadmin user (avoid running ceph-deploy with sudo or as root).
(2) As the cephadmin user, create a ceph-cluster directory to hold the files that the ceph-deploy commands generate.
# mkdir ceph-cluster
# cd ceph-cluster
(3) Create a Cluster
a) From the ceph-cluster directory, use ceph-deploy to create the cluster with the initial monitor node(s):
# ceph-deploy new {initial-monitor-node(s)}
Example:
# ceph-deploy new node1
b) Change the default number of replicas (osd pool default size) from 3 to 2.
Edit the ceph.conf file in the ceph-cluster directory and add the following under the [global] section (a shell equivalent is sketched after the listing):
osd pool default size = 2
osd pool default min size = 2
osd pool default pg num = 512
osd pool default pgp num = 512
osd crush chooseleaf type = 1
[osd]
osd journal size = 1024
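The same edit in shell form, for reference (a minimal sketch; it assumes you are still in the ceph-cluster directory and that the freshly generated ceph.conf contains only a [global] section, so appending is equivalent to adding under [global]):
cat >> ceph.conf << 'EOF'
osd pool default size = 2
osd pool default min size = 2
osd pool default pg num = 512
osd pool default pgp num = 512
osd crush chooseleaf type = 1
[osd]
osd journal size = 1024
EOF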
(4) Install Ceph on every node (because of corporate network restrictions there is no Internet access in bridged mode, so --no-adjust-repos is used to keep the local yum repos configured earlier):
# ceph-deploy install {ceph-node}[{ceph-node} ...]
Example:
# ceph-deploy install --no-adjust-repos admnode node1 node2
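If ceph-deploy install still cannot reach the packages in this offline setup, a workable fallback is to install them directly from the local yum repos on every node (a sketch; it assumes the ftp repos configured earlier carry these packages):
sudo yum install -y ceph ceph-radosgw   # run on each Ceph node; ceph-deploy then only orchestrates the cluster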
(5) From the admin node, initialize the node(s) designated as monitor(s):
# ceph-deploy mon create-initial
(6) Add two OSDs
a) List the disks on the cluster nodes, e.g. /dev/sdb:
# ceph-deploy disk list <node hostname>
b) Prepare the OSDs:
# ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb
c) Activate the OSDs (note: during prepare, ceph-deploy automatically partitions and formats the disk into a data partition /dev/sdb1 and a journal partition /dev/sdb2; activation therefore uses the data partition /dev/sdb1, not the whole /dev/sdb). A one-step alternative is sketched after step f).
# ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
Note: if OSD activation fails, or an OSD shows up as down, see:
http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#osd-not-running
d) Push the configuration file and admin key to the admin node and all other Ceph nodes, so that ceph CLI commands (such as ceph -s) can be run on any node rather than only on a monitor node:
# ceph-deploy admin admnode node1 node2 node3
e) Make sure ceph.client.admin.keyring is readable:
# sudo chmod +r /etc/ceph/ceph.client.admin.keyring
f) If the OSDs were activated successfully, running ceph -s or ceph -w on the mon node reports all PGs as active+clean:
[root@node1 etc]# ceph -w
cluster 62d61946-b429-4802-b7a7-12289121a022
health HEALTH_OK
monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
election epoch 2, quorum 0 node2
osdmap e9: 2 osds: 2 up, 2 in
pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
67916 kB used, 18343 MB / 18409 MB avail
64 active+clean
2016-03-08 20:12:00.436008 mon.0 [INF] pgmap v15: 64 pgs: 64 active+clean; 0 bytes data, 67916 kB used, 18343 MB / 18409 MB avail
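As mentioned in step c), ceph-deploy also offers a one-step osd create that combines prepare and activate; a hedged sketch for this layout, plus a couple of routine checks:
ceph-deploy osd create node2:/dev/sdb node3:/dev/sdb   # prepare + activate in one step (instead of steps b and c)
ceph osd tree   # shows the CRUSH tree and whether each OSD is up/in
ceph df   # shows overall and per-pool capacity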
4. Full Ceph Storage Cluster Configuration (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/)
Full setup: node layout
Hostname      Role                Disk
================================================================
a) admnode deploy-node
b) node1 mon1,osd.2,mds Disk(/dev/sdb capacity:10G)
c) node2 osd.0,mon2 Disk(/dev/sdb capacity:10G)
d) node3 osd.1,mon3 Disk(/dev/sdb capacity:10G)
(1) Add another OSD, on node1
# ceph-deploy osd prepare node1:/dev/sdb
# ceph-deploy osd activate node1:/dev/sdb1
After this succeeds, the cluster status looks like:
[root@node1 etc]# ceph -w
cluster 62d61946-b429-4802-b7a7-12289121a022
health HEALTH_OK
monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
election epoch 2, quorum 0 node2
osdmap e13: 3 osds: 3 up, 3 in
pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
102032 kB used, 27515 MB / 27614 MB avail
64 active+clean
2016-03-08 21:21:29.930307 mon.0 [INF] pgmap v23: 64 pgs: 64 active+clean; 0 bytes data, 102032 kB used, 27515 MB / 27614 MB avail
(2) Add an MDS on node1 (a metadata server is required if you want to use CephFS)
# ceph-deploy mds create node1
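The MDS by itself does not create a filesystem; to actually try CephFS, a minimal sketch (the pool names and the PG count of 64 are arbitrary choices for this small cluster, not taken from the guide):
ceph osd pool create cephfs_data 64   # data pool
ceph osd pool create cephfs_metadata 64   # metadata pool
ceph fs new cephfs cephfs_metadata cephfs_data   # create the filesystem; the MDS should then go active
ceph mds stat   # verify the MDS state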
(3) Add an RGW instance (required for the Ceph Object Gateway component)
# ceph-deploy rgw create node1
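The gateway created this way listens on port 7480 by default (civetweb); a quick check from any node, assuming node1 resolves as configured in /etc/hosts:
curl http://node1:7480   # an anonymous request should return a small ListAllMyBucketsResult XML document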
(4) Add more Monitors. To maintain a quorum, Ceph needs an odd number of monitors, so add two more MON nodes; the monitors also require synchronized clocks.
4.1 Time synchronization between the MON nodes (admnode acts as the NTP server; since there is no Internet connection, it serves its local clock to the NTP clients).
a) Configure admnode as a LAN NTP server (using the local clock).
a.1) Edit /etc/ntp.conf and comment out the four "server {0|1|2|3}.centos.pool.ntp.org iburst" lines.
Add the two lines "server 127.127.1.0" and "fudge 127.127.1.0 stratum 8":
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 8
a.2) Enable and start the ntpd service on the admin node:
# sudo systemctl enable ntpd
# sudo systemctl restart ntpd
# sudo systemctl status ntpd
a.3) Check the synchronization status:
# ntpstat
synchronised to local net at stratum 6
time correct to within 12 ms
polling server every 64 s
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 5 l 3 64 377 0.000 0.000 0.000
b) On the other three nodes that will run Monitor services (node1, node2, node3), configure NTP to synchronize with the NTP server.
b.1) Make sure ntpd is stopped:
# sudo systemctl stop ntpd
# sudo systemctl status ntpd
b.2) Use ntpdate for an initial sync against the NTP server before starting ntpd, making sure the offset is within 1000 s:
# sudo ntpdate <admnode's IP or hostname>
9 Mar 16:59:26 ntpdate[31491]: adjust time server 10.167.225.136 offset -0.000357 sec
b.3) Edit /etc/ntp.conf and comment out the four "server {0|1|2|3}.centos.pool.ntp.org iburst" lines.
Add the NTP server (admnode) by IP, "server 10.167.225.136":
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 10.167.225.136
b.4) Enable and start ntpd:
# sudo systemctl enable ntpd
# sudo systemctl start ntpd
# sudo systemctl status ntpd
b.5) Check the synchronization status:
# ntpstat
synchronised to NTP server (10.167.225.136) at stratum 7
time correct to within 7949 ms
polling server every 64 s
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*admnode LOCAL(0) 6 u 6 64 1 0.223 -0.301 0.000
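If firewalld was left running on admnode rather than disabled, the NTP port has to be opened there as well; a sketch using firewalld's predefined ntp service:
sudo firewall-cmd --add-service=ntp --permanent   # allow incoming NTP (udp/123) on the server
sudo firewall-cmd --reload   # apply the permanent rules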
4.2 Add the two extra MONs to the cluster
a) Add the monitor nodes:
# ceph-deploy mon add node2
# ceph-deploy mon add node3
b) After the nodes are added successfully, the cluster status looks like:
# ceph -s
    cluster 62d61946-b429-4802-b7a7-12289121a022
     health HEALTH_OK
     monmap e3: 3 mons at {node1=10.167.225.137:6789/0,node2=10.167.225.138:6789/0,node3=10.167.225.141:6789/0}
            election epoch 8, quorum 0,1,2 node2,node3,node4
     osdmap e21: 3 osds: 3 up, 3 in
      pgmap v46: 64 pgs, 1 pools, 0 bytes data, 0 objects
            101 MB used, 27513 MB / 27614 MB avail
                  64 active+clean
c) Check the quorum status:
# ceph quorum_status --format json-pretty
Output:
{
    "election_epoch": 8,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "node1",
        "node2",
        "node3"
    ],
    "quorum_leader_name": "node2",
    "monmap": {
        "epoch": 3,
        "fsid": "62d61946-b429-4802-b7a7-12289121a022",
        "modified": "2016-03-09 17:50:29.370831",
        "created": "0.000000",
        "mons": [
            {
                "rank": 0,
                "name": "node1",
                "addr": "10.167.225.137:6789\/0"
            },
            {
                "rank": 1,
                "name": "node2",
                "addr": "10.167.225.138:6789\/0"
            },
            {
                "rank": 2,
                "name": "node3",
                "addr": "10.167.225.141:6789\/0"
            }
        ]
    }
}
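Optionally, once the extra monitors have joined, you can record them in ceph.conf on the admin node (mon_initial_members / mon_host) and redistribute the file; a sketch with ceph-deploy, run from the ceph-cluster directory:
ceph-deploy --overwrite-conf config push admnode node1 node2 node3   # push the updated ceph.conf to all nodes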