Rook is an open source cloud-native storage orchestrator. It provides a platform, a framework, and support for a diverse set of storage solutions so they can integrate natively with cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to perform its duties.
Rook integrates deeply into cloud-native environments through extension points, providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and the overall user experience.
The diagram below illustrates how Rook integrates Ceph with Kubernetes:
(Figure: Rook architecture)
A working Kubernetes environment is required; the nodes are laid out as follows. Each VM must have at least one spare 50 GB disk attached (a quick disk check is shown after the host list).
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
51.0.1.213 k8s-master
51.0.1.214 k8s-node1
51.0.1.215 k8s-node2
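To confirm each VM really has a spare data disk for the Ceph OSDs, a quick check can be run on every node; the device name sdb below is only an example and will vary by environment:
[root@k8s-node1 ~]# lsblk
[root@k8s-node1 ~]# lsblk -f /dev/sdb    # FSTYPE should be empty: the disk must carry no partitions or filesystem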
Software versions:
Kubernetes version: v1.14.1
Rook version: v1.0 (Kubernetes v1.10 is the minimum required)
Start the deployment:
Clone the Rook repository from GitHub:
[root@k8s-master ~]# git clone https://github.com/rook/rook.git
[root@k8s-master rook]# git checkout -b release-1.0 remotes/origin/release-1.0
Change into the Ceph deployment directory:
[root@k8s-master rook]# cd ./cluster/examples/kubernetes/ceph/
[root@k8s-master ceph]# kubectl create -f common.yaml
namespace/rook-ceph created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt-rules created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global-rules created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster-rules created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system-rules created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
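common.yaml creates the rook-ceph namespace, the Ceph CRDs, and the RBAC objects listed above. A quick sanity check (not part of the original steps) to confirm the namespace and CRDs were registered:
[root@k8s-master ceph]# kubectl get ns rook-ceph
[root@k8s-master ceph]# kubectl get crd | grep rook.io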
Create the operator and agent containers:
[root@k8s-master ceph]# kubectl create -f operator.yaml
deployment.apps/rook-ceph-operator created
Check whether the related pods have started. Deploying rook-ceph-operator triggers the rollout of Agent and Discover pods as DaemonSets across the cluster; the operator creates two pods on every host in the cluster, rook-discover and rook-ceph-agent:
[root@k8s-master ceph]# kubectl get pod -n rook-ceph -o wide
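Since the agent and discover pods are managed as DaemonSets, they can also be verified at the DaemonSet level; this is a supplementary check, and the DaemonSet names correspond to the pods the operator creates:
[root@k8s-master ceph]# kubectl -n rook-ceph get daemonset
# expect rook-ceph-agent and rook-discover, each with DESIRED/READY equal to the node count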
Create the Ceph cluster:
[root@k8s-master ceph]# kubectl create -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
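The CephCluster custom resource created from cluster.yaml can also be inspected directly; its status reflects overall provisioning progress (a supplementary check, using the resource name rook-ceph shown in the output above):
[root@k8s-master ceph]# kubectl -n rook-ceph get cephcluster
[root@k8s-master ceph]# kubectl -n rook-ceph describe cephcluster rook-ceph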
Check the pod status:
[root@k8s-master ceph]# kubectl get pod -n rook-ceph -o wide
Right after creation it takes a while for everything to come up; the OSD pods in particular are relatively slow.
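One simple way to wait for things to settle is to watch the pods until all OSD pods reach Running (press Ctrl-C to stop watching):
[root@k8s-master ceph]# kubectl get pod -n rook-ceph -w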
Check the deployment information to inspect the cluster:
[root@k8s-master ~]# kubectl -n rook-ceph get deployment
Configure the dashboard
The Ceph dashboard is already enabled by default in cluster.yaml; check the dashboard service:
[root@k8s-master ~]# kubectl get service -n rook-ceph|grep dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard ClusterIP 10.105.208.128 <none> 8443/TCP 25h
rook-ceph-mgr-dashboard listens on port 8443. Create a NodePort-type service so the dashboard can be reached from outside the cluster:
[root@k8s-master ceph]# kubectl apply -f dashboard-external-https.yaml
service/rook-ceph-mgr-dashboard-external-https created
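For reference, the service created above is defined in dashboard-external-https.yaml. The sketch below reflects the Rook 1.0 example as recalled; the selector labels in particular are an assumption, so treat the file shipped in the repository as authoritative:
[root@k8s-master ceph]# cat dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort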
Check the dashboard's externally exposed port:
[root@k8s-master ceph]# kubectl get service -n rook-ceph | grep dashboard
rook-ceph-mgr-dashboard ClusterIP 10.105.208.128 <none> 8443/TCP 15m
rook-ceph-mgr-dashboard-external-https NodePort 10.100.241.136 <none> 8443:32299/TCP 6s
Get the dashboard login account and password:
[root@k8s-master ~]# MGR_POD=`kubectl get pod -n rook-ceph | grep mgr | awk '{print $1}'`
[root@k8s-master ceph]# kubectl -n rook-ceph logs $MGR_POD | grep password
debug 2019-05-16 06:36:41.934 7fc2c1822700 0 log_channel(audit) log [DBG] : from='client.14398 -' entity='client.admin' cmd=[{"username": "admin", "prefix": "dashboard set-login-credentials", "password": "RmXvxlnOf6", "target": ["mgr", ""], "format": "json"}]: dispatch
Find the password field: the user is admin and the password is RmXvxlnOf6.
Open a browser and enter any node's IP plus the NodePort; here the master node's IP is used:
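With the NodePort 32299 shown above and the master IP from the hosts file, the URL would be https://51.0.1.213:32299 (the dashboard uses a self-signed certificate, so the browser will warn about it). A quick reachability check from the command line:
[root@k8s-master ceph]# curl -k -s -o /dev/null -w '%{http_code}\n' https://51.0.1.213:32299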
Deploy the Ceph toolbox
The Ceph cluster started with the defaults has Ceph authentication enabled, so logging into the pod of a Ceph component does not let you query the cluster state
or run CLI commands. For that you need to deploy the Ceph toolbox, as follows:
[root@k8s-master ceph]# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
[root@k8s-master ceph]# kubectl -n rook-ceph get pods -o wide | grep ceph-tools
rook-ceph-tools-b8c679f95-mf5xn 1/1 Running 0 12s 51.0.1.215 k8s-node2 <none> <none>
Log into the container to run commands:
[root@k8s-master ceph]# kubectl -n rook-ceph exec -it rook-ceph-tools-b8c679f95-mf5xn bash
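Inside the toolbox the Ceph CLI is already configured with the cluster credentials, so the usual status commands work, for example:
ceph status
ceph osd status
ceph df
rados df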