Official site: https://rook.io/
Project repository: https://github.com/rook/rook
Prepare the OSD storage media
Device | Size | Role |
---|---|---|
sdb | 50GB | OSD Data |
sdc | 50GB | OSD Data |
sdd | 50GB | OSD Data |
sde | 50GB | OSD Metadata |
> Before installing, run `lvm lvs`, `lvm vgs`, and `lvm pvs` to check whether the disks above are already in use; any existing LVM volumes must be removed, and the disks must carry no partitions or filesystems.
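A minimal check-and-wipe sketch, assuming the device names from the table above (the wipe commands are destructive, so double-check the device before running them):

```bash
# Confirm the OSD disks carry no partitions or filesystem signatures
lsblk -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

# If a disk was previously used, clear its partition table and signatures
sgdisk --zap-all /dev/sdb
wipefs --all /dev/sdb
```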
Make sure the kernel rbd module is loaded and install lvm2
```bash
modprobe rbd
yum install -y lvm2
```
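To confirm the module is actually loaded, and to have it loaded again after a reboot (the modules-load.d drop-in is the standard systemd mechanism), something like this can be used:

```bash
# Verify the rbd module is present
lsmod | grep rbd

# Load it automatically on boot
echo rbd > /etc/modules-load.d/rbd.conf
```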
Install the operator
```bash
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
```
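Before creating the cluster it is worth waiting for the operator to come up; a simple check:

```bash
# The rook-ceph-operator pod should reach Running before the cluster is created
kubectl -n rook-ceph get pods -w
```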
Install the Ceph cluster
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: true
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
    nodes:
    - name: "minikube"
      devices:
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config:
        storeType: bluestore
        metadataDevice: "sde"
        databaseSizeMB: "1024"
        journalSizeMB: "1024"
        osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
```
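Applying the manifest kicks off OSD prepare jobs that take a few minutes; progress can be watched with standard kubectl commands (cluster.yaml below is just whatever file the manifest above was saved as):

```bash
kubectl create -f cluster.yaml
# Wait for the mon, mgr, osd-prepare and osd pods to reach Running/Completed
kubectl -n rook-ceph get pods -w
```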
Install the command-line toolbox
```bash
kubectl create -f toolbox.yaml
```
Inside the toolbox, run `ceph -s` to check the cluster status.
> When reinstalling the Ceph cluster, the Rook data directory (default: /var/lib/rook) must be cleaned up first.
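A short cleanup sketch for a reinstall, run on every node that hosted Ceph daemons:

```bash
# Remove Rook's on-host state (the dataDirHostPath configured above)
rm -rf /var/lib/rook

# The OSD disks must also be wiped again, as in the disk-preparation step earlier
```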
Add an Ingress route for the ceph-dashboard service
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
  - hosts:
    - rook-ceph.minikube.local
    secretName: rook-ceph.minikube.local
  rules:
  - host: rook-ceph.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: https-dashboard
```
Get the admin account password needed to access the dashboard
```bash
kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}'|base64 -d
```
Add the domain rook-ceph.minikube.local to /etc/hosts, then open it in a browser:
https://rook-ceph.minikube.local/
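A quick way to add the entry (together with the RGW host used later), assuming the ingress controller is reachable on the minikube node IP:

```bash
echo "$(minikube ip) rook-ceph.minikube.local rook-ceph-rgw.minikube.local" >> /etc/hosts
```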
Create an RBD pool
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 3
```
> Since there is only one node and three OSDs, osd is used as the failure domain.
After it is created, running `ceph osd pool ls` in rook-ceph-tools shows the newly created pool.
Create a StorageClass backed by RBD
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```
Use a StatefulSet to test mounting RBD storage through the StorageClass
```yaml
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: storageclass-rbd-test
  namespace: default
  labels:
    app: storageclass-rbd-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-rbd-test
  template:
    metadata:
      labels:
        app: storageclass-rbd-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-rbd-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-ceph-block
```
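Once both replicas are running, each should have its own RBD-backed volume; a quick sanity check (the pod name follows the StatefulSet's ordinal naming):

```bash
# One bound PVC per replica
kubectl get pvc -n default

# The RBD image should be mounted at /data inside the pod
kubectl exec storageclass-rbd-test-0 -- df -h /data
```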
Create the MDS service and a CephFS filesystem
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPools:
  - failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    annotations:
    resources:
```
After it is created, running `ceph osd pool ls` in rook-ceph-tools shows the newly created pools.
Create a StorageClass backed by CephFS
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
```
Use a Deployment to test mounting CephFS shared storage through the StorageClass
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-cephfs-test
  template:
    metadata:
      labels:
        app: storageclass-cephfs-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-cephfs-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-storageclass-cephfs-test
```
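Because the PVC is ReadWriteMany, both replicas mount the same CephFS volume; writing from one pod and reading from the other confirms the sharing:

```bash
PODS=$(kubectl get pods -l app=storageclass-cephfs-test -o jsonpath='{.items[*].metadata.name}')
# Write a file from the first pod and read it back from the second
kubectl exec $(echo $PODS | cut -d' ' -f1) -- sh -c 'echo hello > /data/test.txt'
kubectl exec $(echo $PODS | cut -d' ' -f2) -- cat /data/test.txt
```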
Create an object storage gateway
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPool:
    failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
```
After it is created, running `ceph osd pool ls` in rook-ceph-tools shows the newly created pools.
Add an Ingress route for the ceph-rgw service
```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-rgw
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - rook-ceph-rgw.minikube.local
    secretName: rook-ceph-rgw.minikube.local
  rules:
  - host: rook-ceph-rgw.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-rgw-my-store
          servicePort: http
```
Add the domain rook-ceph-rgw.minikube.local to /etc/hosts, then open it in a browser:
https://rook-ceph-rgw.minikube.local/
Add an object storage user
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"
```
Creating the object storage user also generates a Secret named after the pattern {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}, which holds that S3 user's AccessKey and SecretKey.
Get the AccessKey
```bash
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}'|base64 -d
```
Get the SecretKey
```bash
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}'|base64 -d
```
With the credentials obtained above, connect with any S3 client to use this S3 user.
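A minimal connection sketch using the AWS CLI, assuming it runs somewhere that can reach the RGW service (inside the cluster the service DNS name below works; from outside, the ingress host rook-ceph-rgw.minikube.local can be used instead):

```bash
export AWS_ACCESS_KEY_ID=$(kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}'|base64 -d)
export AWS_SECRET_ACCESS_KEY=$(kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}'|base64 -d)

# Create a bucket and list buckets through the RGW S3 endpoint
aws --endpoint-url http://rook-ceph-rgw-my-store.rook-ceph.svc s3 mb s3://test-bucket
aws --endpoint-url http://rook-ceph-rgw-my-store.rook-ceph.svc s3 ls
```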
Create a StorageClass backed by S3
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: default
```
> Creating PVCs on S3 storage is not currently supported; this StorageClass can only be used to create buckets.
Create an ObjectBucketClaim against the StorageClass
```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
```
After the bucket is created, a Secret with the same name as the ObjectBucketClaim is generated, holding the AccessKey and SecretKey used to connect to that bucket.
Get the AccessKey
```bash
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_ACCESS_KEY_ID}'|base64 -d
```
Get the SecretKey
```bash
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}'|base64 -d
```
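Alongside the Secret, the bucket provisioner also creates a ConfigMap of the same name holding the endpoint and the generated bucket name; the key names below follow that convention and are worth verifying on your Rook version:

```bash
# Generated bucket name and RGW endpoint host
kubectl get configmap ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.BUCKET_NAME}'
kubectl get configmap ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.BUCKET_HOST}'
```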
> The S3 user obtained this way has a quota applied that limits it to a single bucket.
That wraps up a quick hands-on tour of Rook Ceph's three-in-one storage (RBD, CephFS, S3). Compared with ceph-deploy and ceph-ansible it is much simpler and more convenient, and it is a good way for newcomers to get a feel for Ceph. How stable it is will take time to tell, so it is not yet recommended for production use.