Copyright 2017-05-22 xiaogang(172826370@qq.com)
Most of the articles on this topic online are copied from the official docs or from each other; they are badly written, mislead a lot of people, and have almost no practical value, so I wrote this one. NFS is relatively simple and is usually already installed.
This post covers NFS first; a Ceph RBD dynamic volume write-up will follow later, along with several Redis and MySQL master/slave examples.
Storage is a key problem when running stateful containers, and Kubernetes provides solid support for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, creates storage volumes on demand. Before this feature existed, a cluster administrator had to call the cloud or storage provider to request a new volume and then create a PersistentVolume to make it visible in Kubernetes. Dynamic provisioning automates both steps, so administrators no longer need to pre-allocate storage. Volumes are provisioned as described by a StorageClass, which is an abstraction over the underlying storage and carries storage-related parameters such as the disk type (standard or SSD).
The different provisioners behind a StorageClass give Kubernetes access to specific physical or cloud storage back ends. Several are supported out of the box, and more are available from the Kubernetes incubator.
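To make the idea concrete, here is a minimal sketch of how a user consumes dynamic provisioning: a PersistentVolumeClaim that references a StorageClass. The claim itself is my own illustration and is not part of the setup below; it uses the managed-nfs-storage class created later in this post and the beta annotation from the 1.5/1.6 era (newer versions use spec.storageClassName instead).

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # which StorageClass should provision the volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Once such a claim is created, the provisioner registered for that class creates a matching PersistentVolume and binds it to the claim automatically.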
In Kubernetes 1.6, dynamic volume provisioning was promoted to stable (it entered beta in 1.4). This is an important step in automating storage in Kubernetes: administrators control how resources are provisioned, while users can focus on their applications. Beyond those benefits, there are some user-facing changes you should understand before upgrading to Kubernetes 1.6.
Stateful applications
Normally nginx or another web server (not MySQL) does not need to store data itself; for a web server, data lives on dedicated persistence nodes, so the web tier can be scaled out or in simply by changing the replica count. Many stateful programs, however, must be deployed as a cluster: the nodes form a group, and each member needs a unique ID (for example the Kafka broker id or the ZooKeeper myid) that identifies it inside the cluster and is used for communication between members. The traditional approach is for the administrator to deploy these programs onto stable, long-lived nodes with persistent storage and static IP addresses, which couples an application instance to a specific piece of infrastructure such as a machine or an IP address. The goal of StatefulSet in Kubernetes is to break this coupling by giving each instance an identity that does not depend on the underlying infrastructure (consumers reach a particular member through a DNS name instead of a static IP).
StatefulSet
Prerequisites for using StatefulSet:
The DNS cluster add-on must be installed, version >= 15.
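A quick way to check that the DNS add-on is up (my own check, not from the original article; the k8s-app=kube-dns label is the one the standard kube-dns deployment carries):

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns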
StatefulSet (called PetSet before version 1.5) suits stateful programs because, unlike a Deployment, it gives each replica a stable, unique network identity, stable persistent storage, and ordered deployment, scaling and deletion.
So applications that need stable cluster membership, such as ZooKeeper, etcd or Elasticsearch, are good candidates for a StatefulSet. By querying the A records of the headless Service domain you obtain the DNS names of all members of the cluster.
StatefulSet also has some limitations: the storage must come from a PersistentVolume provisioner (or be pre-provisioned), deleting or scaling down the StatefulSet does not delete its volumes, and a headless Service is required to provide the Pods' network identities.
To define a Service as a headless Service, set the ClusterIP field in the Service definition to empty: spec.clusterIP: None. Compared with a normal Service, a headless Service has no ClusterIP (and therefore no load balancing); instead it gives every member of the set a unique DNS name as its network identity, and members talk to each other using these names. The domain managed by a headless Service has the form $(service_name).$(k8s_namespace).svc.cluster.local, where "cluster.local" is the cluster domain (the default unless configured otherwise). Every Pod created by the StatefulSet gets a corresponding DNS subdomain of the form $(podname).$(governing_service_domain), where governing_service_domain is determined by the serviceName defined in the StatefulSet. For example, if the headless Service manages the domain kafka.test.svc.cluster.local, a Pod it governs gets the subdomain kafka-1.kafka.test.svc.cluster.local. All of the names mentioned here are cluster-internal domains managed by the kube-dns component and can be queried with a command such as the one shown below.
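As an illustration (my own sketch, not from the original article), a minimal headless Service for the nginx StatefulSet used later (serviceName: nginx1, namespace default) and a throwaway busybox Pod to query the per-Pod DNS record could look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  clusterIP: None        # headless: no ClusterIP and no load balancing
  ports:
  - name: web
    port: 80
  selector:
    app: nginx1

# query the DNS record of Pod web-0 from inside the cluster
kubectl run -i -t dns-test --image=busybox --restart=Never --rm -- \
  nslookup web-0.nginx1.default.svc.cluster.local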
Configure the export permissions on the NFS server host:

cat /etc/exports
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
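If you add or change an entry in /etc/exports, re-export it and verify it on the server first (these two commands are standard NFS tooling and are not shown in the original post):

exportfs -ra                # re-read /etc/exports and apply the changes
showmount -e localhost      # confirm the directory is exported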
Pull the nfs-client provisioner image and push it to the local registry:
docker pull quay.io/kubernetes_incubator/nfs-client-provisioner:v1
docker tag quay.io/kubernetes_incubator/nfs-client-provisioner:v1 192.168.1.103/k8s_public/nfs-client-provisioner:v1
docker push 192.168.1.103/k8s_public/nfs-client-provisioner:v1
Deploy the provisioner; it is the component that creates PVs on the NFS share on behalf of the StorageClass.
cat deployment-nfs.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
      - name: nfs-client-provisioner
        image: 192.168.1.103/k8s_public/nfs-client-provisioner:v1
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.103
        - name: NFS_PATH
          value: /data/nfs-storage/k8s-storage/ssd
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.103
          path: /data/nfs-storage/k8s-storage/ssd    # NFS export path; set this to match your environment
[root@master3 deploy]# kubectl create -f deployment-nfs.yaml
[root@master3 deploy]# kubectl get pod
nfs-client-provisioner-4163627910-fn70d   1/1   Running   0   1m
Deploy storageclass.yaml
[root@master3 deploy]# cat nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs    # must match the PROVISIONER_NAME env of nfs-client-provisioner (fuseim.pri/ifs), or choose another name and keep the two in sync
[root@master3 deploy]# kubectl create -f nfs-class.yaml
[root@master3 deploy]# kubectl get storageclass
NAME                  TYPE
ceph-web              kubernetes.io/rbd
managed-nfs-storage   fuseim.pri/ifs
Create a Pod that uses the StorageClass
[root@master3 stateful-set]# cat nginx.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"
  replicas: 2
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"    # reference the StorageClass name here
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: 192.168.1.103/k8s_public/nginx:latest
        volumeMounts:
        - mountPath: "/mnt"
          name: test
      imagePullSecrets:
      - name: "registrykey"    # note: this secret is used to pull from the local private registry
Verify that the PV and PVC were created automatically
[root@master3 stateful-set]# kubectl get pv |grep web
default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59   2Gi   RWO   Delete   Bound   default/test-web-0   1m
default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59   2Gi   RWO   Delete   Bound   default/test-web-1   1m
[root@master3 stateful-set]# kubectl get pvc |grep web
test-web-0   Bound   default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59   2Gi   RWO   1m
test-web-1   Bound   default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59   2Gi   RWO   1m
[root@master3 stateful-set]# kubectl get storageclass |grep web
ceph-web              kubernetes.io/rbd
[root@master3 stateful-set]# kubectl get storageclass
NAME                  TYPE
ceph-web              kubernetes.io/rbd
managed-nfs-storage   fuseim.pri/ifs
[root@master3 stateful-set]# kubectl get pod |grep web
web-0   1/1   Running   0   2m
web-1   1/1   Running   0   2m
Scale out the Pods
[root@master3 stateful-set]# kubectl scale statefulset web --replicas=3
[root@master3 stateful-set]# kubectl get pod |grep web
web-0   1/1   Running   0   10m
web-1   1/1   Running   0   10m
web-3   1/1   Running   0   1m
Scale the Pods back down to 1
kubectl scale statefulset web --replicas=1
[root@master3 stateful-set]# kubectl get pod |grep web
web-0   1/1   Running   0   11m
OK, creation is complete and the Pods are running normally.
Exec into web-0 to verify the PVC mount directory
[root@master3 stateful-set]# kubectl exec -it web-0 /bin/bash
root@web-0:/# df -h
Filesystem                                                                                                    Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:0-654996-18a8b448ce9ebf898e46c4468b33093ed9a5f81794d82a271124bcd1eb27a87c               10G  230M  9.8G   3% /
tmpfs                                                                                                         1.6G     0  1.6G   0% /dev
tmpfs                                                                                                         1.6G     0  1.6G   0% /sys/fs/cgroup
192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59   189G   76G  104G  43% /mnt
/dev/mapper/centos-root                                                                                        37G  9.1G   26G  27% /etc/hosts
shm                                                                                                            64M     0   64M   0% /dev/shm
tmpfs                                                                                                         1.6G   12K  1.6G   1% /run/secrets/kubernetes.io/serviceaccount
root@web-0:/#
Check the PVC directories on the NFS server
root@pxt:/data/nfs-storage/k8s-storage/ssd# ll
total 40
drwxr-xr-x 10 root root 4096 May 22 17:53 ./
drwxr-xr-x  7 root root 4096 May 12 17:26 ../
drwxr-xr-x  3 root root 4096 May 16 16:19 default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  3 root root 4096 May 16 16:20 default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  3 root root 4096 May 16 16:21 default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:49 default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:56 default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:58 default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59/
root@pxt:/data/nfs-storage/k8s-storage/ssd# showmount -e
Export list for pxt.docker.agent103:
/data/nfs_ssd                            *
/data/nfs-storage/k8s-storage/standard   *
/data/nfs-storage/k8s-storage/ssd        *
/data/nfs-storage/k8s-storage/redis      *
/data/nfs-storage/k8s-storage/nginx      *
/data/nfs-storage/k8s-storage/mysql      *
root@pxt:/data/nfs-storage/k8s-storage/ssd# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
#/data/nfs-storage/k8s-storage          *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/mysql     *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/nginx     *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/redis     *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/ssd       *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/standard  *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs_ssd                           *(rw,insecure,sync,no_subtree_check,no_root_squash)
The MySQL master/slave example uses three files: mysql-configmap.yaml, mysql-services.yaml and mysql-statefulset.yaml.
[root@master3 setateful-set-mysql]# cat mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
[root@master3 setateful-set-mysql]# cat mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
[root@master3 setateful-set-mysql]# cat mysql-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        # Note: the init containers originally use the upstream images mysql:5.7 and
        # gcr.io/google-samples/xtrabackup:1.0; here they were re-tagged and pushed to the private registry.
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "init-mysql",
            "image": "192.168.1.103/k8s_public/mysql:5.7",
            "command": ["bash", "-c", "
              set -ex\n
              # Generate mysql server-id from pod ordinal index.\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              echo [mysqld] > /mnt/conf.d/server-id.cnf\n
              # Add an offset to avoid reserved server-id=0 value.\n
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
              # Copy appropriate conf.d files from config-map to emptyDir.\n
              if [[ $ordinal -eq 0 ]]; then\n
                cp /mnt/config-map/master.cnf /mnt/conf.d/\n
              else\n
                cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
              fi\n
            "],
            "volumeMounts": [
              {"name": "conf", "mountPath": "/mnt/conf.d"},
              {"name": "config-map", "mountPath": "/mnt/config-map"}
            ]
          },
          {
            "name": "clone-mysql",
            "image": "192.168.1.103/k8s_public/xtrabackup:1.0",
            "command": ["bash", "-c", "
              set -ex\n
              # Skip the clone if data already exists.\n
              [[ -d /var/lib/mysql/mysql ]] && exit 0\n
              # Skip the clone on master (ordinal index 0).\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              [[ $ordinal -eq 0 ]] && exit 0\n
              # Clone data from previous peer.\n
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
              # Prepare the backup.\n
              xtrabackup --prepare --target-dir=/var/lib/mysql\n
            "],
            "volumeMounts": [
              {"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
              {"name": "conf", "mountPath": "/etc/mysql/conf.d"}
            ]
          }
        ]'
    spec:
      containers:
      - name: mysql
        image: 192.168.1.103/k8s_public/mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 1
            memory: 1Gi
            #memory: 500Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          timeoutSeconds: 1
      - name: xtrabackup
        image: 192.168.1.103/k8s_public/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      nodeSelector:
        zone: mysql
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        #volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage"    # note: depending on the Kubernetes version the alpha or beta annotation is required
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
[root@master3 setateful-set-mysql]# kubectl create -f mysql-configmap.yaml -f mysql-services.yaml -f mysql-statefulset.yaml
[root@master3 setateful-set-mysql]# kubectl get storageclass,pv,pvc,statefulset,pod,service |grep mysql
pv/default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59   10Gi   RWO   Delete   Bound   default/data-mysql-0   6d
pv/default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59   10Gi   RWO   Delete   Bound   default/data-mysql-1   6d
pv/default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59   10Gi   RWO   Delete   Bound   default/data-mysql-2   6d
pvc/data-mysql-0   Bound   default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59   10Gi   RWO   6d
pvc/data-mysql-1   Bound   default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59   10Gi   RWO   6d
pvc/data-mysql-2   Bound   default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59   10Gi   RWO   6d
statefulsets/mysql   3   3   5d
po/mysql-0   2/2   Running   0   5d
po/mysql-1   2/2   Running   0   5d
po/mysql-2   2/2   Running   0   5d
svc/mysql        None           <none>   3306/TCP   6d    # within the same namespace the members resolve by name: ping mysql-0.mysql ; ping mysql-1.mysql
svc/mysql-read   172.1.11.160   <none>   3306/TCP   6d
OK, all Pods are created. Note that the mysql Service has no ClusterIP; that is exactly what a headless Service looks like.
Note that after kubectl delete statefulset (or deleting via the YAML file), the PVs and PVCs still exist, as shown below.
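A short sketch of what that means in practice (the app=mysql label selector is the same one used in the cleanup section further down):

kubectl delete statefulset mysql
kubectl get pvc -l app=mysql          # the claims are still Bound
kubectl delete pvc -l app=mysql       # only this releases the NFS-backed data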
Scale out the MySQL slaves; after scaling you can see that the corresponding PVs and PVCs are created automatically.
kubectl scale --replicas=5 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0   2/2   Running   0   5d
po/mysql-1   2/2   Running   0   5d
po/mysql-2   2/2   Running   0   5d
po/mysql-3   2/2   Running   0   5m
po/mysql-4   2/2   Running   0   5m
Scale back in:
kubectl scale --replicas=2 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0   2/2   Running   0   5d
po/mysql-1   2/2   Running   0   5d
Method 1: connect from a container
Start a mysql-client Pod
# Start a client container. In my test the statements ran successfully but the command appeared to hang; Ctrl+C out of it.
# Afterwards kubectl get pod shows the mysql-client Pod.
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF
kubectl exec -it mysql-client bash
# connect to the slaves (read service)
root@mysql-client:/# mysql -h mysql-read
# connect to the master
root@mysql-client:/# mysql -h mysql-0.mysql
Method 2: install a mysql client on a host machine
# install the client
yum install mysql -y
# look up the Pod IPs
[root@node131 images]# kubectl get po -o wide|grep mysql
mysql-0        2/2   Running   0   25m   172.30.2.4    192.168.6.133
mysql-1        2/2   Running   1   24m   172.30.28.4   192.168.6.132
mysql-2        2/2   Running   1   24m   172.30.2.5    192.168.6.133
mysql-client   1/1   Running   0   22m   172.30.28.5   192.168.6.132
# log in to MySQL with the local client
mysql -h 172.30.2.5
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
[root@node131 images]# kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
> bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
If you don't see a command prompt, try pressing enter.
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 08:58:31 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         101 | 2017-05-23 08:58:32 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         102 | 2017-05-23 08:58:33 |
+-------------+---------------------+
^C
Keep the window above open. To simulate a broken replica, rename the mysql binary inside mysql-2; its readiness probe (mysql -h 127.0.0.1 -e "SELECT 1") then fails and the Pod is removed from the mysql-read Service:
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
In the loop window you can now see only server IDs 100 and 101.
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 09:03:05 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 09:03:06 |
+-------------+---------------------+
Restore 102 and it is automatically added back as a replica:
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
Delete a Pod:
kubectl delete pod mysql-2
After the deletion, the StatefulSet controller automatically recreates mysql-2; see the watch below.
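One way to watch the recreation (my own addition, not in the original):

kubectl get pod mysql-2 --watch     # Terminating, then Pending/ContainerCreating, then Running again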
Node maintenance: when a node needs to be maintained, all Pods on that node must be evicted; they are automatically rescheduled onto other nodes.
kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
kubectl get pod mysql-2 -o wide --watch
After the node has been maintained, bring it back into the cluster:
kubectl uncordon <node-name>
kubectl get pods -l app=mysql --watch
Scale out and back in
kubectl scale --replicas=5 statefulset mysql
kubectl get pods -l app=mysql --watch
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
kubectl scale --replicas=3 statefulset mysql
kubectl get pvc -l app=mysql
After scaling back in, all 5 PVCs still exist, despite having scaled the StatefulSet down to 3:
NAME           STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
data-mysql-0   Bound    pvc-8acbf5dc-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-1   Bound    pvc-8ad39820-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-2   Bound    pvc-8ad69a6d-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-3   Bound    pvc-50043c45-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m
data-mysql-4   Bound    pvc-500a9957-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m
If you don’t intend to reuse the extra PVCs, you can delete them:
kubectl delete pvc data-mysql-3
kubectl delete pvc data-mysql-4
Clean up the environment:
kubectl delete pod mysql-client-loop --now
kubectl delete statefulset mysql
kubectl get pods -l app=mysql
kubectl delete configmap,service,pvc -l app=mysql
Because Kubernetes 1.6 enables RBAC authorization, the provisioner running under the default ServiceAccount cannot list cluster-scoped resources. After creating the StatefulSet, the nfs-client-provisioner Pod's logs show the problem:
kubectl logs -f nfs-client-provisioner-2387627438-hs250
...
E0523 02:47:32.695718       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:32.696305       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:32.697326       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
E0523 02:47:33.697467       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:33.697967       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:33.699042       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
...
^C
Fix:
[root@node131 rbac]# cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
[root@node131 rbac]# cat clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
[root@node131 rbac]# cat clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Note
[root@node131 nfs]# cat nfs-stateful.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner    # the provisioner must run under the ServiceAccount created above
Create these objects in order, then re-deploy the provisioner as shown above so that it runs under the new ServiceAccount:
kubectl create -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml