K8s: Implementing Data Persistence for Stateful Services

Preface

1. What are stateful and stateless services?

For a server program, whether a service is stateful or stateless comes down to whether two requests from the same client have a contextual relationship on the server side. For stateful requests, the server generally keeps information about each request, and every request can implicitly build on earlier ones. For stateless requests, the server must be able to handle each request using only the information the request itself carries, plus any public information the server holds that is available to all requests.
The best-known stateless server program is the web server: each HTTP request is independent of the ones before it, simply fetching the target URI, and once the content is returned the connection is closed without a trace. As the web evolved, stateful elements were layered onto this stateless model, most notably the cookie. When responding to a client, the server can push down a cookie recording some server-side information; the client carries that cookie on subsequent requests, and the server uses it to reconstruct the request's context. The cookie is a transitional mechanism from stateless toward stateful: an external means of maintaining context.
Stateful servers have a wider range of applications, for example instant-messaging (MSN) and online-game servers. The server maintains state for every connection, and when a request arrives on a connection it can restore the context from locally stored information. The client can then rely on defaults, and the server can manage state easily. For instance, once a user logs in, the server can look up previously registered information such as their birthday by username, and in later processing it can easily retrieve that user's history.
Stateful servers are more powerful in terms of functionality, but because they must maintain a large amount of information and state, they perform somewhat worse than stateless servers. Stateless servers excel at simple services but have many drawbacks for complex features; implementing an instant-messaging server statelessly, for example, would be a nightmare.

2. How does data persistence differ between stateful and stateless services in K8s?

In k8s, a stateless service such as a web server can be made persistent using the approach from my earlier post "K8s数据持久化之自动建立PV" (automatically created PVs). But applying that persistence approach to a stateful service such as a database causes a serious problem: writes only land on one of the backend containers. The data does get written under the NFS directory, but the other database instances cannot read it, because databases involve many instance-specific factors, such as server_id and partition-table metadata.

Of course, databases are not the only stateful services that cannot use the persistence approach above.

3. The implementation: StatefulSet

StatefulSet is also a resource object (before Kubernetes 1.5 it was called PetSet). Like RS, RC, and Deployment, these resources are all Pod controllers.

In Kubernetes, most Pod management is based on a stateless, disposable model. A Replication Controller, for example, simply guarantees the number of Pods available to serve. If a Pod is deemed unhealthy, Kubernetes treats it like cattle: delete it and recreate it. A PetSet ("pet" application), by contrast, is a group of stateful Pods, each with its own special, immutable ID, and each holding unique data of its own that must not be deleted.

  As is well known, managing stateful applications is far harder than managing stateless ones. Stateful applications need fixed IDs, have internal communication logic that is invisible from the outside, and are especially sensitive to container churn. Traditionally, stateful applications have been managed with fixed machines, static IPs, and persistent storage. Kubernetes uses the PetSet resource to weaken the coupling between a stateful Pet and specific physical infrastructure. A PetSet guarantees that at any moment a fixed number of Pets are running, each with its own unique identity.

A Pet "with an identity" means the Pod in that Pet has the following properties:

  • Stable storage;
  • A fixed hostname, addressable via DNS (a stable network identity, implemented through a special kind of Service called a Headless Service. Unlike a normal Service, a Headless Service has no Cluster IP; it exists to give each member of a cluster a unique DNS name, used for communication between cluster members.);
  • An ordered index (if the PetSet is named mysql, the first Pet to start is mysql-0, the second mysql-1, and so on. When a Pet goes down, the newly created Pet is given the same name as the original; through that name it matches the original storage, and state is preserved.)
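The stable network identity above means each Pod's DNS name is predictable. A minimal sketch of the naming scheme — the Headless Service name "mysql-headless" and the namespace "default" are assumptions for illustration, not values from the cluster:

```shell
# Each Pod in a StatefulSet/PetSet gets a DNS name of the form
#   <pod-name>.<headless-service>.<namespace>.svc.cluster.local
# "mysql-headless" and "default" below are assumed example values.
svc=mysql-headless
ns=default
for i in 0 1 2; do
  echo "mysql-${i}.${svc}.${ns}.svc.cluster.local"
done
```

Because the name is derived from the ordinal, a replacement Pod answers at the same address as the one it replaced.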

1. Example applications:

  • Database applications such as MySQL and PostgreSQL, which need a fixed ID (used for replication) and an attached NFS volume (persistent storage).
  • Clustered software such as ZooKeeper and etcd, which needs fixed membership.

2. Usage restrictions

  • New in 1.4; unavailable in 1.3 and earlier;
  • DNS: requires the DNS add-on from 1.4 or later; earlier add-ons can only resolve the IP of a Service and cannot resolve the domain name of a Pod (hostname);
  • Requires persistent volumes (PV). For network storage such as NFS, which cannot be provisioned through API calls, the volumes must be created statically before the PetSet. For virtual storage that can be provisioned dynamically through API calls, such as AWS EBS, vSphere, or OpenStack Cinder, the volumes can be created either statically or dynamically via a StorageClass. Note that a dynamically provisioned PV defaults to the Delete reclaim policy: when the data is deleted, the underlying virtual volume is deleted along with it;
  • Deleting or scaling down a PetSet does not delete its persistent volumes; this is deliberate, for data safety;
  • A PetSet can only be upgraded manually.

Configuration example

This approach has a lot in common with automatic PV creation ("K8s数据持久化之自动建立PV"): both need the underlying NFS storage, an RBAC-authorized service account, the nfs-client-provisioner supplying storage, and the SC (StorageClass). The only difference is that with this stateful-service approach, we do not need to create the PVs by hand.

Set up a private registry:

[root@master ~]# docker run -tid --name registry -p 5000:5000 -v /data/registry:/var/lib/registry --restart always registry
[root@master ~]# vim /usr/lib/systemd/system/docker.service   # edit docker's config to point it at the insecure private registry
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
[root@master ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/
[root@master ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
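With the registry up, a locally built image can be tagged and pushed so the nodes can pull it. A sketch under the article's naming — the ljz:v1 image is the one the StatefulSet later pulls; the docker commands (shown commented) assume the image has already been built on this host:

```shell
# Build the full image reference for the insecure private registry.
REGISTRY=192.168.20.6:5000
IMG=ljz
TAG=v1
FULL="${REGISTRY}/${IMG}:${TAG}"
echo "$FULL"
# On a host where ${IMG}:${TAG} exists locally:
#   docker tag "${IMG}:${TAG}" "$FULL"
#   docker push "$FULL"
```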

Set up the NFS service:

[root@master ~]# yum -y install nfs-utils
[root@master ~]# systemctl enable rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
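One detail in the export line matters later: no_root_squash is what lets the provisioner pod, which runs as root, create per-volume directories under /nfsdata. A small check of the options (the exports line is copied from above; the explanation of why it matters is my assumption about the provisioner's behavior):

```shell
# Verify the export options include no_root_squash; without it, root on the
# client is mapped to an unprivileged user and the nfs-client-provisioner
# may be unable to create its volume directories.
EXPORT_LINE='/nfsdata *(rw,sync,no_root_squash)'
case "$EXPORT_LINE" in
  *no_root_squash*) RESULT="ok: root writes will not be squashed" ;;
  *)                RESULT="warning: root_squash may break the provisioner" ;;
esac
echo "$RESULT"
```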

Everything above is preparation.

1. Using the custom image, create a StatefulSet resource object in which every replica persists its data. Replica count: 6. Persistence directory: /usr/local/apache2/htdocs

Create the RBAC authorization

[root@master ljz]# vim rbac.yaml  # write the yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ljz]# kubectl apply -f rbac.yaml       # apply the yaml file

Create the nfs-client-provisioner

[root@master ljz]# vim nfs-deploymnet.yaml   # write the yaml file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: ljz
            - name: NFS_SERVER
              value: 192.168.20.6
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.20.6
            path: /nfsdata
[root@master ljz]# kubectl apply -f nfs-deploymnet.yaml      # apply the yaml file

Create the SC (StorageClass). Note that the provisioner field below must match the PROVISIONER_NAME environment variable set in the provisioner Deployment (ljz in this example), otherwise the StorageClass will never get volumes provisioned.

[root@master ljz]# vim sc.yaml       # write the yaml file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: ljz
reclaimPolicy: Retain
[root@master ljz]# kubectl apply -f sc.yaml         # apply the yaml file

Create the Pods

[root@master ljz]# vim statefulset.yaml        # write the yaml file

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 6
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

[root@master ljz]# kubectl apply -f statefulset.yaml 
[root@master ljz]# kubectl get pod -w
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          7m57s
statefulset-0                             1/1     Running   0          26s
statefulset-1                             1/1     Running   0          24s
statefulset-2                             1/1     Running   0          20s
statefulset-3                             1/1     Running   0          16s
statefulset-4                             1/1     Running   0          13s
statefulset-5                             1/1     Running   0          9s
[root@master ljz]# kubectl get pv,pvc
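The PVC names in that listing follow a fixed pattern: <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>, one per replica. A sketch of the names this manifest produces:

```shell
# volumeClaimTemplate "test" + StatefulSet "statefulset" + 6 replicas
# yields PVCs test-statefulset-0 through test-statefulset-5.
TEMPLATE=test
STS=statefulset
REPLICAS=6
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  echo "${TEMPLATE}-${STS}-${i}"
  i=$((i+1))
done
```

Because the name is derived from the ordinal rather than a random suffix, a re-created Pod rebinds to exactly the same PVC, and therefore the same data.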

2. Once that is done, Pods 0–5 should all serve the index page: Version: --v1
Then scale the service out: update the replica count to 10 and verify that persistent PVs and PVCs are also created for the new Pods.

[root@master ljz]# vim a.sh   # script that writes the index pages

#!/bin/bash
for i in `ls /nfsdata`
do
  echo "Version: --v1" > /nfsdata/${i}/index.html
done
[root@master ljz]# kubectl get pod -o wide      # get the Pod IPs and spot-check the index pages
[root@master ljz]# curl 10.244.1.3
Version: --v1
[root@master ljz]# curl 10.244.1.5
Version: --v1
[root@master ljz]# curl 10.244.2.4
Version: --v1
# scale out and update
[root@master ljz]# vim statefulset.yaml 

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  updateStrategy:
    rollingUpdate:
      partition: 4
  serviceName: headless-svc
  replicas: 10
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

[root@master ljz]# kubectl get pod -w # watch the update process
NAME                                      READY   STATUS             RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running            0          40m
statefulset-0                             1/1     Running            0          33m
statefulset-1                             1/1     Running            0          33m
statefulset-2                             1/1     Running            0          33m
statefulset-3                             1/1     Running            0          33m
statefulset-4                             1/1     Running            0          33m
statefulset-5                             1/1     Running            0          33m
statefulset-6                             0/1     ImagePullBackOff   0          5m9s
statefulset-6                             1/1     Running            0          5m41s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          2s
statefulset-7                             0/1     ContainerCreating   0          2s
statefulset-7                             1/1     Running             0          4s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          1s
statefulset-8                             0/1     ContainerCreating   0          1s
statefulset-8                             1/1     Running             0          3s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          1s
statefulset-9                             0/1     ContainerCreating   0          1s
statefulset-9                             1/1     Running             0          3s
statefulset-5                             1/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     ContainerCreating   0          0s
statefulset-5                             1/1     Running             0          1s
statefulset-4                             1/1     Terminating         0          33m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     ContainerCreating   0          0s
statefulset-4                             1/1     Running             0          1s
[root@master ljz]# kubectl get pv,pvc                # the PVs and PVCs created for the scaled-up Pods
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            Delete           Bound    default/test-statefulset-0   test-sc                 38m
persistentvolume/pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            Delete           Bound    default/test-statefulset-4   test-sc                 37m
persistentvolume/pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            Delete           Bound    default/test-statefulset-7   test-sc                 3m41s
persistentvolume/pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            Delete           Bound    default/test-statefulset-5   test-sc                 37m
persistentvolume/pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            Delete           Bound    default/test-statefulset-2   test-sc                 37m
persistentvolume/pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            Delete           Bound    default/test-statefulset-9   test-sc                 3m34s
persistentvolume/pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            Delete           Bound    default/test-statefulset-3   test-sc                 37m
persistentvolume/pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            Delete           Bound    default/test-statefulset-8   test-sc                 3m37s
persistentvolume/pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            Delete           Bound    default/test-statefulset-6   test-sc                 9m22s
persistentvolume/pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            Delete           Bound    default/test-statefulset-1   test-sc                 37m

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-statefulset-0   Bound    pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            test-sc        38m
persistentvolumeclaim/test-statefulset-1   Bound    pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-2   Bound    pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-3   Bound    pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-4   Bound    pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-5   Bound    pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-6   Bound    pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            test-sc        9m22s
persistentvolumeclaim/test-statefulset-7   Bound    pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            test-sc        3m41s
persistentvolumeclaim/test-statefulset-8   Bound    pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            test-sc        3m37s
persistentvolumeclaim/test-statefulset-9   Bound    pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            test-sc        3m34s
[root@master nfsdata]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          54m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          47m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          47m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          47m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          47m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          12m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          13m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          18m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          13m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          13m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          13m   10.244.1.7   node01   <none>           <none>
# check the index pages; statefulset-6 still serves the default Apache page because its data directory was created after the index-writing script ran
[root@master nfsdata]# curl 10.244.2.6
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
 <head>
  <title>Index of /</title>
 </head>
 <body>
<h1>Index of /</h1>
<ul></ul>
</body></html>
[root@master nfsdata]# curl 10.244.1.8
Version: --v1

Update the service: during the update, every Pod after ordinal 3 should be updated to Version: v2

[root@master ljz]# vim a.sh      # rewrite the index pages

#!/bin/bash
for i in `ls /nfsdata/`
do
  if [ `echo $i | awk -F - '{print $4}'` -gt 3 ]
  then
    echo "Version: --v2" > /nfsdata/${i}/index.html
  fi
done
[root@master ljz]# sh a.sh        # run the script
[root@master ljz]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          68m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          60m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          60m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          60m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          60m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          26m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          26m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          32m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          26m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          26m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          26m   10.244.1.7   node01   <none>           <none>
# verify the content
[root@master ljz]# curl 10.244.1.4
Version: --v1
[root@master ljz]# curl 10.244.2.8
Version: --v2
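The partition field set earlier controls which ordinals a rolling update touches: with partition: 4, only Pods with ordinal >= 4 are re-created on the new revision, which matches the watch output above (of the original six, only statefulset-4 and statefulset-5 were restarted; 6–9 started on v2 directly). A sketch of the rule:

```shell
# With updateStrategy.rollingUpdate.partition=4 and 10 replicas,
# ordinals >= 4 receive the new revision; ordinals < 4 keep the old one.
PARTITION=4
REPLICAS=10
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  if [ "$i" -ge "$PARTITION" ]; then
    echo "statefulset-${i}: v2"
  else
    echo "statefulset-${i}: v1"
  fi
  i=$((i+1))
done
```

To roll the remaining Pods forward later, lower partition (ultimately to 0), for example by editing the manifest and re-applying it.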