Original article: http://maoqide.live/post/cloud/deploy-mysql-on-kubernetes/
This article uses mysql-operator to deploy a highly available MySQL StatefulSet on a Kubernetes cluster.
mysql
The open-source operator project used here, mysql-operator, is hardcoded to support only MySQL 8.0.11 and above; I modified the code to support versions 5.7.0 and above (project address). This article deploys mysql-5.7.26, using the mysql/mysql-server:5.7.26 image from Docker Hub.
Clone the project with git clone and cd into the code directory. Run sh hack/build.sh to compile the code and obtain the mysql-agent and mysql-operator binaries, then place the binaries into bin/linux_amd64. Next, run docker build -f docker/mysql-agent/Dockerfile -t $IMAGE_NAME_AGENT . and docker build -f docker/mysql-operator/Dockerfile -t $IMAGE_NAME_OPERATOR . to build the images. mysql-operator produces the operator image; mysql-agent produces the agent image, which runs as a sidecar in the same pod as the mysql-server container when a MySQL cluster is created. The build steps are summarized below.
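A minimal sketch of the build steps above, assuming the repository URL is your own fork and that $IMAGE_NAME_AGENT / $IMAGE_NAME_OPERATOR are placeholder image tags for your registry:

# clone the project (substitute your own fork of mysql-operator) and enter the code directory
git clone <your-fork-of-mysql-operator>
cd mysql-operator

# compile; this yields the mysql-agent and mysql-operator binaries
sh hack/build.sh

# place the binaries where the Dockerfiles expect them
# (the exact output path of build.sh may differ; adjust accordingly)
mkdir -p bin/linux_amd64
cp mysql-agent mysql-operator bin/linux_amd64/

# build the two images; the tags are placeholders
docker build -f docker/mysql-agent/Dockerfile -t $IMAGE_NAME_AGENT .
docker build -f docker/mysql-operator/Dockerfile -t $IMAGE_NAME_OPERATOR .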
First, deploy the mysql-operator Deployment according to the documentation. The documentation installs it with helm; if you would rather not install helm and tiller, you can install only the helm client, cd into the code directory, and run helm template --name mysql-operator mysql-operator to generate the YAML needed for deployment, then run kubectl apply -f mysql-operator.yaml directly to create the operator. This YAML creates the CRD types required by the operator, the operator Deployment, the RBAC permissions the operator needs, and so on.
# change directory into mysql-operator
cd mysql-operator
# generate mysql-operator.yaml
helm template --name mysql-operator mysql-operator > mysql-operator.yaml
# deploy on kubernetes
kubectl apply -f mysql-operator.yaml
# deployed.
[root@localhost]$ kubectl get deploy -n mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           2d5h
The cluster created in this article is a three-node MySQL cluster: one master node and two slaves. The master is read-write, the slaves are read-only, and Kubernetes Local PVs are used for persistent storage.
First, create a PV for each node. A Local PV must define nodeAffinity to constrain which node it binds to; a note on preparing the local path on each node follows the commands below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv0
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mysql-storage
  local:
    path: /data/mysql-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.0.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mysql-storage
  local:
    path: /data/mysql-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mysql-storage
  local:
    path: /data/mysql-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 192.168.0.3
# create pv
kubectl create -f pv.yaml
# get persistent volumes
[root@localhost]$ kubectl get pv
mypv-0   1Gi   RWO   Delete   Available   mysql-storage   4s
mypv-1   1Gi   RWO   Delete   Available   mysql-storage   4s
mypv-2   1Gi   RWO   Delete   Available   mysql-storage   4s
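One practical note (my addition, not from the original): a Local PV's path must already exist on the node before a pod can mount it, so the data directory has to be created on each of the three hosts first. A minimal sketch, assuming SSH access to the nodes:

# create the local data directory on each node used by the PVs above
ssh root@192.168.0.1 'mkdir -p /data/mysql-data'
ssh root@192.168.0.2 'mkdir -p /data/mysql-data'
ssh root@192.168.0.3 'mkdir -p /data/mysql-data'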
Next, in the namespace where MySQL will be created, create the corresponding RBAC permissions for the MySQL cluster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: mysql2
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: mysql2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
- kind: ServiceAccount
  name: mysql-agent
  namespace: mysql2
If you want a custom MySQL password, create a Secret for it; the password must be base64-encoded. On Linux, run echo -n 'password' | base64 to encode the password.
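For example (my illustration), encoding the password root produces the value used in the Secret below:

# base64-encode the password "root"
[root@localhost]$ echo -n 'root' | base64
cm9vdA==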
apiVersion: v1
data:
  password: cm9vdA==
kind: Secret
metadata:
  labels:
    v1alpha1.mysql.oracle.com/cluster: mysql
  name: mysql-pv-root-password
  namespace: mysql2
kubectl apply -f rbac.yaml
kubectl apply -f secret.yaml
When the operator was created, the following CRD types were already registered; to deploy a MySQL cluster, the only resource you need to create is one of type mysqlclusters.
[root@localhost]$ kubectl get crd | grep mysql
mysqlbackups.mysql.oracle.com           2019-05-14T02:51:11Z
mysqlbackupschedules.mysql.oracle.com   2019-05-14T02:51:11Z
mysqlclusters.mysql.oracle.com          2019-05-14T02:51:11Z
mysqlrestores.mysql.oracle.com          2019-05-14T02:51:11Z
Next, create an instance of the operator's custom resource type (CRD), mysqlclusters.
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql
  namespace: mysql2
spec:
  # must match the tag of the mysql-server image version
  version: 5.7.26
  repository: 20.26.28.56/dcos/mysql-server
  # number of members
  members: 3
  # specify the MySQL root password; the name must match the Secret created earlier
  rootPasswordSecret:
    name: mysql-pv-root-password
  resources:
    agent:
      limits:
        cpu: 500m
        memory: 200Mi
      requests:
        cpu: 300m
        memory: 100Mi
    server:
      limits:
        cpu: 1000m
        memory: 1000Mi
      requests:
        cpu: 500m
        memory: 500Mi
  volumeClaimTemplate:
    metadata:
      name: mysql-pv
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "mysql-storage"
      resources:
        requests:
          storage: 1Gi
kubectl apply -f mysql.yaml
After applying it, you will see Kubernetes start to bring up the MySQL StatefulSet in that namespace and create a headless Service.
[root@localhost]$ kubectl get all -n mysql2
NAME          READY   STATUS    RESTARTS   AGE
pod/mysql-0   2/2     Running   0          8h
pod/mysql-1   2/2     Running   0          8h
pod/mysql-2   2/2     Running   0          8h

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         <none>        3306/TCP   21h

NAME                     READY   AGE
statefulset.apps/mysql   1/1     21h
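You can also inspect the Cluster resource created through the CRD; a sketch (the exact output shape depends on the operator version):

# list the mysqlclusters resources managed by the operator
kubectl get mysqlclusters -n mysql2
# view the full spec and status of the cluster object
kubectl describe mysqlclusters mysql -n mysql2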
At this point, running the hack/cluster-status.sh script returns the following cluster information:
{ "clusterName": "Cluster", "defaultReplicaSet": { "name": "default", "primary": "mysql-0.mysql:3306", "ssl": "DISABLED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures. 2 members are not active", "topology": { "mysql-0.mysql:3306": { "address": "mysql-0.mysql:3306", "mode": "R/W", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql-1.mysql:3306": { "address": "mysql-1.mysql:3306", "mode": "n/a", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql-2.mysql:3306": { "address": "mysql-2.mysql:3306", "mode": "n/a", "readReplicas": {}, "role": "HA", "status": "ONLINE" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "mysql-0.mysql:3306" }
Through the DNS address mysql-0.mysql.mysql2.svc.cluster.local:3306 you can connect to the database for read and write operations. A multi-node MySQL cluster is now deployed, but services outside the cluster still cannot reach the database. A quick in-cluster connectivity check is shown below.
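A minimal in-cluster check (my addition): run a throwaway MySQL client pod and connect to the primary through its DNS name, using the root password stored in the Secret created earlier:

# start a temporary client pod and open a mysql shell against mysql-0
kubectl run mysql-client -n mysql2 --image=mysql/mysql-server:5.7.26 \
  -it --rm --restart=Never -- \
  mysql -h mysql-0.mysql.mysql2.svc.cluster.local -P 3306 -uroot -p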
First, a headless Service can only be reached through in-cluster DNS; for external access, another Service has to be created. To expose mysql-0 to the outside, we create a ClusterIP Service for mysql-0.
kind: Service
apiVersion: v1
metadata:
  name: mysql-0
  namespace: mysql2
spec:
  selector:
    # the selector pins this Service to the mysql-0 pod
    statefulset.kubernetes.io/pod-name: mysql-0
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
Next, an ingress controller is needed; this article uses haproxy-ingress.
Because MySQL communicates over TCP, and Kubernetes Ingress by default only supports HTTP and HTTPS, haproxy-ingress provides a ConfigMap-based way to configure TCP service ports. Create a ConfigMap first; in its data, each key is a port HAProxy listens on, and the value is the Service (and port) to forward to.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-tcp
  namespace: mysql2
data:
  "3306": "mysql2/mysql-0:3306"
kubectl apply -f mysql-0.yaml
kubectl apply -f tcp-svc.yaml
Next, create the ingress controller:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress-192.168.0.1-30080
  namespace: mysql2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      tolerations:
      - key: app
        operator: Equal
        value: haproxy
        effect: NoSchedule
      serviceAccount: ingress-controller
      nodeSelector:
        kubernetes.io/hostname: 192.168.0.1
      containers:
      - args:
        - --tcp-services-configmap=$(POD_NAMESPACE)/mysql-tcp
        - --default-backend-service=$(POD_NAMESPACE)/mysql
        - --default-ssl-certificate=$(POD_NAMESPACE)/tls-secret
        - --ingress-class=ha-mysql
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: jcmoraisjr/haproxy-ingress
        name: haproxy-ingress
        ports:
        # corresponds to the port defined in the ConfigMap
        - containerPort: 3306
          hostPort: 3306
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 1936
          hostPort: 30081
          name: stat
          protocol: TCP
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: mysql2
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
kubectl apply -f ingress-controller.yaml
kubectl apply -f ingress-rbac.yaml -n mysql2
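To confirm the controller came up (my addition), check its pod by the run: haproxy-ingress label used in the Deployment:

# the controller pod should be Running and scheduled on node 192.168.0.1
kubectl get pods -n mysql2 -l run=haproxy-ingress -o wide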
Finally, create the Ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: ha-mysql
  name: ha-mysql
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: mysql-0
          servicePort: 3306
        path: /
Now the MySQL cluster can be reached from outside the cluster through HAProxy's IP and the mapped port.
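For example (a sketch: 192.168.0.1 is the node the controller is pinned to via nodeSelector, and 3306 is the hostPort mapped in the Deployment above), connect from a machine outside the cluster with a MySQL client:

# connect through HAProxy on the controller node
mysql -h 192.168.0.1 -P 3306 -uroot -p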
Below are the YAML files used above: