Preface
This article uses Stolon to build a PostgreSQL high-availability setup, mainly to provide a highly available database for Harbor. For setting up Harbor itself, see the author's earlier posts on deploying Harbor on Kubernetes without pitfalls and on Harbor registry replication. Redis HA and a Harbor HA setup will be covered later.
A quick comparison of several PostgreSQL HA solutions:
(quoted from https://studygolang.com/articles/19002?fr=sidebar)
- First, repmgr's algorithm has obvious flaws and is not a mainstream distributed algorithm, so it is ruled out right away;
- Stolon and Patroni are more cloud native than Crunchy; the latter is built on pgPool.
- Crunchy and Patroni have more users than Stolon, and they provide an Operator for later management and scaling.
Based on the simple comparison above, Stolon was chosen here; the author of the quoted article chose Patroni, and in practice the difference does not feel significant.
Stolon (https://github.com/sorintlab/stolon) is made up of 3 components:

- keeper: manages a PostgreSQL instance, converging to the cluster view computed by the leader sentinel.
- sentinel: discovers and monitors the keepers and computes the optimal cluster view (i.e. elects the master).
- proxy: the client's access point. It enforces connections to the correct PostgreSQL master and forcibly closes connections to non-elected masters.
Stolon uses etcd or consul as the main cluster state store; it can also use the Kubernetes API directly (via configmaps), which is the store backend used in the manifests below.
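For in-cluster clients such as Harbor, the only thing they need to know is the proxy Service. A minimal sketch, assuming the stolon-proxy-service defined in the manifest later in this article, deployed in the default namespace:

# run from any pod inside the cluster that has the psql client installed
psql "host=stolon-proxy-service.default.svc.cluster.local port=5432 user=postgres dbname=postgres"

Harbor's external database host and port would then simply point at this Service name and port 5432.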
git clone https://github.com/sorintlab/stolon.git
cd XXX/stolon/examples/kubernetes
If you are interested, you can also follow the official guide: https://github.com/sorintlab/stolon/blob/master/examples/kubernetes/README.md
The following places in the yaml deserve attention and should be adjusted to your environment:

The PostgreSQL superuser name in the keeper StatefulSet:

- name: STKEEPER_PG_SU_USERNAME
  value: "postgres"

The volumeClaimTemplates (this example uses a StorageClass named nfs; change it to one that exists in your cluster):

volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: "512Mi"
    storageClassName: nfs

The Secret that holds the superuser password:

apiVersion: v1
kind: Secret
metadata:
  name: stolon
type: Opaque
data:
  password: cGFzc3dvcmQx
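The password in the Secret is base64-encoded; cGFzc3dvcmQx decodes to password1. To use your own password, encode it first and paste the output into data.password:

echo -n 'password1' | base64      # -> cGFzc3dvcmQx
echo -n 'MyOwnPassword' | base64  # use your own value here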
Below is the complete stolon manifest put together by the author; it can be modified and used directly.
# This is an example and generic rbac role definition for stolon. It could be
# fine tuned and split per component.
# The required permission per component should be:
# keeper/proxy/sentinel: update their own pod annotations
# sentinel/stolonctl: get, create, update configmaps
# sentinel/stolonctl: list components pods
# sentinel/stolonctl: get components pods annotations
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: stolon
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  - events
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: stolon
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: stolon
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-sentinel
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-sentinel
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-sentinel
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          exec gosu stolon stolon-sentinel
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STSENTINEL_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STSENTINEL_STORE_BACKEND
          value: "kubernetes"
        - name: STSENTINEL_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STSENTINEL_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        ## Uncomment this to enable debug logs
        #- name: STSENTINEL_DEBUG
        #  value: "true"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Secret
metadata:
  name: stolon
type: Opaque
data:
  password: cGFzc3dvcmQx
---
# PetSet was renamed to StatefulSet in k8s 1.5
# apiVersion: apps/v1alpha1
# kind: PetSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: stolon-keeper
spec:
  serviceName: "stolon-keeper"
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-keeper
        stolon-cluster: kube-stolon
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: stolon-keeper
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          # Generate our keeper uid using the pod index
          IFS='-' read -ra ADDR <<< "$(hostname)"
          export STKEEPER_UID="keeper${ADDR[-1]}"
          export POD_IP=$(hostname -i)
          export STKEEPER_PG_LISTEN_ADDRESS=$POD_IP
          export STOLON_DATA=/stolon-data
          chown stolon:stolon $STOLON_DATA
          exec gosu stolon stolon-keeper --data-dir $STOLON_DATA
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STKEEPER_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STKEEPER_STORE_BACKEND
          value: "kubernetes"
        - name: STKEEPER_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STKEEPER_PG_REPL_USERNAME
          value: "repluser"
        # Or use a password file like in the below superuser password
        - name: STKEEPER_PG_REPL_PASSWORD
          value: "replpassword"
        - name: STKEEPER_PG_SU_USERNAME
          value: "postgres"
        - name: STKEEPER_PG_SU_PASSWORDFILE
          value: "/etc/secrets/stolon/password"
        - name: STKEEPER_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        # Uncomment this to enable debug logs
        #- name: STKEEPER_DEBUG
        #  value: "true"
        ports:
        - containerPort: 5432
        - containerPort: 8080
        volumeMounts:
        - mountPath: /stolon-data
          name: data
        - mountPath: /etc/secrets/stolon
          name: stolon
      volumes:
      - name: stolon
        secret:
          secretName: stolon
  # Define your own volumeClaimTemplate. This example uses dynamic PV provisioning
  # with a storage class named "standard" (so it will work by default with minikube).
  # In production you should use your own defined storage-class and configure your
  # persistent volumes (statically or dynamically using a provisioner, see related k8s doc).
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: "512Mi"
      storageClassName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: stolon-proxy-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    component: stolon-proxy
    stolon-cluster: kube-stolon
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-proxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-proxy
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-proxy
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          exec gosu stolon stolon-proxy
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STPROXY_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STPROXY_STORE_BACKEND
          value: "kubernetes"
        - name: STPROXY_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STPROXY_LISTEN_ADDRESS
          value: "0.0.0.0"
        - name: STPROXY_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        ## Uncomment this to enable debug logs
        #- name: STPROXY_DEBUG
        #  value: "true"
        ports:
        - containerPort: 5432
        - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 10
          timeoutSeconds: 5
kubectl apply -f stolon.yaml
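Before initializing the cluster it is worth checking that the sentinel, keeper and proxy pods are all running and that the proxy Service exists (labels and names as defined in the manifest above):

kubectl get pods -l stolon-cluster=kube-stolon
kubectl get svc stolon-proxy-service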
Initialize the cluster (this step initializes the stolon cluster data in the store; the official explanation is quoted below).
All the stolon components wait for an existing clusterdata entry in the store. So the first time you have to initialize a new cluster. For more details see the cluster initialization doc. You can do this step at every moment, now or after having started the stolon components.
You can execute stolonctl in different ways:
- as a one shot command executed inside a temporary pod:
kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
- from a machine that can access the store backend:
stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
- later from one of the pods running the stolon components.
Here the cluster is initialized with the one-shot pod approach:

kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
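After init you can verify the cluster state with stolonctl status, using the same one-shot pod pattern (a sketch; only the subcommand changes):

kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap status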
To tear the cluster down again (including the keepers' data), delete the manifests and the keeper PVCs:

kubectl delete -f stolon.yaml
kubectl delete pvc data-stolon-keeper-0 data-stolon-keeper-1
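The keeper PVCs follow the data-<statefulset-name>-<ordinal> naming pattern; if you are unsure which ones exist (for example after scaling the keepers), list them first:

kubectl get pvc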
Connect to the master and create a test table.
psql --host <node-ip> --port 30543 postgres -U stolon -W   # <node-ip>: the address of any cluster node
postgres=# create table test (id int primary key not null,
value text not null);
CREATE TABLE
postgres=# insert into test values (1, 'value1');
INSERT 0 1
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
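The NodePort 30543 assumes the proxy has been exposed outside the cluster, which the manifest above does not do (stolon-proxy-service is a plain ClusterIP Service). Without a NodePort, a port-forward works just as well; a sketch, assuming a reasonably recent kubectl:

kubectl port-forward svc/stolon-proxy-service 5432:5432    # in one terminal
psql --host 127.0.0.1 --port 5432 postgres -U postgres -W  # in another terminal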
You can also exec into the proxy pod and run PostgreSQL commands there:
kubectl exec -ti stolon-proxy-5977cdbcfc-csnkq bash
# log in with psql
psql --host localhost --port 5432 postgres -U postgres
\l          # list all databases
\c dbname   # switch to a database
insert into test values (1, 'value1');
select * from test;
\d          # list all tables in the current database
\q          # quit psql
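For a single query you do not need an interactive session; psql -c runs one command and exits (reusing the proxy pod name from above):

kubectl exec -ti stolon-proxy-5977cdbcfc-csnkq -- psql --host localhost --port 5432 postgres -U postgres -c 'select * from test;'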
Connect to the slave and check the data, to confirm that the read request is actually handled by the slave.
psql --host <node-ip> --port 30544 postgres -U stolon -W
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
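To be sure the answer really comes from a standby rather than the master, you can also ask PostgreSQL directly; pg_is_in_recovery() returns t on a standby and f on the master (a sketch against the same port 30544):

psql --host <node-ip> --port 30544 postgres -U stolon -W -c 'select pg_is_in_recovery();'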
This scenario follows the StatefulSet example from the official repository. In short, to simulate the master going down, we first delete the master's StatefulSet (without cascading) and then delete the master's pod.
kubectl delete statefulset stolon-keeper --cascade=false
kubectl delete pod stolon-keeper-0
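To watch the election as it happens, tail the sentinel logs; the messages below show up in whichever sentinel instance currently holds leadership (the pod name here is hypothetical, look yours up first):

kubectl get pods -l component=stolon-sentinel
kubectl logs -f stolon-sentinel-<pod-suffix>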
Then, in the sentinel log, we can see that a new master has been elected:
no keeper info available db=cb96f42d keeper=keeper0
no keeper info available db=cb96f42d keeper=keeper0
master db is failed db=cb96f42d keeper=keeper0
trying to find a standby to replace failed master
electing db as the new master db=087ce88a keeper=keeper1
Now, if we repeat the previous query in those two terminals, we see the following output.
postgres=# select * from test;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
The Kubernetes Service drops the unavailable pod from its endpoints and routes requests to the available ones, so the new read connection is routed to a healthy pod.
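To bring the failed keeper back as a standby after this experiment, re-applying the manifest should be enough: the StatefulSet is recreated, stolon-keeper-0 comes back with its existing PVC and re-syncs from the new master (a sketch of the expected recovery, not a step from the original article):

kubectl apply -f stolon.yaml
kubectl get pods -l component=stolon-keeper -w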
Another good way to test cluster resilience is chaoskube. Chaoskube is a small service that periodically kills random pods in the cluster. It can also be deployed with a Helm chart.
helm install --set labels="release=factual-crocodile, component!=factual-crocodile-etcd" --set interval=5m stable/chaoskube
This command runs chaoskube, which deletes one pod every 5 minutes. It selects pods with the label release=factual-crocodile, but ignores the etcd pods.
References:
http://www.javashuo.com/article/p-xcdhdjuv-eu.html
https://github.com/sorintlab/stolon/tree/master/examples/kubernetes
https://studygolang.com/articles/19002?fr=sidebar