Let's start by setting the configuration file aside for the moment:
apiVersion: apps/v1
kind: StatefulSet    # Pins the hostname; use this for stateful services.
# Caveat: a pod's hostname does not resolve until the pod is Running. That creates a
# chicken-and-egg problem: while sed-replacing hostnames at startup, the pod is not yet
# Running, so it can only resolve its own hostname, not its peers'. This is why the
# ZooKeeper config later in this post switches to IPs.
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper    # so the 3 generated pods are named zookeeper-0, zookeeper-1, zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:                 # required for a StatefulSet
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
        - name: volume-logs
          hostPath:
            path: /var/log/zookeeper
      containers:
        - name: zookeeper
          image: harbor.test.com/middleware/zookeeper:3.4.10
          imagePullPolicy: IfNotPresent
          livenessProbe:
            tcpSocket:
              port: 2181
            initialDelaySeconds: 30
            timeoutSeconds: 3
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 2
          ports:
            - containerPort: 2181
              protocol: TCP
            - containerPort: 2888
              protocol: TCP
            - containerPort: 3888
              protocol: TCP
          env:
            - name: SERVICE_NAME
              value: "zookeeper"
            - name: MY_POD_NAME    # expose a built-in k8s field; once the pod is created, `echo ${MY_POD_NAME}` inside it prints the hostname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: volume-logs
              mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper    # the cluster name: from any of the generated pods you can `ping zookeeper`.
                     # The address returned varies from ping to ping; `nslookup zookeeper`
                     # returns the pod IPs of all 3 pods, 3 records in total.
spec:
  ports:
    - port: 2181
  selector:
    app: zookeeper
  clusterIP: None    # this line is mandatory (headless service)
[root@host5 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE   NAME          READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
default     zookeeper-0   1/1     Running   0          12m   192.168.55.69   host3   <none>           <none>
default     zookeeper-1   1/1     Running   0          12m   192.168.31.93   host4   <none>           <none>
default     zookeeper-2   1/1     Running   0          12m   192.168.55.70   host3   <none>           <none>
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper
Address 1: 192.168.55.70 zookeeper-2.zookeeper.default.svc.cluster.local
Address 2: 192.168.55.69 zookeeper-0.zookeeper.default.svc.cluster.local
Address 3: 192.168.31.93 zookeeper-1.zookeeper.default.svc.cluster.local
bash-4.3# ping zookeeper-0.zookeeper
PING zookeeper-0.zookeeper (192.168.55.69): 56 data bytes
64 bytes from 192.168.55.69: seq=0 ttl=63 time=0.109 ms
64 bytes from 192.168.55.69: seq=1 ttl=63 time=0.212 ms
^C
--- zookeeper-0.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.160/0.212 ms
bash-4.3# ping zookeeper-1.zookeeper
PING zookeeper-1.zookeeper (192.168.31.93): 56 data bytes
64 bytes from 192.168.31.93: seq=0 ttl=62 time=0.535 ms
64 bytes from 192.168.31.93: seq=1 ttl=62 time=0.507 ms
64 bytes from 192.168.31.93: seq=2 ttl=62 time=0.587 ms
^C
--- zookeeper-1.zookeeper ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.507/0.543/0.587 ms
bash-4.3# ping zookeeper-2.zookeeper
PING zookeeper-2.zookeeper (192.168.55.70): 56 data bytes
64 bytes from 192.168.55.70: seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.55.70: seq=1 ttl=64 time=0.081 ms
^C
--- zookeeper-2.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.081 ms
Kubernetes provides these commonly used built-in variables:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: MY_POD_SERVICE_ACCOUNT
    valueFrom:
      fieldRef:
        fieldPath: spec.serviceAccountName

Here `spec.nodeName` is the node the pod is scheduled on (the host machine), and `status.podIP` is the pod's IP.
Now let's look at the configuration file:
[root@docker06 conf]# cat zoo.cfg | grep -v ^# | grep -v ^$
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=docker06
server.1=docker05:2888:3888
server.2=docker06:2888:3888
server.3=docker04:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
We need to change it into something like this:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=docker06
# The line above is the one that must be rewritten to this pod's own MY_POD_IP. We can
# mount the config file via a ConfigMap and then sed-replace that line inside the pod.
# The following 3 lines are fixed:
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
Refer to the following approach:
First mount the configuration files into the pod via a ConfigMap, for example fix-ip.sh:
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/var/lib/redis/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
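The sed line in fix-ip.sh can be exercised outside the cluster against a sample nodes.conf. In this sketch the file path, node IDs, and IP values are made-up demo data; in the real pod `POD_IP` comes from the Downward API:

```shell
# Simulate fix-ip.sh's IP rewrite on a sample Redis cluster nodes.conf.
POD_IP=10.244.1.7                 # demo value; injected via fieldRef in the pod
CLUSTER_CONFIG=/tmp/nodes.conf    # demo path; /var/lib/redis/nodes.conf in the pod
cat > ${CLUSTER_CONFIG} <<'EOF'
07c3aaa 10.244.9.9:6379@16379 myself,master - 0 0 1 connected 0-5460
a1b2ccc 10.244.2.3:6379@16379 master - 0 1 2 connected 5461-10922
EOF
# Same sed as in the ConfigMap: rewrite the first IP on the "myself" line only.
sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
cat ${CLUSTER_CONFIG}
```

Only the line containing `myself` (this node's own entry) is touched; the other cluster members' addresses are left alone, which is exactly what a restarted pod with a fresh IP needs.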
Then execute this script when the pod starts:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: 10.11.100.85/library/redis
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          # Run the script first, then start redis:
          command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 15
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 20
            periodSeconds: 3
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: conf
              mountPath: /etc/redis
              readOnly: false
            - name: data
              mountPath: /var/lib/redis
              readOnly: false
      volumes:
        - name: conf
          configMap:
            name: redis-cluster
            defaultMode: 0755
            # items:
            #   - key: redis.conf
            #     path: redis.conf
            #   - key: fix-ip.sh
            #     path: fix-ip.sh
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          name: redis-cluster
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 150Mi
Note: configuration files generated from a ConfigMap are mounted read-only, so they cannot be modified with sed in place. A workaround is to mount them into a temporary directory, copy them to the target location, and then sed the copy. This has a drawback of its own: if you later update the ConfigMap dynamically, only the files in the temporary directory change; the copies do not.
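The copy-then-edit workaround can be sketched as follows. The directory names here are illustrative: `/tmp/cm-mount` stands in for the real read-only ConfigMap mount point, and the IP is a placeholder:

```shell
# ConfigMap mounts are read-only, so `sed -i` fails on them directly.
# Workaround: copy from the (read-only) mount to a writable dir, then edit the copy.
MY_POD_IP=192.168.55.69                       # placeholder; from the Downward API in a pod
mkdir -p /tmp/cm-mount /tmp/conf
printf 'clientPortAddress=PODIP\n' > /tmp/cm-mount/zoo.cfg   # pretend ConfigMap content
cp /tmp/cm-mount/zoo.cfg /tmp/conf/zoo.cfg    # the copy is writable
sed -i 's/PODIP/'${MY_POD_IP}'/g' /tmp/conf/zoo.cfg
cat /tmp/conf/zoo.cfg
```

As the note above warns, a later ConfigMap update would change only the mounted original under `/tmp/cm-mount`, never the edited copy in `/tmp/conf`.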
The actual production configuration:
1. Rebuild the image
[root@host4 zookeeper]# ll
total 4
drwxr-xr-x 2 root root  45 May 24 15:48 conf
-rw-r--r-- 1 root root 143 May 23 06:19 Dockerfile
drwxr-xr-x 2 root root  20 May 24 15:48 scripts
[root@host4 zookeeper]# cd conf
[root@host4 conf]# ll
total 8
-rw-r--r-- 1 root root 1503 May 23 04:15 log4j.properties
-rw-r--r-- 1 root root  324 May 24 15:48 zoo.cfg
[root@host4 conf]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=PODIP   # an IP here, hostnames below; see the explanation earlier in this post
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
[root@host4 conf]# cd ../scripts/
[root@host4 scripts]# ll
total 4
-rwxr-xr-x 1 root root 177 May 24 15:48 sed.sh
[root@host4 scripts]# cat sed.sh
#!/bin/bash
MY_ID=`echo ${MY_POD_NAME} | awk -F'-' '{print $NF}'`
MY_ID=`expr ${MY_ID} + 1`
echo ${MY_ID} > /data/myid
sed -i 's/PODIP/'${MY_POD_IP}'/g' /conf/zoo.cfg
exec "$@"
[root@host4 scripts]# cd ..
[root@host4 zookeeper]# ls
conf  Dockerfile  scripts
[root@host4 zookeeper]# cat Dockerfile
FROM harbor.test.com/middleware/zookeeper:3.4.10
MAINTAINER rongruixue@163.com
ARG zookeeper_version=3.4.10
COPY conf /conf/
COPY scripts /
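The myid computation in sed.sh can be checked outside the cluster by faking the pod name. `MY_POD_NAME` is normally injected by the Downward API; it is hard-coded here for the demo:

```shell
# Reproduce sed.sh's myid derivation for a fake pod name.
MY_POD_NAME=zookeeper-2                                   # demo value; injected via fieldRef in the pod
MY_ID=`echo ${MY_POD_NAME} | awk -F'-' '{print $NF}'`     # StatefulSet ordinal: 2
MY_ID=`expr ${MY_ID} + 1`                                 # ZooKeeper myid is 1-based
echo ${MY_ID}                                             # prints 3
```

So zookeeper-0, zookeeper-1 and zookeeper-2 get myid 1, 2 and 3, matching the `server.1` … `server.3` lines in zoo.cfg.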
Running `docker build` on this produces the image harbor.test.com/middleware/zookeeper:v3.4.10.
Then we start the pods with the following YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  # podManagementPolicy: Parallel   # uncomment to start the 3 pods in parallel instead of in order 0, 1, 2
  serviceName: zookeeper
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
        - name: volume-logs
          hostPath:
            path: /var/log/zookeeper
        - name: volume-data
          hostPath:
            path: /opt/zookeeper/data
      terminationGracePeriodSeconds: 10
      containers:
        - name: zookeeper
          image: harbor.test.com/middleware/zookeeper:v3.4.10
          imagePullPolicy: Always
          ports:
            - containerPort: 2181
              protocol: TCP
            - containerPort: 2888
              protocol: TCP
            - containerPort: 3888
              protocol: TCP
          env:
            - name: SERVICE_NAME
              value: "zookeeper"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: volume-logs
              mountPath: /var/log/zookeeper
            # Do not mount /data to the host here: if two pods land on the same node
            # they would overwrite each other's data, including myid.
            #- name: volume-data
            #  mountPath: /data
          command:
            - /bin/bash
            - -c
            - -x
            - |
              /sed.sh    # writes the pod IP into zoo.cfg, then writes /data/myid
              sleep 10
              zkServer.sh start-foreground
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - port: 2181
  selector:
    app: zookeeper
  clusterIP: None