Implementation Overview
This environment implements automated build and release of microservices. The functional details are implemented across the major pieces of software below. There are many ways to do automated release and deployment; if anything is missing, please leave a comment.
| IP Address | Hostname | Services |
|---|---|---|
| 192.168.25.223 | k8s-master01 | Kubernetes master node + Jenkins |
| 192.168.25.225 | k8s-node01 | Kubernetes node |
| 192.168.25.226 | k8s-node02 | Kubernetes node |
| 192.168.25.227 | gitlab-nfs | GitLab, NFS, Git |
| 192.168.25.228 | harbor | Harbor, MySQL, Docker, Pinpoint |
Deployment commands

Single-master version:

```shell
ansible-playbook -i hosts single-master-deploy.yml -uroot -k
```

Multi-master version:

```shell
ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
```

If a particular stage of the installation fails, you can rerun just that stage. For example, to run only the add-ons deployment:

```shell
ansible-playbook -i hosts single-master-deploy.yml -uroot -k --tags addons
```
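If you are not sure which tags a playbook defines, `ansible-playbook` can list them without executing anything (a sketch, assuming the same inventory file as above):

```shell
# List all tags defined in the playbook; nothing is executed
ansible-playbook -i hosts single-master-deploy.yml --list-tags
```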
Example reference: https://github.com/ansible/ansible-examples
```shell
# Remove any older Docker packages
$ sudo yum remove docker docker-client docker-client-latest docker-common \
       docker-latest docker-latest-logrotate docker-logrotate docker-engine

# Install prerequisites and add the Docker CE repo
$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
$ sudo yum-config-manager --add-repo \
       https://download.docker.com/linux/centos/docker-ce.repo

# Install and start Docker
$ sudo yum install docker-ce docker-ce-cli containerd.io -y
$ sudo systemctl start docker && sudo systemctl enable docker

# Verify the installation
$ sudo docker run hello-world
```
```shell
docker run -d \
  --name gitlab \
  -p 8443:443 \
  -p 9999:80 \
  -p 9998:22 \
  -v $PWD/config:/etc/gitlab \
  -v $PWD/logs:/var/log/gitlab \
  -v $PWD/data:/var/opt/gitlab \
  -v /etc/localtime:/etc/localtime \
  passzhang/gitlab-ce-zh:latest
```
Access URL: http://IP:9999

On first visit you will be asked to set the administrator password; then log in with the default admin username root and the password you just set.

https://github.com/passzhang/simple-microservice
Code branches:

- dev1: delivered source code
- dev2: Dockerfiles added for building images
- dev3: K8s resource manifests
- dev4: microservice tracing added
- master: final release

Pull the master branch and push it to the private code repository:
```shell
git clone https://github.com/PassZhang/simple-microservice.git
cd simple-microservice
# Point the push URL at the local GitLab instance
vim /root/simple-microservice/.git/config
...
[remote "origin"]
    url = http://192.168.25.227:9999/root/simple-microservice.git
    fetch = +refs/heads/*:refs/remotes/origin/*
...
# After cloning, also update the database address in
# xxx-service/src/main/resources/application-fat.yml;
# for this test the database address is 192.168.25.228:3306.
# Only push once the database address has been updated.
cd simple-microservice
git config --global user.email "passzhang@example.com"
git config --global user.name "passzhang"
git add .
git commit -m 'all'
git push origin master
```
```shell
# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum install docker-ce -y
# systemctl start docker && systemctl enable docker
```
```shell
curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
```shell
# tar zxvf harbor-offline-installer-v1.9.1.tgz
# cd harbor
# vi harbor.yml
hostname: 192.168.25.228
http:
  port: 8088
# ./prepare
# ./install.sh --with-chartmuseum --with-clair
# docker-compose ps
```
The --with-chartmuseum flag enables Harbor's Helm chart storage feature.

Since Harbor is not configured with HTTPS, Docker must also be told to trust it as an insecure registry.
```shell
# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.25.228:8088"]
}
# systemctl restart docker
```

After configuring the registry, make sure the K8s master and all Docker nodes can reach it as well: daemon.json must be updated on each of those hosts too.
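As a quick sanity check on each host (a sketch; it assumes the daemon.json change above has been applied and restarted, and uses the default Harbor admin credentials from this setup):

```shell
# Confirm the Docker daemon picked up the insecure-registry setting
docker info | grep -A 2 'Insecure Registries'

# Confirm credentials work from this host against the private registry
docker login 192.168.25.228:8088 -u admin -p Harbor12345
```

If `docker login` fails with an HTTPS error, the daemon on that host has not been restarted with the updated daemon.json.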
```shell
# wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
# tar zxvf helm-v3.0.0-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/
```

```shell
# helm repo add stable http://mirror.azure.cn/kubernetes/charts
# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# helm repo list
```
```shell
# helm plugin install https://github.com/chartmuseum/helm-push
```
If it cannot be downloaded over the network, you can also extract the package bundled with the course materials:

```shell
# tar zxvf helm-push_0.7.1_linux_amd64.tar.gz
# mkdir -p /root/.local/share/helm/plugins/helm-push
# chmod +x bin/*
# mv bin plugin.yaml /root/.local/share/helm/plugins/helm-push
```
```shell
# helm repo add --username admin --password Harbor12345 myrepo http://192.168.25.228:8088/chartrepo/ms
```
```shell
# helm push ms-0.1.0.tgz --username=admin --password=Harbor12345 http://192.168.25.228:8088/chartrepo/ms
# helm install --username=admin --password=Harbor12345 --version 0.1.0 http://192.168.25.228:8088/chartrepo/library/ms
```
First prepare an NFS server to provide storage for K8s.

```shell
# yum install nfs-utils -y
# vi /etc/exports
/ifs/kubernetes *(rw,no_root_squash)
# mkdir -p /ifs/kubernetes
# systemctl start nfs
# systemctl enable nfs
```

Also install the nfs-utils package on every node; it is needed for mounting the shares.

Since K8s has no built-in dynamic provisioner for NFS, the nfs-client-provisioner plugin shown in the figure above must be installed first.

The configuration files are as follows:
```shell
[root@k8s-master1 nfs-storage-class]# tree
.
├── class.yaml
├── deployment.yaml
└── rbac.yaml

0 directories, 3 files
```
rbac.yaml
[root@k8s-master1 nfs-storage-class]# cat rbac.yaml

```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
class.yaml
[root@k8s-master1 nfs-storage-class]# cat class.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true"
```
deployment.yaml
[root@k8s-master1 nfs-storage-class]# cat deployment.yaml

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.25.227
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.25.227
            path: /ifs/kubernetes
# When deploying, remember to change the server address to your own NFS address.
```
```shell
# cd nfs-client
# vi deployment.yaml   # change the NFS address and shared directory to yours
# kubectl apply -f .
# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-df88f57df-bv8h7   1/1     Running   0          49m
```
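To confirm dynamic provisioning works end to end, a throwaway claim can be submitted against the new StorageClass (a sketch; the PVC name is hypothetical and used only for this check):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # hypothetical name, only for verifying the provisioner
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
```

After `kubectl apply -f`, the claim should reach the Bound state and a backing directory should appear under /ifs/kubernetes on the NFS server; delete the PVC when done.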
```shell
# yum install mariadb-server -y
# systemctl start mariadb.service
# mysqladmin -uroot password '123456'
```

Or create it with Docker:

```shell
docker run -d --name db -p 3306:3306 -v /opt/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7 --character-set-server=utf8
```

Finally, import the microservice databases.
```shell
[root@cephnode03 db]# pwd
/root/simple-microservice/db
[root@cephnode03 db]# ls
order.sql  product.sql  stock.sql
[root@cephnode03 db]# mysql -uroot -p123456 < order.sql
[root@cephnode03 db]# mysql -uroot -p123456 < product.sql
[root@cephnode03 db]# mysql -uroot -p123456 < stock.sql
# After importing, grant access so the services can connect remotely:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.25.%' IDENTIFIED BY '123456';
```
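To verify the grant took effect, the database can be queried from one of the K8s nodes (a sketch; it assumes the mysql client is installed on the node and uses the address and credentials from this setup):

```shell
# From a 192.168.25.x host, confirm remote access to the services' database
mysql -h192.168.25.228 -uroot -p123456 -e 'SHOW DATABASES;'
# The schemas imported from order.sql/product.sql/stock.sql should be listed
```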
Reference: https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes

Here we deploy Jenkins directly in Kubernetes. Storage must be prepared in advance; we use the NFS storage set up earlier, though other solutions such as Ceph would also work. Let's get started.
```shell
[root@k8s-master1 jenkins]# tree
.
├── deployment.yml
├── ingress.yml
├── rbac.yml
├── service-account.yml
└── service.yml

0 directories, 5 files
```
rbac.yml
[root@k8s-master1 jenkins]# cat rbac.yml

```yaml
---
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
# Create a Role named jenkins granting management of Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
# Bind the jenkins Role to the jenkins ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
```
service-account.yml
[root@k8s-master1 jenkins]# cat service-account.yml

```yaml
# In GKE need to get RBAC permissions first with
# kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
```
ingress.yml
[root@k8s-master1 jenkins]# cat ingress.yml

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  rules:
    - host: jenkins.test.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 80
```
service.yml
[root@k8s-master1 jenkins]# cat service.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30006
    - name: agent
      port: 50000
      protocol: TCP
```
deployment.yml
[root@k8s-master1 jenkins]# cat deployment.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  storageClassName: "managed-nfs-storage"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```
Login: browse to the domain configured in the ingress: http://jenkins.test.com

Changing the plugin source:

The default plugin source is hosted on servers abroad and is hard to reach from most networks, so switch to a domestic mirror:

```shell
cd jenkins_home/updates
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && \
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
```
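The first sed expression rewrites every plugin download URL in default.json to point at the Tsinghua mirror. Its effect can be seen on a sample line (the plugin path shown is hypothetical, purely for illustration):

```shell
# A sample plugin URL as it appears in default.json
url='http://updates.jenkins-ci.org/download/plugins/git/4.0.0/git.hpi'
echo "$url" | sed 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g'
# prints: https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/git/4.0.0/git.hpi
```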
Jenkins parameterized build flow chart

Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and continuous delivery pipelines in Jenkins.

Reference: https://jenkins.io/doc/book/pipeline/syntax/

In this environment we need a pipeline script, so let's first create a simple Jenkins pipeline script as a test.

Install the pipeline plugin: Jenkins home -> Manage Jenkins -> Manage Plugins -> Available -> filter for "pipeline", and install it.

Enter the following script in the pipeline job to test:
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}
```
The test result is as follows:

The log output:
```
Console Output
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/pipeline-test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] echo
Building
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Testing
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] echo
Deploying
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
```
SUCCESS in the output means the test passed.
We have now tested a pipeline script, but the Jenkins master has limited resources; running a large number of jobs could overwhelm it. Instead we adopt the Jenkins slave (agent) model, giving the master helpers: the master schedules jobs, while the actual builds run on the agents.

Traditional Jenkins master/slave architecture

Jenkins master/slave architecture on K8s

Kubernetes plugin: runs dynamic Jenkins agents in a Kubernetes cluster.

Plugin introduction: https://github.com/jenkinsci/kubernetes-plugin

We now need to connect Jenkins to Kubernetes so that Jenkins can reach the cluster and run commands in it automatically. This requires adding a Kubernetes cloud, as follows:

Jenkins home -> Manage Jenkins -> Configure System -> Cloud -> Add a new cloud -> Kubernetes

Configure the Kubernetes cloud. Since this Jenkins is itself a pod deployed inside Kubernetes, it can reach the API server through the in-cluster service DNS name, so enter https://kubernetes.default as the Kubernetes address, and don't forget to click "Test Connection" afterwards.

For the Jenkins address, likewise enter its in-cluster DNS name, http://jenkins.default. That completes the new Kubernetes cloud.

Files needed for the configuration:
```shell
[root@k8s-master1 jenkins-slave]# tree
.
├── Dockerfile      # builds the jenkins-slave image
├── helm            # helm binary: lets the agent pod install charts from the chart repo
├── jenkins-slave   # startup script required by the agent
├── kubectl         # kubectl binary: lets the agent pod create pods and check their status
├── settings.xml    # Maven settings used by the agent
└── slave.jar       # the Jenkins agent jar

0 directories, 6 files
```
The Dockerfile for the jenkins-slave image:
```dockerfile
FROM centos:7
LABEL maintainer passzhang

RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins

COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
COPY helm kubectl /usr/bin/

ENTRYPOINT ["jenkins-slave"]
```
Reference: https://github.com/jenkinsci/docker-jnlp-slave

Reference: https://plugins.jenkins.io/kubernetes

Push the jenkins-slave image to the Harbor registry:
```shell
[root@k8s-master1 jenkins-slave]# docker build -t jenkins-slave:jdk-1.8 .
docker tag jenkins-slave:jdk-1.8 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
docker login 192.168.25.228:8088                                # log in to the private registry
docker push 192.168.25.228:8088/library/jenkins-slave:jdk-1.8   # push the image to the registry
```
Once configured, run a pipeline to check that jenkins-slave can be invoked directly and that the agent works correctly.

Test pipeline script:
```groovy
pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}
```
Deployment screenshot:

Deployment steps:

Pull code -> compile -> unit test -> build image -> deploy to K8s with Helm -> test

Add the pipeline script:
```groovy
#!/usr/bin/env groovy
// Required plugins: Git Parameter / Git / Pipeline / Config File Provider / Kubernetes / Extended Choice Parameter

// Common
def registry = "192.168.25.228:8088"
// Project
def project = "ms"
def git_url = "http://192.168.25.227:9999/root/simple-microservice.git"
def gateway_domain_name = "gateway.test.com"
def portal_domain_name = "portal.test.com"
// Credentials
def image_pull_secret = "registry-pull-secret"
def harbor_registry_auth = "9d5822e8-b1a1-473d-a372-a59b20f9b721"
def git_auth = "2abc54af-dd98-4fa7-8ac0-8b5711a54c4a"
// ConfigFileProvider ID
def k8s_auth = "f1a38eba-4864-43df-87f7-1e8a523baa35"

pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: "${registry}/library/jenkins-slave:jdk-1.8"
    imagePullPolicy: Always
    volumeMounts:
    - name: docker-cmd
      mountPath: /usr/bin/docker
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: maven-cache
      mountPath: /root/.m2
  volumes:
  - name: docker-cmd
    hostPath:
      path: /usr/bin/docker
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: maven-cache
    hostPath:
      path: /tmp/m2
"""
        }
    }
    parameters {
        gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
        extendedChoice defaultValue: 'none', description: 'Microservices to release', \
            multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX', \
            value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
        choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
        choice (choices: ['1', '3', '5', '7', '9'], description: 'Replica count', name: 'ReplicaCount')
        choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: "${params.Branch}"]],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [], submoduleCfg: [],
                    userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                ])
            }
        }
        stage('Compile') {
            // Compile the selected services
            steps {
                sh """
                mvn clean package -Dmaven.test.skip=true
                """
            }
        }
        stage('Build images') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                    sh """
                    docker login -u ${username} -p '${password}' ${registry}
                    for service in \$(echo ${Service} |sed 's/,/ /g'); do
                        service_name=\${service%:*}
                        image_name=${registry}/${project}/\${service_name}:${BUILD_NUMBER}
                        cd \${service_name}
                        if ls |grep biz &>/dev/null; then
                            cd \${service_name}-biz
                        fi
                        docker build -t \${image_name} .
                        docker push \${image_name}
                        cd ${WORKSPACE}
                    done
                    """
                    configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                        sh """
                        # Create the image pull secret
                        kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig |true
                        # Add the private chart repository
                        helm repo add --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                        """
                    }
                }
            }
        }
        stage('Deploy to K8s with Helm') {
            steps {
                sh """
                common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"

                for service in \$(echo ${Service} |sed 's/,/ /g'); do
                    service_name=\${service%:*}
                    service_port=\${service#*:}
                    image=${registry}/${project}/\${service_name}
                    tag=${BUILD_NUMBER}
                    helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set service.targetPort=\${service_port} myrepo/${Template}"

                    # New deployment or upgrade?
                    if helm history \${service_name} \${common_args} &>/dev/null; then
                        action=upgrade
                    else
                        action=install
                    fi

                    # Enable ingress for the edge-facing services
                    if [ \${service_name} == "gateway-service" ]; then
                        helm \${action} \${helm_args} \
                            --set ingress.enabled=true \
                            --set ingress.host=${gateway_domain_name} \
                            \${common_args}
                    elif [ \${service_name} == "portal-service" ]; then
                        helm \${action} \${helm_args} \
                            --set ingress.enabled=true \
                            --set ingress.host=${portal_domain_name} \
                            \${common_args}
                    else
                        helm \${action} \${helm_args} \${common_args}
                    fi
                done

                # Check pod status
                sleep 10
                kubectl get pods \${common_args}
                """
            }
        }
    }
}
```
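The deploy stage splits each entry of the Service checkbox parameter into a service name and port using shell parameter expansion; in isolation the mechanism looks like this:

```shell
# Each entry in the Service parameter has the form "name:port"
service="gateway-service:9999"
service_name=${service%:*}   # strip the shortest ':...' suffix -> service name
service_port=${service#*:}   # strip the shortest '...:' prefix -> port
echo "$service_name $service_port"
# prints: gateway-service 9999
```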
The result of the run:

Click Build directly. The first couple of builds may fail because the parameter definitions have not yet been loaded; build once more so that all parameters are rendered, and the job will run successfully.

Release gateway-service and check the pod log output:
```
+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME                                  READY   STATUS    RESTARTS   AGE
eureka-0                              1/1     Running   0          3h11m
eureka-1                              1/1     Running   0          3h10m
eureka-2                              1/1     Running   0          3h9m
ms-gateway-service-66d695c486-9x9mc   0/1     Running   0          10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
# On success, the pod information is printed
```
Release the remaining services and check the result:
```
+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME                                  READY   STATUS    RESTARTS   AGE
eureka-0                              1/1     Running   0          3h14m
eureka-1                              1/1     Running   0          3h13m
eureka-2                              1/1     Running   0          3h12m
ms-gateway-service-66d695c486-9x9mc   1/1     Running   0          3m1s
ms-order-service-7465c47d79-lbxgd     0/1     Running   0          10s
ms-portal-service-7fd6c57955-jkgkk    0/1     Running   0          11s
ms-product-service-68dbf5b57-jwpv9    0/1     Running   0          10s
ms-stock-service-b8b9895d6-cb72b      0/1     Running   0          10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
```
Check Eureka:

All service modules are now registered in Eureka.

Open the front-end page:

Products are returned by the query, which shows the database connection works and the business logic runs correctly. Done!