Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm

Related posts:

Kubernetes Deployment (1): Architecture and Feature Overview
Kubernetes Deployment (2): System Environment Initialization
Kubernetes Deployment (3): Creating the CA Certificates
Kubernetes Deployment (4): ETCD Cluster Deployment
Kubernetes Deployment (5): Haproxy and Keepalived Deployment
Kubernetes Deployment (6): Master Node Deployment
Kubernetes Deployment (7): Node Deployment
Kubernetes Deployment (8): Flannel Network Deployment
Kubernetes Deployment (9): CoreDNS, Dashboard, and Ingress Deployment
Kubernetes Deployment (10): Storage with GlusterFS and Heketi
Kubernetes Deployment (11): Management with Helm and Rancher
Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm

 
 

Harbor Overview

Harbor's official GitHub: https://github.com/goharbor
Harbor is an enterprise-class registry server for storing and distributing Docker images. It extends the open-source Docker Distribution with the features enterprises typically need, such as security, identity, and management. Keeping the registry close to the build and run environments also improves image transfer efficiency. Harbor supports replicating images between registry instances and provides advanced security features such as user management, access control, and activity auditing.

Features

  • Cloud-native registry: Harbor supports both container images and Helm charts, so it can serve as the registry for cloud-native environments such as container runtimes and orchestration platforms.
  • Role-based access control: users and repositories are organized into "projects", and a user can be granted different permissions on the images under each project.
  • Policy-based image replication: images can be replicated (synchronized) between multiple registry instances based on policies with multiple filters (repository, tag, and label). Harbor automatically retries a replication if an error occurs. This suits load-balancing, high-availability, multi-datacenter, hybrid, and multi-cloud scenarios.
  • Vulnerability scanning: Harbor scans images regularly and warns users about vulnerabilities.
  • LDAP/AD support: Harbor integrates with existing enterprise LDAP/AD for user authentication and management, and supports importing LDAP groups and assigning them the appropriate project roles.
  • Image deletion and garbage collection: images can be deleted and their storage space reclaimed.
  • Notary: image authenticity can be ensured.
  • Graphical user portal: users can easily browse and search repositories and manage projects.
  • Auditing: all operations on repositories are tracked.
  • RESTful API: RESTful APIs cover most administrative operations and are easy to integrate with external systems.
  • Easy deployment: both online and offline installers are provided.

Prerequisites

  • Kubernetes cluster 1.10+
  • Helm 2.8.0+
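
A quick way to confirm both prerequisites from the machine where helm is run (a sketch; the exact output will differ per environment):

# Check client and cluster versions against the prerequisites above
kubectl version --short
helm version --short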

Deploying Harbor

1. Add DNS records.

Create A records for h.cnlinux.club and n.cnlinux.club pointing to my load-balancer IP 10.31.90.200; these hostnames will be used by the Ingress.
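
To confirm the records resolve before creating the Ingress, a quick check (a sketch; substitute your own DNS tooling):

# Both names should return the load-balancer address 10.31.90.200
dig +short h.cnlinux.club
dig +short n.cnlinux.club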

2. Download the Harbor chart

[root@node-01 harbor]# wget https://github.com/goharbor/harbor-helm/archive/1.0.0.tar.gz -O harbor-helm-v1.0.0.tar.gz
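
Only values.yaml is needed for the next step, so it can be pulled out of the archive without unpacking everything. A sketch, assuming the top-level directory inside the GitHub tarball is harbor-helm-1.0.0:

# Extract only values.yaml into the current directory
tar -zxf harbor-helm-v1.0.0.tar.gz --strip-components=1 harbor-helm-1.0.0/values.yaml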

3. Edit the configuration file

  • Extract the values.yaml file from harbor-helm-v1.0.0.tar.gz (see the extraction sketch in step 2) and place it in the same directory as harbor-helm-v1.0.0.tar.gz.
  • Edit values.yaml. I changed the following fields:

    Note: if a StorageClass already exists in the k8s cluster, you can use it directly by setting its name in the persistence.persistentVolumeClaim.XXX.storageClass fields, and the required PVCs will be created automatically. However, to avoid the extra management overhead of multiple PVCs, I created a single PVC before deploying and let all of Harbor's services share it. See the official documentation at https://github.com/goharbor/harbor-helm for what each field does.

    • expose.ingress.hosts.core
    • expose.ingress.hosts.notary
    • externalURL
    • persistence.persistentVolumeClaim.registry.existingClaim
    • persistence.persistentVolumeClaim.registry.subPath
    • persistence.persistentVolumeClaim.chartmuseum.existingClaim
    • persistence.persistentVolumeClaim.chartmuseum.subPath
    • persistence.persistentVolumeClaim.jobservice.existingClaim
    • persistence.persistentVolumeClaim.jobservice.subPath
    • persistence.persistentVolumeClaim.database.existingClaim
    • persistence.persistentVolumeClaim.database.subPath
    • persistence.persistentVolumeClaim.redis.existingClaim
    • persistence.persistentVolumeClaim.redis.subPath
expose:
  type: ingress
  tls:
    enabled: true
    secretName: ""
    notarySecretName: ""
    commonName: ""
  ingress:
    hosts:
      core: h.cnlinux.club
      notary: n.cnlinux.club
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  nodePort:
    name: harbor
    ports:
      http:
        port: 80
        nodePort: 30002
      https: 
        port: 443
        nodePort: 30003
      notary: 
        port: 4443
        nodePort: 30004
externalURL: https://h.cnlinux.club
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "registry"
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "jobservice"
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
  imageChartStorage:
    type: filesystem
    filesystem:
      rootdirectory: /storage
imagePullPolicy: IfNotPresent
logLevel: debug
harborAdminPassword: "Harbor12345"
secretKey: "not-a-secure-key"
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  jobLogger: file
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  httpProxy:
  httpsProxy:
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
database:
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    password: "changeit"
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}
redis:
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}
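
Before installing, it can be useful to render the chart locally with the edited values and skim the generated manifests; a sketch using the Helm 2 client's template command (nothing is created in the cluster):

helm template harbor-helm-v1.0.0.tar.gz -f values.yaml | less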

4. Create the storage volume

Because Harbor needs a database, and rescheduling of its pods could otherwise cause data loss, we store the data on a GlusterFS volume.

[root@node-01 harbor]# vim pvc-harbor.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
[root@node-01 harbor]# kubectl apply -f pvc-harbor.yaml
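
It is worth confirming that the PVC is bound before installing the chart, for example:

# STATUS should show Bound once heketi has provisioned the GlusterFS volume
kubectl get pvc pvc-harbor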

5. Install Harbor

[root@node-01 harbor]# helm install  --name harbor harbor-helm-v1.0.0.tar.gz -f values.yaml

If the installation fails, you can remove the release with helm del --purge harbor and install it again.
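
The usual Helm 2 commands can be used to watch the release come up, for example:

# List releases and show the resources created for the harbor release
helm ls
helm status harbor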

6. Demo

After a while you can see that all of Harbor's pods are running and the UI can be accessed. The default credentials are admin/Harbor12345; the default admin password can be changed via harborAdminPassword in values.yaml.

[root@node-01 ~]# kubectl get pod
NAME                                           READY     STATUS    RESTARTS   AGE
harbor-harbor-adminserver-7fffc7bf4d-vj845     1/1       Running   1          15d
harbor-harbor-chartmuseum-bdf64f899-brnww      1/1       Running   0          15d
harbor-harbor-clair-8457c45dd8-9rgq8           1/1       Running   1          15d
harbor-harbor-core-7fc454c6d8-b6kvs            1/1       Running   1          15d
harbor-harbor-database-0                       1/1       Running   0          15d
harbor-harbor-jobservice-7895949d6b-zbwkf      1/1       Running   1          15d
harbor-harbor-notary-server-57dd94bf56-txdkl   1/1       Running   0          15d
harbor-harbor-notary-signer-5d64c5bf8d-kppts   1/1       Running   0          15d
harbor-harbor-portal-648c56499f-g28rz          1/1       Running   0          15d
harbor-harbor-redis-0                          1/1       Running   0          15d
harbor-harbor-registry-5cd9c49489-r92ph        2/2       Running   0          15d
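
You can also verify that the chart created the Ingress rules for the two hosts configured earlier:

# Should list ingresses for h.cnlinux.club and n.cnlinux.club
kubectl get ingress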


  • Next, create a private project named test to use for testing.

  • Because the Harbor registry is served over HTTPS, the Harbor certificate must be added to Docker's configuration directory before docker pull or docker push; otherwise docker cannot log in to Harbor.
  • Open the test project and click "Registry Certificate" to download Harbor's CA certificate.
  • Create the target directory on every node (images may also be pushed from the masters later, so I created it on the master nodes as well):
for n in `seq -w 01 06`;do ssh node-$n "mkdir -p /etc/docker/certs.d/h.cnlinux.club";done
# Copy the downloaded Harbor CA certificate to /etc/docker/certs.d/h.cnlinux.club/ on every node
for n in `seq -w 01 06`;do scp ca.crt node-$n:/etc/docker/certs.d/h.cnlinux.club/;done
  • Log in to Harbor from a node; after a successful login the credentials are saved in .docker/config.json under the current user's home directory.
[root@node-06 ~]# docker login h.cnlinux.club
Username: admin
Password: 
Login Succeeded

[root@node-06 ~]# cat .docker/config.json 
{
        "auths": {
                "h.cnlinux.club": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                }
        }
}
  • Pull an nginx image from the official Docker registry, tag it, and push it to the Harbor registry; afterwards, as shown below, the nginx image appears under Harbor's test project.
    [root@node-06 ~]# docker pull nginx:latest
    [root@node-06 ~]# docker tag nginx:latest h.cnlinux.club/test/nginx:latest
    [root@node-06 ~]# docker push h.cnlinux.club/test/nginx:latest


Question: if my k8s cluster has many nodes, does every node have to log in before it can pull images from the Harbor registry? Wouldn't that be very tedious?

  • Kubernetes has a secret type, kubernetes.io/dockerconfigjson, designed to solve exactly this problem.
  • First, base64-encode the Docker login information:
[root@node-06 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
  • Create the secret (an equivalent kubectl one-liner is sketched after the output below):
    apiVersion: v1
    kind: Secret
    metadata:
      name: harbor-registry-secret
      namespace: default
    data:
      .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
    type: kubernetes.io/dockerconfigjson
    [root@node-01 ~]# kubectl create -f harbor-registry-secret.yaml
    secret/harbor-registry-secret created
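
Equivalently, kubectl can generate the same kubernetes.io/dockerconfigjson secret directly from the registry credentials, without hand-editing base64. A sketch using the default credentials from above (delete the YAML-created secret first if it already exists; older kubectl versions may also require --docker-email):

kubectl create secret docker-registry harbor-registry-secret \
  --docker-server=h.cnlinux.club \
  --docker-username=admin \
  --docker-password=Harbor12345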
  • Create an nginx demo that uses the nginx image from Harbor, and resolve nginx.cnlinux.club to the load balancer 10.31.90.200:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: h.cnlinux.club/test/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: harbor-registry-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: nginx
    protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP 

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: nginx.cnlinux.club
      http:
        paths:
          - path: 
            backend:
              serviceName: nginx
              servicePort: 80
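
Apply the manifest to create the Deployment, Service, and Ingress (the file name here is only an example):

kubectl apply -f nginx-demo.yaml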
  • After applying the manifest (see the sketch above), nginx is running on 3 nodes, which shows that the image was successfully pulled from Harbor.
    [root@node-01 ~]# kubectl get pod -o wide|grep nginx 
    deploy-nginx-647f9649f5-88mkt                  1/1     Running            0          2m41s   10.34.0.5      node-06   <none>           <none>
    deploy-nginx-647f9649f5-9z842                  1/1     Running            0          2m41s   10.40.0.5      node-04   <none>           <none>
    deploy-nginx-647f9649f5-w44ck                  1/1     Running            0          2m41s   10.46.0.6      node-05   <none>           <none>

    Finally, visit http://nginx.cnlinux.club. With that, everything is complete.
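
    From any machine that resolves nginx.cnlinux.club to the load balancer, a quick command-line check (a sketch):

    # Expect response headers for the default nginx welcome page via the ingress
    curl -I http://nginx.cnlinux.club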
    I will continue to publish the rest of the k8s-related documentation. If you find this useful, please follow and like; if you have any questions, leave me a comment below. Many thanks!
