https://jimmysong.io/kubernetes-handbook/cloud-native/play-with-kubernetes.html (Deploying a Kubernetes cluster on CentOS)
This series of documents walks through every step of deploying a Kubernetes cluster from binaries, rather than using an automated tool such as kubeadm, and enables TLS authentication for the whole cluster. The procedure applies to bare-metal, on-premise, and public-cloud environments. If you just want to quickly bring up a Kubernetes cluster in local VMs on your own machine, see the local distributed development environment guide (using Vagrant and VirtualBox, http://192.168.66.102/k8s/k8s-doc/blob/master/develop/using-vagrant-and-virtualbox-for-development.md). During the deployment, the startup parameters of each component are listed in detail, configuration files are provided, and their meanings and the problems you may run into are explained. After completing the deployment you will understand how the system components interact, and will be able to resolve real-world problems quickly. This document is therefore aimed at readers who already have some Kubernetes background and want to learn the system's configuration and operating principles by deploying it step by step. Note: this document does not cover installing Docker or a private image registry. The images used in the installation come from Google Cloud Platform; for the convenience of users in mainland China I have cloned and uploaded them to the Tenxcloud image marketplace for free download. For the latest official images, visit the Google Cloud Platform Container Registry.
•OS: CentOS Linux release 7.3.1611 (Core), kernel 3.10.0-514.16.1.el7.x86_64 •Kubernetes 1.9.0+ (the minimum required version is 1.6) •Docker 1.12.5 (installed with yum) •Etcd 3.1.5 •Flannel 0.7.1, vxlan or host-gw networking •TLS-authenticated communication (all components: etcd, the Kubernetes master, and the nodes) •RBAC authorization •kubelet TLS bootstrapping •Cluster add-ons: kubedns, dashboard, heapster (influxdb, grafana), EFK (elasticsearch, fluentd, kibana) •Private Docker registry Harbor (deploy it yourself; Harbor ships an offline installation package that can be started directly with docker-compose)
In the following steps we deploy a three-node Kubernetes 1.9.0 cluster on three CentOS physical machines. The roles are assigned as follows: Image registry: 192.168.55.33 (harbor: https://www.cnblogs.com/jicki/p/5737369.html) Master: 192.168.55.36 Nodes: 192.168.55.36, 192.168.55.37, 192.168.55.38 Note: the host 192.168.55.36 serves as both master and node. All certificate generation and all kubectl commands are run on this node. Once a node has joined the Kubernetes cluster, you no longer need to log in to it.
1. Create TLS certificates and keys 2. Create kubeconfig files 3. Create a highly available etcd cluster 4. Install the kubectl command-line tool 5. Deploy the master node 6. Install the flannel network plugin 7. Deploy the node nodes 8. Install the kubedns add-on 9. Install the dashboard add-on 10. Install the heapster add-on 11. Install the EFK add-on
The generated CA certificates and key files are:
•ca-key.pem
•ca.pem
•kubernetes-key.pem
•kubernetes.pem
•kube-proxy.pem
•kube-proxy-key.pem
•admin.pem
•admin-key.pem
The components that use these certificates:
•etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
•kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
•kubelet: uses ca.pem;
•kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
•kubectl: uses ca.pem, admin-key.pem, admin.pem;
•kube-controller-manager: uses ca-key.pem, ca.pem
注意:如下操做都在 master 节点即 192.168.55.36 这台主机上执行,证书只须要建立一次便可,之后在向集群中添加新节点时只要将 /etc/kubernetes/ 目录下的证书拷贝到新节点上便可。nginx
Method 1: install directly from the binary release packages
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
Method 2: install with the go command
Go 1.7.5 is installed on our system, so the following is a quicker way to install:
$ go get -u github.com/cloudflare/cfssl/cmd/...
$ echo $GOPATH
/usr/local
$ ls /usr/local/bin/cfssl*
cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan
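Before generating any certificates, a quick sanity check (a minimal sketch, not part of the original steps) confirms the cfssl tools are on the PATH:
------------------------------------------------------------
# verify the binaries are installed and executable
which cfssl cfssljson cfssl-certinfo
cfssl version
------------------------------------------------------------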
1.2.1 Create the CA configuration file
-----------------------------------------------------------------------
mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# create the following ca-config.json file, following the format of config.json
# the expiry is set to 87600h
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
-------------------------------------------------------------------------
Field notes
• ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenarios, and other parameters; a specific profile is referenced later when signing certificates;
• signing: indicates the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
• server auth: indicates a client can use this CA to verify certificates presented by servers;
• client auth: indicates a server can use this CA to verify certificates presented by clients;
1.2.2 Create the CA certificate signing request
Create the ca-csr.json file with the following content:
-------------------------------------------------------------------------
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
-------------------------------------------------------------------------
Field notes
•"CN": Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify whether a website is legitimate;
•"O": Organization; kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to;
1.2.3 Generate the CA certificate and private key
-------------------------------------------------------------------------
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
-------------------------------------------------------------------------
1.3.1 Create the kubernetes certificate signing request file kubernetes-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.55.33",
    "192.168.55.36",
    "192.168.55.37",
    "192.168.55.38",
    "172.16.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
-------------------------------------------------------------------------
Field notes
•If the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Because this certificate is later used by both the etcd cluster and the Kubernetes master, the list above includes the etcd cluster's and the Kubernetes master's host IPs as well as the Kubernetes service IP (usually the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.254.0.1).
•This is a minimal Kubernetes installation: one private image registry and a three-node cluster. The physical node IPs above can also be replaced with hostnames.
Note: leaving the hosts field empty causes problems later on; be sure to fill it in.
1.3.2 Generate the kubernetes certificate and private key
-------------------------------------------------------------------------
Method 1:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
Method 2:
Alternatively, specify the relevant parameters directly on the command line:
echo '{"CN":"kubernetes","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname="127.0.0.1,172.20.0.112,172.20.0.113,172.20.0.114,172.20.0.115,kubernetes,kubernetes.default" - | cfssljson -bare kubernetes
-------------------------------------------------------------------------
1.4.1 Create the admin certificate signing request file admin-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
-------------------------------------------------------------------------
Field notes
•kube-apiserver later uses RBAC to authorize client requests (e.g. from kubelet, kube-proxy, Pods);
•kube-apiserver predefines some RoleBindings used by RBAC; for example cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
•O sets the certificate's Group to system:masters; when kubelet accesses kube-apiserver with this certificate, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;
1.4.2 Note:
This admin certificate is later used to generate the administrator's kubeconfig file. We now generally recommend using RBAC for role-based access control in Kubernetes; Kubernetes uses the certificate's CN field as the User and the O field as the Group (see the X509 Client Certs section of "Users and authentication/authorization in Kubernetes"). After the cluster is built, you can run kubectl get clusterrolebinding cluster-admin -o yaml and see that the subjects of the clusterrolebinding cluster-admin have kind Group and name system:masters, and the roleRef object is the ClusterRole cluster-admin. In other words, any user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why we have full cluster administration rights when using kubectl. You can verify this with kubectl get clusterrolebinding cluster-admin -o yaml.
-------------------------------------------------------------------------
$ kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "52"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: e61b97b2-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
-------------------------------------------------------------------------
1.4.3 Generate the admin certificate and private key:
-------------------------------------------------------------------------
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
-------------------------------------------------------------------------
1.5.1 Create the kube-proxy certificate signing request file kube-proxy-csr.json:
-------------------------------------------------------------------------
cd /root/ssl/
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
-------------------------------------------------------------------------
Field notes:
•CN specifies that the certificate's User is system:kube-proxy;
•the kube-apiserver predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;
1.5.2 Generate the kube-proxy client certificate and private key
-------------------------------------------------------------------------
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
-------------------------------------------------------------------------
Using the openssl command
$ openssl x509 -noout -text -in kubernetes.pem
...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=Kubernetes
        Validity
            Not Before: Apr  5 05:36:00 2017 GMT
            Not After : Apr  5 05:36:00 2018 GMT
        Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
...
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                DD:52:04:43:10:13:A9:29:24:17:3A:0E:D7:14:DB:36:F8:6C:E0:E0
            X509v3 Authority Key Identifier:
                keyid:44:04:3B:60:BD:69:78:14:68:AF:A0:41:13:F6:17:07:13:63:58:CD
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:172.20.0.112, IP Address:172.20.0.113, IP Address:172.20.0.114, IP Address:172.20.0.115, IP Address:10.254.0.1
...
•Confirm that the Issuer field matches ca-csr.json;
•Confirm that the Subject field matches kubernetes-csr.json;
•Confirm that the X509v3 Subject Alternative Name field matches kubernetes-csr.json;
•Confirm that the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json;
Using the cfssl-certinfo command
$ cfssl-certinfo -cert kubernetes.pem
...
{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "kubernetes" ]
  },
  "issuer": {
    "common_name": "Kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "Kubernetes" ]
  },
  "serial_number": "174360492872423263473151971632292895707129022309",
  "sans": [
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "10.64.3.7",
    "10.254.0.1"
  ],
  "not_before": "2017-04-05T05:36:00Z",
  "not_after": "2018-04-05T05:36:00Z",
  "sigalg": "SHA256WithRSA",
...
Copy the generated certificates and keys (the files with the .pem suffix) to the /etc/kubernetes/ssl directory on all machines for later use;
-------------------------------------------------------------------------
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
-------------------------------------------------------------------------
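Since later steps expect these certificates on every machine, a small sketch of distributing them to the other two nodes may help; it assumes root SSH access from the master to 192.168.55.37 and 192.168.55.38:
-------------------------------------------------------------------------
# run on the master after generating the certificates
for node in 192.168.55.37 192.168.55.38; do
  ssh root@${node} "mkdir -p /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/*.pem root@${node}:/etc/kubernetes/ssl/
done
-------------------------------------------------------------------------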
Perform the following operations on the master node
mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /opt/k8s/bin/
------------------------------------------------------------
export KUBE_APISERVER="https://192.168.55.36:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# set the default context
kubectl config use-context kubernetes
------------------------------------------------------------
Notes:
•The admin.pem certificate's O field is system:masters; the kube-apiserver predefined RoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call the kube-apiserver APIs;
•The generated kubeconfig is saved to the ~/.kube/config file;
Note: the ~/.kube/config file carries the highest privileges on this cluster; keep it safe.
The token can be an arbitrary string of 128 bits, generated with a cryptographically secure random number generator.
------------------------------------------------------------
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/
------------------------------------------------------------
Note: before moving on, check the token.csv file and make sure the ${BOOTSTRAP_TOKEN} environment variable has been replaced by its actual value.
BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If you regenerate BOOTSTRAP_TOKEN later, you must:
1. Update token.csv and distribute it to the /etc/kubernetes/ directory on all machines (master and nodes); distributing it to the nodes is not strictly required; [step 2.3]
2. Regenerate the bootstrap.kubeconfig file and distribute it to the /etc/kubernetes/ directory on all node machines; [step 2.4]
3. Restart the kube-apiserver and kubelet processes;
4. Re-approve the kubelet CSR requests. [step 6.3.7]
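A quick way to confirm the token was expanded correctly (a sketch; your token value will of course differ):
------------------------------------------------------------
cat /etc/kubernetes/token.csv
# expected: a single line of the form
# <32-hex-characters>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
------------------------------------------------------------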
cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.55.36:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
------------------------------------------------------------
Notes:
•When --embed-certs is true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file;
•No key or certificate is specified when setting the client authentication parameters; they are generated automatically by kube-apiserver later;
------------------------------------------------------------
export KUBE_APISERVER="https://192.168.55.36:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
------------------------------------------------------------
Notes:
•--embed-certs is true for both the cluster and the client parameters, so the contents of the certificate-authority, client-certificate, and client-key files are embedded into the generated kube-proxy.kubeconfig file;
•The CN of the kube-proxy.pem certificate is system:kube-proxy; the kube-apiserver predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;
Distribute the two kubeconfig files to the /etc/kubernetes/ directory on all Node machines
------------------------------------------------------------
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
------------------------------------------------------------
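The cp above only places the files locally; copying them to the other node machines can be done with a small sketch like the following, assuming root SSH access to the nodes:
------------------------------------------------------------
for node in 192.168.55.37 192.168.55.38; do
  scp /etc/kubernetes/{bootstrap.kubeconfig,kube-proxy.kubeconfig} root@${node}:/etc/kubernetes/
done
------------------------------------------------------------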
-------------------------------------------------------------
3 nodes:
192.168.55.36
192.168.55.37
192.168.55.38
-------------------------------------------------------------
-------------------------------------------------------------
TLS certificates are required for encrypted communication within the etcd cluster; here we reuse the kubernetes certificates created earlier
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
-------------------------------------------------------------
Note: •the hosts field of the kubernetes certificate includes the IPs of the three machines above; otherwise later certificate validation will fail;
-------------------------------------------------------------
Method 1:
mkdir -p /opt/etcd/bin
wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /opt/etcd/bin/
Method 2:
yum install etcd
If you install with yum, the etcd binary is placed in /usr/bin by default; remember to change the ExecStart path in the etcd.service file below to /usr/bin/etcd.
-------------------------------------------------------------
Create the file etcd.service under /etc/systemd/system/ with the content below. Replace the IP addresses with your own etcd cluster's host IPs.
vim /etc/systemd/system/etcd.service
-------------------------------------------------------------
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.55.36:2380,infra2=https://192.168.55.37:2380,infra3=https://192.168.55.38:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
Notes:
•The etcd working directory and data directory are both /var/lib/etcd; create this directory before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
•To secure communication, specify etcd's certificate and key (cert-file and key-file), the certificate/key and CA certificate for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate for clients (trusted-ca-file);
•The hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain the IPs of all etcd nodes, otherwise certificate validation fails;
•When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
-------------------------------------------------------------
# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.55.36:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.55.36:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.55.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.55.36:2379"
-------------------------------------------------------------
Notes: this is the configuration for node 192.168.55.36; for the other two etcd nodes, just change the IP addresses above to the corresponding node's address and set ETCD_NAME to infra2 or infra3 accordingly.
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
-------------------------------------------------------------
Notes: repeat the steps above on all Kubernetes master and node machines until the etcd service is running on every machine.
Note: if the logs show connection errors, confirm that ports 2379 and 2380 are open in the firewall on every node.
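If firewalld is running on the nodes, a sketch for opening the two etcd ports (adjust to your own firewall setup):
-------------------------------------------------------------
firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
firewall-cmd --reload
firewall-cmd --list-ports
-------------------------------------------------------------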
Run the following command on any Kubernetes master machine:
-------------------------------------------------------------
$ etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
2017-04-11 15:17:09.082250 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-04-11 15:17:09.083681 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 9a2ec640d25672e5 is healthy: got healthy result from https://172.20.0.115:2379
member bc6f27ae3be34308 is healthy: got healthy result from https://172.20.0.114:2379
member e5c92ea26c4edba0 is healthy: got healthy result from https://172.20.0.113:2379
cluster is healthy
-------------------------------------------------------------
Notes: a final line of "cluster is healthy" means the cluster is working normally.
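Besides cluster-health, listing the members is another quick check that all three nodes joined (same TLS flags as above):
-------------------------------------------------------------
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list
# should print infra1, infra2 and infra3 with their peer and client URLs
-------------------------------------------------------------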
-------------------------------------------------------------
The Kubernetes master node runs the following components:
•kube-apiserver
•kube-scheduler
•kube-controller-manager
For now these three components must be deployed on the same machine.
•kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled;
•only one kube-scheduler and one kube-controller-manager process can be active at a time; if several run, a leader must be elected;
Notes:
•master high availability is not implemented yet;
•the flannel network plugin is not deployed on the master node; if you also want to reach ClusterIPs from the master, see the Flanneld configuration in the next section on deploying node nodes.
-------------------------------------------------------------
-------------------------------------------------------------
The .pem certificate files below were already created in the "Create TLS certificates and keys" step, and the token.csv file was created while building the kubeconfig files. Let's double-check them.
$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/token.csv
-------------------------------------------------------------
-------------------------------------------------------------
mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /opt/k8s/bin
-------------------------------------------------------------
4.4.1 Create the kube-apiserver service configuration file
Content of the service file /etc/systemd/system/kube-apiserver.service:
-------------------------------------------------------------
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.4.2 Content of the /etc/kubernetes/config file:
-------------------------------------------------------------
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"
KUBE_MASTER="--master=http://192.168.55.36:8080"
-------------------------------------------------------------
Notes: this configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
4.4.3 Content of the apiserver configuration file /etc/kubernetes/apiserver:
-------------------------------------------------------------
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=192.168.55.36 --bind-address=192.168.55.36 --insecure-bind-address=192.168.55.36"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.16.0.1/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
-------------------------------------------------------------
Notes:
•--experimental-bootstrap-token-auth: Bootstrap Token Authentication became a regular feature in 1.9, and the parameter was renamed --enable-bootstrap-token-auth;
•If you change --service-cluster-ip-range midway, you must delete the kubernetes service in the default namespace with kubectl delete service kubernetes; the system then recreates it with the new IP. Otherwise the apiserver log reports: the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate;
•--authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
•kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and talk to it over the insecure port;
•kubelet, kube-proxy, and kubectl are deployed on the other Node machines; when they reach kube-apiserver over the secure port they must first pass TLS certificate authentication and then RBAC authorization;
•kube-proxy and kubectl pass RBAC authorization through the User and Group embedded in their certificates;
•If the kubelet TLS bootstrap mechanism is used, you must not also set --kubelet-certificate-authority, --kubelet-client-certificate, or --kubelet-client-key, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
•--admission-control must include ServiceAccount;
•--bind-address must not be 127.0.0.1;
•runtime-config is set to rbac.authorization.k8s.io/v1beta1, the apiVersion enabled at runtime;
•--service-cluster-ip-range specifies the Service Cluster IP range; this range must not be routable;
•By default Kubernetes objects are stored under the etcd /registry prefix; this can be changed with --etcd-prefix;
•If you need an unauthenticated HTTP endpoint, add --insecure-port=8080 --insecure-bind-address=127.0.0.1. In production, never bind it to an address other than 127.0.0.1.
Kubernetes 1.9
•For a Kubernetes 1.9 cluster, make sure KUBE_API_ARGS contains --authorization-mode=Node,RBAC, i.e. add the Node authorization mode, otherwise nodes cannot register;
•--experimental-bootstrap-token-auth was deprecated in Kubernetes 1.9; the parameter is now --enable-bootstrap-token-auth.
4.4.4 Start kube-apiserver
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
-------------------------------------------------------------
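A quick sanity check after the service starts (a sketch; it assumes the admin certificates from step 1.4 are in /etc/kubernetes/ssl):
-------------------------------------------------------------
# hit the secure port directly with the admin client certificate
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://192.168.55.36:6443/healthz
# expected output: ok
-------------------------------------------------------------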
4.5.1 Create the kube-controller-manager service configuration file
File path /etc/systemd/system/kube-controller-manager.service
-------------------------------------------------------------
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.5.2 Configuration file /etc/kubernetes/controller-manager
-------------------------------------------------------------
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=172.16.0.1/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
-------------------------------------------------------------
Notes:
•--service-cluster-ip-range specifies the CIDR range of the cluster's Services; this network must not be routable between the Nodes, and the value must match the one passed to kube-apiserver;
•the certificate and key given by --cluster-signing-* are used to sign the certificates and keys created for TLS bootstrap;
•--root-ca-file is used to verify the kube-apiserver certificate; only when this parameter is set is the CA certificate placed into the ServiceAccount of Pod containers;
•--address must be 127.0.0.1, since kube-apiserver expects the scheduler and controller-manager to run on the same machine;
4.5.3 Start kube-controller-manager
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
-------------------------------------------------------------
4.5.4 Check the status of the components:
-------------------------------------------------------------
$ kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
controller-manager   Healthy     ok
etcd-2               Healthy     {"health": "true"}
etcd-0               Healthy     {"health": "true"}
etcd-1               Healthy     {"health": "true"}
-------------------------------------------------------------
4.6.1 Create the kube-scheduler service configuration file
File path /etc/systemd/system/kube-scheduler.service
-------------------------------------------------------------
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-------------------------------------------------------------
4.6.2 Configuration file /etc/kubernetes/scheduler
-------------------------------------------------------------
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
-------------------------------------------------------------
Notes:
•--address must be 127.0.0.1, because kube-apiserver expects the scheduler and controller-manager to run on the same machine;
4.6.3 Start kube-scheduler
-------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
-------------------------------------------------------------
-------------------------------------------------------------
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
-------------------------------------------------------------
-------------------------------------------------------------
Every node needs the network plugin so that all Pods join the same flat network; this section is the reference for installing the flannel network plugin.
We recommend installing flanneld directly with yum unless you need a specific version; the package installs flannel 0.7.1 by default.
Note: version 0.7.1 has problems.
The workaround is as follows:
Replace the flanneld executable with a newer version (see the sketch after this block).
Download: wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
Source: https://www.cnblogs.com/cs-zh/p/7879658.html
yum install -y flannel -------------------------------------------------------------
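A sketch of swapping in the v0.9.1 binary from the tarball above after the yum install (the target paths are the ones the CentOS flannel package and the service file below use; adjust if yours differ):
-------------------------------------------------------------
tar -xzf flannel-v0.9.1-linux-amd64.tar.gz
systemctl stop flanneld 2>/dev/null || true
cp -f flanneld /usr/bin/flanneld                              # binary invoked via flanneld-start
cp -f mk-docker-opts.sh /usr/libexec/flannel/mk-docker-opts.sh
-------------------------------------------------------------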
/etc/systemd/system/flanneld.service
-------------------------------------------------------------
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
-------------------------------------------------------------
-------------------------------------------------------------
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
-------------------------------------------------------------
Notes: on multi-NIC hosts (for example a vagrant environment), add the outbound interface to FLANNEL_OPTIONS, e.g. -iface=eth2.
Run the following commands to allocate the IP address range used by docker
-------------------------------------------------------------
etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"10.200.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
-------------------------------------------------------------
Notes: the config above uses the host-gw backend; to use vxlan instead, simply change host-gw to vxlan.
Note: as described in the network and cluster performance testing section, we ended up using host-gw mode. For the backends flannel supports, see https://github.com/coreos/flannel/blob/master/Documentation/backends.md.
------------------------------------------------------------- systemctl daemon-reload systemctl enable flanneld systemctl start flanneld systemctl status flanneld -------------------------------------------------------------
Querying etcd now shows the following content
-------------------------------------------------------------
etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets
Result:
/kube-centos/network/subnets/10.200.75.0-24

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config
Result:
{"Network":"10.200.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}

etcdctl --endpoints=https://192.168.55.36:2379,https://192.168.55.37:2379,https://192.168.55.38:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/10.200.75.0-24
Result:
{"PublicIP":"192.168.55.36","BackendType":"host-gw"}
-------------------------------------------------------------
Notes: if you can see the content above, flannel is installed and working.
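You can also check locally on each node that flanneld picked up a subnet and (in host-gw mode) installed routes to the other nodes' subnets; a sketch:
-------------------------------------------------------------
cat /run/flannel/subnet.env      # the subnet assigned to this node
ip route | grep 10.200.          # host-gw mode adds one route per remote node subnet
-------------------------------------------------------------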
6.1.1 The Kubernetes node runs the following components:
-------------------------------------------------------------
•Flanneld: see my earlier article on Flannel-based networking for Kubernetes; TLS was not configured there, so the TLS settings now need to be added to the service configuration file — follow the previous section on installing the flannel network plugin.
•Docker 1.12.5: installing docker is straightforward and not covered here, but pay attention to the docker configuration.
•kubelet: installed directly from the binary.
•kube-proxy: installed directly from the binary.
Note: flannel must be installed on every node; installing it on the master node is optional.
-------------------------------------------------------------
6.1.2 Overview of the steps
-------------------------------------------------------------
1. Confirm that the flannel network plugin configured in the previous step is started and running correctly
2. Install and configure docker, then start it
3. Install and configure kubelet and kube-proxy, then start them
4. Verify
-------------------------------------------------------------
6.1.3 Directories and files
Let's check again that, on all three nodes, the previous steps created the following certificates and configuration files.
-------------------------------------------------------------
$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kubelet kube-proxy.kubeconfig proxy scheduler ssl token.csv
-------------------------------------------------------------
6.2.1 Install docker
yum install -y docker
6.2.2 Configure docker
-------------------------------------------------------------
6.2.2.1 After flanneld is started with systemctl, ./mk-docker-opts.sh -i runs automatically and generates the following two environment-variable files:
/run/flannel/subnet.env
# content:
FLANNEL_NETWORK=10.200.0.0/16
FLANNEL_SUBNET=10.200.75.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
/run/flannel/docker
# content:
DOCKER_OPT_BIP="--bip=10.200.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1500"
DOCKER_NETWORK_OPTIONS=" --bip=10.200.75.1/24 --ip-masq=true --mtu=1500"
Docker reads these two environment-variable files as container startup parameters.
6.2.2.2 /etc/systemd/system/docker.service
# content (the two EnvironmentFile lines for /run/flannel/docker and /run/flannel/subnet.env were added):
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
6.2.2.2-1 Speed up docker image pulls
/etc/sysconfig/docker
Set OPTIONS to:
OPTIONS='--selinux-enabled=false --insecure-registry daocloud.io'
6.2.2.3 Start docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker
ps -ef | grep docker    # check the process; you should see parameters such as --bip=10.200.75.1/24
6.2.2.4 After restarting docker you also have to restart kubelet, and at that point another problem appears: kubelet fails to start with the error:
Mar 31 16:44:41 test-002.jimmysong.io kubelet[81047]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Resolution: this is caused by a cgroup-driver mismatch between kubelet and docker. In /etc/kubernetes/kubelet the --cgroup-driver parameter can be set to either "cgroupfs" or "systemd". Configure docker's service file /etc/systemd/system/docker.service so that ExecStart includes --exec-opt native.cgroupdriver=systemd.
-------------------------------------------------------------
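A quick way to confirm that docker and kubelet agree on the cgroup driver (a sketch):
-------------------------------------------------------------
docker info 2>/dev/null | grep -i "cgroup driver"
# should print: Cgroup Driver: systemd
grep cgroup-driver /etc/kubernetes/kubelet
# the --cgroup-driver value here must match the line above
-------------------------------------------------------------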
6.3.1 Grant kubelet permission to send requests to kube-apiserver (run on the master)
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:
-------------------------------------------------------------
cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
-------------------------------------------------------------
Notes: --user=kubelet-bootstrap is the user name specified in /etc/kubernetes/token.csv, which is also written into /etc/kubernetes/bootstrap.kubeconfig;
6.3.2 Distribute the configuration files
Distribute the two kubeconfig files to the /etc/kubernetes/ directory on all Node machines
------------------------------------------------------------
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
------------------------------------------------------------
6.3.3 Download the kubelet and kube-proxy binaries
Be sure to download the package matching your Kubernetes version.
------------------------------------------------------------
mkdir -p /opt/k8s/bin
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r ./server/bin/{kube-proxy,kubelet} /opt/k8s/bin
------------------------------------------------------------
6.3.4 Create the kubelet service configuration file
File location /etc/systemd/system/kubelet.service
------------------------------------------------------------
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/opt/k8s/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
------------------------------------------------------------
The kubelet configuration file is /etc/kubernetes/kubelet. Change the IP addresses in it to each node's own IP.
Note: the /var/lib/kubelet directory must be created manually before starting kubelet.
6.3.5 The kubelet configuration file /etc/kubernetes/kubelet
------------------------------------------------------------
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.55.36"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.55.36"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://192.168.55.36:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=index.tenxcloud.com/jimmy/pod-infrastructure:rhel7"
#
## Add your own!
KUBELET_ARGS="--fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cgroup-driver=systemd --cluster-dns=172.16.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --logtostderr false --log-dir /var/log/kubernetes --v 2"
------------------------------------------------------------
Notes:
•For the kubelet configuration in a Kubernetes 1.9 cluster, the KUBELET_API_SERVER setting was dropped; the master address is defined by the kubeconfig file instead, so comment out KUBELET_API_SERVER;
•If docker is started via systemd, two extra parameters are needed: --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice;
•--experimental-bootstrap-kubeconfig became --bootstrap-kubeconfig in 1.9;
•--address must not be 127.0.0.1, otherwise Pods fail when calling the kubelet API, because 127.0.0.1 inside a Pod refers to the Pod itself rather than to the kubelet;
•If --hostname-override is set, kube-proxy must set it as well, otherwise the Node will not be found;
•Set --cgroup-driver to systemd rather than cgroupfs, otherwise kubelet fails to start on CentOS (what actually matters is that docker and kubelet use the same cgroup driver; it does not have to be systemd);
•--experimental-bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send the TLS bootstrapping request to kube-apiserver;
•After the administrator approves the CSR request, kubelet automatically creates the certificate and key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file;
•It is recommended to specify the kube-apiserver address in the --kubeconfig file. If --api-servers is not set, --require-kubeconfig must be set so that the address is read from the config file; otherwise kubelet cannot find kube-apiserver after starting (the log reports that the API Server was not found) and kubectl get nodes does not show the Node. --require-kubeconfig was removed in 1.9.0, see the PR;
•--cluster-dns specifies the kubedns Service IP (it can be allocated in advance and assigned later when the kubedns service is created; it must lie within the --service-cluster-ip-range configured for the apiserver), and --cluster-domain specifies the domain suffix; both must be set for either to take effect;
•--cluster-domain sets the search domain written into a pod's /etc/resolv.conf. We originally configured it as cluster.local., which resolved service DNS names correctly but failed to resolve the FQDN pod names of headless services; dropping the trailing dot (cluster.local) fixes the problem. For name and service resolution in Kubernetes see my other article;
•The kubelet.kubeconfig file referenced by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet starts for the first time; as described below, it is generated automatically once the CSR request is approved. If a ~/.kube/config file already exists on your node, you can copy it to this path and rename it kubelet.kubeconfig; all node nodes can share the same kubelet.kubeconfig file, so newly added nodes join the cluster without creating a CSR request. Likewise, on any host that can reach the cluster, kubectl --kubeconfig with the ~/.kube/config file passes authentication, because it already carries credentials identifying you as the admin user with full cluster privileges;
•KUBELET_POD_INFRA_CONTAINER is the pod infrastructure image; here I use a private registry address, so change it to your own when deploying. I uploaded a copy to Tenxcloud, which can be pulled directly with docker pull index.tenxcloud.com/jimmy/pod-infrastructure:rhel7. The pod-infrastructure image is built by Red Hat and is nearly 80 MB, so downloading it takes a while; it does not actually run any particular process, so you can instead use Google's pause image gcr.io/google_containers/pause-amd64:3.0, which is only about 300 KB, or pull jimmysong/pause-amd64:3.0 from Docker Hub;
•--fail-swap-on=false must be added; it stops kubelet from refusing to start when swap is enabled.
6.3.6 Start kubelet
------------------------------------------------------------
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
------------------------------------------------------------
6.3.7 Approve the kubelet TLS certificate request
When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node is only added to the cluster after the request is approved.
List the unapproved CSR requests:
------------------------------------------------------------
$ kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-2b308   4m        kubelet-bootstrap   Pending
$ kubectl get nodes
No resources found.
------------------------------------------------------------
Approve the CSR request
------------------------------------------------------------
$ kubectl certificate approve csr-2b308
certificatesigningrequest "csr-2b308" approved
$ kubectl get nodes
NAME        STATUS    AGE       VERSION
10.64.3.7   Ready     49m       v1.6.1
------------------------------------------------------------
The kubelet kubeconfig file and key pair are generated automatically
------------------------------------------------------------
$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2284 Apr  7 02:07 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Apr  7 02:07 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Apr  7 02:04 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1103 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.key
------------------------------------------------------------
Notes: if you renew the Kubernetes certificates but do not change token.csv, the node rejoins the cluster automatically after kubelet restarts, without sending a new certificate request and without running kubectl certificate approve on the master again — provided you do not delete /etc/kubernetes/ssl/kubelet* and /etc/kubernetes/kubelet.kubeconfig on the node; otherwise kubelet fails to start because it cannot find its certificates.
Note: if kubelet reports certificate-related errors at startup, one trick is to copy the master's ~/.kube/config file (generated automatically in the "Install the kubectl command-line tool" step) to /etc/kubernetes/kubelet.kubeconfig on the node; then no CSR is needed and the node joins the cluster automatically once kubelet starts.
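When several nodes bootstrap at the same time, approving each CSR by name gets tedious; a convenience sketch that approves every request currently in the Pending state:
------------------------------------------------------------
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve
kubectl get nodes
------------------------------------------------------------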
6.4.1 Install conntrack
------------------------------------------------------------
yum install -y conntrack-tools
------------------------------------------------------------
6.4.2 Create the kube-proxy service configuration file
File path /etc/systemd/system/kube-proxy.service
------------------------------------------------------------
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/opt/k8s/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
------------------------------------------------------------
6.4.3 The kube-proxy configuration file /etc/kubernetes/proxy
------------------------------------------------------------
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.55.36 --hostname-override=192.168.55.36 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=172.16.0.0/16"
------------------------------------------------------------
Notes:
•--hostname-override must match the kubelet value, otherwise kube-proxy cannot find the Node after starting and will not create any iptables rules;
•kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs; this value must match the apiserver's --service-cluster-ip-range;
•the file specified by --kubeconfig embeds the kube-apiserver address, user name, certificate, key, and other request/authentication information;
•the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;
6.4.4 Start kube-proxy
------------------------------------------------------------
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
------------------------------------------------------------
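Once kube-proxy is running and a Service exists, it writes KUBE-* chains into the nat table; a quick check (a sketch):
------------------------------------------------------------
iptables -t nat -S KUBE-SERVICES | head
# each Service should appear here once kube-proxy has synced
------------------------------------------------------------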
6.4.5 Verification
Let's create an nginx service to check whether the cluster is usable
------------------------------------------------------------
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.55.36 Ready <none> 1d v1.9.0
$ kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=index.tenxcloud.com/docker_library/nginx --port=80   # --port must match the port the container exposes
deployment "nginx" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-744c5fd44f-lwnl7 0/1 Running 0 3m
nginx-744c5fd44f-mzrwp 0/1 Running 0 3m
$ kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposedbootstrap
$ kubectl describe svc example-service   # it takes a little while before the output below appears
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 172.16.238.215
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31107/TCP
Endpoints: 10.200.75.2:80,10.200.75.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ curl "172.16.238.215:80"
------------------------------------------------------------
Notes:
Visiting 192.168.55.36:31107 returns the nginx page. 172.16.238.215 is the service IP; 10.200.75.2:80 and 10.200.75.3:80 are the container IPs; 31107 is the port on the host that maps to the backend service (to be confirmed).
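Putting the three addresses together, the same nginx can be reached in three ways from a cluster node (values taken from the describe output above; your NodePort and IPs will differ):
------------------------------------------------------------
curl http://172.16.238.215:80        # ClusterIP:port, works from any node or pod
curl http://192.168.55.36:31107      # NodeIP:NodePort, works from outside the cluster too
curl http://10.200.75.2:80           # a pod IP directly, thanks to the flannel network
------------------------------------------------------------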
[root@k8s-master /opt/k8s/yml 11:03:15&&154]#cat kube-dns.yml apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 172.16.0.2 #这个ip须要和 kubelet 的 --cluster-dns 参数值一致。 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP --- apiVersion: v1 kind: ServiceAccount metadata: name: kube-dns namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile --- apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system labels: addonmanager.kubernetes.io/mode: EnsureExists --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: # replicas: not specified here: # 1. In order to make Addon Manager do not reconcile this replicas parameter. # 2. Default is 1. # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: tolerations: - key: "CriticalAddonsOnly" operator: "Exists" volumes: - name: kube-dns-config configMap: name: kube-dns optional: true containers: - name: kubedns image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.9 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: memory: 170Mi requests: cpu: 100m memory: 70Mi livenessProbe: httpGet: path: /healthcheck/kubedns port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 3 timeoutSeconds: 5 args: - --domain=cluster.local. 
- --dns-port=10053 - --config-dir=/kube-dns-config - --v=2 #__PILLAR__FEDERATIONS__DOMAIN__MAP__ env: - name: PROMETHEUS_PORT value: "10055" ports: - containerPort: 10053 name: dns-local protocol: UDP - containerPort: 10053 name: dns-tcp-local protocol: TCP - containerPort: 10055 name: metrics protocol: TCP volumeMounts: - name: kube-dns-config mountPath: /kube-dns-config - name: dnsmasq image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.9 livenessProbe: httpGet: path: /healthcheck/dnsmasq port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - -v=2 - -logtostderr - -configDir=/etc/k8s/dns/dnsmasq-nanny - -restartDnsmasq=true - -- - -k - --cache-size=1000 - --log-facility=- - --server=/cluster.local./127.0.0.1#10053 - --server=/in-addr.arpa/127.0.0.1#10053 - --server=/ip6.arpa/127.0.0.1#10053 ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP # see: https://github.com/kubernetes/kubernetes/issues/29055 for details resources: requests: cpu: 150m memory: 20Mi volumeMounts: - name: kube-dns-config mountPath: /etc/k8s/dns/dnsmasq-nanny - name: sidecar image: registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.9 livenessProbe: httpGet: path: /metrics port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - --v=2 - --logtostderr - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A ports: - containerPort: 10054 name: metrics protocol: TCP resources: requests: memory: 20Mi cpu: 10m dnsPolicy: Default # Don't use cluster DNS. serviceAccountName: kube-dns
Notes: the annotated parts are the modifications; the configuration file above already includes them. Note that the kube-system namespace must not be changed.
Create kubedns
[root@k8s-master /opt/k8s/yml 11:14:09&&167]#kubectl apply -f kube-dns.yml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
Check the kubedns pod
[root@k8s-master /opt/k8s/yml 11:14:17&&168]#kubectl get pod --namespace=kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-5c874ccb67-vqtvb   3/3       Running   0          29s
Verify kubedns
Notes: create a pod, enter it, and check whether the nameserver in /etc/resolv.conf is 172.16.0.2
cat > httpd.yml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: daocloud.io/library/httpd
        ports:
        - containerPort: 80
EOF
[root@k8s-master /opt/k8s/yml 11:17:30&&176]#kubectl apply -f httpd.yml
deployment "httpd-deployment" created
[root@k8s-master /opt/k8s/yml 11:18:31&&177]#kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
httpd-deployment-5c9bc776cb-x82hs   1/1       Running   0          34s       10.200.75.3   192.168.55.36
[root@k8s-master /opt/k8s/yml 11:19:05&&178]#kubectl exec -ti httpd-deployment-5c9bc776cb-x82hs -- /bin/bash
root@httpd-deployment-5c9bc776cb-x82hs:/usr/local/apache2# cat /etc/resolv.conf
nameserver 172.16.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@httpd-deployment-5c9bc776cb-x82hs:/usr/local/apache2# ping kubernetes
PING kubernetes.default.svc.cluster.local (172.16.0.1) 56(84) bytes of data.
^C
--- kubernetes.default.svc.cluster.local ping statistics ---
18 packets transmitted, 0 received, 100% packet loss, time 17000ms
Note: pinging a ClusterIP directly does not work; a ClusterIP is routed to the service's endpoints by iptables, so the service can only be reached through the ClusterIP together with the port.
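For example, instead of pinging the kube-dns ClusterIP you can query it on its DNS port (a sketch; requires bind-utils for dig):
------------------------------------------------------------
dig @172.16.0.2 kubernetes.default.svc.cluster.local +short
# expected answer: 172.16.0.1
------------------------------------------------------------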
Verify kubedns, take 2:
[root@k8s-master /opt/k8s/yml 14:56:51&&43]#kubectl run busybox --rm -ti --image=busybox /bin/sh   # this pod is removed as soon as you exit
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 172.16.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # nslookup nginx-svc   # note: the nginx-svc service must be created beforehand
Server: 172.16.0.2
Address 1: 172.16.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-svc
Address 1: 172.16.98.222 nginx-svc.default.svc.cluster.local
/ # ping kubernetes
PING kubernetes (172.16.0.1): 56 data bytes
apiVersion: v1 kind: ServiceAccount metadata: name: dashboard namespace: kube-system --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: dashboard subjects: - kind: ServiceAccount name: dashboard namespace: kube-system roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: serviceAccountName: dashboard containers: - name: kubernetes-dashboard image: registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.7.1 resources: limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi ports: - containerPort: 9090 livenessProbe: httpGet: path: / port: 9090 initialDelaySeconds: 30 timeoutSeconds: 30 tolerations: - key: "CriticalAddonsOnly" operator: "Exists" --- apiVersion: v1 kind: Service metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: type: NodePort selector: k8s-app: kubernetes-dashboard ports: - port: 80 targetPort: 9090
Create the resources
[root@k8s-master /opt/k8s/yml 12:39:06&&196]#kubectl apply -f doshboard.yml
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Check the svc
[root@k8s-master /opt/k8s/yml 12:50:10&&206]#kubectl get svc -o wide --namespace=kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes-dashboard   NodePort   172.16.71.213   <none>        80:31130/TCP   10m       k8s-app=kubernetes-dashboard
Check the pod
[root@k8s-master /opt/k8s/yml 12:50:14&&207]#kubectl get pod -o wide --namespace=kube-system
NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
kubernetes-dashboard-f874767d4-x8zn4   1/1       Running   0          11m       10.200.75.5   192.168.55.36
Access the dashboard at http://192.168.55.36:31130