Preface
Components on a Kubernetes master node:
kube-apiserver: the core of the cluster; it is the cluster's API endpoint and the hub through which all components communicate, and it enforces the cluster's security controls.
kube-scheduler: the cluster scheduler; it assigns Pods to suitable nodes based on node load (CPU, memory, storage, policies, and so on).
kube-controller-manager: the cluster state manager; when the actual cluster state differs from the desired state, its controllers drive the cluster back to the desired state according to the configured policies.
Note:
The three components are tightly related. Only one kube-scheduler process and one kube-controller-manager process may be active in the cluster at a time; if multiple instances are run, a leader must be elected among them.
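With --leader-elect=true (configured later in this article), the active instance can be inspected. As a rough sketch, assuming the endpoints-based leader election that this Kubernetes release uses by default, the current leader is recorded in an annotation on the component's Endpoints object in kube-system:

```shell
# Sketch: query which kube-controller-manager / kube-scheduler instance holds the leader lease.
# Assumes kubectl is configured for the cluster; exits quietly where no kubectl is available.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available, skipping"; exit 0; }

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
echo
kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
echo
```

The holderIdentity field inside the annotation names the master currently leading; stopping that process should cause another instance to take over within the lease duration.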
Environment:

192.168.214.88 master1
192.168.214.89 master2
192.168.214.90 master3
Download and extract the installation package
[root@master1 ~]# wget https://dl.k8s.io/v1.12.2/kubernetes-server-linux-amd64.tar.gz
[root@master1 ~]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master1 ~]# tree kubernetes
kubernetes
├── addons
├── kubernetes-src.tar.gz
├── LICENSES
└── server
    └── bin
        ├── apiextensions-apiserver
        ├── cloud-controller-manager
        ├── cloud-controller-manager.docker_tag
        ├── cloud-controller-manager.tar
        ├── hyperkube
        ├── kubeadm
        ├── kube-apiserver
        ├── kube-apiserver.docker_tag
        ├── kube-apiserver.tar
        ├── kubeconfig
        ├── kube-controller-manager
        ├── kube-controller-manager.docker_tag
        ├── kube-controller-manager.tar
        ├── kubectl
        ├── kubelet
        ├── kube-proxy
        ├── kube-proxy.docker_tag
        ├── kube-proxy.tar
        ├── kube-scheduler
        ├── kube-scheduler.docker_tag
        ├── kube-scheduler.tar
        └── mounter
Copy the service binaries to /usr/local/bin/ and add execute permission
[root@master1 ~]# cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
[root@master1 ~]# chmod +x /usr/local/bin/kube*
Generate the cluster administrator's admin.kubeconfig for kubectl to use
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# set the cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=/opt/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=admin.kubeconfig
Cluster "kubernetes" set.
# set the client credentials
[root@master1 ssl]# kubectl config set-credentials admin \
> --client-certificate=/opt/kubernetes/ssl/admin.pem \
> --embed-certs=true \
> --client-key=/opt/kubernetes/ssl/admin-key.pem \
> --kubeconfig=admin.kubeconfig
User "admin" set.
# set the context parameters
[root@master1 ssl]# kubectl config set-context kubernetes \
> --cluster=kubernetes \
> --user=admin \
> --kubeconfig=admin.kubeconfig
Context "kubernetes" modified.
# set the default context
[root@master1 ssl]# kubectl config use-context kubernetes \
> --kubeconfig=admin.kubeconfig
Switched to context "kubernetes".
Notes:
The content of the generated admin.kubeconfig is also saved to the ~/.kube/config file. This file grants the highest level of access to the cluster and must be kept safe.
The OU field of the admin.pem certificate is system:masters. The predefined ClusterRoleBinding cluster-admin binds Group system:masters to the ClusterRole cluster-admin, which grants permission to call all kube-apiserver APIs.
Processes on the Node machines such as kubelet and kube-proxy must be authenticated and authorized when they communicate with the kube-apiserver process on the master. Starting with Kubernetes 1.4, kube-apiserver supports TLS Bootstrapping, that is, generating TLS client certificates itself, so a certificate no longer has to be generated by hand for every client. At present this feature only supports generating certificates for kubelet.
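Once the kube-apiserver service created later in this article is running, the generated admin.kubeconfig can be sanity-checked with kubectl. A minimal sketch:

```shell
# Sketch: confirm admin.kubeconfig is well-formed and grants cluster access.
# Exits quietly when kubectl is unavailable; run from the directory holding admin.kubeconfig.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available, skipping"; exit 0; }

kubectl config view --kubeconfig=admin.kubeconfig        # embedded certs show as redacted
kubectl get componentstatuses --kubeconfig=admin.kubeconfig
```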
如下操做只须要在master节点上执行,生成的*.kubeconfig文件能够直接拷贝到node节点的/opt/kubernetes/ssl目录下。
Create the TLS Bootstrapping token
[root@master1 ~]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@master1 ~]# cat > token.csv << EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@master1 ~]# cat token.csv
bfdf3a25e9cf9f5278ea4c9ff9227e23,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master1 ~]# mv token.csv /opt/kubernetes/ssl
Notes:
The token can be any string containing 128 bits of entropy and can be produced with a secure random number generator. After creating the file, inspect it to confirm that the ${BOOTSTRAP_TOKEN} environment variable has been replaced by its actual value.
BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If BOOTSTRAP_TOKEN is regenerated later, you must:
update token.csv and distribute it to the /opt/kubernetes/ssl directory on all machines (masters and nodes; distributing it to the nodes is not strictly required);
regenerate bootstrap.kubeconfig and distribute it to the /opt/kubernetes/ssl directory on all node machines;
restart the kube-apiserver and kubelet processes;
re-approve the kubelet CSR requests.
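The generation pipeline used above can be checked in isolation: 16 random bytes rendered as hex should always yield a 32-character token, i.e. 128 bits:

```shell
# Generate a token exactly as above and verify its length and character set.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token: ${BOOTSTRAP_TOKEN}"
echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[0-9a-f]{32}$' && echo "token format OK"
```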
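The last step, re-approving the kubelet CSRs, is done with kubectl certificate approve. A sketch (the CSR names are cluster-specific, so this lists and approves whatever is present):

```shell
# Sketch: list pending kubelet CSRs and approve them.
# Exits quietly when kubectl is unavailable.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available, skipping"; exit 0; }

kubectl get csr
# approve every CSR; in production, inspect each request before approving it
kubectl get csr -o name | xargs -r kubectl certificate approve
```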
Create the kubelet bootstrap.kubeconfig file
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# set the cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=/opt/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
# set the client credentials
[root@master1 ssl]# kubectl config set-credentials kubelet-bootstrap \
> --token=${BOOTSTRAP_TOKEN} \
> --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
# set the context parameters
[root@master1 ssl]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kubelet-bootstrap \
> --kubeconfig=bootstrap.kubeconfig
Context "default" created.
# set the default context
[root@master1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
Create the kube-proxy.kubeconfig file
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# set the cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=/opt/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
# set the client credentials
[root@master1 ssl]# kubectl config set-credentials kube-proxy \
> --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
> --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
> --embed-certs=true \
> --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
# set the context parameters
[root@master1 ssl]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kube-proxy \
> --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
# set the default context
[root@master1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
Notes:
--embed-certs is set to true for both the cluster and client-credential parameters, which embeds the contents of the files referenced by certificate-authority, client-certificate, and client-key into the generated kube-proxy.kubeconfig file.
The CN of the kube-proxy.pem certificate is system:kube-proxy. The predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.
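This predefined binding ships with the cluster and can be inspected directly, for example:

```shell
# Sketch: show the ClusterRoleBinding and ClusterRole that authorize kube-proxy.
# Exits quietly when kubectl is unavailable.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available, skipping"; exit 0; }

kubectl describe clusterrolebinding system:node-proxier
kubectl describe clusterrole system:node-proxier
```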
Generate the advanced audit configuration
[root@master1 ssl]# cat >> audit-policy.yaml << EOF
> apiVersion: audit.k8s.io/v1beta1
> kind: Policy
> rules:
> - level: Metadata
> EOF
The kubeconfig files can be distributed to the node machines in advance; they will be needed when deploying the nodes.
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node1:/opt/kubernetes/ssl/
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node2:/opt/kubernetes/ssl/
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node3:/opt/kubernetes/ssl/
Create the service unit files
Create /usr/lib/systemd/system/kube-apiserver.service
[root@master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --advertise-address=192.168.214.88 \
  --bind-address=192.168.214.88 \
  --insecure-bind-address=127.0.0.1 \
  --kubelet-https=true \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/token.csv \
  --feature-gates=CustomPodDNS=true \
  --service-cluster-ip-range=172.21.0.0/16 \
  --service-node-port-range=8400-20000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.214.200:2379,https://192.168.214.201:2379,https://192.168.214.202:2379 \
  --logtostderr=false \
  --log-dir=/var/log/kube-apiserver \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2 \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
  --runtime-config=rbac.authorization.k8s.io/v1beta1,settings.k8s.io/v1alpha1=true,api/all=true
# Optional aggregation-layer client certificate flags, disabled here (systemd does not
# support comments inside a continued ExecStart line, so they are listed separately):
#   --proxy-client-cert-file=/opt/kubernetes/ssl/kubelet-client.crt
#   --proxy-client-key-file=/opt/kubernetes/ssl/kubelet-client.key
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
--admission-control # admission plugins; many advanced Kubernetes features require the corresponding Admission Controller plugin to be enabled (the list must include ServiceAccount). See: Admission Controller
--advertise-address # the address the API server advertises to all cluster members
--bind-address # must not be 127.0.0.1
--insecure-bind-address # the address of the insecure port; defaults to the local address, and requests to it are not certificate-verified.
--kubelet-https[=true] # use HTTPS for connections to the kubelets
--authorization-mode # authorization mode; RBAC and Node modes are enabled on the secure port, and unauthorized requests are rejected. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/ and http://docs.kubernetes.org.cn/156.html
--enable-bootstrap-token-auth # enable bootstrap token authentication; see https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/
--token-auth-file # location of the generated token file
kube-scheduler and kube-controller-manager are usually deployed on the same machine as kube-apiserver and communicate with it over the insecure port;
kubelet and kube-proxy are deployed on the other Node machines; when they access kube-apiserver over the secure port, they must first pass TLS certificate authentication and then RBAC authorization;
kube-proxy and kubelet pass RBAC authorization by carrying the appropriate User and Group in the certificates they use;
--feature-gates=CustomPodDNS=true # with this feature enabled, a Pod's dnsPolicy field can be set to "None" and a new dnsConfig field can be added to the Pod spec;
dnsConfig defines the DNS parameters, while dnsPolicy selects a preset DNS policy for the Pod. See http://www.jintiankansha.me/t/Js1R84GGAl
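To illustrate what this feature gate enables, the sketch below writes a hypothetical Pod manifest that sets dnsPolicy to "None" and supplies DNS parameters through dnsConfig; the pod name, image, nameserver, and search domains are placeholders, not values from this deployment:

```shell
# Write an example manifest exercising dnsPolicy "None" + dnsConfig.
# All names and DNS values below are illustrative only.
cat > /tmp/dns-example.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
  dnsPolicy: "None"        # requires --feature-gates=CustomPodDNS=true in this release
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster.local
    options:
    - name: ndots
      value: "2"
EOF
echo "wrote /tmp/dns-example.yaml"
# kubectl create -f /tmp/dns-example.yaml   # apply against the cluster when ready
```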
--service-cluster-ip-range # the virtual IP range for Services in the Kubernetes cluster
Kubernetes assigns every Service a fixed IP. This is a virtual IP (also called the ClusterIP); it is not a real, routable IP but one synthesized by Kubernetes.
The virtual IP belongs to the cluster's internal virtual network and cannot be reached from outside. Within Kubernetes, the kube-proxy component implements the routing and forwarding for these virtual IPs, which is why every Node must run kube-proxy: it builds a Kubernetes-level virtual forwarding network on top of the container overlay network.
--service-node-port-range # the range of physical host ports the cluster may map
--enable-swagger-ui=true # the Swagger UI can be accessed at /swagger-ui
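For example, the built-in kubernetes Service is allocated a ClusterIP from the configured range (172.21.0.0/16 in this article); a quick sketch to confirm:

```shell
# Sketch: confirm the default "kubernetes" Service got a ClusterIP from the configured range.
# Exits quietly when kubectl is unavailable.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available, skipping"; exit 0; }

CLUSTER_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
echo "kubernetes service ClusterIP: ${CLUSTER_IP}"
case "${CLUSTER_IP}" in
  172.21.*) echo "within --service-cluster-ip-range" ;;
  *)        echo "unexpected range" ;;
esac
```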
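Once the unit file is installed and the service started (next steps), the configuration above can be probed quickly from the master itself; because --insecure-bind-address=127.0.0.1 leaves the default insecure port 8080 open locally, no certificates are needed for this check:

```shell
# Sketch: probe the API server through the local insecure port.
# Exits quietly when curl is unavailable or nothing is listening yet.
command -v curl >/dev/null 2>&1 || { echo "curl not available, skipping"; exit 0; }

curl -s --max-time 2 http://127.0.0.1:8080/healthz || echo "apiserver not reachable yet"
echo
curl -s --max-time 2 http://127.0.0.1:8080/version || true
```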
Create /usr/lib/systemd/system/kube-controller-manager.service
[root@master1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=172.21.0.0/16 \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Create /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Start the services and enable them at boot
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start kube-apiserver.service
[root@master1 ~]# systemctl start kube-controller-manager.service
[root@master1 ~]# systemctl start kube-scheduler.service
[root@master1 ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master1 ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master1 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
Check the status of each component
[root@master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
Copy the relevant files to the other two master nodes, including the certificates, kubeconfig files, binaries, and service unit files
[root@master1 kubernetes]# scp -r ssl/ master2:/opt/kubernetes/
[root@master1 kubernetes]# scp -r ssl/ master3:/opt/kubernetes/
[root@master1 kubernetes]# scp -r /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} master2:/usr/local/bin/
[root@master1 kubernetes]# scp -r /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} master3:/usr/local/bin/
[root@master1 kubernetes]# scp /usr/lib/systemd/system/kube* master2:/usr/lib/systemd/system/
[root@master1 kubernetes]# scp /usr/lib/systemd/system/kube* master3:/usr/lib/systemd/system/
On master2 and master3, adjust the API server configuration for each host's IP, add execute permission to the binaries, start the services, and enable them at boot.
The following environment variable must also be loaded, otherwise kubectl cannot be used:
[root@master2 bin]# echo "export KUBECONFIG=/opt/kubernetes/admin.kubeconfig" >> /etc/profile
[root@master2 bin]# source /etc/profile
[root@master2 bin]# echo $KUBECONFIG
/opt/kubernetes/admin.kubeconfig