Kubernetes 1.13 has been released, the fourth and final release of 2018. It is one of the shortest release cycles to date (ten weeks after the previous version) and focuses on the stability and extensibility of Kubernetes, with three major features around storage and cluster lifecycle graduating to general availability.
The headline features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.
Simplified cluster management with kubeadm
Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation through configuration to upgrades. With the 1.13 release, kubeadm graduates to GA and is now generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure and simple join flow for new nodes and supporting easy upgrades.
What is most notable about this GA release are the advanced features that have graduated, in particular pluggability and configurability. kubeadm aims to provide a toolbox for both administrators and higher-level automation systems, and this release is an important step in that direction.
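As a quick illustration of the kubeadm flow (separate from the manual binary installation walked through below; the pod-network CIDR, token, and hash are placeholders you would substitute with your own values):

# On the control-plane host
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker node, using the join command printed by `kubeadm init`
kubeadm join 192.168.4.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>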
Container Storage Interface (CSI)
The Container Storage Interface was first introduced as an alpha feature in 1.9, moved to beta in 1.10, and now reaches GA. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write drivers that interoperate with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached 1.0.
With CSI now stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.
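As a hedged sketch of what consuming a CSI driver looks like once one is installed in a cluster (the driver name csi.example.com below is a placeholder, not a real plugin):

# A StorageClass that delegates provisioning to a (hypothetical) CSI driver
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example-sc
provisioner: csi.example.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF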
CoreDNS becomes the default DNS server for Kubernetes
In 1.11, the team announced that CoreDNS-based DNS service discovery was generally available. With 1.13, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from strong memory safety.
CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later, and the project has switched its common test infrastructure to use CoreDNS by default, so users are encouraged to switch as well. kube-dns will continue to be supported for at least one more release, but now is the time to start planning the migration. Many OSS installer tools, including kubeadm since 1.11, have already made the switch.
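On a cluster that already has the DNS add-on deployed in kube-system, you can check which implementation is serving DNS; note that the Service keeps the legacy name kube-dns for compatibility:

kubectl -n kube-system get deployment coredns
kubectl -n kube-system get svc kube-dns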
IP address | Hostname | CPU | Memory | Disk
---|---|---|---|---
192.168.4.100 | master | 1C | 1G | 40G
192.168.4.21 | node | 1C | 1G | 40G
192.168.4.56 | node1 | 1C | 1G | 40G
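Optionally (this step is not part of the original walkthrough), add matching hostname entries on every machine so the hosts can resolve each other by name:

cat << EOF >> /etc/hosts
192.168.4.100 master
192.168.4.21  node
192.168.4.56  node1
EOF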
Link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u
Description of each cluster component:
Master node:
The Master node consists mainly of four components: APIServer, scheduler, controller-manager, and etcd.
APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands. Every create, delete, update, or read of a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer; see the kubectl example after this component list.
scheduler: the scheduler is responsible for placing Pods onto suitable Nodes. Treated as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a scheduling algorithm but keeps the interface open, so users can plug in a scheduling algorithm of their own.
controller-manager: if the APIServer does the front-office work, the controller-manager is the back office. Every resource type has a corresponding controller, and the controller-manager manages these controllers. For example, when we create a Pod through the APIServer, the APIServer's job ends once the Pod object has been created; from then on it is the controllers that keep watching the resource and reconciling it towards its desired state.
etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource behind the RESTful API.
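A small illustration of the flow described above (kubectl → APIServer → etcd): raising kubectl's verbosity shows the REST calls it makes to the APIServer, and the same data can be fetched from the raw API path:

# Show the underlying REST calls kubectl issues against the APIServer
kubectl get pods -v=8
# Equivalent raw request against the core v1 pods endpoint
kubectl get --raw /api/v1/namespaces/default/pods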
Node (worker) nodes:
Each Node runs mainly the following components: kubelet and kube-proxy (plus the container runtime, e.g. Docker).
kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. kube-proxy supports forwarding TCP and UDP connections and by default distributes client traffic across the set of backend Pods behind a Service using a Round Robin algorithm. For service discovery, kube-proxy uses the watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so that changes to backend Pod IPs are invisible to callers. kube-proxy also supports session affinity.
kubelet: the kubelet is the Master's agent on each Node and the most important component on the node. It maintains and manages all containers on that Node, except for containers that were not created through Kubernetes. In essence, it is responsible for making a Pod's actual running state match its desired state.
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap swap defaults 0 0
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.4.100",
    "192.168.4.21",
    "192.168.4.56"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
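To sanity-check the generated server certificate (the SAN/hosts entries should list the three etcd IPs), either of these works:

cfssl-certinfo -cert server.pem
openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"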
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.4.100",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o.==o o. ..      |
|ooB+o+ o.  .     |
|B++@o o .        |
|=X**o    .       |
|o=O. .  S        |
|..+              |
|oo .             |
|* .              |
|o+E              |
+----[SHA256]-----+
# Copy the SSH key to the target hosts to enable passwordless SSH login
# ssh-copy-id 192.168.4.21
# ssh-copy-id 192.168.4.56
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
cp ca*pem server*pem /k8s/etcd/ssl
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
cd /k8s/
scp -r etcd 192.168.4.21:/k8s/
scp -r etcd 192.168.4.56:/k8s/
scp /usr/lib/systemd/system/etcd.service 192.168.4.21:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 192.168.4.56:/usr/lib/systemd/system/etcd.service

# -- Node 1
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# -- Node 2
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.56:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.56:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.56:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.56:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@master ~]# cd /k8s/etcd/bin/
[root@master bin]# ./etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379" cluster-health
member 2345cdd5020eb294 is healthy: got healthy result from https://192.168.4.100:2379
member 91d74712f79e544f is healthy: got healthy result from https://192.168.4.21:2379
member b313b7e8d0a528cc is healthy: got healthy result from https://192.168.4.56:2379
cluster is healthy

Note: start at least two etcd members at the same time; a single member cannot bring the cluster up on its own (the etcd service will hang in the activating state).
cd /k8s/etcd/ssl/
/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379" \
set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
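To confirm the key was written (same TLS flags and endpoints as above), read it back:

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379" \
get /coreos.com/network/config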
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
cd /k8s/
scp -r kubernetes 192.168.4.21:/k8s/
scp -r kubernetes 192.168.4.56:/k8s/
scp /k8s/kubernetes/cfg/flanneld 192.168.4.21:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 192.168.4.56:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service 192.168.4.21:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service 192.168.4.56:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service 192.168.4.21:/usr/lib/systemd/system/flanneld.service
scp /usr/lib/systemd/system/flanneld.service 192.168.4.56:/usr/lib/systemd/system/flanneld.service

# Start the services
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
Verify that it took effect:
[root@node ssl]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:99:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.21/16 brd 192.168.255.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::93dc:dfaf:2ddf:1aa9/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:5a:29:34:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.1/24 brd 172.18.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 16:6e:22:47:d0:cd brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager.
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
cp *pem /k8s/kubernetes/ssl/
[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
91af09d8720f467def95b65704862025
[root@master ~]# cat /k8s/kubernetes/cfg/token.csv
91af09d8720f467def95b65704862025,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 \
--bind-address=192.168.4.100 \
--secure-port=6443 \
--advertise-address=192.168.4.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@master ~]# ps -ef |grep kube-apiserver
root      90572 118543  0 10:27 pts/0    00:00:00 grep --color=auto kube-apiserver
root     119804      1  1 Feb26 ?        00:22:45 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 --bind-address=192.168.4.100 --secure-port=6443 --advertise-address=192.168.4.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
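Optionally, probe the secure port directly; with the default RBAC bindings in 1.13 an anonymous request to /healthz should return "ok" (if anonymous access is disabled in your setup, client credentials are needed as well):

curl --cacert /k8s/kubernetes/ssl/ca.pem https://192.168.4.100:6443/healthz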
vim /k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
[root@master ~]# ps -ef |grep kube-scheduler
root       3591      1  0 Feb25 ?        00:16:17 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      90724 118543  0 10:28 pts/0    00:00:00 grep --color=auto kube-scheduler
[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 14:58:31 CST; 1 day 19h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3591 (kube-scheduler)
   Memory: 36.9M
   CGroup: /system.slice/kube-scheduler.service
           └─3591 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
Feb 27 10:22:54 master kube-scheduler[3591]: I0227 10:22:54.611139 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:01 master kube-scheduler[3591]: I0227 10:23:01.496338 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:02 master kube-scheduler[3591]: I0227 10:23:02.346595 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:19 master kube-scheduler[3591]: I0227 10:23:19.677905 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:26:36 master kube-scheduler[3591]: I0227 10:26:36.850715 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:21 master kube-scheduler[3591]: I0227 10:27:21.523891 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:22 master kube-scheduler[3591]: I0227 10:27:22.520733 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:12 master kube-scheduler[3591]: I0227 10:28:12.498729 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:33 master kube-scheduler[3591]: I0227 10:28:33.519011 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:50 master kube-scheduler[3591]: I0227 10:28:50.573353 3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Hint: Some lines were ellipsized, use -l to show in full.
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:14:18 CST; 20h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 120023 (kube-controller)
   Memory: 76.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─120023 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elec...
Feb 27 10:31:30 master kube-controller-manager[120023]: I0227 10:31:30.722696 120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.088697 120023 gc_controller.go:144] GC'ing orphaned
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.094678 120023 gc_controller.go:173] GC'ing unsche...ting.
Feb 27 10:31:34 master kube-controller-manager[120023]: I0227 10:31:34.271634 120023 attach_detach_controller.go:634] pr...4.21"
Feb 27 10:31:35 master kube-controller-manager[120023]: I0227 10:31:35.723490 120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.377876 120023 attach_detach_controller.go:634] pr....100"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.498005 120023 attach_detach_controller.go:634] pr...4.56"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.500915 120023 cronjob_controller.go:111] Found 0 jobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505005 120023 cronjob_controller.go:119] Found 0 cronjobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505021 120023 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# ps -ef|grep kube-controller-manager
root      90967 118543  0 10:31 pts/0    00:00:00 grep --color=auto kube-controller-manager
root     120023      1  0 Feb26 ?        00:08:42 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem
vim /etc/profile

PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin

# Apply the change
source /etc/profile
[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
The Kubernetes worker nodes run the following components: kubelet and kube-proxy.
cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.21:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.56:/k8s/kubernetes/bin/
# On the master node
cd /k8s/kubernetes/ssl/
# Edit and then run this script
vim environment.sh

# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025
KUBE_APISERVER="https://192.168.4.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
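A quick sanity check of the generated files (the server and user entries should match the values set above):

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig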
cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.21:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.56:/k8s/kubernetes/cfg/
Create the kubelet configuration file on each worker node
# Node 1
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

# Node 2
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.56
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
# Node 1
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.21 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Node 2
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.56 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
CSR requests can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates issued after a CSR is approved can be rotated automatically.
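If you prefer the automatic path, one common sketch (the binding name is arbitrary, and this is an alternative to the manual approval shown below) is to grant the bootstrap user the built-in CSR-approval role so the controller-manager approves its node client CSRs:

kubectl create clusterrolebinding auto-approve-csrs \
--clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
--user=kubelet-bootstrap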
Manually approving CSR requests
List the CSRs:
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   39m     kubelet-bootstrap   Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   5m5s    kubelet-bootstrap   Pending
# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
certificatesigningrequest.certificates.k8s.io/node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs approved
# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s
certificatesigningrequest.certificates.k8s.io/node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s approved
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   41m     kubelet-bootstrap   Approved,Issued
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   7m32s   kubelet-bootstrap   Approved,Issued
[root@master ssl]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.100   Ready    <none>   43h   v1.13.0
192.168.4.21    Ready    <none>   20h   v1.13.0
192.168.4.56    Ready    <none>   20h   v1.13.0
kube-proxy runs on all nodes; it watches the apiserver for changes to Service and Endpoint objects and creates the routing rules that load-balance traffic across a Service's backends.
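In the default iptables mode you can inspect the rules kube-proxy maintains (purely a verification aid; the chain names below are the standard KUBE-* chains kube-proxy creates):

iptables-save | grep KUBE-SVC   # one chain per Service
iptables-save | grep KUBE-SEP   # one chain per backend endpoint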
vim /k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.100 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@node ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 15:38:16 CST; 1 day 19h ago
 Main PID: 2887 (kube-proxy)
   Memory: 8.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 2887 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.4.100 --cluster-cidr=10....
Feb 27 11:06:44 node kube-proxy[2887]: I0227 11:06:44.625875 2887 config.go:141] Calling handler.OnEndpointsUpdate
Label the master and worker nodes:
kubectl label node 192.168.4.100 node-role.kubernetes.io/master='master'
kubectl label node 192.168.4.21 node-role.kubernetes.io/node='node'
kubectl label node 192.168.4.56 node-role.kubernetes.io/node='node'
[root@master ~]# kubectl get node,cs
NAME                 STATUS   ROLES    AGE   VERSION
node/192.168.4.100   Ready    master   43h   v1.13.0
node/192.168.4.21    Ready    node     20h   v1.13.0
node/192.168.4.56    Ready    node     20h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
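As a final smoke test (not part of the original steps; the image, replica count, and port are arbitrary choices), deploy something and expose it through a NodePort:

# In 1.13, `kubectl run` still creates a Deployment (with a deprecation warning)
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx
# Then curl any node IP on the NodePort reported by the Service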