A few helpful architecture diagrams
Overview of the cluster components and what each does:
Master node:
The master node is composed of four main components: APIServer, scheduler, controller-manager, and etcd.
APIServer: The APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands: every create, delete, update, or query of a resource is handled by the APIServer and then persisted to etcd. As shown in the diagram, kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.
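To make that concrete: anything kubectl does can be reproduced with a plain HTTP request. A minimal sketch, assuming a working cluster and using `kubectl proxy` to handle authentication (the local port is arbitrary):

```
# Let kubectl proxy authenticate to the APIServer and expose it locally.
kubectl proxy --port=8001 &

# Equivalent to `kubectl get pods`: a plain REST call against the API.
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```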
scheduler: The scheduler places Pods onto suitable Nodes. If you treat the scheduler as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to a single Node. Kubernetes ships with scheduling algorithms and also keeps the interface open, so users can implement their own scheduling algorithm to match their needs.
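That "binding" output is itself an API object, the pods/binding subresource. As a hedged illustration (reusing the kubectl proxy from the previous sketch; the pod and node names are hypothetical), a pending Pod can even be bound by hand, which is essentially what a custom scheduler does:

```
# Bind the pending pod "mypod" to "node1", mimicking the scheduler's output.
curl -X POST -H "Content-Type: application/json" \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods/mypod/binding \
  -d '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"mypod"},"target":{"apiVersion":"v1","kind":"Node","name":"node1"}}'
```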
controller manager: If the APIServer handles the front-of-house work, the controller manager takes care of the back office. Each resource has a corresponding controller, and the controller manager is responsible for running all of them. For example, when we create a Pod through the APIServer, the APIServer's task is finished as soon as the object is created; it is the controllers that then keep the cluster's actual state converging toward the desired state.
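Once the cluster built below is running, this reconciliation is easy to observe; a small sketch (the deployment name is arbitrary):

```
# A Deployment's controller maintains the declared replica count.
kubectl run web --image=nginx --replicas=1

# Kill the pod behind it; the controller notices and creates a new one.
kubectl delete pod -l run=web
kubectl get pods -l run=web    # a replacement pod appears shortly
```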
etcd: etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, which is what backs the RESTful API.
Worker node:
Each worker node runs two main components: kubelet and kube-proxy.
kube-proxy: This component implements service discovery and reverse proxying in Kubernetes. kube-proxy supports TCP and UDP connection forwarding and, by default, distributes client traffic across the backend pods of a service using a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to service and endpoint objects in the cluster and maintains a service-to-endpoint mapping, so changes to backend pod IPs stay invisible to callers. kube-proxy also supports session affinity.
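Session affinity, for instance, is enabled per Service. A minimal sketch of such a manifest (the service name, selector, and ports are hypothetical):

```
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                    # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP     # pin each client IP to one backend pod
EOF
```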
kubelet: The kubelet is the master's agent on each worker node and the most important component there. It maintains and manages all containers on the node, except those not created through Kubernetes. In essence, it is responsible for making each Pod's actual running state match the desired state.
OS: CentOS 7.5.1804
| IP | hostname | Services |
| --- | --- | --- |
| 192.168.4.71 | master | kube-apiserver, kube-scheduler, kube-controller-manager, etcd |
| 192.168.4.72 | node1 | kubelet, kube-proxy, flanneld, docker |
| 192.168.4.76 | node2 | kubelet, kube-proxy, flanneld, docker |
Disable the firewall and SELinux on all machines:

```
systemctl stop firewalld
setenforce 0                # turns SELinux off temporarily
vi /etc/selinux/config      # make it permanent:
SELINUX=disabled
```
Download links:

- Client binaries: https://dl.k8s.io/v1.14.0/kubernetes-client-linux-amd64.tar.gz
- Server binaries: https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
- Node binaries: https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz
- etcd: https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
- flannel: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
```
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
```
Install the cfssl certificate toolchain:

```
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
```
mkdir -p /k8s/etcd/{bin,cfg,ssl}
mkdir -p /k8s/kubernetes/{bin,cfg,ssl}
cd /k8s/etcd/ssl/
```
```
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```
```
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.4.71",
    "192.168.4.72",
    "192.168.4.76"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
```
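It can be worth sanity-checking the issued certificate before wiring it into etcd; a quick sketch using the cfssl-certinfo binary installed above:

```
# The SAN list should contain all three node IPs, and the expiry
# should match the 87600h profile.
cfssl-certinfo -cert server.pem

# Alternatively, with openssl:
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
```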
Install etcd:

```
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
```
```
[root@t71 cfg]# vim /k8s/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.71:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.71:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.71:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
```
```
mkdir /data1/etcd

[root@t71 cfg]# vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
```
```
[root@t71 bin]# etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.71:2379" cluster-health
member ac829673d2b22824 is healthy: got healthy result from https://192.168.4.71:2379
cluster is healthy
```
Create the Kubernetes CA configuration:

```
cd /k8s/kubernetes/ssl

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
```
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.254.0.1",
    "127.0.0.1",
    "192.168.4.71",
    "192.168.4.72",
    "192.168.4.76",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```
```
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Install the master binaries:

```
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
```
Generate the bootstrap token:

```
[root@t71 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
663eb46fb81c4cf2a9bedb84bea03582

vim /k8s/kubernetes/cfg/token.csv
663eb46fb81c4cf2a9bedb84bea03582,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```
```
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.71:2379 \
--bind-address=192.168.4.71 \
--secure-port=6443 \
--advertise-address=192.168.4.71 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
```
```
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
```
```
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080"
```
```
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
```
```
[root@t71 cfg]# vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
```
```
[root@t71 cfg]# vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
```
```
vim /etc/profile
export PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
```
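At this point the control plane should be functional. One way to check, assuming kubectl now resolves from the updated PATH (on the master, kubectl talks to the apiserver's local insecure port by default):

```
# Component statuses: scheduler, controller-manager, and etcd-0
# should all report Healthy.
kubectl get cs
```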
Install docker on the worker nodes:

```
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
```
The kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, the kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage. For security, it opens only the HTTPS port, authenticates and authorizes each request, and rejects unauthorized access (e.g. from the apiserver or heapster).
```
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
```
Copy the certificates from the master to the node (run from /k8s/kubernetes/ssl):

```
scp *.pem 192.168.4.72:$PWD
```
```
[root@t72 cfg]# vim environment.sh

#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=663eb46fb81c4cf2a9bedb84bea03582
KUBE_APISERVER="https://192.168.4.71:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
```
sh environment.sh
```
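To confirm the generated files point at the right cluster (an optional check, not required by the procedure):

```
# Both kubeconfigs should embed the CA and reference the master endpoint.
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig
```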
```
[root@t72 cfg]# vim kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.72
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true    # defaults to false as of 1.10
  webhook:
    enabled: false   # defaults to true as of 1.10
authorization:
  mode: AlwaysAllow
```
```
[root@t72 cfg]# vim kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.72 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
```
```
[root@t72 cfg]# vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
Before starting the kubelet, bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role (run on the master):

```
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```
```
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
```
```
# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   102s   kubelet-bootstrap   Pending
```
Approve the node's CSR:
```
# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
```
Check the CSRs again:
```
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   5m13s   kubelet-bootstrap   Approved,Issued
```
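Once the CSR is approved, the kubelet writes out its client certificate and the node registers itself; a quick check from the master:

```
# The node should appear, possibly NotReady until the network
# plugin (flannel, configured below) is up.
kubectl get nodes
```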
```
[root@t72 cfg]# vim kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.72 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
```
```
[root@t72 cfg]# vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
```
Note: if kubelet or kube-proxy is misconfigured during this stage (e.g. a wrong listen IP or hostname causing a "node not found" error), delete the kubelet-client certificate, restart the kubelet service, and re-approve the CSR.
Write the flannel network configuration into etcd (run on the master):

```
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.71:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
```
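To verify the key was actually written (same TLS flags as above):

```
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem \
  --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.4.71:2379" \
  get /k8s/network/config
```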
Note: the Pod network range (${CLUSTER_CIDR}) written here must be a /16 and must match the --cluster-cidr flag of kube-controller-manager.
```
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
```
```
[root@t72 cfg]# vim flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.71:2379 \
-etcd-cafile=/k8s/etcd/ssl/ca.pem \
-etcd-certfile=/k8s/etcd/ssl/server.pem \
-etcd-keyfile=/k8s/etcd/ssl/server-key.pem \
-etcd-prefix=/k8s/network"
```
```
[root@t72 cfg]# vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
#ExecStart=/k8s/kubernetes/bin/flanneld $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Notes: the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env (per the -d flag above); when docker starts, it uses the environment variables in that file to configure the docker0 bridge. flanneld communicates with other nodes over the interface that carries the system's default route; on hosts with multiple interfaces (e.g. internal and public), use the -iface flag to pin the interface. flanneld must run as root.
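After flanneld starts, the file looks roughly like this (the subnet is assigned at runtime, so the values below are only illustrative):

```
cat /run/flannel/subnet.env
# DOCKER_OPT_BIP="--bip=10.254.35.1/24"        # example values; yours will differ
# DOCKER_OPT_IPMASQ="--ip-masq=false"
# DOCKER_OPT_MTU="--mtu=1450"
# DOCKER_NETWORK_OPTIONS=" --bip=10.254.35.1/24 --ip-masq=false --mtu=1450"
```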
```
[root@t72 cfg]# cat /usr/lib/systemd/system/docker.service | grep -v "#"

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
```
Restart the services in order, so docker picks up the flannel-generated options:

```
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
```
Check that docker0 and flannel.1 landed on the same subnet:

```
ip addr
```
```
kubectl get nodes -o wide
```
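As a final smoke test, one might run a workload and confirm it gets an IP from the flannel range; a sketch (image and names are arbitrary):

```
# Launch a test deployment and expose it as a NodePort service.
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort

# The pod IPs should fall inside the 10.254.0.0/16 flannel range, and the
# service should answer on its NodePort (30000-50000 per the apiserver config).
kubectl get pods -o wide
kubectl get svc nginx
```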