kube-scheduler is a master-node component. The kube-scheduler cluster consists of 3 nodes; after startup, a leader is chosen through the election mechanism while the remaining nodes block in standby. If the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, which keeps the service highly available.
Note: all operations in this section are executed from the devops machine through ansible. kube-scheduler uses its certificate in the following two situations:

- when connecting to the secure port of kube-apiserver (the certificate is embedded in its kubeconfig);
- when serving HTTPS on its own secure port (the --tls-cert-file / --tls-private-key-file flags below).
```shell
#################### Variable parameter setting ######################
KUBE_NAME=kube-scheduler
K8S_INSTALL_PATH=/data/apps/k8s/kubernetes
K8S_BIN_PATH=${K8S_INSTALL_PATH}/sbin
K8S_LOG_DIR=${K8S_INSTALL_PATH}/logs
K8S_CONF_PATH=/etc/k8s/kubernetes
KUBE_CONFIG_PATH=/etc/k8s/kubeconfig
CA_DIR=/etc/k8s/ssl
SOFTWARE=/root/software
VERSION=v1.14.2
PACKAGE="kubernetes-server-${VERSION}-linux-amd64.tar.gz"
DOWNLOAD_URL="https://github.com/devops-apps/download/raw/master/kubernetes/${PACKAGE}"
ETH_INTERFACE=eth1
LISTEN_IP=$(ifconfig | grep -A 1 ${ETH_INTERFACE} | grep inet | awk '{print $2}')
USER=k8s
```
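The `ifconfig | grep -A 1` pipeline above is fragile, since ifconfig output formats differ between distributions. A minimal alternative sketch using iproute2, demonstrated here on the loopback device (on the real masters you would pass `${ETH_INTERFACE}`, i.e. eth1, instead of `lo`):

```shell
# Hypothetical helper: derive the first IPv4 address of an interface with iproute2.
# "lo" is used only so the example is runnable anywhere; substitute eth1 on the hosts.
iface=lo
LISTEN_IP=$(ip -4 -o addr show dev "$iface" | awk '{print $4}' | cut -d/ -f1 | head -n1)
echo "$LISTEN_IP"
```

The `-o` flag keeps each address on a single line, so the awk/cut extraction does not depend on multi-line layout.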
Visit the official kubernetes GitHub repository and download a stable release package to the local machine:
```shell
wget $DOWNLOAD_URL -P $SOFTWARE
```
Distribute the kubernetes package to each master node:
```shell
sudo ansible master_k8s_vgs -m copy -a "src=${SOFTWARE}/$PACKAGE dest=${SOFTWARE}/" -b
```
```shell
### 1.Check if the install directory exists.
if [ ! -d "$K8S_BIN_PATH" ]; then
    mkdir -p $K8S_BIN_PATH
fi

if [ ! -d "$K8S_LOG_DIR/$KUBE_NAME" ]; then
    mkdir -p $K8S_LOG_DIR/$KUBE_NAME
fi

if [ ! -d "$K8S_CONF_PATH" ]; then
    mkdir -p $K8S_CONF_PATH
fi

if [ ! -d "$KUBE_CONFIG_PATH" ]; then
    mkdir -p $KUBE_CONFIG_PATH
fi

### 2.Install kube-scheduler binary of kubernetes.
if [ ! -f "$SOFTWARE/kubernetes-server-${VERSION}-linux-amd64.tar.gz" ]; then
    wget $DOWNLOAD_URL -P $SOFTWARE >>/tmp/install.log 2>&1
fi
cd $SOFTWARE && tar -xzf kubernetes-server-${VERSION}-linux-amd64.tar.gz -C ./
cp -fp kubernetes/server/bin/$KUBE_NAME $K8S_BIN_PATH
ln -sf $K8S_BIN_PATH/$KUBE_NAME /usr/local/bin
chown -R $USER:$USER $K8S_INSTALL_PATH
chmod -R 755 $K8S_INSTALL_PATH
```
```shell
cd ${CA_DIR}
sudo ansible master_k8s_vgs -m copy -a "src=kube-scheduler.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a "src=kube-scheduler-key.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a "src=ca.pem dest=${CA_DIR}/" -b
sudo ansible master_k8s_vgs -m copy -a "src=ca-key.pem dest=${CA_DIR}/" -b
```
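After the certificates are pushed out, it is worth verifying on each master that kube-scheduler.pem really chains to ca.pem. The sketch below builds a throwaway CA and leaf certificate in a scratch directory purely so the check is demonstrable anywhere; on the real hosts you would run only the final `openssl verify` line against the files in ${CA_DIR}:

```shell
# Demo only: throwaway CA + leaf cert standing in for the real files in /etc/k8s/ssl.
set -e
WORK=$(mktemp -d)
cd "$WORK"
# self-signed demo CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
    -subj "/CN=demo-ca" -days 1
# leaf key + CSR with the kube-scheduler identity
openssl req -newkey rsa:2048 -nodes -keyout kube-scheduler-key.pem \
    -out kube-scheduler.csr -subj "/CN=system:kube-scheduler"
# sign the leaf with the demo CA
openssl x509 -req -in kube-scheduler.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out kube-scheduler.pem -days 1
# the actual check to run on each master node:
openssl verify -CAfile ca.pem kube-scheduler.pem
```

A successful run prints `kube-scheduler.pem: OK`; anything else means the distributed certificate and CA do not match.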
kube-scheduler connects to the apiserver using a kubeconfig file, which provides the apiserver address, the embedded CA certificate, and the kube-scheduler client certificate:
```shell
cd $KUBE_CONFIG_PATH
sudo ansible master_k8s_vgs -m copy -a \
    "src=kube-scheduler.kubeconfig dest=$KUBE_CONFIG_PATH/" -b
```
Note: if the kubeconfig and certificate files of each component were already synchronized in an earlier section, this step can be skipped.
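For reference, the kube-scheduler.kubeconfig distributed above has the usual client-kubeconfig shape. A skeleton sketch is written out below; the server address and the base64 placeholders are illustrative, not values from this deployment:

```shell
# Illustrative skeleton of a kube-scheduler kubeconfig; placeholders in <> are
# NOT real values and must be replaced by the output of kubectl config commands.
cat > /tmp/kube-scheduler.kubeconfig.example <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://<apiserver-address>:6443
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <base64 of kube-scheduler.pem>
    client-key-data: <base64 of kube-scheduler-key.pem>
contexts:
- name: system:kube-scheduler
  context:
    cluster: kubernetes
    user: system:kube-scheduler
current-context: system:kube-scheduler
EOF
grep 'current-context' /tmp/kube-scheduler.kubeconfig.example
```

In practice this file is generated with `kubectl config set-cluster`, `set-credentials`, `set-context`, and `use-context` (with `--embed-certs=true`), which fill in the certificate data automatically.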
```shell
cat >${K8S_CONF_PATH}/kube-scheduler.yaml<<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 127.0.0.1:10251
EOF
```
```shell
cat >/usr/lib/systemd/system/${KUBE_NAME}.service<<EOF
[Unit]
Description=Kubernetes kube-scheduler Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
User=${USER}
WorkingDirectory=${K8S_INSTALL_PATH}
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
  --config=/etc/k8s/kubernetes/kube-scheduler.yaml \\
  --bind-address=${LISTEN_IP} \\
  --secure-port=10259 \\
  --tls-cert-file=${CA_DIR}/kube-scheduler.pem \\
  --tls-private-key-file=${CA_DIR}/kube-scheduler-key.pem \\
  --kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authentication-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authorization-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --client-ca-file=${CA_DIR}/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=${CA_DIR}/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=${K8S_LOG_DIR}/${KUBE_NAME} \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
```shell
systemctl status kube-scheduler|grep Active
```
Make sure the state is active (running); otherwise inspect the logs to find the cause:
```shell
sudo journalctl -u kube-scheduler
```
Note: run the following commands on a kube-scheduler node. kube-scheduler listens on ports 10251 and 10259: 10251 serves plain HTTP on 127.0.0.1 (per healthzBindAddress/metricsBindAddress in the config above), while 10259 serves HTTPS on the node IP. Both expose /metrics and /healthz.
```shell
sudo netstat -ntlp | grep kube-sc
tcp    0    0 127.0.0.1:10251      0.0.0.0:*    LISTEN    28786/kube-schedule
tcp    0    0 10.10.10.22:10259    0.0.0.0:*    LISTEN    28786/kube-schedule
```
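The listener check can also be scripted. The sketch below parses the netstat sample shown above (embedded here as data so the example is self-contained); on a live node you would pipe in `sudo netstat -ntlp | grep kube-sc` instead:

```shell
# Sample lines copied from the netstat output above.
sample='tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 28786/kube-schedule
tcp 0 0 10.10.10.22:10259 0.0.0.0:* LISTEN 28786/kube-schedule'

# Column 4 is the local address; take the part after the last ":" to get the port.
ports=$(echo "$sample" | awk '{print $4}' | awk -F: '{print $NF}' | sort | tr '\n' ' ')
echo "listening ports: $ports"
# prints: listening ports: 10251 10259
```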
Note: many installation guides disable the non-secure port and move the secure port to the default non-secure port number. This causes the error shown below when checking cluster status: when you run `kubectl get cs`, the apiserver sends the health-check request to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode, they may not be on the same machine as kube-apiserver and may only be reachable over https, so their status is reported as Unhealthy even though they are working normally. In other words, the error below appears even though the cluster is actually healthy.
```shell
kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                 ERROR
controller-manager   Unhealthy   dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
```
The normal output should be:
```shell
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```
```shell
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
```
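The current leader is recorded in the `control-plane.alpha.kubernetes.io/leader` annotation of that endpoints object. A sketch that extracts `holderIdentity` from a sample annotation value (the JSON below is illustrative, not output captured from this cluster):

```shell
# Sample annotation value; on a live cluster it comes from the -o yaml output above.
leader='{"holderIdentity":"k8s-master01_7a8b9c","leaseDurationSeconds":15,"renewTime":"2019-06-01T12:00:00Z"}'

# Pull the holderIdentity field out with sed (no jq dependency).
holder=$(echo "$leader" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')
echo "current leader: $holder"
# prints: current leader: k8s-master01_7a8b9c
```

The part of `holderIdentity` before the underscore is the hostname of the node currently holding the lease.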
Pick one or two master nodes at random, stop their kube-scheduler service, and check whether one of the other nodes acquires the leader lease.
Once kube-scheduler is deployed, the master-node portion of the kubernetes cluster is complete; the node-side hosts still need to be deployed next. The kube-scheduler deployment script can be obtained here.