High-availability cluster architecture
Component versions:
Software | Version |
---|---|
Linux OS | CentOS7.5_x64 |
Kubernetes | 1.12 |
Docker | 18.xx-ce |
Etcd | 3.x |
Flannel | 0.10 |
Server roles:
Role | IP | Components |
---|---|---|
master01 | 192.168.1.43 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
master02 | 192.168.1.63 | kube-apiserver, kube-controller-manager, kube-scheduler |
node01 | 192.168.1.30 | kubelet, kube-proxy, docker, flannel, etcd |
node02 | 192.168.1.51 | kubelet, kube-proxy, docker, flannel, etcd |
node03 | 192.168.1.141 | kubelet, kube-proxy, docker, flannel |
Load Balancer (Master) | 192.168.1.31, 192.168.1.230 (VIP) | Nginx L4 |
Load Balancer (Backup) | 192.168.1.186 | Nginx L4 |
Self-signed SSL certificates:
Component | Certificates used |
---|---|
etcd | ca.pem,server.pem,server-key.pem |
flannel | ca.pem,server.pem,server-key.pem |
kube-apiserver | ca.pem,server.pem,server-key.pem |
kubelet | ca.pem,ca-key.pem |
kube-proxy | ca.pem,kube-proxy.pem,kube-proxy-key.pem |
kubectl | ca.pem,admin.pem,admin-key.pem |
Preparation:
Disable the firewall:
# systemctl stop firewalld && systemctl disable firewalld
Synchronize the clocks (TLS validation is time-sensitive):
# yum -y install ntpdate && ntpdate time.windows.com
Install cfssl:
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl                      # cfssl generates the certificates
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson              # cfssljson takes the JSON output and writes the certificate files
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo    # cfssl-certinfo displays certificate information
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
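Optionally, a quick check that the tools landed on the PATH (cfssl prints its version string):
# cfssl version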
# mkdir ~/k8s/etcd-cert -p
# cd ~/k8s/etcd-cert
CA root certificate config:
# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
CA certificate signing request:
# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
Issue the SSL certificate for etcd (add the etcd node IPs to the hosts list):
# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.1.43",
    "192.168.1.30",
    "192.168.1.51"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Generate the certificates:
Initialize the CA root certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generates ca-key.pem and ca.pem
Generate the server certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# generates server-key.pem and server.pem
# Options:
#   -ca=ca.pem               the CA certificate
#   -ca-key=ca-key.pem       the CA private key
#   -config=ca-config.json   the CA config file
#   -profile=www             use the "www" profile from the config file
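As a quick check that the etcd node IPs actually ended up in the issued certificate, it can be inspected with cfssl-certinfo (installed above):
# cfssl-certinfo -cert server.pem                    # prints the certificate details as JSON
# cfssl-certinfo -cert server.pem | grep -A4 sans    # the SAN list should contain 192.168.1.43, 192.168.1.30 and 192.168.1.51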
Download the etcd binary package: https://github.com/etcd-io/etcd/releases
Unpack the binary package:
# cd ~/k8s
# tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
Create the etcd directories:
# mkdir /opt/etcd/{cfg,bin,ssl} -p    # config, binaries, certificates
Move the executables into the etcd directory:
# cd ~/k8s/etcd-v3.3.10-linux-amd64
# mv etcd etcdctl /opt/etcd/bin/
# ls /opt/etcd/bin/
etcd  etcdctl
Copy the freshly generated SSL files into the etcd directory:
# cd ~/k8s/etcd-cert
# cp *.pem /opt/etcd/ssl/
# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Create the etcd configuration file:
# cat <<EOF >/opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.43:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.43:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Create the systemd unit file:
# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Enable etcd at boot and start it:
# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
Copy the etcd files to node1 and node2:
# scp -r /opt/etcd/ root@192.168.1.30:/opt/
# scp -r /opt/etcd/ root@192.168.1.51:/opt/
# scp /usr/lib/systemd/system/etcd.service root@192.168.1.51:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/etcd.service root@192.168.1.30:/usr/lib/systemd/system/
Modify the configuration files:
node1:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.30:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.30:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node2:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.51:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.51:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.51:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.51:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Enable etcd at boot and start it:
# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
Check the etcd cluster status:
# cd /root/k8s/etcd-cert
# /opt/etcd/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
> cluster-health
member 8da171dbef9ded69 is healthy: got healthy result from https://192.168.1.51:2379
member d250ef9d0d70c7c9 is healthy: got healthy result from https://192.168.1.30:2379
member f3b3c9aa5b97cee8 is healthy: got healthy result from https://192.168.1.43:2379
cluster is healthy
Install Docker on the Node machines:
# yum install -y yum-utils device-mapper-persistent-data lvm2    # install dependencies
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo    # add the Docker package repository
# yum install -y docker-ce    # install Docker CE
# systemctl start docker && systemctl enable docker    # start Docker and enable it at boot
Deploy the Flannel network. How it works: flanneld keeps its network configuration in etcd and allocates each node a subnet from it, so first write the Pod network configuration into etcd:
# cd /root/k8s/etcd-cert
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
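To confirm the key was written, it can be read back with the same certificates and endpoints:
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
get /coreos.com/network/config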
Download the flannel binary package: https://github.com/coreos/flannel/releases
Unpack the binary package:
# tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
Create the k8s directories:
# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
Move the executables into the k8s directory:
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
Create the flannel configuration file:
# cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379 \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
Create the flannel systemd unit file:
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Configure Docker to start with the subnet assigned by flannel:
# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart docker and flannel:
# systemctl daemon-reload && systemctl start flanneld && systemctl enable flanneld
# systemctl restart docker
Verify that it took effect:
# ps -ef |grep docker
root     42770     1  0 12:41 ?        00:00:00 /usr/bin/dockerd --bip=172.17.75.1/24 --ip-masq=false --mtu=1450
# ip addr
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ce:e0:c4:9f:7b:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.75.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::cce0:c4ff:fe9f:7b64/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:41:6d:53:ce brd ff:ff:ff:ff:ff:ff
    inet 172.17.75.1/24 brd 172.17.75.255 scope global docker0
       valid_lft forever preferred_lft forever
Copy the files to the other nodes:
scp -r /opt/kubernetes/ root@192.168.1.51:/opt
scp -r /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.1.51:/usr/lib/systemd/system/
Finally, make sure the Pod network is reachable across the whole cluster:
# docker run -it busybox sh
# ping 172.17.67.2
Before deploying Kubernetes, make sure etcd, flannel and docker are all working correctly; fix any problems before going on.
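A minimal pre-flight check on a node, using only what was set up above, might look like this:
# systemctl is-active etcd flanneld docker   # each should print "active" (etcd only on the nodes that run it)
# cat /run/flannel/subnet.env                # the subnet flannel handed to docker
# ip -4 addr show flannel.1                  # this node's flannel address
# ip -4 addr show docker0                    # docker0 should sit inside the subnet from subnet.env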
Create the CA certificate:
Create the directory:
# cd ~/k8s
# mkdir k8s-cert
# cd k8s-cert
# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Initialize the CA:
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the apiserver certificate (the hosts list controls which addresses the certificate is valid for; for HA include the master IPs, the LB IPs and the VIP):
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.1.43",
    "192.168.1.63",
    "192.168.1.31",
    "192.168.1.186",
    "192.168.1.230",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate the kube-proxy certificate:
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The following certificate files are produced:
# ls *.pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
Create the k8s directories:
# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
Copy the SSL files into the k8s directory:
# cp ca*.pem server*.pem /opt/kubernetes/ssl/
Download the binary packages: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
Downloading kubernetes-server-linux-amd64.tar.gz alone is enough; it contains all the required components.
# cd ~/k8s
# tar -zxvf kubernetes-server-linux-amd64.tar.gz
# cd ~/k8s/kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
Create the token file:
Generate a token:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# vim /opt/kubernetes/cfg/token.csv
2f7a15198f7c0c3af3ba7f264b6885c2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: a random string (generate your own)
Column 2: user name
Column 3: UID
Column 4: user group
(A one-step way to generate this file is sketched below.)
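The token and the token.csv file can also be produced in one step; BOOTSTRAP_TOKEN below is just an illustrative shell variable, and whatever token ends up in this file must match the one used later in bootstrap.kubeconfig:
# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# cat /opt/kubernetes/cfg/token.csv    # random-string,user,UID,group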
Create the apiserver configuration file (adjust the master address and the etcd servers for your environment):
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379 \\
--bind-address=192.168.1.43 \\
--secure-port=6443 \\
--advertise-address=192.168.1.43 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
Point it at the certificates generated earlier and make sure it can connect to etcd.
Parameter notes:
--etcd-servers: etcd cluster endpoints
--bind-address / --advertise-address: this master's listen and advertised address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes (RBAC and Node)
--enable-bootstrap-token-auth / --token-auth-file: enable TLS bootstrap token authentication
--service-node-port-range: NodePort port range
Manage the apiserver with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
# ps -ef | grep kube-apiserver
Create the kube-scheduler configuration file:
# cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF
Parameter notes:
--master: connect to the apiserver on the local non-secure port (127.0.0.1:8080)
--leader-elect: enable leader election when the component runs on multiple masters
Create the systemd unit file:
# cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
# ps -ef | grep kube-scheduler
Create the controller-manager configuration file:
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
Manage controller-manager with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
# ps -ef | grep kube-controller-manager
All master components are now running; check the cluster component status with kubectl:
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Output like the above means all components are healthy.
With TLS authentication enabled on the master apiserver, a node's kubelet must present a valid certificate signed by the CA before it can talk to the apiserver. Signing certificates by hand for many nodes is tedious, which is what the TLS bootstrapping mechanism solves: the kubelet authenticates as a low-privilege user and automatically requests a certificate, and the apiserver signs the kubelet certificate dynamically.
The rough authentication workflow is shown in the figure. First, bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role (on the master):
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Create the kubelet bootstrapping kubeconfig (on the master):
# cd ~/k8s
# mkdir kubeconfig
# cd kubeconfig/
Add kubectl to the PATH:
# vi /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
# source /etc/profile
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.43:6443 \
  --kubeconfig=bootstrap.kubeconfig
# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=2f7a15198f7c0c3af3ba7f264b6885c2 \
  --kubeconfig=bootstrap.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Create the kube-proxy kubeconfig file (on the master):
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.43:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/root/k8s/k8s-cert/kube-proxy.pem \
  --client-key=/root/k8s/k8s-cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig
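Optionally, sanity-check the generated files before copying them out; kubectl prints the cluster, user and context entries (certificate data is redacted):
# kubectl config view --kubeconfig=bootstrap.kubeconfig
# kubectl config view --kubeconfig=kube-proxy.kubeconfig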
Copy the kubeconfig files to the nodes:
# scp kube-proxy.kubeconfig bootstrap.kubeconfig root@192.168.1.30:/opt/kubernetes/cfg/
# scp kube-proxy.kubeconfig bootstrap.kubeconfig root@192.168.1.51:/opt/kubernetes/cfg/
Copy kubelet and kube-proxy from the binary package downloaded earlier into /opt/kubernetes/bin on the nodes:
# cd ~/k8s/kubernetes/server/bin
# scp kubelet kube-proxy root@192.168.1.30:/opt/kubernetes/bin/
# scp kubelet kube-proxy root@192.168.1.51:/opt/kubernetes/bin/
Create the kubelet configuration file (on the node):
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.1.30 \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
Parameter notes:
--hostname-override: this node's name in the cluster (its IP here)
--kubeconfig: where the kubeconfig generated after bootstrapping is stored
--bootstrap-kubeconfig: the bootstrap kubeconfig copied from the master
--config: the kubelet configuration file
--cert-dir: where the issued certificates are stored
--pod-infra-container-image: the pod infrastructure (pause) container image
The /opt/kubernetes/cfg/kubelet.config file referenced above is:
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.30
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
Manage kubelet with systemd:
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# ps -ef | grep kubelet
Approve the node on the master:
After kubelet starts, the node has not joined the cluster yet; its certificate request must be approved manually.
On the master, view the pending certificate signing requests, approve them, then check the node list:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
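When several nodes bootstrap at the same time, approving requests one by one gets tedious; a sketch that approves every listed request in one go (use with care, it approves everything):
# kubectl get csr -o name | xargs kubectl certificate approve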
Create the kube-proxy configuration file:
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.1.30 \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--masquerade-all=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
Manage kube-proxy with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
# ps -ef | grep kube-proxy
Copy the configuration to the other node:
Configuration files:
# scp -r /opt/kubernetes/ root@192.168.1.51:/opt/
systemd unit files:
# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.1.51:/usr/lib/systemd/system/
Delete the copied SSL files (they were issued by the master for the first node; this node will request its own):
# rm -f /opt/kubernetes/ssl/*
Change the node IP in the configuration files kubelet, kubelet.config and kube-proxy (see the sketch below):
# cd /opt/kubernetes/cfg
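If the files were copied from node01, the only thing to change is the node IP; a minimal sketch, assuming this node is node02 (192.168.1.51):
# cd /opt/kubernetes/cfg
# sed -i 's/192.168.1.30/192.168.1.51/g' kubelet kubelet.config kube-proxy   # swap the old node IP for this node's IP
# grep -n '192.168.1' kubelet kubelet.config kube-proxy                      # confirm only this node's IP remains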
Start kubelet and kube-proxy:
# systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
# ps -ef | grep kube-proxy
# systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# ps -ef | grep kubelet
Approve the new node on the master:
After kubelet starts, the node has not joined the cluster yet; its certificate request must be approved manually.
On the master, view the pending certificate signing requests, approve them, then check the node list:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.30   Ready    <none>   14h   v1.12.7
192.168.1.51   Ready    <none>   23s   v1.12.7
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
The single-master cluster is now complete; next, extend it to multiple masters.
Copy all the components to master02:
# scp -r /opt/kubernetes/ root@192.168.1.63:/opt
Copy the systemd unit files:
# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.1.63:/usr/lib/systemd/system/
Copy the etcd files (the certificates the apiserver needs to reach etcd):
# scp -r /opt/etcd/ root@192.168.1.63:/opt/
Change the apiserver address options (--bind-address and --advertise-address) to master02's IP:
# vi /opt/kubernetes/cfg/kube-apiserver
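The same edit can be scripted on master02; a sketch assuming only the bind and advertise addresses need to change from 192.168.1.43 to 192.168.1.63:
# sed -i 's/address=192.168.1.43/address=192.168.1.63/g' /opt/kubernetes/cfg/kube-apiserver   # updates --bind-address and --advertise-address
# grep 'address=' /opt/kubernetes/cfg/kube-apiserver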
Start the components on master02:
Start kube-apiserver:
# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
Start kube-scheduler:
# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
Start kube-controller-manager:
# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
Check that all three components are running:
# ps -ef | grep kube
Check the cluster status:
Add kubectl to the PATH:
# vi /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
# source /etc/profile
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.30   Ready    <none>   15h   v1.12.7
192.168.1.51   Ready    <none>   53m   v1.12.7
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
nginx-master:
Configure the repo:
# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
Install nginx:
# yum -y install nginx
Add the L4 load-balancing configuration:
# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.1.43:6443;
        server 192.168.1.63:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
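Before starting nginx, the configuration can be validated; note that the stream block must sit at the top level of nginx.conf, outside the http block:
# nginx -t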
Start it:
Disable SELinux:
# setenforce 0
# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
# systemctl start nginx
# netstat -anpt | grep 6443
# echo "master" > /usr/share/nginx/html/index.html
nginx-backup:
Configure the repo:
# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
Install nginx:
# yum -y install nginx
Copy the configuration to the backup:
# scp /etc/nginx/nginx.conf root@192.168.1.31:/etc/nginx/
Disable SELinux:
# setenforce 0
# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
# systemctl start nginx
# netstat -anpt | grep 6443
# echo "backup" > /usr/share/nginx/html/index.html
Install keepalived on both the master and the backup LB:
# yum -y install keepalived
Keepalived configuration on the master:
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   # notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51    # VRRP route ID, unique per instance
    priority 100            # priority; set 90 on the backup
    advert_int 1            # VRRP heartbeat interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.230/24
    }
    track_script {
        check_nginx
    }
}
Keepalived configuration on the backup:
! Configuration File for keepalived
global_defs {
   # notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51    # VRRP route ID, unique per instance
    priority 90             # priority; lower than the master
    advert_int 1            # VRRP heartbeat interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.230/24
    }
    track_script {
        check_nginx
    }
}
Nginx health-check script (stops keepalived when nginx is no longer running so the VIP fails over; the script must be executable):
# vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# chmod +x /etc/nginx/check_nginx.sh
Start keepalived:
# systemctl start keepalived
Stop nginx on the master LB to test failover:
# systemctl stop nginx
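A minimal way to watch the failover, assuming the ens32 interface name from the keepalived configuration:
# ip addr show ens32 | grep 192.168.1.230      # the VIP sits on the master LB; after nginx is stopped there it should appear on the backup
# curl -k https://192.168.1.230:6443/version   # the VIP should reach an apiserver through the nginx L4 proxy (even an authentication error proves the path works)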
Point the nodes at the VIP: on every node, change the server address in the three kubeconfig files from the master's IP to https://192.168.1.230:6443, then restart kubelet and kube-proxy:
# cd /opt/kubernetes/cfg
# vi bootstrap.kubeconfig
# vi kubelet.kubeconfig
# vi kube-proxy.kubeconfig
# systemctl restart kubelet
# systemctl restart kube-proxy
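A quick check that all three files now point at the VIP:
# grep 'server:' /opt/kubernetes/cfg/*.kubeconfig   # each should show https://192.168.1.230:6443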
Bind the anonymous user (commonly added in this kind of setup so that requests such as kubectl logs/exec, which reach the kubelet without a client certificate, are authorized):
kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous
Run a test deployment and expose it as a NodePort service:
# kubectl run nginx --image=nginx --replicas=3
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
View the Pods and the Service:
# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-j4bjq   1/1     Running   0          19m
nginx-dbddb74b8-kpqht   1/1     Running   0          19m
nginx-dbddb74b8-xjn5k   1/1     Running   0          19m
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        16h
nginx        NodePort    10.0.0.33    <none>        88:32694/TCP   20m
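To test the service end to end, the NodePort shown above (32694 in this run) can be hit on any node IP; a sketch:
# curl http://192.168.1.30:32694   # should return the nginx welcome page
# curl http://10.0.0.33:88         # the ClusterIP also answers from a node running kube-proxy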
Kubernetes Dashboard manifests: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
# cd /k8s/Dashboard
# ls
dashboard-configmap.yaml  dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-secret.yaml  dashboard-service.yaml  k8s-admin.yaml
# kubectl apply -f .
# kubectl get pod,svc -o wide --all-namespaces | grep dashboard
kube-system   pod/kubernetes-dashboard-65f974f565-crvwj   1/1   Running   1   6m1s   172.17.75.2   192.168.1.30   <none>
kube-system   service/kubernetes-dashboard   NodePort   10.0.0.192   <none>   443:30001/TCP   6m   k8s-app=kubernetes-dashboard
Access it (preferably with Firefox): https://192.168.1.30:30001
Get the login token:
# kubectl get secret --all-namespaces | grep dashboard
kube-system   dashboard-admin-token-nrvzx        kubernetes.io/service-account-token   3   9m16s
kube-system   kubernetes-dashboard-certs         Opaque                                0   9m17s
kube-system   kubernetes-dashboard-key-holder    Opaque                                2   9m17s
kube-system   kubernetes-dashboard-token-cqqm8   kubernetes.io/service-account-token   3   9m17s
# kubectl describe secret dashboard-admin-token-nrvzx -n kube-system