Before we begin, let's review what we have covered so far:
We started with kubeadm, which installs Kubernetes as a silent black box: everything is done automatically, so we never see the individual steps involved. That is exactly why this manual deployment guide exists, so that you can better understand the differences between the two approaches and the full deployment workflow.
Next we learned how to access the cluster through the Dashboard, along with some important skills such as Labels, the DaemonSet scheduling workhorse, and application health checks.
We also covered operations closer to real-world usage: the simplest form of external access (good for a quick first experience), the more advanced nginx-ingress and traefik-ingress for external access in production scenarios, as well as elastic scaling and rolling upgrades of workloads.
Finally we learned about storage resource management and externalized configuration with ConfigMap, both of which are very practical. Today we will set up Kubernetes by hand; as mentioned, the goal is to give you a deeper understanding of K8S. Some of you may wonder why the series is circling back at this point; my apologies for not planning it better and leaving the manual deployment until now. Without further ado, let's get started.
Environment:
CentOS 7.4 minimal, Docker 17.03-ce, etcd 3.1, Kubernetes 1.11
We will use three nodes to build a lab environment:
10.0.100.202 k8s-master
10.0.100.203 k8s-node1
10.0.100.204 k8s-node2
Preparation (run the following 6 steps on all nodes):
1. Configure the hosts file on every node
2. Disable the firewall on every node
3. Disable SELinux on every node
4. Disable swap on every node (comment out the swap line in /etc/fstab); example commands for steps 1-4 are sketched below
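A minimal sketch of steps 1 through 4, assuming the three hosts listed above (adjust names and IPs to your own environment):

# 1. hosts entries, on every node
cat >> /etc/hosts <<EOF
10.0.100.202 k8s-master
10.0.100.203 k8s-node1
10.0.100.204 k8s-node2
EOF
# 2. stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# 3. disable SELinux now and after reboot
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# 4. turn off swap now and comment it out in /etc/fstab
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab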
5. On every node, tune kernel parameters so that traffic crossing the bridge also goes through the iptables/netfilter framework, by creating /etc/sysctl.d/k8s.conf with the following content:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

6. Configure the required YUM repositories and install Docker

yum -y install epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 wget
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
systemctl enable docker && systemctl restart docker
OK, that completes the preparation. Next we will create the TLS certificates and keys needed to deploy the cluster.
The Kubernetes components use TLS certificates to encrypt their communication. In this guide we use cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) and the other certificates.
Note: all of the following steps are performed on the master node, i.e. 10.0.100.202. The certificates only need to be created once; when adding new nodes to the cluster later, simply copy the certificates under /etc/kubernetes/ to the new node.
Master node
1. Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
2. Configure the CA

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# create the following ca-config.json based on the format of config.json
# the expiry is set to 87600h (10 years)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

3. Create the CA certificate signing request, i.e. the ca-csr.json file, with the following content:
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
4. Generate the CA certificate and private key
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
5. Create the Kubernetes certificate signing request file kubernetes-csr.json; remember to replace the IPs with your own (keep 10.254.0.1 in the list, since it is the first IP of the service cluster IP range 10.254.0.0/16 configured later):
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.100.202",
    "10.0.100.203",
    "10.0.100.204",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
6. Generate the kubernetes certificate and private key
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
7. Create the admin certificate signing request file admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
8. Generate the admin certificate and private key
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
9. Create the kube-proxy certificate signing request file kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
10. Generate the kube-proxy client certificate and private key
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
11. Distribute the certificates: copy the generated certificate and key files (the .pem files) to /etc/kubernetes/ssl on every machine for later use (create that directory on the other nodes first).
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
cd /etc/kubernetes
scp ./ssl/* 10.0.100.203:/etc/kubernetes/ssl/
scp ./ssl/* 10.0.100.204:/etc/kubernetes/ssl/
12. Install the kubectl command-line tool
wget https://dl.k8s.io/v1.11.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*
13. Create the kubectl kubeconfig file

export KUBE_APISERVER="https://10.0.100.202:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# set client credentials
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# use it as the default context
kubectl config use-context kubernetes
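The commands above write to the default kubeconfig at ~/.kube/config. A quick optional sanity check:

kubectl config view
# the current-context should be "kubernetes" and the server should be https://10.0.100.202:6443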
14. Create the TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cp token.csv /etc/kubernetes/
15. Create the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://10.0.100.202:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# use it as the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

16. Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://10.0.100.202:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# use it as the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

17. Distribute the kubeconfig files: copy the two kubeconfig files into /etc/kubernetes/ on every node.
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
scp ./bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.100.203:/etc/kubernetes/
scp ./bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.100.204:/etc/kubernetes/
OK, that wraps up the certificates and keys. We just created quite a few of them, which can get confusing, so here is a summary:
The generated certificate and key files are:
ca-key.pem ca.pem kubernetes-key.pem kubernetes.pem kube-proxy.pem kube-proxy-key.pem admin.pem admin-key.pem
The components use the certificates as follows:

etcd: ca.pem, kubernetes-key.pem, kubernetes.pem
kube-apiserver: ca.pem, kubernetes-key.pem, kubernetes.pem
kubelet: ca.pem
kube-proxy: ca.pem, kube-proxy-key.pem, kube-proxy.pem
kubectl: ca.pem, admin-key.pem, admin.pem
kube-controller-manager: ca-key.pem, ca.pem
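If you ever need to double-check what a certificate contains (its SANs, expiry, or the organization field that RBAC keys off), cfssl-certinfo or openssl can decode it. A couple of optional examples:

cd /etc/kubernetes/ssl
cfssl-certinfo -cert kubernetes.pem
openssl x509 -noout -subject -dates -in admin.pem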
With the summary above it should all be clear. Now let's move on to installing the etcd cluster.
Deploy etcd on all nodes
Kubernetes stores all of its data in etcd, so we will build a three-node etcd cluster across master, node1, and node2. We already created plenty of TLS certificates earlier, so we simply reuse the kubernetes certificate here. Run the following steps on all nodes.
1. Download the etcd release
wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin
2. Create the systemd unit file for etcd at /usr/lib/systemd/system/etcd.service with the content below. Remember to replace the IP addresses with those of your own etcd cluster hosts.
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster k8s-master=https://10.0.100.202:2380,k8s-node1=https://10.0.100.203:2380,k8s-node2=https://10.0.100.204:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Note: etcd's data directory is /var/lib/etcd and must be created before starting the service, otherwise the service fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory".
3. The environment variable file /etc/etcd/etcd.conf
#[member]
ETCD_NAME=k8s-master
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.100.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.100.202:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.100.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.100.202:2379"
Note: this is the configuration for the 10.0.100.202 node. For the other two etcd nodes, just change the IP addresses to the corresponding node's IP and set ETCD_NAME to k8s-node1 or k8s-node2 respectively; a sample for k8s-node1 follows.
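For reference, a sketch of /etc/etcd/etcd.conf on k8s-node1 (10.0.100.203), derived from the master configuration above:

#[member]
ETCD_NAME=k8s-node1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.100.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.100.203:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.100.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.100.203:2379"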
4. Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
# repeat the steps above on all Kubernetes nodes until the etcd service is running on every machine
5. Verify the service
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem cluster-health
2018-08-14 02:16:44.081321 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-08-14 02:16:44.084285 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 109271147228d387 is healthy: got healthy result from https://10.0.100.203:2379
member 298a4447067ff8b8 is healthy: got healthy result from https://10.0.100.204:2379
member 5bc4c443d246701d is healthy: got healthy result from https://10.0.100.202:2379
cluster is healthy
When the last line reads cluster is healthy, the cluster is working properly.
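You can also list the members and see which one is currently the leader, using the same client certificate flags:

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem member list
# expect three members, one of them reporting isLeader=true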
Master node
Back to the master node, picking up where we left off before the etcd detour. Now let's deploy the services the master needs: kube-apiserver, kube-scheduler, and kube-controller-manager.
1. Download the Kubernetes v1.11 server package
wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
2. Create the service unit file for kube-apiserver at /usr/lib/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_ETCD_SERVERS \
  $KUBE_API_ADDRESS \
  $KUBE_API_PORT \
  $KUBELET_PORT \
  $KUBE_ALLOW_PRIV \
  $KUBE_SERVICE_ADDRESSES \
  $KUBE_ADMISSION_CONTROL \
  $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The content of /etc/kubernetes/config is:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://10.0.100.202:8080"
# this config file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy
3. The apiserver configuration file /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--advertise-address=10.0.100.202 --bind-address=10.0.100.202 --insecure-bind-address=10.0.100.202"
KUBE_ETCD_SERVERS="--etcd-servers=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
4. Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
5. Create the service unit file for kube-controller-manager at /usr/lib/systemd/system/kube-controller-manager.service:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
6. The configuration file /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
7. Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
8. Create the service unit file for kube-scheduler at /usr/lib/systemd/system/kube-scheduler.service:
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
9. The configuration file /etc/kubernetes/scheduler:
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
10. Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
11. Verify that the master components are healthy
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
Deploy Flannel on all nodes
Next let's install the Flannel network plugin. Every node needs the network plugin so that all Pods can join the same flat network, so run the steps below on every node. Installing flanneld from yum is recommended unless you need a specific version; the package installs flannel 0.7.1 by default.
1. Install flannel
yum install -y flannel
2. Edit the service file /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
3. Edit the configuration file /etc/sysconfig/flanneld:
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
If the host has multiple NICs (for example in a vagrant environment), add the NIC that carries the external traffic to FLANNEL_OPTIONS, e.g. iface=eth2, as in the sketch below.
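A sketch of that change, assuming the outbound interface is eth2 (flanneld's -iface flag selects the interface to use for inter-host communication):

FLANNEL_OPTIONS="-iface=eth2 -etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"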
4. Create the network configuration in etcd (run this only once, on the master node)
etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
If you want to use vxlan mode instead, simply change host-gw to vxlan, as shown below.
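For reference, the same command with the vxlan backend (only the Type field changes; use this instead of the host-gw version if you decide on vxlan from the start):

etcdctl --endpoints=https://10.0.100.202:2379,https://10.0.100.203:2379,https://10.0.100.204:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'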
5. Start flannel

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
Deploy the node services
OK, at this point the master services, the etcd cluster, and the flannel network are all in place. Now let's set up the services on the node side. First confirm that flannel, docker, and etcd are running on each node, and that the certificates and config files are present under /etc/kubernetes/; those steps won't be repeated here.
1. Configure docker to use the flannel network
After flanneld is started via systemctl, it automatically runs mk-docker-opts.sh (the ExecStartPost line in flanneld.service above), which generates the following two environment-variable files under /run/flannel/:
ls /run/flannel/
docker  subnet.env
Docker will read these environment variables as container startup parameters. Edit docker's service file /usr/lib/systemd/system/docker.service and add an environment-file entry:
EnvironmentFile=-/run/flannel/docker
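For reference, the two generated files typically look roughly like the sketch below; the exact subnet and MTU depend on what flannel leased on that node, so treat the values as placeholders:

# /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=10.30.34.1/24 --ip-masq=true --mtu=1500"
# /run/flannel/subnet.env
FLANNEL_NETWORK=10.30.0.0/16
FLANNEL_SUBNET=10.30.34.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false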
To avoid hitting the error "error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"" when we restart kubelet later, also add --exec-opt native.cgroupdriver=systemd to the ExecStart line now. Why does this error happen? It occurs when kubelet and docker are configured with different cgroup drivers; kubelet's --cgroup-driver flag can be set to either "cgroupfs" or "systemd", and the two must match.
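Putting both changes together, the [Service] section of docker.service would look roughly like this sketch; the stock ExecStart line of your docker-ce package may differ slightly, and passing $DOCKER_NETWORK_OPTIONS to dockerd is assumed here because that is the variable name generated by the mk-docker-opts.sh line in flanneld.service:

[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd $DOCKER_NETWORK_OPTIONS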
2. Before installing the node components, go to the master node and create the RBAC role bindings that kubelet needs:
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-nodes \
  --clusterrole=system:node \
  --group=system:nodes
Note: both bindings are required; without them you will run into errors such as "cannot list pods at the cluster scope".
3. Now let's install and configure kubelet
wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/
4. Create kubelet's service unit file /usr/lib/systemd/system/kubelet.service (create the working directory /var/lib/kubelet first, just like /var/lib/etcd earlier):
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBELET_API_SERVER \
  $KUBELET_ADDRESS \
  $KUBELET_PORT \
  $KUBELET_HOSTNAME \
  $KUBE_ALLOW_PRIV \
  $KUBELET_POD_INFRA_CONTAINER \
  $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
5. Create kubelet's configuration file /etc/kubernetes/kubelet, changing the IP addresses to each node's own IP:
KUBELET_ADDRESS="--address=10.0.100.203"
KUBELET_HOSTNAME="--hostname-override=10.0.100.203"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
6. Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
7. On the master node, approve kubelet's TLS certificate request
[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-qgKV6Z_YCV5Zwt0erq2sdtEK8V1z_7Opa5C2JtSW54I   3s        kubelet-bootstrap   Pending
[root@k8s-master ~]# kubectl certificate approve node-csr-qgKV6Z_YCV5Zwt0erq2sdtEK8V1z_7Opa5C2JtSW54I
[root@k8s-master ~]# kubectl get no
NAME           STATUS    ROLES     AGE       VERSION
10.0.100.203   Ready     <none>    10s       v1.11.0
8. Install conntrack
yum install -y conntrack-tools
9. Create the service unit file for kube-proxy at /usr/lib/systemd/system/kube-proxy.service:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
10. The kube-proxy configuration file /etc/kubernetes/proxy (again, replace the IP addresses with each node's own IP):
KUBE_PROXY_ARGS="--bind-address=10.0.100.203 --hostname-override=10.0.100.203 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"
11. Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
With that, our K8S cluster has been built entirely by hand. To wrap up, let's run a quick demo to test it.
$ kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80
deployment "nginx" created
$ kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
$ kubectl describe svc example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     NodePort
IP:                       10.254.102.2
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30460/TCP
Endpoints:                172.17.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
As you can see, we are using the most basic NodePort method to access the service: it was allocated port 30460, and the page can be reached through the IP of any cluster node.
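For example, a quick check from any machine that can reach the nodes (30460 is the NodePort allocated above; yours will likely differ):

curl http://10.0.100.203:30460
# an nginx welcome page means kube-proxy, flannel, and the service are all working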
OK, that's it for the manual deployment. When time allows I will write up some more hands-on material, so stay tuned.
This article draws on Jimmy Song's blog:
https://jimmysong.io/kubernetes-handbook/practice/
My blog will soon also be syndicated to the Tencent Cloud+ Community; you are welcome to join as well: https://cloud.tencent.com/developer/support-plan?invite_code=tcwqxy4yt70z