This writeup is pieced together from several articles found online plus my own understanding. I removed some of the more redundant components and steps from those articles, aiming to keep the deployment as simple as possible.
Kubernetes has two node roles: master nodes and minion nodes.

- apiserver: the entry point through which users interact with the kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes a RESTful API, and persists objects in etcd to keep them consistent.
- scheduler: responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and needs to be placed on a new machine, the scheduler applies its scheduling algorithm to find the most suitable node.
- controller-manager: mainly ensures that the number of running pods matches the replica count defined by a replicationController, and also keeps the service-to-pod mapping up to date.
- etcd: a key-value store used to hold kubernetes state.
- kubelet: runs on the minion nodes and talks to the local Docker daemon, e.g. starting and stopping containers and monitoring their state.
- proxy: runs on the minion nodes and provides proxying for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly, forwarding traffic to the node hosting the target pod (the earliest versions forwarded traffic in the process itself, which was less efficient).
- flannel: an overlay-network tool designed by the CoreOS team for Kubernetes; it has to be downloaded and deployed separately. When Docker starts, it creates an IP address used to talk to its containers; left unmanaged, that address may be identical on every machine and is only reachable locally, so containers on other hosts cannot be reached. Flannel re-plans IP address assignment across all nodes in the cluster, so that containers on different nodes get non-overlapping addresses within one flat internal network and can talk to each other directly over those internal IPs.
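Once the cluster is running, the effect of the proxy's iptables rewriting is easy to observe. A quick sketch, assuming kube-proxy runs in its default iptables mode (KUBE-SERVICES is the chain it maintains in the nat table):

```
# run on a minion once services exist; each Service cluster IP gets a
# match here that jumps to a per-service chain doing the actual DNAT
iptables -t nat -L KUBE-SERVICES -n
```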
OS: CentOS 7.5.1804
IP | hostname | services
---|---|---
192.168.44.201 | node1 | kube-apiserver, kube-scheduler, kube-controller-manager, etcd
192.168.44.202 | node2 | kubelet, kube-proxy, flanneld, docker
192.168.44.203 | node3 | kubelet, kube-proxy, flanneld, docker
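Name resolution is not required by anything below (the configs use raw IPs), but if you want node1/node2/node3 resolvable by name, the simplest route is an /etc/hosts entry on every machine — a sketch, using the full addresses the table abbreviates:

```
cat >> /etc/hosts <<'EOF'
192.168.44.201 node1
192.168.44.202 node2
192.168.44.203 node3
EOF
```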
On the master (node1):

```
yum install kubernetes-master etcd -y
```

On the minions (node2 and node3):

```
yum install kubernetes-node flannel -y
```
```
[root@node1 ~]# vim /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.44.201:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.44.201:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.44.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.44.201:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="default=http://192.168.44.201:2380"
....
```
A quick explanation of the main settings:

[member]
- ETCD_NAME: the etcd node's name
- ETCD_DATA_DIR: etcd's data storage directory
- ETCD_SNAPSHOT_COUNTER: how many committed transactions trigger a snapshot
- ETCD_HEARTBEAT_INTERVAL: the interval between heartbeats among etcd nodes, in milliseconds
- ETCD_ELECTION_TIMEOUT: the maximum timeout for this node to take part in leader election, in milliseconds
- ETCD_LISTEN_PEER_URLS: the list of addresses this node listens on for traffic from other nodes; multiple addresses are comma-separated, each in the form scheme://IP:PORT, where scheme can be http or https
- ETCD_LISTEN_CLIENT_URLS: the list of addresses this node listens on for client traffic

[cluster]
- ETCD_INITIAL_ADVERTISE_PEER_URLS: the addresses this member advertises to the rest of the cluster; cluster data travels over these addresses, so they must be reachable from every member
- ETCD_INITIAL_CLUSTER: the addresses of all cluster members, in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated when there is more than one
- ETCD_ADVERTISE_CLIENT_URLS: the client address list this member broadcasts to the rest of the cluster
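To make the ETCD_INITIAL_CLUSTER format concrete: a hypothetical three-member cluster (the names etcd1..etcd3 and the extra members are made up for illustration — this deployment uses a single member) would carry the full roster on every member:

```
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.44.201:2380,etcd2=http://192.168.44.202:2380,etcd3=http://192.168.44.203:2380"
```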
Note: in k8s only the master node needs etcd configured, because the other nodes share the same etcd service.
```
[root@node1 ~]# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# By default the apiserver only binds 127.0.0.1, which acts as a kind of
# whitelist; changing it to "--address=0.0.0.0" allows all nodes to access it.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster (the etcd service)
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.44.201:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.1.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"
```
```
[root@node1 ~]# vim /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"
```
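Both files above reference /etc/kubernetes/serviceaccount.key, which the packages do not create; with the ServiceAccount admission plugin enabled, kube-apiserver and kube-controller-manager will refuse to start while the file is missing. A minimal way to generate it (the 2048-bit size is just a common choice):

```
# generate the RSA key both daemons will use for service-account tokens
openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
```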
The following file was not modified during this setup, but it is still worth knowing about:
```
[root@node1 kubernetes]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
```
[root@node2 kubernetes]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.44.201:8080"
```
```
[root@node2 kubernetes]# vim /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
# change to this node's IP
KUBELET_ADDRESS="--address=192.168.44.202"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# change to this node's IP
KUBELET_HOSTNAME="--hostname-override=192.168.44.202"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.44.201:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
```
Note: change the IP addresses to match each node.
```
[root@node2 kubernetes]# vim /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.44.201:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
```
Note: node3 gets the same configuration as node2.
The following files were likewise not modified during this setup, but they are also worth knowing about:
```
[root@node2 kubernetes]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
#ExecStart=/usr/bin/flanneld --iface ens33 $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
```
```
[root@node2 kubernetes]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target flanneld.service
Wants=docker-storage-setup.service
# the following line can probably be omitted, as long as docker starts after flanneld
Requires=flanneld.service

[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
```
The flanneld service on the minion nodes must not be started until the etcd service on the master is up.
On the master:

```
systemctl start etcd
systemctl enable etcd
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
```
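flanneld reads the overlay address range from etcd under the FLANNEL_ETCD_PREFIX configured earlier, so that key must exist before flanneld starts on the minions. A minimal example, run on the master once etcd is up — the 10.2.0.0/16 range here is only an assumption; any range that collides with neither the host network nor the service range (10.1.0.0/16 above) will do:

```
# publish the overlay network; flannel leases per-node subnets out of it
etcdctl set /coreos.com/network/config '{ "Network": "10.2.0.0/16" }'
```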
On the minions:

```
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl enable docker
```
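With all services up, the minions should register with the apiserver within a few seconds. A quick check from the master (-s points kubectl at the insecure apiserver port; with the hostname-override settings above, the nodes appear under their IPs):

```
kubectl -s http://192.168.44.201:8080 get nodes
```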
Note: if docker was started earlier and docker0 already exists (check with ip addr), stop docker first (systemctl stop docker) and delete docker0 (ip link delete docker0) before starting it again.
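After the restart it is worth confirming that docker0 was recreated inside the subnet flannel leased for this node — flanneld writes its lease to /run/flannel/subnet.env, and mk-docker-opts.sh (see the unit file above) turns it into docker options in /run/flannel/docker:

```
cat /run/flannel/subnet.env   # FLANNEL_SUBNET should be a slice of the overlay range
cat /run/flannel/docker       # the network options handed to docker
ip -4 addr show docker0       # should sit inside FLANNEL_SUBNET
```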
```
systemctl stop firewalld
systemctl disable firewalld
```
Disable temporarily:

```
setenforce 0
```

Disable permanently:

```
[root@node2 kubernetes]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
```
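The config-file change only takes effect on the next boot (hence the setenforce for the running system); the current mode can be checked at any time:

```
getenforce    # prints Enforcing / Permissive / Disabled
```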
```
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n
```
```
docker search kubernetes-dashboard-amd64
docker images
docker tag siriuszg/kubernetes-dashboard-amd64:latest 192.168.44.201:5000/kubernetes-dashboard-amd64:latest
```
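One step is implicit here: the image must be pulled before it can be tagged, and after tagging it still has to be pushed into the local registry — which is assumed to be already running at 192.168.44.201:5000 and whitelisted as an insecure registry (see $INSECURE_REGISTRY in the docker unit file above), since it speaks plain HTTP:

```
docker pull siriuszg/kubernetes-dashboard-amd64:latest              # makes the image appear in `docker images`
docker push 192.168.44.201:5000/kubernetes-dashboard-amd64:latest   # lets every minion pull it
```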
```
[root@node1 ~]# vim kubernetes-dashboard.yaml
...
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.44.201:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.44.201:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
```
```
[root@node1 ~]# kubectl create -f kubernetes-dashboard.yaml
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
```
If the errors above appear, every existing kubernetes-dashboard object has to be deleted first:
```
[root@node1 ~]# kubectl delete -f kubernetes-dashboard.yaml
```
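Once the stale objects are gone, re-running the create succeeds, and the pod's progress can be watched in the kube-system namespace:

```
kubectl create -f kubernetes-dashboard.yaml
kubectl get pods --namespace=kube-system -o wide
```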
The dashboard can now be reached at 192.168.44.201:8080/ui (the apiserver proxies /ui to the dashboard).

Because the Service is of type NodePort, the dashboard is also exposed directly on the minions, e.g. on node3 (192.168.44.203), at the allocated NodePort.
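The NodePort the Service was given can be looked up as follows; the dashboard answers on that port on any minion's IP:

```
kubectl get service kubernetes-dashboard --namespace=kube-system
```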