Kubernetes (commonly written "k8s") is an open-source container cluster management system from Google. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes is usually paired with Docker and orchestrates multiple host clusters running Docker containers, but it is not limited to Docker: it also supports Rocket (rkt), an alternative container technology.
Features:
The Master node consists mainly of four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each Node consists mainly of three components: kubelet, kube-proxy, and the runtime. The runtime is the container execution environment; Kubernetes currently supports Docker and rkt.
A Pod is the smallest unit of scheduling in Kubernetes. Each Pod runs one or more closely related application containers, which share the IP address and volumes of a special "pause" container. This hard-to-kill pause container serves as the Pod's root container, and its status represents the status of the whole container group. Once a Pod is created it is stored in etcd, then bound to a Node by the Master's scheduler and instantiated by that Node's kubelet.
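As a sketch of the Pod model described above, a minimal single-container Pod manifest might look like the following (the Pod name, image tag, and file path are hypothetical):

```shell
# Write a minimal Pod manifest; every container listed under spec.containers
# shares the pause container's IP and volumes within this one Pod.
cat > /tmp/pod-demo.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo          # hypothetical Pod name
spec:
  containers:
  - name: web
    image: nginx:1.13     # hypothetical image tag
    ports:
    - containerPort: 80
EOF
# On a working cluster it would be created with:
#   kubectl create -f /tmp/pod-demo.yaml
```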
Each Pod is assigned its own Pod IP; the Pod IP plus a container port forms an Endpoint.
A Service's job is to expose an application. Pods have a life cycle and their own IP addresses; as Pods are created and destroyed, something must ensure that consumers can keep track of these changes. That is where Services come in: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy. More importantly, the individual Pod IPs are exposed to the network through the Service.
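A minimal Service manifest sketch follows; the Service name and the app: web label selector are hypothetical and would have to match labels on real Pods:

```shell
# Write a minimal NodePort Service manifest; it groups all Pods carrying
# the label app=web and exposes them on a port of every node.
cat > /tmp/svc-demo.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: svc-demo           # hypothetical Service name
spec:
  selector:
    app: web               # Pods with this label are grouped by the Service
  ports:
  - port: 88               # Service port inside the cluster
    targetPort: 80         # container port on each selected Pod
  type: NodePort           # also expose outside the cluster
EOF
```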
There are several ways to install Kubernetes. Here we cover two: a binary installation, and a deployment using kubeadm.
| Role | Hostname | IP address | Packages installed | OS |
|---|---|---|---|---|
| kubernetes server | master | 172.16.0.67 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler | CentOS 7.3 64-bit |
| kubernetes node1 | node01 | 172.16.0.66 | kubelet, kube-proxy, docker | CentOS 7.3 64-bit |
| kubernetes node2 | node02 | 172.16.0.68 | kubelet, kube-proxy, docker | CentOS 7.3 64-bit |
Software versions
Kubernetes release notes:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#v183
Server binaries:
https://dl.k8s.io/v1.8.13/kubernetes-server-linux-amd64.tar.gz
Node binaries:
https://dl.k8s.io/v1.8.13/kubernetes-node-linux-amd64.tar.gz
Firewall configuration
systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld
Install etcd on the master server
yum install etcd -y
Configure etcd, start the service, and enable it at boot.
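As a sketch of that configuration, assuming the stock CentOS etcd package (the sample is written to /tmp here; on the master the real file is /etc/etcd/etcd.conf):

```shell
# Key etcd settings: listen on all interfaces for clients on 2379 and
# advertise the master's IP, matching --etcd-servers used later.
cat > /tmp/etcd.conf.sample <<EOF
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.0.67:2379"
EOF
# Then, on the master:
#   systemctl enable etcd && systemctl start etcd
#   etcdctl cluster-health
```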
Download the package, create the directories, and copy the binaries into place:
cd /tmp && wget -c https://dl.k8s.io/v1.8.13/kubernetes-server-linux-amd64.tar.gz
tar -zxf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg}
mv kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /opt/kubernetes/bin
cat > /opt/kubernetes/cfg/kube-apiserver <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.0.67:2379"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ADVERTISE_ADDR="--advertise-address=172.16.0.67"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.10.0/24"
EOF
cat > /lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  \${KUBE_LOGTOSTDERR} \
  \${KUBE_LOG_LEVEL} \
  \${KUBE_ETCD_SERVERS} \
  \${KUBE_API_ADDRESS} \
  \${KUBE_API_PORT} \
  \${KUBE_ADVERTISE_ADDR} \
  \${KUBE_ALLOW_PRIV} \
  \${KUBE_SERVICE_ADDRESSES}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
cat > /opt/kubernetes/cfg/kube-scheduler <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.0.67:8080"
KUBE_LEADER_ELECT="--leader-elect"
EOF
cat > /lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  \${KUBE_LOGTOSTDERR} \
  \${KUBE_LOG_LEVEL} \
  \${KUBE_MASTER} \
  \${KUBE_LEADER_ELECT}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
cat > /opt/kubernetes/cfg/kube-controller-manager <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.0.67:8080"
KUBE_LEADER_ELECT="--leader-elect"
EOF
cat > /lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  \${KUBE_LOGTOSTDERR} \
  \${KUBE_LOG_LEVEL} \
  \${KUBE_MASTER} \
  \${KUBE_LEADER_ELECT}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
This completes the master configuration. If any service was misconfigured, the errors can be inspected with journalctl -u <service name>. For convenience, add the binaries to the PATH:
echo "export PATH=\$PATH:/opt/kubernetes/bin" >> /etc/profile
source /etc/profile
Install Docker on the node:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce -y
Download the node binaries and copy them into place:
cd /tmp && wget https://dl.k8s.io/v1.8.13/kubernetes-node-linux-amd64.tar.gz
tar -zxf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg}
mv kubernetes/node/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/
cat > /opt/kubernetes/cfg/kubelet.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://172.16.0.67:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
cat > /opt/kubernetes/cfg/kubelet <<EOF
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=4"
# Kubelet service IP address
NODE_ADDRESS="--address=172.16.0.66"
# Kubelet service port
NODE_PORT="--port=10250"
# Custom node name
NODE_HOSTNAME="--hostname-override=172.16.0.66"
# kubeconfig path, used to connect to the API server
KUBELET_KUBECONFIG="--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig"
# Allow containers to request privileged mode (default false)
KUBE_ALLOW_PRIV="--allow-privileged=false"
# DNS settings
KUBELET_DNS_IP="--cluster-dns=10.10.10.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# Do not fail if swap is enabled
KUBELET_SWAP="--fail-swap-on=false"
EOF
cat > /lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  \${KUBE_LOGTOSTDERR} \
  \${KUBE_LOG_LEVEL} \
  \${NODE_ADDRESS} \
  \${NODE_PORT} \
  \${NODE_HOSTNAME} \
  \${KUBELET_KUBECONFIG} \
  \${KUBE_ALLOW_PRIV} \
  \${KUBELET_DNS_IP} \
  \${KUBELET_DNS_DOMAIN} \
  \${KUBELET_SWAP}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Start the service:
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
Create the kube-proxy configuration file:
cat > /opt/kubernetes/cfg/kube-proxy <<EOF
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=4"
# Custom node name
NODE_HOSTNAME="--hostname-override=172.16.0.66"
# API server address
KUBE_MASTER="--master=http://172.16.0.67:8080"
EOF
Create the systemd service file:
cat > /lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  \${KUBE_LOGTOSTDERR} \
  \${KUBE_LOG_LEVEL} \
  \${NODE_HOSTNAME} \
  \${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start the service:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
Other nodes join the cluster the same way as node01; just change the kubelet's --address and --hostname-override options to the node's own IP.
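For example, node01's kubelet configuration could be cloned for another node by rewriting the IP with sed. The copy under /tmp and the two sample lines below are illustrative; on a real node you would run the sed command against /opt/kubernetes/cfg/kubelet:

```shell
# Simulate node01's kubelet config with just the two IP-bearing options.
NEW_IP=172.16.0.68
mkdir -p /tmp/k8s-demo
cat > /tmp/k8s-demo/kubelet <<EOF
NODE_ADDRESS="--address=172.16.0.66"
NODE_HOSTNAME="--hostname-override=172.16.0.66"
EOF
# Rewrite node01's IP to the new node's IP in both options.
sed -i "s/172\.16\.0\.66/${NEW_IP}/g" /tmp/k8s-demo/kubelet
```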
Deploying with kubeadm. First install Docker on every machine:
yum install -y docker
systemctl enable docker && systemctl start docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
systemctl daemon-reload
systemctl restart kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies which of the Master's interfaces to use for communicating with the other cluster nodes. If the Master has multiple interfaces it is best to specify one explicitly; otherwise kubeadm automatically picks the interface with the default gateway.
--pod-network-cidr specifies the Pod network range. Kubernetes supports several network plugins, and each has its own requirements for --pod-network-cidr; we set it to 10.244.0.0/16 here because we will use the flannel network plugin, which requires this CIDR.
When the command completes it prints instructions for registering other nodes with the cluster; record the token value, or the whole join command.
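If the join command is lost, kubeadm token list on the master reprints the token, and the --discovery-token-ca-cert-hash value is just a SHA-256 digest of the cluster CA's public key. The pipeline below demonstrates the hash computation on a throwaway self-signed certificate; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA certificate to demonstrate on (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
# Extract the public key, DER-encode it, and take the sha256 digest:
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"
```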
# Create a user
useradd xuel
passwd xuel
# Switch to the regular user
su - xuel
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Configure environment variables
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "source <(kubectl completion bash)" >> ~/.bashrc
It is recommended to operate kubectl as a regular user.
Install the flannel network plugin:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Nodes that will join the cluster also need Docker and kubeadm installed and the kubelet service started, just as on the master; those steps are omitted here.
kubeadm join 172.16.0.64:6443 --token dt5tet.26peoqdwftx7yafv --discovery-token-ca-cert-hash sha256:5b4030d19662122204ff78a4fd0ac496b739a9945517deca67a9384f0bab2b21
Verify the cluster:
kubectl get nodes
kubectl get pod --all-namespaces
Deploy Prometheus and Grafana for monitoring. Clone the manifests:
git clone https://github.com/redhatxl/k8s-prometheus-grafana.git
Pull the required images:
docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:4.2.0
kubectl create -f k8s-prometheus-grafana/node-exporter.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/rbac-setup.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/configmap.yaml
kubectl create -f k8s-prometheus-grafana/prometheus/prometheus.deploy.yml
kubectl create -f k8s-prometheus-grafana/prometheus/prometheus.svc.yml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-deploy.yaml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-svc.yaml
kubectl create -f k8s-prometheus-grafana/grafana/grafana-ing.yaml
Check node-exporter:
http://47.52.166.125:31672/metrics
Prometheus's NodePort is 30003; browsing to http://47.52.166.125:30003/targets shows that Prometheus has successfully connected to the Kubernetes apiserver.
Access Grafana through its NodePort; the default username and password are both admin.
Add the data source.
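Instead of the UI, the data source can also be registered through Grafana's HTTP API. The payload below is a sketch: the in-cluster URL assumes the prometheus Service from the cloned repo lives in the kube-system namespace, and the Grafana host/port placeholder must be filled in with your NodePort endpoint:

```shell
# Grafana data source payload for the Prometheus service (assumed
# in-cluster DNS name; adjust namespace/port to your deployment).
cat > /tmp/grafana-datasource.json <<EOF
{
  "name": "prometheus",
  "type": "prometheus",
  "url": "http://prometheus.kube-system.svc.cluster.local:9090",
  "access": "proxy",
  "isDefault": true
}
EOF
# curl -u admin:admin -H 'Content-Type: application/json' \
#      -X POST http://<grafana-host>:<nodeport>/api/datasources \
#      -d @/tmp/grafana-datasource.json
```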
Import the dashboard panel: either enter template number 315 to import it online, or download the corresponding JSON template file and import it locally. The panel template download address is https:///dashboards/315
View the result.
kubectl delete deployment apache
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
Configure kubernetes-dashboard.yaml:
cat > kubernetes-dashboard.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.0.67:8080   # set to the apiserver address
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF