Kubernetes, abbreviated k8s, is Google's open-source container cluster management system. Building on Docker, it provides a complete set of capabilities for containerized applications: deployment, resource scheduling, service discovery, and dynamic scaling, making large-scale container cluster management far more convenient. k8s is the product of the evolution from containers to container clouds. But k8s is not a silver bullet and will not necessarily fit every cloud scenario. The official "What Kubernetes is not" explanation may help our understanding here:
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. We preserve user choice, and this is very important.
The control node, i.e. the Master node, is composed of three closely cooperating independent components: kube-apiserver, which serves the API; kube-scheduler, which handles scheduling; and kube-controller-manager, which handles container orchestration. The cluster's persistent data is processed by kube-apiserver and stored in etcd.
The core component on a compute node is one called the kubelet.
In the Kubernetes project, the kubelet is mainly responsible for the following:
1. The kubelet interacts with the container runtime (for example, the Docker project). This interaction relies on a remote-call interface called CRI (Container Runtime Interface), which defines the core operations of a container runtime, such as all the parameters needed to start a container.
This is also why the Kubernetes project does not care which container runtime you deploy or what technology it is implemented with: as long as the runtime can run standard container images, it can plug into Kubernetes by implementing CRI.
2. The concrete container runtime, such as the Docker project, generally interacts with the underlying Linux operating system through the OCI container runtime specification, i.e. it translates CRI requests into calls to the Linux OS (manipulating Linux Namespaces, Cgroups, and so on).
3. In addition, the kubelet talks over gRPC to a plugin called the Device Plugin. This plugin is the main component Kubernetes uses to manage GPUs and other physical host devices, and a feature you must pay attention to when running machine-learning training, high-performance workloads, and similar jobs on Kubernetes.
4. Another important duty of the kubelet is to call the network plugin and the storage plugin to configure networking and persistent storage for containers. The interfaces through which these plugins interact with the kubelet are CNI (Container Networking Interface) and CSI (Container Storage Interface), respectively.
So the kubelet is a component rewritten from scratch specifically to implement the Kubernetes project's container-management capabilities.
Kubernetes has used the CRI (Container Runtime Interface) since version 1.6. The default container runtime is still Docker, implemented through the dockershim CRI built into the kubelet.
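As a quick way to see CRI in action, you can talk to the kubelet's CRI endpoint directly with crictl from the cri-tools project. This is only a sketch, not part of the setup below: it assumes cri-tools is installed and that the kubelet is using the default dockershim socket.
crictl --runtime-endpoint unix:///var/run/dockershim.sock ps //list containers through the CRI
crictl --runtime-endpoint unix:///var/run/dockershim.sock images //list images through the CRI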
Install Docker (any old docker-ce is removed first, then reinstalled):
apt-get remove docker-ce
apt autoremove
apt-get install docker-ce
Start Docker:
systemctl enable docker
systemctl start docker
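Optionally, verify that Docker came up and check which cgroup driver it reports; the kubelet is expected to use the same driver (see the --cgroup-driver error later in this post). A small sanity-check sketch:
docker version
docker info | grep -i "cgroup driver" //should match the cgroup driver the kubelet is configured with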
kubeadm: the command-line tool that bootstraps a k8s cluster.
kubelet: the core component that runs on every node in the cluster, performing operations such as starting pods and containers.
kubectl: the command-line tool for operating the cluster.
First, add the apt key:
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes apt source:
sudo vim /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
Install:
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
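To confirm the three tools were installed and pinned correctly, you can check their versions:
kubeadm version -o short
kubectl version --client
kubelet --version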
Before initializing, there are a few things to note:
1. Choose a network plugin and check whether it needs any parameters passed at Master initialization; for example, we may need to set the --pod-network-cidr parameter according to the chosen plugin. See: Installing a pod network add-on.
2. kubeadm uses eth0's default network interface (usually the internal IP) as the Master node's advertise address. To use a different interface, set --apiserver-advertise-address=<ip-address>. If IPv6 is used, an IPv6 address must be given, e.g. --apiserver-advertise-address=fd00::101.
3. Version 1.13 finally solves the pain of being unable to pull the upstream images from within China: it adds an --image-repository parameter, whose default is k8s.gcr.io. We point it at a domestic mirror: registry.aliyuncs.com/google_containers.
4. We also need to specify the --kubernetes-version parameter, because its default, stable-1, triggers a download of the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning it to a fixed version (latest: v1.13.1) skips that network request.
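Optionally, you can pre-pull the required images before running init (the init output below also suggests this via 'kubeadm config images pull'), using the same mirror and version:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1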
#kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [fn004 localhost] and IPs [121.197.130.187 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [fn004 localhost] and IPs [121.197.130.187 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [fn004 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 121.197.130.187]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.504803 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "fn004" as an annotation
[mark-control-plane] Marking the node fn004 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node fn004 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: b0x4dv.nbut63ktiaikcc24
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join [public IP]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4
If init fails and you need to run it again, you can run #kubeadm reset to reset the cluster first.
Then run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
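To confirm that kubectl can reach the API server with this kubeconfig:
kubectl cluster-info //should print the master endpoint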
kubectl get pods --all-namespaces //coredns will show as Pending; that is because we have not installed a network plugin yet
Calico is a pure layer-3 virtual networking solution: Calico assigns each container an IP and makes every host a router, connecting containers across different hosts. Unlike VxLAN, Calico adds no extra packet encapsulation and needs no NAT or port mapping, so it scales and performs very well.
By default, the Calico network plugin uses the 192.168.0.0/16 subnet. At init time we already matched Calico with --pod-network-cidr=192.168.0.0/16; of course, you can also modify the calico.yaml file to specify a different subnet.
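If you do want a different subnet, one approach (a sketch only; 10.244.0.0/16 is just an example, and it must match the --pod-network-cidr you passed to kubeadm init) is to download the manifest and edit the pool before applying it:
curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|g' calico.yaml //example subnet; assumes the manifest's CALICO_IPV4POOL_CIDR default of 192.168.0.0/16
kubectl apply -f calico.yaml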
The Calico plugin can be installed with the following commands:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
If image pulls fail at this point, you can check the cause with systemctl status kubelet. The correct result of kubectl get pods --all-namespaces looks like this:
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   calico-node-wdgl5               2/2     Running   0          90s
kube-system   coredns-78d4cf999f-jvxv9        1/1     Running   0          27m
kube-system   coredns-78d4cf999f-lmhdj        1/1     Running   0          27m
kube-system   etcd-fn004                      1/1     Running   0          26m
kube-system   kube-apiserver-fn004            1/1     Running   0          26m
kube-system   kube-controller-manager-fn004   1/1     Running   0          26m
kube-system   kube-proxy-rkzkc                1/1     Running   0          27m
kube-system   kube-scheduler-fn004            1/1     Running   0          27m
That completes the deployment of a master node; next we can join worker nodes and run some tests.
By default, for security reasons, the cluster does not schedule pods onto the Master node. In a development environment, however, we may have only the one Master node; in that case the following command lifts the restriction:
kubectl taint nodes --all node-role.kubernetes.io/master-
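You can verify the taint is gone (the node name below is a placeholder):
kubectl describe node [node-name] | grep -i taint //should no longer list node-role.kubernetes.io/master:NoSchedule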
Log in to another machine, B:
Run directly:
kubeadm join [masterIP]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4
root@xxxx:/etc/kubernetes# kubeadm join [master_ip]:6443 --token b0x4dv.nbut63ktiaikcc24 --discovery-token-ca-cert-hash sha256:551fe78b50dfe52410869685b7dc70b9a27e550241a6112d8d1fef2073759bb4
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "master_ip:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master_ip:6443"
[discovery] Requesting info from "https://master_ip:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "master_ip:6443"
[discovery] Successfully established connection with API Server "master_ip:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "fn001" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
On the master node you can list tokens with #kubeadm token list.
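Note that bootstrap tokens expire after 24 hours by default; if yours has expired, you can generate a fresh one together with a ready-made join command:
kubeadm token create --print-join-command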
After a little while, you can check the node status from the master node:
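For example:
kubectl get nodes //the new worker should show up and eventually turn Ready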
First, verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:
kubectl create deployment nginx --image=nginx:alpine //deploy an nginx
kubectl scale deployment nginx --replicas=2 //scale it out to 2 pods
kubectl get pods -l app=nginx -o wide //verify the nginx pods are running; they are assigned two cluster IPs starting with 192.168.
kubectl expose deployment nginx --port=80 --type=NodePort //expose the service externally via NodePort
kubectl get services nginx //look up the port reachable from outside the cluster
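As a final check from outside the cluster, you can request the nginx welcome page through the NodePort (both values below are placeholders; use the port reported by kubectl get services nginx):
curl http://[node-ip]:[node-port]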
systemctl status kubelet //the errors here were caused by hand-modified configuration files, which kept the kubelet from ever restarting successfully. The configuration files below are from version 1.13 and are provided for reference.
The following errors appeared:
kubelet[12305]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S
systemd[1]: kubelet.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.
Check whether the two configuration files /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service were generated correctly. The correct configurations are as follows:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
vim /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
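After restoring these two files, reload systemd and restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet //should now show active (running)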
References:
https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/