This article is published under the Attribution 4.0 International (CC BY 4.0) license. You are welcome to republish or adapt it, as long as you credit the source.
Author: 苏洋
Published: 2019-09-08 · Word count: 15,348 · Reading time: 31 minutes · Link: soulteary.com/2019/09/08/…
Last year I wrote about building a simple Kubernetes cluster with the official toolbox, Kubeadm. For readers who just want to give Kubernetes a quick try, that approach is still too complex, so this article introduces a simpler tool: MicroK8s.
The official pitch is "zero-ops Kubernetes for workstations and IoT." Its biggest value is that it can quickly stand up a single-node container orchestration system for experimentation before production.
The official documentation briefly explains how to install and use it, but it never considers readers in mainland China who hit network problems during installation; this article covers exactly that situation.
The project announced a while ago that Containerd would replace Docker in upcoming releases:
The upcoming release of v1.14 Kubernetes will mark the MicroK8s switch to Containerd and enhanced security. As this is a big step forward we would like to give you a heads up and offer you a preview of what is coming. Give it a test drive with:
snap install microk8s --classic --channel=1.13/edge/secure-containerd
You can read more in our blog, and the respective pull request. Please, let us know how we can make this transition smoother for you. Thanks
Users in the community have already asked about (and grumbled about) the change. To keep the moving parts to a minimum, this article sticks with 1.13, which still uses Docker as the container runtime; the newer releases can wait for the next round of tinkering.
snap is **Canonical**'s more "advanced" take on package management, first used on Ubuntu Phone.
Installing K8s with snap really is simple; one command, as shown below, does the job:
snap install microk8s --classic --channel=1.13/stable
However, unless you run this command on a host outside mainland China, you will most likely find the download crawling.
snap install microk8s --classic --channel=1.13/stable
Download snap "microk8s" (581) from channel "1.13/stable" 0% 25.9kB/s 2h32m
For now, the only workaround is to give snap a proxy: snapd does not read the system environment variables, only its own service environment file.
The command below is a convenient way to edit snapd's environment, but the default editor is **nano**, which is painful to use.
systemctl edit snapd.service
Let's first switch the default editor to the more familiar **vim**:
sudo update-alternatives --install "$(which editor)" editor "$(which vim)" 15
sudo update-alternatives --config editor
The interactive prompt asks you to type a number and press Enter to confirm the choice:
There are 5 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim 15 manual mode
4 /usr/bin/vim.basic 30 manual mode
5 /usr/bin/vim.tiny 15 manual mode
Press <enter> to keep the current choice[*], or type selection number: 5
update-alternatives: using /usr/bin/vim.tiny to provide /usr/bin/editor (editor) in manual mode
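If you would rather not change the system-wide default, systemctl edit also honors the standard editor environment variables, so a one-off override along these lines should work as well (a small convenience, not required for anything that follows):
sudo SYSTEMD_EDITOR=vim systemctl edit snapd.service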
Run the environment-editing command again and add a proxy configuration:
[Service]
Environment="HTTP_PROXY=http://10.11.12.123:10240"
Environment="HTTPS_PROXY=http://10.11.12.123:10240"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.0.0/24,*.domain.ltd"
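As an aside, newer snapd releases also expose proxy settings as system configuration, so something like the following may achieve the same effect without touching the unit file (same placeholder proxy address as above):
sudo snap set system proxy.http="http://10.11.12.123:10240"
sudo snap set system proxy.https="http://10.11.12.123:10240"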
Run the installation again and the download speed takes off:
snap install microk8s --classic --channel=1.13/stable
Download snap "microk8s" (581) from channel "1.13/stable" 31% 14.6MB/s 11.2s
If the speed does not change, try reloading the snapd service:
systemctl daemon-reload && systemctl restart snapd
If everything above went smoothly, you will see a log similar to this:
snap install microk8s --classic --channel=1.13/stable
microk8s (1.13/stable) v1.13.6 from Canonical✓ installed
Run the list command to see what snap has installed:
snap list
Name Version Rev Tracking Publisher Notes
core 16-2.40 7396 stable canonical✓ core
microk8s v1.13.6 581 1.13 canonical✓ classic
Previously, installing K8s on its own meant installing Docker first; with the snap package, all of that is ready out of the box, as microk8s.docker version shows:
microk8s.docker version
Client:
Version: 18.09.2
API version: 1.39
Go version: go1.10.4
Git commit: 6247962
Built: Tue Feb 26 23:56:24 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 6247962
Built: Tue Feb 12 22:47:29 2019
OS/Arch: linux/amd64
Experimental: false
To actually use Kubernetes you need more than MicroK8s itself: you also need the helper images it depends on, and fetching those takes a bit of effort. First, grab the MicroK8s source for the 1.13 branch:
git clone --single-branch --branch=1.13 https://github.com/ubuntu/microk8s.git
Then extract the list of container images declared in it:
grep -ir 'image:' * | awk '{print $3 $4}' | uniq
Because the upstream code is rather freewheeling, the output includes some oddly shaped "image names":
localhost:32000/my-busybox
elasticsearch:6.5.1
alpine:3.6
docker.elastic.co/kibana/kibana-oss:6.3.2
time="2016-02-04T07:53:57.505612354Z"level=error
cdkbot/registry-$ARCH:2.6
...
...
quay.io/prometheus/prometheus
quay.io/coreos/kube-rbac-proxy:v0.4.0
k8s.gcr.io/metrics-server-$ARCH:v0.2.1
cdkbot/addon-resizer-$ARCH:1.8.1
cdkbot/microbot-$ARCH
"k8s.gcr.io/cuda-vector-add:v0.1"
nginx:latest
istio/examples-bookinfo-details-v1:1.8.0
busybox
busybox:1.28.4
Replace the $ARCH variable according to the architecture of your target server and drop the meaningless entries; the cleaned-up list looks like this (a scripted version of the cleanup is sketched right after the list):
k8s.gcr.io/fluentd-elasticsearch:v2.2.0
elasticsearch:6.5.1
alpine:3.6
docker.elastic.co/kibana/kibana-oss:6.3.2
cdkbot/registry-amd64:2.6
gcr.io/google_containers/defaultbackend-amd64:1.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.22.0
jaegertracing/jaeger-operator:1.8.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
k8s.gcr.io/heapster-grafana-amd64:v4.4.3
k8s.gcr.io/heapster-amd64:v1.5.2
cdkbot/addon-resizer-amd64:1.8.1
cdkbot/hostpath-provisioner-amd64:latest
quay.io/coreos/k8s-prometheus-adapter-amd64:v0.3.0
grafana/grafana:5.2.4
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/addon-resizer:1.0
quay.io/prometheus/prometheus
quay.io/coreos/prometheus-operator:v0.25.0
quay.io/prometheus/alertmanager
quay.io/prometheus/node-exporter:v0.16.0
quay.io/coreos/kube-rbac-proxy:v0.4.0
k8s.gcr.io/metrics-server-amd64:v0.2.1
cdkbot/addon-resizer-amd64:1.8.1
nvidia/k8s-device-plugin:1.11
cdkbot/microbot-amd64
k8s.gcr.io/cuda-vector-add:v0.1
nginx:latest
istio/examples-bookinfo-details-v1:1.8.0
istio/examples-bookinfo-ratings-v1:1.8.0
istio/examples-bookinfo-reviews-v1:1.8.0
istio/examples-bookinfo-reviews-v2:1.8.0
istio/examples-bookinfo-reviews-v3:1.8.0
istio/examples-bookinfo-productpage-v1:1.8.0
busybox
busybox:1.28.4
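If you prefer to script the cleanup rather than edit the list by hand, a rough sketch along these lines should get most of the way there (assuming an amd64 target; the filtering patterns are my own guesses and may need tuning against the actual grep output):
grep -ir 'image:' * | awk '{print $3 $4}' | sort -u \
  | sed 's/\$ARCH/amd64/g' \
  | tr -d '"' \
  | grep -Ev '^(time=|localhost:)' \
  > package-list.txt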
Save the list above as package-list.txt. Then, on a cloud server with unrestricted network access, the following script pulls each image K8s depends on and saves it to a tar archive for offline use:
# Read the image list and pull every entry first.
PACKAGES=$(cat ./package-list.txt)
for package in $PACKAGES; do docker pull "$package"; done

# Walk the local images and save each one that appears in the list.
docker images | tail -n +2 | grep -v "<none>" | awk '{printf("%s:%s\n", $1, $2)}' | while read IMAGE; do
  for package in $PACKAGES; do
    # Entries without an explicit tag default to :latest.
    if [[ $package != *[':']* ]]; then package="$package:latest"; fi
    if [ "$IMAGE" == "$package" ]; then
      echo "[found image] $IMAGE"
      filename="$(echo "$IMAGE" | tr ':' '-' | tr '/' '-').tar"
      echo "[saving as] $filename"
      docker save "$IMAGE" -o "$filename"
    fi
  done
done
There are many ways to move the images onto the server you are deploying to; here is the simplest one: scp.
PACKAGES=$(cat ./package-list.txt)
for package in $PACKAGES; do
  if [[ $package != *[':']* ]]; then package="$package:latest"; fi
  filename="$(echo "$package" | tr ':' '-' | tr '/' '-').tar"
  # Adjust the host names and paths for your own environment.
  scp "mirror-server:~/images/$filename" .
  scp "./$filename" "deploy-server:"
done
If all goes well, you will see logs like these:
k8s.gcr.io-fluentd-elasticsearch-v2.2.0.tar 100% 140MB 18.6MB/s 00:07
elasticsearch-6.5.1.tar 100% 748MB 19.4MB/s 00:38
alpine-3.6.tar 100% 4192KB 15.1MB/s 00:00
docker.elastic.co-kibana-kibana-oss-6.3.2.tar 100% 614MB 22.8MB/s 00:26
cdkbot-registry-amd64-2.6.tar 100% 144MB 16.1MB/s 00:08
gcr.io-google_containers-defaultbackend-amd64-1.4.tar 100% 4742KB 13.3MB/s 00:00
...
...
Finally, on the target server, import the images with docker load:
ls *.tar | xargs -I {} microk8s.docker load -i {}
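A quick sanity check before moving on: the freshly imported images should now show up in the local image list:
microk8s.docker images | head -n 20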
Now we can get on with setting up K8s itself.
Enabling the various components with MicroK8s is simple; a single command is all it takes:
microk8s.enable dashboard dns ingress istio registry storage
The complete list of add-ons is available via microk8s.enable --help:
microk8s.enable --help
Usage: microk8s.enable ADDON...
Enable one or more ADDON included with microk8s
Example: microk8s.enable dns storage
Available addons:
dashboard
dns
fluentd
gpu
ingress
istio
jaeger
metrics-server
prometheus
registry
storage
If the enable command goes smoothly, you will see logs similar to these:
logentry.config.istio.io/accesslog created
logentry.config.istio.io/tcpaccesslog created
rule.config.istio.io/stdio created
rule.config.istio.io/stdiotcp created
...
...
Istio is starting
Enabling the private registry
Enabling default storage class
deployment.extensions/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
Storage will be available soon
Applying registry manifest
namespace/container-registry created
persistentvolumeclaim/registry-claim created
deployment.extensions/registry created
service/registry created
The registry is enabled
Enabling default storage class
deployment.extensions/hostpath-provisioner unchanged
storageclass.storage.k8s.io/microk8s-hostpath unchanged
Storage will be available soon
Use microk8s.status to check the state of each component:
microk8s is running
addons:
jaeger: disabled
fluentd: disabled
gpu: disabled
storage: enabled
registry: enabled
ingress: enabled
dns: enabled
metrics-server: disabled
prometheus: disabled
istio: enabled
dashboard: enabled
But the components being enabled does not mean K8s itself is ready. Use microk8s.inspect to check the deployment:
Inspecting services
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-docker is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
Copy network configuration to the final report tarball
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Inspect kubernetes cluster
WARNING: IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT
The warning is easy to fix: add a few rules with ufw and set the FORWARD policy to ACCEPT:
sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT
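One caveat: the iptables -P FORWARD ACCEPT change does not survive a reboot on its own. If you want it to persist, one common approach on Ubuntu (optional, and not something MicroK8s requires) is the iptables-persistent package:
sudo apt install -y iptables-persistent
sudo netfilter-persistent save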
Run microk8s.inspect again and the WARNING is gone.
But is Kubernetes really ready? The next section has the answer.
Once the steps above have completed, use microk8s.kubectl get pods to check the state of the pods. If you see ContainerCreating, Kubernetes still needs some extra patching up.
NAME READY STATUS RESTARTS AGE
default-http-backend-855bc7bc45-t4st8 0/1 ContainerCreating 0 16m
nginx-ingress-microk8s-controller-kgjtl 0/1 ContainerCreating 0 16m
Use microk8s.kubectl get pods --all-namespaces for a more detailed view; you will most likely see output like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
container-registry registry-7fc4594d64-rrgs9 0/1 Pending 0 15m
default default-http-backend-855bc7bc45-t4st8 0/1 ContainerCreating 0 16m
default nginx-ingress-microk8s-controller-kgjtl 0/1 ContainerCreating 0 16m
...
...
The first order of business is the pod stuck in Pending. microk8s.kubectl describe pod quickly shows what is wrong with it:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned default/default-http-backend-855bc7bc45-t4st8 to ubuntu-basic-18-04
Warning FailedCreatePodSandBox 21m kubelet, ubuntu-basic-18-04 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning FailedCreatePodSandBox 43s (x45 over 21m) kubelet, ubuntu-basic-18-04 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The events reveal that the image list we compiled earlier still has gaps: MicroK8s also pulls images that are not written into the code but come from remote configuration.
In this situation the practical options are to give Docker a proxy, or to pull the missing images by hand one at a time (a sketch of the manual route follows).
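For the manual route, the pause image named in the events above can be fetched on the proxy-capable server and shipped over like the other images; a minimal sketch, reusing the scp transfer from earlier (host names are placeholders):
# On the server that can reach k8s.gcr.io (image name taken from the events above):
docker pull k8s.gcr.io/pause:3.1
docker save k8s.gcr.io/pause:3.1 -o k8s.gcr.io-pause-3.1.tar
scp ./k8s.gcr.io-pause-3.1.tar deploy-server:
# On the MicroK8s host:
microk8s.docker load -i k8s.gcr.io-pause-3.1.tar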
To take the proxy route instead, edit the Docker environment file that MicroK8s uses, vi /var/snap/microk8s/current/args/dockerd-env, and add a proxy configuration, for example:
HTTP_PROXY=http://10.11.12.123:10555
HTTPS_PROXY=http://10.11.12.123:10555
NO_PROXY=127.0.0.1
Then restart Docker:
sudo systemctl restart snap.microk8s.daemon-docker.service
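To confirm the proxy actually took effect, docker info should now report the proxy settings:
microk8s.docker info | grep -i proxy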
With all of that in place, run the following commands to reset MicroK8s and enable the components again:
microk8s.reset
microk8s.enable dashboard dns ingress istio registry storage
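If you would rather watch the rollout than poll by hand, the standard watch utility does the job:
watch microk8s.kubectl get pods --all-namespaces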
A little while after the commands finish, run microk8s.kubectl get pods again and every pod should be Running:
NAME READY STATUS RESTARTS AGE
default-http-backend-855bc7bc45-w62jd 1/1 Running 0 46s
nginx-ingress-microk8s-controller-m9lc2 1/1 Running 0 46s
Verify further with microk8s.kubectl get pods --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
container-registry registry-7fc4594d64-whjnl 1/1 Running 0 2m
default default-http-backend-855bc7bc45-w62jd 1/1 Running 0 2m
default nginx-ingress-microk8s-controller-m9lc2 1/1 Running 0 2m
istio-system grafana-59b8896965-xtc27 1/1 Running 0 2m
istio-system istio-citadel-856f994c58-fbc7c 1/1 Running 0 2m
istio-system istio-cleanup-secrets-9q8tw 0/1 Completed 0 2m
istio-system istio-egressgateway-5649fcf57-cbqlv 1/1 Running 0 2m
istio-system istio-galley-7665f65c9c-l7grc 1/1 Running 0 2m
istio-system istio-grafana-post-install-sl6mb 0/1 Completed 0 2m
istio-system istio-ingressgateway-6755b9bbf6-hvnld 1/1 Running 0 2m
istio-system istio-pilot-698959c67b-zts2v 2/2 Running 0 2m
istio-system istio-policy-6fcb6d655f-mx68m 2/2 Running 0 2m
istio-system istio-security-post-install-5d7bb 0/1 Completed 0 2m
istio-system istio-sidecar-injector-768c79f7bf-qvcjd 1/1 Running 0 2m
istio-system istio-telemetry-664d896cf5-jz22s 2/2 Running 0 2m
istio-system istio-tracing-6b994895fd-z8jn9 1/1 Running 0 2m
istio-system prometheus-76b7745b64-fqvn9 1/1 Running 0 2m
istio-system servicegraph-5c4485945b-spf77 1/1 Running 0 2m
kube-system heapster-v1.5.2-64874f6bc6-8ghnr 4/4 Running 0 2m
kube-system hostpath-provisioner-599db8d5fb-kxtjw 1/1 Running 0 2m
kube-system kube-dns-6ccd496668-98mvt 3/3 Running 0 2m
kube-system kubernetes-dashboard-654cfb4879-vzgk5 1/1 Running 0 2m
kube-system monitoring-influxdb-grafana-v4-6679c46745-68vn7 2/2 Running 0 2m
If your output looks like the above, Kubernetes really is ready.
Now that everything is installed, we should at least take it for a spin; this article is not going to just show a screenshot of the admin dashboard and call it a day.
Use kubectl to create a deployment from a ready-made image:
microk8s.kubectl create deployment microbot --image=dontrebootme/microbot:v1
Since we are on a state-of-the-art orchestrator, it would be a shame not to try scaling out:
microk8s.kubectl scale deployment microbot --replicas=2
Expose the deployment as a service so traffic can be forwarded to it:
microk8s.kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service
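For reference, the imperative commands above can also be expressed declaratively and applied in one go; a minimal sketch, fed to kubectl through a heredoc (the microbot label and selector names are my own choice, not something MicroK8s generates):
microk8s.kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microbot
spec:
  replicas: 2
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: microbot-service
spec:
  type: NodePort
  selector:
    app: microbot
  ports:
  - port: 80
    targetPort: 80
EOF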
Check the overall state with the get command:
microk8s.kubectl get all
If everything is in order, you will see output like this:
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-855bc7bc45-w62jd 1/1 Running 0 64m
pod/microbot-7c7594fb4-dxgg7 1/1 Running 0 13m
pod/microbot-7c7594fb4-v9ztg 1/1 Running 0 13m
pod/nginx-ingress-microk8s-controller-m9lc2 1/1 Running 0 64m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.152.183.13 <none> 80/TCP 64m
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 68m
service/microbot-service NodePort 10.152.183.15 <none> 80:31354/TCP 13m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 64m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1/1 1 1 64m
deployment.apps/microbot 2/2 2 2 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-855bc7bc45 1 1 1 64m
replicaset.apps/microbot-7c7594fb4 2 2 2 13m
You can see that the service we just created is reachable at 10.11.12.234:31354. Open that address in a browser and the application is already up and running.
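A command-line check works just as well as a browser; the address below is the NodePort from the output above, so substitute your own:
curl -I http://10.11.12.234:31354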
In the spirit of "whoever makes the mess cleans it up," we should learn not only how to create things but also how to tear them down. Use the delete command to remove the deployment first:
microk8s.kubectl delete deployment microbot
Once it completes, the output looks like this:
deployment.extensions "microbot" deleted
Before deleting the service, use the get command to look up its name:
microk8s.kubectl get services microbot-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
microbot-service NodePort 10.152.183.15 <none> 80:31354/TCP 24m
With the service name in hand, use delete again to remove the resource we no longer need:
microk8s.kubectl delete service microbot-service
The result:
service "microbot-service" deleted
Some readers will no doubt still want a peek at the Dashboard.
Use the microk8s.config command to find out which IP address the server is listening on:
microk8s.config
apiVersion: v1
clusters:
- cluster:
server: http://10.11.12.234:8080
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
username: admin
The service is listening on 10.11.12.234. Use the proxy command to open up traffic forwarding (note the single quotes around the host pattern so the shell does not expand it):
microk8s.kubectl proxy --accept-hosts='.*' --address=0.0.0.0
Then visit the following address to see the familiar Dashboard:
http://10.11.12.234:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
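If the Dashboard login page asks for a token, one way that usually works with a stock MicroK8s setup is to read the default service-account token; a hedged sketch (the exact secret name will differ on your machine):
token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d ' ' -f1)
microk8s.kubectl -n kube-system describe secret "$token"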
The complete installation, operating system included, used up close to 8 GB of storage, so if you plan to keep using it, plan your disk space ahead of time; for example, migrate the ever-growing Docker image storage to a roomier location first (I covered moving the Docker container storage location in an earlier post).
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 1.7M 1.6G 1% /run
/dev/sda2 79G 16G 59G 21% /
This article was drafted a month ago. Since it still uses the Docker-based release, it should in theory remain applicable; if you run into any problems, you are welcome to reach out and discuss.
Looking at the interesting drafts piling up in my queue, perhaps it is time to consider a co-writing model.
—EOF
I run a small tinkering group chat, full of people who enjoy fiddling with technology.
No ads are allowed; we chat about software, HomeLab setups, and programming, and occasionally share material from tech meetups in the group.
If that sounds like you, you are welcome to scan the QR code and add me as a friend. (Please mention where you found me and why you are adding me, otherwise the request will not be approved.)