This article comes from KubeSphere community user Will. It demonstrates how to use Sealos + Longhorn to deploy a Kubernetes cluster with persistent storage, and then use ks-installer to deploy KubeSphere 3.0.0 on that cluster. It is a hands-on quick start that is ideal for anyone trying KubeSphere 3.0.0 for the first time 🚀.
Sealos Overview
Sealos (https://sealyun.com/) is a Kubernetes high-availability installer that can only be described as silky smooth: one command, offline installation, all dependencies bundled, kernel-based load balancing with no dependency on haproxy or keepalived, written in pure Golang, certificates valid for 99 years, and support for Kubernetes v1.16 ~ v1.19.
Longhorn Overview
Longhorn (https://www.rancher.cn/longhorn) is Rancher's open-source, highly available persistent storage for Kubernetes. It provides simple incremental snapshots and backups, and supports cross-cluster disaster recovery.
KubeSphere Overview
KubeSphere (https://kubesphere.io) is an application-centric, multi-tenant container platform built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations, and simplifies enterprise DevOps workflows. KubeSphere offers an operations-friendly, wizard-style UI that helps enterprises quickly build a powerful and feature-rich container cloud platform.
KubeSphere supports the following two installation methods:
Deploying a Kubernetes cluster + KubeSphere with KubeKey
Deploying KubeSphere on an existing Kubernetes cluster
For users who already have a Kubernetes cluster, deploying KubeSphere on an existing cluster offers greater flexibility. Below we deploy a standalone Kubernetes cluster and then deploy KubeSphere on top of it.
Deploying a Kubernetes Cluster with Sealos
Prepare 4 nodes. Since lab machines are limited, we use 3 masters and 1 worker for now; note that in a real production environment, 3 masters and at least 3 workers are recommended. Every node must have a hostname configured and its clock synchronized:
hostnamectl set-hostname xx
yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
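Before moving on, it's worth double-checking that chrony actually reports a synchronized clock on each node; a quick sanity check (not part of the original walkthrough):
# Confirm chronyd is syncing; "Leap status: Normal" indicates a healthy state
chronyc tracking
# systemd's view of the clock; expect "System clock synchronized: yes"
timedatectl status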
On the first master node, download the deployment tool and the offline package:
# Go-based binary installer
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos && \
chmod +x sealos && mv sealos /usr/bin
# Using K8s v1.18.8 as an example; v1.19.x is not recommended because KubeSphere v3.0.0 does not support it yet
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/cd3d5791b292325d38bbfaffd9855312-1.18.8/kube1.18.8.tar.gz
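The long hex segment in the download URL looks like an md5 digest of the offline package; assuming that is what it is, you can sanity-check the download before using it (md5sum -c requires two spaces between hash and filename):
echo "cd3d5791b292325d38bbfaffd9855312  kube1.18.8.tar.gz" | md5sum -c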
Run the following command to deploy the Kubernetes cluster, where passwd is the root password of all nodes:
sealos init --passwd 123456 \
--master 10.39.140.248 \
--master 10.39.140.249 \
--master 10.39.140.250 \
--node 10.39.140.251 \
--pkg-url kube1.18.8.tar.gz \
--version v1.18.8
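Sealos can also scale the cluster after the initial init. A hedged sketch using the sealos 3.x CLI (the IP below is hypothetical; confirm the flags against your sealos version):
# Add another worker node later
sealos join --node 10.39.140.252
# Remove it again
sealos clean --node 10.39.140.252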
Confirm the Kubernetes cluster is running properly:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 13h v1.18.8
k8s-master2 Ready master 13h v1.18.8
k8s-master3 Ready master 13h v1.18.8
k8s-node1 Ready <none> 13h v1.18.8
Deploying Longhorn Storage
Longhorn recommends mounting a dedicated disk for storage. For this test we use the local directory /data/longhorn instead of the default /var/lib/longhorn.
Note that several KubeSphere components request PVs of 20Gi, so make sure the nodes have enough free space; otherwise a PV may bind successfully while no node satisfies the scheduling requirements.
Installing Longhorn with 3 data replicas requires at least 3 nodes, so here we remove the master taint to make the masters schedulable for Pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
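A quick way to confirm the taint was actually removed on each master (not from the original post):
# Should print "Taints: <none>" for every master
kubectl describe node k8s-master1 | grep Taints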
Install Helm on k8s-master1:
version=v3.3.1
curl -LO https://repo.huaweicloud.com/helm/${version}/helm-${version}-linux-amd64.tar.gz
tar -zxvf helm-${version}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64
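A one-line check that Helm 3 is installed and on the PATH:
helm version --short   # e.g. v3.3.1+g249e521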
Install the Longhorn dependencies on all nodes:
yum install -y iscsi-initiator-utils
systemctl enable --now iscsid
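Longhorn's CSI plugin depends on the iSCSI daemon, so it is worth verifying it is active on every node before installing (a simple check, not in the original post):
systemctl is-active iscsid   # should print "active"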
Add the Longhorn Chart repository; if your network is slow, you can download the Chart from Longhorn's GitHub releases instead:
helm repo add longhorn https://charts.longhorn.io
helm repo update
Deploy Longhorn. Offline deployment is supported, but it requires pushing the images to a private registry under the longhorn.io path in advance:
kubectl create namespace longhorn-system
helm install longhorn \
--namespace longhorn-system \
--set defaultSettings.defaultDataPath="/data/longhorn/" \
--set defaultSettings.defaultReplicaCount=3 \
--set service.ui.type=NodePort \
--set service.ui.nodePort=30890 \
longhorn/longhorn
# For an offline install, also pass: --set privateRegistry.registryUrl=10.39.140.196:8081
# (the original had this as a commented line mid-command, which would break the backslash continuation)
Confirm Longhorn is running properly:
[root@jenkins longhorn]# kubectl -n longhorn-system get pods
NAME READY STATUS RESTARTS AGE
csi-attacher-58b856dcff-9kqdt 1/1 Running 0 13h
csi-attacher-58b856dcff-c4zzp 1/1 Running 0 13h
csi-attacher-58b856dcff-tvfw2 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-6ps8m 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-m7gz4 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-s9bh4 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-2skth 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-sqn2g 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-z6xql 1/1 Running 0 13h
engine-image-ei-b99baaed-5fd7m 1/1 Running 0 13h
engine-image-ei-b99baaed-jcjxj 1/1 Running 0 12h
engine-image-ei-b99baaed-n6wxc 1/1 Running 0 12h
engine-image-ei-b99baaed-qxfhg 1/1 Running 0 12h
instance-manager-e-44ba7ac9 1/1 Running 0 12h
instance-manager-e-48676e4a 1/1 Running 0 12h
instance-manager-e-57bd994b 1/1 Running 0 12h
instance-manager-e-753c704f 1/1 Running 0 13h
instance-manager-r-4f4be1c1 1/1 Running 0 12h
instance-manager-r-68bfb49b 1/1 Running 0 12h
instance-manager-r-ccb87377 1/1 Running 0 12h
instance-manager-r-e56429be 1/1 Running 0 13h
longhorn-csi-plugin-fqgf7 2/2 Running 0 12h
longhorn-csi-plugin-gbrnf 2/2 Running 0 13h
longhorn-csi-plugin-kjj6b 2/2 Running 0 12h
longhorn-csi-plugin-tvbvj 2/2 Running 0 12h
longhorn-driver-deployer-74bb5c9fcb-khmbk 1/1 Running 0 14h
longhorn-manager-82ztz 1/1 Running 0 12h
longhorn-manager-8kmsn 1/1 Running 0 12h
longhorn-manager-flmfl 1/1 Running 0 12h
longhorn-manager-mz6zj 1/1 Running 0 14h
longhorn-ui-77c6d6f5b7-nzsg2 1/1 Running 0 14h
Confirm the default StorageClass is ready:
# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
longhorn (default) driver.longhorn.io Delete Immediate true 14h
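Before installing KubeSphere, you can optionally verify that dynamic provisioning works end to end. A minimal test PVC (the name longhorn-test-pvc is illustrative, not from the original post); since the StorageClass binds immediately, it should go Bound without a consuming Pod:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc longhorn-test-pvc   # STATUS should become Bound
kubectl delete pvc longhorn-test-pvc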
Log in to the Longhorn UI and confirm the nodes are schedulable:
Longhorn UI: view the bound PV volumes
View volume details
Deploying KubeSphere on Kubernetes
We install KubeSphere with the ks-installer project. Download the KubeSphere installation YAML files:
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
KubeSphere performs a minimal installation by default. To turn on additional components, edit cluster-configuration.yaml and enable the corresponding fields; the following is for reference only:
devops:
enabled: true
......
logging:
enabled: true
......
metrics_server:
enabled: true
......
openpitrix:
enabled: true
......
Run the commands to deploy KubeSphere:
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
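If you later want to enable more components, you do not have to start over: KubeSphere 3.0 lets you edit the ClusterConfiguration in place and ks-installer reconciles the change; afterwards, re-check the installer logs as shown next:
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
# set the desired component's "enabled" field to true and save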
Check the deployment logs and confirm there are no errors:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
After deployment completes, confirm that all KubeSphere-related Pods are running properly:
[root@k8s-master1 ~]# kubectl get pods -A | grep kubesphere
kubesphere-controls-system default-http-backend-857d7b6856-q24v2 1/1 Running 0 12h
kubesphere-controls-system kubectl-admin-58f985d8f6-jl9bj 1/1 Running 0 11h
kubesphere-controls-system kubesphere-router-demo-ns-6c97d4968b-njgrc 1/1 Running 1 154m
kubesphere-devops-system ks-jenkins-54455f5db8-hm6kc 1/1 Running 0 11h
kubesphere-devops-system s2ioperator-0 1/1 Running 1 11h
kubesphere-devops-system uc-jenkins-update-center-cd9464fff-qnvfz 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-curator-elasticsearch-curator-160079hmdmb 0/1 Completed 0 11h
kubesphere-logging-system elasticsearch-logging-data-0 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-data-1 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-discovery-0 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-c45h2 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-kptfc 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-rzjfp 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-wztkp 1/1 Running 0 12h
kubesphere-logging-system fluentbit-operator-855d4b977d-fk6hs 1/1 Running 0 12h
kubesphere-logging-system ks-events-exporter-5bc4d9f496-x297f 2/2 Running 0 12h
kubesphere-logging-system ks-events-operator-8dbf7fccc-9qmml 1/1 Running 0 12h
kubesphere-logging-system ks-events-ruler-698b7899c7-fkn4l 2/2 Running 0 12h
kubesphere-logging-system ks-events-ruler-698b7899c7-hw6rq 2/2 Running 0 12h
kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-cxkxm 2/2 Running 0 12h
kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-lzxbm 2/2 Running 0 12h
kubesphere-monitoring-system alertmanager-main-0 2/2 Running 0 11h
kubesphere-monitoring-system alertmanager-main-1 2/2 Running 0 11h
kubesphere-monitoring-system alertmanager-main-2 2/2 Running 0 11h
kubesphere-monitoring-system kube-state-metrics-95c974544-r8kmq 3/3 Running 0 12h
kubesphere-monitoring-system node-exporter-9ddxn 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-dw929 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-ht868 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-nxdsm 2/2 Running 0 12h
kubesphere-monitoring-system notification-manager-deployment-7c8df68d94-hv56l 1/1 Running 0 12h
kubesphere-monitoring-system notification-manager-deployment-7c8df68d94-ttdsg 1/1 Running 0 12h
kubesphere-monitoring-system notification-manager-operator-6958786cd6-pllgc 2/2 Running 0 12h
kubesphere-monitoring-system prometheus-k8s-0 3/3 Running 1 11h
kubesphere-monitoring-system prometheus-k8s-1 3/3 Running 1 11h
kubesphere-monitoring-system prometheus-operator-84d58bf775-5rqdj 2/2 Running 0 12h
kubesphere-system etcd-65796969c7-whbzx 1/1 Running 0 12h
kubesphere-system ks-apiserver-b4dbcc67-2kknm 1/1 Running 0 11h
kubesphere-system ks-apiserver-b4dbcc67-k6jr2 1/1 Running 0 11h
kubesphere-system ks-apiserver-b4dbcc67-q8845 1/1 Running 0 11h
kubesphere-system ks-console-786b9846d4-86hxw 1/1 Running 0 12h
kubesphere-system ks-console-786b9846d4-l6mhj 1/1 Running 0 12h
kubesphere-system ks-console-786b9846d4-wct8z 1/1 Running 0 12h
kubesphere-system ks-controller-manager-7fd8799789-478ks 1/1 Running 0 11h
kubesphere-system ks-controller-manager-7fd8799789-hwgmp 1/1 Running 0 11h
kubesphere-system ks-controller-manager-7fd8799789-pdbch 1/1 Running 0 11h
kubesphere-system ks-installer-64ddc4b77b-c7qz8 1/1 Running 0 12h
kubesphere-system minio-7bfdb5968b-b5v59 1/1 Running 0 12h
kubesphere-system mysql-7f64d9f584-kvxcb 1/1 Running 0 12h
kubesphere-system openldap-0 1/1 Running 0 12h
kubesphere-system openldap-1 1/1 Running 0 12h
kubesphere-system redis-ha-haproxy-5c6559d588-2rt6v 1/1 Running 9 12h
kubesphere-system redis-ha-haproxy-5c6559d588-mhj9p 1/1 Running 8 12h
kubesphere-system redis-ha-haproxy-5c6559d588-tgpjv 1/1 Running 11 12h
kubesphere-system redis-ha-server-0 2/2 Running 0 12h
kubesphere-system redis-ha-server-1 2/2 Running 0 12h
kubesphere-system redis-ha-server-2 2/2 Running 0 12h
Some KubeSphere components are deployed via Helm; check the Chart deployments:
[root@k8s-master1 ~]# helm ls -A | grep kubesphere
elasticsearch-logging kubesphere-logging-system 1 2020-09-23 00:49:08.526873742 +0800 CST deployed elasticsearch-1.22.1 6.7.0-0217
elasticsearch-logging-curator kubesphere-logging-system 1 2020-09-23 00:49:16.117842593 +0800 CST deployed elasticsearch-curator-1.3.3 5.5.4-0217
ks-events kubesphere-logging-system 1 2020-09-23 00:51:45.529430505 +0800 CST deployed kube-events-0.1.0 0.1.0
ks-jenkins kubesphere-devops-system 1 2020-09-23 01:03:15.106022826 +0800 CST deployed jenkins-0.19.0 2.121.3-0217
ks-minio kubesphere-system 2 2020-09-23 00:48:16.990599158 +0800 CST deployed minio-2.5.16 RELEASE.2019-08-07T01-59-21Z
ks-openldap kubesphere-system 1 2020-09-23 00:03:28.767712181 +0800 CST deployed openldap-ha-0.1.0 1.0
ks-redis kubesphere-system 1 2020-09-23 00:03:19.439784188 +0800 CST deployed redis-ha-3.9.0 5.0.5
logsidecar-injector kubesphere-logging-system 1 2020-09-23 00:51:57.519733074 +0800 CST deployed logsidecar-injector-0.1.0 0.1.0
notification-manager kubesphere-monitoring-system 1 2020-09-23 00:54:14.662762759 +0800 CST deployed notification-manager-0.1.0 0.1.0
uc kubesphere-devops-system 1 2020-09-23 00:51:37.885154574 +0800 CST deployed jenkins-update-center-0.8.0 3.0.0
Get the KubeSphere Console listening port, which defaults to 30880:
kubectl get svc/ks-console -n kubesphere-system
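For scripting, a jsonpath query extracts just the NodePort (an optional convenience, not from the original post):
kubectl get svc/ks-console -n kubesphere-system -o jsonpath='{.spec.ports[0].nodePort}'
# then browse to http://<any-node-ip>:30880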
The default login account is admin/P@88w0rd. Log in to the KubeSphere Console:
View the Kubernetes cluster overview in KubeSphere (the UI is refreshingly clean):
View the Kubernetes cluster node details:
View the KubeSphere service components:
Visit the KubeSphere App Store:
View KubeSphere project resources:
Tip: to learn how to import multiple clusters into KubeSphere, create projects and cluster resources, enable pluggable components, and build CI/CD pipelines, see the official KubeSphere documentation (kubesphere.io/docs) for more information.
Cleaning Up the KubeSphere Cluster
wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh
sh kubesphere-delete.sh
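This script only removes KubeSphere itself. If you also want to tear down the entire Kubernetes cluster that Sealos created, the sealos 3.x CLI provides a cleanup command (verify against your sealos version):
sealos clean --all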
Originally published at: https://blog.csdn.net/networken/article/details/105664147
About KubeSphere
KubeSphere (https://kubesphere.io) is a hybrid container cloud built on top of Kubernetes that provides full-stack IT automated operations and simplifies enterprise DevOps workflows.
KubeSphere has been adopted by thousands of enterprises at home and abroad, including Aqara, Benlai, Sina, PICC Life Insurance, Hua Xia Bank, China Taiping Insurance, Sichuan Airlines, Sinopharm, WeBank, Zijin Insurance, Radore, and ZaloPay. KubeSphere offers an operations-friendly, wizard-style UI and rich enterprise-grade features, including multi-cloud and multi-cluster management, Kubernetes resource management, DevOps (CI/CD), application lifecycle management, microservice governance (Service Mesh), multi-tenancy, monitoring and logging, alerting and notification, storage and network management, and GPU support, helping enterprises quickly build a powerful and feature-rich container cloud platform.
This article is shared from the WeChat official account KubeSphere (gh_4660e44db839).