@[toc]
Although Kubernetes 1.20 announced that dockershim will no longer be maintained after version 1.23, meaning K8s will not support Docker directly, there is no need to panic. First, Docker can still be used until 1.23; second, someone will almost certainly take over dockershim maintenance, so Docker will remain usable; third, images built with Docker can still be run by other container runtimes.
This installation uses the kubeadm tool to deploy K8s 1.20+ on CentOS 7.9, with 3 Master nodes and 2 Node nodes. High availability is provided by HAProxy + KeepAlived.
Hostname | IP Address | Role | Specs |
---|---|---|---|
k8s-master01 ~ 03 | 192.168.0.201 ~ 203 | Master/Worker node | 2C2G 40G |
k8s-node01 ~ 02 | 192.168.0.204 ~ 205 | Worker node | 2C2G 40G |
k8s-master-lb | 192.168.0.236 | VIP | VIP does not occupy a machine |
Item | Value |
---|---|
OS version | CentOS 7.9 |
Docker version | 19.03.x |
K8s version | 1.20.x |
Pod CIDR | 172.168.0.0/16 |
Service CIDR | 10.96.0.0/12 |
Configure /etc/hosts on all nodes
```
[root@k8s-master01 ~]# cat /etc/hosts
192.168.0.201 k8s-master01
192.168.0.202 k8s-master02
192.168.0.203 k8s-master03
192.168.0.236 k8s-master-lb # If this is not an HA cluster, this IP is Master01's IP
192.168.0.204 k8s-node01
192.168.0.205 k8s-node02
```
Configure the yum repositories
```bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
```
Install essential tools
```bash
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
```
Disable firewalld, SELinux, dnsmasq, and swap on all nodes. Server configuration is as follows:
```bash
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
```
Disable the swap partition
```bash
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
```
Install ntpdate
```bash
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
```
Synchronize time on all nodes. Time synchronization is configured as follows:
```bash
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
```
Add the sync job to crontab:
```
*/5 * * * * ntpdate time2.aliyun.com
```
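If you prefer to add the entry non-interactively instead of editing the crontab by hand, a minimal sketch (assuming ntpdate is installed at /usr/sbin/ntpdate, as it normally is on CentOS 7):

```bash
# Append the sync job to root's crontab without opening an editor
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -
```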
Configure limits on all nodes:
```bash
ulimit -SHn 65535

vim /etc/security/limits.conf
# append the following at the end
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
```
Set up passwordless SSH login from Master01 to the other nodes:
```bash
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
```
Download all the installation source files:
```bash
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
```
Upgrade the system on all nodes and reboot:
```bash
yum update -y && reboot
```
Install ipvsadm on all nodes:
```bash
yum install ipvsadm ipset sysstat conntrack libseccomp -y
```
Configure the ipvs modules on all nodes
```bash
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
```
Load the kernel modules
```bash
systemctl enable --now systemd-modules-load.service
```
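You can confirm that the modules were actually loaded; a quick check:

```bash
# Verify that the ipvs and conntrack modules are present in the kernel
lsmod | grep -e ip_vs -e nf_conntrack
```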
Enable the kernel parameters required by a K8s cluster; configure them on all nodes
```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
```
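Note that the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded; if sysctl --system complains about them, a hedged fix (assuming the module is not already loaded on your kernel) is:

```bash
# Load br_netfilter now and make it persistent across reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system
```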
Install Docker CE 19.03 on all nodes
```bash
yum install docker-ce-19.03.* -y
```
Enable Docker to start on boot on all nodes
```bash
systemctl daemon-reload && systemctl enable --now docker
```
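It is worth confirming Docker is healthy and noting its cgroup driver, since the kubelet must end up using the same driver; a quick check:

```bash
# Show the cgroup driver Docker is using (cgroupfs by default in this setup)
docker info | grep -i 'cgroup driver'
```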
Install the K8s components; first list the available versions:
```bash
yum list kubeadm.x86_64 --showduplicates | sort -r
```
Install the latest version of kubeadm on all nodes
```bash
yum install kubeadm -y
```
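If you would rather pin all three components to a specific release instead of pulling the latest, something like the following works (the 1.20.0 version string is only an example; substitute a version from the list above):

```bash
# Install a specific 1.20.x release of kubeadm, kubelet and kubectl
yum install kubeadm-1.20.0 kubelet-1.20.0 kubectl-1.20.0 -y
```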
The default pause image comes from the gcr.io registry, which may be unreachable from within China, so configure the kubelet to use Alibaba Cloud's pause image instead:
```bash
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
```
Enable the kubelet to start on boot
```bash
systemctl daemon-reload
systemctl enable --now kubelet
```
Note: if this is not an HA cluster, or you are installing on a cloud platform, HAProxy and KeepAlived do not need to be installed.
Install HAProxy and KeepAlived on all Master nodes via yum:
```bash
yum install keepalived haproxy -y
```
Configure HAProxy on all Master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on all Master nodes):
```
[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01  192.168.0.201:6443  check
  server k8s-master02  192.168.0.202:6443  check
  server k8s-master03  192.168.0.203:6443  check
```
Configure KeepAlived on all Master nodes. The configuration differs per node, so pay attention to the differences.
Note each node's IP address and network interface (the interface parameter)
Master01 node configuration:
```
[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    mcast_src_ip 192.168.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}
```
Master02 node configuration:
```
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}
```
Master03 node configuration:
```
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
#    track_script {
#        chk_apiserver
#    }
}
```
Note that the health check above is commented out; enable it only after the cluster has been fully set up:
```
#    track_script {
#        chk_apiserver
#    }
```
Create the KeepAlived health-check script:
```bash
[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
```

```bash
chmod +x /etc/keepalived/check_apiserver.sh
```

Start HAProxy and KeepAlived:

```bash
[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
```
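A quick sanity check on each Master node before continuing, verifying that HAProxy is listening on the 16443 frontend port:

```bash
# HAProxy should be bound to 16443 on every Master node
ss -lntp | grep 16443
```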
Test the VIP
```
[root@k8s-master01 ~]# ping 192.168.0.236 -c 4
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.0.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.0.236: icmp_seq=4 ttl=64 time=0.063 ms
```
Create the new.yaml configuration file on the Master01 node as follows:
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
Note: if this is not an HA cluster, change 192.168.0.236:16443 to Master01's address and change 16443 to the apiserver port (6443 by default). Also remember to change v1.20.0 to the kubeadm version installed on your servers (check with kubeadm version).
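For example, if kubeadm reports a newer 1.20 patch release, the version field can be updated in place (v1.20.15 below is only a placeholder; use whatever kubeadm reports):

```bash
kubeadm version -o short                      # e.g. v1.20.15
sed -i 's/v1.20.0/v1.20.15/' /root/new.yaml   # substitute the version reported above
```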
Copy the new.yaml file to the other Master nodes, then pre-pull the images on all Master nodes, which saves time during initialization:
```bash
kubeadm config images pull --config /root/new.yaml
```
Enable the kubelet to start on boot on all nodes
```bash
systemctl enable --now kubelet
```

(If the kubelet fails to start at this point, ignore it; it will start properly once initialization succeeds.)
Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards, the other Master nodes simply join Master01:
```bash
kubeadm init --config /root/new.yaml --upload-certs
```
After initialization succeeds, a token is generated for other nodes to use when joining, so record the token value printed on success:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908
```
Configure the environment variable on Master01 for accessing the Kubernetes cluster:
```bash
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
```
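Optionally, kubectl shell completion can be enabled as well (a small convenience sketch, assuming a bash shell):

```bash
yum install bash-completion -y
echo 'source <(kubectl completion bash)' >> /root/.bashrc
source /root/.bashrc
```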
Check the node status:
```
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.0
```
With this kubeadm-based installation, all of the system components run as containers in the kube-system namespace; you can check their Pod status:
```
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE
coredns-777d78ff6f-kstsz                0/1     Pending   0          14m   <none>          <none>
coredns-777d78ff6f-rlfr5                0/1     Pending   0          14m   <none>          <none>
etcd-k8s-master01                       1/1     Running   0          14m   192.168.0.201   k8s-master01
kube-apiserver-k8s-master01             1/1     Running   0          13m   192.168.0.201   k8s-master01
kube-controller-manager-k8s-master01    1/1     Running   0          13m   192.168.0.201   k8s-master01
kube-proxy-8d4qc                        1/1     Running   0          14m   192.168.0.201   k8s-master01
kube-scheduler-k8s-master01             1/1     Running   0          13m   192.168.0.201   k8s-master01
```
Join the other Master nodes to the cluster:
```bash
kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa
```
Join the Worker nodes to the cluster:

```bash
kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908
```
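If the 24-hour token has already expired by the time a node joins, fresh credentials can be generated on Master01; a sketch:

```bash
# Print a new worker join command with a fresh token
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key (for --control-plane joins)
kubeadm init phase upload-certs --upload-certs
```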
Check the cluster status:
```
[root@k8s-master01]# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.0
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.0
k8s-master03   NotReady   control-plane,master   31s     v1.20.0
k8s-node01     NotReady   <none>                 32s     v1.20.0
k8s-node02     NotReady   <none>                 88s     v1.20.0
```
The following steps are executed only on Master01
```bash
cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/
```
Modify the following sections of calico-etcd.yaml:
```bash
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
```
Create Calico:
```bash
kubectl apply -f calico-etcd.yaml
```
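Once the Calico Pods reach Running, the nodes should move from NotReady to Ready; a quick check:

```bash
kubectl get pods -n kube-system | grep calico
kubectl get node
```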
In newer versions of Kubernetes, system resource metrics are collected by Metrics Server, which gathers memory, disk, CPU, and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all Node nodes
```bash
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
# copy the certificate to the remaining Node nodes in the same way
```
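As an alternative to copying node by node, a small loop covers every worker at once (hostnames below are the ones from the table above):

```bash
for i in k8s-node01 k8s-node02; do
  ssh $i "mkdir -p /etc/kubernetes/pki"   # make sure the target directory exists
  scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt
done
```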
Install Metrics Server
```
cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
```
After all Pods in the kube-system namespace are up, check the status:
```
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   109m         2%     1296Mi          33%
k8s-master02   99m          2%     1124Mi          29%
k8s-master03   104m         2%     1082Mi          28%
k8s-node01     55m          1%     761Mi           19%
k8s-node02     53m          1%     663Mi           17%
```
Install the Dashboard:

```
cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
```
Add the following launch parameters to the Google Chrome startup command to work around certificate errors when accessing the Dashboard:
```
--test-type --ignore-certificate-errors
```
Change the Dashboard Service type to NodePort:
```bash
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
```
Change ClusterIP to NodePort (skip this step if the type is already NodePort):
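If you prefer a non-interactive change, a patch works as well (a sketch equivalent to the edit above):

```bash
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```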
Check the port number:
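For example, the allocated NodePort can be read from the Service itself:

```bash
# The NodePort appears in the PORT(S) column, e.g. 443:3xxxx/TCP
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
```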
Using your instance's port number, the Dashboard can be reached via the IP of any host running kube-proxy, or via the VIP, plus that port:
Access the Dashboard at https://192.168.0.236:18282 (replace 18282 with your own port) and choose Token as the login method.
View the token value:
```
[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w
```
Enter the token into the Token field and click Sign in to access the Dashboard.