Reference documentation
kube-apiserver download URL
1. Download the package
[root@k8s-node1 server]# wget https://dl.k8s.io/v1.15.5/kubernetes-server-linux-amd64.tar.gz
--2019-11-04 03:23:38--  https://dl.k8s.io/v1.15.5/kubernetes-server-linux-amd64.tar.gz
Resolving dl.k8s.io (dl.k8s.io)... 35.201.71.162
Connecting to dl.k8s.io (dl.k8s.io)|35.201.71.162|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://storage.googleapis.com/kubernetes-release/release/v1.15.5/kubernetes-server-linux-amd64.tar.gz [following]
--2019-11-04 03:23:38--  https://storage.googleapis.com/kubernetes-release/release/v1.15.5/kubernetes-server-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 216.58.197.240, 2404:6800:4004:80f::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|216.58.197.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 443974904 (423M) [application/x-tar]
Saving to: ‘kubernetes-server-linux-amd64.tar.gz’
100%[==============================================================================================>] 443,974,904  978KB/s   in 8m 12s
2019-11-04 03:31:51 (882 KB/s) - ‘kubernetes-server-linux-amd64.tar.gz’ saved [443974904/443974904]
[root@k8s-node1 server]# ll
total 433572
-rw-r--r-- 1 root root 443974904 Oct 15 16:54 kubernetes-server-linux-amd64.tar.gz
[root@k8s-node1 server]# ll -h
total 424M
Unpack it; you can see it contains all of the executables the cluster will need.
[root@k8s-node1 server]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
kubernetes/
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/cloud-controller-manager.tar
kubernetes/server/bin/kubelet
kubernetes/server/bin/cloud-controller-manager.docker_tag
kubernetes/server/bin/kube-apiserver.tar
kubernetes/server/bin/cloud-controller-manager
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/kubeadm
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/kube-scheduler.docker_tag
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/kubectl
kubernetes/server/bin/kube-apiserver
kubernetes/server/bin/hyperkube
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/kube-controller-manager
kubernetes/server/bin/mounter
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/addons/
kubernetes/LICENSES
kubernetes/kubernetes-src.tar.gz
2. Copy the executables to all nodes
[root@k8s-node1 bin]# pwd
/opt/k8s/k8s_software/server/kubernetes/server/bin
[root@k8s-node1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/k8s/bin/
[root@k8s-node1 bin]# scp kube-apiserver kube-scheduler kube-controller-manager root@k8s-node2:/opt/k8s/bin/
kube-apiserver                100%  157MB  93.3MB/s   00:01
kube-scheduler                100%   37MB  90.7MB/s   00:00
kube-controller-manager       100%  111MB  82.7MB/s   00:01
[root@k8s-node1 bin]# scp kube-apiserver kube-scheduler kube-controller-manager root@k8s-node3:/opt/k8s/bin/
kube-apiserver                100%  157MB  96.8MB/s   00:01
kube-scheduler                100%   37MB  72.4MB/s   00:00
kube-controller-manager       100%  111MB 111.1MB/s   00:01
[root@k8s-node1 bin]#
Fix permissions and ownership
[root@k8s-node1 bin]# chmod +x /opt/k8s/bin/* && chown k8s /opt/k8s/bin/*
[root@k8s-node1 bin]# ssh k8s-node2 "chmod +x /opt/k8s/bin/* && chown k8s /opt/k8s/bin/*"
[root@k8s-node1 bin]# ssh k8s-node3 "chmod +x /opt/k8s/bin/* && chown k8s /opt/k8s/bin/*"
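To confirm the binaries were copied intact and are executable on every node, a quick version check helps (a sketch; it relies on the same passwordless SSH used above):

/opt/k8s/bin/kube-apiserver --version        # should print Kubernetes v1.15.5
for node in k8s-node2 k8s-node3; do
  ssh root@${node} "/opt/k8s/bin/kube-apiserver --version"
done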
3. Create the kubernetes certificate and private key
Create the certificate signing request (CSR)
[root@k8s-node1 server]# source /opt/k8s/bin/environment.sh
[root@k8s-node1 server]# echo ${MASTER_VIP}
192.168.174.127
[root@k8s-node1 server]# echo ${CLUSTER_KUBERNETES_SVC_IP}
10.254.0.1
[root@k8s-node1 server]# cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.174.128",
    "192.168.174.129",
    "192.168.174.130",
    "192.168.174.127",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SZ",
      "L": "SZ",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
[root@k8s-node1 server]#
The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the VIP, the apiserver node IPs, and the kubernetes service IP and domain names.
A domain name must not end with a dot (e.g. kubernetes.default.svc.cluster.local. is not allowed), otherwise it fails to parse with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.".
If you use a domain other than cluster.local, e.g. opsnull.com, change the last two entries in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com.
The kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the CIDR given by --service-cluster-ip-range (here 10.254.0.0/16, hence 10.254.0.1).
Note:
"${MASTER_VIP}", "${CLUSTER_KUBERNETES_SVC_IP}",
I replaced these two variables with the real values, 192.168.174.127 and 10.254.0.1 respectively; I suggest replacing them all with real IPs to reduce the chance of problems. A sketch of one way to do that follows.
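If you prefer to keep the variables in the source, one way to still end up with real IPs in the file is to let the shell expand them while writing it (a sketch, assuming environment.sh defines MASTER_VIP and CLUSTER_KUBERNETES_SVC_IP as shown above):

source /opt/k8s/bin/environment.sh
# unquoted EOF, so ${MASTER_VIP} and ${CLUSTER_KUBERNETES_SVC_IP} are expanded to real IPs in the written file
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.174.128",
    "192.168.174.129",
    "192.168.174.130",
    "${MASTER_VIP}",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "SZ", "L": "SZ", "O": "k8s", "OU": "4Paradigm" }
  ]
}
EOF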
Generate the certificate and key
[root@k8s-node1 server]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2019/11/04 03:54:08 [INFO] generate received request
2019/11/04 03:54:08 [INFO] received CSR
2019/11/04 03:54:08 [INFO] generating key: rsa-2048
2019/11/04 03:54:08 [INFO] encoded CSR
2019/11/04 03:54:09 [INFO] signed certificate with serial number 302992339200148812661461827821751283977288224855
2019/11/04 03:54:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node1 server]# ls
kubernetes  kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem  kubernetes-server-linux-amd64.tar.gz
[root@k8s-node1 server]#
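Before distributing the certificate, it is worth checking that all of the expected entries ended up in its SAN list (a sketch using openssl):

# the output should include the node IPs, the VIP 192.168.174.127, 10.254.0.1 and the kubernetes.default.* names
openssl x509 -noout -text -in kubernetes.pem | grep -A1 "Subject Alternative Name"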
Copy the certificate and key to all nodes
[root@k8s-node1 server]# cp kubernetes-key.pem kubernetes.pem /etc/kubernetes/cert/
[root@k8s-node1 server]# scp kubernetes-key.pem kubernetes.pem root@k8s-node2:/etc/kubernetes/cert/
kubernetes-key.pem            100% 1675   441.6KB/s   00:00
kubernetes.pem                100% 1619     1.4MB/s   00:00
[root@k8s-node1 server]# scp kubernetes-key.pem kubernetes.pem root@k8s-node3:/etc/kubernetes/cert/
kubernetes-key.pem            100% 1675     1.7MB/s   00:00
kubernetes.pem
Fix permissions and ownership
[root@k8s-node1 server]# chown -R k8s /etc/kubernetes/cert && chmod +x -R /etc/kubernetes/cert
[root@k8s-node1 server]# ssh k8s-node2 "chown -R k8s /etc/kubernetes/cert && chmod +x -R /etc/kubernetes/cert"
[root@k8s-node1 server]# ssh k8s-node3 "chown -R k8s /etc/kubernetes/cert && chmod +x -R /etc/kubernetes/cert"
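A quick loop can confirm that the key and certificate landed on every node with the expected owner (a sketch, same SSH assumptions as above):

for node in k8s-node1 k8s-node2 k8s-node3; do
  ssh root@${node} "ls -l /etc/kubernetes/cert/kubernetes*.pem"
done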
4. Create the encryption config file
Note that the encryption config file references a variable, so make sure source /opt/k8s/bin/environment.sh has been run.
The variable must be replaced with its real value in the file that is actually deployed.
[root@k8s-node1 server]# echo ${ENCRYPTION_KEY}
Z+bO1U2iMACELZVoyZM4r8kpqi1LiS8IxNshCV44FGQ=
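ENCRYPTION_KEY is just a base64-encoded 32-byte random value; if your environment.sh does not already define one, it can be generated like this (a sketch):

# 32 random bytes, base64-encoded, matching the format of the value above
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo ${ENCRYPTION_KEY}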
[root@k8s-node1 server]# cat encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
[root@k8s-node1 server]#
[root@k8s-node1 server]# cat encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: Z+bO1U2iMACELZVoyZM4r8kpqi1LiS8IxNshCV44FGQ=
      - identity: {}
[root@k8s-node1 server]#
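One way to make sure the real key, and not the literal string ${ENCRYPTION_KEY}, ends up in the deployed file is again to let the shell expand it while writing (a sketch, assuming environment.sh has been sourced):

source /opt/k8s/bin/environment.sh
# unquoted EOF, so ${ENCRYPTION_KEY} is expanded to the real base64 value in the written file
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF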
Copy the encryption config file to /etc/kubernetes on all nodes:
[root@k8s-node1 server]# cp encryption-config.yaml /etc/kubernetes/
[root@k8s-node1 server]# scp encryption-config.yaml root@k8s-node2:/etc/kubernetes/
encryption-config.yaml        100%  240   288.9KB/s   00:00
[root@k8s-node1 server]# scp encryption-config.yaml root@k8s-node3:/etc/kubernetes/
encryption-config.yaml        100%  240   277.6KB/s   00:00
[root@k8s-node1 server]#
5. Create the kube-apiserver systemd unit template file
[root@k8s-node1 server]# cat kube-apiserver.service.template
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRest riction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --advertise-address=##NODE_IP## \
  --bind-address=##NODE_IP## \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=${SERVICE_CIDR} \
  --service-node-port-range=${NODE_PORT_RANGE} \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=${ETCD_ENDPOINTS} \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@k8s-node1 server]#
##--experimental-encryption-provider-config: enables encryption of resources (secrets) at rest.
##--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests.
##--enable-admission-plugins: enables the listed admission plugins, e.g. ServiceAccount and NodeRestriction.
##--service-account-key-file: the public key file used to verify ServiceAccount tokens; it is paired with the private key that kube-controller-manager specifies via --service-account-private-key-file.
##--tls-*-file: the certificate, private key and CA files used by the apiserver.
##--client-ca-file: used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.).
##--kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver uses HTTPS to access the kubelet APIs; RBAC rules must be defined for the user the certificate maps to (the kubernetes.pem certificate above maps to user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized.
##--etcd-cafile, --etcd-certfile, --etcd-keyfile: the CA, certificate and private key used to access etcd.
##--etcd-servers=${ETCD_ENDPOINTS}: the etcd cluster endpoints.
##--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside the node.
##--insecure-port=0: disables listening on the insecure port (8080).
##--service-cluster-ip-range: the Service cluster IP range.
##--service-node-port-range: the port range available to NodePort services.
##--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1.
##--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap.
##--apiserver-count=3: the number of kube-apiserver instances in the cluster (all instances serve requests; unlike kube-controller-manager and kube-scheduler there is no leader election among apiservers).
##User=k8s: run the service as the k8s user.
Distribute the template file to the nodes, renaming it to kube-apiserver.service
[root@k8s-node1 server]# cp kube-apiserver.service.template /etc/systemd/system/kube-apiserver.service
[root@k8s-node1 server]# scp kube-apiserver.service.template root@k8s-node2:/etc/systemd/system/kube-apiserver.service
kube-apiserver.service.template    100% 1651   948.8KB/s   00:00
[root@k8s-node1 server]# scp kube-apiserver.service.template root@k8s-node3:/etc/systemd/system/kube-apiserver.service
kube-apiserver.service.template    100% 1651     1.4MB/s   00:00
On all nodes, replace the variables referenced in the file: ##NODE_IP##, ${SERVICE_CIDR}, ${NODE_PORT_RANGE}, ${ETCD_ENDPOINTS}
Replace NODE_IP with sed, for example as below (on each node, use that node's own entry in NODE_IPS; the ##NODE_NAME## substitution is harmless here since the template only contains ##NODE_IP##):
[root@k8s-node1 server]# source /opt/k8s/bin/environment.sh
[root@k8s-node1 server]# sed -i -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[0]}/" /etc/systemd/system/kube-apiserver.service
Replace ${SERVICE_CIDR}, ${NODE_PORT_RANGE} and ${ETCD_ENDPOINTS} by hand; their real values are shown below, and a sed sketch follows the output.
[root@k8s-node1 server]# echo ${SERVICE_CIDR}
10.254.0.0/16
[root@k8s-node1 server]# echo ${NODE_PORT_RANGE}
8400-9000
[root@k8s-node1 server]# echo ${ETCD_ENDPOINTS}
https://192.168.174.128:2379,https://192.168.174.129:2379,https://192.168.174.130:2379
[root@k8s-node1 server]#
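If you would rather not edit by hand, the same substitutions can be done with sed on each node; a sketch (the | delimiter avoids clashes with the slashes in ETCD_ENDPOINTS, and [$] matches the literal $ of the placeholder):

source /opt/k8s/bin/environment.sh
sed -i \
  -e "s|[$]{SERVICE_CIDR}|${SERVICE_CIDR}|" \
  -e "s|[$]{NODE_PORT_RANGE}|${NODE_PORT_RANGE}|" \
  -e "s|[$]{ETCD_ENDPOINTS}|${ETCD_ENDPOINTS}|" \
  /etc/systemd/system/kube-apiserver.service
# sanity-check the substituted flags
grep -E "service-cluster-ip-range|service-node-port-range|etcd-servers" /etc/systemd/system/kube-apiserver.service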
Here is one of the resulting files:
[root@k8s-node1 server]# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRest riction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --advertise-address=192.168.174.128 \
  --bind-address=192.168.174.128 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=8400-9000 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.174.128:2379,https://192.168.174.129:2379,https://192.168.174.130:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@k8s-node1 server]#
6. Start the service
The log directory must be created before starting; do this on all three nodes.
[root@k8s-node1 server]# mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes
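The same directory is needed on k8s-node2 and k8s-node3, for example (a sketch using the SSH access shown earlier):

for node in k8s-node2 k8s-node3; do
  ssh root@${node} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
done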
Starting the service fails with errors:
Nov  4 04:45:58 k8s-node1 kube-apiserver: error: [enable-admission-plugins plugin "Initializers" is unknown, enable-admission-plugins plugin "NodeRest" is unknown]
Nov  4 04:48:05 k8s-node1 kube-apiserver: error: enable-admission-plugins plugin "Initializers" is unknown
Remove Initializers and the broken "NodeRest riction" entry from --enable-admission-plugins in the unit file: the Initializers admission plugin no longer exists in Kubernetes 1.15, and the stray space makes the apiserver see an unknown plugin named "NodeRest".
Start the service again and it runs normally:
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
[root@k8s-node1 server]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-11-04 04:50:36 EST; 1min 19s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 18829 (kube-apiserver)
    Tasks: 8
   Memory: 239.9M
   CGroup: /system.slice/kube-apiserver.service
           └─18829 /opt/k8s/bin/kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorag...

Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.488789   18829 controller.go:606] quota admission added evaluat...k8s.io
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.492912   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.503153   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.514952   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.526567   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.536384   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.546283   18829 storage_rbac.go:284] created rolebinding.rbac.au...system
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.558113   18829 storage_rbac.go:284] created rolebinding.rbac.au...public
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: W1104 04:50:38.598970   18829 lease.go:223] Resetting endpoints for master ser...4.128]
Nov 04 04:50:38 k8s-node1 kube-apiserver[18829]: I1104 04:50:38.599671   18829 controller.go:606] quota admission added evaluat...points
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 server]#
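Another quick sanity check is that the secure port 6443 is listening on the node's IP and that nothing is bound to the insecure port 8080, since --insecure-port=0 disables it (a sketch using ss; netstat works as well):

# expect a LISTEN entry on 192.168.174.128:6443 owned by kube-apiserver, and nothing on 8080
ss -lntp | grep -E "6443|8080"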
7. Run a few commands to test. In the output below, scheduler and controller-manager report Unhealthy because they have not been deployed yet; the etcd members are Healthy and the kubernetes service already has its ClusterIP.
[root@k8s-node1 server]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-1               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
[root@k8s-node1 server]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   2m7s
[root@k8s-node1 server]# kubectl cluster-info
Kubernetes master is running at https://192.168.174.127:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-node1 server]#
Query the data stored in etcd
[root@k8s-node1 server]# ETCDCTL_API=3 etcdctl --endpoints=${ETCD_ENDPOINTS} --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem get /registry/ --prefix --keys-only /registry/apiregistration.k8s.io/apiservices/v1. /registry/apiregistration.k8s.io/apiservices/v1.apps /registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.autoscaling /registry/apiregistration.k8s.io/apiservices/v1.batch /registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io /registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.apps /registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.batch /registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.coordination.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.extensions /registry/apiregistration.k8s.io/apiservices/v1beta1.networking.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.node.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.policy /registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.scheduling.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io /registry/apiregistration.k8s.io/apiservices/v1beta2.apps /registry/apiregistration.k8s.io/apiservices/v2beta1.autoscaling /registry/apiregistration.k8s.io/apiservices/v2beta2.autoscaling /registry/clusterrolebindings/cluster-admin /registry/clusterrolebindings/system:basic-user /registry/clusterrolebindings/system:controller:attachdetach-controller /registry/clusterrolebindings/system:controller:certificate-controller /registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller /registry/clusterrolebindings/system:controller:cronjob-controller /registry/clusterrolebindings/system:controller:daemon-set-controller /registry/clusterrolebindings/system:controller:deployment-controller /registry/clusterrolebindings/system:controller:disruption-controller /registry/clusterrolebindings/system:controller:endpoint-controller /registry/clusterrolebindings/system:controller:expand-controller /registry/clusterrolebindings/system:controller:generic-garbage-collector /registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler /registry/clusterrolebindings/system:controller:job-controller /registry/clusterrolebindings/system:controller:namespace-controller /registry/clusterrolebindings/system:controller:node-controller /registry/clusterrolebindings/system:controller:persistent-volume-binder /registry/clusterrolebindings/system:controller:pod-garbage-collector /registry/clusterrolebindings/system:controller:pv-protection-controller 
/registry/clusterrolebindings/system:controller:pvc-protection-controller /registry/clusterrolebindings/system:controller:replicaset-controller /registry/clusterrolebindings/system:controller:replication-controller /registry/clusterrolebindings/system:controller:resourcequota-controller /registry/clusterrolebindings/system:controller:route-controller /registry/clusterrolebindings/system:controller:service-account-controller /registry/clusterrolebindings/system:controller:service-controller /registry/clusterrolebindings/system:controller:statefulset-controller /registry/clusterrolebindings/system:controller:ttl-controller /registry/clusterrolebindings/system:discovery /registry/clusterrolebindings/system:kube-controller-manager /registry/clusterrolebindings/system:kube-dns /registry/clusterrolebindings/system:kube-scheduler /registry/clusterrolebindings/system:node /registry/clusterrolebindings/system:node-proxier /registry/clusterrolebindings/system:public-info-viewer /registry/clusterrolebindings/system:volume-scheduler /registry/clusterroles/admin /registry/clusterroles/cluster-admin /registry/clusterroles/edit /registry/clusterroles/system:aggregate-to-admin /registry/clusterroles/system:aggregate-to-edit /registry/clusterroles/system:aggregate-to-view /registry/clusterroles/system:auth-delegator /registry/clusterroles/system:basic-user /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient /registry/clusterroles/system:controller:attachdetach-controller /registry/clusterroles/system:controller:certificate-controller /registry/clusterroles/system:controller:clusterrole-aggregation-controller /registry/clusterroles/system:controller:cronjob-controller /registry/clusterroles/system:controller:daemon-set-controller /registry/clusterroles/system:controller:deployment-controller /registry/clusterroles/system:controller:disruption-controller /registry/clusterroles/system:controller:endpoint-controller /registry/clusterroles/system:controller:expand-controller /registry/clusterroles/system:controller:generic-garbage-collector /registry/clusterroles/system:controller:horizontal-pod-autoscaler /registry/clusterroles/system:controller:job-controller /registry/clusterroles/system:controller:namespace-controller /registry/clusterroles/system:controller:node-controller /registry/clusterroles/system:controller:persistent-volume-binder /registry/clusterroles/system:controller:pod-garbage-collector /registry/clusterroles/system:controller:pv-protection-controller /registry/clusterroles/system:controller:pvc-protection-controller /registry/clusterroles/system:controller:replicaset-controller /registry/clusterroles/system:controller:replication-controller /registry/clusterroles/system:controller:resourcequota-controller /registry/clusterroles/system:controller:route-controller /registry/clusterroles/system:controller:service-account-controller /registry/clusterroles/system:controller:service-controller /registry/clusterroles/system:controller:statefulset-controller /registry/clusterroles/system:controller:ttl-controller /registry/clusterroles/system:csi-external-attacher /registry/clusterroles/system:csi-external-provisioner /registry/clusterroles/system:discovery /registry/clusterroles/system:heapster /registry/clusterroles/system:kube-aggregator /registry/clusterroles/system:kube-controller-manager /registry/clusterroles/system:kube-dns 
/registry/clusterroles/system:kube-scheduler /registry/clusterroles/system:kubelet-api-admin /registry/clusterroles/system:node /registry/clusterroles/system:node-bootstrapper /registry/clusterroles/system:node-problem-detector /registry/clusterroles/system:node-proxier /registry/clusterroles/system:persistent-volume-provisioner /registry/clusterroles/system:public-info-viewer /registry/clusterroles/system:volume-scheduler /registry/clusterroles/view /registry/configmaps/kube-system/extension-apiserver-authentication /registry/masterleases/192.168.174.128 /registry/masterleases/192.168.174.129 /registry/masterleases/192.168.174.130 /registry/namespaces/default /registry/namespaces/kube-node-lease /registry/namespaces/kube-public /registry/namespaces/kube-system /registry/priorityclasses/system-cluster-critical /registry/priorityclasses/system-node-critical /registry/ranges/serviceips /registry/ranges/servicenodeports /registry/rolebindings/kube-public/system:controller:bootstrap-signer /registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader /registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager /registry/rolebindings/kube-system/system::leader-locking-kube-scheduler /registry/rolebindings/kube-system/system:controller:bootstrap-signer /registry/rolebindings/kube-system/system:controller:cloud-provider /registry/rolebindings/kube-system/system:controller:token-cleaner /registry/roles/kube-public/system:controller:bootstrap-signer /registry/roles/kube-system/extension-apiserver-authentication-reader /registry/roles/kube-system/system::leader-locking-kube-controller-manager /registry/roles/kube-system/system::leader-locking-kube-scheduler /registry/roles/kube-system/system:controller:bootstrap-signer /registry/roles/kube-system/system:controller:cloud-provider /registry/roles/kube-system/system:controller:token-cleaner /registry/services/endpoints/default/kubernetes /registry/services/specs/default/kubernetes [root@k8s-node1 server]#
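Once a Secret exists (none has been created at this point), the encryption config from step 4 can also be verified by reading the raw value from etcd: it should carry the k8s:enc:aescbc:v1: prefix instead of plain text. A hedged sketch, with environment.sh sourced as before; test-secret is a hypothetical name:

# create a throwaway secret, then dump its raw etcd value; the ASCII column should show k8s:enc:aescbc:v1:key1
kubectl create secret generic test-secret -n default --from-literal=foo=bar
ETCDCTL_API=3 etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head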