Recovery plan for a K8s cluster whose certificates have expired and whose etcd and apiserver no longer work

In this rather extreme situation, careful planning and careful operation are needed so the cluster is not lost for good. First, the CA root certificates are issued for 10 years and should not have expired yet. Based on these root certificates, we can regenerate a full set of working certificates for every component.
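Before doing anything else, it is worth confirming that the root CAs really are still within their validity period (paths here assume a kubeadm-style layout):

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -enddate
openssl x509 -in /etc/kubernetes/pki/front-proxy-ca.crt -noout -enddate
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -enddate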

Up front, the following plan and steps were drafted; whether they actually work remains to be verified.

Step 1: Prepare the basic files for certificate generation.

ca-csr.json (since the root certificates are still OK, this file is only listed here for reference and will not actually be used)

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
    "expiry": "438000h"
  }
}

ca-config.json (used to sign new certificates with the self-signed root ca.crt and ca.key; it can be shared by all of the steps below)

{
  "signing": {
    "default": {
      "expiry": "43800h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "43800h"
      }
    }
  }
}

Step 2: Regenerate the etcd certificates (note: these are signed by the CA under /etc/kubernetes/pki/etcd/)

etcd-server.json

{
    "CN": "etcdServer",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "etcd",
            "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,localhost,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  etcd-server.json|cfssljson -bare server

etcd-peer.json

{
    "CN": "etcdPeer",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
        "O": "etcd",
        "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,localhost,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  etcd-peer.json|cfssljson -bare peer

etcd-client.json

{
    "CN": "etcdClient",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
        "O": "etcd",
        "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -profile=kubernetes \
  etcd-client.json |cfssljson -bare client
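After generating the three etcd certificates, it is worth verifying that they really chain back to the etcd CA before installing them; a quick check, assuming the cfssljson output files above are still in the working directory:

openssl verify -CAfile ca.crt server.pem peer.pem client.pem
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"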

Step 3: Regenerate the apiserver certificates (note: these are signed by the CA under /etc/kubernetes/pki/)

apiserver.json

{
    "CN": "kube-apiserver",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,<service-cluster-ip>,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  apiserver.json |cfssljson -bare apiserver

apiserver-kubelet-client.json

{
    "CN": "kube-apiserver-kubelet-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
        "O": "system:masters"
        }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -profile=kubernetes \
  apiserver-kubelet-client.json |cfssljson -bare apiserver-kubelet-client
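A quick sanity check of the new apiserver certificates before they are installed (a sketch, assuming the cfssljson output files from this step). The SAN list should contain every name and IP that clients will use; in a kubeadm-style setup that also includes the first IP of the service CIDR (often 10.96.0.1), which is the <service-cluster-ip> placeholder used above:

openssl verify -CAfile ca.crt apiserver.pem apiserver-kubelet-client.pem
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"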

Step 4: Regenerate the front-proxy certificate (note: this is signed by the front-proxy-ca certificate under /etc/kubernetes/pki/; it must be a different CA from the apiserver CA because it is tied to the apiserver's authentication order, so keep this in mind)

front-proxy-client.json

{
    "CN": "front-proxy-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
cfssl gencert \
  -ca=front-proxy-ca.crt \
  -ca-key=front-proxy-ca.key \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client.json |cfssljson -bare front-proxy-client
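Because this certificate must be issued by front-proxy-ca and not by the cluster CA, a quick issuer check helps avoid a hard-to-debug aggregation failure (a sketch, assuming the file names above):

openssl x509 -in front-proxy-client.pem -noout -issuer
openssl verify -CAfile front-proxy-ca.crt front-proxy-client.pem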

Step 5: Create the scheduler, controller-manager, admin, kubelet, and bootstrap certificates. These certificates exist only on the master node and are mainly used to generate controller-manager.conf, scheduler.conf, admin.conf, kubelet.conf and bootstrap-kubelet.conf.

If sa.key and sa.pub already exist under /etc/kubernetes/pki/, they do not need to be regenerated, because they have no expiry.
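If for some reason sa.key/sa.pub are missing, they can be recreated as an ordinary RSA key pair; note that regenerating them invalidates all existing service-account tokens. A minimal sketch:

openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub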

kube-scheduler-csr.json

{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "O": "system:kube-scheduler",
      }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,localhost,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  kube-scheduler-csr.json|cfssljson -bare kube-scheduler

kube-controller-manager-csr.json

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "O": "system:kube-controller-manager",
      }
    ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,localhost,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  kube-controller-manager-csr.json |cfssljson -bare kube-controller-manager

admin-csr.json

{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "system:masters",
    }
  ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

kubelet-csr.json (this approach only suits the case where the kubelet on the master runs without bootstrap)

{
  "CN": "system:node: 小写主机名",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "system:nodes",
    }
  ]
}
cfssl gencert \
  -ca=ca.crt \
  -ca-key=ca.key \
  -config=ca-config.json \
  -hostname=127.0.0.1,localhost,<node-ip>,<lowercase-hostname> \
  -profile=kubernetes \
  kubelet-csr.json |cfssljson -bare kubelet

If bootstrap is also needed, refer to the URLs below:

https://k2r2bai.com/2018/07/17/kubernetes/deploy/manual-install/

https://www.jianshu.com/p/6650954fa973?tdsourcetag=s_pctim_aiomsg

Step 6: After the files above are created, they need to be renamed according to the current Kubernetes naming conventions, and each file must be placed into its proper directory.
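As a reference, a minimal sketch of this renaming, assuming a kubeadm-style layout and the cfssljson output names used above; back up the existing files first and check the mapping against your own installation:

# etcd certificates (from Step 2), on the master that runs etcd
cp server.pem            /etc/kubernetes/pki/etcd/server.crt
cp server-key.pem        /etc/kubernetes/pki/etcd/server.key
cp peer.pem              /etc/kubernetes/pki/etcd/peer.crt
cp peer-key.pem          /etc/kubernetes/pki/etcd/peer.key
cp client.pem            /etc/kubernetes/pki/apiserver-etcd-client.crt
cp client-key.pem        /etc/kubernetes/pki/apiserver-etcd-client.key

# apiserver and front-proxy certificates (from Steps 3 and 4)
cp apiserver.pem                      /etc/kubernetes/pki/apiserver.crt
cp apiserver-key.pem                  /etc/kubernetes/pki/apiserver.key
cp apiserver-kubelet-client.pem       /etc/kubernetes/pki/apiserver-kubelet-client.crt
cp apiserver-kubelet-client-key.pem   /etc/kubernetes/pki/apiserver-kubelet-client.key
cp front-proxy-client.pem             /etc/kubernetes/pki/front-proxy-client.crt
cp front-proxy-client-key.pem         /etc/kubernetes/pki/front-proxy-client.key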

Step 7: At this point the Kubernetes master should be able to start. Next, create the kubeconfig files; refer to the URLs below:

http://www.javashuo.com/article/p-yqpeysxl-kq.html (configuring bootstrap and kubelet authentication)

http://www.javashuo.com/article/p-ylrsbnsn-kq.html (configuring the .kube/config file)

# Set the cluster parameters
kubectl config set-cluster
# Set the client credentials
kubectl config set-credentials
# Set the context parameters
kubectl config set-context
# Set the default context
kubectl config use-context

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://ip:port \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://ip:port \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-cluster kubernetes \
    --certificate-authority=${PKI_DIR}/ca.pem \
    --embed-certs=true \
    --server=https://ip:port \
    --kubeconfig=${K8S_DIR}/admin.conf

kubectl config set-credentials kubernetes-admin \
    --client-certificate=${PKI_DIR}/admin.pem \
    --client-key=${PKI_DIR}/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=${K8S_DIR}/admin.conf

kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=${K8S_DIR}/admin.conf

kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=${K8S_DIR}/admin.conf

kubectl config set-cluster kubernetes \
  --certificate-authority=${PKI_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://ip:port \
  --kubeconfig=${K8S_DIR}/kubelet.conf && \
kubectl config set-credentials system:node:<lowercase-hostname> \
  --client-certificate=${PKI_DIR}/kubelet.pem \
  --client-key=${PKI_DIR}/kubelet-key.pem \
  --embed-certs=true \
  --kubeconfig=${K8S_DIR}/kubelet.conf && \
kubectl config set-context system:node:<lowercase-hostname>@kubernetes \
  --cluster=kubernetes \
  --user=system:node:<lowercase-hostname> \
  --kubeconfig=${K8S_DIR}/kubelet.conf && \
kubectl config use-context system:node:<lowercase-hostname>@kubernetes \
  --kubeconfig=${K8S_DIR}/kubelet.conf
Step 8: Once these files are ready, distribute them to the locations your Kubernetes installation expects, restart the kubelet, and the cluster should come back up.
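A minimal sketch of the final restart and check, assuming a systemd-managed kubelet and a static-pod control plane:

systemctl restart kubelet
# etcd, kube-apiserver, kube-controller-manager and kube-scheduler are static pods
# and should be restarted by the kubelet automatically
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system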