Hostname | IP | Components |
---|---|---|
master-123 (doubles as a node) | 192.168.116.123 | etcd flannel kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy |
node-124 | 192.168.116.124 | flannel kubelet kube-proxy |
Since there are only two machines for now, and an etcd cluster needs an odd number of members to elect a leader reliably, etcd is installed on a single machine for the time being.
On 192.168.116.123 run: `hostnamectl --static set-hostname master-123`
On 192.168.116.124 run: `hostnamectl --static set-hostname node-124`
On both machines, edit the hosts file:

```
vi /etc/hosts
```

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.116.123 master-123
192.168.116.124 node-124
```
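A quick sanity check (not part of the original steps) to confirm the hostname took effect and that the new entries resolve:

```
# Verify the static hostname and hosts-file resolution
hostnamectl status | grep "Static hostname"
ping -c 1 master-123
ping -c 1 node-124
```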
Install the cfssl toolchain:

```
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
sudo mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
```
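To confirm the binaries are on the PATH (an extra check, not in the original notes):

```
cfssl version
which cfssljson cfssl-certinfo
```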
Create the CA signing configuration, config.json:
{ "signing": { "default": { "expiry": "87600h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "87600h" } } } }
Create the CA certificate signing request, csr.json:
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "ShangHai", "L": "ShangHai", "O": "k8s", "OU": "System" } ] }
```
mkdir -p /opt/ssl
cd /opt/ssl
```
Generate the CA:

```
cfssl gencert -initca csr.json | cfssljson -bare ca
```
```
[root@localhost ssl]# ls -ltr
total 20
-rw-r--r--. 1 root root  387 Jul 27 15:01 config.json
-rw-r--r--. 1 root root  267 Jul 27 15:04 csr.json
-rw-r--r--. 1 root root 1363 Jul 27 15:07 ca.pem
-rw-------. 1 root root 1675 Jul 27 15:07 ca-key.pem
-rw-r--r--. 1 root root 1005 Jul 27 15:07 ca.csr
```
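Optionally, inspect the CA certificate to confirm the subject and validity period (a check added here, not in the original write-up):

```
# Either of these shows the CA's subject and expiry dates
openssl x509 -in ca.pem -noout -subject -dates
cfssl-certinfo -cert ca.pem
```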
Create the certificate directory:
mkdir -p /etc/kubernetes/ssl
Copy all of the files into it:
cp * /etc/kubernetes/ssl/
Copy the files to every k8s machine:
scp * root@192.168.116.124:/etc/kubernetes/ssl/
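With only one extra node the single scp above is enough; if more nodes join later, a small loop avoids repetition (the node list below is just an example):

```
# Hypothetical list of additional node IPs -- replace with your own
for node in 192.168.116.124; do
  ssh root@"$node" "mkdir -p /etc/kubernetes/ssl"
  scp /opt/ssl/* root@"$node":/etc/kubernetes/ssl/
done
```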
etcd is a highly available key-value store that is designed from the ground up to run as a cluster. Because the Raft algorithm needs a majority of nodes to vote on each decision, an etcd cluster should have an odd number of members; 3, 5, or 7 nodes are the recommended sizes.
Upload the package etcd-3.1.7-1.el7.x86_64.rpm and install it:

```
rpm -ivh etcd-3.1.7-1.el7.x86_64.rpm
```

Download link: http://www.rpmfind.net/linux/...
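A quick check that the RPM installed the binary and the systemd unit (not part of the original steps):

```
etcd --version
rpm -ql etcd | grep etcd.service
```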
For now etcd is created only on the single master; two more etcd members will be added later.
```
cd /opt/ssl
vi etcd-csr.json
```

```
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.116.120",
    "192.168.116.123",
    "192.168.116.124",
    "192.168.116.125"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```
The hosts list above should include the IPs of every current and planned etcd node; otherwise the certificate has to be regenerated and redistributed later.
Generate the etcd key and certificate:
```
cfssl gencert -ca=/opt/ssl/ca.pem \
  -ca-key=/opt/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
```
Check the generated files:
```
[root@localhost ssl]# ls -ltr etcd*
-rw-r--r--. 1 root root  295 Jul 27 15:22 etcd-csr.json
-rw-r--r--. 1 root root 1440 Jul 27 15:24 etcd.pem
-rw-------. 1 root root 1679 Jul 27 15:24 etcd-key.pem
-rw-r--r--. 1 root root 1066 Jul 27 15:24 etcd.csr
```
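Before copying the certificate around, it is worth confirming that it contains the SANs listed in etcd-csr.json (an optional check, not in the original notes):

```
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
```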
Copy them to the etcd servers:
```
cp etcd* /etc/kubernetes/ssl/
scp etcd* root@198.15.5.28:/etc/kubernetes/ssl
scp etcd* root@198.15.5.29:/etc/kubernetes/ssl
```
If etcd runs as a non-root user, it will get a permission error when reading the certificates. Run the following on every etcd node:
chmod 644 /etc/kubernetes/ssl/etcd-key.pem
Create the systemd unit:

```
vi /usr/lib/systemd/system/etcd.service
```

```
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.116.123:2380 \
  --listen-peer-urls=https://192.168.116.123:2380 \
  --listen-client-urls=https://192.168.116.123:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.116.123:2379 \
  --initial-cluster-token=k8s-etcd-cluster \
  --initial-cluster=etcd1=https://192.168.116.123:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
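Because the unit runs as User=etcd, the data directory has to exist and be writable by that user. The etcd RPM normally takes care of this, but it does no harm to verify (an extra step, not in the original):

```
# Usually created by the RPM; create it and fix ownership if needed
mkdir -p /var/lib/etcd
chown -R etcd:etcd /var/lib/etcd
```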
If there are multiple etcd nodes, adjust the IPs (and --name) on each node, and --initial-cluster should list all members rather than the single entry etcd1=https://192.168.116.123:2380; see the sketch below.
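For example, the relevant ExecStart flags for a hypothetical three-member cluster might look like this on etcd1 (the IPs for etcd2 and etcd3 are placeholders for the nodes to be added later):

```
# Excerpt of the ExecStart flags only; the other flags stay as above
--name=etcd1 \
--initial-cluster=etcd1=https://192.168.116.123:2380,etcd2=https://192.168.116.124:2380,etcd3=https://192.168.116.120:2380 \
--initial-cluster-state=new \
```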
Turn off the firewall on every node.
Disable it at boot: `systemctl disable firewalld`
Stop it now: `systemctl stop firewalld`
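If you would rather keep firewalld running, opening just the etcd client and peer ports should also work (an alternative to the original approach of disabling the firewall):

```
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload
```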
Start etcd:
```
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```
Check cluster health and the member list:

```
etcdctl --endpoints=https://192.168.116.123:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  cluster-health

etcdctl --endpoints=https://192.168.116.123:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  member list
```
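As a final smoke test, write and read back a key over the TLS endpoint (the key name is arbitrary; this uses the v2 etcdctl API that ships with etcd 3.1):

```
# Write a throwaway key, then read it back
etcdctl --endpoints=https://192.168.116.123:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  set /smoke-test ok

etcdctl --endpoints=https://192.168.116.123:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  get /smoke-test
```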
Part 2 will be posted tonight:
[Installing kubernetes-1.7.3 from scratch] 2. Configuring flannel, Docker, and Harbor, and what each is for