Reference:
https://www.cnrancher.com/docs/rancher/v2.x/cn/installation/ha-install/node
| option | required | description |
| --- | --- | --- |
| address | yes | public domain name or IP address |
| user | yes | a user that can run docker commands |
| role | yes | list of Kubernetes roles assigned to the node |
| internal_address | no | private domain name or IP address for internal cluster communication |
| ssh_key_path | no | path to the SSH private key used to authenticate to the node (defaults to ~/.ssh/id_rsa) |
The table above means: to install with RKE you need the IP address of every server, a non-root user, and passwordless SSH between all the servers. Here a key pair is generated on the test-kube-master-01 node and its public key is distributed to the other master nodes.

## DNS resolution

There is no DNS server available, so the hosts entries have to be configured on every node:
```
cat >> /etc/hosts << EOF
172.18.1.4 test-kube-master-01
172.18.1.5 test-kube-master-02
172.18.1.9 test-kube-master-03
172.18.1.6 test-kube-node-01
172.18.1.7 test-kube-node-02
172.18.1.8 test-kube-node-03
EOF
```
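Since the same entries must land on every node, it can help to keep the mapping in one place. A minimal sketch (node list copied from the hosts block above; the actual append is commented out because it needs sudo on each host):

```shell
# Single source of truth for the cluster's name/IP mapping.
ENTRIES='172.18.1.4 test-kube-master-01
172.18.1.5 test-kube-master-02
172.18.1.9 test-kube-master-03
172.18.1.6 test-kube-node-01
172.18.1.7 test-kube-node-02
172.18.1.8 test-kube-node-03'

# On each node, append them as a user with sudo rights:
# printf '%s\n' "$ENTRIES" | sudo tee -a /etc/hosts

# Print the entries so they can be reviewed before applying.
printf '%s\n' "$ENTRIES"
```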
## master

How to configure a regular user to run docker is covered in Chapter 1, basic environment preparation. Generate the key pair on test-kube-master-01:
```
wangpeng@test-kube-master-01:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/wangpeng/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/wangpeng/.ssh/id_rsa.
Your public key has been saved in /home/wangpeng/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:66NgD0FtJuv8ASIEQcdDoCyOklo+S5TJMgkTC8pDKwE wangpeng@test-kube-master-01
The key's randomart image is:
+---[RSA 2048]----+
|E=+o             |
|O+oo .           |
|O* + +           |
|B=.+ =           |
|O.B + S          |
|oB + o .         |
|. + * . .        |
| . + = o.        |
|  . +...         |
+----[SHA256]-----+
```
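The interactive session above can also be done non-interactively, which is handy in a provisioning script. A minimal sketch (empty passphrase and default key path are assumptions; generation is skipped if a key already exists):

```shell
# Ensure the .ssh directory exists with safe permissions.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"

# -N "" sets an empty passphrase; -q suppresses the banner and randomart.
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
  ssh-keygen -t rsa -b 2048 -N "" -q -f "$HOME/.ssh/id_rsa"
fi

# Both the private and public key should now exist.
ls "$HOME/.ssh/id_rsa" "$HOME/.ssh/id_rsa.pub"
```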
The usual way to distribute the key is:

```
ssh-copy-id wangpeng@test-kube-masterxx   # including the local machine
```
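The per-host ssh-copy-id calls can be wrapped in a loop. A sketch assuming the three master hostnames from the hosts file above (the actual copy is commented out because it needs the live hosts):

```shell
# The three masters that will form the cluster.
MASTERS="test-kube-master-01 test-kube-master-02 test-kube-master-03"

for host in $MASTERS; do
  echo "would copy key to wangpeng@${host}"
  # ssh-copy-id "wangpeng@${host}"   # uncomment on the real network
done
```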
## rke and rancher-cluster.yml file configuration

### RKE binary installation

In a browser, open the RKE Releases page and download the latest RKE installer for your operating system.
Here the Linux (Intel/AMD) build, rke_linux-amd64, is used:

```
wget https://github.com/rancher/rke/releases/download/v0.2.4/rke_linux-amd64
```
Run the following command to make the binary executable:

```
chmod +x rke_linux-amd64
```
### RKE configuration

There are two simple ways to create cluster.yml:

- start from a sample cluster.yml and update it for the nodes you will use;
- generate the configuration with the `rke config` wizard.

Using the `rke config` wizard here, only the three masters need to be added, because Rancher will be installed into the k8s cluster formed by those masters.
```
./rke_linux-amd64 config --name cluster.yml
```
```
cat cluster.yml
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.18.1.4
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.5
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: 172.18.1.9
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: wangpeng
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.24-rancher1
  alpine: rancher/rke-tools:v0.1.28
  nginx_proxy: rancher/rke-tools:v0.1.28
  cert_downloader: rancher/rke-tools:v0.1.28
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.28
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  coredns: rancher/coredns-coredns:1.2.6
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.0.0
  kubernetes: rancher/hyperkube:v1.14.1-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
```
Run RKE to create the Kubernetes cluster:

```
./rke_linux-amd64 up --config cluster.yml
```

When it finishes, it should print: `Finished building Kubernetes cluster successfully`.
To get kubectl, copy the hyperkube binary out of any container running the rancher/hyperkube:v1.14.1-rancher1 image, rename it to kubectl, and move it to /usr/bin/kubectl.
```
wangpeng@test-kube-master-01:~$ docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED          STATUS          PORTS   NAMES
af4de55e791f   rancher/nginx-ingress-controller       "/entrypoint.sh /ngi…"   14 minutes ago   Up 14 minutes           k8s_nginx-ingress-controller_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
a572977ce68b   rancher/pause:3.1                      "/pause"                 15 minutes ago   Up 15 minutes           k8s_POD_nginx-ingress-controller-jw7w5_ingress-nginx_da8868d3-972c-11e9-bfa4-0017fa0337ea_0
086c3aad92fe   rancher/coreos-flannel                 "/opt/bin/flanneld -…"   15 minutes ago   Up 15 minutes           k8s_kube-flannel_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8d48b7e7c492   rancher/calico-node                    "start_runit"            15 minutes ago   Up 15 minutes           k8s_calico-node_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
8babca986d3a   rancher/pause:3.1                      "/pause"                 16 minutes ago   Up 16 minutes           k8s_POD_canal-hncc5_kube-system_b957f9ba-972c-11e9-bfa4-0017fa0337ea_0
e9cb76b6ee95   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 16 minutes           kube-proxy
0ed5730300bc   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 16 minutes           kubelet
2b75e5aa7802   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   16 minutes ago   Up 14 minutes           kube-scheduler
46a1002715bd   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago   Up 14 minutes           kube-controller-manager
50c736c7b389   rancher/hyperkube:v1.14.1-rancher1     "/opt/rke-tools/entr…"   17 minutes ago   Up 17 minutes           kube-apiserver
33387faa5469   rancher/rke-tools:v0.1.28              "/opt/rke-tools/rke-…"   18 minutes ago   Up 18 minutes           etcd-rolling-snapshots
3d348dea1e88   rancher/coreos-etcd:v3.2.24-rancher1   "/usr/local/bin/etcd…"   18 minutes ago   Up 18 minutes           etcd
wangpeng@test-kube-master-01:~$ docker cp e9cb76b6ee95:/hyperkube ./
wangpeng@test-kube-master-01:~$ ls
cluster.rkestate  cluster.yml  hyperkube  kube_config_cluster.yml  rke_linux-amd64
wangpeng@test-kube-master-01:~$ sudo mv hyperkube /usr/bin/kubectl
```
```
wangpeng@test-kube-master-01:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
The connection is refused because kubectl has no kubeconfig yet. Use the kube_config_cluster.yml file generated by the RKE install: either set the KUBECONFIG environment variable to the file's path, or copy it to ~/.kube/config:
```
wangpeng@test-kube-master-01:~$ mkdir -pv ~/.kube
mkdir: created directory '/home/wangpeng/.kube'
wangpeng@test-kube-master-01:~$ cp kube_config_cluster.yml ~/.kube/config
```
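Alternatively, instead of copying the file into ~/.kube/config, the KUBECONFIG environment variable can point at it directly (the path below assumes the file sits in the home directory where rke was run):

```shell
# Point kubectl at the kubeconfig generated by RKE.
export KUBECONFIG="$HOME/kube_config_cluster.yml"

# Confirm which config kubectl will use.
echo "$KUBECONFIG"
```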
```
wangpeng@test-kube-master-01:~$ kubectl get nodes
NAME         STATUS   ROLES                      AGE   VERSION
172.18.1.4   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.5   Ready    controlplane,etcd,worker   17m   v1.14.1
172.18.1.9   Ready    controlplane,etcd,worker   17m   v1.14.1
```

```
wangpeng@test-kube-master-01:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-775b55c884-wfdgm     1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-jw7w5            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-sg8gs            1/1     Running     0          23m
ingress-nginx   nginx-ingress-controller-tstp5            1/1     Running     0          23m
kube-system     canal-hncc5                               2/2     Running     0          24m
kube-system     canal-qnxx4                               2/2     Running     0          24m
kube-system     canal-sjpbv                               2/2     Running     0          24m
kube-system     kube-dns-869c7b8d96-rtpmn                 3/3     Running     0          23m
kube-system     kube-dns-autoscaler-78dbfd75b7-twpbm      1/1     Running     0          23m
kube-system     metrics-server-7f6bd4c888-gjwz8           1/1     Running     0          23m
kube-system     rke-ingress-controller-deploy-job-srz6h   0/1     Completed   0          23m
kube-system     rke-kube-dns-addon-deploy-job-lsnm4       0/1     Completed   0          24m
kube-system     rke-metrics-addon-deploy-job-k74sn        0/1     Completed   0          23m
kube-system     rke-network-plugin-deploy-job-t5btw       0/1     Completed   0          24m
```
Save copies of the cluster.yml and kube_config_cluster.yml files (the official docs name them rancher-cluster.yml and kube_config_rancher-cluster.yml); you will need them to maintain and upgrade the Rancher instance.
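A minimal backup sketch (the destination directory is an assumption; cluster.rkestate is included because RKE writes it next to cluster.yml, as seen in the `ls` output above):

```shell
# Dated directory for this backup run.
BACKUP_DIR="$HOME/rke-backup-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"

# Copy whichever of the cluster files exist in the current directory.
for f in cluster.yml kube_config_cluster.yml cluster.rkestate; do
  if [ -f "$f" ]; then
    cp "$f" "$BACKUP_DIR/"
  fi
done

# Review what was backed up.
ls -l "$BACKUP_DIR"
```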