The main steps follow this guide: https://www.kubernetes.org.cn/5551.html
This example installs Kubernetes 1.15 with kubeadm on CentOS 7.
Prerequisite: a way to get past the firewall (proxy/VPN), since some steps need to reach Google-hosted services.
Notes on the main steps:
1. Install Docker (details omitted here).
2. Load the IPVS kernel modules, which kube-proxy will use:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
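One addition the original notes leave out: kube-proxy's ipvs mode also depends on the ipset package, and ipvsadm is handy later for inspecting the rules (step 10 switches kube-proxy to ipvs):

# ipset is required for ipvs mode; ipvsadm is optional but useful for debugging
yum install -y ipset ipvsadm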
3. Configure the Kubernetes yum repo. The Google repo needs proxy access because it lives at https://packages.cloud.google.com; the Aliyun mirror works without one.
# Use one of the two repos below (the second heredoc would overwrite the first).
# Google's k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Aliyun's k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
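setenforce 0 only lasts until reboot, and kubeadm also expects swap to be off and bridged traffic to be visible to iptables. A sketch of those standard prerequisites, which these notes otherwise skip:

# keep SELinux permissive across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# kubelet refuses to start while swap is enabled (unless explicitly overridden)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# make bridged traffic traverse iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system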
4. Install kubeadm, kubelet, and kubectl:
yum makecache fast
yum install -y kubelet kubeadm kubectl
# start kubelet
systemctl enable kubelet && systemctl start kubelet
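Since step 5 pre-pulls v1.15.1 images, it may be safer to pin the package versions too rather than take whatever is newest in the repo; a hedged sketch:

# pin packages to match the v1.15.1 images pulled below
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1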
5. Pre-pull the images to speed up installation.
# If k8s.gcr.io is reachable, you can pull directly:
kubeadm config images pull
# Otherwise, pull the images 1.15 needs from the azk8s.cn mirror and re-tag them:
docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-controller-manager:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-scheduler:v1.15.1
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.15.1
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/etcd:3.3.10
docker pull gcr.azk8s.cn/google-containers/coredns:1.3.1
docker tag gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag gcr.azk8s.cn/google-containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag gcr.azk8s.cn/google-containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
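The pull/tag pairs can also be collapsed into a loop; this sketch does exactly what the commands above do:

# pull each image from the mirror, then re-tag it under k8s.gcr.io
images="kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 kube-scheduler:v1.15.1 \
kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1"
for img in $images; do
  docker pull gcr.azk8s.cn/google-containers/$img
  docker tag gcr.azk8s.cn/google-containers/$img k8s.gcr.io/$img
done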
6. Run kubeadm init (on the master node only) to initialize the installation. Per the output it prints, run the following so the kubectl command can talk to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
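The init command itself isn't recorded in these notes. A minimal sketch: the version matches the pre-pulled images, and 10.10.5.63 is the master address that shows up in the join command in step 8, so adjust both to your environment:

kubeadm init --kubernetes-version=v1.15.1 \
    --apiserver-advertise-address=10.10.5.63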
7. Install the pod network; this example uses Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Note: at this point a single-node k8s cluster is up; kubectl get pods -n kube-system should show every pod in Running state.
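A quick way to confirm, beyond the pod list:

kubectl get nodes                  # the master should report Ready once Weave is up
kubectl get pods -n kube-system    # all pods should be Running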
8. Join other nodes to the cluster (start kubelet on each node first):
# start kubelet
systemctl start kubelet
kubeadm join 10.10.5.63:6443 --token nuscgj.s7lveu88id4l5dq3 \
    --discovery-token-ca-cert-hash sha256:7fd76b241a139d72f5011a8b94f38d0b884495b37f63935ed52aff03e924a8ba
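Not in the original notes: the bootstrap token above expires after 24 hours by default. On the master, a fresh join command can be generated with:

kubeadm token create --print-join-command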
9. By default the master node carries the node-role.kubernetes.io/master taint, which keeps regular pods from being scheduled onto it; remove the taint:
kubectl taint node master node-role.kubernetes.io/master-
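To double-check that the taint is gone (master here is the node name from the command above):

kubectl describe node master | grep -i taint    # expect: Taints: <none>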
10. Switch kube-proxy's mode to ipvs (the default is iptables):
# edit the ConfigMap and set mode to "ipvs"
kubectl edit cm kube-proxy -n kube-system
# delete the kube-proxy pods so the new config takes effect
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
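To confirm the proxier really switched, check a recreated pod's log, or dump the IPVS table if ipvsadm is installed (see the sketch under step 2). The pod name below is a placeholder:

# substitute a real pod name from: kubectl get pod -n kube-system | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx | grep -i ipvs    # expect "Using ipvs Proxier"
ipvsadm -ln                                                    # lists the kube-proxy virtual servers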
11. Install the dashboard, Rook-Ceph, and other helpers that make experimenting easier.
# Install the dashboard; see: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Install Rook-Ceph; see: https://rook.io/docs/rook/v1.0/ceph-quickstart.html
git clone https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster-test.yaml
# Check that Ceph is healthy:
kubectl exec -it rook-ceph-tools-8fd8977f-9csxd bash -n rook-ceph
[root@hadoop002 /]# ceph status
  cluster:
    id:     36b6bd6c-9651-413b-b2a9-e60126e4beda
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum a (age 11m)
    mgr: a(active, since 10m)
    osd: 3 osds: 3 up (since 9m), 3 in (since 9m)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   36 GiB used, 42 GiB / 78 GiB avail
    pgs:
Note: seeing health: HEALTH_OK means the Ceph cluster is working normally.
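One follow-up these notes skip: dashboard v1.10.1 asks for a bearer token at login. A hedged sketch of creating an admin account and printing its token (the name dashboard-admin is arbitrary, not from the original):

# create a cluster-admin ServiceAccount for dashboard login
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the account's secret, which includes the login token
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')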