Installing Kubernetes 1.0.3 on CentOS 7.2

As of September 1, 2015, CentOS has added Kubernetes to its official repositories, which makes installation considerably easier.

The component versions are as follows:

Kubernetes-1.0.3
docker-1.8.2
flannel-0.5.3
etcd-2.1.1
The deployment roles are as follows:
Three virtual machines running 64-bit CentOS 7.2:
master:192.168.32.15
minion1:192.168.32.16
minion2:192.168.32.17

1. Preparation

Disable the firewall on every machine to avoid conflicts with Docker's own iptables rules:

systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
vim /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
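The same edit can be made non-interactively with sed; a minimal sketch run against a temporary stand-in file (on a real host, point it at /etc/selinux/config instead):

```shell
# Sketch: disable SELinux non-interactively. A temp copy stands in for
# /etc/selinux/config so this is safe to try anywhere.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```

The config change only takes effect after a reboot; `setenforce 0` turns enforcement off immediately for the current boot.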
Install Docker on the two minion machines:
yum -y install docker
yum -y update
reboot

On CentOS, Docker uses devicemapper as its storage backend. A fresh install initially falls back to loopback devices, which causes Docker to fail at startup, so run the update before starting it.

Update the Docker startup configuration

vim /etc/sysconfig/docker

Add -H tcp://0.0.0.0:2375 so the remote API is available for maintenance later; the final configuration is:

OPTIONS=--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://
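A quick sanity check that the remote API flag made it into the file; sketched here against a temporary copy (substitute /etc/sysconfig/docker on a real minion):

```shell
# Sketch: verify the OPTIONS line exposes the Docker remote API on
# tcp://0.0.0.0:2375. A temp file stands in for /etc/sysconfig/docker.
cfg=$(mktemp)
echo 'OPTIONS=--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://' > "$cfg"
if grep -q 'tcp://0.0.0.0:2375' "$cfg"; then
  echo "remote API listener configured"
fi
```

Note that port 2375 serves the API without authentication or TLS, so exposing it on all interfaces is only reasonable on a trusted network.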

One note up front: when Kubernetes runs pods it also starts a companion image called pause. Pull that image from docker.io first, then retag it with docker:

docker pull docker.io/kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 gcr.io/google_containers/pause

2. Installing and configuring the master node

Install etcd and kubernetes-master:

yum -y install etcd kubernetes-master
Edit the etcd configuration file:
# egrep -v "^#" /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.32.15:2379"
Edit the kube-master configuration files:
# egrep -v '^#' /etc/kubernetes/apiserver | grep -v '^$'
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.32.15:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
# egrep -v '^#' /etc/kubernetes/controller-manager | grep -v '^$'
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
[root@localhost ~]# egrep -v '^#' /etc/kubernetes/config | egrep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"
Start the services:
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager

Store the flannel network configuration in etcd; it will be pushed to the flannel service on each minion:

etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
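etcdctl stores the value verbatim, so it is worth validating the JSON before pushing it; a minimal sketch, assuming python3 is on the PATH:

```shell
# Sketch: validate the flannel network config JSON before handing it to
# etcdctl mk. A malformed value here would break flannel on every minion.
cfg='{"Network":"172.17.0.0/16"}'
echo "$cfg" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'
```

After pushing, `etcdctl get /coreos.com/network/config` reads the value back for confirmation.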
3. Installing and configuring the minion nodes
yum -y install kubernetes-node flannel
Edit the kube-node and flannel configuration files:
# egrep -v '^#' /etc/kubernetes/config | grep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"
# egrep -v '^#' /etc/kubernetes/kubelet | grep -v '^$'
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname_override=192.168.32.16"
KUBELET_API_SERVER="--api_servers=http://192.168.32.15:8080"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"
Point flannel at the etcd service by editing /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.32.15:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
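Both minions use identical flanneld settings, so the file can be rendered from a single variable; sketched here against a temporary file (on a real minion the target is /etc/sysconfig/flanneld):

```shell
# Sketch: render the flanneld sysconfig from the master IP so both
# minions stay consistent. A temp file stands in for the real path.
MASTER_IP=192.168.32.15
out=$(mktemp)
cat > "$out" <<EOF
FLANNEL_ETCD="http://${MASTER_IP}:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
EOF
cat "$out"
```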
Start the services:
systemctl enable flanneld kubelet kube-proxy
systemctl restart flanneld docker
systemctl start kubelet kube-proxy
On each minion you can now see two network interfaces, docker0 and flannel0; their IP addresses differ from machine to machine:
#minion1
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 172.17.98.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:9a:01:ca:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.98.1/24 scope global docker0
       valid_lft forever preferred_lft forever

#minion2
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 172.17.67.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:25:be:ba:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.67.1/24 scope global docker0
       valid_lft forever preferred_lft forever
4. Checking the status

Log in to the master and confirm the status of the minions:

[root@master ~]# kubectl get nodes
NAME            LABELS                                 STATUS
192.168.32.16   kubernetes.io/hostname=192.168.32.16   Ready
192.168.32.17   kubernetes.io/hostname=192.168.32.17   Ready
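That check can also be scripted by parsing the command's output; a sketch against the captured sample above (on the master, replace the inlined sample with the live `kubectl get nodes` output):

```shell
# Sketch: count nodes reporting Ready. The sample output from above is
# inlined here; on a live master pipe `kubectl get nodes` in directly.
sample='NAME            LABELS                                 STATUS
192.168.32.16   kubernetes.io/hostname=192.168.32.16   Ready
192.168.32.17   kubernetes.io/hostname=192.168.32.17   Ready'
ready=$(echo "$sample" | awk 'NR > 1 && $3 == "Ready" {n++} END {print n+0}')
echo "ready nodes: $ready"
```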
The Kubernetes cluster is now configured. The next step is running pods, which will be covered in a follow-up post.