Kubernetes Series: Prometheus Operator on Kubernetes
An Operator is an application-specific controller developed by CoreOS to extend the Kubernetes API; it is used to create, configure, and manage complex stateful applications such as MySQL, caches, and monitoring systems. CoreOS provides several official Operator implementations, one of which is the Prometheus Operator.
The diagram below shows the Prometheus Operator architecture.
As the core controller, the Operator creates four resource objects: Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule, and it continuously watches and maintains the state of these four objects. The Prometheus resource object stands for the Prometheus Server that does the monitoring, while a ServiceMonitor is an abstraction over the exporters we use (as introduced in an earlier article, an exporter is a tool that exposes a service's metrics); Prometheus pulls its data through the metrics endpoints that the ServiceMonitors describe. With this model we no longer need to create or modify scrape configuration separately for every service: the cluster is monitored by managing the Operator's resources directly. Note also that a ServiceMonitor can match Services inside the cluster by label, and a Prometheus can likewise match multiple ServiceMonitors by label.
In short, the Operator is the core piece: acting as a controller, it creates the Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule CRD resource objects and then keeps watching and reconciling the state of these four CRDs.
Introduction to CRDs
CRD is short for CustomResourceDefinition. In Kubernetes everything can be treated as a resource, and since Kubernetes 1.7 CRDs have provided a way to extend the Kubernetes API with custom resources for second-stage development. When we create a new CRD, the Kubernetes API server creates a new RESTful resource path for each version we specify, and we can then use that API path to create resources of our own custom types. A CRD can be namespace-scoped or cluster-scoped, as set by the CRD's scope field; just like built-in objects, deleting a namespace deletes all custom objects in that namespace.
Simply put, a CRD is an extension of the Kubernetes API. Every resource in Kubernetes is a collection of API objects, defined in the same way that a YAML file's spec defines a resource object, and custom resources can be used with kubectl just like the built-in Kubernetes resources.
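As a quick illustration, a minimal CRD manifest might look like the sketch below. The group and kind (stable.example.com / CronTab) follow the well-known example from the Kubernetes documentation and have nothing to do with the Prometheus Operator; on older clusters the API version may need to be apiextensions.k8s.io/v1beta1 instead.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced                    # or Cluster, as set by the scope field mentioned above
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string

Once a CRD like this is applied, kubectl get crontabs works just like a query for any built-in resource.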
With this in place, collecting monitoring data in the cluster becomes a matter of Kubernetes watching resource objects directly: both Service and ServiceMonitor are Kubernetes resource objects, a ServiceMonitor matches a class of Services via a labelSelector, and a Prometheus in turn matches multiple ServiceMonitors via a labelSelector. Prometheus and Alertmanager both pick up changes to monitoring and alerting configuration automatically, so no manual reload is required.
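To make the label-matching chain concrete, here is a minimal sketch (the label values app: example-app and team: frontend are illustrative only; the field names follow the monitoring.coreos.com/v1 CRDs):

# A Service exposing metrics, labeled so that a ServiceMonitor can select it
apiVersion: v1
kind: Service
metadata:
  name: example-app
  labels:
    app: example-app              # matched by the ServiceMonitor's selector below
spec:
  selector:
    app: example-app
  ports:
    - name: web
      port: 8080
---
# The ServiceMonitor selects Services by label and names the port to scrape
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend                # matched by the Prometheus serviceMonitorSelector below
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: web
---
# The Prometheus resource in turn selects ServiceMonitors by label
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  serviceMonitorSelector:
    matchLabels:
      team: frontend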
The Operator natively supports Prometheus and can monitor the cluster through service discovery, and the installation is generic: the YAML files shipped with the Operator project can basically be used as-is, with perhaps only a few places needing changes.
# Official download (if the image versions in the official manifests are not available to you, find matching image versions yourself)
wget -P /root/ https://github.com/coreos/kube-prometheus/archive/master.zip
unzip master.zip
cd /root/kube-prometheus-master/manifests
prometheus-serviceMonitorKubelet.yaml (this file is the ServiceMonitor used to collect the kubelet's metrics)
No modification required.
cat prometheus-serviceMonitorKubelet.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: kubelet
  name: kubelet
  namespace: monitoring
spec:
  endpoints:
Once the manifests have been reviewed and adjusted as needed, we can create everything directly:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl apply -f ./
namespace/monitoring unchanged
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator unchanged
deployment.apps/prometheus-operator unchanged
service/prometheus-operator unchanged
serviceaccount/prometheus-operator unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator created
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main unchanged
service/alertmanager-main unchanged
serviceaccount/alertmanager-main unchanged
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources unchanged
configmap/grafana-dashboard-apiserver unchanged
configmap/grafana-dashboard-controller-manager unchanged
configmap/grafana-dashboard-k8s-resources-cluster unchanged
configmap/grafana-dashboard-k8s-resources-namespace unchanged
configmap/grafana-dashboard-k8s-resources-node unchanged
configmap/grafana-dashboard-k8s-resources-pod unchanged
configmap/grafana-dashboard-k8s-resources-workload unchanged
configmap/grafana-dashboard-k8s-resources-workloads-namespace unchanged
configmap/grafana-dashboard-kubelet unchanged
configmap/grafana-dashboard-node-cluster-rsrc-use unchanged
configmap/grafana-dashboard-node-rsrc-use unchanged
configmap/grafana-dashboard-nodes unchanged
configmap/grafana-dashboard-persistentvolumesusage unchanged
configmap/grafana-dashboard-pods unchanged
configmap/grafana-dashboard-prometheus-remote-write unchanged
configmap/grafana-dashboard-prometheus unchanged
configmap/grafana-dashboard-proxy unchanged
configmap/grafana-dashboard-scheduler unchanged
configmap/grafana-dashboard-statefulset unchanged
configmap/grafana-dashboards unchanged
deployment.apps/grafana configured
service/grafana unchanged
serviceaccount/grafana unchanged
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
deployment.apps/kube-state-metrics unchanged
role.rbac.authorization.k8s.io/kube-state-metrics unchanged
rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
service/kube-state-metrics unchanged
serviceaccount/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
daemonset.apps/node-exporter configured
service/node-exporter unchanged
serviceaccount/node-exporter unchanged
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator unchanged
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources unchanged
configmap/adapter-config unchanged
deployment.apps/prometheus-adapter configured
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader unchanged
service/prometheus-adapter unchanged
serviceaccount/prometheus-adapter unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s unchanged
serviceaccount/prometheus-k8s unchanged
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
After the deployment succeeds, we can check the CRDs; the YAML files create them for us automatically. Only once these CRDs exist do our ServiceMonitors take effect.
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get crd
NAME CREATED AT
alertmanagers.monitoring.coreos.com 2019-10-18T08:32:57Z
podmonitors.monitoring.coreos.com 2019-10-18T08:32:58Z
prometheuses.monitoring.coreos.com 2019-10-18T08:32:58Z
prometheusrules.monitoring.coreos.com 2019-10-18T08:32:58Z
servicemonitors.monitoring.coreos.com 2019-10-18T08:32:59Z
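Because these CRDs are now registered, the custom resources can be queried with kubectl exactly like built-in resources, for example (resource names taken from the CRD list above; the output depends on your cluster):

kubectl get servicemonitors -n monitoring
kubectl get prometheuses -n monitoring
kubectl get alertmanagers -n monitoring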
The remaining resources are all deployed under a single namespace; the list below shows the Pods in monitoring created by the Operator:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 11m
alertmanager-main-1 2/2 Running 0 11m
alertmanager-main-2 2/2 Running 0 11m
grafana-55488b566f-g2sm9 1/1 Running 0 11m
kube-state-metrics-ff5cb7949-wq7pb 3/3 Running 0 11m
node-exporter-6wb5v 2/2 Running 0 11m
node-exporter-785rf 2/2 Running 0 11m
node-exporter-7kvkp 2/2 Running 0 11m
node-exporter-85bnh 2/2 Running 0 11m
node-exporter-9vxwf 2/2 Running 0 11m
node-exporter-bvf4r 2/2 Running 0 11m
node-exporter-j6d2d 2/2 Running 0 11m
prometheus-adapter-668748ddbd-d8k7f 1/1 Running 0 11m
prometheus-k8s-0 3/3 Running 1 11m
prometheus-k8s-1 3/3 Running 1 11m
prometheus-operator-55b978b89-qpzfk 1/1 Running 0 11m
prometheus and alertmanager are deployed as StatefulSets, while the other Pods are created by Deployments:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get deployments.apps -n monitoring
NAME READY UP-TO-DATE AVAILABLE AGE
grafana 1/1 1 1 12m
kube-state-metrics 1/1 1 1 12m
prometheus-adapter 1/1 1 1 12m
prometheus-operator 1/1 1 1 12m
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get statefulsets.apps -n monitoring
NAME READY AGE
alertmanager-main 3/3 11m
prometheus-k8s 2/2 11m
# prometheus-operator is the core component here: it is the controller that manages our prometheus and alertmanager instances
Even with everything created, we cannot access prometheus directly yet:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |egrep "prometheus|grafana|alertmanage"
alertmanager-main ClusterIP 10.96.226.38 <none> 9093/TCP 3m55s
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 3m10s
grafana ClusterIP 10.97.175.234 <none> 3000/TCP 3m53s
prometheus-adapter ClusterIP 10.96.43.155 <none> 443/TCP 3m53s
prometheus-k8s ClusterIP 10.105.75.186 <none> 9090/TCP 3m52s
prometheus-operated ClusterIP None <none> 9090/TCP 3m
prometheus-operator ClusterIP None <none> 8080/TCP 3m55s
Because the Services in the default YAML files use type ClusterIP, they cannot be reached from outside the cluster. We could proxy them with an Ingress, or use NodePort for temporary access; here I simply modify the Services to use NodePort.
# I use kubectl edit to make the change here; alternatively, modify the yaml files and apply them
kubectl edit svc -n monitoring prometheus-k8s
# note: the svc to modify is prometheus-k8s, the one that has a ClusterIP
kubectl edit svc -n monitoring grafana
kubectl edit svc -n monitoring alertmanager-main
# all three services need to be modified; don't change the wrong ones - modify the ones that have a ClusterIP
...
  type: NodePort    # change this line to NodePort
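As an alternative to interactive editing, the same change can be applied non-interactively with kubectl patch; this is just a sketch of the equivalent operation:

kubectl -n monitoring patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'
kubectl -n monitoring patch svc grafana -p '{"spec": {"type": "NodePort"}}'
kubectl -n monitoring patch svc alertmanager-main -p '{"spec": {"type": "NodePort"}}'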
For prometheus-k8s, grafana, and alertmanager-main, only the type: ClusterIP line needs to change. Once the changes are made, checking the Services again shows that each of them now has a node port, and they can be reached on any cluster node:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |egrep "prometheus|grafana|alertmanage"
alertmanager-main NodePort 10.96.226.38 <none> 9093:32477/TCP 13m
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 12m
grafana NodePort 10.97.175.234 <none> 3000:32474/TCP 13m
prometheus-adapter ClusterIP 10.96.43.155 <none> 443/TCP 13m
prometheus-k8s NodePort 10.105.75.186 <none> 9090:32489/TCP 13m
prometheus-operated ClusterIP None <none> 9090/TCP 12m
prometheus-operator ClusterIP None <none> 8080/TCP 13m
Next, let's look at the prometheus web UI:
[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |grep prometheus-k8s
prometheus-k8s NodePort 10.105.75.186 <none> 9090:32489/TCP 19m
[root@HUOBAN-K8S-MASTER01 manifests]# hostname -i
172.16.17.191
We access the cluster at 172.16.17.191:32489. On the targets page, kube-controller-manager and kube-scheduler have no monitored targets while everything else does; this comes down to how they are defined in the official YAML files. Explanation of the configuration file:
apiVersion: monitoring.coreos.com/v1    # one of the CRDs shown by kubectl get crd; do not modify
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: kube-scheduler
  name: kube-scheduler                  # the name we define
  namespace: monitoring
spec:
  endpoints: