Most of the posts and tutorials you find online use Prometheus to monitor built-in Kubernetes resources such as the API server, namespaces, pods, and nodes. But what about custom metrics exposed by your own business pods? For example, suppose a business pod exposes an endpoint at /xxx/metrics — how do you scrape it with Prometheus?

This is where the kubernetes-pods job comes in: we add annotations to the business deployment so that the scrape configuration can discover the pod. For example:

prometheus-configmap-pod.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: ns-monitor
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
```
In the YAML above, the relabel rule with source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] means: if a business pod's annotations define prometheus.io/path, Prometheus will use that path (instead of the default /metrics) when scraping the pod's custom metrics.
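The third relabel rule works the same way: it joins __address__ and the prometheus.io/port annotation value with ";", matches the regex, and rewrites the target address to use the annotated port. The effect can be illustrated with a small Python sketch (the pod IP 10.244.1.5 below is a made-up example; Prometheus itself does this with RE2 full-string matching):

```python
import re

def relabel_address(address: str, port_annotation: str) -> str:
    """Mimic the __address__ relabel rule from the config above."""
    # Prometheus joins the source_labels values with ';' before matching.
    joined = f"{address};{port_annotation}"
    # The rule's regex: host, an optional existing port, then the annotated port.
    m = re.fullmatch(r"([^:]+)(?::\d+)?;(\d+)", joined)
    if m is None:
        return address  # no match: the label is left unchanged
    # replacement: $1:$2 -> host plus the port from the annotation
    return f"{m.group(1)}:{m.group(2)}"

# Whatever port the discovered pod address carries, it is rewritten
# to the port declared in the prometheus.io/port annotation:
print(relabel_address("10.244.1.5:8080", "32456"))  # → 10.244.1.5:32456
print(relabel_address("10.244.1.5", "32456"))       # → 10.244.1.5:32456
```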
For example, a business deployment might be defined as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gw
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      name: gw
  template:
    metadata:
      labels:
        name: gw
      annotations:
        prometheus.io/path: /xxx/metrics
        prometheus.io/port: "32456"
        prometheus.io/scrape: "true"
    spec:
      imagePullSecrets:
        - name: dockersecret
      containers:
        - name: gw
          ......
```
Once the Prometheus server loads this prometheus.yml, it will scrape monitoring data from each business pod at &lt;pod IP&gt;:32456/xxx/metrics.
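On the pod side, the /xxx/metrics endpoint just needs to return plain text in the Prometheus exposition format. A minimal sketch using only the Python standard library (the metric name request_total is a hypothetical example; a real service would more likely use the official prometheus_client library):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical counter value; a real service would track actual requests.
REQUEST_TOTAL = 42

def render_metrics() -> str:
    """Render metrics in the Prometheus text exposition format."""
    return (
        "# HELP request_total Total requests handled.\n"
        "# TYPE request_total counter\n"
        f"request_total {REQUEST_TOTAL}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve only the path declared in the prometheus.io/path annotation.
        if self.path != "/xxx/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Inside the pod, listen on the port declared in prometheus.io/port:
# HTTPServer(("", 32456), MetricsHandler).serve_forever()
```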