This article covers the alerting and notification rules of Prometheus and Alertmanager. Prometheus's configuration file is prometheus.yml; Alertmanager's configuration file is alertmanager.yml.
Alerting: Prometheus sends detected anomalous events to Alertmanager; this does not mean sending the email notification itself.
Notification: Alertmanager sends out notifications for anomalous events (email, webhook, etc.).
Alert rules
In prometheus.yml, specify how frequently alert rules are evaluated:
# How frequently to evaluate rules.
[ evaluation_interval: <duration> | default = 1m ]
In prometheus.yml, specify the rule files (wildcards are allowed, e.g. rules/*.rules):
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
- "/etc/prometheus/alert.rules"
Rules follow this template:
ALERT <alert name>
  IF <expression>
  [ FOR <duration> ]
  [ LABELS <label set> ]
  [ ANNOTATIONS <label set> ]
Where:
Alert name is the identifier of the alert. It does not need to be unique.
Expression is the condition evaluated to decide whether the alert should fire. It generally uses existing metrics, such as those returned by the /metrics endpoint.
Duration is the period during which the rule must hold. For example, 5s means 5 seconds.
Label set is a set of labels that can be used in the message template.
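As a minimal illustration of this (pre-Prometheus-2.0) rule syntax, the sketch below defines a hypothetical InstanceDown alert; the alert name, threshold, and label values are made up for demonstration only:
ALERT InstanceDown
  IF up == 0
  FOR 5m
  LABELS { severity = "critical" }
  ANNOTATIONS {
    summary = "Instance {{ $labels.instance }} down",
    description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
  }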
In the prometheus-k8s-statefulset.yaml file, create a ruleSelector that marks the alert-rule role; the rule file prometheus-k8s-rules.yaml is then selected through these labels.
ruleSelector:
  matchLabels:
    role: prometheus-rulefiles
    prometheus: k8s
In prometheus-k8s-rules.yaml, the prometheus-rulefiles are provided through a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-k8s-rules
  namespace: monitoring
  labels:
    role: prometheus-rulefiles
    prometheus: k8s
data:
  pod.rules.yaml: |+
    groups:
    - name: noah_pod.rules
      rules:
      - alert: Pod_all_cpu_usage
        expr: (sum by(name)(rate(container_cpu_usage_seconds_total{image!=""}[5m]))*100) > 10
        for: 5m
        labels:
          severity: critical
          service: pods
        annotations:
          description: Container {{ $labels.name }} CPU usage is above 75% (current value is {{ $value }})
          summary: Dev CPU load alert
      - alert: Pod_all_memory_usage
        expr: sort_desc(avg by(name)(irate(container_memory_usage_bytes{name!=""}[5m]))*100) > 1024*10^3*2
        for: 10m
        labels:
          severity: critical
        annotations:
          description: Container {{ $labels.name }} memory usage is above 2G (current value is {{ $value }})
          summary: Dev memory load alert
      - alert: Pod_all_network_receive_usage
        expr: sum by (name)(irate(container_network_receive_bytes_total{container_name="POD"}[1m])) > 1024*1024*50
        for: 10m
        labels:
          severity: critical
        annotations:
          description: Container {{ $labels.name }} network_receive usage is above 50M (current value is {{ $value }})
          summary: network_receive load alert
Once the configuration file is in place, prometheus-operator automatically reloads the configuration.
If you later modify the ConfigMap contents, you only need to apply it again:
kubectl apply -f prometheus-k8s-rules.yaml
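Before applying, you can sanity-check the rule syntax with promtool (Prometheus 2.x); the command below assumes the pod.rules.yaml content from the ConfigMap has been saved to a local file of the same name:
promtool check rules pod.rules.yaml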
Compare the resulting email notifications against the rules (alertmanager.yml still has to be configured before emails can be received).
Notification rules
Configure the route and receivers in alertmanager.yml:
global:
  # ResolveTimeout is the time after which an alert is declared resolved
  # if it has not been updated.
  resolve_timeout: 5m

  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'xxxxx'
  smtp_from: 'xxxxxxx'
  smtp_auth_username: 'xxxxx'
  smtp_auth_password: 'xxxxxx'

  # The API URL to use for Slack notifications.
  slack_api_url: 'https://hooks.slack.com/services/some/api/token'

# The directory from which notification templates are read.
templates:
- '*.tmpl'

# The root route on which each incoming alert enters.
route:
  # The labels by which incoming alerts are grouped together. For example,
  # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
  # be batched into a single group.
  group_by: ['alertname', 'cluster', 'service']

  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This way ensures that you get multiple alerts for the same group that start
  # firing shortly after another are batched together on the first
  # notification.
  group_wait: 30s

  # When the first notification was sent, wait 'group_interval' to send a batch
  # of new alerts that started firing for that group.
  group_interval: 5m

  # If an alert has successfully been sent, wait 'repeat_interval' to
  # resend them.
  #repeat_interval: 1m
  repeat_interval: 15m

  # A default receiver.
  # If an alert isn't caught by a route, send it to default.
  receiver: default

  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.

  # The child route trees.
  routes:
  - match:
      severity: critical
    receiver: email_alert

receivers:
- name: 'default'
  email_configs:
  - to: 'yi.hu@dianrong.com'
    send_resolved: true
- name: 'email_alert'
  email_configs:
  - to: 'yi.hu@dianrong.com'
    send_resolved: true
Terminology
Route
The route attribute defines how alerts are dispatched. It is a tree structure that is traversed depth-first, from left to right, to find the matching routing nodes.
// Match does a depth-first left-to-right search through the route tree
// and returns the matching routing nodes.
func (r *Route) Match(lset model.LabelSet) []*Route {
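The following sketch (hypothetical team label and receiver names, not taken from the configuration above) shows how the depth-first, left-to-right matching plays out: an alert carrying severity=critical and team=frontend is tested against the child routes in order, matches the first one, and goes to frontend_pager; by default matching stops at the first matching child unless that route sets continue: true, and anything matching no child route falls back to the root receiver.
route:
  receiver: default
  routes:
  - match:
      severity: critical
      team: frontend
    receiver: frontend_pager
  - match:
      severity: critical
    receiver: email_alert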
Alert
An Alert is an alert received by Alertmanager. Its type is shown below.
// Alert is a generic representation of an alert in the Prometheus eco-system.
type Alert struct {
	// Label value pairs for purpose of aggregation, matching, and disposition
	// dispatching. This must minimally include an "alertname" label.
	Labels LabelSet `json:"labels"`

	// Extra key/value information which does not define alert identity.
	Annotations LabelSet `json:"annotations"`

	// The known time range for this alert. Both ends are optional.
	StartsAt     time.Time `json:"startsAt,omitempty"`
	EndsAt       time.Time `json:"endsAt,omitempty"`
	GeneratorURL string    `json:"generatorURL"`
}
Only alerts with the same labels (identical keys and values) are considered the same alert. A single rule configured in a Prometheus rules file can therefore produce multiple distinct alerts.
Group
Alertmanager groups alerts according to the group_by configuration. With the rules below, when go_goroutines equals 4, three alerts fire; Alertmanager splits them into two groups and sends notifications to the receivers (see the grouping sketch after the rules).
ALERT test1 IF go_goroutines > 1 LABELS {label1="l1", label2="l2", status="test"}
ALERT test2 IF go_goroutines > 2 LABELS {label1="l2", label2="l2", status="test"}
ALERT test3 IF go_goroutines > 3 LABELS {label1="l2", label2="l1", status="test"}
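The exact split depends on the group_by setting, which the original text does not show. Assuming for illustration group_by: ['label1'], test1 (label1="l1") forms one group while test2 and test3 (both label1="l2") share the other, so the receivers get two notifications instead of three:
route:
  group_by: ['label1']
  receiver: default

# Resulting groups when go_goroutines == 4:
#   group {label1="l1"}: test1
#   group {label1="l2"}: test2, test3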
Main processing flow
- Receive an Alert and decide, based on its labels, which Routes it belongs to (multiple Routes may match; a Route contains multiple Groups; a Group contains multiple Alerts).
- Assign the Alert to a Group; if no matching Group exists, create a new one.
- A new Group waits for the duration specified by group_wait (more Alerts for the same Group may arrive while waiting), decides whether each Alert is resolved based on resolve_timeout, and then sends the notification.
- An existing Group waits for the duration specified by group_interval and checks whether the Alerts are resolved; a notification is sent when the interval since the last notification exceeds repeat_interval or the Group has been updated.
Alertmanager
Alertmanager is a buffer for alerts. It has the following characteristics:
- It can receive alerts through a dedicated endpoint (not specific to Prometheus).
- It can route alerts to receivers such as HipChat, email, or others.
- It is smart enough to determine that a similar notification has already been sent, so when something goes wrong you are not drowned in thousands of emails.
An Alertmanager client (in this case Prometheus) sends a POST request with all the alerts to be processed to /api/v1/alerts. For example:
[ { "labels": { "alertname": "low_connected_users", "severity": "warning" }, "annotations": { "description": "Instance play-app:9000 under lower load", "summary": "play-app:9000 of job playframework-app is under lower load" } }]
Alert workflow
Once these alerts are stored in Alertmanager, they can be in any of the following states:
- Inactive: nothing is happening here.
- Pending: the client has told us that this alert must fire. However, the alert can still be grouped, inhibited/suppressed, or silenced/muted. Once all validations pass, it moves to Firing.
- Firing: the alert is sent to the Notification Pipeline, which contacts all of the alert's receivers. The client then tells us the alert is resolved, so it transitions back to the Inactive state.
Prometheus has a dedicated endpoint that lets us list all alerts and follow the state transitions. Each state shown by Prometheus, and the condition that causes the transition, is listed below (a query sketch follows the list):
- The rule is not met: the alert is not active.
- The rule is met: the alert is now active. Some validations are performed to avoid flooding the receivers with messages.
- The alert is sent to the receivers.
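Besides the /alerts page, Prometheus also exposes a built-in ALERTS time series that can be queried to follow these transitions; for example (alert name taken from the rules earlier in this article):
# alertstate is "pending" while the FOR duration is still counting down and "firing" afterwards;
# inactive alerts simply have no ALERTS series.
ALERTS{alertname="Pod_all_cpu_usage", alertstate="firing"}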
Receiver (receiver)
As the name suggests, a receiver configures where alert notifications are delivered.
General configuration format
# The unique name of the receiver.
name: <string>
# Configurations for several notification integrations.
email_configs:
[ - <email_config>, ... ]
pagerduty_configs:
[ - <pagerduty_config>, ... ]
slack_configs:
[ - <slack_config>, ... ]
opsgenie_configs:
[ - <opsgenie_config>, ... ]
webhook_configs:
[ - <webhook_config>, ... ]
Email receiver (email_config)
# Whether or not to notify about resolved alerts.
[ send_resolved: <boolean> | default = false ]
# The email address to send notifications to.
to: <tmpl_string>
# The sender address.
[ from: <tmpl_string> | default = global.smtp_from ]
# The SMTP host through which emails are sent.
[ smarthost: <string> | default = global.smtp_smarthost ]
# The HTML body of the email notification.
[ html: <tmpl_string> | default = '{{ template "email.default.html" . }}' ]
# Further headers email header key/value pairs. Overrides any headers
# previously set by the notification implementation.
[ headers: { <string>: <tmpl_string>, ... } ]
Slack receiver (slack_config)
# Whether or not to notify about resolved alerts.
[ send_resolved: <boolean> | default = true ]
# The Slack webhook URL.
[ api_url: <string> | default = global.slack_api_url ]
# The channel or user to send notifications to.
channel: <tmpl_string>
# API request data as defined by the Slack webhook API.
[ color: <tmpl_string> | default = '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}' ]
[ username: <tmpl_string> | default = '{{ template "slack.default.username" . }}' ]
[ title: <tmpl_string> | default = '{{ template "slack.default.title" . }}' ]
[ title_link: <tmpl_string> | default = '{{ template "slack.default.titlelink" . }}' ]
[ pretext: <tmpl_string> | default = '{{ template "slack.default.pretext" . }}' ]
[ text: <tmpl_string> | default = '{{ template "slack.default.text" . }}' ]
[ fallback: <tmpl_string> | default = '{{ template "slack.default.fallback" . }}' ]
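A minimal receiver using this format might look as follows; the receiver name and channel are placeholders, and the webhook URL falls back to the global slack_api_url shown earlier:
receivers:
- name: 'slack_alert'
  slack_configs:
  - channel: '#monitoring'
    send_resolved: true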
Webhook receiver (webhook_config)
# Whether or not to notify about resolved alerts.
[ send_resolved: <boolean> | default = true ]
# The endpoint to send HTTP POST requests to.
url: <string>
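A sketch of a webhook receiver, with a hypothetical in-cluster endpoint name:
receivers:
- name: 'ops_webhook'
  webhook_configs:
  - url: 'http://alert-gateway.monitoring.svc:8080/webhook'
    send_resolved: true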
Alertmanager sends HTTP POST requests to the configured endpoint in the following format:
{
  "version": "2",
  "status": "<resolved|firing>",
  "alerts": [
    {
      "labels": <object>,
      "annotations": <object>,
      "startsAt": "<rfc3339>",
      "endsAt": "<rfc3339>"
    },
    ...
  ]
}
Inhibition
Inhibition is a mechanism that, once one alert has fired, suppresses notifications for other alerts caused by it.
For example, if an alert fires to notify that an entire cluster is unreachable, Alertmanager can be configured to ignore all the other alerts triggered by that outage. This prevents notifications for hundreds or thousands of alerts that are unrelated to the actual problem.
The inhibition mechanism is configured through Alertmanager's configuration file.
Inhibition allows muting notifications for some alerts while other alerts are firing. For example, if the same alert (based on the alert name) is already firing at critical severity, we can configure an inhibition to mute any warning-level notifications. The relevant part of the alertmanager.yml file looks like this:
inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  equal: ['low_connected_users']
An inhibit rule mutes alerts matching one set of matchers when alerts matching another set of matchers already exist. The two alerts must share the same values for the labels listed under equal.
# Matchers that have to be fulfilled in the alerts to be muted.
target_match:
  [ <labelname>: <labelvalue>, ... ]
target_match_re:
  [ <labelname>: <regex>, ... ]

# Matchers for which one or more alerts have to exist for the
# inhibition to take effect.
source_match:
  [ <labelname>: <labelvalue>, ... ]
source_match_re:
  [ <labelname>: <regex>, ... ]

# Labels that must have an equal value in the source and target
# alert for the inhibition to take effect.
[ equal: '[' <labelname>, ... ']' ]
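To express the cluster-wide example described above, a rule along these lines could be used (the alert name and labels are assumptions for illustration, not taken from the original configuration): while a critical ClusterUnreachable alert is firing, warning-level alerts carrying the same cluster label are muted.
inhibit_rules:
- source_match:
    alertname: 'ClusterUnreachable'
    severity: 'critical'
  target_match:
    severity: 'warning'
  equal: ['cluster']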
Silences
Silences are a quick way to temporarily mute alerts. They are configured directly through a dedicated page in the Alertmanager web console. This is very useful for avoiding notification spam while you are trying to resolve a serious production issue.
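Besides the web console, silences can also be created from the command line with amtool, which ships with Alertmanager; a rough sketch (the matcher, duration, and URL below are examples only):
amtool silence add alertname="Pod_all_cpu_usage" \
  --comment="investigating CPU spike" --duration=2h \
  --alertmanager.url=http://localhost:9093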
Alertmanager reference material
Inhibition rule (inhibit_rule) reference material
https://www.kancloud.cn/huyipow/prometheus/527563