Installing Prometheus on Ubuntu is quite simple:
```bash
apt update
apt install prometheus
systemctl enable prometheus
systemctl enable prometheus-node-exporter
```
The prometheus and prometheus-node-exporter packages installed via apt ship with a basic configuration that works without modification.
Make sure both services are running:
```bash
systemctl status prometheus
systemctl status prometheus-node-exporter
```
To also monitor MongoDB with it, install prometheus-mongodb-exporter:
```bash
apt install prometheus-mongodb-exporter
systemctl enable prometheus-mongodb-exporter
```
In addition, because MongoDB has authentication enabled, pay attention to the permissions of the MongoDB user the exporter connects with (see the mongodb_exporter project on GitHub for details).
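For example, a dedicated monitoring user can be created roughly like this (a sketch following the roles suggested in the mongodb_exporter README; the password is a placeholder):

```javascript
// Run in the mongo shell as an administrator.
db.getSiblingDB("admin").createUser({
  user: "mongodb_exporter",
  pwd: "xxxxx",  // placeholder password
  roles: [
    { role: "clusterMonitor", db: "admin" },
    { role: "read", db: "local" }
  ]
})
```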
Then modify `ARGS` in `/etc/default/prometheus-mongodb-exporter` as follows:
```bash
# ARGS='-mongodb.uri="mongodb://localhost:27017"'
ARGS='-mongodb.uri="mongodb://xxx:xxxxx@localhost:27017"'
```
The MongoDB URI format is:
```
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[database][?options]]
```
If the username or password contains any of the four characters @ : / %, those characters must be percent-encoded.
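For example, a quick way to percent-encode a password (the password here is made up):

```bash
# '@' -> %40, ':' -> %3A, '/' -> %2F, '%' -> %25
python3 -c 'from urllib.parse import quote; print(quote("p@ss:w/rd%", safe=""))'
# p%40ss%3Aw%2Frd%25
```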
If the user was created incorrectly and needs to be removed, use `db.getSiblingDB("admin").dropUser("mongodb_exporter")`.
Then restart the service:
```bash
systemctl restart prometheus-mongodb-exporter
```
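To check that the exporter can actually reach MongoDB with the new URI, look at its metrics endpoint (assuming the package default listen address of :9216):

```bash
# mongodb_* metrics should appear here if the credentials in ARGS are correct.
curl -s http://localhost:9216/metrics | head
```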
Install Grafana:
```bash
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
apt update
apt install grafana
```
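The package installs the service as grafana-server; enable and start it like the other services above:

```bash
systemctl enable grafana-server
systemctl start grafana-server
```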
Configuration:
The configuration file is `/etc/grafana/grafana.ini`; pay attention to the following settings:
```ini
...
[server]
domain = www.xxxx.com
enforce_domain = true
root_url = %(protocol)s://%(domain)s/grafana
...
[security]
admin_password = xxxx
```
Then visit www.xxxx.com/grafana and log in with the username admin and the admin_password set above.
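To confirm grafana-server itself is up before debugging the domain setup (port 3000 is the stock grafana.ini default; the public /grafana URL assumes your web server proxies that path to it):

```bash
curl -I http://localhost:3000/login
```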
Then, following [percona/grafana-dashboards](https://github.com/percona/grafana-dashboards), configure a data source that uses Prometheus and import the dashboards. Importing these is usually enough:
(Note: pmm-singlestat-panel in the dashboard JSON may need to be replaced with singlestat.)
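A quick way to do that replacement on the downloaded dashboard JSON files before importing:

```bash
# Swap the PMM-specific panel type for the stock singlestat panel.
sed -i 's/pmm-singlestat-panel/singlestat/g' *.json
```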
Once the dashboards above are set up, you should stop logging in as admin. Instead, "invite" a user in the settings, enter your own email address, set a password via the link sent to that address, and from then on log in to Grafana with your own email.
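Note that the invitation link is delivered by email, so this assumes Grafana's SMTP settings are configured; a minimal sketch in `/etc/grafana/grafana.ini` (server and credentials are placeholders):

```ini
[smtp]
enabled = true
host = smtp.example.com:587
user = grafana@example.com
password = xxxx
from_address = grafana@example.com
```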
Note: To monitor two servers with Prometheus, the configuration file `/etc/prometheus/prometheus.yml` looks like this:
```yaml
# Sample config for Prometheus.

global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'example'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: "web-server"

    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['localhost:9100']

  - job_name: "worker-node1"
    static_configs:
      - targets: ['192.168.0.5:9100']
```
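After editing, the configuration can be validated before restarting (promtool ships alongside Prometheus; skip this step if it is not available on your system):

```bash
# Validate the config, then restart so the new jobs are picked up.
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus
```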
This configuration works as-is; just install and enable prometheus-node-exporter on the other machine (192.168.0.5).
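From the Prometheus host you can quickly check that the second machine's node exporter is reachable:

```bash
# Should print node_* metrics; the target should also show as UP under
# Status -> Targets in the Prometheus web UI (http://localhost:9090).
curl -s http://192.168.0.5:9100/metrics | head
```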
However, if you later rename only a job_name (without changing the IP), for example changing web-server to node, the singlestat panels in Grafana will stop displaying correctly and show "Only queries that return single...". This is because a singlestat panel can only display a single result, while the query now returns two series. The fix is to delete the old data series:
First stop the prometheus service and run Prometheus manually with the --web.enable-admin-api flag.
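A rough sketch (the config path matches the package default above; the data directory shown is, as far as I can tell, the Debian/Ubuntu package default, so adjust it if yours differs):

```bash
systemctl stop prometheus
# Point --storage.tsdb.path at your real data directory, otherwise the delete
# below will run against an empty, freshly created store.
prometheus --config.file=/etc/prometheus/prometheus.yml \
           --storage.tsdb.path=/var/lib/prometheus/metrics2 \
           --web.enable-admin-api
```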
Then delete the old series like this:
```bash
curl -X POST -g 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={instance="localhost:9100"}'
```
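Deleting only writes tombstones; to reclaim the space immediately, the same admin API also exposes a clean_tombstones endpoint:

```bash
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'
```

Afterwards, stop the manually started instance and bring the normal service back up with `systemctl start prometheus` (without the admin API enabled).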
Reference link: Prometheus: Delete Time Series Metrics
Note 2: In the situation above, the query might look like this (you can see it in Grafana):