InfluxDB is an open-source time series database with no external dependencies. It is well suited to recording metrics and events and running analytics over them.
cAdvisor is an open-source data collection tool from Google. By default, however, it only displays real-time data and keeps no history. To store and display historical data and build custom dashboards, cAdvisor can be integrated with InfluxDB and Grafana.
Grafana is an open-source metrics analytics and visualization suite. It is most often used to visualize time series data for infrastructure and application analytics, but it is also widely used in other domains, including industrial sensors, home automation, weather, and process control.
Grafana supports many different data sources. Each data source has a dedicated query editor, customized to expose the features and capabilities of that particular source.
The officially supported data sources are Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, and KairosDB.
Each data source has its own query language and capabilities. You can combine data from multiple data sources onto a single dashboard, but each panel is bound to one specific data source belonging to one specific organization.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.confgit
sysctl -pgithub
yum -y install yum-utils device-mapper-persistent-data lvm2web
curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo算法
yum -y install docker-ce
systemctl start docker
systemctl enable docker
vim /etc/docker/daemon.json
cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"]
}
systemctl daemon-reload
systemctl restart docker
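To confirm the mirror configuration took effect, docker info lists the active registry mirrors (the output layout varies slightly across Docker versions):

docker info | grep -A 1 "Registry Mirrors"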
docker pull tutum/influxdb
docker network create monitor
docker network ls
docker run -d --name influxdb --net monitor -p 8083:8083 -p 8086:8086 tutum/influxdb
docker ps -a
http://192.168.200.70:8083
This opens the InfluxDB management UI, as shown below.
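cAdvisor (set up next) will write its metrics into a database named cadvisor. Depending on the image, that database may not exist yet; if it is missing, it can be created through InfluxDB's HTTP API (a sketch, assuming the default unauthenticated setup on port 8086):

curl -i -XPOST http://192.168.200.70:8086/query --data-urlencode "q=CREATE DATABASE cadvisor"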
docker pull google/cadvisor
docker images
docker run -d --name=cadvisor --net monitor -p 8081:8080 \
  --mount type=bind,src=/,dst=/rootfs,ro \
  --mount type=bind,src=/var/run,dst=/var/run \
  --mount type=bind,src=/sys,dst=/sys,ro \
  --mount type=bind,src=/var/lib/docker/,dst=/var/lib/docker,ro \
  google/cadvisor \
  -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
docker ps -a
http://192.168.200.70:8081
This opens the cAdvisor management UI, as shown below.
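To verify that cAdvisor is actually writing metrics into InfluxDB, query the cadvisor database for its measurements (a quick sanity check; an empty result means the storage driver is not connecting):

curl -G http://192.168.200.70:8086/query --data-urlencode "db=cadvisor" --data-urlencode "q=SHOW MEASUREMENTS"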
docker pull grafana/grafana
docker images
docker run -d --name grafana --net monitor -p 3000:3000 grafana/grafana
docker ps -a
http://192.168.200.70:3000
Default user: admin / Default password: admin
This opens the Grafana management UI, as shown below.
User: grafana
Password: grafana
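Instead of clicking through the UI, the InfluxDB data source can also be registered via Grafana's HTTP API. The sketch below assumes the admin:admin login from above; the data source name is arbitrary, and the user/password fields mirror the credentials above (they can be dropped if InfluxDB runs without authentication):

curl -s -u admin:admin -X POST http://192.168.200.70:3000/api/datasources \
  -H "Content-Type: application/json" \
  -d '{"name":"influxdb-cadvisor","type":"influxdb","url":"http://influxdb:8086","access":"proxy","database":"cadvisor","user":"grafana","password":"grafana"}'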
Microservices are a collection of programs that can each run, be deployed, and serve requests independently.
Each of these independent programs can run on its own to provide one specific service, or they can be combined into a distributed cluster by calling the APIs the others expose.
The container monitoring system we just installed is a case in point: it is built from the InfluxDB + cAdvisor + Grafana combination. All three services can be deployed and run independently, and each exposes its own web UI. They can be split apart and recombined with other microservice software, or chained together into a cluster through their respective APIs.
In any microservice architecture, service discovery is a module that cannot be ignored. Consider the following diagram:
It boils down to one sentence: once there are many services, configuration becomes painful and problems multiply.
Consul is a service discovery and configuration sharing tool that supports multiple data centers and distributed high availability. It was developed in Go by HashiCorp and is open-sourced under the Mozilla Public License 2.0. Consul supports health checks and allows its API to be called over HTTP and DNS, including storing key/value pairs.
All things considered, Consul is a rising star in service registration and configuration management, well worth watching and studying.
Link: https://pan.baidu.com/s/1E7dTmKvbMRtGZ95OtuF2fw
Extraction code: z8ly
Consul download: https://www.consul.io/downloads.html
Hostname | IP | Purpose |
---|---|---|
registrator-server | 192.168.200.70 | Consul registry server |
tar xf consul_1.2.1_linux_amd64.tar.gz
mv consul /usr/bin/
ll /usr/bin/consul
chmod +x /usr/bin/consul
consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.200.70 -client=0.0.0.0 -node=server01 &>/var/log/consul.log &
netstat -antup | grep consul
tcp6 0 0 :::8500 :::* LISTEN 18866/consul # this is the externally accessible HTTP API/UI port
192.168.200.70:8500
consul members
consul info | grep leader
consul catalog services
curl -X PUT -d '{"id":"jetty","name":"service_name","address":"192.168.200.70","port":8080,"tags":["test"],"checks":[{"http":"http://192.168.200.70:8080/","interval":"5s"}]}' http://192.168.200.70:8500/v1/agent/service/register
curl 192.168.200.70:8500/v1/status/peers
curl 192.168.200.70:8500/v1/status/leader
curl 192.168.200.70:8500/v1/catalog/services
curl 192.168.200.70:8500/v1/catalog/service/nginx
curl 192.168.200.70:8500/v1/catalog/nodes
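Once you are done experimenting, the test registration can be removed through the agent's deregister endpoint (jetty is the service ID from the registration payload above):

curl -X PUT http://192.168.200.70:8500/v1/agent/service/deregister/jetty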
https://github.com/hashicorp/consul-template
Hostname | IP | Purpose |
---|---|---|
registrator-server | 192.168.200.70 | Consul registry server |
nginx-LB | 192.168.200.86 | Nginx reverse proxy server |
docker-client | 192.168.200.87 | Nginx web node server |
ls
unzip consul-template_0.19.3_linux_amd64.zip
mv consul-template /usr/bin/
which consul-template
yum -y install gcc gcc-c++ make pcre pcre-devel zlib zlib-devel openssl openssl-devel
tar xf nginx-1.10.2.tar.gz -C /usr/src/
cd /usr/src/nginx-1.10.2/
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module && make && make install
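A quick check that the build succeeded and the expected modules were compiled in:

/usr/local/nginx/sbin/nginx -V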
mkdir -p /consul-tml
cd /consul-tml/
vim nginx.ctmpl
cat nginx.ctmpl
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream http_backend {
        ip_hash;
        {{ range service "nginx" }}          # fetch the service named "nginx"
        server {{ .Address }}:{{ .Port }};   # emit one server line per registered IP and port
        {{ end }}
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://http_backend;
        }
    }
}
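Before wiring the template to a live nginx reload, it can be rendered once to stdout to verify the template syntax and the current Consul query result (consul-template's -dry and -once flags; nothing is written to disk):

consul-template -consul-addr 192.168.200.70:8500 -template /consul-tml/nginx.ctmpl -dry -once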
nohup consul-template -consul-addr 192.168.200.70:8500 -template /consul-tml/nginx.ctmpl:/usr/local/nginx/conf/nginx.conf:"/usr/local/nginx/sbin/nginx -s reload" >/consul-tml/consul-template.log 2>&1 &
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream http_backend {    # no container nodes have registered yet, so the upstream is empty
        ip_hash;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://http_backend;
        }
    }
}
netstat -antup | grep nginx    # nginx is not running: the config has no web nodes in the upstream
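Running nginx's config test makes the failure explicit, since nginx rejects an upstream block that contains no servers:

/usr/local/nginx/sbin/nginx -t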
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
docker pull nginx
mkdir -p /www/html
echo "
hostname -I
sl.yunjisuan.com" >> /www/html/index.htmldocker run -dit --name nginxWeb01 -p 80:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
curl localhost
docker pull gliderlabs/registrator
docker run -d --name=registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator:latest -ip=192.168.200.87 consul://192.168.200.70:8500
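Registrator should pick up the already-running nginxWeb01 container immediately; this can be confirmed against the Consul catalog from any host:

curl -s 192.168.200.70:8500/v1/catalog/service/nginx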
/usr/local/nginx/sbin/nginx
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream http_backend {
        ip_hash;
        server 192.168.200.87:80;    # the registered web container's address now appears
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://http_backend;
        }
    }
}
netstat -antup | grep nginx    # nginx is now running as well
docker run -dit --name nginxWeb02 -p 81:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
docker run -dit --name nginxWeb03 -p 82:80 --mount type=bind,src=/www/html,dst=/usr/share/nginx/html nginx
docker ps -a
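All three web containers should now answer locally; a quick sanity check from docker-client:

for port in 80 81 82; do curl -s localhost:$port; done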
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream http_backend {
        ip_hash;
        server 192.168.200.87:80;
        server 192.168.200.87:81;
        server 192.168.200.87:82;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://http_backend;
        }
    }
}
netstat -antup | grep nginx
docker stop nginxWeb02
docker stop nginxWeb03
docker ps
cat /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream http_backend {
        ip_hash;
        server 192.168.200.87:80;    # only the still-running nginxWeb01 remains
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://http_backend;
        }
    }
}
netstat -antup | grep nginx
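As a final check of the dynamic behavior, restart the stopped containers on docker-client and re-read the rendered config on nginx-LB; the two upstream entries should reappear within a few seconds:

docker start nginxWeb02 nginxWeb03
cat /usr/local/nginx/conf/nginx.conf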