ELK - nginx Log Analysis and Charting

1. Introduction

First, a picture of the overall result:

(screenshot: Kibana dashboard)

The dashboard above was built in Kibana from nginx log data analyzed by ELK. With the logs parsed this way, the numbers you want to know are visible at a glance. The rest of this post is a record of the implementation.

 

2. Implementation

The previous post, the ELK deployment document, already walked through configuring ELK + filebeat to collect nginx logs, so installing ELK is not the focus here. The content below concentrates on writing the logstash configuration file and configuring the Kibana web interface.

The hosts used are the same as in the previous post.

2.1 Writing the nginx log format

Before writing the logstash config, there must be a standard input/output format, and the universal choice is JSON.

First, consider how to get logs in JSON format. The easiest way is to change the nginx log format itself, so start there. If the logs you receive cannot be switched to JSON, they can be matched with regular expressions instead.

Add the following log format to the nginx configuration file:

http {
…
    log_format main_json '{"domain":"$server_name",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"time_local":"$time_iso8601",'
    '"request":"$request",'
    '"request_body":"$request_body",'
    '"status":$status,'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_referer":"$http_referer",'
    '"upstream_response_time":"$upstream_response_time",'
    '"request_time":"$request_time",'
    '"http_user_agent":"$http_user_agent",'
    '"upstream_addr":"$upstream_addr",'
    '"upstream_status":"$upstream_status"}';


….
}

This nginx log format is named main_json, and later configuration can reference it by that name. Besides the built-in nginx variables, custom variables can also be added through configuration, for example to capture the user's real IP.

So, write a configuration file that defines a custom variable:

[root@192.168.118.16 ~]#vim /etc/nginx/location.conf

#set $real_ip $remote_addr;
if ( $http_x_forwarded_for ~ "^(\d+\.\d+\.\d+\.\d+)" ) {
        set $real_ip $1;
}

This configuration file exists only to capture the user's real IP in the variable real_ip. It must be referenced from nginx.conf; adding the variable to the log format from before gives the complete format:

    log_format main_json '{"domain":"$server_name",'
    '"real_ip":"$real_ip",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"time_local":"$time_iso8601",'
    '"request":"$request",'
    '"request_body":"$request_body",'
    '"status":$status,'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_referer":"$http_referer",'
    '"upstream_response_time":"$upstream_response_time",'
    '"request_time":"$request_time",'
    '"http_user_agent":"$http_user_agent",'
    '"upstream_addr":"$upstream_addr",'
    '"upstream_status":"$upstream_status"}';

 

Comment out this line:

#access_log  /var/log/nginx/access.log  main;

Next, write an nginx server block listening on port 9527 for testing:

[root@192.168.118.16 ~]#vim /etc/nginx/conf.d/server_9527.conf
server {
    listen 9527;
    server_name localhost;
    include location.conf;
    location / {
        root /www/9527/;
        index index.html;
        access_log /www/log/access.log main_json;
        error_log /www/log/error.log;
    }
    location /shop {
        root /www/9527;
        access_log /www/log/shop_access.log main_json;
        error_log /www/log/shop_error.log;
    }
}

[root@192.168.118.16 ~]#mkdir -p /www/{9527,log}
[root@192.168.118.16 ~]#cd /www/9527/
[root@192.168.118.16 /www/9527]#vim index.html
hello, 9527
[root@192.168.118.16 /www/9527]#mkdir -pv /www/9527/shop
[root@192.168.118.16 /www/9527]#vim /www/9527/shop/index.html
出售9527

[root@192.168.118.16 /www/9527]#nginx -t
[root@192.168.118.16 /www/9527]#nginx -s reload

The nginx configuration is done; reload it and test:

[root@192.168.118.16 ~]#curl http://192.168.118.16:9527/index.html
hello, 9527
[root@192.168.118.16 ~]#curl http://192.168.118.16:9527/shop/index.html
出售9527

Both pages respond correctly; check the logs:

[root@192.168.118.16 ~]#ll -tsh /www/log/
total 8.0K
4.0K -rw-r--r--  1 root root 346 Sep 14 14:35 shop_access.log
4.0K -rw-r--r--. 1 root root 341 Sep 14 14:35 access.log
   0 -rw-r--r--. 1 root root   0 Sep 14 14:35 error.log
   0 -rw-r--r--  1 root root   0 Sep 14 14:34 shop_error.log

The log files have been created; inspect the log format:

[root@192.168.118.16 ~]#cat /www/log/access.log 
{"domain":"localhost","real_ip":"","http_x_forwarded_for":"-","time_local":"2019-09-14T14:35:11+08:00","request":"GET /index.html HTTP/1.1","request_body":"-","status":200,"body_bytes_sent":"12","http_referer":"-","upstream_response_time":"-","request_time":"0.000","http_user_agent":"curl/7.29.0","upstream_addr":"-","upstream_status":"-"}

The JSON format we defined is in use, and the nginx log format configuration is complete. Next, filebeat ships the nginx logs to logstash.

 

2.2 Shipping nginx logs with filebeat

Building on the previous post, modify the filebeat configuration directly:

[root@192.168.118.16 ~]#vim /etc/filebeat/modules.d/nginx.yml 
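
A plausible version of modules.d/nginx.yml, assuming the filebeat nginx module and the log paths created above:

- module: nginx
  access:
    enabled: true
    var.paths: ["/www/log/access.log", "/www/log/shop_access.log"]
  error:
    enabled: true
    var.paths: ["/www/log/error.log", "/www/log/shop_error.log"]

The fileset fields seen later in the logstash output ("module" => "nginx", "name" => "access"/"error") are added by this module.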

Restart the filebeat service:

[root@192.168.118.16 ~]#systemctl restart filebeat

 

2.3 Writing the logstash configuration


With the steps above, filebeat is already shipping the nginx logs over. Next is how logstash should receive that data, again built up step by step.

First, print the log data to the screen to make sure it is correct.

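A minimal nginx.conf for this stage needs only a beats input and a rubydebug stdout output (the same input and output blocks as the final configuration shown later):

input {
    beats {
        port => "5044"
    }
}

output {
    stdout {
        codec => "rubydebug"
    }
}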

Start logstash with nginx.conf, then hit nginx on port 9527 from a browser to generate log data.

When starting logstash, you can enable automatic config reloading, so that changes to nginx.conf don't require constant stop/start cycles:

[root@192.168.118.15 /etc/logstash/conf.d]#logstash -f nginx.conf  --config.reload.automatic

Grab a chunk of the JSON output and examine it:

{
    "@timestamp" => 2019-09-14T06:52:16.056Z,
      "@version" => "1",
        "source" => "/www/log/access.log",
         "input" => {
        "type" => "log"
    },
          "beat" => {
            "name" => "web-node1",
         "version" => "6.8.2",
        "hostname" => "web-node1"
    },
          "host" => {
                 "name" => "web-node1",
         "architecture" => "x86_64",
                   "id" => "4b3b32a1db0343458c4942a10c79acef",
                   "os" => {
                "name" => "CentOS Linux",
            "codename" => "Core",
              "family" => "redhat",
            "platform" => "centos",
             "version" => "7 (Core)"
        },
        "containerized" => false
    },
           "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
    "prospector" => {
        "type" => "log"
    },
       "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
        "offset" => 9350,
         "event" => {
        "dataset" => "nginx.access"
    },
       "message" => "{\"domain\":\"localhost\",\"real_ip\":\"\",\"http_x_forwarded_for\":\"-\",\"time_local\":\"2019-09-14T14:52:15+08:00\",\"request\":\"GET / HTTP/1.1\",\"request_body\":\"-\",\"status\":304,\"body_bytes_sent\":\"0\",\"http_referer\":\"-\",\"upstream_response_time\":\"-\",\"request_time\":\"0.000\",\"http_user_agent\":\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36\",\"upstream_addr\":\"-\",\"upstream_status\":\"-\"}"
}
logstash output

There is a lot of data here, but some of it is unnecessary. Keep what is needed and strip the rest, so the JSON is more concise.

Looking at this JSON, the actual nginx log data all lives in message; everything else is host and service metadata. But message is an unreadable mess. Since it is JSON, it can be parsed into proper fields.

Modify the configuration file as follows:

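A minimal version of that change, matching the json filter in the final configuration shown later: a json filter that parses message into top-level fields and then drops it.

filter {
    json {
        source => "message"
        remove_field => "message"
    }
}

With this filter in place, the output becomes: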

{
                  "@version" => "1",
                      "host" => {
                   "os" => {
                "name" => "CentOS Linux",
             "version" => "7 (Core)",
              "family" => "redhat",
            "platform" => "centos",
            "codename" => "Core"
        },
                 "name" => "web-node1",
                   "id" => "4b3b32a1db0343458c4942a10c79acef",
         "architecture" => "x86_64",
        "containerized" => false
    },
    "upstream_response_time" => "-",
                      "beat" => {
            "name" => "web-node1",
         "version" => "6.8.2",
        "hostname" => "web-node1"
    },
                    "domain" => "localhost",
              "request_body" => "-",
                       "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
           "http_user_agent" => "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
                "prospector" => {
        "type" => "log"
    },
              "http_referer" => "-",
                   "real_ip" => "",
                   "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
           "upstream_status" => "-",
           "body_bytes_sent" => "0",
                "@timestamp" => 2019-09-14T07:03:36.087Z,
      "http_x_forwarded_for" => "-",
                    "status" => 304,
                    "source" => "/www/log/access.log",
                     "input" => {
        "type" => "log"
    },
                "time_local" => "2019-09-14T15:03:28+08:00",
              "request_time" => "0.000",
             "upstream_addr" => "-",
                      "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
                    "offset" => 11066,
                     "event" => {
        "dataset" => "nginx.access"
    },
                   "request" => "GET / HTTP/1.1"
}
logstash output

Comparing the two captures: in the second one message is gone, but every field it contained now stands on its own. The benefits:

  (1) The log is clearer, and any single field can be pinpointed exactly;

  (2) The data is ready for querying and filtering once it is stored in elasticsearch.

 

In effect, this operation splits the original message out into columns.

Notice that the JSON above contains two timestamps:

  @timestamp - UTC (GMT) - the time logstash ingested the entry

  time_local - UTC+8 - the time nginx recorded the entry

 

The minutes and seconds of the two differ, and the later log filtering uses @timestamp, i.e. logstash's time, which would make the nginx log times inaccurate. The two timestamps therefore need to be brought into agreement.

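The change, consistent with the date filter in the final configuration later in this post: parse time_local (an ISO8601 string) and write it into @timestamp.

filter {
    json {
        source => "message"
        remove_field => ["message", "@timestamp"]
    }
    date {
        match => ["time_local", "ISO8601"]
        target => "@timestamp"
    }
}

After this change, the output is: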

{
                  "@version" => "1",
                      "host" => {
                 "name" => "web-node1",
                   "os" => {
                "name" => "CentOS Linux",
             "version" => "7 (Core)",
              "family" => "redhat",
            "platform" => "centos",
            "codename" => "Core"
        },
                   "id" => "4b3b32a1db0343458c4942a10c79acef",
         "architecture" => "x86_64",
        "containerized" => false
    },
    "upstream_response_time" => "-",
                      "beat" => {
            "name" => "web-node1",
         "version" => "6.8.2",
        "hostname" => "web-node1"
    },
                    "domain" => "localhost",
              "request_body" => "-",
                       "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
           "http_user_agent" => "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
                "prospector" => {
        "type" => "log"
    },
              "http_referer" => "-",
                   "real_ip" => "",
                   "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
           "upstream_status" => "-",
           "body_bytes_sent" => "0",
                    "status" => 304,
      "http_x_forwarded_for" => "-",
                "@timestamp" => 2019-09-14T07:14:46.000Z,
                    "source" => "/www/log/access.log",
                     "input" => {
        "type" => "log"
    },
                "time_local" => "2019-09-14T15:14:46+08:00",
              "request_time" => "0.000",
             "upstream_addr" => "-",
                      "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
                    "offset" => 11495,
                     "event" => {
        "dataset" => "nginx.access"
    },
                   "request" => "GET / HTTP/1.1"
}
logstash output

Now the minutes and seconds of the two timestamps match exactly. Next, delete some unnecessary fields and rename a few others; modify the configuration as follows:

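The relevant mutate block, matching the final configuration later in the post (the request/grok handling comes later), added inside the filter block after the json and date filters:

    mutate {
        remove_field => ["host","event","input","offset","prospector","source","type","tags","beat"]
        rename => {"http_user_agent" => "agent"}
        rename => {"upstream_response_time" => "response_time"}
        rename => {"http_x_forwarded_for" => "x_forwarded_for"}
        split => {"x_forwarded_for" => ", "}
        split => {"response_time" => ", "}
    }

The output is now: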

{
           "@version" => "1",
              "agent" => "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
             "domain" => "localhost",
       "request_body" => "-",
                "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
       "http_referer" => "-",
      "response_time" => [
        [0] "-"
    ],
            "real_ip" => "",
            "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
    "upstream_status" => "-",
    "body_bytes_sent" => "0",
             "status" => 304,
         "@timestamp" => 2019-09-14T07:22:14.000Z,
         "time_local" => "2019-09-14T15:22:14+08:00",
       "request_time" => "0.000",
      "upstream_addr" => "-",
    "x_forwarded_for" => [
        [0] "-"
    ],
            "request" => "GET / HTTP/1.1"
}
logstash output

After the renames and the removal of unneeded fields, the JSON is much leaner, which correspondingly reduces the storage consumed in elasticsearch.

Now the data could be written to elasticsearch. But so far everything has dealt with access.log; the error.log format hasn't been considered at all, because nginx does not allow the error.log format to be customized.

Try requesting a bad URI and see what comes through:

[WARN ] 2019-09-14 15:25:34.300 [[main]>worker3] json - Error parsing json {:source=>"message", :raw=>"2019/09/14 15:25:29 [error] 2122#0: *33 open() \"/www/9527/123.html\" failed (2: No such file or directory), client: 192.168.118.41, server: localhost, request: \"GET /123.html HTTP/1.1\", host: \"192.168.118.16:9527\"", :exception=>#<LogStash::Json::ParserError: Unexpected character ('/' (code 47)): Expected space separating root-level values
 at [Source: (byte[])"2019/09/14 15:25:29 [error] 2122#0: *33 open() "/www/9527/123.html" failed (2: No such file or directory), client: 192.168.118.41, server: localhost, request: "GET /123.html HTTP/1.1", host: "192.168.118.16:9527""; line: 1, column: 6]>}

{
    "@timestamp" => 2019-09-14T07:25:33.173Z,
      "@version" => "1",
           "log" => {
        "file" => {
            "path" => "/www/log/error.log"
        }
    },
       "fileset" => {
        "module" => "nginx",
          "name" => "error"
    },
       "message" => "2019/09/14 15:25:29 [error] 2122#0: *33 open() \"/www/9527/123.html\" failed (2: No such file or directory), client: 192.168.118.41, server: localhost, request: \"GET /123.html HTTP/1.1\", host: \"192.168.118.16:9527\""
}
logstash output

Data from error.log ends up looking like the above. That's a problem: ELK exists first to analyze data and second to speed up troubleshooting, and if it can't handle this, it's of little use.

The format above is a mess again: the entire nginx error entry sits in message. Although nginx error logs cannot be given a custom format, logstash can convert them to JSON with regular expressions. But first, consider that access.log and error.log are two different formats and cannot be matched the same way. So how do we tell whether an event came from access.log or error.log?

The syntax that springs to mind:

if … {
    # access.log
} else if … {
    # error.log
}

Right, the syntax is fine, but what condition? Looking at the logs above, it's easy to see that every event carries fields like these:

access.log event:
            "fileset" => {
        "module" => "nginx",
          "name" => "access"

error.log event:
       "fileset" => {
        "module" => "nginx",
          "name" => "error"

 

Now there is something to branch on; start writing it in logstash config syntax:

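The skeleton of the conditional (filled in fully in the configuration below):

filter {
    if [fileset][name] == "access" {
        # access.log handling
    }
    if [fileset][name] == "error" {
        # error.log handling
    }
}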

So far, the logstash configuration for collecting and filtering nginx logs looks like this:

Config file name: nginx.conf

input {
    beats {
        port => "5044"    
    }    
}

filter {
    if [fileset][name] == "access" {
        json {
            source => "message"
            remove_field => ["message", "@timestamp"]
        }
        date {
            match => ["time_local", "ISO8601"]
            target => "@timestamp"
        }
        grok {
            match => { "request" => "%{WORD:method} (?<url>.* )" }
        }
        mutate {
            remove_field => ["host","event","input","request","offset","prospector","source","type","tags","beat"]
            rename => {"http_user_agent" => "agent"}
            rename => {"upstream_response_time" => "response_time"}
            rename => {"http_x_forwarded_for" => "x_forwarded_for"}
            split => {"x_forwarded_for" => ", "}
            split => {"response_time" => ", "}
        }
        geoip {
            source => "real_ip"
        }
    }
    if [fileset][name] == "error" {
        mutate {
            remove_field => ["@timestamp"]
        }
        grok {
            match => {"message" => "(?<datetime>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<real_ip>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:domain}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?"}
        }
        date {
            match => ["datetime", "yyyy/MM/dd HH:mm:ss"]
            target => "@timestamp"
        }
        mutate {
            remove_field => ["message","request","http_referer","host","event","input","offset","prospector","source","type","tags","beat"]
        }
    }
}

output {
    stdout {
        codec => "rubydebug"        
    }
}
nginx.conf

Test with access.log data:

{
           "@version" => "1",
              "agent" => "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
             "domain" => "localhost",
       "request_body" => "-",
                "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
       "http_referer" => "-",
      "response_time" => [
        [0] "-"
    ],
            "real_ip" => "",
            "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
    "upstream_status" => "-",
    "body_bytes_sent" => "0",
             "status" => 304,
         "@timestamp" => 2019-09-14T07:39:50.000Z,
         "time_local" => "2019-09-14T15:39:50+08:00",
       "request_time" => "0.000",
      "upstream_addr" => "-",
    "x_forwarded_for" => [
        [0] "-"
    ],
            "request" => "GET / HTTP/1.1"
}
logstash output

 

Test with error.log data (request a URI that does not exist):

{
           "@version" => "1",
              "agent" => "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
             "domain" => "localhost",
       "request_body" => "-",
                "log" => {
        "file" => {
            "path" => "/www/log/access.log"
        }
    },
       "http_referer" => "-",
      "response_time" => [
        [0] "-"
    ],
            "real_ip" => "",
            "fileset" => {
        "module" => "nginx",
          "name" => "access"
    },
    "upstream_status" => "-",
    "body_bytes_sent" => "571",
             "status" => 404,
         "@timestamp" => 2019-09-14T07:41:48.000Z,
         "time_local" => "2019-09-14T15:41:48+08:00",
       "request_time" => "0.000",
      "upstream_addr" => "-",
    "x_forwarded_for" => [
        [0] "-"
    ],
            "request" => "GET /123.html HTTP/1.1"
}
logstash output


No problems now; both formats come through. Next, write the data into elasticsearch.


At this point, the logstash config file nginx.conf looks like this:

input {
    beats {
        port => "5044"    
    }    
}

filter {
    if [fileset][name] == "access" {
        json {
            source => "message"
            remove_field => ["message", "@timestamp"]
        }
        date {
            match => ["time_local", "ISO8601"]
            target => "@timestamp"
        }
        grok {
            match => { "request" => "%{WORD:method} (?<url>.* )" }
        }
        mutate {
            remove_field => ["host","event","input","request","offset","prospector","source","type","tags","beat"]
            rename => {"http_user_agent" => "agent"}
            rename => {"upstream_response_time" => "response_time"}
            rename => {"http_x_forwarded_for" => "x_forwarded_for"}
            split => {"x_forwarded_for" => ", "}
            split => {"response_time" => ", "}
        }
        geoip {
            source => "real_ip"
        }
    }
    if [fileset][name] == "error" {
        mutate {
            remove_field => ["@timestamp"]
        }
        grok {
            match => {"message" => "(?<datetime>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<real_ip>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:domain}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?"}
        }
        date {
            match => ["datetime", "yyyy/MM/dd HH:mm:ss"]
            target => "@timestamp"
        }
        mutate {
            remove_field => ["message","request","http_referer","host","event","input","offset","prospector","source","type","tags","beat"]
        }
    }
}

#output {
#    stdout {
#        codec => "rubydebug"        
#    }
#}
output {
    elasticsearch {
        hosts => ["192.168.118.14"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
}
nginx.conf

This can be considered the final version of the nginx pipeline configuration for this post.

 

Hit nginx on port 9527 from a browser a few times, then switch to elasticsearch-head to check whether the index was created.

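The same check also works without elasticsearch-head, via the _cat API (assuming elasticsearch listens on the default port 9200):

[root@192.168.118.15 ~]#curl 'http://192.168.118.14:9200/_cat/indices?v' | grep logstash-nginx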

OK, today's index has been created; examine the data.


The data checks out too; switch to Kibana and add the index pattern.


OK, the data is now stored in elasticsearch and surfaced through Kibana, but viewing and analyzing it clearly still takes some work on the Kibana side.

 

2.4 Kibana presentation

First, Discover. To see the logs at a glance every time you open it, make the following settings:


With those two settings saved, after each login you only need to click Open and pick the saved search to see clean log data.


Next comes drawing the dashboard shown at the very top of this post.

 

Charts need data behind them, and since this is a test environment with no real user traffic, a batch of fake visits has to be fabricated.

The method: copy a line out of access.log and change real_ip to a public IP, as sketched below.
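
One way to script it (a sketch; the three addresses are arbitrary public IPs, any routable addresses will do):

[root@192.168.118.16 ~]#for ip in 114.114.114.114 119.29.29.29 180.76.76.76; do
>   head -1 /www/log/access.log | sed "s/\"real_ip\":\"[^\"]*\"/\"real_ip\":\"$ip\"/" >> /www/log/access.log
> done

Each iteration copies the first line of access.log with a different real_ip and appends it, so filebeat ships the new lines and geoip has real addresses to resolve.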

Once the fake data is in, configure the charts: click Visualize.


Chart 1: visits by provincial capital, TOP 5 (pie chart)

Choose the pie chart, then select the logstash-nginx-* index.

Click Save when done.

Chart 2: visit distribution map (coordinate map)

Click Save when done.

Chart 3: domain TOP 5 (data table)

Click Save when done.

Chart 4: backend service TOP 5 (data table)

Click Save when done.

Chart 5: URI TOP 5 (data table)

Click Save when done.

Chart 6: real_ip TOP 5 (horizontal bar chart)

Click Save when done.

Chart 7: HTTP status TOP 5 (pie chart)

Click Save when done.


That's it: seven visualizations now exist under Visualize. Open Dashboard and lay them all out.


Arrange the charts, and the job is done.
