filebeat: 6.2.3
mysql: 5.6.38
{ "_index": "mysql-slow-2018.04.17", "_type": "doc", "_id": "AWLRiDqYhjFMCbqrK5ez", "_version": 1, "_score": null, "_source": { "@timestamp": "2018-04-17T02:56:22.823Z", "offset": 100619865, "beat": { "hostname": "test-db1", "name": "test-db1", "version": "6.2.3" }, "prospector": { "type": "log" }, "source": "/var/log/mysql_3306/mysql-slow.log", "fileset": { "module": "mysql", "name": "slowlog" }, "message": "# User@Host: test_db[test_table] @ [10.10.10.10] Id: 1874266\n# Query_time: 2.088465 Lock_time: 0.000086 Rows_sent: 67 Rows_examined: 18862\nSET timestamp=1523933781;\nselect id, dct, mh, topcolor, bit_count(dct^1144174128272565460) as dist from image_feature where topcolor=\"278522176103c518c774fe2a73b20569\" and created_at<\"2018-04-17 10:54:16\" and id not in (120251270,120251181,120251202,120251209,120251221,120251229,120251240,120251252,120251259,120251270,120251278) having dist<=20 order by dist;", "error": { "message": "Provided Grok expressions do not match field value: [# User@Host: test_db[test_table] @ [10.10.10.10] Id: 1874266\\n# Query_time: 2.088465 Lock_time: 0.000086 Rows_sent: 67 Rows_examined: 18862\\nSET timestamp=1523933781;\\nselect id, dct, mh, topcolor, bit_count(dct^1144174128272565460) as dist from image_feature where topcolor=\\\"278522176103c518c774fe2a73b20569\\\" and created_at<\\\"2018-04-17 10:54:16\\\" and id not in (120251270,120251181,120251202,120251209,120251221,120251229,120251240,120251252,120251259,120251270,120251278) having dist<=20 order by dist;]" } }, "fields": { "@timestamp": [ 1523933782823 ] }, "highlight": { "beat.name": [ "@kibana-highlighted-field@test-db1@/kibana-highlighted-field@" ] }, "sort": [ 1523933782823 ] }
{ "_index": "mysql-slow-2018.04.17", "_type": "doc", "_id": "AWLRb2nl6-SuKroP98i-", "_version": 1, "_score": null, "_source": { "@timestamp": "2018-04-17T02:29:21.535Z", "offset": 100614853, "beat": { "hostname": "test-db1", "name": "test-db1", "version": "6.2.3" }, "prospector": { "type": "log" }, "source": "/var/log/mysql_3306/mysql-slow.log", "message": "# Time: 180417 10:29:18", "fileset": { "module": "mysql", "name": "slowlog" }, "error": { "message": "Provided Grok expressions do not match field value: [# Time: 180417 10:29:18]" } }, "fields": { "@timestamp": [ 1523932161535 ] }, "highlight": { "error.message": [ "Provided Grok expressions do not match field value: [# @kibana-highlighted-field@Time@/kibana-highlighted-field@: 180417 10:29:18]" ] }, "sort": [ 1523932161535 ] }
The documents above can be retrieved with a query in Kibana's Discover view.
The JSON above tells us two things very clearly: the grok expressions fail on the bare "# Time:" header lines, and they also fail on the slow-query entries themselves. That calls for two fixes. The first is to extend the slowlog fileset's exclude_lines (in my tree this lives at module/mysql/slowlog/config/slowlog.yml) so the "# Time:" headers are never shipped:
Before the change:

exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*'] # Exclude the header

After the change:

exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*','^# Time.*'] # Exclude the header
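Before restarting filebeat, you can preview exactly which lines the added '^# Time.*' regex will drop (using the slow-log path from this setup):

grep -E '^# Time' /var/log/mysql_3306/mysql-slow.log | head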
The second is to modify module/mysql/slowlog/ingest/pipeline.json:
修改以前: "patterns":[ "^# User@Host: %{USER:mysql.slowlog.user}(\\[[^\\]]+\\])? @ %{HOSTNAME:mysql.slowlog.host} \\[(%{IP:mysql.slowlog.ip})?\\](\\s*Id:\\s* %{NUMBER:mysql.slowlog.id})?\n# Query_time: %{NUMBER:mysql.slowlog.query_time.sec}\\s* Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}\\s* Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}\\s* Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}\n(SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n)?%{GREEDYMULTILINE:mysql.slowlog.query}" ] 修改以后: "patterns":[ "^# User@Host: %{USER:mysql.slowlog.user}(\\[[^\\]]+\\])? @ %{HOSTNAME:mysql.slowlog.host} \\[(IP:mysql.slowlog.ip)?\\](\\s*Id:\\s* %{NUMBER:mysql.slowlog.id})?\n# Query_time: %{NUMBER:mysql.slowlog.query_time.sec}\\s* Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}\\s* Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}\\s* Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}\n(SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n)?%{GREEDYMULTILINE:mysql.slowlog.query}" ],
1. Download filebeat:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-x86_64.rpm

2. Install filebeat:

sudo yum install filebeat-6.2.3-x86_64.rpm
3. Configure the output and template settings in filebeat.yml:

index.number_of_shards: 5
output.elasticsearch.hosts: ["localhost:9200"]
output.elasticsearch.index: "mysql-slow-%{+yyyy.MM.dd}"
setup.template.name: "mysql-slow"
setup.template.pattern: "mysql-slow*"
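For reference, the fully nested form of the same settings as I understand the filebeat 6.x reference config; note that number_of_shards belongs under setup.template.settings (this is a sketch, not a copy of my production file):

setup.template.name: "mysql-slow"
setup.template.pattern: "mysql-slow*"
setup.template.settings:
  index.number_of_shards: 5

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "mysql-slow-%{+yyyy.MM.dd}"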
4. Enable the MySQL module:
filebeat modules list
filebeat modules enable mysql
filebeat modules list
5. Set the MySQL slow log path in modules.d/mysql.yml:
var.paths: ["/var/log/mysql_3306/mysql-slow.log"]
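In context, var.paths sits under the slowlog fileset, so modules.d/mysql.yml ends up looking roughly like this (disabling the error fileset is my own choice here, since only the slow log matters for this setup):

- module: mysql
  error:
    enabled: false
  slowlog:
    enabled: true
    var.paths: ["/var/log/mysql_3306/mysql-slow.log"]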
The bug report I filed:
https://github.com/elastic/beats/issues/6882
Although the filebeat 6.2.3 authors state that the module was tested against MySQL 5.5 through MySQL 5.7, when I tested it against MySQL 5.7.19 slow logs, filebeat still failed to parse the entries: compared with MySQL 5.6, the slow log format changed in 5.7.19. module/mysql/slowlog/ingest/pipeline.json does account for this, but strangely, when I ran slow-query tests, some of the entries that reached Kibana carried the Time header line and some did not.
MySQL 5.6.38 slow log format:

# Time: 180308 16:01:16
# User@Host: test[zabbix] @ [10.10.10.10] Id: 934
# Query_time: 1.760919 Lock_time: 0.000052 Rows_sent: 2 Rows_examined: 5141130
SET timestamp=1520496076;
select passed, count(*) count from inscount where user='gu1417' and created_at>='2018-03-08' and team=1 group by passed;
MySQL 5.7.19 slow log format:

# Time: 2018-04-17T17:59:40.019149+08:00
# User@Host: root[root] @ localhost [] Id: 576311
# Query_time: 1.726471 Lock_time: 0.000111 Rows_sent: 0 Rows_examined: 896862
SET timestamp=1523959180;
SELECT source, reason, COUNT(id) AS report_count FROM report WHERE type = 'circle_article' AND deleted_at IS NULL GROUP BY source, reason HAVING report_count > 9 ORDER BY report_count DESC;
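One detail worth noticing across both versions: the "SET timestamp=..." line carries a plain UNIX epoch in either format, so it is a more stable source for the event time than the version-dependent "# Time:" header. A quick check with GNU date (output assumes a host in the Asia/Shanghai timezone) confirms it matches the 5.7 header above:

$ date -d @1523959180
Tue Apr 17 17:59:40 CST 2018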
The odd part was this: when I ran slow SQL very frequently, I could clearly see in the filebeat log that the message being sent contained the "# Time..." line; but when slow SQL ran only occasionally, the message sent did not contain "# Time..." at all.
~~After watching the filebeat log for a while and combining that with some reading of the source, my personal inclination is this: filebeat probes the slow log's inode at intervals, then processes and ships whatever changed. If only a single slow-log entry lands within a probe interval, filebeat parses it correctly; with multiple entries, the parsing breaks. I don't know how to construct a concrete verification or test for this, so if any reader has an approach, I'd be glad to compare notes.~~
In the experiments above I had overlooked an important point: I always executed the slow SQL locally, never on a remote server. When I tested slow SQL from a remote server and watched the slow log, a new situation appeared:
Slow SQL executed locally:
# Time: 180418 17:29:36
# User@Host: root[root] @ localhost [] Id: 165675344
# Query_time: 2.012857 Lock_time: 0.000039 Rows_sent: 1 Rows_examined: 7138643
SET timestamp=1524043776;
SELECT count(id) as total FROM user WHERE deleted_at=0;
Slow SQL executed from the remote server:
# Time: 180418 17:33:26
# User@Host: remix[remix] @ [10.10.10.10] Id: 165674369
# Query_time: 5.317408 Lock_time: 0.000118 Rows_sent: 653 Rows_examined: 3569801
use remix_liveroom;
SET timestamp=1524044006;
SELECT count(id) as total FROM user WHERE deleted_at=0;
Both slow logs above are from MySQL 5.6, and the difference in the # User@Host line is plain to see: locally it is "root[root] @ localhost []" (hostname present, IP empty), while remotely it is "remix[remix] @ [10.10.10.10]" (hostname absent, IP present), and the remote entry additionally carries a "use remix_liveroom;" line. So the patterns in filebeat's pipeline.json need changing after all.
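A pattern that tolerates both shapes has to make the hostname and the IP independently optional. This fragment is the User@Host portion of the full logstash pattern shown further below, pulled out here to show the idea; it is not a drop-in replacement for the module pattern:

# local:  # User@Host: root[root] @ localhost [] Id: 165675344
# remote: # User@Host: remix[remix] @ [10.10.10.10] Id: 165674369
^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\](?:\s*Id: %{NUMBER:id:int})?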
I tried many more times after that but still could not solve it. Since I was changing the grok used by filebeat, and I had not found a UI that could validate that grok online, and filebeat only throws a generic grok-mismatch error when pushing the message to elasticsearch, I was stuck endlessly tweaking grok syntax in the dark. In the end my colleagues and I agreed to route all filebeat-collected logs into logstash and do the parsing and aggregation there.
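(In hindsight there is a way to shorten that feedback loop without any UI: Elasticsearch's simulate-pipeline API runs a grok pattern against a sample document and reports exactly what matched. A minimal sketch, assuming Elasticsearch on localhost:9200 and using the remote-server sample line from above; Kibana's Grok Debugger under Dev Tools serves the same purpose if your X-Pack license includes it:)

curl -XPOST 'http://localhost:9200/_ingest/pipeline/_simulate?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "pipeline": {
    "processors": [
      { "grok": { "field": "message",
                  "patterns": ["^# User@Host: %{USER:user}\\[[^\\]]+\\] @ (?:(?<host>\\S*) )?\\[(?:%{IP:ip})?\\]"] } }
    ]
  },
  "docs": [
    { "_source": { "message": "# User@Host: remix[remix] @ [10.10.10.10] Id: 165674369" } }
  ]
}'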
The filebeat.yml configuration (the multiline.* settings glue every line that does not start with "# User@Host:" onto the preceding line that does, so each event carries one whole slow-log entry):
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/mysql/mysql-slow.log
  multiline.pattern: "^# User@Host:"
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["10.10.10.10:5044"]

logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 1
  permissions: 0644
The logstash configuration:
input { beats { port => 5044 } #tcp { # port => 1928 #} } filter { grok { match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\](?:\s*Id: %{NUMBER:id:int})?\s+# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)" ] } grok { match => { "message" => "# Time: " } add_tag => [ "drop" ] tag_on_failure => [] } if "drop" in [tags] { drop {} } date { match => ["mysql.slowlog.timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss"] target => "@timestamp" timezone => "Asia/Chongqing" } ruby { code => "event.set('[@metadata][today]', Time.at(event.get('@timestamp').to_i).localtime.strftime('%Y.%m.%d'))" } mutate { remove_field => [ "message" ] } } output { #stdout { codec => rubydebug } elasticsearch { hosts => ["10.10.10.10:9200"] index => "mysql-slow-%{[@metadata][today]}" document_type => "mysql-slow" template_overwrite => true } }
So filebeat first aggregates the slow log into multiline events locally and ships them to logstash, which then performs the second round of parsing and aggregation. The grok pattern in the configuration above matches the slow logs of MySQL 5.1, 5.6, and 5.7. (For debugging, uncomment the stdout { codec => rubydebug } output to print each parsed event.)
Last week I needed to integrate the MySQL slow logs collected by filebeat into our ELK logging platform. The plan was for filebeat to act as the client collecting data, ship it to logstash for parsing and aggregation, and have logstash send the aggregated result to elasticsearch. A new logstash node was created for this, and its configuration set template_overwrite => true. As a result, when the node started, it created a default template, logstash-* (an index is matched by pattern against templates, e.g. an index matching mysql-slow-* would match a mysql-* template). The qba logs were meant to match a template named logstash-qba-*, but since their index names are logstash-qba-*, they now also matched the default logstash-* template, whose mappings did not fit the qba log fields, so the writes failed. Unable to write into ELK, the qba pipeline kept retrying until the writes could succeed, queuing the data in the meantime and spilling it to disk. And because we run an elasticsearch cluster, incoming logs are first routed and only then matched against templates on write, so the load and memory of the entire cluster climbed until alerts fired.
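Two things would have caught this earlier: inspecting which templates a freshly started logstash node has installed, and disabling template management on outputs that don't need it. A sketch, assuming the hosts and names used in this post (the default template logstash installs is, to my knowledge, simply named "logstash"):

# See which templates exist and what index patterns they claim
curl -XGET 'http://10.10.10.10:9200/_template/logstash*?pretty'

# Remove the stray default template, if nothing else depends on it
curl -XDELETE 'http://10.10.10.10:9200/_template/logstash'

Alternatively, setting manage_template => false in the elasticsearch output stops logstash from installing any template at startup, so a new node cannot silently shadow existing index templates.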