Logstash is an open-source tool from the Elastic Stack. It can collect data from multiple sources at the same time, transform it, and then send it to Elasticsearch to build an index.
Note: the Logstash version you download should ideally match the version of Elasticsearch you are using.
Download: https://www.elastic.co/downloads/logstash
Extract the archive after downloading.
logstash-input-jdbc is written in Ruby, so download and install Ruby first.
Download: https://rubyinstaller.org/downloads/
Version 2.5 is sufficient.
After installing, run ruby -v at the command prompt to check the Ruby version.
Logstash 5.x ships with logstash-input-jdbc, but 6.x does not include the plugin, so it has to be installed manually.
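On 6.x the plugin can be installed with the logstash-plugin script shipped in Logstash's bin directory. A minimal sketch, assuming the installation path used later in this article (adjust it to your own):

cd D:\software\ElasticSearch01\logstash-6.2.1
.\bin\logstash-plugin.bat install logstash-input-jdbc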
After a successful installation you can check the installed plugin version in the corresponding directory under the Logstash root.
We use Logstash to read data from MySQL and build the index in ES, which requires creating a mapping template file in advance for Logstash to use.
Create XXX.json (e.g. xc_course_template.json, as referenced in the configuration below) in Logstash's config directory; its content is the same as the field mappings used when creating the Elasticsearch index.
For example:
{ "mappings" : { "doc" : { "properties" : { "charge" : { "type" : "keyword" }, "description" : { "analyzer" : "ik_max_word", "search_analyzer" : "ik_smart", "type" : "text" }, "end_time" : { "format" : "yyyy‐MM‐dd HH:mm:ss", "type" : "date" }, "expires" : { "format" : "yyyy‐MM‐dd HH:mm:ss", "type" : "date" }, "grade" : { "type" : "keyword" }, "id" : { "type" : "keyword" }, "mt" : { "type" : "keyword" }, "name" : { "analyzer" : "ik_max_word", "search_analyzer" : "ik_smart", "type" : "text" }, "pic" : { "index" : false, "type" : "keyword" }, "price" : { "type" : "float" }, "price_old" : { "type" : "float" }, "pub_time" : { "format" : "yyyy‐MM‐dd HH:mm:ss", "type" : "date" }, "qq" : { "index" : false, "type" : "keyword" }, "st" : { "type" : "keyword" }, "start_time" : { "format" : "yyyy‐MM‐dd HH:mm:ss", "type" : "date" }, "status" : { "type" : "keyword" }, "studymodel" : { "type" : "keyword" }, "teachmode" : { "type" : "keyword" }, "teachplan" : { "analyzer" : "ik_max_word", "search_analyzer" : "ik_smart", "type" : "text" }, "users" : { "index" : false, "type" : "text" }, "valid" : { "type" : "keyword" } } } }, "template" : "xc_course" }
Create a mysql.conf file in Logstash's config directory for Logstash to use; Logstash will read data from MySQL according to the settings in mysql.conf and write it to the ES index.
Reference: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html
Configure the input and output data sources:
input {
  stdin { }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/xc_course?useUnicode=true&characterEncoding=utf-8&useSSL=true&serverTimezone=UTC"
    # the user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "root"
    # the path to our downloaded jdbc driver
    jdbc_driver_library => "D:/software/ElasticSearch01/logstash-6.2.1/config/mysql-connector-java-5.1.32.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    # the SQL statement to execute (alternatively, point to a SQL file)
    #statement_filepath => "/conf/course.sql"
    statement => "select * from course_pub where timestamp > date_add(:sql_last_value,INTERVAL 8 HOUR)"
    # schedule (cron-like, here: run every minute)
    schedule => "* * * * *"
    record_last_run => true
    last_run_metadata_path => "D:/software/ElasticSearch01/logstash-6.2.1/config/logstash_metadata"
  }
}
output {
  elasticsearch {
    # ES host and port
    hosts => "localhost:9200"
    #hosts => ["localhost:9200","localhost:9202","localhost:9203"]
    # ES index name
    index => "xc_course"
    document_id => "%{id}"
    document_type => "doc"
    template => "D:/software/ElasticSearch01/logstash-6.2.1/config/xc_course_template.json"
    template_name => "xc_course"
    template_overwrite => "true"
  }
  stdout {
    # log output as JSON lines
    codec => json_lines
  }
}
Notes:
1. ES uses the UTC time zone
ES stores times in UTC, which is 8 hours behind Beijing time, so 8 hours are added to the last update time when reading data for ES:
where timestamp > date_add(:sql_last_value,INTERVAL 8 HOUR)
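As a quick worked example of what DATE_ADD does here (an illustration, not part of the project code): if :sql_last_value is 2019-12-23 02:00:00 (UTC), the comparison value becomes the corresponding Beijing time:

SELECT DATE_ADD('2019-12-23 02:00:00', INTERVAL 8 HOUR);
-- 2019-12-23 10:00:00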
2. Each time Logstash finishes a run, it records the execution time in the file configured by last_run_metadata_path (logstash_metadata in the configuration above); the next run uses this time as the baseline for incrementally synchronizing data to the index.
Start logstash.bat from Logstash's bin directory:
.\logstash.bat -f ..\config\mysql.conf
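Once Logstash has run the query at least once, you can check that documents reached the index. A minimal verification, assuming ES is reachable on localhost:9200 as configured in mysql.conf:

curl -X GET "http://localhost:9200/xc_course/_search?pretty"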
Updating the timestamp field of the corresponding rows in the database is enough to get the Elasticsearch index updated.
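For example, touching the timestamp column of a row in course_pub makes it match the incremental query on the next scheduled run (a sketch; the id value is a placeholder):

UPDATE course_pub SET timestamp = CURRENT_TIMESTAMP WHERE id = '<course_id>';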