elasticsearch-6.3.2 + kibana-6.3.2 + logstash-6.3.2 + ik

1. Preparation (Elasticsearch requires JDK 1.8 or above)

Download elasticsearch-6.3.2 and kibana-6.3.2 from the official site:

https://www.elastic.co/downloads

2. After downloading, upload the archive to the Linux server and extract it to the target directory.

tar -zxvf /opt/elasticsearch-6.3.2.tar.gz -C /var/opt/

3. Edit the Elasticsearch configuration file

cd elasticsearch-6.3.2/config
vim elasticsearch.yml


cluster.name: elasticsearch-6
node.name: node-1
network.host: 0.0.0.0
http.port: 9200

4. Starting directly as root fails with an error: since 6.0 Elasticsearch refuses to run as root, and this can no longer be worked around by editing the config file, so create a separate user to start it.

#Create the elasticsearch group
groupadd elasticsearch
useradd es -g elasticsearch -p /var/opt/elasticsearch-6.3.2
#Change the owner and group of the elasticsearch folder and everything in it to es:elasticsearch
cd /var/opt
chown -R es:elasticsearch elasticsearch-6.3.2
#Switch to the es user and start
su es
cd elasticsearch-6.3.2/bin
./elasticsearch
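If the node starts cleanly, it can be sanity-checked from a second shell. A minimal check, assuming the http.port 9200 setting above and that you are on the same host:

```shell
# The root endpoint returns the cluster name and version as JSON
curl -s http://localhost:9200/

# Quick check that the version line is present in the response
curl -s http://localhost:9200/ | grep '"number"'
```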

5. Errors you may see on startup:

1.max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

2.max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix them by editing system configuration:

#Fix the first error
vim /etc/security/limits.conf
# Append the following at the end
es hard nofile 65536
es soft nofile 65536
#es is the name of the user that starts Elasticsearch
#Fix the second error
#Switch to root and edit sysctl.conf
vi /etc/sysctl.conf
#Add the following line:
vm.max_map_count=655360
#Then apply it with:
sysctl -p
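To confirm both settings took effect, a quick check (the nofile limit should be read from a fresh shell of the es user, since limits.conf applies at login):

```shell
# Kernel setting written to /etc/sysctl.conf (should print 655360)
sysctl vm.max_map_count

# Open-file hard limit for the current shell; run this after `su es`
# to see the new nofile value from limits.conf
ulimit -Hn
```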

References:

https://blog.csdn.net/jiankunking/article/details/65448030

https://www.cnblogs.com/yidiandhappy/p/7714481.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/zip-targz.html

 

Installing kibana-6.3.2

#Extract to the target directory
tar -zxvf kibana-6.3.2-linux-x86_64.tar.gz -C /var/opt/
#Edit config/kibana.yml
cd /var/opt/kibana-6.3.2-linux-x86_64/config
vim kibana.yml
#Port
server.port: 5601
#Host
server.host: "172.16.20.11"
#Elasticsearch address
elasticsearch.url: "http://172.16.20.11:9200"
#Kibana's own index in Elasticsearch
kibana.index: ".kibana"

#Start kibana
#Foreground
./kibana
#Background
nohup ./kibana &
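Kibana exposes a status API on its own port, which is a quick way to verify it is up and can reach Elasticsearch (host and port here are taken from the kibana.yml above; the exact JSON layout may vary between 6.x releases):

```shell
# Overall state should be "green" once Kibana is connected to Elasticsearch
curl -s http://172.16.20.11:5601/api/status | grep -o '"state":"[a-z]*"' | head -1
```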

References:

https://my.oschina.net/shea1992/blog/1925414

 

Installing the IK analyzer: elasticsearch-analysis-ik-6.3.2

Download: https://github.com/medcl/elasticsearch-analysis-ik/releases

Download the analyzer release that matches your Elasticsearch version, then unzip it straight into the Elasticsearch plugins directory.

#Create a directory for the IK analyzer
cd /var/opt/elasticsearch-6.3.2/plugins/
mkdir ik
#Unzip into it
unzip /var/opt/elasticsearch-analysis-ik-6.3.2.zip -d /var/opt/elasticsearch-6.3.2/plugins/ik
#Restart Elasticsearch
#Then verify in Kibana that the analyzer loaded

#Open Dev Tools
GET _analyze?pretty
{
  "analyzer": "ik_smart",
  "text": "我爱北京天安门"
}

#Expected output:
{
  "tokens": [
    {
      "token": "我",
      "start_offset": 0,
      "end_offset": 1,
      "type": "CN_CHAR",
      "position": 0
    },
    {
      "token": "爱",
      "start_offset": 1,
      "end_offset": 2,
      "type": "CN_CHAR",
      "position": 1
    },
    {
      "token": "北京",
      "start_offset": 2,
      "end_offset": 4,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "天安门",
      "start_offset": 4,
      "end_offset": 7,
      "type": "CN_WORD",
      "position": 3
    }
  ]
}
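Besides ik_smart, the plugin also registers an ik_max_word analyzer, which produces a finer-grained, exhaustive split of the same text (typically more, overlapping tokens). Comparing the two in Dev Tools makes the difference clear:

```
GET _analyze?pretty
{
  "analyzer": "ik_max_word",
  "text": "我爱北京天安门"
}
```

ik_smart suits search-time analysis where fewer, coarser tokens are wanted; ik_max_word is usually preferred at index time for better recall.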

Installing logstash-6.3.2

Download logstash-6.3.2:

https://www.elastic.co/downloads/logstash

#Extract logstash to the target directory
tar -zxvf logstash-6.3.2.tar.gz -C /var/opt/
#In the bin directory, install the jdbc input plugin, used to sync MySQL data into Elasticsearch
./logstash-plugin install logstash-input-jdbc

Then download the MySQL JDBC connector:

https://dev.mysql.com/downloads/connector/j/

and place the jar in the target directory /var/opt/.

Importing data from MySQL into Elasticsearch with Logstash

1. First create the index, via Kibana's Dev Tools

PUT hisms_ns 
{
    "mappings": {
        "t_kc21k1": {
            "properties": {
                "id": {
                    "type": "text",
                    "index": "false"
                },
                "aac003": {
                    "type": "text",
                    "index": "false"
                },
                "yka002": {
                    "type": "text",
                    "index": "false"
                },
                "yka003": {
                    "type": "text",
                    "index": "false"
                },
                "ykd018": {
                    "type": "text",
                    "index": "false"
                },
                "ykd018_name": {
                    "type": "text",
                    "index": "false"
                },
                "akc190": {
                    "type": "text",
                    "index": "false"
                },
                "aac001": {
                    "type": "text",
                    "index": "false"
                },
                "akc194": {
                    "type": "date",
                    "format": "dateOptionalTime"
                },
                "yka055": {
                    "type": "double"
                },
                "money_sum": {
                    "type": "double"
                },
                "id_drg": {
                    "type": "text",
                    "index": "false"
                },
                "name_drg": {
                    "type": "text",
                    "index": "false"
                },
				"id_mdc": {
                    "type": "text",
                    "index": "false"
                },
				"name_mdc": {
                    "type": "text",
                    "index": "false"
                },
				"if_ss": {
                    "type": "short"
                },
				"if_death": {
                   "type": "short"
                },
				"zysc": {
                   "type": "short"
                },
				"sq_zysc": {
                   "type": "short"
                },
				"if_ry_cy_fh": {
                   "type": "short"
                },
				"if_zdfx_death": {
                   "type": "short"
                },
				
				"if_zyhz": {
                   "type": "short"
                },
				"if_yngr": {
                   "type": "short"
                },
				"if_sscf": {
                   "type": "short"
                },
                "yp_fee": {
                    "type": "double"
                },
				"if_tskss": {
                   "type": "short"
                },
				"if_3tskss": {
                   "type": "short"
                },
                "ghc_fee": {
                    "type": "double"
                },
                "jc_fee": {
                    "type": "double"
                },
                "oneself_fee": {
                    "type": "double"
                },
                "if_34_ss": {
                    "type": "short"
                },
                "id_depa": {
                    "type": "text",
                    "index": "false"
                },
				 "id_doctor_org": {
                    "type": "text",
                    "index": "false"
                },
				 "zyys_org": {
                    "type": "text",
                    "index": "false"
                },
				 "if_zzy": {
                    "type": "short"
                },
				 "if_tzzy": {
                    "type": "short"
                },
				 "if_op10_death": {
                    "type": "short"
                },
				 "if_bl_lc_fh": {
                    "type": "short"
                },
				 "if_yx_bl_fh": {
                    "type": "short"
                },
				 "yjqk_yh_sum": {
                    "type": "short"
                },
				 "yjqk_sum": {
                    "type": "short"
                },
				 "bazl": {
                    "type": "short"
                },
				 "rescue_true": {
                    "type": "short"
                },
				 "rescue": {
                    "type": "short"
                },
				 "zhylfwf": {
                    "type": "double"
                },
				 "syszdf": {
                    "type": "double"
                },
				 "yxxzdf": {
                    "type": "double"
                },
				 "sszlf": {
                    "type": "double"
                },
				 "fsszlf": {
                    "type": "double"
                },
				 "xyf": {
                    "type": "double"
                },
				 "kjywf": {
                    "type": "double"
                },
				 "zcyjzyf": {
                    "type": "double"
                },
				 "hcf": {
                    "type": "double"
                },
				 "qtf": {
                    "type": "double"
                },
				 "zdbm": {
                     "type": "text",
                    "index": "false"
                },
				 "zdmc": {
                     "type": "text",
					 "index": "false"
                },
				 "ssbm": {
                     "type": "text",
					  "index": "false"
                },
				 "ssmc": {
                     "type": "text",
					 "index": "false"
                },
				 "group_status": {
                     "type": "text",
					 "index": "false"
                },
				 "drg_error_msg": {
                     "type": "text",
					 "index": "false"
                },
				 "bz_name": {
                     "type": "text",
					 "index": "false"
                },
				 "bz_id": {
                     "type": "text",
					  "index": "false"
                },
				 "belong_ks": {
                     "type": "text",
					 "index": "false"
                },
				 "id_doctor_add": {
                     "type": "text",
					 "index": "false"
                },
				 "zyys_add": {
                     "type": "text",
					  "index": "false"
                },
				 "id_doctor": {
                     "type": "text",
					 "index": "false"
                },
				 "zyys": {
                     "type": "text",
					 "index": "false"
                }
            }
        }
    }
}
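Before wiring up Logstash, it is worth reading the mapping back to confirm it was stored as intended. In 6.x the type-level mapping can be fetched in Dev Tools with:

```
GET hisms_ns/_mapping/t_kc21k1
```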
#After installation, go into the config directory
#and create a config file, e.g. jdbc.conf (any name works)
vim jdbc.conf
#with the following content:
input {
  jdbc {
    jdbc_driver_library => "/var/opt/mysql-connector-java-8.0.11.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://10.111.121.236:3306/hisms_ns?serverTimezone=GMT%2B8"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "SELECT * from t_kc21k1"
  }
}

output {
  elasticsearch {
    hosts => [ "172.16.20.11:9200" ]
    index => "hisms_ns"
    document_type => "t_kc21k1"
    document_id => "%{id}"
  }
  stdout {
    codec => json_lines
  }
}


##Parameter notes
 index => "hisms_ns"           #the index to import into
 document_type => "t_kc21k1"   #the type under that index
##To route different data into different indices/types, you can do something like this:
input {
  jdbc {
    jdbc_driver_library => "/var/opt/mysql-connector-java-8.0.11.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://10.111.121.236:3306/hisms_ns?serverTimezone=GMT%2B8"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "SELECT * from t_kc21k1"
    type => "01"
  }

  jdbc {
    jdbc_driver_library => "/var/opt/mysql-connector-java-8.0.11.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://10.111.121.236:3306/hisms_ns?serverTimezone=GMT%2B8"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "SELECT * from t_kc21k1"
    type => "02"
  }
}
output {
  if [type] == "01" {
    elasticsearch {
      hosts => [ "172.16.20.11:9200" ]
      index => "hisms_ns"
      document_type => "t_kc21k1"
      document_id => "%{id}"
    }
    stdout {
      codec => line {
        format => "Crawl: %{id} %{title}"
      }
    }
  } else {
    elasticsearch {
      hosts => [ "172.16.20.11:9200" ]
      index => "hisms_ns"
      document_type => "t_kc21k1_2"
      document_id => "%{id}"
    }
    stdout {
      codec => line {
        format => "VMS: %{id} %{title}"
      }
    }
  }
}

Finally, run the import:

./bin/logstash -f ./config/jdbc.conf --config.reload.automatic

or

./bin/logstash -f ./config/jdbc.conf 
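Once a schedule cycle has run, a quick way to confirm documents landed in the index is the count API in Dev Tools (the number returned depends on the rows in t_kc21k1):

```
GET hisms_ns/_count
```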

 

References:

https://blog.csdn.net/chenjianhuideyueding/article/details/80864345

https://blog.csdn.net/lilongsy/article/details/78283122

https://elasticsearch.cn/question/3753

 

Addendum: Kibana index issues

1. After importing data via Logstash, Kibana does not show the fields.
Solution:
Once the imported data is visible in Kibana, go to Management, then Index Patterns, and click the refresh button in the top-right corner; all fields will then appear.


2. String fields are not Aggregatable.
Case 1: the index mapping is not created by hand
When Logstash imports directly and the index is created automatically, string fields are aggregatable, since they get a keyword sub-field by default.

Case 2: the index mapping is created by hand
When creating the index yourself, declare the field like this:
"yka003": {
	"type": "text",
	"fields": {
		"keyword": {
			"type": "keyword",
			"ignore_above": 256
		}
	}
},
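With that multi-field in place, the text field can be aggregated through its keyword sub-field. A sketch of a terms aggregation for Dev Tools, assuming the hisms_ns index above was created with this style of mapping for yka003:

```
GET hisms_ns/_search
{
  "size": 0,
  "aggs": {
    "by_yka003": {
      "terms": { "field": "yka003.keyword" }
    }
  }
}
```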