Official recommendation: when pairing the two, the Elasticsearch version should be greater than or equal to the Kibana version, otherwise problems appear during use and upgrades. As of this writing, the latest release is 6.3.0; this article uses 5.5.3 as its example, and the steps apply equally to other versions.
4. This tutorial focuses on how to set up an Elasticsearch + Kibana environment. What the two tools are, what they can do, and where they fit are not covered here; readers can look that up themselves. How to use Elasticsearch will be covered in a later "Elasticsearch from beginner to practice" series.
5. The hard part of this setup is the assorted configuration problems that come up along the way. Online material is uneven: incomplete, or a single fix command with no explanation of what it does or why. For readers with a weak Linux background, blindly running commands can harm the server. This article does not claim to be exhaustive, but wherever a fix is given, it explains what the command does and how to use it.
This article is divided into the following steps:
ES is written in Java, so before installing it, check the JDK environment; version 1.7 or later is generally required. If no JDK is installed, go straight to 1.8. For the installation procedure, see: JDK installation.
[root@izbp163wlhi02tcaxyuxb7z wang]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)
Official download page
I am installing on Linux, so download the tar package and extract it:
tar -zxvf elasticsearch-5.5.3.tar.gz
In the bin/ directory, simply run the ./elasticsearch command.
Because of Elasticsearch's runtime requirements, the default system environment usually needs some adjustment first, and startup may report errors like the following.
[root@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]# ./bin/elasticsearch
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/wang/elasticsearch-5.5.3/hs_err_pid15795.log
Cause: this version of Elasticsearch allocates a 2 GB JVM heap by default (the default differs across versions), while the example server has 1 core and 2 GB of RAM, so memory allocation fails. Edit the config file to change the JVM heap allocation:
// The file lives in: /elasticsearch-5.5.3/config
[root@izbp163wlhi02tcaxyuxb7z config]# vim jvm.options
Change

-Xms2g
-Xmx2g

to

-Xms512m
-Xmx512m
If the error persists, keep lowering these values; the right number depends on the machine's resources.
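The heap change above can also be scripted. A minimal sketch, demonstrated on a temporary copy (/tmp/jvm.options.demo is a stand-in so it is safe to try; for the real thing, point JVM_OPTS at config/jvm.options):

```shell
# Scripted version of the heap change, run against a demo copy of jvm.options.
JVM_OPTS=/tmp/jvm.options.demo
printf '%s\n' '-Xms2g' '-Xmx2g' > "$JVM_OPTS"   # stand-in for the real file

# Rewrite both flags in place (GNU sed); keep -Xms and -Xmx equal.
sed -i 's/^-Xms2g$/-Xms512m/; s/^-Xmx2g$/-Xmx512m/' "$JVM_OPTS"

cat "$JVM_OPTS"
# → -Xms512m
# → -Xmx512m
```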
Error:
[2018-07-04T10:43:45,590][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.5.3.jar:5.5.3]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
    at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:106) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.5.3.jar:5.5.3]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.5.3.jar:5.5.3]
    ... 6 more

// Check the current user
[root@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]# whoami
root
Cause: Elasticsearch can accept and execute scripts, so for system security it refuses to run as root. Let's see whether another usable user exists.
[root@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:999:997:User for polkitd:/:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
chrony:x:998:996::/var/lib/chrony:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
dockerroot:x:997:994:Docker User:/var/lib/docker:/sbin/nologin

// Fields: username:password:UID:GID:comment:home directory:login shell
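The field layout noted above is easy to verify with awk. A small sketch on a sample line (the wang entry shown here is hypothetical):

```shell
# Each /etc/passwd record has seven colon-separated fields; awk pulls
# them apart with -F: (set the field separator to a colon).
line='wang:x:1000:1000:demo user:/home/wang:/bin/bash'
echo "$line" | awk -F: '{printf "user=%s uid=%s gid=%s home=%s shell=%s\n", $1, $3, $4, $6, $7}'
# → user=wang uid=1000 gid=1000 home=/home/wang shell=/bin/bash
```

The same one-liner run against the real /etc/passwd quickly shows which accounts have a login shell and which are /sbin/nologin system accounts.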
If all the existing users are system accounts, it is best to create a new one. Here I create a user wang, in group wang, with password wang.
// Add the group wang
groupadd wang
// Add the user wang, in group wang, with password wang
// (note: -p expects an already-encrypted password; to set a plain-text
// password reliably, run `passwd wang` afterwards)
useradd wang -g wang -p wang
// Make wang (user) : wang (group) the owner of everything under /usr/local/wang/elasticsearch-5.5.3
chown -R wang:wang /usr/local/wang/elasticsearch-5.5.3
// Switch user
// su and sudo differ: su asks for the target user's password, sudo for the current user's.
su wang
Start it again.
Remember: later on you will sometimes need to switch to root to edit files, but switch back before starting. Never start ES as root!
[wang@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]$ ./bin/elasticsearch
[2018-07-04T11:25:22,745][INFO ][o.e.n.Node ] [] initializing ...
[2018-07-04T11:25:22,891][INFO ][o.e.e.NodeEnvironment ] [VKU0UAW] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [32.9gb], net total_space [39.2gb], spins? [unknown], types [rootfs]
[2018-07-04T11:25:22,892][INFO ][o.e.e.NodeEnvironment ] [VKU0UAW] heap size [503.6mb], compressed ordinary object pointers [true]
[2018-07-04T11:25:22,894][INFO ][o.e.n.Node ] node name [VKU0UAW] derived from node ID [VKU0UAWPT06PPv0aYHIuDw]; set [node.name] to override
[2018-07-04T11:25:22,894][INFO ][o.e.n.Node ] version[5.5.3], pid[16641], build[9305a5e/2017-09-07T15:56:59.599Z], OS[Linux/3.10.0-693.2.2.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_172/25.172-b11]
[2018-07-04T11:25:22,894][INFO ][o.e.n.Node ] JVM arguments [-Xms512m, -Xmx512m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/wang/elasticsearch-5.5.3]
[2018-07-04T11:25:25,352][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [aggs-matrix-stats]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [ingest-common]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-expression]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-groovy]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-mustache]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-painless]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [parent-join]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [percolator]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [reindex]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [transport-netty3]
[2018-07-04T11:25:25,353][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [transport-netty4]
[2018-07-04T11:25:25,354][INFO ][o.e.p.PluginsService ] [VKU0UAW] no plugins loaded
[2018-07-04T11:25:28,878][INFO ][o.e.d.DiscoveryModule ] [VKU0UAW] using discovery type [zen]
[2018-07-04T11:25:29,988][INFO ][o.e.n.Node ] initialized
[2018-07-04T11:25:29,988][INFO ][o.e.n.Node ] [VKU0UAW] starting ...
[2018-07-04T11:25:30,358][INFO ][o.e.t.TransportService ] [VKU0UAW] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-07-04T11:25:30,377][WARN ][o.e.b.BootstrapChecks ] [VKU0UAW] max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2018-07-04T11:25:30,377][WARN ][o.e.b.BootstrapChecks ] [VKU0UAW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-07-04T11:25:33,470][INFO ][o.e.c.s.ClusterService ] [VKU0UAW] new_master {VKU0UAW}{VKU0UAWPT06PPv0aYHIuDw}{gqVgexbbSx-6IWNhGSzvRw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-07-04T11:25:33,589][INFO ][o.e.h.n.Netty4HttpServerTransport] [VKU0UAW] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2018-07-04T11:25:33,590][INFO ][o.e.n.Node ] [VKU0UAW] started
[2018-07-04T11:25:33,618][INFO ][o.e.g.GatewayService ] [VKU0UAW] recovered [0] indices into cluster_state
Once it starts, the log shows that the default port is 9200, but there are also two WARN-level messages. Let's first try visiting it from a browser.
http://xx.xx.xx.xx:9200
It is still unreachable.
Cause: the default bind address is localhost. To reach it from outside, edit elasticsearch.yml under elasticsearch-5.5.3/config.
vim elasticsearch.yml
Uncomment network.host and set it to 0.0.0.0, and uncomment http.port, as follows:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
Try starting again.
ERROR: [2] bootstrap checks failed
[2018-07-04T16:00:28,070][INFO ][o.e.n.Node ] initialized
[2018-07-04T16:00:28,070][INFO ][o.e.n.Node ] [VKU0UAW] starting ...
[2018-07-04T16:00:28,377][INFO ][o.e.t.TransportService ] [VKU0UAW] publish_address {172.16.229.31:9300}, bound_addresses {0.0.0.0:9300}
[2018-07-04T16:00:28,401][INFO ][o.e.b.BootstrapChecks ] [VKU0UAW] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-07-04T16:00:28,485][INFO ][o.e.n.Node ] [VKU0UAW] stopping ...
[2018-07-04T16:00:28,535][INFO ][o.e.n.Node ] [VKU0UAW] stopped
[2018-07-04T16:00:28,536][INFO ][o.e.n.Node ] [VKU0UAW] closing ...
[2018-07-04T16:00:28,550][INFO ][o.e.n.Node ] [VKU0UAW] closed
These are actually two errors, the same two WARN messages we saw earlier.
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Cause: max_map_count is the maximum number of VMAs (virtual memory areas) a single process may own. A VMA is a contiguous range of virtual address space; the count grows every time the process memory-maps a file, and once max_map_count is reached the kernel returns out-of-memory errors.
To fix this, switch to the root user.
// Edit the following file; it holds kernel parameters
vi /etc/sysctl.conf

// Add this setting
vm.max_map_count=655360
Save the file, then run:
sysctl -p
// -p loads kernel parameters from the given file; with no file, it loads /etc/sysctl.conf
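The edit above can also be scripted idempotently. A sketch, run here against a temporary copy (/tmp/sysctl.conf.demo) so it is safe to try; as root you would target the real /etc/sysctl.conf and finish with sysctl -p (262144 is the minimum ES 5.x demands; the article's 655360 comfortably exceeds it):

```shell
# Idempotent version of the sysctl edit, demonstrated on a demo copy.
CONF=/tmp/sysctl.conf.demo
: > "$CONF"   # stand-in for /etc/sysctl.conf

# Append the setting only if it is not already present.
grep -q '^vm.max_map_count=' "$CONF" || echo 'vm.max_map_count=655360' >> "$CONF"

# Read the value back the way sysctl -p would see it.
grep '^vm.max_map_count=' "$CONF" | cut -d= -f2
# → 655360
```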
max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
Cause: the maximum number of open file descriptors is too small. Switch to root and edit limits.conf.
// Edit this file
[root@izbp163wlhi02tcaxyuxb7z /]# vim etc/security/limits.conf
Append to the file:
* soft nofile 65536
* hard nofile 65536
In 5.5.3, this file already contains entries like these; we only need to change the values from 65535 to 65536.
# End of file
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
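After logging back in as the ES user, you can confirm the new limit before starting ES. A small check script (65536 is the threshold the ES 5.x bootstrap check enforces; limits.conf changes only take effect in a fresh login session):

```shell
# Check the open-file limit the ES process would inherit from this shell.
nofile=$(ulimit -n)
echo "current nofile limit: $nofile"

if [ "$nofile" != "unlimited" ] && [ "$nofile" -lt 65536 ]; then
  echo "too low: raise nofile in /etc/security/limits.conf and log in again"
else
  echo "nofile limit is sufficient for ES"
fi
```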
Switch back to the original user, restart ES, and check whether it comes up.
On success, the output looks like this:
[wang@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]$ ./bin/elasticsearch
[2018-07-04T16:28:45,250][INFO ][o.e.n.Node ] [] initializing ...
[2018-07-04T16:28:45,359][INFO ][o.e.e.NodeEnvironment ] [VKU0UAW] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [32.9gb], net total_space [39.2gb], spins? [unknown], types [rootfs]
[2018-07-04T16:28:45,361][INFO ][o.e.e.NodeEnvironment ] [VKU0UAW] heap size [503.6mb], compressed ordinary object pointers [true]
[2018-07-04T16:28:45,362][INFO ][o.e.n.Node ] node name [VKU0UAW] derived from node ID [VKU0UAWPT06PPv0aYHIuDw]; set [node.name] to override
[2018-07-04T16:28:45,362][INFO ][o.e.n.Node ] version[5.5.3], pid[21467], build[9305a5e/2017-09-07T15:56:59.599Z], OS[Linux/3.10.0-693.2.2.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_172/25.172-b11]
[2018-07-04T16:28:45,363][INFO ][o.e.n.Node ] JVM arguments [-Xms512m, -Xmx512m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/wang/elasticsearch-5.5.3]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [aggs-matrix-stats]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [ingest-common]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-expression]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-groovy]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-mustache]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [lang-painless]
[2018-07-04T16:28:46,941][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [parent-join]
[2018-07-04T16:28:46,950][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [percolator]
[2018-07-04T16:28:46,950][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [reindex]
[2018-07-04T16:28:46,950][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [transport-netty3]
[2018-07-04T16:28:46,950][INFO ][o.e.p.PluginsService ] [VKU0UAW] loaded module [transport-netty4]
[2018-07-04T16:28:46,950][INFO ][o.e.p.PluginsService ] [VKU0UAW] no plugins loaded
[2018-07-04T16:28:50,067][INFO ][o.e.d.DiscoveryModule ] [VKU0UAW] using discovery type [zen]
[2018-07-04T16:28:51,171][INFO ][o.e.n.Node ] initialized
[2018-07-04T16:28:51,172][INFO ][o.e.n.Node ] [VKU0UAW] starting ...
[2018-07-04T16:28:51,484][INFO ][o.e.t.TransportService ] [VKU0UAW] publish_address {172.16.229.31:9300}, bound_addresses {0.0.0.0:9300}
[2018-07-04T16:28:51,513][INFO ][o.e.b.BootstrapChecks ] [VKU0UAW] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2018-07-04T16:28:54,650][INFO ][o.e.c.s.ClusterService ] [VKU0UAW] new_master {VKU0UAW}{VKU0UAWPT06PPv0aYHIuDw}{1HxIYnvrQ9KkyLOzhVwe3Q}{172.16.229.31}{172.16.229.31:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-07-04T16:28:54,708][INFO ][o.e.h.n.Netty4HttpServerTransport] [VKU0UAW] publish_address {172.16.229.31:9200}, bound_addresses {0.0.0.0:9200}
[2018-07-04T16:28:54,708][INFO ][o.e.n.Node ] [VKU0UAW] started
[2018-07-04T16:28:54,738][INFO ][o.e.g.GatewayService ] [VKU0UAW] recovered [0] indices into cluster_state
[2018-07-04T16:38:43,328][INFO ][o.e.c.m.MetaDataCreateIndexService] [VKU0UAW] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [_default_, index-pattern, server, visualization, search, timelion-sheet, config, dashboard, url]
Check carefully: every log line is INFO level, so all is well. Now visit xx.xx.xx.xx:9200 in the browser.
The page shows the following:
{
  "name": "VKU0UAW",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "TTJuSo16Tny1lUoFmnF-dA",
  "version": {
    "number": "5.5.3",
    "build_hash": "9305a5e",
    "build_date": "2017-09-07T15:56:59.599Z",
    "build_snapshot": false,
    "lucene_version": "6.6.0"
  },
  "tagline": "You Know, for Search"
}
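That JSON response also makes a handy scripted smoke test. A sketch that extracts the version number; it parses a captured sample so it runs without a live node, but against a real one you would first set resp=$(curl -s http://xx.xx.xx.xx:9200):

```shell
# Smoke-test parsing of the ES root-endpoint response.
# Sample captured response; replace with: resp=$(curl -s http://xx.xx.xx.xx:9200)
resp='{"name":"VKU0UAW","cluster_name":"elasticsearch","version":{"number":"5.5.3"},"tagline":"You Know, for Search"}'

# Crude field extraction without a JSON tool (good enough for a smoke test).
number=$(echo "$resp" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
tagline=$(echo "$resp" | sed -n 's/.*"tagline":"\([^"]*\)".*/\1/p')
echo "ES version: $number"
echo "tagline: $tagline"
# → ES version: 5.5.3
# → tagline: You Know, for Search
```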
That completes the Elasticsearch installation.
The command below starts ES in the foreground; if we close the terminal or log out, the application stops.
[wang@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]$ ./bin/elasticsearch
So we start it in the background instead; then the application keeps running after we log out.
[wang@izbp163wlhi02tcaxyuxb7z elasticsearch-5.5.3]$ ./bin/elasticsearch -d
A foreground instance is stopped with ctrl+c; a background instance is stopped by killing the process.
[wang@izbp163wlhi02tcaxyuxb7z bin]$ ./elasticsearch -d
[wang@izbp163wlhi02tcaxyuxb7z bin]$ jps
3697 Elasticsearch
3771 Jps
[wang@izbp163wlhi02tcaxyuxb7z bin]$ kill -9 3697
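A gentler pattern than kill -9 is plain kill (SIGTERM), which lets ES flush state and shut down cleanly. A sketch of the stop-by-PID pattern, using sleep as a stand-in for the Elasticsearch process so it is safe to run anywhere:

```shell
# Stop a background process by PID; `sleep 60` stands in for the ES JVM.
sleep 60 &
pid=$!
echo "stand-in process started with pid $pid"

kill "$pid"                       # graceful stop (SIGTERM)
wait "$pid" 2>/dev/null || true   # reap it; exit status reflects the signal
echo "process $pid stopped"
```

Reserve kill -9 (SIGKILL, which the process cannot catch) for a hung instance that ignores SIGTERM.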
Every ES release has a matching Kibana version; the page below lists the latest releases. Use the same version as your ES.
Download page
// Extract:
tar -zxvf kibana-5.5.3-linux-x86_64.tar.gz
[wang@izbp163wlhi02tcaxyuxb7z kibana-5.5.3-linux-x86_64]$ ./bin/kibana
Kibana starts in the foreground by default and can be stopped with ctrl+c.
The extracted folder contains everything Kibana needs; no files are created elsewhere, so to remove it, simply delete this folder.
The startup output looks like this:
[wang@izbp163wlhi02tcaxyuxb7z kibana-5.5.3-linux-x86_64]$ ./bin/kibana
log [03:49:45.116] [info][status][plugin:kibana@5.5.3] Status changed from uninitialized to green - Ready
log [03:49:45.188] [info][status][plugin:elasticsearch@5.5.3] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [03:49:45.215] [error][admin][elasticsearch] Request error, retrying HEAD http://localhost:9200/ => connect ECONNREFUSED 127.0.0.1:9200
log [03:49:45.219] [info][status][plugin:console@5.5.3] Status changed from uninitialized to green - Ready
log [03:49:45.224] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [03:49:45.225] [warning][admin][elasticsearch] No living connections
log [03:49:45.228] [error][status][plugin:elasticsearch@5.5.3] Status changed from yellow to red - Unable to connect to Elasticsearch at http://localhost:9200.
log [03:49:45.251] [info][status][plugin:metrics@5.5.3] Status changed from uninitialized to green - Ready
log [03:49:45.454] [info][status][plugin:timelion@5.5.3] Status changed from uninitialized to green - Ready
log [03:49:45.459] [info][listening] Server running at http://localhost:5601
log [03:49:45.461] [error][status][ui settings] Status changed from uninitialized to red - Elasticsearch plugin is red
log [03:49:47.735] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [03:49:47.735] [warning][admin][elasticsearch] No living connections
log [03:49:50.244] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [03:49:50.245] [warning][admin][elasticsearch] No living connections
log [03:49:52.751] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [03:49:52.751] [warning][admin][elasticsearch] No living connections
......
As you can see, Kibana by default connects to the service on port 9200 of the same machine; if Elasticsearch is not running, it keeps retrying. So let's start Elasticsearch.
Visit http://xx.xx.xx.xx:5601 and find it still unreachable. Notice this line in the log above:
log [03:49:45.459] [info][listening] Server running at http://localhost:5601
In config/kibana.yml there is the following setting. The default is localhost, which remote machines cannot reach; for external access, change server.host.
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
Uncomment the port and server.host, and change them as follows:
server.port: 5601
server.host: 0.0.0.0
This means anyone can reach it. Start Kibana again and visit http://xx.xx.xx.xx:5601; if the page below appears, everything worked.
With a foreground start, the service stops as soon as we exit the terminal, so we can use the nohup command instead.
[root@izbp163wlhi02tcaxyuxb7z kibana-5.5.3-linux-x86_64]# nohup ./bin/kibana &
The nohup (no hang up) command keeps a process running after you log out of the account or close the terminal; its general form is nohup command &.
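A sketch of the nohup pattern, using echo as a stand-in for ./bin/kibana so it runs anywhere; the explicit redirect matters, because nohup only appends to nohup.out when stdout is a terminal:

```shell
# nohup detaches a command from the terminal's hangup signal.
# `echo` stands in for ./bin/kibana; output goes to an explicit log file.
cd /tmp
rm -f kibana.demo.log
nohup echo "kibana would log here" > kibana.demo.log 2>&1 &
wait   # in real use you would simply log out instead of waiting
cat kibana.demo.log
# → kibana would log here
```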
Let's take a look at the Kibana directory:
[wang@izbp163wlhi02tcaxyuxb7z kibana-5.5.3-linux-x86_64]$ ls
bin  config  data  LICENSE.txt  node  node_modules  NOTICE.txt  optimize  package.json  plugins  README.txt  src  ui_framework  webpackShims
Further reading: 《高可用 Elasticsearch 集群 21 讲》 (High-Availability Elasticsearch Clusters in 21 Lessons).