Elasticsearch Optimization

Why does Elasticsearch need tuning?

  Answer: the out-of-the-box OS and JVM defaults are far too conservative for Elasticsearch. For example, the default open-files limit shown below is only 1024:

[root@master elasticsearch-2.4.0]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 6661
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 6661
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@master elasticsearch-2.4.0]# ulimit -n 32000
[root@master elasticsearch-2.4.0]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 6661
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 6661
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@master elasticsearch-2.4.0]# 

  This has to be set on every machine in the 3-node ES cluster: do it on master, slave1 and slave2. Note that ulimit -n only changes the limit for the current shell session; the permanent fix via /etc/security/limits.conf is covered in Approach 1 below.

So how do we go about tuning ES?

Approach 1: Fix the warning shown at ES startup (i.e. the "Too many open files" problem)

  max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]

  vi /etc/security/limits.conf and add the following two lines:

  * soft nofile 65536

  * hard nofile 131072

 In other words, raise both limits. Log in again (limits.conf only applies to new sessions) and restart the ES process for the change to take effect. A quick check is shown below.
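  As a quick sanity check (a minimal sketch; the host, port and exact field names vary a little by ES version), you can ask both the OS and Elasticsearch itself what limit is now in effect:

  # as the user that starts Elasticsearch
  ulimit -n

  # node stats include the file-descriptor figures; look for
  # "open_file_descriptors" / "max_file_descriptors" in the process section
  curl -XGET 'http://localhost:9200/_nodes/stats/process?pretty'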

Approach 2: Adjust the ES JVM heap size in the configuration file

  Edit ES_MIN_MEM and ES_MAX_MEM in bin/elasticsearch.in.sh. Setting them to the same value is recommended, so the JVM does not keep resizing the heap. Depending on how much memory the server has, roughly 60% of physical RAM is a common choice (the default is only 256 MB).

  Note: never give a single instance more than 32 GB of heap (the excerpt from the official guide below explains why).

  In short: once you cross that magical 32 GB boundary, pointers switch back to ordinary object pointers, each pointer gets bigger and eats more CPU-memory bandwidth, and a 40-50 GB heap ends up no more useful than one just under 32 GB.
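  A minimal sketch of both ways to set the heap (the 8g value is only an example for a machine with roughly 13 GB of RAM or more; use about 60% of yours):

  # Option 1: edit bin/elasticsearch.in.sh and give both variables the same value
  #   ES_MIN_MEM=8g
  #   ES_MAX_MEM=8g

  # Option 2: export the single variable that elasticsearch.in.sh honours, then start ES
  export ES_HEAP_SIZE=8g
  bin/elasticsearch -d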

Link: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops

Don’t Cross 32 GB!
There is another reason to not allocate enormous heaps to Elasticsearch. As it turns out, the HotSpot JVM uses a trick to compress object pointers when heaps are less than around 32 GB.
In Java, all objects are allocated on the heap and referenced by a pointer. Ordinary object pointers (OOP) point at these objects, and are traditionally the size of the CPU’s native word: either 32 bits or 64 bits, depending on the processor. The pointer references the exact byte location of the value.
For 32-bit systems, this means the maximum heap size is 4 GB. For 64-bit systems, the heap size can get much larger, but the overhead of 64-bit pointers means there is more wasted space simply because the pointer is larger. And worse than wasted space, the larger pointers eat up more bandwidth when moving values between main memory and various caches (LLC, L1, and so forth).
Java uses a trick called compressed oops to get around this problem. Instead of pointing at exact byte locations in memory, the pointers reference object offsets. This means a 32-bit pointer can reference four billion objects, rather than four billion bytes. Ultimately, this means the heap can grow to around 32 GB of physical size while still using a 32-bit pointer.
Once you cross that magical ~32 GB boundary, the pointers switch back to ordinary object pointers. The size of each pointer grows, more CPU-memory bandwidth is used, and you effectively lose memory. In fact, it takes until around 40–50 GB of allocated heap before you have the same effective memory of a heap just under 32 GB using compressed oops.
The moral of the story is this: even when you have memory to spare, try to avoid crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and makes the GC struggle with large heaps.

   Note: the 32 GB cap applies to each individual ES instance (each JVM), not to the total across all instances.

[hadoop@master bin]$ pwd
/home/hadoop/app/elasticsearch-2.4.0/bin
[hadoop@master bin]$ ll
total 324
-rwxr-xr-x 1 hadoop hadoop   5551 Aug 24  2016 elasticsearch
-rw-rw-r-- 1 hadoop hadoop    909 Aug 24  2016 elasticsearch.bat
-rw-rw-r-- 1 hadoop hadoop   3307 Aug 24  2016 elasticsearch.in.bat
-rwxr-xr-x 1 hadoop hadoop   2814 Aug 24  2016 elasticsearch.in.sh
-rw-rw-r-- 1 hadoop hadoop 104448 Jul 27  2016 elasticsearch-service-mgr.exe
-rw-rw-r-- 1 hadoop hadoop 103936 Jul 27  2016 elasticsearch-service-x64.exe
-rw-rw-r-- 1 hadoop hadoop  80896 Jul 27  2016 elasticsearch-service-x86.exe
-rwxr-xr-x 1 hadoop hadoop   2992 Aug 24  2016 plugin
-rw-rw-r-- 1 hadoop hadoop   1303 Aug 24  2016 plugin.bat
-rw-rw-r-- 1 hadoop hadoop   6872 Aug 24  2016 service.bat
[hadoop@master bin]$ vim elasticsearch.in.sh

  Set the heap to about 60% of your own machine's physical memory, based on its actual size.

Approach 3: Set memory_lock to lock the process's memory

  This keeps the heap from being swapped out and so protects performance.

  Edit config/elasticsearch.yml:

  bootstrap.memory_lock: true

   (Note: on ES 2.x the setting is actually spelled bootstrap.mlockall: true; memory_lock is the name used from 5.x on.) I won't go into more detail here; a short verification sketch follows.
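  A minimal sketch of enabling and verifying the lock (host and port are placeholders; the user running ES may also need a higher "max locked memory" ulimit for the lock to succeed):

  # config/elasticsearch.yml (ES 2.x spelling)
  #   bootstrap.mlockall: true

  # after restarting ES, the process section of the nodes API should report "mlockall" : true
  curl -XGET 'http://localhost:9200/_nodes/process?pretty'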

[hadoop@master config]$ pwd
/home/hadoop/app/elasticsearch-2.4.0/config
[hadoop@master config]$ ll
total 12
-rw-rw-r-- 1 hadoop hadoop 3393 Jul  5 22:19 elasticsearch.yml
-rw-rw-r-- 1 hadoop hadoop 2571 Aug 24  2016 logging.yml
drwxrwxr-x 2 hadoop hadoop 4096 Apr 21 15:43 scripts
[hadoop@master config]$ vim elasticsearch.yml

   Uncomment the corresponding line.

  Do this on all three nodes of the ES cluster: master, slave1 and slave2.

Approach 4: More shards improve indexing throughput; 5-20 shards per index is a reasonable range.

  Too few or too many shards will both make searches slow.

  Too many shards means each search has to open a lot of files and fan out across many servers, adding network traffic.

  Too few shards makes each individual shard very large, which also slows searches down.

  As a rule of thumb, keep a single shard to roughly 20 GB of index data, so: number of shards ≈ total data volume / 20 GB. (A worked example follows.)
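  For instance (a minimal sketch; the index name zhouls and the 200 GB figure are only illustrative), an index expected to hold about 200 GB of data would get 200 / 20 = 10 shards. The shard count must be chosen when the index is created and cannot be changed afterwards:

  curl -XPUT 'http://localhost:9200/zhouls' -d '{
    "settings": {
      "number_of_shards": 10,
      "number_of_replicas": 1
    }
  }'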

Approach 5: More replicas improve search capacity, but a large number of replicas also puts extra load on the servers, because the primary shard has to sync every write to all of them. At most 1-2 replicas are recommended. (A sketch follows.)
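  Unlike the shard count, the replica count can be changed at any time (a sketch; the index name and the value 2 are placeholders):

  curl -XPUT 'http://localhost:9200/zhouls/_settings' -d '{
    "index": { "number_of_replicas": 2 }
  }'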

Approach 6: The Elastic documentation suggests keeping no more than about three shards per ES instance; if you need "more shards", add more machines. If a server is powerful enough, you can also run several ES instances on the same machine. (A sketch follows.)
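  A rough sketch of what that can look like on ES 2.x (every name, path and port here is illustrative; 2.x lets you override settings on the command line with -Des.<setting>):

  # first instance uses the defaults (ports 9200/9300)
  bin/elasticsearch -d

  # second instance on the same machine, with its own name, data path and ports
  bin/elasticsearch -d -Des.node.name=node-2 \
      -Des.path.data=/data/es-node2 \
      -Des.http.port=9201 -Des.transport.tcp.port=9301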

Approach 7: Merge index segments regularly; otherwise the more segments there are, the more segment memory they occupy and the worse query performance becomes.

  If the index volume is not very large, you can merge it down to a single segment.

  Before ES 2.1.0 this was done with the _optimize API; in later versions it was renamed to _forcemerge:

  curl -XPOST 'http://localhost:9200/zhouls/_forcemerge?max_num_segments=1'

  client.admin().indices().prepareForceMerge("zhouls").setMaxNumSegments(1).get();

Approach 8: Close indices that are not being used to reduce memory usage. As long as an index is open, its segments occupy memory; once it is closed, it only takes up disk space.

curl -XPOST 'localhost:9200/zhouls/_close'
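  If the index is needed again later, it can be reopened just as easily (a sketch):

  curl -XPOST 'localhost:9200/zhouls/_open'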

Approach 9: Clean up deleted documents. When a document is deleted in ES, the data is not removed from disk right away; instead the deletion is recorded in a .del file inside the index. Those documents still take part in searches, and ES has to check at query time whether each one was deleted and filter it out, which lowers search efficiency. You can therefore force the deleted documents to be expunged:

curl -XPOST 'http://192.168.80.10:9200/zhouls/_forcemerge?only_expunge_deletes=true'

client.admin().indices().prepareForceMerge("zhouls").setOnlyExpungeDeletes(true).get();

Approach 10: If you need to bulk-load a large amount of data at the start of a project, set the replica count to 0 first.

  While indexing, ES immediately copies every document to any existing replicas, which adds load. Once the bulk load is finished, set the replica count back to whatever you need; this noticeably speeds up the initial indexing. (A sketch of the workflow follows.)
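  A minimal sketch of the whole workflow (the index name and the final replica count of 1 are placeholders):

  # before the bulk import: drop the replicas
  curl -XPUT 'http://localhost:9200/zhouls/_settings' -d '{ "index": { "number_of_replicas": 0 } }'

  # ... run the bulk import ...

  # after the bulk import: restore the replicas
  curl -XPUT 'http://localhost:9200/zhouls/_settings' -d '{ "index": { "number_of_replicas": 1 } }'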

Approach 11: Drop the _all field from the mapping. By default every index has an _all field into which the contents of all other fields are copied. That makes ad-hoc querying convenient, but it increases both indexing time and index size.

  To disable the _all field:  "_all": {"enabled": false}

  If you only want to keep a particular field out of _all, set "include_in_all": false on that field. (A mapping sketch follows.)
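  Put together, an ES 2.x mapping might look like this (a sketch; the index, type and field names are placeholders):

  # disable _all for the whole type
  curl -XPUT 'http://localhost:9200/zhouls' -d '{
    "mappings": {
      "employee": {
        "_all": { "enabled": false },
        "properties": {
          "name": { "type": "string" }
        }
      }
    }
  }'

  # or keep _all but leave one particular field out of it:
  #   "about": { "type": "string", "include_in_all": false }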

Approach 12: The log output level defaults to trace, so any query slower than 500 ms counts as a slow query and gets written to the log, driving up CPU, memory and I/O load. Raising the log level to info takes pressure off the server.

  Edit the ES_HOME/config/logging.yml file. (A sketch follows.)
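  A minimal sketch of what to change (the keys below follow the stock ES 2.x logging.yml; the 1s/2s thresholds are only examples). The search slow-log thresholds can also be relaxed per index at runtime:

  # config/logging.yml : raise the root logger level
  #   es.logger.level: INFO
  #   rootLogger: ${es.logger.level}, console, file

  # optionally relax the search slow-log thresholds for an index on the fly
  curl -XPUT 'http://localhost:9200/zhouls/_settings' -d '{
    "index.search.slowlog.threshold.query.info": "2s",
    "index.search.slowlog.threshold.query.trace": "1s"
  }'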

Approach 1 revisited: the ES startup warning it resolves

   In fact, if instead of starting ES from ES_HOME in the background with bin/elasticsearch -d we start it in the foreground with bin/elasticsearch, we will see the following.

   (For context: I also have Tomcat installed on this machine, so I start Tomcat first and then start ES in the foreground, where the warning shows up.)

   So:

[hadoop@HadoopMaster bin]$ pwd
/home/hadoop/app/tomcat-7.0.73/bin
[hadoop@HadoopMaster bin]$ ./startup.sh
Using CATALINA_BASE: /home/hadoop/app/tomcat-7.0.73
Using CATALINA_HOME: /home/hadoop/app/tomcat-7.0.73
Using CATALINA_TMPDIR: /home/hadoop/app/tomcat-7.0.73/temp
Using JRE_HOME: /home/hadoop/app/jdk1.7.0_79/jre
Using CLASSPATH: /home/hadoop/app/tomcat-7.0.73/bin/bootstrap.jar:/home/hadoop/app/tomcat-7.0.73/bin/tomcat-juli.jar
Tomcat started.
[hadoop@HadoopMaster bin]$ jps
2916 Jps
2906 Bootstrap
[hadoop@HadoopMaster bin]$ cd ..
[hadoop@HadoopMaster tomcat-7.0.73]$ cd ..
[hadoop@HadoopMaster app]$ cd elasticsearch-2.4.3/
[hadoop@HadoopMaster elasticsearch-2.4.3]$ bin/elasticsearch
[2017-02-28 22:08:49,862][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2017-02-28 22:08:51,324][INFO ][node ] [Dragonwing] version[2.4.3], pid[2930], build[d38a34e/2016-12-07T16:28:56Z]
[2017-02-28 22:08:51,324][INFO ][node ] [Dragonwing] initializing ...
[2017-02-28 22:08:55,760][INFO ][plugins ] [Dragonwing] modules [lang-groovy, reindex, lang-expression], plugins [analysis-ik, kopf, head], sites [kopf, head]
[2017-02-28 22:08:55,846][INFO ][env ] [Dragonwing] using [1] data paths, mounts [[/home (/dev/sda5)]], net usable_space [23.4gb], net total_space [26.1gb], spins? [possibly], types [ext4]
[2017-02-28 22:08:55,846][INFO ][env ] [Dragonwing] heap size [1015.6mb], compressed ordinary object pointers [true]
[2017-02-28 22:08:55,848][WARN ][env ] [Dragonwing] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2017-02-28 22:09:00,957][INFO ][ik-analyzer ] try load config from /home/hadoop/app/elasticsearch-2.4.3/config/analysis-ik/IKAnalyzer.cfg.xml
[2017-02-28 22:09:00,959][INFO ][ik-analyzer ] try load config from /home/hadoop/app/elasticsearch-2.4.3/plugins/ik/config/IKAnalyzer.cfg.xml
[2017-02-28 22:09:01,925][INFO ][ik-analyzer ] [Dict Loading] custom/mydict.dic
[2017-02-28 22:09:01,926][INFO ][ik-analyzer ] [Dict Loading] custom/single_word_low_freq.dic
[2017-02-28 22:09:01,932][INFO ][ik-analyzer ] [Dict Loading] custom/zhouls.dic
[2017-02-28 22:09:01,933][INFO ][ik-analyzer ] [Dict Loading] http://192.168.80.10:8081/zhoulshot.dic
[2017-02-28 22:09:09,451][INFO ][ik-analyzer ] 好记性不如烂笔头感叹号博客园热更新词
[2017-02-28 22:09:09,550][INFO ][ik-analyzer ] 桂林不雾霾
[2017-02-28 22:09:09,615][INFO ][ik-analyzer ] [Dict Loading] custom/ext_stopword.dic
[2017-02-28 22:09:13,620][INFO ][node ] [Dragonwing] initialized
[2017-02-28 22:09:13,621][INFO ][node ] [Dragonwing] starting ...
[2017-02-28 22:09:13,932][INFO ][transport ] [Dragonwing] publish_address {192.168.80.10:9300}, bound_addresses {[::]:9300}
[2017-02-28 22:09:13,960][INFO ][discovery ] [Dragonwing] elasticsearch/eKzsH0g5QoGl6pQlCG4mOQ
[2017-02-28 22:09:17,357][INFO ][cluster.service ] [Dragonwing] detected_master {Carrie Alexander}{98-Mux6mQsu1oE__EJN7yQ}{192.168.80.11}{192.168.80.11:9300}, added {{Carrie Alexander}{98-Mux6mQsu1oE__EJN7yQ}{192.168.80.11}{192.168.80.11:9300},{Shocker}{u_IYMF3ISe6_iki9KwxPCA}{192.168.80.12}{192.168.80.12:9300},}, reason: zen-disco-receive(from master [{Carrie Alexander}{98-Mux6mQsu1oE__EJN7yQ}{192.168.80.11}{192.168.80.11:9300}])
[2017-02-28 22:09:17,637][INFO ][http ] [Dragonwing] publish_address {192.168.80.10:9200}, bound_addresses {[::]:9200}
[2017-02-28 22:09:17,638][INFO ][node ] [Dragonwing] started
[2017-02-28 22:09:19,812][INFO ][ik-analyzer ] 从新加载词典...
[2017-02-28 22:09:19,816][INFO ][ik-analyzer ] try load config from /home/hadoop/app/elasticsearch-2.4.3/config/analysis-ik/IKAnalyzer.cfg.xml
[2017-02-28 22:09:19,820][INFO ][ik-analyzer ] try load config from /home/hadoop/app/elasticsearch-2.4.3/plugins/ik/config/IKAnalyzer.cfg.xml
[2017-02-28 22:09:23,102][WARN ][monitor.jvm ] [Dragonwing] [gc][young][8][7] duration [1.6s], collections [1]/[1.9s], total [1.6s]/[5.2s], memory [121.7mb]->[79.4mb]/[1015.6mb], all_pools {[young] [59.9mb]->[457kb]/[66.5mb]}{[survivor] [8.2mb]->[8.3mb]/[8.3mb]}{[old] [53.5mb]->[70.6mb]/[940.8mb]}
[2017-02-28 22:09:23,946][INFO ][ik-analyzer ] [Dict Loading] custom/mydict.dic
[2017-02-28 22:09:23,947][INFO ][ik-analyzer ] [Dict Loading] custom/single_word_low_freq.dic
[2017-02-28 22:09:23,953][INFO ][ik-analyzer ] [Dict Loading] custom/zhouls.dic
[2017-02-28 22:09:23,955][INFO ][ik-analyzer ] [Dict Loading] http://192.168.80.10:8081/zhoulshot.dic
[2017-02-28 22:09:23,996][INFO ][ik-analyzer ] 好记性不如烂笔头感叹号博客园热更新词
[2017-02-28 22:09:23,997][INFO ][ik-analyzer ] 桂林不雾霾
[2017-02-28 22:09:24,000][INFO ][ik-analyzer ] [Dict Loading] custom/ext_stopword.dic
[2017-02-28 22:09:24,002][INFO ][ik-analyzer ] 从新加载词典完毕...

  

  For a more detailed walkthrough of starting ES in the foreground and in the background, see:

Starting Elasticsearch (foreground and background)

  What to do about it is as follows:

 

More updates to follow.
