Get the cluster name:
⇒ curl -XGET 'http://localhost:9200'
Check cluster health:
⇒ curl -XGET 'http://localhost:9200/_cluster/health?format=yaml'
The status field values:
- green: everything is fine
- yellow: replicas are unassigned (often because there is only a single node), but the cluster is still functional
- red: some data is unavailable
format=yaml requests YAML output, which is easier to read.
List all indices in the cluster:
⇒ curl -XGET 'http://localhost:9200/_cat/indices'
Get an index's field mappings:
⇒ curl -XGET 'http://localhost:9200/mytest/_mapping?format=yaml'
The result:
mytest:
  mappings:
    external:
      properties:
        addre:
          type: "string"
        name:
          type: "string"
This is similar to a database schema: it describes the fields a document may have and each field's data type. For non-string fields, usually only type needs to be set. A string field has two important properties: index and analyzer.
index
1. analyzed: full-text index this field; the string is analyzed first, then indexed
2. not_analyzed: index the exact value, without analysis
3. no: the field is not indexed and cannot be searched
analyzer
将文本分红四核倒排索引的独立词条,后将词条统一化提升可搜索性
Dynamic mapping: when a document contains a field never seen before, Elasticsearch determines its data type dynamically and automatically adds the new field to the type mapping.
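A toy sketch of what the analyzer and the inverted index do together: text is split into normalized terms, and each term maps to the documents that contain it. This is an illustration only, not Elasticsearch's real implementation; the analyze function and sample documents are invented.

```python
import re

def analyze(text):
    # Lowercase and split into terms, roughly like the standard analyzer.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    # Map each term to the set of document ids containing it.
    index = {}
    for doc_id, text in docs.items():
        for term in analyze(text):
            index.setdefault(term, set()).add(doc_id)
    return index

docs = {1: "papa xixi write", 2: "papa xixi", 3: "xixi"}
index = build_inverted_index(docs)
print(sorted(index["papa"]))  # [1, 2]
```

A search for a term is then just a lookup in this map, which is why analysis at index time determines what can be found at query time.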
Create an index:
⇒ curl -XPUT 'localhost:9200/mytest'
Delete an index:
⇒ curl -XDELETE 'localhost:9200/mytest?format=yaml'
Insert a single document:
⇒ curl -XPUT 'localhost:9200/mytest/external/1?format=yaml' -d '{"name":"paxi"}'
Get a single document:
⇒ curl -XGET 'localhost:9200/mytest/external/1?format=yaml'
Delete a single document:
curl -XDELETE 'localhost:9200/mytest/external/3?format=yaml'
Test the analyzer (the body is treated as plain text, braces included):
curl -XGET 'localhost:9200/_analyze?format=yaml' -d ' {"papa xixi write"}'
The result:
tokens:
- token: "papa"
  start_offset: 3
  end_offset: 7
  type: "<ALPHANUM>"
  position: 1
- token: "xixi"
  start_offset: 8
  end_offset: 12
  type: "<ALPHANUM>"
  position: 2
- token: "write"
  start_offset: 13
  end_offset: 18
  type: "<ALPHANUM>"
  position: 3
token is the term actually stored; start_offset and end_offset are character offsets, and position is the term's position in the original text. As you can see, the full text is split up and stored as separate terms.
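The token/offset output above can be reproduced with a simple regex tokenizer; this is only a sketch of what the standard analyzer does (the real one handles many more cases), and the regex is an assumption for illustration.

```python
import re

text = ' {"papa xixi write"}'  # the exact body sent to _analyze
tokens = [(m.group(), m.start(), m.end(), pos)
          for pos, m in enumerate(re.finditer(r"[A-Za-z0-9]+", text), start=1)]
for token, start, end, pos in tokens:
    # Mirrors token / start_offset / end_offset / position in the YAML above.
    print(token, start, end, pos)
```

Note that the offsets start at 3 because the leading space, brace, and quote of the request body count as characters too.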
To return only the _source of hits, use filter_path:
curl -XGET 'localhost:9200/mytest/_search?filter_path=hits.hits._source&format=yaml' -d ' { "query":{"match":{"name":"papa xixi write"}}}'
This does not work on older versions.
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"match":{"name":"papa xixi write"}},"_source":["name"]}'
Also ineffective on older versions; wildcards can be used in _source.
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"match":{"name":"papa xixi write"}}}'
The matching results are as follows:
hits:
- _index: "mytest"
  _type: "external"
  _id: "11"
  _score: 0.6532502
  _source:
    name: "papa xixi write"
- _index: "mytest"
  _type: "external"
  _id: "4"
  _score: 0.22545706
  _source:
    name: "papa xixi"
- _index: "mytest"
  _type: "external"
  _id: "2"
  _score: 0.12845722
  _source:
    name: "papa"
- _index: "mytest"
  _type: "external"
  _id: "10"
  _score: 0.021688733
  _source:
    name: "xixi"
From the results, the query matched every document containing papa, xixi, or write: the query string is split into its terms, and the terms are combined with OR. To require all terms to match, use the and operator:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"match":{"name":{"query":"papa xixi write","operator":"and"}}}}' --- hits: total: 1 max_score: 0.6532502 hits: - _index: "mytest" _type: "external" _id: "11" _score: 0.6532502 _source: name: "papa xixi write"
If you only want to raise precision without requiring every term:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"match":{"name":{"query":"papa xixi write","minimum_should_match":"75%"}}}}' --- hits: total: 2 max_score: 0.6532502 hits: - _index: "mytest" _type: "external" _id: "11" _score: 0.6532502 _source: name: "papa xixi write" - _index: "mytest" _type: "external" _id: "4" _score: 0.22545706 _source: name: "papa xixi"
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"term":{"name":"papa xixi write"}}}'
This finds nothing:
total: 0
max_score: null
hits: []
Switching to a single-term query:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"term":{"name":"papa"}}}'
The result:
hits:
- _index: "mytest"
  _type: "external"
  _id: "2"
  _score: 1.0
  _source:
    name: "papa"
- _index: "mytest"
  _type: "external"
  _id: "4"
  _score: 0.37158427
  _source:
    name: "papa xixi"
- _index: "mytest"
  _type: "external"
  _id: "11"
  _score: 0.2972674
  _source:
    name: "papa xixi write"
match: when querying a full-text field, the query string is analyzed with the field's analyzer first; on an exact-value field it matches exactly. term: exact matching, with no analysis of the query text; a document matches as long as it contains the exact term (a not_analyzed field is matched exactly; terms takes multiple values and matches if any one of them matches).
从"papa xixi write"的存储文本分析来看,它自己会被切割成不一样的词条,因此用 term查询"papa xixi write",没法获取到结果,可是match确可以匹配
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"filtered":{"filter":{"range":{"name":{"gt":"w"}}}}}}'
or, equivalently, with constant_score:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"constant_score":{"filter":{"range":{"name":{"gt":"w"}}}}}}'
Validating a (malformed) query:
⇒ curl -XGET 'localhost:9200/_validate/query?explain&format=yaml' -d '{ "query":{{"filter":{"range":{"name":{"gt":"w"}}}}}'
---
valid: false
(explanation omitted)
Using a term query:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { "query":{"term":{"addre":"beijing"}}}'
The result:
hits:
- _index: "mytest"
  _type: "external"
  _id: "5"
  _score: 0.30685282
  _source:
    addre: "beijing"
- _index: "mytest"
  _type: "external"
  _id: "6"
  _score: 0.30685282
  _source:
    addre: "beijing"
    name: "px"
Rewritten as a bool query, the result is the same:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d ' { query:{bool:{must:{match:{addre:"beijing"}}}}}'
To return only the second document (addre beijing and name px), put both clauses in a must array:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d '{"query":{"bool":{"must":[{"match":{"addre":"beijing"}},{"match":{"name":"px"}}]}}}'
To return only the first document:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d '{"query":{"bool":{"must":{"match":{"addre":"beijing"}},"must_not":{"match":{"name":"px"}}}}}'
To get both, preferring px:
curl -XGET 'localhost:9200/mytest/_search?format=yaml' -d '{"query":{"bool":{"must":{"match":{"addre":"beijing"}},"should":{"match":{"name":"px"}}}}}'
must means the clause must match; must_not means it must not match; should means the clause is optional: documents can match without it, but matching it increases the score.
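The must/must_not/should semantics can be sketched as a filter plus a score bonus; this is a simplification of real Elasticsearch scoring, and the documents and scoring rule here are invented for illustration.

```python
docs = [
    {"id": 5, "addre": "beijing"},
    {"id": 6, "addre": "beijing", "name": "px"},
]

def bool_query(docs, must=(), must_not=(), should=()):
    results = []
    for d in docs:
        # must filters documents in; must_not filters them out.
        if all(d.get(f) == v for f, v in must) and \
           not any(d.get(f) == v for f, v in must_not):
            # should only adds to the score; it never excludes a document.
            score = 1.0 + sum(d.get(f) == v for f, v in should)
            results.append((d["id"], score))
    return sorted(results, key=lambda r: -r[1])

print(bool_query(docs, must=[("addre", "beijing")], should=[("name", "px")]))
```

Both documents match, but the one satisfying the should clause ranks first, which is the behavior seen in the queries above.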
With the Java client, a top-N terms aggregation over a time range:
/**
 * @param startTime    start of the time range
 * @param endTime      end of the time range
 * @param termAggName  name of the terms aggregation
 * @param fieldName    field to count on
 * @param top          number of buckets to return
 */
RangeQueryBuilder actionPeriod = QueryBuilders.rangeQuery("myTimeField")
        .gte(startTime).lte(endTime).format("epoch_second");
TermsBuilder termsBuilder = AggregationBuilders.terms(termAggName)
        .field(fieldName).size(top).order(Terms.Order.count(false));
return client.prepareSearch(INDICE).setQuery(actionPeriod)
        .addAggregation(termsBuilder).setSize(0).execute().actionGet();
order(Terms.Order.count(false)): sort buckets by count, descending
size(top): top is the number of buckets to return
prepareSearch(INDICE): INDICE is the index name
setSize(0): return only the aggregation results, no search hits
To exclude documents where a particular field has a given value:
client is the Elasticsearch client constructed earlier.
BoolQueryBuilder actionPeriodMustNot = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTime).lte(endTime).format("epoch_second"))
        .mustNot(QueryBuilders.termQuery(field, value));
To match any of several specific values of a single field:
// values is a List
BoolQueryBuilder actionPeriodMust = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTime).lte(endTime).format("epoch_second"))
        .must(QueryBuilders.termsQuery(field, values));
Consuming the result:
Terms clickCount = sr.getAggregations().get(termAggName);
for (Terms.Bucket term : clickCount.getBuckets()) {
    int key = term.getKeyAsNumber().intValue();  // value of the bucketed field
    long docCount = term.getDocCount();          // count
}
A date histogram aggregation, choosing the interval from the requested granularity:
BoolQueryBuilder actionPeriodMust = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTime).lte(endTime).format("epoch_second"));
DateHistogramBuilder actionInterval = AggregationBuilders.dateHistogram(dateNickName)
        .field("myTimeField").timeZone("Asia/Shanghai");
if (timeInterval < MINUTE) {
    actionInterval.interval(DateHistogramInterval.seconds(timeInterval)).format("HH:mm:ss");
} else if (timeInterval < HOUR) {
    actionInterval.interval(DateHistogramInterval.minutes(timeInterval / MINUTE)).format("dd HH:mm");
} else if (timeInterval < DAY) {
    actionInterval.interval(DateHistogramInterval.hours(timeInterval / HOUR)).format("HH:mm");
} else if (timeInterval < THIRTY_DAY) {
    actionInterval.interval(DateHistogramInterval.days(timeInterval / DAY));
} else {
    actionInterval.interval(DateHistogramInterval.MONTH);
}
actionInterval.format("yyyy-MM-dd HH:mm:ssZ");
return client.prepareSearch(INDICE).setQuery(actionPeriodMust)
        .addAggregation(actionInterval).setSize(0).execute().actionGet();
Elasticsearch stores timestamps in UTC by default, so in China set timeZone("Asia/Shanghai"). Java's SimpleDateFormat defaults to the JVM's local time zone, so it is best to store timezone-agnostic timestamps and localize them only for display.
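A minimal sketch of that localization point: the same epoch instant rendered in UTC and in Asia/Shanghai (UTC+8). Python's standard zoneinfo is used here for brevity; the same idea applies to Java's formatters.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

epoch_sec = 0  # one timezone-agnostic stored instant
utc = datetime.fromtimestamp(epoch_sec, tz=timezone.utc)
sh = datetime.fromtimestamp(epoch_sec, tz=ZoneInfo("Asia/Shanghai"))
print(utc.strftime("%Y-%m-%d %H:%M"))  # 1970-01-01 00:00
print(sh.strftime("%Y-%m-%d %H:%M"))   # 1970-01-01 08:00
```

The stored value never changes; only the rendering differs by eight hours, which is exactly what timeZone("Asia/Shanghai") does for histogram buckets.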
Consuming the result:
Histogram histogram = sr.getAggregations().get(dateNickName);
for (Histogram.Bucket entry : histogram.getBuckets()) {
    String key = entry.getKeyAsString();  // time bucket
    long count = entry.getDocCount();     // count
}
Combining the two scenarios above, a terms sub-aggregation inside a date histogram:
BoolQueryBuilder query = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTime).lte(endTime).format("epoch_second"))
        .must(QueryBuilders.termsQuery("action", orderValue));
DateHistogramBuilder actionTimeInterval = AggregationBuilders.dateHistogram(dateNickName)
        .field("myTimeField").timeZone("Asia/Shanghai");
actionTimeInterval.subAggregation(AggregationBuilders.terms(termNickName).field("action").size(size));
return client.prepareSearch(INDICE).setQuery(query)
        .addAggregation(actionTimeInterval).setSize(0).execute().actionGet();
Consuming the result:
Histogram histogram = sr.getAggregations().get(dateNickName);
for (Histogram.Bucket date : histogram.getBuckets()) {
    String intervalName = date.getKeyAsString();
    long timeIntervalCount = date.getDocCount();
    if (timeIntervalCount != 0) {
        Terms terms = date.getAggregations().get(termNickName);
        for (Terms.Bucket entry : terms.getBuckets()) {
            int key = entry.getKeyAsNumber().intValue();
            long childCount = entry.getDocCount();
        }
    }
}
Paging through raw hits, sorted by time:
BoolQueryBuilder actionPeriodMust = QueryBuilders.boolQuery()
        .must(QueryBuilders.termQuery(key, value))
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTime).lte(endTime).format("epoch_second"));
return client.prepareSearch(INDICE).setQuery(actionPeriodMust)
        .addSort(SortBuilders.fieldSort("myTimeField").order(SortOrder.ASC))
        .setFrom(from).setSize(size).execute().actionGet();
Usage:
Iterator<SearchHit> iterator = sr.getHits().iterator();
while (iterator.hasNext()) {
    SearchHit next = iterator.next();
    JSONObject jo = JSONObject.parseObject(next.getSourceAsString());
}
Distinct-value count of a field, via a cardinality aggregation:
BoolQueryBuilder query = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("myTimeField").gte(startTimeInSec * 1000).lte(endTimeInSec * 1000).format("epoch_millis"));
CardinalityBuilder fieldCardinality = AggregationBuilders.cardinality(cardinalityAggName).field(field);  // field: the field to count distinct values of
return client.prepareSearch(INDICE).setQuery(query)
        .addAggregation(fieldCardinality).execute().actionGet();
Consuming the result:
Cardinality cardinality = sr.getAggregations().get(cardinalityAggName);
long value = cardinality.getValue();
For example: addr must be beijing, and at the same time either name is paxi or phoneNumber is 1234567890:
BoolQueryBuilder searchIdQuery = QueryBuilders.boolQuery();
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
// kvs iterates over the field/value pairs to be OR-ed together
while (kvs.hasNext()) {
    Map.Entry<String, String> fieldValue = kvs.next();
    String field = fieldValue.getKey();
    String value = fieldValue.getValue();
    searchIdQuery.should(QueryBuilders.termQuery(field, value));
}
boolQueryBuilder.must(searchIdQuery);
boolQueryBuilder.must(QueryBuilders.termsQuery(key, values));
return client.prepareSearch(INDICE).setQuery(boolQueryBuilder).execute().actionGet();
Start Logstash:
./bin/logstash -f conf/test.conf
The Kibana search box accepts either Lucene query syntax or Elasticsearch query DSL.
Prefix a term with a field name to query that field; otherwise the default field is used.
For example, suppose the index has two fields, title and text, with text as the default field:
title:"hello world" AND text:to is equivalent to title:"hello world" AND to
title:hello world searches title for hello and the default field text for world (the field prefix applies only to the term right after it)
te?t matches text and test; ? stands for exactly one character
test* matches test, tests, and tester; * stands for zero or more characters
? and * cannot appear as the first character
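The ? and * wildcards behave like shell globs, so Python's fnmatch can illustrate them; this is an analogy for the matching rules, not Lucene's actual matcher.

```python
from fnmatch import fnmatch

words = ["text", "test", "tests", "tester", "toast"]
single = [w for w in words if fnmatch(w, "te?t")]   # ? = exactly one character
multi = [w for w in words if fnmatch(w, "test*")]   # * = zero or more characters
print(single)  # ['text', 'test']
print(multi)   # ['test', 'tests', 'tester']
```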
roam~ matches foam and roams, based on Levenshtein (edit) distance; the tilde goes at the end of the term. Since version 1.9, a number can be appended to specify the required similarity: the closer to 1, the higher the similarity, e.g. roam~0.8; the default is 0.5.
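A sketch of the edit distance behind fuzzy (~) matching; this classic dynamic-programming version shows that roam is one edit away from both foam and roams.

```python
def levenshtein(a, b):
    # Row-by-row DP: prev holds distances for the previous prefix of a.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein("roam", "foam"), levenshtein("roam", "roams"))  # 1 1
```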
"jakarta apache"~10 matches jakarta and apache within 10 words of each other
mode_date:[20020101 TO 20030101] matches dates between 20020101 and 20030101, including both endpoints
title:{Aida TO Carmen} matches values between Aida and Carmen, excluding both endpoints
"[" means inclusive, "{" means exclusive
The keywords (AND, OR, NOT, TO) must be uppercase
(jakarta OR apache) AND website: a grouped query matching documents that contain website and either jakarta or apache
Special characters are escaped with a backslash, e.g. \(1\+1\)\:2
To use ES query DSL in the search box, paste the part after -d from the ES command; for example, given the curl query
curl -XGET 'localhost:9200/_search?format=yaml' -d ' { "query":{"term":{"addre":"beijing"}}}'
enter the following in the search box: { "query":{"term":{"addre":"beijing"}}}