When clients call the batch query interface to query a Solr core, the response time feels slow. The interface is currently implemented by executing each query sequentially and then aggregating the results before returning them to the caller, so the plan is to refactor the internal implementation around a thread pool.
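For context, the sequential version boils down to a loop like the sketch below. This is only an illustration: the method name `getEntityList`, the `getEntity` helper, and the enclosing `QueryServiceImpl` class are assumptions inferred from the refactored code that follows.

```java
// Hypothetical sketch of the original sequential implementation.
public List<Map<String, String>> getEntityList(String entityCode, List<Long> idList) {
    List<Map<String, String>> finalResult = new ArrayList<Map<String, String>>();
    for (Long itemId : idList) {
        // One lookup at a time; total latency is the sum of all the lookups.
        Map<String, String> entityMap = getEntity(entityCode, itemId);
        if (entityMap != null && !entityMap.isEmpty()) {
            finalResult.add(entityMap);
        }
    }
    return finalResult;
}
```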
The refactored version starts from a shared thread pool that handles the query requests:

```java
// Thread pool for handling query requests
private ExecutorService executor = Executors.newCachedThreadPool();
```
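One caveat worth noting: `newCachedThreadPool()` creates as many threads as there are concurrently submitted tasks, so a very large `idList` can spawn a very large number of threads. A bounded pool is a safer variant; the sizes below are illustrative assumptions, not values from the original code.

```java
// Bounded alternative: 10 core threads, growing to at most 20 when the queue fills up;
// if all 20 are busy and the queue of 200 is also full, the submitting thread runs the
// task itself (CallerRunsPolicy), which applies natural back-pressure.
private ExecutorService executor = new ThreadPoolExecutor(
        10, 20,                                   // core / maximum pool size (illustrative)
        60L, TimeUnit.SECONDS,                    // idle time before extra threads are reclaimed
        new LinkedBlockingQueue<Runnable>(200),   // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy());
```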
Next is the main method, which fans the individual queries out to the pool and then collects the results (the exact signature was cut off in the original post, so it is reconstructed here):

```java
// Signature reconstructed: the method name and parameter order are assumptions.
public List<Map<String, String>> getEntityList(String entityCode, List<Long> idList) {
    List<Map<String, String>> finalResult = null;
    // Validate the arguments
    if (idList == null || idList.size() == 0 || StringUtil.isBlank(entityCode)) {
        return finalResult;
    }
    finalResult = new ArrayList<Map<String, String>>();
    List<Future<Map<String, String>>> futureList = new ArrayList<Future<Map<String, String>>>();
    int threadNum = idList.size(); // number of query sub-tasks
    for (int i = 0; i < threadNum; i++) {
        Long itemId = idList.get(i);
        // Submitting returns immediately; the queries run concurrently in the pool
        Future<Map<String, String>> future = executor.submit(new QueryCallable(entityCode, itemId));
        futureList.add(future);
    }
    for (Future<Map<String, String>> future : futureList) {
        Map<String, String> threadResult = null;
        try {
            threadResult = future.get(); // block until this task has finished
        } catch (Exception e) {
            threadResult = null;
        }
        if (null != threadResult && threadResult.size() > 0) { // keep non-empty results only
            finalResult.add(threadResult);
        }
    }
    return finalResult;
}
```
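The loop of `future.get()` calls waits for the tasks one by one in submission order and has no overall deadline. If the batch should be bounded in time, `ExecutorService.invokeAll` with a timeout is an equivalent, slightly more compact variant; the sketch below is not part of the original implementation, the 5-second budget is an arbitrary example, and `QueryCallable` is the task class shown next.

```java
// Sketch: same fan-out, but the whole batch is given a collective 5-second deadline.
public List<Map<String, String>> getEntityListWithTimeout(String entityCode, List<Long> idList)
        throws InterruptedException, ExecutionException {
    List<Callable<Map<String, String>>> tasks = new ArrayList<Callable<Map<String, String>>>();
    for (Long itemId : idList) {
        tasks.add(new QueryCallable(entityCode, itemId));
    }
    // Tasks still running when the timeout expires are cancelled.
    List<Future<Map<String, String>>> futures = executor.invokeAll(tasks, 5, TimeUnit.SECONDS);
    List<Map<String, String>> finalResult = new ArrayList<Map<String, String>>();
    for (Future<Map<String, String>> f : futures) {
        if (!f.isCancelled()) {              // skip tasks that missed the deadline
            Map<String, String> r = f.get(); // already completed, so this does not block
            if (r != null && !r.isEmpty()) {
                finalResult.add(r);
            }
        }
    }
    return finalResult;
}
```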
Finally, the Callable that actually performs each query. It is an inner class of QueryServiceImpl (hence the QueryServiceImpl.this reference), so it can reuse the existing getEntity lookup:

```java
public class QueryCallable implements Callable<Map<String, String>> {
    private String entityCode = "";
    private Long itemId = 0L;

    public QueryCallable(String entityCode, Long itemId) {
        this.entityCode = entityCode;
        this.itemId = itemId;
    }

    public Map<String, String> call() throws Exception {
        Map<String, String> entityMap = null;
        try {
            // Query HBase first for the basic entity information
            entityMap = QueryServiceImpl.this.getEntity(entityCode, itemId);
        } catch (Exception e) {
            entityMap = null; // swallow the exception; the caller skips empty results
        }
        return entityMap;
    }
}
```
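Putting the pieces together, a caller would use the batch method roughly as follows. QueryServiceImpl and the getEntityList name come from the snippets above; the entity code and ids are made-up example values.

```java
// Hypothetical caller of the batch query method shown above.
QueryServiceImpl queryService = new QueryServiceImpl();
List<Long> idList = Arrays.asList(1001L, 1002L, 1003L);   // example ids
List<Map<String, String>> results = queryService.getEntityList("item", idList);
for (Map<String, String> entity : results) {
    System.out.println(entity);   // each map holds one entity's fields
}
```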
Using a thread pool reduces the system overhead of creating and destroying threads, and the worker threads in the pool are reused, which makes much better use of the existing system resources and increases the system's throughput.
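One detail the code above does not show is shutting the pool down when the service goes offline, so that in-flight tasks get a chance to finish and no new work is accepted. A typical shutdown sequence (a sketch; where to hook it, for example a destroy() method, depends on the surrounding framework) looks like this:

```java
// Graceful shutdown of the query thread pool.
executor.shutdown();                                    // stop accepting new tasks
try {
    if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
        executor.shutdownNow();                         // interrupt tasks still running
    }
} catch (InterruptedException e) {
    executor.shutdownNow();
    Thread.currentThread().interrupt();
}
```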
Separately, today I also tried a different way of merging Solr indexes: going through the underlying Lucene API directly rather than submitting an HTTP request. The command is as follows:
```
java -cp lucene-core-3.4.0.jar:lucene-misc-3.4.0.jar org/apache/lucene/misc/IndexMergeTool ./newindex ./app1/solr/data/index ./app2/solr/data/index
```
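IndexMergeTool is essentially a thin wrapper around IndexWriter.addIndexes, so the same merge can be done programmatically. The sketch below targets the Lucene 3.4 API and reuses the directory paths from the command above; the class name is made up, and the source cores should not be writing to these directories while the merge runs.

```java
import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeSolrIndexes {
    public static void main(String[] args) throws Exception {
        // Target directory for the merged index
        FSDirectory merged = FSDirectory.open(new File("./newindex"));
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_34,
                new StandardAnalyzer(Version.LUCENE_34)).setOpenMode(OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(merged, config);
        // Pull the segments of both core indexes into the new directory
        writer.addIndexes(
                FSDirectory.open(new File("./app1/solr/data/index")),
                FSDirectory.open(new File("./app2/solr/data/index")));
        writer.optimize(); // Lucene 3.x: merge down to a single segment
        writer.close();
        merged.close();
    }
}
```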