An OOM Problem Caused by a MongoDB Paginated Query

The OOM error message:

2018-09-18 14:46:54.338 [http-nio-8099-exec-8] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [/party-data-center] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: GC overhead limit exceeded] with root cause
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.bson.io.ByteBufferBsonInput.readString(ByteBufferBsonInput.java:154)
	at org.bson.io.ByteBufferBsonInput.readString(ByteBufferBsonInput.java:126)
	at org.bson.BsonBinaryReader.doReadString(BsonBinaryReader.java:245)
	at org.bson.AbstractBsonReader.readString(AbstractBsonReader.java:461)
	at org.bson.codecs.BsonStringCodec.decode(BsonStringCodec.java:31)
	at org.bson.codecs.BsonStringCodec.decode(BsonStringCodec.java:28)
	at org.bson.codecs.BsonArrayCodec.readValue(BsonArrayCodec.java:102)
	at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:67)
	at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:37)
	at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
	at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
	at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
	at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
	at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
	at org.bson.codecs.configuration.LazyCodec.decode(LazyCodec.java:47)
	at org.bson.codecs.BsonArrayCodec.readValue(BsonArrayCodec.java:102)
	at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:67)
	at org.bson.codecs.BsonArrayCodec.decode(BsonArrayCodec.java:37)
	at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
	at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
	at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
	at com.mongodb.connection.ReplyMessage.<init>(ReplyMessage.java:51)
	at com.mongodb.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:301)

From the stack trace above, it looks like the MongoDB query consumed too much memory while reading data, which caused the OOM.

Export a heap dump and analyze it. Opening the file in MAT shows a "Problem Suspect 1" (the suspect most likely to have caused the memory overflow):

The thread org.apache.tomcat.util.threads.TaskThread @ 0xf9b19fa0 http-nio-8099-exec-8 keeps local variables with total size 58,255,056 (60.49%) bytes.
The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".
The stacktrace of this Thread is available. See stacktrace.


Keywords
java.lang.Object[]


Click "See stacktrace".

There is quite a lot of information here, so work through it slowly. Eventually we find:

at com.mongodb.DB.command(Lcom/mongodb/DBObject;Lcom/mongodb/ReadPreference;Lcom/mongodb/DBEncoder;)Lcom/mongodb/CommandResult; (DB.java:496)
  at com.mongodb.DB.command(Lcom/mongodb/DBObject;Lcom/mongodb/ReadPreference;)Lcom/mongodb/CommandResult; (DB.java:512)
  at com.mongodb.DB.command(Lcom/mongodb/DBObject;)Lcom/mongodb/CommandResult; (DB.java:467)

We can see that the error happened while executing a Mongo command, and the return type is CommandResult... isn't that the result set returned by the Mongo query? Could the returned result set be too large? Very likely! Keep reading.

at com.fosung.data.party.dao.DetailDao.detailQuery(Lcom/fosung/data/party/dto/PartyItemDto;)Lcom/fosung/data/party/vo/OutDetailCountVo; (DetailDao.java:314)
  at com.fosung.data.party.dao.DetailDao$$FastClassBySpringCGLIB$$caf49f16.invoke(ILjava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (Unknown Source)

Here we finally see a method from our own business code, which is very likely the one that caused the OOM. Digging into that business method, we found the root cause: when computing the total record count, the query was issued without the pagination conditions (skip and limit), so every matching record — more than 60,000 of them — was fetched and loaded into memory, which led to the OOM.
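The original counting code isn't shown here, but the pattern is easy to reconstruct. Below is a minimal sketch of the kind of code that triggers this, written against the legacy driver API visible in the stack trace (com.mongodb.DB / DBCollection); the trimmed-down pipeline and the countDetails helper are assumptions for illustration only:

import com.mongodb.AggregationOutput;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DetailCountAntiPattern {

    // Hypothetical reconstruction of the buggy count: the pipeline has no
    // $skip/$limit and no $group/$count stage, so every matching document
    // (60,000+ in this case) is decoded and kept in memory just to read the
    // size of the resulting list.
    static int countDetails(DBCollection userOrder, String code) {
        List<DBObject> pipeline = Arrays.<DBObject>asList(
                new BasicDBObject("$match", new BasicDBObject("code", code)),
                new BasicDBObject("$unwind", "$yearInfo.counts"));

        AggregationOutput out = userOrder.aggregate(pipeline);

        List<DBObject> all = new ArrayList<>();
        for (DBObject doc : out.results()) {
            all.add(doc);          // every matching document is held on the heap
        }
        return all.size();         // only the count was ever needed
    }
}

With 60,000+ matching documents the driver has to decode all of them (the ByteBufferBsonInput.readString frames at the top of the stack), which is exactly where the heap ran out.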

The fix: use a MongoDB aggregation pipeline to compute the total count of matching records on the server side.

db.getCollection('user_order').aggregate([
     { "$match" : { "code" : "100002255842358"}} , 
     { "$project" : { "code" : 1 , "yearInfo" : 1 , "personInfo" : 1}} , 
     { "$unwind" : "$yearInfo.counts"} , 
     { "$unwind" : "$yearInfo.counts.code"} , 
     { "$match" : { "yearInfo.counts.code" : { "$in" : [ "1"]}}} , 
     { "$sort" : { "code" : 1 , "yearInfo.counts.sort" : 1}} ,
     { "$lookup" : { "from" : "user_info" , "localField" : "yearInfo.counts.detail" , "foreignField" : "_id" , "as" : "personInfo"}} , 
     { "$unwind" : "$personInfo"} , 
      {"$group":{"_id":null,"totalCount":{"$sum":1}}},
      {"$project":{"totalCount":"$totalCount","_id":0}}
    ])

With this approach we no longer have to fetch every matching record and count it in the application.
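For completeness, the same count-only pipeline can also be issued from the Java side. Here is a minimal sketch using the 3.x sync driver (com.mongodb.client); the database name, connection settings, and the simplified pipeline (only the $match, $unwind, and $group stages from the shell example above) are assumptions:

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;

import java.util.Arrays;

public class DetailCountExample {

    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);      // assumed connection
        MongoCollection<Document> userOrder =
                client.getDatabase("party").getCollection("user_order"); // database name assumed

        // The counting is pushed into the server: the pipeline ends with a
        // $group that sums 1 per document, so only a single one-field
        // document ever travels back to the application.
        Document result = userOrder.aggregate(Arrays.asList(
                Aggregates.match(Filters.eq("code", "100002255842358")),
                Aggregates.unwind("$yearInfo.counts"),
                Aggregates.match(Filters.in("yearInfo.counts.code", "1")),
                Aggregates.group(null, Accumulators.sum("totalCount", 1))
        )).first();

        int totalCount = result == null ? 0 : result.getInteger("totalCount");
        System.out.println("totalCount = " + totalCount);

        client.close();
    }
}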

After the change, the tests passed perfectly.
