mongo

1 QA: http://api.mongodb.com/python/current/faq.html

Reference: http://api.mongodb.com/python/current/faq.html#how-does-connection-pooling-work-in-pymongo

client = MongoClient(host, port, maxPoolSize=50, waitQueueMultiple=500, waitQueueTimeoutMS=100)

Create this client once for each process, and reuse it for all operations. It is a common mistake to create a new client for each request, which is very inefficient.

A MongoClient must not be created over and over; create it once per process and share it.

When 500 threads are waiting for a socket, the 501st that needs a socket raises ExceededMaxWaiters.

A thread that waits more than 100ms (in this example) for a socket raises ConnectionFailure.

When close() is called by any thread, all idle sockets are closed, and all sockets that are in use will be closed as they are returned to the pool.
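The wait-queue semantics above can be sketched with a stdlib-only toy. SocketPool and CheckoutTimeout are invented names standing in for PyMongo's internal pool and its ConnectionFailure; this illustrates the behavior, not the real implementation:

```python
import queue

class CheckoutTimeout(Exception):
    """Stands in for ConnectionFailure on a wait-queue timeout."""

class SocketPool:
    """Toy pool: at most max_size sockets; a waiter times out after timeout_s."""
    def __init__(self, max_size=50, timeout_s=0.1):
        self._free = queue.Queue()
        for i in range(max_size):            # pretend these are open sockets
            self._free.put(f"socket-{i}")
        self._timeout_s = timeout_s

    def checkout(self):
        try:
            return self._free.get(timeout=self._timeout_s)
        except queue.Empty:
            raise CheckoutTimeout("waited too long for a socket")

    def checkin(self, sock):
        self._free.put(sock)

pool = SocketPool(max_size=2, timeout_s=0.05)
a, b = pool.checkout(), pool.checkout()      # pool exhausted
try:
    pool.checkout()                          # third caller waits, then gives up
except CheckoutTimeout:
    print("timed out")
pool.checkin(a)                              # returning a socket frees a slot
print(pool.checkout())                       # prints socket-0 again
```

The real pool behaves the same way in outline: a fixed set of sockets, a bounded wait, and an error once the wait queue's limits are exceeded.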

 

About _id:

PyMongo adds an _id field in this manner for a few reasons:

  • All MongoDB documents are required to have an _id field.
  • If PyMongo were to insert a document without an _id, MongoDB would add one itself, but it would not report the value back to PyMongo.
  • Copying the document to insert before adding the _id field would be prohibitively expensive for most high write volume applications.

If you don’t want PyMongo to add an _id to your documents, insert only documents that already have an _id field, added by your application.
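The no-copy trade-off can be illustrated with a small sketch. fake_insert is a hypothetical stand-in for insert_one, and the integer counter stands in for ObjectId generation:

```python
import itertools

_ids = itertools.count(1)          # stand-in for ObjectId generation

def fake_insert(document):
    """Toy insert_one: adds _id to the caller's dict in place.

    Copying the document first would cost O(n) per insert, which is
    exactly what PyMongo avoids for high write volumes.
    """
    if "_id" not in document:
        document["_id"] = next(_ids)   # mutate in place -- no copy
    return document["_id"]

doc = {"name": "ada"}
fake_insert(doc)
print(doc)                 # the caller's dict now carries the generated _id

own = {"_id": "my-key", "name": "bob"}
fake_insert(own)           # documents that already have an _id are left alone
print(own["_id"])          # prints my-key
```

This mirrors the behavior the FAQ describes: supply your own _id and it is used as-is; omit it and the dict you passed in is mutated to carry the generated value.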

 

What does "CursorNotFound cursor id not valid at server" mean?

Cursors in MongoDB can time out on the server if they’ve been open for a long time without any operations being performed on them. This can lead to a CursorNotFound exception being raised when attempting to iterate the cursor.

How do I change the timeout value for cursors?

MongoDB doesn’t support custom timeouts for cursors, but cursor timeouts can be turned off entirely. Pass no_cursor_timeout=True to find().
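The timeout behavior can be simulated with a stdlib toy. ToyCursor and idle_limit_s are invented for illustration (the real timeout happens on the server, after roughly 10 minutes of inactivity by default); the no_cursor_timeout flag mirrors find()'s parameter:

```python
import time

class CursorNotFound(Exception):
    """Stands in for pymongo.errors.CursorNotFound."""

class ToyCursor:
    """Toy server-side cursor: dies after idle_limit_s without activity,
    unless no_cursor_timeout=True."""
    def __init__(self, docs, no_cursor_timeout=False, idle_limit_s=0.05):
        self._docs = iter(docs)
        self._no_timeout = no_cursor_timeout
        self._idle_limit = idle_limit_s
        self._last_used = time.monotonic()

    def __iter__(self):
        return self

    def __next__(self):
        if (not self._no_timeout
                and time.monotonic() - self._last_used > self._idle_limit):
            raise CursorNotFound("cursor id not valid at server")
        self._last_used = time.monotonic()
        return next(self._docs)

cur = ToyCursor([1, 2, 3])
print(next(cur))            # 1: used promptly, still alive
time.sleep(0.1)             # idle past the limit...
try:
    next(cur)
except CursorNotFound:
    print("cursor timed out")

safe = ToyCursor([1, 2, 3], no_cursor_timeout=True)
time.sleep(0.1)
print(next(safe))           # 1: timeouts disabled, iteration still works
```

If you do disable the timeout on a real cursor, remember to close or exhaust it, since the server will otherwise keep it open indefinitely.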

 

From version 1.3 onward, the MongoDB PHP driver connects through the MongoClient class, whose default policy is persistent connections, and that policy cannot be changed. The number of connections therefore depends on the number of fpm worker processes: too many fpm workers inevitably means too many connections. If your machines run 1000 fpm workers in total, 1000 persistent connections are created; under the MongoDB server's policy each connection consumes at least 1 MB of memory, so 1 GB of memory is gone right there.

The direct fix is to call close() after each use, so the server does not have to hold a large number of connections. But close() has a pitfall of its own: by default it only closes the write connection (e.g. to the master, or to the primary of a replica set). To close all connections you must pass true: $mongo->close(true).

Closing the connection after every request effectively reduces the server's concurrent connection count, unless your operations are themselves very slow. It has its own cost, though: the previous TCP connection cannot be reused, so every request reconnects, and connection setup latency becomes significant, especially with replica sets, where several TCP connections must be created. In the end there may be only two workable options: one, reduce the number of fpm workers; two, build your own connection pool, which funnels the many client connections down to a fixed number of connections to MongoDB.
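The self-built-pool option can be sketched in Python (stdlib only; Pool and the integer "connections" are illustrative, not a driver API). The point is the fan-in: many workers, a fixed number of reused connections:

```python
import itertools
import queue
import threading

_conn_ids = itertools.count(1)

class Pool:
    """Toy pool funneling many workers onto a fixed number of connections."""
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(next(_conn_ids))  # pretend: one open TCP connection

    def run(self, op):
        conn = self._free.get()              # block until a connection is free
        try:
            return op(conn)                  # reuse it -- no reconnect cost
        finally:
            self._free.put(conn)

pool = Pool(size=4)                          # 4 connections total...
seen = set()
lock = threading.Lock()

def worker():
    for _ in range(100):
        cid = pool.run(lambda conn: conn)
        with lock:
            seen.add(cid)

threads = [threading.Thread(target=worker) for _ in range(50)]  # ...50 "fpm" workers
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(seen))   # never more than 4, however many workers run
```

The same shape works regardless of how the workers are hosted: the backend sees only the pool's fixed connection count, and each connection is reused rather than torn down and re-established per request.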