By default, each thread reserves 1 MB of address space for its stack. On 32-bit Windows a process has only 2 GB of user-mode address space, so in theory a process can create at most about 2 GB / 1 MB ≈ 2048 threads.
Of course the address space cannot be devoted entirely to thread stacks, so the real number is somewhat smaller.
You can also shrink the default stack size at link time; with a smaller stack you can create more threads.
For example, with a 512 KB default stack the theoretical maximum roughly doubles to 4096 threads.
No matter how much physical memory is installed, the number of threads a 32-bit process can create is still bounded by this 2 GB address space.
Even on a machine with 64 GB of physical RAM, each process still sees only a 4 GB virtual address space, of which just 2 GB is available to user mode.
Across a whole machine, the total thread count is also limited by memory: every thread object occupies some non-paged pool, and the non-paged pool is finite, so once it is exhausted no more threads can be created.
The more physical memory the machine has, the higher this machine-wide limit becomes.
Why does a Windows program exit abnormally once it creates around 2000 threads?
This happens because on 32-bit Windows a process can use at most 2 GB of virtual memory, and the default per-thread stack reservation (StackSize) is 1024 KB (1 MB). As the thread count approaches 2000, 2000 × 1024 KB ≈ 2 GB, and the address space is effectively exhausted.
The MSDN documentation puts it this way:
“The number of threads a process can create is limited by the available virtual memory. By default, every thread has one megabyte of stack space. Therefore, you can create at most 2,028 threads. If you reduce the default stack size, you can create more threads. However, your application will have better performance if you create one thread per processor and build queues of requests for which the application maintains the context information. A thread would process all requests in a queue before processing requests in the next queue.”
How can the ~2000-thread limit be exceeded?
You can shrink the thread stack (StackSize) by passing a smaller value to CreateThread, for example:
#include <windows.h>
#include <stdio.h>

#define MAX_THREADS 50000

DWORD WINAPI ThreadProc(LPVOID lpParam)
{
    while (1) {
        Sleep(100000);
    }
    return 0;
}

int main(void)
{
    /* static: 50000 handles/IDs would be too large for main's stack */
    static DWORD dwThreadId[MAX_THREADS];
    static HANDLE hThread[MAX_THREADS];

    for (int i = 0; i < MAX_THREADS; ++i) {
        /* With STACK_SIZE_PARAM_IS_A_RESERVATION, the 64 is the stack
           *reserve* size, rounded up to the allocation granularity
           (64 KB), instead of the default 1 MB reservation. */
        hThread[i] = CreateThread(NULL, 64, ThreadProc, NULL,
                                  STACK_SIZE_PARAM_IS_A_RESERVATION,
                                  &dwThreadId[i]);
        if (hThread[i] == NULL) {
            printf("%lu\r\n", GetLastError());
            break;
        }
    }
    ThreadProc(NULL);
    return 0;
}
Server-side program design
If your server is designed to create one thread per client connection, you will run into this ~2000-thread limit (for a given amount of memory and number of CPUs). The usual advice:
The "one thread per client" model is well-known not to scale beyond a dozen clients or so. If you're going to be handling more than that many clients simultaneously, you should move to a model where instead of dedicating a thread to a client, you instead allocate an object. (Someday I'll muse on the duality between threads and objects.) Windows provides I/O completion ports and a thread pool to help you convert from a thread-based model to a work-item-based model.
1. Serve many clients with each thread, and use nonblocking I/O and level-triggered readiness notification
2. Serve many clients with each thread, and use nonblocking I/O and readiness change notification
3. Serve many clients with each server thread, and use asynchronous I/O