The C10K Problem (translated)

 

It's time for web servers to handle ten thousand clients simultaneously, don't you think? After all, the web is a big place now.

Computers are big, too. For roughly $1200 you can buy a machine with a 1000 MHz processor, 2 GB of RAM, and a 1000 Mbit/sec network card. Let's see: with 20,000 clients, that is 50 KHz, 100 Kbytes, and 50 Kbit/sec per client. It shouldn't take any more horsepower than that to read four kilobytes from disk and send them to the network once a second for each of those twenty thousand clients, so hardware is no longer the bottleneck. (That works out to $0.08 per client, by the way. Those $100/client licensing fees some operating systems charge are starting to look a little heavy!)

In 1999, cdrom.com, the busiest ftp site of the time, could handle only 10,000 simultaneous clients despite having a gigabit of network bandwidth. By 2001 the same speed was being offered by several ISPs, who expected it to become increasingly common among their large business customers.

And the thin-client model of computing is coming back into fashion: this time the server runs on the Internet and serves thousands of clients.

With that in mind, here are a few notes on configuring operating systems and writing code to support thousands of network clients. The discussion centers on Unix-like operating systems, since that is my personal area of interest, although Windows also gets a bit of coverage.

Contents

 

Related Sites

In October 2003, Felix von Leitner put together an excellent web page and presentation about network scalability, complete with benchmarks comparing various networking system calls and operating systems. One of his findings is that the 2.6 Linux kernel really does beat the 2.4 kernel, and there are many good graphs that should give OS developers food for thought.
(See also the Slashdot comments; it'll be interesting to see whether anyone does followup benchmarks improving on Felix's results.)

Book to Read First

If you haven't yet read W. Richard Stevens' Unix Network Programming, Volume 1, please go get a copy as soon as you can. It describes many of the I/O strategies and pitfalls involved in writing high-performance servers, and even covers the "thundering herd" problem. While you're at it, also read Jeff Darcy's notes on high-performance server design.
(Another book which might be more helpful for those who are *using* rather than *writing* a web server is Building Scalable Web Sites by Cal Henderson.)

I/O Frameworks

Listed below are a few prepackaged libraries that abstract some of the common techniques described here and insulate your code from the operating system, making it more portable.

  • ACE, a heavyweight C++ I/O framework. It contains object-oriented implementations of some of these I/O strategies and many other useful things. In particular, its Reactor is an OO way of doing nonblocking I/O, and its Proactor is an OO way of doing asynchronous I/O.
  • ASIO, a C++ I/O framework which is becoming part of the Boost library. It's like ACE updated for the STL era.
  • libevent, a lightweight C I/O framework by Niels Provos. It supports kqueue and select, and will soon support poll and epoll (both were already supported by the time this was translated). I believe it is level-triggered only, which has both pluses and minuses. Niels has a nice graph of the time needed to handle one event as a function of the number of connections; it shows kqueue and sys_epoll as the clear winners.
  • My own attempts at lightweight frameworks (sadly, no longer maintained):
    • Poller is a lightweight C++ I/O framework that implements a level-triggered readiness API using any one of the underlying readiness APIs (poll, select, /dev/poll, kqueue, sigio), which makes it useful for benchmarking those APIs against one another. The documentation linked further down in this page shows how to use each of these readiness APIs.
    • rn is a lightweight C I/O framework, my second attempt after Poller. It can easily be used in commercial applications, and just as easily in non-C++ applications. It has since been used in several commercial products.
  • In April 2000, Matt Welsh wrote a paper about how to balance worker-thread and event-driven techniques when building scalable servers; the paper describes his Sandstorm I/O framework.
  • Cory Nelson's Scale! library - an asynchronous socket, file, and pipe I/O library for Windows.

I/O Strategies

Designers of networking software have many options. Here are a few:

  • Whether and how to issue multiple I/O calls from a single thread:
    • Don't; use blocking/synchronous calls throughout, and possibly use multiple threads or processes to achieve concurrency.
    • Use nonblocking calls (e.g. write() on a socket set to O_NONBLOCK) to start I/O, and readiness notification (e.g. poll() or /dev/poll) to know when it's OK to start the next I/O. This is generally only usable with network I/O, not disk I/O.
    • Use asynchronous calls (e.g. aio_write()) to start I/O, and completion notification (e.g. signals or completion ports) to know when the I/O finishes. This works for both network and disk I/O.
  • How to control the code servicing each client:
    • One process for each client (the classic Unix approach, used since 1980 or so)
    • One OS-level thread handles many clients; each client is controlled by:
      • a user-level thread (e.g. GNU state threads, classic Java with green threads)
      • a state machine (a bit esoteric, but popular in some circles; my favorite)
      • a continuation (a bit esoteric, but popular in some circles)
    • One OS-level thread for each client (e.g. classic Java with native threads)
    • One OS-level thread for each active client (e.g. Tomcat with an Apache front end; NT completion ports; thread pools)
  • Whether to use standard operating system services, or to put some of the code into the kernel (e.g. in a custom driver, kernel module, or VxD).

The following five combinations seem to be the most popular:

  1. Serve many clients with each thread, using nonblocking I/O and level-triggered readiness notification
  2. Serve many clients with each thread, using nonblocking I/O and readiness change notification
  3. Serve many clients with each server thread, using asynchronous I/O
  4. Serve one client with each server thread, using blocking I/O
  5. Build the server code into the kernel

1. Serve many clients with each thread, using nonblocking I/O and level-triggered readiness notification

... set all network handles to nonblocking mode, and use select() or poll() to tell which handle has data waiting. This is the traditional approach. With this scheme, the kernel tells you whether a file descriptor is ready, whether or not you've done anything with that file descriptor since the last time the kernel told you about it. (The name "level triggered" comes from computer hardware design; it's the opposite of "edge triggered". Jonathon Lemon introduced the two terms in his paper on kqueue().)

Note: it's particularly important to remember that a readiness notification from the kernel is only a hint; the file descriptor might no longer be ready when you try to read from it. That's why you must use nonblocking mode together with readiness notification.
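To make the model concrete, here is a minimal sketch, added in translation rather than taken from the original article, of one thread serving several clients with nonblocking sockets and level-triggered select(). The port number is an arbitrary assumption and error handling is trimmed:

/*
 * Minimal sketch (not from the original article): one thread serving
 * several clients with nonblocking sockets and level-triggered select().
 */
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                /* hypothetical port */
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 128);
    set_nonblocking(listener);

    int clients[FD_SETSIZE];
    int nclients = 0;
    int maxfd = listener;

    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listener, &readable);
        for (int i = 0; i < nclients; i++)
            FD_SET(clients[i], &readable);

        /* Level-triggered: a descriptor is reported every time it is
         * still readable, not only when it becomes readable. */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(listener, &readable)) {
            int c = accept(listener, NULL, NULL);
            if (c >= 0 && c < FD_SETSIZE && nclients < FD_SETSIZE) {
                set_nonblocking(c);
                clients[nclients++] = c;
                if (c > maxfd)
                    maxfd = c;
            } else if (c >= 0) {
                close(c);                       /* over the FD_SETSIZE limit */
            }
        }

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(clients[i], &readable))
                continue;
            char buf[4096];
            ssize_t n = read(clients[i], buf, sizeof(buf));
            if (n > 0) {
                /* ... serve the request from buf ... */
            } else if (n == 0) {                /* peer closed the connection */
                close(clients[i]);
                clients[i--] = clients[--nclients];
            }
            /* n < 0 with EAGAIN: the readiness hint was stale; try later. */
        }
    }
}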

An important bottleneck in this method is that read() or sendfile() from disk blocks if the page is not in core at the moment; setting nonblocking mode on a disk file descriptor has no effect. The same thing goes for memory-mapped disk files. The first time a server needs disk I/O, its process blocks, all clients have to wait, and the raw non-threaded performance goes to waste.
This is what asynchronous I/O is for; on systems that lack AIO, worker threads or processes that do the disk I/O can also get around this bottleneck. One approach is to use memory-mapped files and, whenever mincore() indicates that I/O is needed, ask a worker thread to do that I/O and continue handling network traffic. Jef Poskanzer mentions that Pai, Druschel, and Zwaenepoel's Flash web server uses this trick; they gave a talk about it at Usenix '99. It appears that mincore() is available in FreeBSD and Solaris, but it is not part of the Single Unix Specification. It became available on Linux as of kernel 2.3.51, thanks to Chuck Lever.

In November 2003 on the freebsd-hackers list, Vivek Pei reported very good results from using system-wide profiling of their Flash web server to attack its bottlenecks. One bottleneck they found was mincore (guessing, after all, is not a good idea); another was sendfile blocking on disk access. They modified sendfile() to return something like EWOULDBLOCK when the page being read is not yet in core, which improved performance. The end result of their optimizations is a SpecWeb99 score of about 800 on a 1GHz/1GB FreeBSD box, which is better than anything on file at spec.org.

There are several ways for a single thread to tell which of a set of nonblocking sockets are ready:

  • The traditional select() 
    Unfortunately, select() is limited to FD_SETSIZE handles. This limit is compiled into the standard library and user programs. (Some versions of the C library let you raise this limit when your user program is compiled.)

    See Poller_select (cc, h) for an example of how to use select() interchangeably with other readiness notification schemes.

     

  • The traditional poll() 
    There is no hardcoded limit to the number of file descriptors poll() can handle, but it does get slow when there are thousands of them, since most of the file descriptors are idle at any one time, and scanning through thousands of descriptors takes time.

    Some operating systems (e.g. Solaris 8) speed up poll() with a technique called poll hinting, which Niels Provos implemented and benchmarked in 1999.

    See Poller_poll (cc, h, benchmarks) for an example of how to use poll() interchangeably with other readiness notification schemes.

     

  • /dev/poll
    This is the recommended poll replacement for Solaris.

    The idea behind /dev/poll is to take advantage of the fact that poll() is usually called many times with the same arguments. With /dev/poll, you open /dev/poll to get a file descriptor, write the file descriptors you're interested in to that descriptor, and then read the set of currently ready file descriptors back from it.

    /dev/poll already existed in Solaris 7 (see patchid 106541), but it first appeared publicly in Solaris 8. With 750 clients, this has 10% of the overhead of poll().

    There were various attempts to implement /dev/poll on Linux, but none of them compare to epoll, and using /dev/poll on Linux is not recommended.

    See Poller_devpoll (cc, h, benchmarks) for an example of how to use /dev/poll interchangeably with many other readiness notification schemes. (Caution - the example is for Linux /dev/poll, and might not work right on Solaris.)

     

  • kqueue()
    This is the recommended poll replacement for FreeBSD (and, soon, NetBSD).

    kqueue() can provide either level-triggered or edge-triggered readiness notification; see below. (A minimal sketch follows this list.)
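As a rough illustration of the kqueue() item above (a translator's sketch, not code from the article), the following registers a listening socket and waits for readable events; the 64-event batch size is arbitrary, and adding EV_CLEAR would make the filter edge-triggered:

/*
 * Minimal FreeBSD-style kqueue() sketch (translator's illustration).
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

int wait_for_readable(int listener)
{
    int kq = kqueue();
    if (kq < 0)
        return -1;

    /* Register interest in "readable" events on the listening socket.
     * Adding EV_CLEAR would switch this filter to edge-triggered mode. */
    struct kevent change;
    EV_SET(&change, listener, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) < 0) {
        close(kq);
        return -1;
    }

    /* Block until at least one registered descriptor becomes ready. */
    struct kevent events[64];
    int n = kevent(kq, NULL, 0, events, 64, NULL);
    for (int i = 0; i < n; i++) {
        int fd = (int)events[i].ident;
        /* accept() or read() on fd here; with EV_CLEAR you must drain
         * it until EAGAIN before waiting again. */
        (void)fd;
    }
    close(kq);
    return n;
}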

2. Serve many clients with each thread, using nonblocking I/O and readiness change notification

Readiness change notification (or edge-triggered readiness notification) means you give the kernel a file descriptor, and later, when that descriptor transitions from not ready to ready, the kernel notifies you that it has become ready. It then will not send another readiness notification for that descriptor until you do something that causes it to become not ready again (e.g. until you hit EWOULDBLOCK on a send, recv, or accept call, or a send or recv transfers fewer bytes than requested).

When you use readiness change notification, you must be prepared for spurious events, since the most common implementation signals readiness whenever any packet arrives, regardless of whether the file descriptor was already ready.

This is the counterpart of level-triggered readiness notification. It's a bit less forgiving of programming mistakes, since if you miss just one event, the connection that event was for gets stuck forever. Nevertheless, I have found that edge-triggered readiness notification makes writing nonblocking clients with OpenSSL easier, so it's worth trying.

[Banga, Mogul, Drusha '99] described this kind of scheme in detail.

There are several APIs which let the application retrieve "file descriptor became ready" notifications, kqueue() (above) and Linux's epoll among them:
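For instance, a minimal Linux epoll sketch in edge-triggered mode (EPOLLET) might look like this; it is the translator's illustration rather than text from the article, and it assumes the listening socket is already bound, listening, and nonblocking:

/*
 * Minimal Linux epoll sketch in edge-triggered mode (translator's
 * illustration). Assumes "listener" is already set to O_NONBLOCK.
 */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

void event_loop(int listener)
{
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = listener };
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    struct epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == listener) {
                /* Edge-triggered: accept until EAGAIN, or pending
                 * connections will never be reported again. */
                int c;
                while ((c = accept(listener, NULL, NULL)) >= 0) {
                    fcntl(c, F_SETFL, fcntl(c, F_GETFL, 0) | O_NONBLOCK);
                    struct epoll_event cev = { .events = EPOLLIN | EPOLLET,
                                               .data.fd = c };
                    epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
                }
            } else {
                /* Drain until EAGAIN: the kernel only notifies on the
                 * next not-ready -> ready transition. */
                char buf[4096];
                ssize_t r;
                while ((r = read(fd, buf, sizeof(buf))) > 0) {
                    /* ... process buf ... */
                }
                if (r == 0 || (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK))
                    close(fd);          /* peer closed, or a real error */
            }
        }
    }
}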

3. Serve many clients with each server thread, using asynchronous I/O

This has not yet become popular on Unix, probably because few operating systems support asynchronous I/O, and possibly also because it requires rethinking your application. Under standard Unix, asynchronous I/O is provided by the "aio_" interface, which associates a signal and value with each I/O operation. Signals and their values are queued and delivered efficiently to the user process. Asynchronous I/O is an extension of the POSIX 1003.1b realtime standard, and is also part of the Single Unix Specification, version 2.

AIO uses edge-triggered completion notification, i.e. a signal is queued when the operation completes. (Level-triggered completion notification is also possible by calling aio_suspend(), though I suspect few people do this.)
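A minimal POSIX AIO sketch, added here for illustration and not taken from the article, might look like the following; it uses aio_suspend() (the level-triggered variant just mentioned), and the file path is hypothetical:

/*
 * Minimal POSIX AIO sketch (translator's illustration): queue an
 * asynchronous read with aio_read(), then wait for it with
 * aio_suspend(). Registering a SIGEV_SIGNAL sigevent instead would
 * give the queued, edge-triggered completion notification.
 * Link with -lrt on glibc.
 */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int read_async(const char *path)        /* path is hypothetical */
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    static char buf[4096];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) {            /* queue the read and return at once */
        close(fd);
        return -1;
    }

    /* ... do other work here ... */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);         /* block until this request completes */

    ssize_t n = aio_return(&cb);        /* bytes read, or -1 on error */
    printf("read %zd bytes\n", n);
    close(fd);
    return (int)n;
}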

glibc 2.1 and later provide a generic implementation written for standards compliance rather than performance.

Ben LaHaise's Linux AIO implementation was merged into kernel 2.5.32. It doesn't use kernel threads and has a very efficient underlying API, but it doesn't support sockets yet. (There is also an AIO patch for the 2.4 kernels, but the 2.5/2.6 implementation differs somewhat.) More information:

Suparna suggests having a look at the AIO API first.

Red Hat AS and SuSE SLES both provide a high-performance implementation on the 2.4 kernel; it is similar to, but not completely identical with, the 2.6 kernel implementation.

In February 2006 a new attempt at network AIO was made; see Evgeniy Polyakov's kevent-based AIO.

In 1999, SGI implemented high-speed AIO for Linux. As of version 1.1, it is said to work well with both disk I/O and network sockets, and it uses kernel threads. It is still useful for people who can't wait for Ben's AIO to support sockets.

O'Reilly's book "POSIX.4: Programming for the Real World" gives a good introduction to aio.

There is a guide here to an earlier, nonstandard aio implementation. It is worth a look, but keep in mind that you'll have to mentally convert "aioread" to "aio_read".

Note that AIO provides no way to open a disk file without blocking; if you care about the sleep caused by opening a disk file, Linus suggests calling open() in a separate thread rather than pinning your hopes on an aio_open() system call.

Under Windows, asynchronous I/O is associated with the terms "overlapped I/O" and "IOCP" (I/O Completion Port). Microsoft's IOCP combines techniques from the prior art like asynchronous I/O (like aio_write) and queued completion notification (like when using the aio_sigevent field with aio_write) with a new idea of holding back some requests to try to keep the number of running threads associated with a single IOCP constant. For more information, see Inside I/O Completion Ports by Mark Russinovich at sysinternals.com, Jeffrey Richter's book "Programming Server-Side Applications for Microsoft Windows 2000" (Amazon, MSPress), U.S. patent #06223207, or MSDN.

4. Serve one client with each server thread, using blocking I/O

... and let read() and write() block. The drawback is that this needs a whole stack for each client, which wastes memory, and many operating systems still have trouble handling more than a few hundred threads. If each thread gets a 2 MB stack, you run out of the 1 GB of user-accessible virtual memory at (2^30 / 2^21) = 512 threads on a 32-bit machine (this applies to Linux as normally run on x86, too). You can work around this by giving each thread a smaller stack, but since most thread libraries don't allow a thread's stack to grow once it has been created, doing so means designing your program to use a minimum of stack memory. You can also work around it by moving to a 64-bit processor.

The thread support in Linux, FreeBSD, and Solaris keeps improving, and 64-bit processors are coming into use among mainstream users. Perhaps in the not-too-distant future those who prefer one thread per client will be able to serve 10,000 clients that way. For now, though, if you want to support more clients, you're better off using one of the other approaches.

For an unabashedly pro-thread viewpoint, see Why Events Are A Bad Idea (for High-concurrency Servers) by von Behren, Condit, and Brewer, UCB, presented at HotOS IX. Anyone from the anti-thread camp care to point out a paper that rebuts this one? :-)

LinuxThreads

LinuxThreads is the name of the standard Linux thread library. It has been integrated into glibc since glibc 2.0 and is highly POSIX-compliant, but its performance and signal support leave something to be desired.

NGPT: Next Generation Posix Threads for Linux

NGPT is a project started by IBM to bring better POSIX-compliant thread support to Linux. It is at stable version 2.2 now and works well... but the NGPT team has announced that they are putting the NGPT codebase into support-only mode, because they feel this is the best way to support the community over the long term. The NGPT team will continue to improve Linux thread support, but now focused on NPTL. (Kudos to the NGPT team for their good work and the graceful way they conceded to NPTL.)

NPTL: Native Posix Thread Library for Linux

NPTL is a project started by Ulrich Drepper (the principal maintainer of glibc) and Ingo Molnar to bring world-class POSIX thread support to Linux.

On October 5, 2003, NPTL was merged into the glibc CVS tree as an add-on directory (just like LinuxThreads), so it will very likely be released along with the next release of glibc.

Red Hat 9 was the first distribution to include NPTL (which was a bit inconvenient for some users, but somebody had to break the ice...)

NPTL links:

Here is my attempt at describing the history of NPTL (see also Jerry Cooperstein's article):

In March 2002, Bill Abt of the NGPT team, glibc maintainer Ulrich Drepper, and others met to figure out what to do about LinuxThreads. One idea from the meeting was to improve mutex performance; Rusty Russell et al. subsequently implemented fast userspace mutexes (futexes), which are now used by both NGPT and NPTL. Most of the attendees figured NGPT should be merged into glibc.

Ulrich Drepper, though, didn't much like NGPT and figured he could do better. (For anyone who has ever tried to contribute a patch to glibc, this should come as no surprise :-) Over the next few months, Ulrich Drepper, Ingo Molnar, and others worked on changes to glibc and the kernel that became the Native Posix Threads Library (NPTL). NPTL uses all the kernel enhancements designed for NGPT and takes advantage of a few new ones. Ingo Molnar described the kernel enhancements as follows:

NPTL uses three kernel features introduced by NGPT: getpid() returning the PID, CLONE_THREAD, and futexes; NPTL also uses (and relies on) a much wider set of new kernel features developed as part of this project.

Some items NGPT introduced into the kernel were also modified, cleaned up, and extended, such as thread-group handling (CLONE_THREAD). [the CLONE_THREAD changes which impacted NGPT's compatibility got synced with the NGPT folks, to make sure NGPT does not break in any unacceptable way.]

The kernel features developed for and used by NPTL are described in the design whitepaper, http://people.redhat.com/drepper/nptl-design.pdf ...

A short list: TLS support, various clone extensions (CLONE_SETTLS, CLONE_SETTID, CLONE_CLEARTID), POSIX thread-signal handling, sys_exit() extension (release TID futex upon VM-release), the sys_exit_group() system-call, sys_execve() enhancements and support for detached threads.

There was also work put into extending the PID space - eg. procfs crashed due to 64K PID assumptions, max_pid, and pid allocation scalability work. Plus a number of performance-only improvements were done as well.

In essence the new features are a no-compromises approach to 1:1 threading - the kernel now helps in everything where it can improve threading, and we precisely do the minimally necessary set of context switches and kernel calls for every basic threading primitive.

The biggest difference between NGPT and NPTL is that NPTL is a 1:1 threading model, while NGPT is an M:N model (see below). Even so, Ulrich's initial benchmarks show that NPTL really is much faster than NGPT. (The NGPT team is looking forward to seeing Ulrich's benchmark code to verify the results.)

FreeBSD threading support

FreeBSD supports both LinuxThreads and a userspace threading library. In addition, an M:N implementation called KSE was introduced in FreeBSD 5.0. For details, see www.unobvious.com/bsd/freebsd-threads.html.

On March 25, 2003, Jeff Roberson posted on freebsd-arch:

... Thanks to the foundation provided by Julian, David Xu, Mini, Dan Eischen, and everyone else who has participated with KSE and libpthread development, Mini and I have developed a 1:1 threading implementation. It works in parallel with KSE and does not break it in any way. It actually helps bring M:N threading closer by testing out shared bits. ...

And in July 2006, Robert Watson proposed that the 1:1 threading implementation become the default in FreeBSD 7.x:

I know this has been discussed in the past, but I figured that with 7.x moving forward, it was time to think about it again. In benchmarks for many common applications and specific workloads, libthr shows significantly better performance than libpthread. libthr is also implemented on a larger number of our platforms, while libpthread exists on only a few. Not least, we already tell MySQL and other heavy thread users to switch to "libthr", which is suggestive, also! ... So the strawman proposal is: make libthr the default thread library on 7.x.

NetBSD threading support

According to Noriyuki Soda:

A kernel-supported M:N thread library based on the Scheduler Activations model was merged into NetBSD-current on January 18, 2003.

For details, see An Implementation of Scheduler Activations on the NetBSD Operating System, presented by Nathan J. Williams, Wasabi Systems, Inc. at FREENIX '02.

Solaris threading support

The thread support in Solaris is still evolving... From Solaris 2 through Solaris 8 the default thread library used an M:N model, but Solaris 9 defaults to 1:1 threading. See Sun's multithreaded programming guide and Sun's note about Java and Solaris threading.

Java threading support in JDK 1.3.x and earlier

As everyone knows, up through JDK 1.3.x Java did not support any way of handling network connections other than one thread per client. Volanomark is a good microbenchmark that measures message throughput per second at various numbers of simultaneous connections. As of May 2003, JDK 1.3 implementations could actually handle 10,000 simultaneous connections, but performance degraded badly. See Table 4 for an idea of which JVMs can handle 10,000 connections, and how performance falls off as the number of connections grows.

Note: 1:1 threading vs. M:N threading

There is a choice when implementing a thread library: you can either put all the threading support in the kernel (this is called the 1:1 threading model), or you can move a fair bit of it into user space (this is called the M:N model). At one point M:N was thought to give better performance, but it is so hard to get right that most people have moved away from it.

5. Build the server code into the kernel

Novell and Microsoft are both said to have done this at various times, and at least one NFS implementation does this. khttpd does this for Linux and static web pages, and Ingo Molnar's "TUX" (Threaded linUX webserver) is a fast and scalable kernel-space HTTP server for Linux. Ingo's announcement of September 1, 2000 said an alpha version of TUX could be downloaded from ftp://ftp.redhat.com/pub/redhat/tux, and explained how to join a mailing list for more information.
The linux-kernel mailing list has discussed the pros and cons of this approach; the consensus seems to be that instead of moving web servers into the kernel, the kernel should gain only the smallest possible hooks needed to improve web server performance, so that other kinds of servers benefit too. See Zach Brown's discussion comparing userland and kernel http servers. The 2.4 Linux kernel gives user programs enough power that the X15 server runs about as fast as TUX without any kernel modifications.

 

Comments

Richard Gooch has written a paper discussing I/O options.

In 2001, Tim Brecht and Michal Ostrowski measured various strategies for simple select-based servers. Their data is worth a look.

In 2003, Tim Brecht posted the source code for userver, a small web server put together from several servers written by Abhishek Chandra, David Mosberger, David Pariag, and Michal Ostrowski. It can use select(), poll(), epoll(), or sigio.

Back in March 1999, Dean Gaudet posted:

I keep getting asked "Why don't you guys use a select/event-based model? It's obviously the fastest." ...

His reasons boiled down to "it's really hard to understand, and the payoff isn't clear", but within a few months, as the model became better understood, people became willing to use it.

Mark Russinovich wrote an editorial and an article discussing I/O strategy issues in the 2.2 Linux kernel. It's worth reading, even though it seems mistaken in places. In particular, he seems to think that Linux 2.2's asynchronous I/O (see F_SETSIG above) doesn't notify the user process when data is ready, only when new connections arrive. This looks like a strange misunderstanding. See also comments on an earlier draft, Ingo Molnar's rebuttal of April 30, 1999, Russinovich's comments of May 2, 1999, a rebuttal from Alan Cox, and various posts to linux-kernel. I suspect he was trying to say that Linux doesn't support asynchronous disk I/O, which used to be true, but now that SGI has implemented KAIO, it's no longer true.

See the pages at sysinternals.com and MSDN for information on "completion ports", which are said to be unique to NT. In a nutshell, Win32's "overlapped I/O" turned out to be too low-level to be convenient to use; a "completion port" is a wrapper that provides a queue of completion events, plus scheduling magic that tries to keep the number of running threads constant by allowing more threads to pick up completion events if other threads that had picked up completion events from this port are sleeping (perhaps doing blocking I/O).

See also OS/400's support for I/O completion ports.

In September 1999 there was a very interesting discussion on the linux-kernel mailing list titled "15,000 Simultaneous Connections" (which continued into the second week). Highlights:

  • Ed Hall posted a few notes on his experiences: he has achieved >1000 connections per second on a UP P2/333 running Solaris. His code used a small pool of threads (1 or 2 per CPU), with each pool managing a large number of client connections using an event model.
  • Mike Jagdis posted an analysis of poll/select overhead, and said "The current select/poll implementation can be improved significantly, especially in the blocking case, but the overhead will still increase with the number of descriptors because select/poll does not, and cannot, remember what descriptors are interesting. This would be easy to fix with a new API. Suggestions are welcome..."
  • Mike posted about his work on improving select() and poll().
  • Mike posted a bit about a possible API to replace poll()/select(): "How about a 'device like' API where you write 'pollfd like' structs, the 'device' listens for events and delivers 'pollfd like' structs representing them when you read it? ... "
  • Rogier Wolff suggested using "the API that the digital guys suggested", http://www.cs.rice.edu/~gaurav/papers/usenix99.ps
  • Joerg Pommnitz pointed out that any new API along these lines should be able to wait for not just file descriptor events, but also signals and maybe SYSV-IPC. Our synchronization primitives should certainly be able to do what Win32's WaitForMultipleObjects can, at least.
  • Stephen Tweedie asserted that the combination of F_SETSIG, queued realtime signals, and sigwaitinfo() was a superset of the API proposed in http://www.cs.rice.edu/~gaurav/papers/usenix99.ps. He also mentions that you keep the signal blocked at all times if you're interested in performance; instead of the signal being delivered asynchronously, the process grabs the next one from the queue with sigwaitinfo().
  • Jayson Nordwick compared completion ports with the F_SETSIG synchronous event model, and concluded they're pretty similar.
  • Alan Cox noted that an older rev of SCT's SIGIO patch is included in 2.3.18ac.
  • Jordan Mendelson posted some example code showing how to use F_SETSIG.
  • Stephen C. Tweedie continued the comparison of completion ports and F_SETSIG, and noted: "With a signal dequeuing mechanism, your application is going to get signals destined for various library components if libraries are using the same mechanism," but the library can set up its own signal handler, so this shouldn't affect the program (much).
  • Doug Royer noted that he'd gotten 100,000 connections on Solaris 2.6 while he was working on the Sun calendar server. Others chimed in with estimates of how much RAM that would require on Linux, and what bottlenecks would be hit.

Interesting reading!

 

Limits on open filehandles

  • Any Unix: the limits set by ulimit or setrlimit. (A setrlimit() sketch appears after this list.)
  • Solaris: see the Solaris FAQ, question 3.46 (or thereabouts; they renumber the questions periodically).
  • FreeBSD:

    Edit /boot/loader.conf, add the line
    set kern.maxfiles=XXXX
    where XXXX is the desired system limit on file descriptors, and reboot. Thanks to an anonymous reader, who wrote in to say he'd achieved far more than 10000 connections on FreeBSD 4.3, and says
    "FWIW: You can't actually tune the maximum number of connections in FreeBSD trivially, via sysctl.... You have to do it in the /boot/loader.conf file. 
    The reason for this is that the zalloci() calls for initializing the sockets and tcpcb structures zones occurs very early in system startup, in order that the zone be both type stable and that it be swappable. 
    You will also need to set the number of mbufs much higher, since you will (on an unmodified kernel) chew up one mbuf per connection for tcptempl structures, which are used to implement keepalive."
    Another reader says
    "As of FreeBSD 4.4, the tcptempl structure is no longer allocated; you no longer have to worry about one mbuf being chewed up per connection."
    See also:
  • OpenBSD: A reader says
    "In OpenBSD, an additional tweak is required to increase the number of open filehandles available per process: the openfiles-cur parameter in  /etc/login.conf needs to be increased. You can change kern.maxfiles either with sysctl -w or in sysctl.conf but it has no effect. This matters because as shipped, the login.conf limits are a quite low 64 for nonprivileged processes, 128 for privileged."
  • Linux: See Bodo Bauer's /proc documentation. On 2.4 kernels:
    echo 32768 > /proc/sys/fs/file-max 
    increases the system limit on open files, and
    ulimit -n 32768
    increases the current process' limit.

    On 2.2.x kernels,

    echo 32768 > /proc/sys/fs/file-max
    echo 65536 > /proc/sys/fs/inode-max
    increases the system limit on open files, and
    ulimit -n 32768
    increases the current process' limit.

    I verified that a process on Red Hat 6.0 (2.2.5 or so plus patches) can open at least 31000 file descriptors this way. Another fellow has verified that a process on 2.2.12 can open at least 90000 file descriptors this way (with appropriate limits). The upper bound seems to be available memory. 
    Stephen C. Tweedie posted about how to set ulimit limits globally or per-user at boot time using initscript and pam_limit. 
    In older 2.2 kernels, though, the number of open files per process is still limited to 1024, even with the above changes. 
    See also Oskar's 1998 post, which talks about the per-process and system-wide limits on file descriptors in the 2.0.36 kernel.
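As a small illustration of the setrlimit() route mentioned at the top of this list (a translator's sketch, not from the original), a process can raise its own soft descriptor limit up to the hard limit:

/*
 * Sketch (not from the original): raising the per-process descriptor
 * limit with setrlimit(), the call behind "ulimit -n". The soft limit
 * can only be raised up to the hard limit unless the process is
 * privileged; the system-wide knobs above still apply.
 */
#include <sys/resource.h>
#include <stdio.h>

int raise_fd_limit(rlim_t wanted)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    if (wanted > rl.rlim_max)
        wanted = rl.rlim_max;           /* clamp to the hard limit */
    rl.rlim_cur = wanted;
    if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    printf("fd limit: soft %llu, hard %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}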

Limits on threads

On any architecture, you may need to reduce the amount of stack space allocated for each thread to avoid running out of virtual memory. You can set this at runtime with pthread_attr_init() if you're using pthreads.
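A minimal sketch of that, assuming pthreads and an illustrative 64 KB stack (the figure and handler are the translator's assumptions, not values from the article):

/*
 * Sketch (not from the original): creating threads with a small stack
 * so that thousands of them fit in the address space. Size the stack
 * to what your per-client code actually needs.
 */
#include <pthread.h>
#include <stdio.h>

static void *handle_client(void *arg)   /* hypothetical per-client handler */
{
    (void)arg;
    return NULL;
}

int spawn_small_stack_thread(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);   /* instead of a ~2 MB default */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    pthread_t tid;
    int rc = pthread_create(&tid, &attr, handle_client, NULL);
    pthread_attr_destroy(&attr);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed: %d\n", rc);
    return rc;
}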

  • Solaris: it supports as many threads as will fit in memory, I hear.
  • Linux 2.6 kernels with NPTL: /proc/sys/vm/max_map_count may need to be increased to go above 32000 or so threads. (You'll need to use very small stack threads to get anywhere near that number of threads, though, unless you're on a 64 bit processor.) See the NPTL mailing list, e.g. the thread with subject "Cannot create more than 32K threads?", for more info.
  • Linux 2.4: /proc/sys/kernel/threads-max is the max number of threads; it defaults to 2047 on my Red Hat 8 system. You can increase this as usual by echoing new values into that file, e.g. "echo 4000 > /proc/sys/kernel/threads-max"
  • Linux 2.2: Even the 2.2.13 kernel limits the number of threads, at least on Intel. I don't know what the limits are on other architectures. Mingo posted a patch for 2.1.131 on Intel that removed this limit. It appears to be integrated into 2.3.20.

    See also Volano's detailed instructions for raising file, thread, and FD_SET limits in the 2.2 kernel. Wow. This document steps you through a lot of stuff that would be hard to figure out yourself, but is somewhat dated.

  • Java: See Volano's detailed benchmark info, plus their info on how to tune various systems to handle lots of threads.

Java issues

Up through JDK 1.3, Java's standard networking libraries mostly offered the one-thread-per-client model. There was a way to do nonblocking reads, but no way to do nonblocking writes.

In May 2001, JDK 1.4 introduced the package java.nio to provide full support for nonblocking I/O (and some other goodies). See the release notes for some caveats. Try it out and give Sun feedback!

HP's java also includes a Thread Polling API.

In 2000, Matt Welsh implemented nonblocking sockets for Java; his performance benchmarks show that they have advantages over blocking sockets in servers handling many (up to 10000) connections. His class library is called java-nbio; it's part of the Sandstorm project. Benchmarks showing performance with 10000 connections are available.

See also Dean Gaudet's essay on the subject of Java, network I/O, and threads, and the paper by Matt Welsh on events vs. worker threads.

Before NIO, there were several proposals for improving Java's networking APIs:

  • Matt Welsh's Jaguar system proposes preserialized objects, new Java bytecodes, and memory management changes to allow the use of asynchronous I/O with Java.
  • Interfacing Java to the Virtual Interface Architecture, by C-C. Chang and T. von Eicken, proposes memory management changes to allow the use of asynchronous I/O with Java.
  • JSR-51 was the Sun project that came up with the java.nio package. Matt Welsh participated (who says Sun doesn't listen?).

Other tips

  • Zero-Copy
    Normally, data gets copied many times on its way from here to there. Any scheme that eliminates these copies to the bare physical minimum is called "zero-copy".
    • Thomas Ogrisegg's zero-copy send patch for mmaped files under Linux 2.4.17-2.4.20. Claims it's faster than sendfile().
    • IO-Lite is a proposal for a set of I/O primitives that gets rid of the need for many copies.
    • Alan Cox noted that zero-copy is sometimes not worth the trouble back in 1999. (He did like sendfile(), though.)
    • Ingo implemented a form of zero-copy TCP in the 2.4 kernel for TUX 1.0 in July 2000, and says he'll make it available to userspace soon.
    • Drew Gallatin and Robert Picco have added some zero-copy features to FreeBSD; the idea seems to be that if you call write() or read() on a socket, the pointer is page-aligned, and the amount of data transferred is at least a page, *and* you don't immediately reuse the buffer, memory management tricks will be used to avoid copies. But see followups to this message on linux-kernel for people's misgivings about the speed of those memory management tricks.

      According to a note from Noriyuki Soda:

      Sending side zero-copy is supported since NetBSD-1.6 release by specifying "SOSEND_LOAN" kernel option. This option is now default on NetBSD-current (you can disable this feature by specifying "SOSEND_NO_LOAN" in the kernel option on NetBSD_current). With this feature, zero-copy is automatically enabled, if data more than 4096 bytes are specified as data to be sent.
    • The sendfile() system call can implement zero-copy networking.
      The sendfile() function in Linux and FreeBSD lets you tell the kernel to send part or all of a file. This lets the OS do it as efficiently as possible. It can be used equally well in servers using threads or servers using nonblocking I/O. (In Linux, it's poorly documented at the moment; use _syscall4 to call it. Andi Kleen is writing new man pages that cover this. See also Exploring The sendfile System Call by Jeff Tranter in Linux Gazette issue 91.) Rumor has it, ftp.cdrom.com benefitted noticeably from sendfile().

      A zero-copy implementation of sendfile() is on its way for the 2.4 kernel. See LWN Jan 25 2001.

      One developer using sendfile() with Freebsd reports that using POLLWRBAND instead of POLLOUT makes a big difference.

      Solaris 8 (as of the July 2001 update) has a new system call 'sendfilev'. A copy of the man page is here. The Solaris 8 7/01 release notes also mention it. I suspect that this will be most useful when sending to a socket in blocking mode; it'd be a bit of a pain to use with a nonblocking socket.

  • Avoid small frames by using writev (or TCP_CORK)
    A new socket option under Linux, TCP_CORK, tells the kernel to avoid sending partial frames, which helps a bit e.g. when there are lots of little write() calls you can't bundle together for some reason. Unsetting the option flushes the buffer. Better to use writev(), though... (A sketch combining TCP_CORK with sendfile() appears after this list.)

    See LWN Jan 25 2001 for a summary of some very interesting discussions on linux-kernel about TCP_CORK and a possible alternative MSG_MORE.

  • Behave sensibly on overload.
    [Provos, Lever, and Tweedie 2000] notes that dropping incoming connections when the server is overloaded improved the shape of the performance curve, and reduced the overall error rate. They used a smoothed version of "number of clients with I/O ready" as a measure of overload. This technique should be easily applicable to servers written with select, poll, or any system call that returns a count of readiness events per call (e.g. /dev/poll or sigtimedwait4()).
  • Some programs can benefit from using non-Posix threads.
    Not all threads are created equal. The clone() function in Linux (and its friends in other operating systems) lets you create a thread that has its own current working directory, for instance, which can be very helpful when implementing an ftp server. See Hoser FTPd for an example of the use of native threads rather than pthreads.
  • Caching your own data can sometimes be a win.
    "Re: fix for hybrid server problems" by Vivek Sadananda Pai (vivek@cs.rice.edu) on new-httpd, May 9th, states:

    "I've compared the raw performance of a select-based server with a multiple-process server on both FreeBSD and Solaris/x86. On microbenchmarks, there's only a marginal difference in performance stemming from the software architecture. The big performance win for select-based servers stems from doing application-level caching. While multiple-process servers can do it at a higher cost, it's harder to get the same benefits on real workloads (vs microbenchmarks). I'll be presenting those measurements as part of a paper that'll appear at the next Usenix conference. If you've got postscript, the paper is available at http://www.cs.rice.edu/~vivek/flash99/"

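Tying the sendfile() and TCP_CORK tips above together, here is a rough sketch assumed by the translator rather than taken from the article; the header string and file path are hypothetical:

/*
 * Sketch (translator's illustration): serving a static file over a
 * connected socket with Linux sendfile() while TCP_CORK holds back
 * partial frames.
 */
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int send_file_corked(int sock, const char *path)
{
    int on = 1, off = 0;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

    const char hdr[] = "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n";
    write(sock, hdr, sizeof(hdr) - 1);  /* small write, held back by the cork */

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    struct stat st;
    fstat(fd, &st);

    off_t offset = 0;
    while (offset < st.st_size) {
        /* Zero-copy: the kernel moves file pages straight to the socket. */
        if (sendfile(sock, fd, &offset, st.st_size - offset) <= 0)
            break;
    }
    close(fd);

    /* Uncork: flush whatever partial frame is still buffered. */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    return offset == st.st_size ? 0 : -1;
}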
Other limits

  • Old system libraries might use 16 bit variables to hold file handles, which causes trouble above 32767 handles. glibc2.1 should be ok.
  • Many systems use 16 bit variables to hold process or thread id's. It would be interesting to port the Volano scalability benchmark to C, and see what the upper limit on number of threads is for the various operating systems.
  • Too much thread-local memory is preallocated by some operating systems; if each thread gets 1MB, and total VM space is 2GB, that creates an upper limit of 2000 threads.
  • Look at the performance comparison graph at the bottom of http://www.acme.com/software/thttpd/benchmarks.html. Notice how various servers have trouble above 128 connections, even on Solaris 2.6? Anyone who figures out why, let me know. 
    Note: if the TCP stack has a bug that causes a short (200ms) delay at SYN or FIN time, as Linux 2.2.0-2.2.6 had, and the OS or http daemon has a hard limit on the number of connections open, you would expect exactly this behavior. There may be other causes.

Kernel Issues

For Linux, it looks like kernel bottlenecks are being fixed constantly. See Linux Weekly News, Kernel Traffic, the Linux-Kernel mailing list, and my Mindcraft Redux page.

In March 1999, Microsoft sponsored a benchmark comparing NT to Linux at serving large numbers of http and smb clients, in which they failed to see good results from Linux. See also my article on Mindcraft's April 1999 Benchmarks for more info.

See also The Linux Scalability Project. They're doing interesting work, including Niels Provos' hinting poll patch, and some work on the thundering herd problem.

See also Mike Jagdis' work on improving select() and poll(); here's Mike's post about it.

Mohit Aron (aron@cs.rice.edu) writes that rate-based clocking in TCP can improve HTTP response time over 'slow' connections by 80%.

Measuring Server Performance

Two tests in particular are simple, interesting, and hard:

  1. raw connections per second (how many 512 byte files per second can you serve?)
  2. total transfer rate on large files with many slow clients (how many 28.8k modem clients can simultaneously download from your server before performance goes to pot?)

Jef Poskanzer has published benchmarks comparing many web servers. See http://www.acme.com/software/thttpd/benchmarks.html for his results.

I also have a few old notes about comparing thttpd to Apache that may be of interest to beginners.

Chuck Lever keeps reminding us about Banga and Druschel's paper on web server benchmarking. It's worth a read.

IBM has an excellent paper titled Java server benchmarks [Baylor et al, 2000]. It's worth a read.

Examples

Interesting select()-based servers

Interesting /dev/poll-based servers

  • N. Provos, C. Lever, "Scalable Network I/O in Linux," May, 2000. [FREENIX track, Proc. USENIX 2000, San Diego, California (June, 2000).] Describes a version of thttpd modified to support /dev/poll. Performance is compared with phhttpd.

Interesting kqueue()-based servers

Interesting realtime signal-based servers

  • Chromium's X15. This uses the 2.4 kernel's SIGIO feature together with sendfile() and TCP_CORK, and reportedly achieves higher speed than even TUX. The source is available under a community source (not open source) license. See the original announcement by Fabio Riccardi.
  • Zach Brown's phhttpd - "a quick web server that was written to showcase the sigio/siginfo event model. consider this code highly experimental and yourself highly mental if you try and use it in a production environment." Uses the siginfo features of 2.3.21 or later, and includes the needed patches for earlier kernels. Rumored to be even faster than khttpd. See his post of 31 May 1999 for some notes.

Interesting thread-based servers

Interesting in-kernel servers

Other interesting links

 


Changelog

$Log: c10k.html,v $
Revision 1.212  2006/09/02 14:52:13  dank  added asio
Revision 1.211  2006/07/27 10:28:58  dank  Link to Cal Henderson's book.
Revision 1.210  2006/07/27 10:18:58  dank  Listify polyakov links, add Drepper's new proposal, note that FreeBSD 7 might move to 1:1
Revision 1.209  2006/07/13 15:07:03  dank  link to Scale! library, updated Polyakov links
Revision 1.208  2006/07/13 14:50:29  dank  Link to Polyakov's patches
Revision 1.207  2003/11/03 08:09:39  dank  Link to Linus's message deprecating the idea of aio_open
Revision 1.206  2003/11/03 07:44:34  dank  link to userver
Revision 1.205  2003/11/03 06:55:26  dank  Link to Vivek Pei's new Flash paper, mention great specweb99 score

Copyright 1999-2006 Dan Kegel
dank@kegel.com
Last updated: 2 Sept 2006
[Return to www.kegel.com]
