|reply to cvig |
Re: maximum number of open sockets, files, threads, etc..?
no, and you are right. "it depends".
I think the kernel gods like to say that as you approach large numbers you "aren't doing things right", so they get less interested in the answers.
So if the current practical maximum number of threads is, say, 32000 .. it is of no interest to them that this is hard to achieve because no properly written application should require that many threads.
The issue of maximums comes up more often than it should, because future web servers will need to handle 100,000+ concurrent connections. Apache still has one foot in the past and needs one process or thread per open connection. Lighttpd is single-process and single-threaded and uses epoll(), so it appears to be limited only by the maximum number of open file descriptors per process (which to me says it is still capped around 65k?).
"Comet" which is the next generation of "Ajax", requires a quantum leap on number of concurrent connections. Instead of Ajax scripts on each browser polling for answers, they want to hold open a connection, and get events. This is very useful for lots of things: dynamic displays of data, chat .. anything that changes can now change immediately yet take no resources in polling when there are no changes to be made. Such a more logical way of doing things. Google is already using it all over: google chat, finance.google.com for example.
But of course "comet" needs a web infrastructure (front end servers, load balancers, routers, firewalls, web servers, application servers) that is much better at far more simultaneously open connections.
And funnily enough, it becomes a lot easier to handle denial of service attacks as well: something that can handle 100k+ open connections can easily limit each IP address to a few, and easily identify and kill connections that misbehave. Half of a good denial of service attack is trying to open too many active connections and hold them (the other half is flooding).
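Per-IP limiting really is cheap once you keep a count per source address. A toy sketch of the idea (the table layout, the names, and the limit of 4 are my own invention, not taken from any real server):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PER_IP 4      /* illustrative per-address threshold */
#define TABLE_SIZE 4096   /* open-addressed count table */

struct slot { uint32_t ip; int count; bool used; };
static struct slot table[TABLE_SIZE];

static struct slot *find_slot(uint32_t ip)
{
    size_t i = ip % TABLE_SIZE;
    while (table[i].used && table[i].ip != ip)
        i = (i + 1) % TABLE_SIZE;   /* linear probing */
    return &table[i];
}

/* Called on accept(); returns false if this IP already holds too many. */
bool admit(uint32_t ip)
{
    struct slot *s = find_slot(ip);
    if (!s->used) { s->used = true; s->ip = ip; s->count = 0; }
    if (s->count >= MAX_PER_IP)
        return false;
    s->count++;
    return true;
}

/* Called when a connection from this IP closes. */
void release(uint32_t ip)
{
    struct slot *s = find_slot(ip);
    if (s->used && s->count > 0)
        s->count--;
}
```

The check is O(1) per accept, so even at 100k+ connections it costs essentially nothing.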
nginx is another interesting HTTP server -- on their website they claim 10,000 inactive HTTP keep-alive connections consume only 2.5 MB of memory. From the LKML thread, though, it sounds like the real limitation is 32-bit kernels with 8K stacks. If you go 64-bit, a lot of the limitations will be eased (at least in terms of maximum threads).
This sounds like the ideal time to bring up the idea of writing a test program.
said by cvig:
nginx is another interesting HTTP server
Unrelated to the thread topic, but of interest.
This happens to have been the HTTPd of choice of the authors of stormworm, the much-discussed malware. It's in fact so unique that many vendors chose to trigger IDS alerts on its banner.
For example: Cisco Intellishield
Edit: Changed tense
I like this part:
Because web connections aren't as long-lived as IMAP connections, we stayed with Apache for our frontends for a while longer. However, we've now switched over to using nginx for our frontend web proxy as well, which has also allowed us to increase the keep-alive timeout for HTTP connections to 5 minutes, which should result in a small perceptible improvement when moving between pages.
The net result of all this is that each frontend proxy server currently maintains over 10,000 simultaneous IMAP, POP, Web & SMTP connections (including many SSL ones) using only about 10% of the available CPU on 3.20GHz Netburst Xeon based CPUs.
nginx seems to have done it right: a small number of processes, each using epoll. If the kernel doesn't waste a whole heap of memory on each open connection, this should scale really well.
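The epoll pattern behind that scaling: register each fd once, then block on a single call that returns only the ready ones, so cost tracks activity rather than the number of open connections. A minimal, Linux-only sketch, using a pipe to stand in for sockets:

```c
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Register a pipe's read end with epoll, make it readable, and wait.
   Returns the number of ready fds reported, or -1 on error. */
int demo_epoll_once(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    int ep = epoll_create1(0);
    if (ep < 0)
        return -1;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, fds[0], &ev);   /* register once */

    write(fds[1], "x", 1);                       /* make read end ready */

    struct epoll_event ready[8];
    int n = epoll_wait(ep, ready, 8, 1000);      /* wake only for ready fds */

    close(fds[0]); close(fds[1]); close(ep);
    return n;
}
```

A real server registers thousands of sockets the same way; the 10,000 quiet keep-alive connections above simply never show up in epoll_wait() until they send something.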
|reply to BeesTea |
dammit .. now that I've read some more about nginx, I want to change our front end. Which means converting a bunch of RewriteRules, testing, and so on and so forth.
But since it offers proper per-IP limits on simultaneous connections, huge scalability, and such immense capacity for keep-alive and open-but-quiet connections, I can't see a reason to stick with apache2.
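For what it's worth, the rewrite conversion is usually mechanical. A hypothetical example (the paths are invented for illustration, not from any real config):

```nginx
# Apache (httpd.conf / .htaccess):
#   RewriteRule ^blog/([0-9]+)$ /index.php?post=$1 [L]

# nginx equivalent, inside a server { } block:
location / {
    rewrite ^/blog/([0-9]+)$ /index.php?post=$1 last;
}
```

The regex syntax carries over almost unchanged; it's mostly the flags ([L] becoming "last", etc.) and the surrounding block structure that differ.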