
Varnish is heavily threaded. It maintains queues where connections are put and worker threads pull them out. Each connection is expected to get a dedicated worker thread.


Since each thread is an actual kernel thread, this limits concurrent connections to the maximum number of threads the kernel can handle, which isn't that high.


Not as such. A connection can be accepted and queued on a different thread than the one that serves the request. This means only the requests actually being fulfilled at any given moment (a cached value being read from RAM or storage, or written out to the socket) are limited by the number of kernel threads. With that in place, that number doesn't really have to be very high to handle a lot of concurrent load.


Linux can create over 250,000 threads, though that figure may have been measured on a 32-bit system. On 64-bit it should be limited only by RAM.
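On Linux the practical ceiling comes from a few tunables rather than a hard-coded constant, and on 32-bit the ~3 GiB user address space with default ~8 MiB thread stacks is often the real limit, which is consistent with the caveat above. The actual values on a given box can be inspected like this (standard procfs paths; numbers vary by machine):

```shell
cat /proc/sys/kernel/threads-max   # system-wide thread cap, scaled from RAM at boot
cat /proc/sys/kernel/pid_max       # also bounds threads: each thread needs a pid
ulimit -u                          # per-user process/thread limit (RLIMIT_NPROC)
```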


The overhead of context switching becomes pretty high. Some say context switching has become cheap, but at the very least you still need to update the TLB and schedule the next pthread.


At least context-switching throughput should scale with the number of cores, and adding cores seems to be the main direction of hardware performance growth going forward.




