Re: Fastcode memory managers
From: Martin James (mjames_falcon_at_dial.pipex.com)
Date: Wed, 17 Nov 2004 11:37:57 -0000
> Naturally, the complete design is based on atomic memory operations. The
> real difference is that NexusMM will *never* force a thread context switch
> and will *never* hold a thread in a live loop (spin lock) waiting for
> another thread to "release" a lock.
Good, good, good....
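NexusMM's internals aren't shown in the thread, but the classic CAS-based building block behind this kind of design is a lock-free LIFO free-list (a "Treiber stack"). A minimal portable sketch, purely illustrative (the names and the int payload are mine, not NexusMM's):

```cpp
#include <atomic>

// A node stands in for a free memory block on the allocator's free-list.
struct Node {
    Node* next;
    int   payload;
};

std::atomic<Node*> g_head{nullptr};

void push(Node* n) {
    Node* old = g_head.load(std::memory_order_relaxed);
    do {
        n->next = old;
        // If another thread changed the head meanwhile, 'old' is
        // refreshed by the failed CAS and we simply retry -- no lock
        // is held and no thread is ever blocked.
    } while (!g_head.compare_exchange_weak(old, n,
                 std::memory_order_release, std::memory_order_relaxed));
}

Node* pop() {
    Node* old = g_head.load(std::memory_order_acquire);
    while (old && !g_head.compare_exchange_weak(old, old->next,
                      std::memory_order_acquire, std::memory_order_relaxed)) {
        // CAS failed: reload happened automatically, retry.
    }
    return old;  // NOTE: a production pop must also defeat the ABA
                 // problem (version counters, hazard pointers, etc.)
}
```

Note the CAS loop is not a spin lock in the sense criticised above: a failed compare-exchange means some other thread already completed its update, so system-wide progress is guaranteed.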
> Forced thread context switches
> (before the allocated time slice is used up) are one of the worst problems
> with multi-threaded programs and seriously affect performance.
*Avoidable* thread context switches, anyway.
> designed TCP server application, using I/O completion ports, a lock-free
> memory manager and generally avoiding the use of everything that could lead
> to a kernel mode wait state are able to keep 1 thread running per CPU in the
> system as long as enough requests come in, serving them without ever forcing
> a thread context switch and keeping the threads running for long periods of
> time (multiple continuous timeslices). This will result in the best
> overall performance. But every time there is contention inside the memory
> manager and one of the threads is forced to give up the remainder of its
> time slice (either by just calling sleep or, even worse, entering a kernel
> mode wait state) there is a noticeable drop in performance.
I have tried very hard to avoid as much memory manager action as possible in
my server, but I use critical sections to protect the pools of buffers, sockets etc. :(
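For reference, the kind of CS-guarded pool described here might look like the portable sketch below, with std::mutex standing in for a Win32 CRITICAL_SECTION (all names are illustrative). The catch the earlier poster is pointing at: when the lock is contended, the losing thread eventually parks on a kernel object, which is exactly the forced context switch the lock-free design avoids.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// A simple buffer pool guarded by one lock. Correct, but every
// acquire/release serialises on the mutex; under contention a thread
// blocks in the kernel until the holder releases it.
class BufferPool {
public:
    BufferPool(std::size_t count, std::size_t size) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(new char[size]);
    }
    char* acquire() {
        std::lock_guard<std::mutex> lock(mu_);  // ~EnterCriticalSection
        if (free_.empty()) return nullptr;      // pool exhausted
        char* buf = free_.back();
        free_.pop_back();
        return buf;
    }
    void release(char* buf) {
        std::lock_guard<std::mutex> lock(mu_);  // released on scope exit
        free_.push_back(buf);
    }
private:
    std::mutex         mu_;
    std::vector<char*> free_;  // buffers leak at shutdown; fine for a sketch
};
```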
AFAIK, you need at least three threads in your IOCP server, one to listen,
one to issue the overlapped reads/writes and one to run the protocol
handler. Even if you have no incoming connections and the listening thread
is quiescent, you still need one thread on either side of the IOCP queue, so
surely you will have context switches?
If a lock-free "lock" (!) fails to acquire, you have to do something. If
you are not resorting to a kernel synchro object or using the dreaded
spinlocks, I wonder what you are doing? :)