@leak
I know there are some platform limitations on the number of concurrent aio operations a process (or even system) can have. I have tried my best to combat this, but in some cases it will be better to use a reactor for performance reasons (I have logging which should alert people when this is the case).
As for the Windows problem you specifically pointed at, I may be able to help:
The FD_SET macro silently fails if fd_set::fd_count >= FD_SETSIZE
Also, deep in ACE, there are no bounds checks when adding sockets to the fd_set
The select reactor will therefore lose all data sent to sockets accepted after the platform-specific size limit is reached, because those sockets are never included in select() calls.
One possible solution for the Trinity codebase (mangos has somewhat different classes, but it would need the same approach to address this problem):
Count the number of current active connections in RealmAcceptor
In RealmSocket::open, if this count is greater than the platform limit, put the RealmSocket in a queue - do NOT call Base::open or RealmSession::OnAccept
When a socket closes, it should unregister itself, then dequeue a RealmSocket and call its Base::open and RealmSession::OnAccept
The platform limit should be ACE_DEFAULT_SELECT_REACTOR_SIZE
(this is all very similar to the algorithm I came up with for the aiocb list limitations)
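The steps above could be sketched roughly like this. It's a stand-alone simulation, not real Trinity code: PendingAcceptor, Socket, and open_now() are hypothetical stand-ins (open_now() is where the real code would call Base::open and RealmSession::OnAccept), and max_active_ plays the role of ACE_DEFAULT_SELECT_REACTOR_SIZE:

```cpp
#include <cstddef>
#include <deque>

// Hypothetical stand-in for RealmSocket.
struct Socket {
    bool opened = false;
    void open_now() { opened = true; }  // would be Base::open + RealmSession::OnAccept
};

// Hypothetical stand-in for the counting/queueing logic in RealmAcceptor.
class PendingAcceptor {
public:
    // max_active would be ACE_DEFAULT_SELECT_REACTOR_SIZE in the real code.
    explicit PendingAcceptor(std::size_t max_active) : max_active_(max_active) {}

    // Called from RealmSocket::open: register immediately if there is room,
    // otherwise queue the socket without opening it.
    void on_accept(Socket* s) {
        if (active_ < max_active_) {
            ++active_;
            s->open_now();
        } else {
            pending_.push_back(s);       // defer until a slot frees up
        }
    }

    // Called when an active socket closes: free the slot, then dequeue one
    // waiting socket and open it.
    void on_close() {
        --active_;
        if (!pending_.empty()) {
            Socket* next = pending_.front();
            pending_.pop_front();
            ++active_;
            next->open_now();
        }
    }

    std::size_t active() const { return active_; }
    std::size_t pending() const { return pending_.size(); }

private:
    std::size_t max_active_;
    std::size_t active_ = 0;
    std::deque<Socket*> pending_;
};
```

The key property is that a socket past the limit is parked rather than handed to the reactor, so its data is delayed instead of silently lost.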
@anybody that cares
I've made further improvements to my aio branch - please check them out and let me know what you think.
I tested 750 clients again, this time with CMSG_PING every 30 seconds. Latency seemed high, but I'm unsure whether it's due to the proactor or to the fact that all 750 clients run in a single .NET thread. More testing needed...