
[dev] New networking code based on "proactor" pattern


Guest faramir118


@leak

I know there are some platform limitations on the number of concurrent aio operations a process (or even the whole system) can have. I have tried my best to work around this, but in some cases it will be better to use a reactor for performance reasons (I have logging which should alert people when this is the case).

As for the windows problem you specifically pointed at, I may be able to help:

The FD_SET macro silently fails if fd_set::fd_count >= FD_SETSIZE

Also, deep in ACE, there are no bounds checks when adding sockets to the fd_set

The select reactor will lose all data sent to sockets that were added after the platform-specific size limit was reached, because those sockets are never included in select() calls.
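A minimal Windows-only demo of that silent failure (illustration only, not code from the patch): FD_SET has no return value, so once fd_count reaches FD_SETSIZE any additional sockets are silently dropped and never make it into select().

```cpp
// Minimal Windows-only demo of the silent FD_SET failure (illustration only).
// Link against ws2_32. With the default FD_SETSIZE of 64, the loop below
// tries to add 74 sockets, but fd_count never exceeds 64 - the extras are
// silently ignored and would never be passed to select().
#include <winsock2.h>
#include <cstdio>

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    fd_set readSet;
    FD_ZERO(&readSet);

    for (int i = 0; i < FD_SETSIZE + 10; ++i)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        FD_SET(s, &readSet);   // no error, no return value - it just drops the socket
    }

    printf("fd_count = %u\n", readSet.fd_count);   // prints 64, not 74

    WSACleanup();
    return 0;
}
```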

Here is one possible solution for the Trinity codebase (mangos has some different classes, but would need the same thing to address this problem):

Count the number of currently active connections in RealmAcceptor

In RealmSocket::open, if this count is greater than the platform limit, put the RealmSocket in a queue - do NOT call Base::open or RealmSession::OnAccept

When a socket closes, it should unregister itself, then dequeue a RealmSocket and call its Base::open and RealmSession::OnAccept

The platform limit should be ACE_DEFAULT_SELECT_REACTOR_SIZE

(this is all very similar to the algorithm I came up with for the aiocb list limitations - see the sketch below)
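To make those steps concrete, here is a minimal sketch under some assumptions: the ConnectionGate class and its methods are invented for illustration, and the real wiring into RealmAcceptor/RealmSocket (and the ACE open/close hooks) would look different in an actual patch.

```cpp
// Hedged sketch only: ConnectionGate is a hypothetical helper, not code from
// the patch. RealmAcceptor would own one instance; RealmSocket::open asks it
// for admission, and the close/unregister path hands back a deferred socket.
#include <cstddef>
#include <queue>
#include <ace/Thread_Mutex.h>
#include <ace/Guard_T.h>

class RealmSocket;   // forward declaration - the real class lives in the authserver

class ConnectionGate
{
public:
    explicit ConnectionGate(size_t limit)   // e.g. ACE_DEFAULT_SELECT_REACTOR_SIZE
        : active_(0), limit_(limit) { }

    // Called from RealmSocket::open. Returns true if the socket may open now
    // (caller runs Base::open and RealmSession::OnAccept); otherwise the
    // socket is queued and must NOT be opened yet.
    bool TryAdmit(RealmSocket* sock)
    {
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        if (active_ < limit_)
        {
            ++active_;
            return true;
        }
        pending_.push(sock);
        return false;
    }

    // Called after a socket closes and unregisters itself. Returns the next
    // deferred socket to open (the active count stays the same: one closed,
    // one opened), or 0 if nothing is waiting.
    RealmSocket* OnClose()
    {
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        if (pending_.empty())
        {
            --active_;
            return 0;
        }
        RealmSocket* next = pending_.front();
        pending_.pop();
        return next;
    }

private:
    ACE_Thread_Mutex lock_;
    std::queue<RealmSocket*> pending_;
    size_t active_;
    size_t limit_;
};
```

Keeping admission in one place means the number of registered sockets never grows past the limit, so FD_SET never gets the chance to silently drop one.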

@anybody that cares :)

I've made further improvements to my aio branch - please check them out and let me know what you think.

I tested 750 clients again, this time with CMSG_PING every 30 seconds. Latency seemed high, but I'm unsure whether that is due to the proactor or because all 750 clients run in a single .NET thread. More testing needed...



2 weeks later...

3 weeks later...

I just haven't had the resources (time, equipment) to test and develop this further.

Ignoring the AIOCB synchronization, here is the general state as I see it:

Outgoing synchronization is as good as it will get

  • synchronized queueing, which can't be done in batch (SendPacket places each packet in the buffer as it is produced)
  • fast batch dequeueing (double buffering, so all pending data is sent at once - see the sketch below)
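This is not the actual patch code, just a minimal sketch of the double-buffering idea, assuming a single network (consumer) thread per socket: producers append under a short lock, and the sender swaps buffers and flushes the whole batch in one write.

```cpp
// Hedged sketch of the outgoing double buffer (illustration only, not the
// classes used in the branch).
#include <vector>
#include <ace/Thread_Mutex.h>
#include <ace/Guard_T.h>

class DoubleBufferedSendQueue
{
public:
    // Producer side (SendPacket-style): keep the critical section tiny,
    // just append the serialized packet bytes.
    void Enqueue(const std::vector<char>& packet)
    {
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        front_.insert(front_.end(), packet.begin(), packet.end());
    }

    // Consumer side (single network thread): swap buffers under the lock,
    // then send the whole batch with one write/aio_write outside the lock.
    std::vector<char>& SwapAndDrain()
    {
        back_.clear();                          // previous batch already sent
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        front_.swap(back_);
        return back_;
    }

private:
    ACE_Thread_Mutex lock_;
    std::vector<char> front_;   // producers append here
    std::vector<char> back_;    // the network thread drains this
};
```

The point of the swap is that SendPacket callers only ever contend for the short Enqueue lock; the expensive part (the actual write) never happens while the lock is held.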

Incoming synchronization could use some work

  • batch queue operations should reduce synchronization overhead
  • packet processing throughput is hindered by the implementation of ACE_Based::LockedQueue::next (the template version).
    It should be more eager, IMHO - inspect more than just _queue.front() (see the sketch below)
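A rough sketch of that idea (not the real ACE_Based::LockedQueue interface; the Checker::Process predicate is assumed here for illustration): instead of testing only the front element, scan forward and pop the first packet the checker will accept.

```cpp
// Illustration only - not ACE_Based::LockedQueue. The stock template next()
// tests only _queue.front(); if that one packet fails the check, nothing is
// processed this pass. This variant scans forward for the first acceptable
// element instead.
#include <deque>
#include <ace/Thread_Mutex.h>
#include <ace/Guard_T.h>

template <typename T>
class EagerLockedQueue
{
public:
    void add(const T& item)
    {
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        queue_.push_back(item);
    }

    // Checker is assumed to expose bool Process(T&), deciding whether a
    // packet can be handled right now (e.g. by session/map state).
    template <typename Checker>
    bool next(T& result, Checker& check)
    {
        ACE_Guard<ACE_Thread_Mutex> guard(lock_);
        for (typename std::deque<T>::iterator it = queue_.begin(); it != queue_.end(); ++it)
        {
            if (check.Process(*it))
            {
                result = *it;
                queue_.erase(it);
                return true;
            }
        }
        return false;
    }

private:
    ACE_Thread_Mutex lock_;
    std::deque<T> queue_;
};
```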

Accommodating the AIOCB limits adds some complexity and locking overhead, but I believe it is stable.

Some of this work is directly applicable to master, so I guess at the very least I can put together something that will provide an immediate benefit...


5 months later...

I have had time lately to revisit this patch, as well as a Mac system to do some debugging.

If anyone has a small realm, I would appreciate some feedback on this patch. Initially, it's OK to just test stability (if you can also test performance and give some numbers, that works too :))

One discouraging note: OS X Lion is configured with a max aio list size of only 90. :|

