Been following your changes for a while now.
I don't know how well known this is, but both mangos (I only verified realmd on Windows) and TC suffer from fd/handle limit issues: once the fd limit is reached, the servers become unconnectable, and this condition persists even after the number of open connections drops back below the limit.
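For context, on Unix the per-process fd limit that gets exhausted here is RLIMIT_NOFILE, which a server can at least raise to its hard cap at startup. A minimal sketch (plain POSIX, not mangos/TC code; the helper name is mine):

```cpp
#include <sys/resource.h>

// Raise the soft RLIMIT_NOFILE up to the hard limit and return the new value.
// No extra privileges needed for this; going beyond the hard limit would
// require root (or an admin raising the limit in limits.conf / systemd).
unsigned long raise_fd_limit() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return 0;
    rl.rlim_cur = rl.rlim_max;        // soft limit may be raised up to hard
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return 0;
    return (unsigned long)rl.rlim_cur;
}
```

Raising the limit only delays the problem, of course; the real issue is that the servers don't recover once the limit is hit.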
On Unix (where ACE_Dev_Poll_Reactor is most likely in use), ACE's epoll-driven code goes into an endless loop and never recovers unless the app is killed. Derex wrote a patch that overrides the error handling and suspends the reactor to prevent this loop (https://github.com/derex/TrinityCore/commit/5f50d8c20f47bcf73ce6fce09e2e98bc8fe1ccbe).
On Windows no solution has been found so far, as the ACE_TP_Reactor used there doesn't trigger error handling the way ACE_Dev_Poll_Reactor does on Unix.
Long story short: you might want to verify whether your proactor implementation suffers from the same problem.
Shooting something like this at the servers should reveal it:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;

my @sockarr;
for my $i (0 .. 5000) {
    if ($sockarr[$i] = IO::Socket::INET->new(Proto => "tcp", PeerAddr => "127.0.0.1", PeerPort => "3724")) {
        print "connection: $i successful\n";
    } else {
        print "connection: $i failed\n";
    }
}
print "end\n";
sleep 10000;
On Windows mangos uses FD_SETSIZE 4096 from what I've seen, so this script might need to be started multiple times, as Perl itself has a file-handle limit below 4096.