leak

Members
  • Posts: 189
  • Joined
  • Last visited: Never
  • Donations: 0.00 GBP

leak's Achievements

Advanced Member (3/3)

Reputation: 0
  1. On a server with a few hundred players, there are some that exceed the maximum client response delay (90 sec by default) even though they still seem to be connected and their latency is fine. The number of people failing the max response delay is much bigger than the number failing actual checks, which seems a bit odd. Does anyone have an idea what that is all about? (A sketch of how such a check might work follows after this list.)
  2. Can't really do any further debugging - not a Mac owner myself either.
  3. Any news on OS X support? I tried your implementation, TOM_RUS, and currently it crashes the client after loading the module (Module_0DBBF209A27B1E279A9FEC5C168A15F7_Data). **edit: Client crashes with "Improper header received: [ CE FA ED FE 07 00 00 00 03 00 00 00 08 00 00 00 0A 00 00 00 48 07 00 00 85 20 01 00 01 00 00 00 ]" (see the header breakdown after this list).
  4. Been following your changes for a while now. I don't know how well known this is, but both mangos (I only verified realmd on Windows) and TC suffer from fd/handle limit issues. Once the fd limit is reached, the servers become unconnectable, and this condition persists even after the number of open connections drops back below the fd limit. On Unix (where ACE_Dev_Poll_Reactor is most likely in use), the ACE-driven epoll code goes into an endless loop and never recovers unless the app is killed. Derex wrote a patch overloading the error handling and suspending the reactor to prevent this loop (https://github.com/derex/TrinityCore/commit/5f50d8c20f47bcf73ce6fce09e2e98bc8fe1ccbe). On Windows no solution has been found so far, as the ACE_TP_Reactor used there doesn't trigger error handling the way ACE_Dev_Poll_Reactor does on Unix. Long story short: you might want to verify whether your proactor implementation suffers from the same problem. Shooting something like this at the server might reveal it (and see the EMFILE recovery sketch after this list):

         #!/usr/bin/perl
         use IO::Socket;

         # Open connections until the target server runs out of fds.
         my @sockarr;
         for (my $i = 0; $i <= 5000; $i++) {
             if ($sockarr[$i] = IO::Socket::INET->new(Proto => "tcp", PeerAddr => "127.0.0.1", PeerPort => "3724")) {
                 print "connection: ${i} successful\n";
             } else {
                 print "connection: ${i} failed\n";
             }
         }
         print "end";
         sleep 10000;   # keep all sockets open

     On Windows mangos uses FD_SETSIZE 4096 from what I've seen, so this script might need to be started multiple times, as Perl has a file-handle limit below 4096.
  5. Thanks for your summary. I'd be interested to hear more about why you turned SMF down. The db conversion is a one-time process, so it shouldn't matter that much, I guess.
  6. Mind sharing your thoughts about which open-source BB to pick? For all I know, PunBB is a pile of crap, and FluxBB forked from that. I mean, there are tons of other options, so I'm kinda curious...
  7. I beg to differ, at least with vmaps that is. http://paste2.org/p/620332
  8. Still broken in http://github.com/mangos/mangos/commit/d17c1ab25f4906411cecf16ea0ff1e6338e0e1ae
  9. mangos 0.12 doesn't; I had to revert the mentioned commit to get it working again.
  10. After the recent change to threat lists I'm encountering a lot of crashes related to them. Since I'm running an mtmaps patch, I believe it is again a thread-safety issue, just like with the bg queues before. So my question is: is someone able to make those threat lists thread-safe? Any help is appreciated (a minimal locking sketch follows after this list). p.s. Yes, mtmaps is not supported, bla bla, yet half the mangos folks are using it, but then again this is no official bug report in the bug report section...
  11. Revision: mangos-0.12 e312f67f66f048a4dbeae7df230f714e27eea0cd
      Patches: none
      Description of bug: Mage talent Improved Scorch stops proccing from Scorch after c1b2ec is applied
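
A note on post 1 above: a minimal sketch, assuming the server keeps a per-client "last response" timestamp and kicks anyone quiet for longer than the configured 90 seconds. All names here (ClientState, MAX_RESPONSE_DELAY) are hypothetical, not the actual mangos code; the point is that if the timestamp is only refreshed when a check response is parsed, a client whose pending check stalls will trip the limit even though its connection and latency look fine.

    #include <cstdint>

    // Hypothetical names, not the actual mangos implementation.
    static const uint32_t MAX_RESPONSE_DELAY = 90; // seconds, the default from post 1

    struct ClientState
    {
        uint32_t lastResponseTime; // assumption: only updated when a check response is parsed
        bool     checkInFlight;
    };

    // Assumed to be called periodically from the world update loop.
    bool ExceedsMaxResponseDelay(ClientState const& c, uint32_t now)
    {
        // Ordinary ping/keep-alive traffic does not touch lastResponseTime
        // in this sketch, so a connected, low-latency client whose pending
        // check never completes still exceeds the limit.
        return c.checkInFlight && (now - c.lastResponseTime) > MAX_RESPONSE_DELAY;
    }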
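On post 3: those "improper header" bytes are not garbage. Read as little-endian dwords they form a valid 32-bit Mach-O header (magic 0xFEEDFACE, cputype 7 = CPU_TYPE_X86, cpusubtype 3 = CPU_SUBTYPE_I386_ALL, filetype 8 = MH_BUNDLE, 10 load commands, 0x748 bytes of load commands), and the trailing 01 00 00 00 is the cmd field of the first load command (LC_SEGMENT). So the module seems to arrive intact, and the failure is likely in whatever parses it or expects a different format. A small C++ check of that decode (struct layout per Apple's <mach-o/loader.h>; little-endian host assumed):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // 32-bit Mach-O header layout, as in Apple's <mach-o/loader.h>.
    struct mach_header32
    {
        uint32_t magic;      // 0xFEEDFACE for 32-bit Mach-O
        uint32_t cputype;    // 7  = CPU_TYPE_X86
        uint32_t cpusubtype; // 3  = CPU_SUBTYPE_I386_ALL
        uint32_t filetype;   // 8  = MH_BUNDLE
        uint32_t ncmds;      // 10 load commands
        uint32_t sizeofcmds; // 0x748 bytes of load commands
        uint32_t flags;      // 0x12085
    };

    int main()
    {
        // First 28 bytes of the dump from post 3.
        unsigned char const dump[28] = {
            0xCE, 0xFA, 0xED, 0xFE, 0x07, 0x00, 0x00, 0x00,
            0x03, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
            0x0A, 0x00, 0x00, 0x00, 0x48, 0x07, 0x00, 0x00,
            0x85, 0x20, 0x01, 0x00
        };
        mach_header32 h;
        std::memcpy(&h, dump, sizeof(h));
        std::printf("magic=0x%08X cputype=%u filetype=%u ncmds=%u\n",
                    h.magic, h.cputype, h.filetype, h.ncmds);
        return 0;
    }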
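On post 4: the endless loop on Unix matches the classic accept()/EMFILE problem: epoll keeps reporting the listen socket as readable, but accept() can never succeed, so the event loop spins. Derex's patch sidesteps this by suspending the reactor; another common mitigation (a sketch of the general technique, not what mangos/TC actually does) is to hold one spare fd in reserve so the pending connection can always be accepted and immediately closed:

    #include <cerrno>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/socket.h>

    // Reserve one fd at startup so the accept queue can always be drained.
    static int spare_fd = open("/dev/null", O_RDONLY);

    // Called when the listen socket becomes readable (e.g. from an epoll loop).
    void handle_accept(int listen_fd)
    {
        int client = accept(listen_fd, 0, 0);
        if (client >= 0)
        {
            // ... hand the connection to the server as usual ...
            return;
        }
        if (errno == EMFILE || errno == ENFILE)
        {
            // Out of fds: release the spare, accept the pending connection
            // just to close it, then re-arm the spare. Without this the
            // listen socket stays readable forever and the loop spins.
            close(spare_fd);
            client = accept(listen_fd, 0, 0);
            if (client >= 0)
                close(client);
            spare_fd = open("/dev/null", O_RDONLY);
        }
    }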
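On post 10: a minimal sketch of what making a threat list thread-safe could look like, assuming ACE is available (mangos of that era links against it). The ThreatContainer/HostileReference names are made up for illustration and are not the actual mangos types; the point is that every access that mutates or walks the list has to take the same lock:

    #include <list>
    #include <ace/Thread_Mutex.h>
    #include <ace/Guard_T.h>

    // Illustrative stand-ins, not the real mangos threat classes.
    struct HostileReference { void* target; float threat; };

    class ThreatContainer
    {
    public:
        void addThreat(void* target, float amount)
        {
            ACE_Guard<ACE_Thread_Mutex> guard(m_lock);
            HostileReference ref = { target, amount };
            m_list.push_back(ref);
        }

        HostileReference* getMostHated()
        {
            ACE_Guard<ACE_Thread_Mutex> guard(m_lock);
            return m_list.empty() ? 0 : &m_list.front();
        }

    private:
        std::list<HostileReference> m_list;
        ACE_Thread_Mutex m_lock;
        // Note: handing raw pointers out of getMostHated() is still racy;
        // with mtmaps every caller that dereferences them needs the lock too.
    };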