Everything posted by faramir118

  1. The only 46 is an item enchantment field, but the typemask in the error is player|unit.
  2. Maybe a problem with Player::m_clientGUIDs not being maintained correctly.
  3. Pushed the queue mechanism. The general idea is there, but I will make a couple of changes soon. I want to use a priority queue, so that I can prioritize write ops. Deterministically speaking, write is a much more reliable operation - read ops never time out, so if the internal ACE aiocb list is full of reads we may deadlock until a client wakes us up. PS: I hope the code and comments make sense... [asynchronous (de)multiplexer] layered on [asynchronous event demultiplexing] will be a nightmare for whoever wants to read this code sometime in the future.
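     A minimal sketch of that prioritization, assuming a simple two-level ordering (the PendingOp/OpPriority names are illustrative, not from the actual patch):

         // Illustrative only: write ops sort ahead of reads, so the aiocb list
         // cannot fill up with reads alone.
         #include <queue>
         #include <vector>

         enum OpType { OP_READ = 0, OP_WRITE = 1 };

         struct PendingOp
         {
             OpType type;
             // ... buffer, completion handler, etc.
         };

         struct OpPriority
         {
             bool operator()(PendingOp const& a, PendingOp const& b) const
             {
                 return a.type < b.type;    // writes compare greater, so they pop first
             }
         };

         typedef std::priority_queue<PendingOp, std::vector<PendingOp>, OpPriority> OpQueue;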
  4. Just recently added, I haven't looked at it very much: https://github.com/Lynx3d/mangos/tree/transport
  5. Problem is with Player bots, so it should work in his case. Pathfinding prevents these bots from getting onto the transport, so it needs to be disabled when the bot or the player-the-bot-is-following is on a transport.
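     A rough sketch of the check, assuming the bot AI exposes the player it follows (GetMaster() is a placeholder name, and GetTransport() is assumed to return the transport a player is currently riding):

         // Sketch: mmap pathfinding should be skipped while either the bot or the player it
         // follows is on a transport; otherwise fall back to direct movement so the bot can
         // board and ride it. Player, GetMaster() and GetTransport() stand in for the real types.
         bool UsePathfindingFor(Player const* bot, Player const* leader)
         {
             return !bot->GetTransport() && !leader->GetTransport();
         }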
  6. Actually, by session queue I meant: when players try to log in, put them in a queue and don't let them into the world.
  7. You're using the wrong tool - on the left, select Pathfind Follow or Pathfind Straight. Can you tell us some info about your server?
     • which maps you have mmaps enabled for
     • which compiler optimizations you applied
     • mtmaps and how many threads
  8. By default in ACE, as Ambal pointed out, maximum POSIX aio list size is 1024 elements, which equates to 512 clients (since each client can have a pending read and write at the same time). This limit can be changed, although certain platforms won't cooperate. Asynchronous read/write calls fail immediately (with EAGAIN) if there aren't enough aio slots available. While it is a recoverable error, this isn't handled currently. Possible things that can be done:
     • write some runtime checks that avoid aio on platforms that have system-wide limits or low per-process limits
     • fall back on some synchronous mechanism
     • fall back on some queueing mechanism
     • use session queue to prevent exceeding aio slot limits
     Thoughts?
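     For context, a minimal sketch of what that failure mode looks like with raw POSIX aio (ACE wraps this, but the EAGAIN behavior is the same):

         #include <aio.h>
         #include <cerrno>
         #include <cstring>

         // Sketch: aio_write() returns -1 with errno == EAGAIN when no aio slot is
         // free; the request was never queued, so the caller has to retry later or
         // fall back to a synchronous or queued send.
         bool TryAsyncWrite(int fd, char const* data, size_t len, aiocb& cb)
         {
             memset(&cb, 0, sizeof(cb));
             cb.aio_fildes = fd;
             cb.aio_buf    = const_cast<char*>(data);
             cb.aio_nbytes = len;

             if (aio_write(&cb) == 0)
                 return true;               // request queued successfully

             if (errno == EAGAIN)
             {
                 // recoverable: out of aio slots - queue the buffer and retry later,
                 // or fall back to a synchronous send
             }
             return false;
         }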
  9. I pushed a 'lazier' crunch routine. Also properly handled EAGAIN/EWOULDBLOCK (it seems both are used in different circumstances), although it is just a naive retry which probably won't solve the real problem. I need to spend some time and get more familiar with *nix aio...
  10. I pushed multi-threaded proactor support. The one change I want to make is to move the acceptor into one of the spawned threads - right now it is in its own thread.
  11. Many changes, restructured all of the socket code. I will work on multiple proactor threads next. TProactor sounds interesting. If it works well, we can probably do away with the ACE_Reactor-based sockets entirely. However, it was written for a very different version of ACE, and updating it would take time. I'd still like to hear from *nix users about performance with the default ACE implementations.
  12. It's still relevant, but only this part is the fix for our issue:

          diff --git a/src/game/Unit.cpp b/src/game/Unit.cpp
          index 38b78c1..7c043b0 100644
          --- a/src/game/Unit.cpp
          +++ b/src/game/Unit.cpp
          @@ -321,7 +321,7 @@ void Unit::Update( uint32 update_diff, uint32 p_time )
               getThreatManager().UpdateForClient(update_diff);

               // update combat timer only for players and pets
          -    if (isInCombat() && (GetTypeId() == TYPEID_PLAYER || ((Creature*)this)->IsPet() || ((Creature*)this)->isCharmed()))
          +    if (isInCombat() && GetCharmerOrOwnerPlayerOrPlayerItself())
               {
                   // Check UNIT_STAT_MELEE_ATTACKING or UNIT_STAT_CHASE (without UNIT_STAT_FOLLOW in this case) so pets can reach far away
                   // targets without stopping half way there and running off.
  13. Multiple threads are possible - definitely something that has been on my list of things to do. It is accomplished by creating multiple ACE_Proactors. edit: For testing, should we ask for system-wide CPU utilization? There will probably be more CPU use in kernel-space than before, just not sure how much.
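      A rough sketch of that idea (thread handling reduced to std::thread for illustration; the real code would use the server's own thread wrappers and distribute sessions across the proactors as they are accepted):

          #include <ace/Proactor.h>
          #include <thread>
          #include <vector>

          // Sketch: one ACE_Proactor per thread, each running its own completion
          // dispatch loop. Completion handlers registered with a given proactor
          // are invoked on that proactor's thread.
          void RunProactorPool(size_t numThreads, std::vector<ACE_Proactor*>& proactors)
          {
              std::vector<std::thread> threads;
              for (size_t i = 0; i < numThreads; ++i)
              {
                  ACE_Proactor* p = new ACE_Proactor();
                  proactors.push_back(p);
                  threads.emplace_back([p]()
                  {
                      while (p->handle_events() != -1)   // blocks until a completion is dispatched
                          ;
                  });
              }
              for (std::thread& t : threads)
                  t.join();                              // in practice the world loop runs elsewhere
          }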
  14. I pushed a commit to implement double-buffered sending and removed the message queue, as per your suggestion on github.
      • outgoing packet buffers are reused (some small overhead when buffers must be enlarged)
      • for sending packets, cut down the amount of memcpy calls by half
      • 'enlarged' the critical sections in SendPacket and handle_write_stream (we are dealing with shared variables now)
      • removed unnecessary lock in ProcessIncoming
      Overall, much less CPU time spent on buffers. Could maybe remove resize overhead by picking a good starting size for the buffers.
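      A minimal sketch of the double-buffer pattern, assuming std::vector in place of whatever buffer type the real code uses:

          #include <cstdint>
          #include <mutex>
          #include <vector>

          // Sketch: SendPacket appends into the 'pending' buffer; when the previous
          // async write completes, the buffers are swapped and the freshly filled one
          // is handed to the next write. Both buffers keep their capacity, so
          // steady-state sends avoid reallocation and need only one memcpy per packet.
          class DoubleBuffer
          {
          public:
              void Append(uint8_t const* data, size_t len)
              {
                  std::lock_guard<std::mutex> guard(m_lock);
                  m_pending.insert(m_pending.end(), data, data + len);
              }

              // called from the write-completion handler
              std::vector<uint8_t> const& SwapAndGetSendBuffer()
              {
                  std::lock_guard<std::mutex> guard(m_lock);
                  m_sending.clear();        // keeps capacity, no reallocation
                  m_sending.swap(m_pending);
                  return m_sending;
              }

          private:
              std::mutex           m_lock;
              std::vector<uint8_t> m_pending;
              std::vector<uint8_t> m_sending;
          };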
  15. Now sending/receiving as much data as is available. I too haven't had any issues. (Sorry if I rebased while you were looking at it.) Hadn't even checked which platforms support WINSOCK2. I'll leave it as is.
  16. Actually, I'm not sure how this works at all. Sending packets out of order should break the RC4 stream cipher. Working on scatter/gather support now.
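      To illustrate the gather-write idea with plain POSIX writev (the actual patch goes through ACE's asynchronous interfaces, but the principle is the same: several buffers go out in one call, and their order on the wire is preserved, which the RC4 stream cipher depends on):

          #include <sys/uio.h>
          #include <unistd.h>

          // Sketch: send a packet header and body with a single gather-write instead
          // of copying them into one contiguous buffer first.
          ssize_t SendPacketGather(int fd, char const* header, size_t headerLen,
                                   char const* body, size_t bodyLen)
          {
              iovec vec[2];
              vec[0].iov_base = const_cast<char*>(header);
              vec[0].iov_len  = headerLen;
              vec[1].iov_base = const_cast<char*>(body);
              vec[1].iov_len  = bodyLen;
              return writev(fd, vec, 2);   // may send less than the total; caller must handle that
          }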
  17. Maybe fixed the issue.
      What was it?
      A key to shorten things: in packet dumps, the behavior can be seen as a lack of CTS packets - the client eventually stops sending that packet. It seems that CMTS and CTS packets are related:
      • when the client sends CMTS, it is immediately before CTS
      • time diffs between CTS increase as values in CMTS packets increase
      • in several dumps, I could see that the last CTS was preceded by a CMTS with a very large value
      What fixed it?
      The fix is rather simple - I queue SMSG_TIME_SYNC_REQ packets at the head of the send queue. This keeps the client synchronized when there are many queued packets. CMTS packets have much smaller values now and are less common, and the client seems happier (no freeze yet). This is probably a hacky fix... I see that WCell uses the CMTS value in all outgoing player movement packets. It may be nice to prioritize packets that destabilize the client if they aren't sent in a timely manner...
      Bad news
      This makes me think that asynchronous IO performs rather poorly, at least for sending. Anyway, please let me know what you think.
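      A rough sketch of that prioritization, assuming a simple deque-based send queue (Packet and the opcode constant are placeholders, not the real WorldPacket/Opcodes definitions):

          #include <cstdint>
          #include <deque>

          uint16_t const SMSG_TIME_SYNC_REQ = /* opcode value from Opcodes.h */ 0;

          struct Packet
          {
              uint16_t opcode;
              // ... payload ...
          };

          // Sketch: time-sync packets jump to the front of the queue so they are not
          // delayed behind a burst of queued world packets.
          void QueuePacket(std::deque<Packet>& sendQueue, Packet const& pkt)
          {
              if (pkt.opcode == SMSG_TIME_SYNC_REQ)
                  sendQueue.push_front(pkt);   // head of the queue: send as soon as possible
              else
                  sendQueue.push_back(pkt);
          }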
  18. Figures, I only tested in debug mode. Fixed those, plus some other changes.
  19. After updating to [11116], I can't seem to reproduce the original issue. Could anyone test this and report any problems? If so, make sure to update your mangos.conf file with the new option and enable Network.Async.
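      For testing, something like this in mangos.conf should enable it (hypothetical snippet - the exact name and default ship with the patch's conf.dist update):

          # enable the asynchronous network layer
          Network.Async = 1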
  20. Full disclosure: I had PseuWoW working a while ago... it was a lot more work than I had wanted, mostly resolving issues with dependencies and creating VS 2010 files from scratch. Irrlicht was in particularly bad shape, and I completely removed all traces of DirectX just to simplify things. I would have been happy to continue on with this, but I did some 'housekeeping' on my dev machine and apparently deleted the only copy, so that work is gone. Anyway, a large portion of the little client that I wrote was easily pieced together from PseuWoW, MaNGOS, and WCell - with good resources like those, it didn't take that much time or effort.
  21. The VC build files for PseuWoW are in bad shape, so I started from scratch. I also don't care about loading client data or graphics libraries - that's extra for what I want to do.
  22. No progress on this patch itself, just some effort towards better understanding the client-side issues. I have zero skill at asm reversing/decompiling, so I can't see what is going wrong from the client's perspective. Server side + packet sniffing looks normal (other than the client not responding). Created a simple dumb client; it will at least give me an easier time seeing what is received client-side. You can see it here: https://github.com/faramir118/MangosClient
  23. 6a6cde8d284370b160952582378190b92c5ac2d4 is from Nov 22, 2008 - this was right around the WotLK release, so I decided that commit was for 3.x compatibility. My guess was confirmed when I tested mmaps with the 2.4.3 vmap data produced by my current github branch - navmeshes and debug output look perfect on the spot-checked maps (except that they seem to have reversed the .mdx triangle vertex order for client >= 3.x). Fixes like the one you wrote are probably good to include, though.
  24. https://github.com/faramir118/mangos/tree/one_vmap_rewrite (make sure you're on the one_vmap_rewrite branch)
      I backported vmap v3 to mangos-one, along with some related code.
      • Extractor works on client 2.4.3
      • Assembler produces valid collision data
      • Core still needs to be tested (I don't have a db for this part)
      Look on github, or read this summary of the commits backported.
  25. git-cherry-pick is very useful for pulling individual commits from any repository into your own, but it's not always perfect - like any git operation that deals with other branches or repositories, merge conflicts happen. For backporting to zero/one, it's worse because 1) a large portion of the code base is different in each repository and 2) some things have to be left unchanged to preserve client compatibility.