Posts posted by leak

  1. HOWEVER, Mass Dispel is still breaking stealth. Could you modify your patch and submit a patch for that? The devs might overlook this thread since one of the problems has been resolved. Thanks.

    Indeed, and so does Earthbind Totem, which, just like Mass Dispel, is not supposed to remove stealth anymore since patch 2.2.0.

  2. This discussion is about using Feign Death and Freezing Trap via macro.

    So if you can test on retail do the following:

    - Enter a duel

    - Hit the opponent (you should now be in combat with him)

    - Cast Feign Death and right after place a Freezing Trap (make a macro for the best effect; see the example below)

    Tell us whether the trap froze the target instantly or if it took the normal 2 sec trap charging time.
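
    For reference, a minimal macro of that kind could look like this (the classic two-liner; whether both casts fire in one press should be part of the test):

    /cast Feign Death
    /cast Freezing Trap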

  3. Actually, I'm reading it ;)

    And I would like to automate it too (you see how often I post crashlogs in here... just too lazy to copy them out of the crashlog files)... waza, do you have the script for me? :D

    What's the point of spamming the same issue several times? It's not like somebody suddenly starts caring about them anyway...

  4. Please do not post crash dumps with
    FreezeDetectorRunnable::run

    in them; it makes no sense.

    Besides that, personally I wouldn't expect anyone to read this thread anymore. Waza123 pretty much buried it with that automated spam containing either long-known bugs (GetInstance) or plain useless (FreezeDetectorRunnable) backtraces... :rolleyes:

  5. I'll try to make my point clearer with a simple example:

    LogGuid represents our 5 log slots per guild; the timestamp is very simplified in this case and also shows the issue I described above where multiple events share the same timestamp. I also "abused" NewRank to show the order in which the entries have been inserted.

    So assuming the server starts up now, we need to get LogGuid 3 because it was the last log entry inserted; then we can decrease that by 1 and get our next log slot in the rotation (I use a 4 -> 0 rotation in this example).

    guildid  [b]LogGuid[/b]  EventType  PlayerGuid1  PlayerGuid2  [b]NewRank[/b]  TimeStamp
         1        [b]0[/b]          5      1026267      1245903        [b]2[/b]         [b]16[/b]
         1        [b]1[/b]          5      1183380      1081428        [b]1[/b]         [b]16[/b]
         1        [b]2[/b]          5      1026267      1100058        [b]0[/b]         [b]15[/b]
         1        [b]3[/b]          5      1026267      1166813        [b]4[/b]         [b]16[/b]
         1        [b]4[/b]          5       234234       234234        [b]3[/b]         [b]16[/b]

    Server starting, trying to get the next log slot for the guild, executing:

    SELECT * FROM guild_eventlog ORDER BY TIMESTAMP DESC;

    guildid  LogGuid  EventType  PlayerGuid1  PlayerGuid2  NewRank  TimeStamp
         1        0          5      1026267      1245903        2         16
         1        1          5      1183380      1081428        1         16
         1        3          5      1026267      1166813        4         16
         1        4          5       234234       234234        3         16
         1        2          5      1026267      1100058        0         15

    As you can see, MySQL apparently uses the primary key as a tie-breaker when sorting equal values (TimeStamp), but no matter how we sort by LogGuid in addition to the timestamp, we always get the wrong value. In this case we either get 4 or 0, which is both wrong, since we need LogGuid 3 because that was the last log entry inserted.
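
    To illustrate (a sketch against the sample data above; the WHERE/LIMIT clauses are mine):

    -- tie-breaking by LogGuid descending returns 4, ascending returns 0,
    -- but the last inserted entry was LogGuid 3
    SELECT LogGuid FROM guild_eventlog
     WHERE guildid = 1
     ORDER BY TimeStamp DESC, LogGuid DESC
     LIMIT 1;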

  6. True, this fixed-slot log storage screams for some LIFO system to keep it clean and avoid those renumbering queries.

    The problem here is that you can't order by timestamp, because timestamps are "just" seconds and stuff can happen within the same second. So if you have, let's say, 3 log events at the same time and you happen to hit that log slot overflow, you have no chance of getting the _real_ next log slot number (LogGuid in our case) back. MySQL for example seems to help itself by looking at the primary key to sort equal values (the timestamp in this explicit case), and since the LogGuid is part of the primary key it would use that number (which is almost meaningless for the order in this overflow case) for sorting.

    The only way I see atm to avoid this problem is to store the "next sequence number" outside of that table (maybe a new field in guild), but that would lead to additional queries, which would not help with performance at all.
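
    A rough sketch of that idea, assuming a hypothetical NextLogGuid field in guild (the column name is made up; the event values are just the sample row from above):

    ALTER TABLE guild ADD COLUMN NextLogGuid INT UNSIGNED NOT NULL DEFAULT 4;

    -- on every new event: write into the current slot...
    REPLACE INTO guild_eventlog (guildid, LogGuid, EventType, PlayerGuid1, PlayerGuid2, NewRank, TimeStamp)
        SELECT guildid, NextLogGuid, 5, 1026267, 1245903, 2, UNIX_TIMESTAMP() FROM guild WHERE guildid = 1;
    -- ...then advance the counter in 4 -> 0 rotation; this is the extra query mentioned above
    UPDATE guild SET NextLogGuid = (NextLogGuid + 4) % 5 WHERE guildid = 1;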

  7. Those are four ;)

    #1 yes

    #2 for me it raises the lag border from 700 to 1100 players, like pretty much every mtmaps patch does

    #3 I had some weird crashes; trying to figure out atm if they are really related to that patch

    #4 Well, the theory is awesome; just try it and find out if it boosts your server's performance as much as advertised

  8. No it isn't. That is the master thread, which spawns all the other threads later. Its CPU consumption time always ends at ~0:21.60.

    The thread that does all the work on the server is what I call the main thread; if there is a better term for that, feel free to share.

    **edit

    Another crash i never had before: http://paste2.org/p/307520

    Either this patch is responsible or there is some really serious stuff wrong with all the commits from 8026 (0.12) up to 8128. :/

  9. Well, I never had this one before; it could be from the 0.12 branch too, ofc.

    Here's the load with ~960 players:

     6664 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:21.60   1  |       `- ./mangos-worldd
    6681 mangos    20   0 3314M 3046M  9920 S [b]19.0[/b] 38.3 20:13.38   2  |           `- ./mangos-worldd <- net
    6680 mangos    20   0 3314M 3046M  9920 S [b]20.0[/b] 38.3 20:04.87   1  |           `- ./mangos-worldd <- net
    6679 mangos    20   0 3314M 3046M  9920 S  1.0 38.3  2:29.36   3  |           `- ./mangos-worldd
    6678 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:00.02   2  |           `- ./mangos-worldd
    6677 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:05.56   3  |           `- ./mangos-worldd
    6676 mangos    20   0 3314M 3046M  9920 S [b]25.0[/b] 38.3 25:53.56   2  |           `- ./mangos-worldd <-- map?
    6675 mangos    20   0 3314M 3046M  9920 S [b]29.0[/b] 38.3 25:57.04   4  |           `- ./mangos-worldd <-- map?
    6674 mangos    20   0 3314M 3046M  9920 S  2.0 38.3  2:25.08   2  |           `- ./mangos-worldd
    6673 mangos    20   0 3314M 3046M  9920 S [b]29.0[/b] 38.3 25:58.11   3  |           `- ./mangos-worldd <-- map?
    6672 mangos    20   0 3314M 3046M  9920 R [b]72.0[/b] 38.3  1h03:01   1  |           `- ./mangos-worldd <-- main
    6668 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:03.95   3  |           `- ./mangos-worldd
    6667 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:42.00   3  |           `- ./mangos-worldd
    6666 mangos    20   0 3314M 3046M  9920 S  0.0 38.3  0:12.10   1  |           `- ./mangos-worldd

    This patch behaves pretty much like raczman's original OpenMP patch. It doesn't really take that much load off the main thread (at least not more than derex's patch does) and there is also this "one thread missing" issue. I switched to 4 map threads for this htop output.

    I don't know if my partial vmaps are responsible for this kind of behavior, yet somehow I'm far away from the nicely balanced numbers in Infinity's first post. :/

  10. Unfortunately no good hint; I added the last few lines of the server.log to my post above though.

    I searched all my crashlogs; I never had this one before, and yes, I was using derex's mtmaps patch all the time.

    I'd give that GetMap patch a shot too, but that needs backporting to 0.12 and I haven't seen anyone do that so far. :/

    As far as I can understand via Google Translate, that relocation patch has some big troubles with stealth; can someone tell us a little more about that patch/issue?

    This is my htop output with the 5.1 OpenMP patch and ~750 players online (gonna post again when there is more action):

    2999 mangos    20   0 2627M 2376M  9856 S  0.0 29.8  0:21.10   1  |       `- ./mangos-worldd
    3018 mangos    20   0 2627M 2376M  9856 R 12.0 29.8  2:52.65   1  |           `- ./mangos-worldd
    3017 mangos    20   0 2627M 2376M  9856 S 13.0 29.8  2:54.72   2  |           `- ./mangos-worldd
    3016 mangos    20   0 2627M 2376M  9856 S  2.0 29.8  0:23.62   3  |           `- ./mangos-worldd
    3015 mangos    20   0 2627M 2376M  9856 S 11.0 29.8  1:59.29   3  |           `- ./mangos-worldd
    3014 mangos    20   0 2627M 2376M  9856 S 12.0 29.8  2:00.23   4  |           `- ./mangos-worldd
    3013 mangos    20   0 2627M 2376M  9856 S  7.0 29.8  1:59.21   4  |           `- ./mangos-worldd
    3012 mangos    20   0 2627M 2376M  9856 S  7.0 29.8  1:58.65   2  |           `- ./mangos-worldd
    3011 mangos    20   0 2627M 2376M  9856 S  0.0 29.8  0:00.00   2  |           `- ./mangos-worldd
    3010 mangos    20   0 2627M 2376M  9856 S  7.0 29.8  1:58.71   3  |           `- ./mangos-worldd
    3009 mangos    20   0 2627M 2376M  9856 S  9.0 29.8  1:59.75   1  |           `- ./mangos-worldd
    3008 mangos    20   0 2627M 2376M  9856 S 11.0 29.8  2:00.26   1  |           `- ./mangos-worldd
    3007 mangos    20   0 2627M 2376M  9856 S  0.0 29.8  0:00.92   2  |           `- ./mangos-worldd
    3006 mangos    20   0 2627M 2376M  9856 S  1.0 29.8  0:26.46   2  |           `- ./mangos-worldd
    3005 mangos    20   0 2627M 2376M  9856 R [b]35.0[/b] 29.8  7:02.41   1  |           `- ./mangos-worldd <-- main
    3003 mangos    20   0 2627M 2376M  9856 R  0.0 29.8  0:00.75   4  |           `- ./mangos-worldd
    3002 mangos    20   0 2627M 2376M  9856 S  0.0 29.8  0:07.83   1  |           `- ./mangos-worldd
    3001 mangos    20   0 2627M 2376M  9856 S  0.0 29.8  0:01.83   1  |           `- ./mangos-worldd

    It didn't really change that much; there is still a lot of load on the main thread. That is with vmaps enabled in instances only (everything else is not playable imo).

    Another thing: is it really recommended to use 8 map threads? I was using 4 so far, and I guess with 1100 players even that is still overkill. With 8 threads the server load goes to 3 (3 threads got no CPU time during the last schedule). I think the one-thread-per-CPU rule of thumb still applies here - tell me if I'm wrong.
