Running behavior on 500+ users



I tried 1400 users without any vmaps. Slightly less spell lag, but still noticeable. I'd really like to know where that is coming from... With mtmaps the main thread mostly sits around 70% CPU usage, so it's not really overloaded.

@throneinc

Those patches above are Derex's mtmaps patch. I adapted the version from that Trinity patch collection and it's working fine.

Anticheat: http://getmangos.eu/community/viewtopic.php?id=4404

@Poe

What's your playercount?


server.log

Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor  Spell 11712
Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor  Spell 11712
Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor  Spell 27049
Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor  Spell 27049

Quote from "other" ;-) forum, where also mtmaps are developed

The only place where that particular typo ("owerflow") appears is:

Unit.cpp

Quote:

void Unit::ProcDamageAndSpellFor(bool isVictim, Unit* pTarget, uint32 procFlag, uint32 procExtra, WeaponAttackType attType, SpellEntry const* procSpell, uint32 damage)
{
    deep++;
    if (deep > 5)
    {
        sLog.outError("Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor");
        if (procSpell)
            sLog.outError(" Spell %u", procSpell->Id);
        deep--;
        return;
    }

After a few hours of server uptime

Quote:

ERROR:Prevent possible stack owerflow in Unit::ProcDamageAndSpellFor

appears.

I use only this patch plus a few smaller ones, but they don't matter. Without mtmaps nothing happens...

I've got huge spam of these errors, and judging from crash logs I know I'm not alone. Can anyone solve this problem?

mangos-0.12 rev 7290, mtmaps (by Derex), procflag, and a few other patches by KAPATEJIb. Debian OS.

The issue also exists in a build without procflag: http://getmangos.eu/community/showpost.php?p=61473&postcount=167

@hikikomori18

To be honest, I don't know; I just reported the issue to the head admin. Try Google; there are many answers, mainly given by PHP forum admins.



I'd like to talk a bit about the running behavior of mangos "under pressure" since there is virtually no information around about it.

Running worldd on a dedicated box (AMD X2 6000+, 3GHz per core, 6GB RAM, software RAID0) and mysql/realmd on a slightly slower box, crosslinked via a dedicated 1Gbit connection.

My observations:

Mangos scales quite well with an increasing number of users; there is only one issue I've noticed so far: vmaps.

With full vmaps enabled, at around 600 users the in-game spell execution delay starts to increase massively.

This can be staved off by disabling vmaps for the world continents (vmap.ignoreMapIds = "369, 0, 1, 530").
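For reference, a minimal mangosd.conf excerpt for this workaround might look like the following (assuming the 0.12-era option names; the LOS/height lines are only shown for context and are not part of the workaround itself):

vmap.enableLOS    = 1
vmap.enableHeight = 1
# 0 = Eastern Kingdoms, 1 = Kalimdor, 530 = Outland, 369 = Deeprun Tram
vmap.ignoreMapIds = "369, 0, 1, 530"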

With this setting enabled I can go up to 1300 users before the same problems start again.

At first I thought the CPU of the world server had hit its limit, but it only peaks at 95% on the core running the mangos main thread.

Wild theories:

Unfortunately mtmaps isn't in a working state, so I can't really test all the things I wanted.

I've heard other people blame slow MySQL execution for this kind of delay in spell and movement behavior, yet I think I have tuned my database pretty well and rather suspect it is a combination of issues.

My MySQL server averages about 250-300 queries/sec (INSERT, DELETE, UPDATE, SELECT), peaking up to 1000 (player saves, I assume) once in a while, and I've even seen it doing 4000 qps without problems.
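(For anyone who wants to compare numbers: rates like these can be sampled with plain mysqladmin, assuming the standard client tools are installed next to the server; the credentials and the 10-second interval are just an example.)

mysqladmin -u root -p --relative --sleep=10 extended-status | grep -E "Com_(select|insert|update|delete)"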

I can't really explain that mysql<->vmaps interaction yet, but as the "disable vmaps for part of the world" workaround shows, there is a problem in the vmaps implementation that reduces smoothness while there is still free capacity on the backend (CPU, RAM, DB).

I would appreciate it if someone with more insight into mangos' inner workings could comment on this, and on whether there is a "doable" approach to solving it without rewriting half of the core.

Note: I'm not talking about the multithreading/multicomputing topic here. Imo those are higher design goals that should be approached once an application already eats all the hardware it can get (100% usage on multiple cores, etc.), which isn't really mangos' problem yet.

I'd also like to encourage other users running mangos with more than 500 players to share their impressions and maybe exchange a few config tips.


If you have enough RAM, you can convert your DBs to InnoDB and tune it properly, including caches.
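Just as a rough sketch, not a recommendation (the table name is from the characters DB, and the cache sizes below are generic examples that depend entirely on your hardware), the conversion plus tuning could look like this:

# convert each table of the characters DB, e.g.:
#   ALTER TABLE characters ENGINE = InnoDB;
# then give InnoDB most of the free RAM in my.cnf:
innodb_buffer_pool_size        = 2048M
innodb_log_file_size           = 256M
# flush the log to disk only once per second (faster, slightly less durable)
innodb_flush_log_at_trx_commit = 2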

From what I know, the DB slowdown can be caused by writing player saves as ~400-500KB data blocks per user.

Wyk3d was working on a binary implementation of this; maybe it will result in splitting the large block into separate columns (at least to some extent) later.


OK, here we go: we have two servers up and running. One runs realmd with the database and a webserver (which doesn't use much performance), and the other runs mangosd (I'm not 100% sure, but I think both servers are in the same local network with a 1Gbit connection). During rush hour we have between 600 and 800 players online.

We have vmaps disabled on all continents (but not in instances) and we have mtmaps running. We get lag really often, and we have crashes every day.


@freghar

I'm running the characters and realmd databases in InnoDB with innodb_buffer_pool_size = 3072M (4GB RAM database machine), so basically almost the full character DB should fit into RAM.

@Blub

Could you also post the machine specs and the mangos rev? It seems to be a slightly older one since mtmaps stopped working at some point?


@Blub

Could you also post the machine specs and the mangos rev? It seems to be a slightly older one since mtmaps stopped working at some point?

Here is a graphic showing the server structure (in German, but I hope it's readable for non-German speakers too): http://img56.imageshack.us/img56/9951/serverstrukturqa7.jpg

rev:

[22:43] MaNGOS/0.12.0 (* * Revision 6991 - c1d24a151d210a4fe907d45d8935d6994aa446b6) for Unix (little-endian)

[22:43] Using script library: Revision [810] 2008-12-22 10:21:48 (Unix)

[22:43] Using World DB: UDB 0.10.4 (363) for MaNGOS 2008_10_31_01 with SD2 SQL for rev. 733

HiTmAn has tried to fix mtmaps: http://github.com/HiTmAn/mangos/tree/mtmaps

OK, I adapted mtmaps from the Trinity patch repository and it seems to work quite well.

 4443 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:12.81  |       `- ./mangos-worldd
4471 mangos    20   0 3704M 3533M  9444 R 17.0 59.3 18:12.80  |           `- ./mangos-worldd <- net
4470 mangos    20   0 3704M 3533M  9444 R 17.0 59.3 18:28.55  |           `- ./mangos-worldd <- net
4469 mangos    20   0 3704M 3533M  9444 S  7.0 59.3  7:07.51  |           `- ./mangos-worldd
4468 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:00.09  |           `- ./mangos-worldd
4467 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:05.75  |           `- ./mangos-worldd
4466 mangos    20   0 3704M 3533M  9444 R  0.0 59.3  0:40.82  |           `- ./mangos-worldd
4465 mangos    20   0 3704M 3533M  9444 R 69.0 59.3  1h05:39  |           `- ./mangos-worldd <- main
4464 mangos    20   0 3704M 3533M  9444 R 16.0 59.3 16:28.51  |           `- ./mangos-worldd <- map
4463 mangos    20   0 3704M 3533M  9444 R 19.0 59.3 16:24.00  |           `- ./mangos-worldd <- map
4447 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:10.72  |           `- ./mangos-worldd
4446 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:33.39  |           `- ./mangos-worldd
4445 mangos    20   0 3704M 3533M  9444 S  0.0 59.3  0:16.39  |           `- ./mangos-worldd

There is now a little more room on the main thread, yet the problem still comes back at 1400 users.

Usually all those quad-core CPUs have less power per core. Can anyone with a larger user count post htop output (or similar) showing what the main thread is doing on such a machine?
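(For reference, per-thread CPU usage like the listing above can also be grabbed non-interactively; this assumes the binary is called mangos-worldd as in my setup and that a single worldd process is running:)

top -b -H -n 1 -p $(pidof mangos-worldd)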


@freghar

I'm running the characters and realmd databases in InnoDB with innodb_buffer_pool_size = 3072M (4GB RAM database machine), so basically almost the full character DB should fit into RAM.

I did the same thing, and installed a modified MySQL backend that one of my GMs came up with. I've never had a problem since.


http://github.com/derex/mangos/tree/mtmaps works perfectly on mangos-0.12

32703 wow       20   0 2012m 1.8g  12m R   73 46.5  43:14.13 2 mangos-worldd
32701 wow       20   0 2012m 1.8g  12m S   15 46.5  10:05.07 3 mangos-worldd
32708 wow       20   0 2012m 1.8g  12m S   14 46.5   8:58.89 3 mangos-worldd
32702 wow       20   0 2012m 1.8g  12m S   14 46.5  10:01.60 1 mangos-worldd
32700 wow       20   0 2012m 1.8g  12m S   13 46.5  10:02.93 1 mangos-worldd
32699 wow       20   0 2012m 1.8g  12m S   11 46.5  10:01.27 2 mangos-worldd

But there are performance issues: at 1100 online we have a noticeable delay of up to 2-3 seconds in any in-game action, such as spell casting and melee hits registering.

During the lag the main thread does not consume all the CPU time of a single core.

So multithreading is not a panacea for raising mangos' online player count.

And maybe the reason for the lag is not single-core overload after all.


http://github.com/derex/mangos/tree/mtmaps works perfectly on mangos-0.12

On which revision? I have tried several revisions, but I'm still struggling with Git sometimes, so I am not sure whether I am using the commands incorrectly or just the wrong rev. Can you tell me which one you mean?
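For anyone else fighting with Git, a merge attempt could look roughly like this; the remote name and the branch names are assumptions pieced together from the URLs in this thread, so double-check them:

git clone git://github.com/mangos/mangos.git
cd mangos
git checkout -b 0.12-mtmaps origin/mangos-0.12
git remote add derex git://github.com/derex/mangos.git
git fetch derex
git merge derex/mtmaps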

edit: Well, I managed to merge the mtmaps branch from Derex into the 0.12 branch, but I got several errors while compiling...

Error 1 error C2059: syntax error : 'constant' \\dep\\ACE_wrappers\\ace\\OS_NS_stdlib.h 210 game

I get this error several times, so I assume this is not working with the latest 0.12 rev (7174).

Which one works in your opinion, scamp? I hope you know that we are trying to get it working with one of the latest 0.12 branch revisions...

