
WoW MaNGOS vs UO Sphere


Auntie Mangos



Hello,

The problem: the MaNGOS WoW emulator has many crashes, while the Sphere UO (Ultima Online) emulator has none.

When MaNGOS crashes, it dies with a segmentation fault.

When Sphere "thinks" it has crashed, it just reports the problem in the console:

22:50:CRITICAL:Assert pri=2:'pContOn' file 'CClientEvent.cpp', line 584, in CClient::DispatchMsg() #6 "drop item"

But it keeps working without a restart... all players are happy, online 24/7/365.

Question:

Why can't MaNGOS do something like that?

Or am I misunderstanding something? :huh:


As far as I know, this is only possible if a child thread crashes, not the main program thread.

As I understand it, this could only work in one case: if _every_ player ran in a separate thread, so that when one thread crashes, the other threads keep running.

Any comments?

Maybe you, or someone else, could clone MaNGOS with Git and start building such a system?


Running every player in its own thread would cause lots of problems and synchronization overhead...

It is not something one can easily implement...

There was also a discussion some time ago about whether MaNGOS should go the multithreading way (one executable) or the multiprocessing way (several executables, one for each part of the world), but I don't know how it ended.

Anyway, I don't think it's appropriate to discuss this here... Also, your request reads as if one could pull this off in a week. It is really far harder to implement than you think and may take quite some time...

The bottom line is, you will not see something like that in MaNGOS for at least a few months.

(Also, what you posted there is not a crash but a failed assertion. A programmer puts an assertion wherever he assumes a variable or pointer has to meet a certain condition; when the condition is not met, you get that error. It's not really a crash.)
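For illustration, a bare-bones assertion in C++ could look like the sketch below. This is only a generic example of the idea, not MaNGOS's or Sphere's actual macros, and the item/container names are made up.

```cpp
#include <cassert>
#include <cstdio>

struct Item;   // hypothetical world-object type

// The programmer documents an assumption: the container pointer must be
// valid before the "drop item" logic runs. If it is not, assert() prints
// the failed condition with file and line, then aborts the process.
void HandleDropItem(Item* pContOn)
{
    assert(pContOn != nullptr && "drop item: no container under the cursor");

    // ... the actual drop-item handling would go here ...
    std::puts("item dropped");
}
```

The difference with Sphere is that its assertion only logs the failure and abandons the current operation instead of aborting the whole server, which is exactly what a later post in this thread explains.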


22:50:CRITICAL:Assert pri=2:'pContOn' file 'CClientEvent.cpp', line 584, in CClient::DispatchMsg() #6 "drop item"
That message can't be compared to a segmentation fault; it's just an assertion that failed. That message alone doesn't prove that there are no crashes in Sphere.

The only proper way to get a more stable MaNGOS is to use static code analysis and code review and to pay attention to backtraces from users. That way crashes can be avoided altogether.
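As an example of what "backtraces from users" means in practice, a server can install a crash handler roughly like the sketch below, which dumps the call stack before the process dies so users have something concrete to attach to a bug report. This uses the glibc backtrace API and is only an assumption about how one might do it; it is not MaNGOS's actual crash handler.

```cpp
#include <csignal>
#include <execinfo.h>
#include <unistd.h>

// On SIGSEGV: write the call stack to stderr, then restore the default
// handler and re-raise the signal so the process still dumps core.
extern "C" void SegvHandler(int sig)
{
    void* frames[64];
    int count = backtrace(frames, 64);
    backtrace_symbols_fd(frames, count, STDERR_FILENO);

    signal(sig, SIG_DFL);
    raise(sig);
}

int main()
{
    signal(SIGSEGV, SegvHandler);

    // ... run the server ...
}
```

Users can then paste the printed stack into a bug report instead of just saying that the server "crashes somehow".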


As far as I know, this is only possible if a child thread crashes, not the main program thread.

As I understand it, this could only work in one case: if _every_ player ran in a separate thread, so that when one thread crashes, the other threads keep running.

Any comments?

Unfortunately not. Honestly, though:

When a thread (or process) raises SIGSEGV, the application is going to crash, no matter what you try to do against it. It *will* abort with this signal.

SIGSEGV is the signal raised on typical programming errors.

These errors can be fought, though.

For example, by implementing assertions (UO Sphere obviously does that): a set of conditions checked right before the actual function code runs, or right after the function has returned, in the parent function.

These assertions can then soft-abort the current thread, e.g. via C++ exceptions, which can be caught somewhere higher up in the call stack, where it is safe to catch them and simply forget about everything below (e.g. thanks to well-designed class destructors).

Obviously, UO Sphere does that.
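Roughly, that pattern could look like the sketch below. The class and function names are hypothetical, not Sphere's real code: the per-packet handler throws on a failed check, and the dispatcher one level up catches, logs, and keeps the server running.

```cpp
#include <stdexcept>
#include <iostream>

struct Client { int id; };
struct Item;   // hypothetical world-object type

// An "assertion" that soft-aborts by throwing instead of calling abort().
inline void CheckState(bool cond, const char* what)
{
    if (!cond)
        throw std::runtime_error(what);
}

void HandleDropItem(Client& client, Item* pContOn)
{
    CheckState(pContOn != nullptr, "drop item: pContOn is null");
    // ... normal drop-item handling ...
}

// One level up the call stack: a safe place to catch, because everything
// below has already been cleaned up by destructors during stack unwinding.
void DispatchMsg(Client& client, Item* pContOn)
{
    try
    {
        HandleDropItem(client, pContOn);
    }
    catch (const std::exception& e)
    {
        std::cerr << "CRITICAL: client " << client.id
                  << " sent a bad packet: " << e.what() << '\n';
        // only this packet (or this client) is dropped; the world keeps running
    }
}
```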

Next: there actually is a way of surviving a SIGSEGV without shutting down the whole virtual world - UNIX mail agents and web servers have shown that it works perfectly. But it comes at a price.

The service (in our case: mangos) has to be written in a heavily multi-process fashion; that is, instead of spawning child threads, it forks into child processes and sets up communication pipelines between them.

Then a client child process may indeed segfault, and the result might just be that one player client disconnects OR that one object (e.g. Malygos) disappears, because the child process serving that world object has died.

That's not bad at all, but you still have to write stable communication channels, and the core must of course be stable as well - the client processes, however, may fail as often as they like.

The multi-process approach has another advantage, though: you could run the service on a cluster, where each process can be balanced from one CPU/host node to another, which isn't easily done with a multi-threaded service.
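A minimal sketch of that process-per-client idea on POSIX is shown below. It is only an illustration of fork() plus child reaping, with made-up names, and nothing like MaNGOS's real architecture; the communication pipelines are left out. The point is that a SIGSEGV in a worker shows up in the parent as a dead child, which the parent can log and shrug off while the rest of the world keeps going.

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <cstdio>

// Placeholder for whatever a worker process would do for its one client.
static void ServeClient(int clientId)
{
    std::printf("child %d serving client %d\n", (int)getpid(), clientId);
    // A bug here would only kill this process, not the whole server.
    _exit(0);
}

int main()
{
    for (int clientId = 0; clientId < 3; ++clientId)
    {
        pid_t pid = fork();
        if (pid == 0)                 // child: serve exactly one client
            ServeClient(clientId);
        else if (pid < 0)
            std::perror("fork");
    }

    // Parent: reap workers. A SIGSEGV in a child is reported here as a
    // signal exit; the parent just logs it and carries on.
    int status = 0;
    pid_t dead;
    while ((dead = wait(&status)) > 0)
    {
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV)
            std::printf("worker %d segfaulted; only that client is lost\n", (int)dead);
    }
    return 0;
}
```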

My personal objection to all of this: just use Valgrind [1] to find the bugs and stay with the multi-threaded approach.

In the end, I love it when users run my applications under Valgrind (or simply open the core files in gdb) to provide much better bug feedback than just saying "it crashes somehow".

Cheers.

[1] http://www.valgrind.org/


Just an idea - I don't pretend to be an expert in Blizzard's way of hosting, but as far as I know and have noticed, every instance (like the Kalimdor map, the Eastern Kingdoms, dungeons) is hosted on a different server. In MaNGOS, though - correct me if I'm wrong - all of these are hosted and run on the same platform and code. Wouldn't it be a smarter choice to follow their way: different instances on different machines or in different processes? Then, if there's a problem with a GO crashing in Outland, only that server or that part of the code becomes unavailable, while you can still play and test on the other instances.



My first thought when I saw this topic was LOL...

My second thought was: RunUO is a better UO emulator than Sphere...

My third thought was: this is like comparing apples to donuts - it can't be done and shouldn't be done.

Also, Sphere and even RunUO have had a much longer development period than MaNGOS. At this point I am toying around with running MaNGOS, and it seems pretty stable to me.

