
Crash in new ACE


Guest Marik


Hello,

Since the new ACE version http://github.com/mangos/mangos/commit/3c0f8c0efca896fd900adc7bc4cce6ec6360086c my server crashes very often:

Core was generated by `./mangos-worldd'.

Program terminated with signal 11, Segmentation fault.

[New process 14287]

[New process 14271]

[New process 14286]

[New process 14285]

[New process 14284]

[New process 14283]

[New process 14282]

[New process 14281]

[New process 14280]

[New process 14279]

[New process 14278]

[New process 14277]

[New process 14276]

[New process 14274]

[New process 14273]

[New process 14272]

#0 0x00007f09243349bc in ACE_Message_Block::total_size_and_length (

this=0x7f08ea26d0d0, mb_size=@0x4780ce40, mb_length=@0x4780ce38)

at ../../../../dep/ACE_wrappers/ace/Message_Block.cpp:264

264 mb_size += i->size ();

#0 0x00007f09243349bc in ACE_Message_Block::total_size_and_length (

this=0x7f08ea26d0d0, mb_size=@0x4780ce40, mb_length=@0x4780ce38)

at ../../../../dep/ACE_wrappers/ace/Message_Block.cpp:264

i = (const ACE_Message_Block *) 0x7f08ea26d0d0

#1 0x00000000005189af in ACE_Message_Queue<ACE_NULL_SYNCH>::dequeue_head_i (

this=0x7f08f3917ba0, first_item=@0x4780cea0)

at ../../../dep/ACE_wrappers/ace/Message_Queue_T.cpp:1461

mb_bytes = 0

mb_length = 0

#2 0x0000000000517ffc in ACE_Message_Queue<ACE_NULL_SYNCH>::dequeue_head (

this=0x7f08f3917ba0, first_item=@0x4780cea0, timeout=<value optimized out>)

at ../../../dep/ACE_wrappers/ace/Message_Queue_T.cpp:1941

ace_mon = {<No data fields>}

#3 0x000000000081f1bb in WorldSocket::handle_output_queue (

this=0x7f08f3935d00, g=@0x4780ced0)

at ../../../src/game/WorldSocket.cpp:375

mblk = <value optimized out>

send_len = <value optimized out>

n = <value optimized out>

#4 0x000000000081fa22 in WorldSocket::handle_output (this=0x7f08f3935d00)

at ../../../src/game/WorldSocket.cpp:362

Guard = {lock_ = 0x7f08f3936020, owner_ = 0}

send_len = 0

n = 4294967295

#5 0x00007f0924318fbf in ACE_Dev_Poll_Reactor::dispatch_io_event (

this=0x7f08f3a90a00, guard=@0x4780cfe0)

at ../../../../dep/ACE_wrappers/ace/Dev_Poll_Reactor.inl:86

eh_guard = {eh_ = 0x7f08f3935d00, refcounted_ = true}

info = <value optimized out>

disp_mask = 2

eh = (class ACE_Event_Handler *) 0x7f08f3935d00

status = 2

handle = 32

revents = <value optimized out>

#6 0x00007f09243197ce in ACE_Dev_Poll_Reactor::handle_events (

this=0x7f08f3a90a00, max_wait_time=0x4780e090)

at ../../../../dep/ACE_wrappers/ace/Dev_Poll_Reactor.cpp:1015

countdown = {<ACE_Copy_Disabled> = {<No data fields>},

max_wait_time_ = 0x4780e090, start_time_ = {static zero = {

static zero = <same as static member of an already seen type>,

static max_time = {

static zero = <same as static member of an already seen type>,

static max_time = <same as static member of an already seen type>,

tv_ = {tv_sec = 9223372036854775807, tv_usec = 999999}}, tv_ = {

tv_sec = 0, tv_usec = 0}},

static max_time = <same as static member of an already seen type>, tv_ = {

tv_sec = 1288385317, tv_usec = 361021}}, stopped_ = false}

guard = {token_ = @0x7f08f3a90a78, owner_ = 0}

result = -1

#7 0x00007f092437023d in ACE_Reactor::run_reactor_event_loop (

this=0x7f091ee21900, tv=@0x4780e090, eh=0)

at ../../../../dep/ACE_wrappers/ace/Reactor.cpp:267

result = -206437520

#8 0x0000000000824fa1 in ReactorRunnable::svc (this=0x7f08f3b202c0)

at ../../../src/game/WorldSocketMgr.cpp:167

interval = {static zero = {

static zero = <same as static member of an already seen type>,

static max_time = {

static zero = <same as static member of an already seen type>,

static max_time = <same as static member of an already seen type>,

tv_ = {tv_sec = 9223372036854775807, tv_usec = 999999}}, tv_ = {

tv_sec = 0, tv_usec = 0}},

static max_time = <same as static member of an already seen type>, tv_ = {

tv_sec = 0, tv_usec = 10000}}

__FUNCTION__ = "svc"

__PRETTY_FUNCTION__ = "virtual int ReactorRunnable::svc()"

#9 0x00007f0924395207 in ACE_Task_Base::svc_run (args=<value optimized out>)

at ../../../../dep/ACE_wrappers/ace/Task.cpp:271

t = (ACE_Task_Base *) 0x7f08f3b202c0

svc_status = <value optimized out>

#10 0x00007f09243968b5 in ACE_Thread_Adapter::invoke (this=0x7f08f3acce80)

at ../../../../dep/ACE_wrappers/ace/Thread_Adapter.cpp:94

exit_hook_instance = <value optimized out>

exit_hook_maybe = {instance_ = 0x0}

exit_hook_ptr = <value optimized out>

#11 0x00007f0922a16fc7 in start_thread () from /lib/libpthread.so.0

No symbol table info available.

#12 0x00007f0921fe659d in clone () from /lib/libc.so.6

No symbol table info available.

#13 0x0000000000000000 in ?? ()

No symbol table info available.


  • 2 weeks later...

Not sure, but could this crash be related to SOAP? Are you using SOAP? I'm testing it at the moment...

All crash dumps have some reference to this code in MaNGOSsoap.h:

class SOAPWorkingThread : public ACE_Task<ACE_MT_SYNCH>
{
    public:
        SOAPWorkingThread ()
        { }

        virtual int svc (void)
        {
            while (1)
            {
                ACE_Message_Block* mb = 0;

                // getq() blocks until a message arrives; -1 means the
                // queue was deactivated, so shut the worker down.
                if (this->getq (mb) == -1)
                {
                    ACE_DEBUG ((LM_INFO,
                                ACE_TEXT ("(%t) Shutting down\n")));
                    break;
                }

                // Process the message.
                process_message (mb);
            }

            return 0;
        }

    private:
        void process_message (ACE_Message_Block* mb);
};

Best regards


Does this problem still occur in the latest revisions? The messages from you guys make me hesitant to update... 10 crashes a day is quite a lot. I am still using the 'old' ACE version. Do many people have problems like this, or is it maybe due to a custom core?

Who is gonna enlighten me ;)


5 days without a crash on 10720 (only 50 players) - on Debian.

I think you should just try it and see how it turns out in your environment.

Thanks for your fast reaction. That is exactly what I wanted to hear ;)

5 days of uptime is a very nice achievement, even with 50 testers. My environment is Debian-based too (Ubuntu Server 64-bit). Although I have three times as many testers, I am confident about updating the server to the latest revs. Otherwise, a backup is always an option ;)

