[Request] Max log filesize


Guest FragFrog


In my experience, most crashes are undetectable in the logs unless you have loglevel = 3 active. Unfortunately, if you do not know what causes a crash and cannot reproduce it, it can take a while before the crash happens again.

On our server this means that even after a few hours with loglevel = 3 we get several hundred megabytes of log, and we often have uptimes in excess of one day - which would make the logs far too big to handle.

So I was wondering: would it be possible to add an optional limit to the log filesize, and only keep the last couple of hundred lines or so? I understand this would increase load a bit, but to be frank, load is not a problem for us and crashes are.

Not sure if we're the only ones having a problem with huge logfiles, or whether this is easy to achieve, but I figured I'd post it here and see what others think :lol:


If you need a tool that copies the last given number of lines to an extra file, you can do so by coding it into the mangos core, or by writing a simple application.

The application could generate a log file with the last (100) lines of the log. If the detail level is set to 3, you get full info. This approach has the charm that the application would only need to be run once your server has crashed.
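As a sketch of what such a stand-alone tool could look like (the function name and the line limit are just illustrative, not part of any patch):

```cpp
#include <deque>
#include <fstream>
#include <string>
#include <vector>

// Stream through the (possibly huge) logfile and keep only the last
// `limit` lines in a deque, so the whole file never has to fit in RAM.
std::vector<std::string> lastLines(const std::string& path, size_t limit)
{
    std::deque<std::string> tail;
    std::ifstream in(path.c_str());
    std::string line;
    while (std::getline(in, line))
    {
        tail.push_back(line);
        if (tail.size() > limit)
            tail.pop_front();
    }
    return std::vector<std::string>(tail.begin(), tail.end());
}
```

Run once after a crash, this would copy the interesting tail of Server.log into a small extra file without touching the core at all.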

Another approach would be to place this code into the core with hardcoded loglevel 3. The user could then choose a normal log file with log debug 0-3, and (if enabled) would also get the detailed log. In this case, the extra generation of another log file would have to be triggered in the core, which costs extra time.

I don't know which I would prefer. Disk operations are really costly in terms of performance.


If you need a tool that copies the last given number of lines to an extra file, you can do so by coding it into the mangos core, or by writing a simple application.

It is a possibility, but then you would still get enormous logfiles until they are truncated. My major concern is that I cannot leave loglevel at 3 when I am not expecting crashes, since if the server stays up a day or two (which is often the case) we'd be looking at ~15 GB of logfile. I am not sure the filesystem would even be able to handle that, nor whether there is enough space on the server.

Another approach would be to place this code into the core with hardcoded loglevel 3. The user could then choose a normal log file with log debug 0-3, and (if enabled) would also get the detailed log. In this case, the extra generation of another log file would have to be triggered in the core, which costs extra time.

Aye, this would be the optimal solution but, as you said, is probably harder to make. Hence I would be more than happy with a function that simply limits the log filesize - so after a fixed number of lines has been written, each new line written also removes the first line from the log.


Removing the beginning of a file seems an expensive operation to me... I think you need to reallocate the whole file each time with most filesystems.

To me darkstalker's idea sounds good: have two log files, e.g. log.head and log.tail.

You write to log.tail until it grows too large, then just swap the files and truncate the new log.tail; that way you always have at least LIMIT bytes and at most 2*LIMIT bytes of output.


I've made some progress in coding. It's been a long time since I last had to code in C++, but still... it seems to be going smoothly.

I've written some lines (hardcoded values) which are able to split the logfile into two files of limited size (e.g. 100 lines). This is revision 0. Step by step I'll go further. My goal is a separate logfile / debug level 3 / limited lines / rotation.

cheers


Because you risk having a crash 2 lines into your new logfile, with no lines in it at all yet - making the entire endeavour rather an exercise in futility. Of course, with enough lines the chance of that happening diminishes, and purely resource-wise it is probably one of the best solutions.

I honestly do not know how well certain filesystems handle trimming from the beginning of a file - partially the reason why I have not written something like this myself. Ideally the last lines would only be kept in memory, but this of course does not really work if the server itself crashes and takes that memory with it. Memcached (link) could be used to store them, but that would require people to run an additional program, which is undesirable from a usability point of view.

Just a thought here: what are the opinions on storing a fixed number of rows in a MySQL Memory table? These are volatile tables that are cleared when MySQL restarts, but that should not be a problem - in my experience a crash in mangosd almost never means a crash in MySQL. Logging could then rotate on an incremental unique ID that resets after a predetermined number of entries has been added, like this (PSEUDO-CODE):

private static int i = 0;

void addEntry (String logline) {
 query("REPLACE INTO `crashlog` (`id`, `time`, `entry`)
        VALUES(%u, NOW(), %s)", i, logline);
 i = (i >= 100) ? 0 : i + 1;
}

This way you would always get the last 100 lines, only those lines, no additional disk I/O is required, and the only overhead would be an additional query call - which should be relatively light for a Memory table. Thoughts?
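For reference, the Memory table behind that pseudo-code might look something like this (the schema is my own guess at what the pseudo-code assumes; note that MEMORY tables do not support TEXT columns, hence the fixed-width VARCHAR):

```sql
CREATE TABLE `crashlog` (
    `id`    INT UNSIGNED NOT NULL PRIMARY KEY,  -- rotating 0..100 counter
    `time`  DATETIME     NOT NULL,
    `entry` VARCHAR(255) NOT NULL               -- MEMORY engine: fixed-width columns only
) ENGINE = MEMORY;
```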


What bug does the patch fix? What features does the patch add?

This patch limits the number of lines in the log file. The information is stored in two files. One file is filled with logging data; when it reaches the maximum number of lines, the other file is filled. Logging data that exceeds 2*size_of_lines (two log files) is not stored and will be lost.

For which repository revision was the patch created?

Rev. 8987

Is there a thread in the bug report section or at lighthouse? If yes, please add a link to the thread.

this thread for testing

Who has been writing this patch? Please include either forum user names or email addresses.

lp-vamp

diff --git a/src/mangosd/mangosd.conf.dist.in b/src/mangosd/mangosd.conf.dist.in
index 479a6ce..a62a646 100644
--- a/src/mangosd/mangosd.conf.dist.in
+++ b/src/mangosd/mangosd.conf.dist.in
@@ -1,4 +1,4 @@
-#####################################
+#####################################
# MaNGOS Configuration file         #
#####################################
ConfVersion=2008080101
@@ -223,6 +223,11 @@ AddonChannel = 1
#        0 = Minimum; 1 = Error; 2 = Detail; 3 = Full/Debug
#        Default: 0
#
+#    LogFileSize
+#        Server logging file size
+#        0 = disabled
+#        Default: 0
+#
#    LogFilter_AchievementUpdates
#    LogFilter_CreatureMoves
#    LogFilter_TransportMoves
@@ -298,6 +303,7 @@ LogTime = 0
LogFile = "Server.log"
LogTimestamp = 0
LogFileLevel = 0
+LogFileSize = 0
LogFilter_AchievementUpdates = 1
LogFilter_CreatureMoves = 1
LogFilter_TransportMoves = 1
diff --git a/src/shared/Log.cpp b/src/shared/Log.cpp
index 429b089..c5d69b4 100644
--- a/src/shared/Log.cpp
+++ b/src/shared/Log.cpp
@@ -249,6 +249,10 @@ void Log::Initialize()

    // Char log settings
    m_charLog_Dump = sConfig.GetBoolDefault("CharLogDump", false);
+
+    // Log File size
+    m_count_lines = 0;
+    m_file_size_limit = sConfig.GetIntDefault("LogFileSize", 0);
}

FILE* Log::openLogFile(char const* configFileName,char const* configTimeStampFlag, char const* mode)
@@ -266,6 +270,8 @@ FILE* Log::openLogFile(char const* configFileName,char const* configTimeStampFla
            logfn += m_logsTimestamp;
    }

+    if(strcmp(configFileName,"LogFile")==0 && log_filename.empty())    { log_filename=log_filename.append(m_logsDir+logfn); }
+
    return fopen((m_logsDir+logfn).c_str(), mode);
}

@@ -340,6 +346,7 @@ void Log::outTitle( const char * str)
        fprintf(logfile, str);
        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    fflush(stdout);
@@ -355,6 +362,7 @@ void Log::outString()
        outTimestamp(logfile);
        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -390,6 +398,7 @@ void Log::outString( const char * str, ... )
        va_end(ap);

        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -426,6 +435,7 @@ void Log::outError( const char * err, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stderr);
}
@@ -463,6 +473,7 @@ void Log::outErrorDb( const char * err, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    if(dberLogfile)
@@ -513,6 +524,7 @@ void Log::outBasic( const char * str, ... )
        fprintf(logfile, "\n" );
        va_end(ap);
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -552,6 +564,7 @@ void Log::outDetail( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    fflush(stdout);
@@ -616,6 +629,7 @@ void Log::outDebug( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -652,6 +666,7 @@ void Log::outCommand( uint32 account, const char * str, ... )
        fprintf(logfile, "\n" );
        va_end(ap);
        fflush(logfile);
+        swapLogFile();
    }

    if (m_gmlog_per_account)
@@ -760,6 +775,7 @@ void Log::outMenu( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -781,6 +797,29 @@ void Log::outRALog(    const char * str, ... )
    fflush(stdout);
}

+void Log::swapLogFile()
+{
+    if (logfile && m_file_size_limit>0) // there is a logfile and a size limit
+    {
+        if(m_count_lines >= m_file_size_limit)
+        {
+            m_count_lines=0;
+            if(log_size_limit_filename.empty())    // generate backup file name if empty
+            {
+                size_t dot_pos = log_filename.find_last_of(".");
+                log_size_limit_filename.append(log_filename);
+                log_size_limit_filename.insert(dot_pos,"_part");
+            }
+            // close the logfile, remove, rename, open new logfile .. TODO Error Handling
+            fclose(logfile);
+            remove(log_size_limit_filename.c_str());
+            rename(log_filename.c_str(),log_size_limit_filename.c_str());
+            logfile=fopen(log_filename.c_str(), "w");
+        }
+        else {m_count_lines++;}
+    }
+}
+
void outstring_log(const char * str, ...)
{
    if( !str )
@@ -849,4 +888,4 @@ void error_db_log(const char * str, ...)
    va_end(ap);

    sLog.outErrorDb(buf);
-}
+}
\ No newline at end of file
diff --git a/src/shared/Log.h b/src/shared/Log.h
index 2500c67..ce35f43 100644
--- a/src/shared/Log.h
+++ b/src/shared/Log.h
@@ -155,6 +155,13 @@ class Log : public MaNGOS::Singleton<Log, MaNGOS::ClassLevelLockable<Log, ACE_Th
        // gm log control
        bool m_gmlog_per_account;
        std::string m_gmlog_filename_format;
+
+        // log file size limit
+        uint32 m_file_size_limit; // a limit of 0 deactivates
+        uint32 m_count_lines; // count the lines in the log file
+        std::string log_filename;
+        std::string log_size_limit_filename;
+        void swapLogFile();
};

#define sLog MaNGOS::Singleton<Log>::Instance()


Wow, awesome work lp-vamp! I've just applied and compiled your patch and it seems to be working flawlessly!

Had to apply parts of it manually to make it work with the 0.12 branch (revision 8719); patch for that:

From 8704557095df5f389144774447a1611bf82a00c8 Mon Sep 17 00:00:00 2001
From: unknown <[email protected]>
Date: Tue, 15 Dec 2009 02:33:36 +0100
Subject: [PATCH] Limit log filesize

---
src/mangosd/mangosd.conf.dist.in |    8 ++++++-
src/shared/Log.cpp               |   42 ++++++++++++++++++++++++++++++++++++++
src/shared/Log.h                 |    7 ++++++
3 files changed, 56 insertions(+), 1 deletions(-)

diff --git a/src/mangosd/mangosd.conf.dist.in b/src/mangosd/mangosd.conf.dist.in
index 522bc16..ffc1232 100644
--- a/src/mangosd/mangosd.conf.dist.in
+++ b/src/mangosd/mangosd.conf.dist.in
@@ -1,4 +1,4 @@
-#####################################
+#####################################
# MaNGOS Configuration file         #
#####################################
ConfVersion=2008080101
@@ -223,6 +223,11 @@ AddonChannel = 1
#        0 = Minimum; 1 = Error; 2 = Detail; 3 = Full/Debug
#        Default: 0
#
+#    LogFileSize
+#        Server logging file size
+#        0 = disabled
+#        Default: 0
+#
#    LogFilter_TransportMoves
#    LogFilter_CreatureMoves
#    LogFilter_VisibilityChanges
@@ -292,6 +297,7 @@ LogTime = 0
LogFile = "Server.log"
LogTimestamp = 0
LogFileLevel = 0
+LogFileSize = 0
LogFilter_TransportMoves = 1
LogFilter_CreatureMoves = 1
LogFilter_VisibilityChanges = 1
diff --git a/src/shared/Log.cpp b/src/shared/Log.cpp
index 72544d4..8a983ca 100644
--- a/src/shared/Log.cpp
+++ b/src/shared/Log.cpp
@@ -246,6 +246,10 @@ void Log::Initialize()

    // Char log settings
    m_charLog_Dump = sConfig.GetBoolDefault("CharLogDump", false);
+
+    // Log File size
+    m_count_lines = 0;
+    m_file_size_limit = sConfig.GetIntDefault("LogFileSize", 0);
}

FILE* Log::openLogFile(char const* configFileName,char const* configTimeStampFlag, char const* mode)
@@ -262,6 +266,10 @@ FILE* Log::openLogFile(char const* configFileName,char const* configTimeStampFla
        else
            logfn += m_logsTimestamp;
    }
+    if(strcmp(configFileName,"LogFile") == 0 && log_filename.empty())
+    {
+        log_filename=log_filename.append(m_logsDir+logfn); 
+    }

    return fopen((m_logsDir+logfn).c_str(), mode);
}
@@ -337,6 +345,7 @@ void Log::outTitle( const char * str)
        fprintf(logfile, str);
        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    fflush(stdout);
@@ -352,6 +361,7 @@ void Log::outString()
        outTimestamp(logfile);
        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -387,6 +397,7 @@ void Log::outString( const char * str, ... )
        va_end(ap);

        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -423,6 +434,7 @@ void Log::outError( const char * err, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stderr);
}
@@ -460,6 +472,7 @@ void Log::outErrorDb( const char * err, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    if(dberLogfile)
@@ -510,6 +523,7 @@ void Log::outBasic( const char * str, ... )
        fprintf(logfile, "\n" );
        va_end(ap);
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -549,6 +563,7 @@ void Log::outDetail( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }

    fflush(stdout);
@@ -613,6 +628,7 @@ void Log::outDebug( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -649,6 +665,7 @@ void Log::outCommand( uint32 account, const char * str, ... )
        fprintf(logfile, "\n" );
        va_end(ap);
        fflush(logfile);
+        swapLogFile();
    }

    if (m_gmlog_per_account)
@@ -733,6 +750,7 @@ void Log::outMenu( const char * str, ... )

        fprintf(logfile, "\n" );
        fflush(logfile);
+        swapLogFile();
    }
    fflush(stdout);
}
@@ -754,6 +772,30 @@ void Log::outRALog(    const char * str, ... )
    fflush(stdout);
}

+void Log::swapLogFile()
+{
+    if (logfile && m_file_size_limit>0) // there is a logfile and a size limit
+    {
+        if(m_count_lines >= m_file_size_limit)
+        {
+            m_count_lines=0;
+            if(log_size_limit_filename.empty())    // generate backup file name if empty
+            {
+                size_t dot_pos = log_filename.find_last_of(".");
+                log_size_limit_filename.append(log_filename);
+                log_size_limit_filename.insert(dot_pos,"_part");
+            }
+            // close the logfile, remove, rename, open new logfile .. TODO Error Handling
+            fclose(logfile);
+            remove(log_size_limit_filename.c_str());
+            rename(log_filename.c_str(),log_size_limit_filename.c_str());
+            logfile=fopen(log_filename.c_str(), "w");
+        }
+        else {m_count_lines++;}
+    }
+}
+
+
void outstring_log(const char * str, ...)
{
    if( !str )
diff --git a/src/shared/Log.h b/src/shared/Log.h
index 27be84f..f62579e 100644
--- a/src/shared/Log.h
+++ b/src/shared/Log.h
@@ -146,6 +146,13 @@ class Log : public MaNGOS::Singleton<Log, MaNGOS::ClassLevelLockable<Log, ACE_Th
        // gm log control
        bool m_gmlog_per_account;
        std::string m_gmlog_filename_format;
+
+        // log file size limit
+        uint32 m_file_size_limit; // a limit of 0 deactivates
+        uint32 m_count_lines; // count the lines in the log file
+        std::string log_filename;
+        std::string log_size_limit_filename;
+        void swapLogFile();
};

#define sLog MaNGOS::Singleton<Log>::Instance()
-- 
1.6.1.9.g97c34

Maybe not the most elegant solution possible, but it does exactly what I want it to do - it might need a bit more testing, but I don't see why this should not get added to the official repository! :lol:


  • 2 weeks later...

True freghar, but that means running an extra application - I think many would prefer an all-in-one solution. There is something to be said for an external monitoring / logging program though, maybe something like the RA console but just for logs. People could then write their own applications to connect to the logging port and do with the logs what they want. That would be a bit more effort than the currently posted solution, though.

On that note: I have been testing the patch by lp-vamp on our live platform for the past three days, so far without a single crash. It seems definitely stable, and I would suggest adding it to the trunk - more people can benefit from it, especially since it can also be left turned off by default.


  • 3 weeks later...

Logging data that exceeds 2*size_of_lines (two log files) is not stored and will be lost.

Maybe I have misunderstood how it works, but if I am logging with level 3 and the two log files are full, will I be able to see the last line logged?

I think Aokromes is right; a better system would be:

logfile.log 1000 lines

logfile2.log 1000 lines

logfile3.log 100 lines

When the 3rd file is created, delete the 1st.

Thank you anyway


I am afraid you have misunderstood, Xarly. The patch already works as you propose, only with 2 logfiles (logfile 1 is full - start logfile 2; logfile 2 is full - start with logfile 1 again). I have been using it for the past few weeks now without trouble; it's an awesome feature :)

