GWR

  1. Notices are not errors. To suppress the notice, wrap the code in an isset check such as if (isset($variablename["index"])) { ... }. If the array key actually matters, fix the bug that produces the notice/warning/error instead of hiding it. PS: this is a general workaround, not a script-specific one. bye GWR
  2. As somebody already mentioned lag spikes on servers with a higher number of connected players, I tried to improve the query performance a bit. The server uses Trinity, but I think the change could also be applied to mangos.

First I thought: instead of the (single-row) "REPLACE INTO" queries we could switch to the most effective pattern of all: first run an UPDATE and, if no affected row is reported, INSERT. Why is that the most performant approach? The INSERT is done exactly once; every further query just updates the existing entry. The problem is that Trinity only returns result sets (including the affected-rows field) when rows actually come back, i.e. only for SELECT queries. So two options were left: split the single-row "REPLACE INTO" into two queries, or use "INSERT INTO ... ON DUPLICATE KEY UPDATE". I'm presenting both.

Sadly this can only be applied to the part inside Player::SaveToDB. Everything item-related (CreateItem) can't be done that way; to replace it too, we would need a new unique key on the feed_log table. Why? "REPLACE INTO" replaces (not updates! it deletes + inserts) only when every primary/unique key matches the values in the query, otherwise it inserts. The roughly 50% faster "INSERT INTO ... ON DUPLICATE KEY UPDATE" updates as soon as one primary/unique key matches. I don't know for certain whether that increases performance here, but it should (by about 50% as said, though it depends on the MySQL cache and configuration; on a normal root server it should outperform the old query).

The changes overwrite the patched part of Trinity's player.cpp, Player::SaveToDB(...).
old:

```cpp
/** World of Warcraft Armory **/
std::ostringstream ps;
ps << "REPLACE INTO armory_character_stats (guid,data) VALUES ('" << GetGUIDLow() << "', '";
for (uint16 i = 0; i < m_valuesCount; ++i)
    ps << GetUInt32Value(i) << " ";
ps << "')";
CharacterDatabase.Execute(ps.str().c_str());
```

new ("on duplicate" method):

```cpp
/** World of Warcraft Armory **/
/**
 * 1) INSERT INTO table SET guid = 'guid', data = 'data' ON DUPLICATE KEY UPDATE data = 'newdata'
 *    This method performs about 50% faster than REPLACE INTO.
 *    In addition it really updates an entry if one already exists.
 *    REPLACE INTO instead deletes and re-inserts when there is an existing row,
 *    so it also changes auto-increment fields (which may be wanted), and it
 *    deletes multiple rows if they all match the unique keys.
 **/
std::ostringstream ps;
for (uint16 i = 0; i < m_valuesCount; ++i)
    ps << GetUInt32Value(i) << " ";

std::ostringstream query;
query << "INSERT INTO armory_character_stats SET guid = '" << GetGUIDLow()
      << "', data = '" << ps.str() << "' "
      << "ON DUPLICATE KEY UPDATE data = '" << ps.str() << "'";
CharacterDatabase.Execute(query.str().c_str());
sLog->outDetail("WoWArmory: inserted/updated character_stats entry for %u", GetGUIDLow());
```

Instead of that, we could improve this part further by splitting the insert from the update. At the moment the save fires whenever players 1) log out, 2) type ".save" (or a macro), or 3) stay in the game for the interval set in the configs (15 min). Instead of checking for the entry's existence every time one of these happens, we should just update it, which saves (number of SaveToDB calls - 1) checks per character. The first change goes in the same place as above, player.cpp Player::SaveToDB(...).
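The semantic difference argued above can be demonstrated outside Trinity. Below is a minimal sketch using Python's bundled sqlite3 module as a stand-in for MySQL (SQLite's REPLACE likewise deletes + re-inserts, and its ON CONFLICT ... DO UPDATE, available in SQLite >= 3.24, is the equivalent of MySQL's ON DUPLICATE KEY UPDATE); the table mirrors the armory schema but adds a hypothetical auto-increment id column just to make the delete+insert visible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE armory_character_stats (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- illustrative auto-increment column
    guid INTEGER UNIQUE,
    data TEXT)""")

con.execute("INSERT INTO armory_character_stats (guid, data) VALUES (1, 'old')")
rowid_before = con.execute(
    "SELECT id FROM armory_character_stats WHERE guid = 1").fetchone()[0]

# REPLACE INTO: the conflicting row is DELETEd and a fresh one INSERTed,
# so the auto-increment id changes.
con.execute("REPLACE INTO armory_character_stats (guid, data) VALUES (1, 'replaced')")
rowid_after_replace = con.execute(
    "SELECT id FROM armory_character_stats WHERE guid = 1").fetchone()[0]

# Upsert: the existing row is UPDATEd in place, so the id survives.
con.execute("""INSERT INTO armory_character_stats (guid, data) VALUES (1, 'upserted')
               ON CONFLICT(guid) DO UPDATE SET data = excluded.data""")
rowid_after_upsert = con.execute(
    "SELECT id FROM armory_character_stats WHERE guid = 1").fetchone()[0]

print(rowid_before, rowid_after_replace, rowid_after_upsert)  # 1 2 2
```

The changing id is exactly the "REPLACE also changes auto-increment fields" behavior described above; the upsert leaves it untouched.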
```cpp
/** World of Warcraft Armory **/
// removed old replace-into code
/**
 * 2) This version uses the approach that the insert/existence check is done
 *    during load of the character, so an update saves one check each time
 *    SaveToDB is called.
 **/
std::ostringstream query;
query << "UPDATE armory_character_stats SET data = '";
for (uint16 i = 0; i < m_valuesCount; ++i)
    query << GetUInt32Value(i) << " ";
query << "' WHERE guid = '" << GetGUIDLow() << "'";
CharacterDatabase.Execute(query.str().c_str());
sLog->outDetail("WoWArmory: updated character_stats entry for %u", GetGUIDLow());
```

Second needed change: player.cpp, Player::LoadFromDB(uint32 ...), at the end of the function:

```cpp
    /** World of Warcraft Armory - GWR change **/
    /**
     * Doing it here: only checked once during load of a character,
     * not every time SaveToDB is called.
     **/
    std::ostringstream query;
    query << "SELECT guid FROM armory_character_stats WHERE guid = '" << GetGUIDLow() << "'";
    QueryResult qresult = CharacterDatabase.Query(query.str().c_str());
    if (qresult)
    {
        sLog->outDetail("WoWArmory: found character_stats entry for %u", GetGUIDLow());
    }
    else
    {
        std::ostringstream ps;
        for (uint16 i = 0; i < m_valuesCount; ++i)
            ps << GetUInt32Value(i) << " ";
        query.str("");
        query << "INSERT INTO armory_character_stats (guid, data) VALUES ('"
              << GetGUIDLow() << "', '" << ps.str() << "')";
        qresult = CharacterDatabase.Query(query.str().c_str());
        if (!qresult)
            sLog->outDetail("WoWArmory: inserted character_stats entry for %u", GetGUIDLow());
        else
            sLog->outError("WoWArmory: couldn't insert character_stats entry for %u", GetGUIDLow());
    }
    /** World of Warcraft Armory - GWR change **/
    return true;
}

bool Player::isAllowedToLoot(const Creature* creature) ...
```

So, to sum up: the last two code boxes contain a change that improves the performance of the armory_character_stats updates/inserts. To improve the badly lag-spiking "REPLACE INTO" on the feed-log table, we would need to create a new unique key (a combination of the former three).
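The check-once-on-load pattern from the two code boxes above can be sketched end to end. This is an illustrative Python/sqlite3 mock-up, with hypothetical on_load/on_save helpers of my own naming standing in for Player::LoadFromDB and Player::SaveToDB:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE armory_character_stats (guid INTEGER PRIMARY KEY, data TEXT)")

def on_load(guid):
    """LoadFromDB part: check once per login, insert a stub row only if missing."""
    row = con.execute(
        "SELECT guid FROM armory_character_stats WHERE guid = ?", (guid,)).fetchone()
    if row is None:
        con.execute(
            "INSERT INTO armory_character_stats (guid, data) VALUES (?, '')", (guid,))

def on_save(guid, data):
    """SaveToDB part: a plain UPDATE -- no existence check needed any more."""
    con.execute(
        "UPDATE armory_character_stats SET data = ? WHERE guid = ?", (data, guid))

on_load(7)              # first login: SELECT misses, stub row is inserted
on_save(7, "stats v1")  # every save is now a single UPDATE
on_save(7, "stats v2")
on_load(7)              # later login: SELECT hits, nothing inserted
print(con.execute(
    "SELECT data FROM armory_character_stats WHERE guid = 7").fetchone()[0])  # stats v2
```

The existence check now runs once per login rather than once per save, which is the "saves x*SaveToDB - 1 checks" claim above.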
If that were done, the script could be changed to use "ON DUPLICATE KEY" instead of "REPLACE INTO". If some of you are able to test this on big servers, you could tell whether the change actually makes a difference or the performance gains are marginal.

Last thing I want to mention: I know it saves a big bunch of lines to loop through every field (for (uint16 i = 0; i < m_valuesCount; ++i)), but it ends up packing 1336 fields into the blob/longtext column. I'm not quite sure whether the script uses all of them, but from what I saw it could be reduced to at most about 500 fields (player + unit). The less a query has to carry, the faster it passes through the MySQL query optimizer. So if someone has a lot of spare time, he/she could write something that produces a list of the defines actually used (PHP and C++) so the query could become a bit shorter, saving space and gaining some performance.

At the end I have to add: sorry for a bit of blaming even though this is my first post here. And if I'm not allowed to post parts of TC here, feel free to delete whatever violates the rules. bye GWR

EDIT: If one of you knows how to get the affected_rows of the non-blocking queries (Execute, PExecute), we could also drop the check during character load, because we would then be able to do: UPDATE(...), then ask "row affected?" - if not, INSERT, else everything is ok. That would reduce the overhead to one extra query in a character's lifetime instead of one every time a player logs in. But I assume it isn't that easy, else the TC devs would already have passed flag 2 (CLIENT_FOUND_ROWS) to mysql_real_connect, which makes affected_rows also report "found" rows instead of only "changed" rows.
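The affected-rows idea from the EDIT can also be sketched: run the UPDATE first and fall back to INSERT only when no row was touched. Another illustrative sqlite3 mock-up; note that sqlite3's cursor.rowcount counts the rows matched by the UPDATE even when the new values equal the old ones, which is the behavior MySQL only gives with the CLIENT_FOUND_ROWS flag (value 2) mentioned above, so the fallback here never fires spuriously:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE armory_character_stats (guid INTEGER PRIMARY KEY, data TEXT)")

def save_stats(guid, data):
    """UPDATE first; INSERT only if the UPDATE touched no row."""
    cur = con.execute(
        "UPDATE armory_character_stats SET data = ? WHERE guid = ?", (data, guid))
    if cur.rowcount == 0:  # no existing row: this is the character's first save
        con.execute(
            "INSERT INTO armory_character_stats (guid, data) VALUES (?, ?)", (guid, data))
        return "inserted"
    return "updated"

first = save_stats(42, "v1")   # no row yet     -> INSERT happens once
second = save_stats(42, "v2")  # row exists     -> single UPDATE
third = save_stats(42, "v2")   # identical data -> still reported as matched
print(first, second, third)    # inserted updated updated
```

After the first save, every subsequent save costs exactly one query, which is the one-query-per-lifetime reduction the EDIT describes.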