
Posts posted by freghar

  1. Compile it. I compile (and mostly static-link) several things for CentOS (like a newer mysqld, git, ...) in places where I'm forced to use CentOS. Moreover, git is quite easy to compile and static-link on some other, more modern distro; you can then move the git binary to your CentOS server and (optionally) create git-* symlinks to it (in /usr/local/bin, for example).
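    A rough sketch of such a build, done on the modern distro (the version number is a placeholder, and the static link only works if static versions of git's dependencies - zlib, curl, openssl, ... - are installed there):

    # on the build machine
    tar xf git-<version>.tar.gz && cd git-<version>
    make prefix=/usr/local CFLAGS="-O2" LDFLAGS="-static"
    # then, on the CentOS server, after copying the resulting "git" binary over:
    install -m 755 git /usr/local/bin/git
    ln -s /usr/local/bin/git /usr/local/bin/git-upload-pack    # optional git-* symlink, for example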

  2. No, that would work perfectly. What Rate.Auction.Time did you use for that? My math would suggest that a setting of 60 would give auctions of 1 month / 2 months / 3 months, but I'm not sure (hence my little rant about there not being enough documentation).

    Well, "multiplier" means "base * m" where "m" is that multiplier - it's not "base ^ m". That means if your auction time is 24h and you set the multiplier to 90, it becomes 24 * 90 hours = 90 days, roughly 3 months. Similarly, 48 * 90 hours = 180 days =~ 6 months for "48h" auctions. That's _very_ basic math.
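    In mangosd.conf terms it would look something like this (the comment is just an illustration of the math, not the official option documentation):

    #    Rate.Auction.Time
    #         Auction time multiplier ("base * m"), e.g. with a value of 90:
    #              12h auction -> 12 * 90 h =  45 days
    #              24h auction -> 24 * 90 h =  90 days (~3 months)
    #              48h auction -> 48 * 90 h = 180 days (~6 months)
    Rate.Auction.Time = 90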

  3. I've been using notepad++ for years, and I've gotten into the habit of working with formatting characters on.

    I see that after a fresh fetch, merge, and checkout on a brand new branch, CRLF characters are there.

    git diff says that every line of each of those files has changed.

    git diff --ignore-space-at-eol says there are no changes.

    I can do Edit > EOL Conversion > Unix Format, and they all change to LF, and then I save.

    git diff still says that every line of each of those files has changed.

    git diff --ignore-space-at-eol still says there are no changes.

    Ugh. I think I'll just save my code outside the repo somewhere, delete the repo, and start from scratch.

    Now, should I use CRLF or just LF? I can set my editors to do either; I just need to know.

    Each time you change the autocrlf setting, you need to do a fresh checkout (either "checkout -f", or delete all files except .git/ and "reset --hard") to make the option actually _work_. If you checked out your working tree with autocrlf=true, it will still have CRLFs in it after you change the setting to =false. That might be the reason why they keep showing up.

    I use autocrlf=false and save my files in "Unix format" (a notepad++ setting) on Windows - so far so good.
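    In other words, something along these lines (this throws away uncommitted changes, so commit or stash them first):

    git config core.autocrlf false
    # force a fresh checkout of the working tree
    # (delete everything except .git/ first if plain "checkout -f" isn't enough)
    git checkout -f            # or: git reset --hard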

  4. Sorry for posting in the wrong section. So, under the address in the realmlist, I take it it shouldn't read the 127.0.0.1 address but rather my computer's IP? Or the router's IP? Thanks very much, and my apologies for asking such trivial questions.

    Basically, that's the IP the client will connect to. Use your logic to change it to whatever you need - i.e. it makes no sense to set it to your "external" IP if your players are going to connect from the LAN, and similarly, a LAN NIC's IP address won't work for players connecting from outside. You can always create several realmlist entries and make them "point" to one world server.
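    For illustration only - assuming a typical realmd database with a "realmlist" table holding "name" and "address" columns (check your own schema; the realm name, IP and credentials here are made up), the change can be done from the shell:

    # point the realm "MyRealm" at the LAN address of the world server
    mysql -u mangos -p realmd -e \
        "UPDATE realmlist SET address = '192.168.1.10' WHERE name = 'MyRealm';"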

  5. Couldn't you log each of Warden's queries and responses and make it return the OK signal even if you are using hacks, similar to AC2's crack?

    I've never used the "fake server" for AC2, so I don't know any details, other than that it's recently been reduced to a "normal" crack level, i.e. no fake server needed; you can google the details.

    Back on topic - no. Imagine a client-side "warden" using some kind of PGP-encrypted/signed messages. You can't modify a message without breaking the signature. This means there's no way around it without either breaking the (say, 4096-bit) encryption key or modifying the client (building your own client around the extracted key(s) that mimics the "valid" one, directly modifying the client's memory so it skips the "hack check", and so on).

    There's (currently) no known effective open-source way of ensuring client-level security. As said earlier, warden builds its success on being closed source. There's actually a nice analogy saying "my computer = my fortress", which seems to be true.

    The safest solution for "remote" multiplayer would be to host a terminal server and allow only remote desktop logins directly into the game.

    PS: To complete my explanation - you can't (in most cases) even replay the client's reply, since it can use things like timestamps encoded within its message (like SPA auth does), randomly generated parts (see WPA/WPA2 authentication) and so on.
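    Just to illustrate the idea with a generic openssl sketch (this is not how warden actually works, and the key file names are made up): a message carrying a timestamp and a random nonce, signed with a private key, can be verified - but not modified, forged or usefully replayed - by someone holding only the public key.

    nonce=$(openssl rand -hex 16)                        # random part - defeats replay
    msg="scan-request|$(date +%s)|$nonce"                # timestamp + nonce + payload
    echo -n "$msg" | openssl dgst -sha256 -sign private.pem -out msg.sig
    # the other side can verify the message, but can't re-sign a modified one:
    echo -n "$msg" | openssl dgst -sha256 -verify public.pem -signature msg.sig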

  6. Hey TOM, how about creating a separate anti-cheat process similar to warden but totally independent? It doesn't necessarily need to use the client-included warden; I think a custom launcher could do the job as well.

    What's the point? If it's client-side, you can't enforce its usage. There are _always_ ways to bypass it. Even if it uses some sort of asymmetric cipher to "sign" packets sent to the server, it can still be fooled. Even if you "overhack" the client's OS (as the StarForce protection does) by running the OS kernel under your control, it can still be broken - using virtualization, or the FireWire interface (direct memory access), i.e. halting the CPU for a few microseconds and modifying its registers / RAM.

    It isn't just warden; other games have e.g. PunkBuster, which does something similar. There's technically no way to prevent cheating in multiplayer games unless you control everything server-side. That includes movement calculations, weapon firing, player interaction, and so on. It can be reasonably managed for 32-64 players (i.e. Battlefield-like games), but there's no chance for games like WoW ... except using some kind of trojan on the client (warden).

  7. First, using select() isn't the same as "using one select() in the whole project". The most effective way I know of is to create a (possibly dynamic) pool of threads that do select() / poll() and queue data for processing. There are several reasons for that, including bonus features like packet payload defragmentation (since no read() is guaranteed to return a full packet from a network socket). I guess mangos does it similarly, at least that's what mangosd.conf suggests:

    ###################################################################################################################
    # NETWORK CONFIG
    #
    #    Network.Threads
    #         Number of threads for network, recommend 1 thread per 1000 connections.
    #         Default: 1
    #
    #    Network.OutKBuff
    #         The size of the output kernel buffer used ( SO_SNDBUF socket option, tcp manual ).
    #         Default: -1 (Use system default setting)
    #
    #    Network.OutUBuff
    #         Userspace buffer for output. This is amount of memory reserved per each connection.
    #         Default: 65536
    #
    #    Network.TcpNoDelay:
    #         TCP Nagle algorithm setting
    #         Default: 0 (enable Nagle algorithm, less traffic, more latency)
    #                  1 (TCP_NO_DELAY, disable Nagle algorithm, more traffic but less latency)
    #
    #    Network.KickOnBadPacket
    #         Kick player on bad packet format.
    #         Default: 0 - do not kick
    #                  1 - kick
    #
    ###################################################################################################################
    

    Moreover, I remember Derex implementing /dev/epoll interaction (kernel-level poll()), perhaps it's already within ACE.

  8. XP 32bit can't address more than 4GB of physical memory (and typically less than 3.5GB; a limit introduced with XP SP2 because 3rd-party driver manufacturers were shipping defective drivers) due to licensing issues.

    Applications not compiled with the LAA flag on a Windows OS that supports /3GB will continue to use a 2/2 split.

    /PAE is only related to using more than 4GB but less than 16GB of RAM on a 32bit system; it was enabled by default with XP SP2 because DEP (the NX bit) requires it. It doesn't enable support for additional physical memory in XP 32bit, however.

    Yes, I was able to read all that on the web - the LAA flag is there so that apps which use less than 2G of RAM (and thus don't need 3/1) are *not* negatively affected. It's even possible to have a 4/4G "split"; in fact there was a patch for the 2.3/2.4 Linux kernel doing exactly that, but removing the shared part (and thus requiring extra context switches, TLB flushes, ... on every kernel entry) caused about a 30% slowdown. There are also ways to access more than 4G of RAM from within a single process, either via some fancy-named Windows feature or mmap() hooks.

    What I didn't understand was why apps crash with a forced LAA flag even when they were compiled without it - IIRC there was (on Linux) also a 2/2 -> 3/1 switch around the 2.2-2.3 kernel, and no apps needed recompiling. I'm not talking about the LAA flag itself; the flag (as I found out) is just an "allow 3/1 for me" marker, it has nothing to do with the actual binary code (i.e. it can be toggled with a hex editor). However, since most of the crash-sensitive applications were/are big ones like certain games, I guess the NT kernel puts the shared part (2GB) in the lower section of the VAS, leaving the upper 2GB for app data. When the kernel then tries to run such a binary in 3/1 mode, there's a 1GB gap and a missing 1GB on top, which probably causes those crashes. That's just my theory; as said earlier, I'm no NT kernel memory specialist.
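    By the way, if you'd rather not use a hex editor, the MSVC toolchain can check and toggle the flag itself (run from a Visual Studio command prompt; "mangosd.exe" is just an example binary):

    dumpbin /headers mangosd.exe | findstr /i "large"
    rem the line "Application can handle large (>2GB) addresses" appears when LAA is set
    editbin /LARGEADDRESSAWARE mangosd.exe
    rem ...and /LARGEADDRESSAWARE:NO clears it again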

    Anyway, it all boils down to using a 64bit OS whenever possible; all major OSes are able to run both 64bit and 32bit code (given the proper libraries, which is - again - handled automatically). Having 4GB on a 32bit-only CPU is pretty rare, though my IBM R60 is living proof that it happens. So there's no reason to use a 32bit system on a 64bit-capable CPU ... other than saving a few MBs of memory and some GBs of disk space.

  9. freghar: On Windows, for the /3GB boot switch to work, you need to compile mangos with the large-address-aware flag. I don't think it's set by default, though I could be wrong.

    EDIT: NVM, it seems that it's set by default now. Did you try the /3GB flag in conjunction with the /PAE boot flag?

    Yes, I've tried it with /PAE as well, though PAE shouldn't be related to the 2/2 vs. 3/1 "switch".

    Is there any particular reason why 2/2-compiled applications shouldn't run on 3/1 systems? Aside from the LAA thing (which is just a Windows-specific binary flag, as I googled out), there's no particular reason I can think of - assuming the NT kernel puts the shared 1GB on top, leaving the lower 3GB of VAS for the application itself. It would make sense for a 3/1 -> 2/2 "move", where the binary could have hardcoded addresses above the 2GB boundary, but not the other way around. Though I have to admit I'm no expert in NT kernel memory management.

    PS: On a related note - I found out that WinXP is unable to address more than 4G of RAM even with /PAE, because of some *possible* 3rd-party driver issues. I understand that.

  10. Two reasons:

    1. You have to have a master branch at all times; this seemed like the simplest thing to put there.

    2. It makes it easier for newer people to clone and create a patch.

    1) is nonsense, even on github - you can easily delete master and set a new "active" branch (i.e. the one you want displayed on github.com/username/reponame by default).

    2) makes sense, though I'd name it "base" or something like that.
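    Roughly like this (the branch name "base" and the remote name "origin" are just examples; the default-branch switch itself is done in the repository settings on github.com):

    git branch base master          # start a "base" branch from the current master
    git push origin base            # publish it
    # switch the default branch to "base" in the GitHub repository settings, then:
    git push origin :master         # delete the remote "master" branch
    git checkout base               # move off master locally...
    git branch -d master            # ...and (optionally) delete it there as well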
