Everything posted by freghar

  1. Compile it. I do compile (and mostly static-link) several things for CentOS (like a newer mysqld, git, ...) in places where I'm forced to use CentOS. Moreover, git is quite easy to compile and static-link on some other modern distro; you can then move the git binary to your CentOS server and (optionally) make git-* symlinks to it (in /usr/local/bin, for example).
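As a rough sketch of that workflow (the source path, flags and host name below are my assumptions, not exact commands; static linking only works if static zlib/curl/openssl libraries are around):

```shell
# Build git on a modern distro, then ship the single binary to the CentOS box:
#
#   cd git-src                                        # unpacked git source tree
#   make prefix=/usr/local CFLAGS=-O2 LDFLAGS=-static
#   scp git centos-box:/usr/local/bin/git
#
# On the CentOS box: optional git-* symlinks for scripts that still call
# git-<subcommand> directly. bindir is overridable for easy testing.
bindir="${bindir:-/usr/local/bin}"
mkdir -p "$bindir"
for cmd in pull push fetch clone; do
    ln -sf "$bindir/git" "$bindir/git-$cmd"
done
```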
  2. I'm not sure about that .. since mangos uses its own makefile(s) for tbb, as the original tbb makefile wasn't quite compatible with autotools.
  3. Well, the "multiplier" means "base * m", where "m" is that multiplier; it's not "base ^ m". So if your auction time is 24h and you set the multiplier to 90, you get 24*90 hours = 90 days = roughly 3 months. Similarly, 48*90 hours = 180 days =~ 6 months for "48h" auctions. That's _very_ basic math.
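Spelled out with shell arithmetic (numbers taken straight from the post):

```shell
# base auction time (hours) * multiplier, converted to days
echo $(( 24 * 90 / 24 ))   # 24h auctions, multiplier 90 -> 90 days
echo $(( 48 * 90 / 24 ))   # 48h auctions, multiplier 90 -> 180 days
```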
  4. Old issues, old solutions, search for "anticheat" here.
  5. Three months worked for me. Do you need auction times longer than three months?
  6. That's why I recommend disabling autocrlf whenever possible - it allows "windows people" to correct CRLF/LF errors in the repository itself, which isn't possible with autocrlf turned on.
  7. Each time you change the autocrlf setting, you need to do a fresh checkout (either "checkout -f", or delete all files except .git/ and "reset --hard") to make the option actually _work_. If you check out your working tree with autocrlf=true, it will still have CRLFs in it after you change the setting to =false - that might be the reason why they show up. I use autocrlf=false and save my files using the "Unix format" (notepad++ settings) on windows - so far so good.
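A minimal sketch of that procedure (the throwaway demo repo exists only to make the snippet self-contained; in a real clone you'd run just the last two commands, and any uncommitted changes would be thrown away):

```shell
# demo repo so the commands below have something to act on
repo=$(mktemp -d) && cd "$repo"
git init -q .
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m init

# the actual fix: change the setting, then force a fresh checkout
git config core.autocrlf false
git checkout -f            # rewrites the working tree with the new setting
```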
  8. The autocrlf feature is weird indeed - it sometimes works, sometimes doesn't. So I chose to simply work around it - any more advanced text editor for windows supports LF-ended files, so autocrlf=false plus "anything except notepad.exe" works for me.
  9. I don't think mangos supports a "custom" type of scripts.
  10. Also see Which Language Is Right For You.
  11. Basically, that's the IP the client will connect to. Use your own logic to change it to whatever you need - i.e. it makes no sense to set it to your "external" IP if your players are going to connect from the LAN, and similarly, a LAN NIC IP address won't work for players outside. You can always create several realmlist entries and make them "point" to one world server.
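For example (hypothetical rows - the column subset is from the stock realmd schema, the IPs and names are made up), two realmlist entries pointing at the same world server:

```sql
-- realmd database: one entry reachable from the LAN, one from outside,
-- both "pointing" at the same mangosd instance on port 8085
INSERT INTO realmlist (name, address, port) VALUES ('MyRealm (LAN)', '192.168.1.10', 8085);
INSERT INTO realmlist (name, address, port) VALUES ('MyRealm (WAN)', '203.0.113.5',  8085);
```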
  12. Wrong world server IP address in the realmd.realmlist table. I'm moving this to general help.
  13. GCD checking? The only GCD algorithm I know of is Stein's (binary shifts in order to find the greatest common divisor of two numbers), and that seems completely unrelated to any anticheat system.
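For reference, Stein's binary GCD can be sketched even in POSIX shell arithmetic (my own sketch, unrelated to any mangos code; the function uses global variables, as plain sh has no locals):

```shell
# Stein's algorithm: strip common factors of 2 with shifts,
# then reduce the remaining odd numbers by subtraction
gcd() {
    a=$1 b=$2 k=0
    [ "$a" -eq 0 ] && { echo "$b"; return; }
    [ "$b" -eq 0 ] && { echo "$a"; return; }
    while [ $(( (a | b) & 1 )) -eq 0 ]; do      # both even
        a=$(( a >> 1 )); b=$(( b >> 1 )); k=$(( k + 1 ))
    done
    while [ $(( a & 1 )) -eq 0 ]; do a=$(( a >> 1 )); done
    while [ "$b" -ne 0 ]; do
        while [ $(( b & 1 )) -eq 0 ]; do b=$(( b >> 1 )); done
        [ "$a" -gt "$b" ] && { t=$a; a=$b; b=$t; }   # keep a <= b
        b=$(( b - a ))
    done
    echo $(( a << k ))                          # restore the shifted-out 2^k
}

gcd 48 18    # -> 6
```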
  14. I've never used the "fake server" for AC2, so I don't know any details other than that it's already been broken down to a "normal" crack level recently, i.e. no fake server needed - you can google out the details. Back on topic - no. Imagine a client-side "warden" using some kind of PGP-encrypted/signed messages. You can't modify a message without breaking the signature. This means there's no way around it without either breaking the (say, 4096-bit) encryption key or modifying the client (making your own client using extracted key(s) that mimics the "valid" one, directly modifying the client's memory to skip the "hack check", and so on). There's (currently) no known effective open-source way of ensuring client-level security. As said earlier - warden builds its success on closed source. There's actually a nice analogy saying "my computer = my fortress", which seems to be true. The safest solution for "remote" multiplayer would be to host a terminal server and allow only remote desktop logins directly into the game. PS: To complete my explanation - you can't (in most cases) even replay the client's reply, since it can use things like timestamps encoded within its message (like SPA auth does), randomly generated parts (see WPA/WPA2 authentication) and so on.
  15. What's the point? If it's client-side, you can't enforce its usage. There are _always_ ways to bypass it. Even if it uses some sort of asymmetric cipher to "sign" packets sent to the server, it can still be fooled. Even if you "overhack" the client's OS (as StarForce protection does) by running the OS kernel under your control, it can still be broken - using virtualization, or the firewire interface (direct memory access), i.e. halting the CPU for a few microseconds and modifying its registers / RAM. It isn't just warden; other games have e.g. punkbuster, which does something similar. There's technically no way to prevent cheating in multiplayer games unless you're controlling everything server-side. That includes movement calculations, weapon firing, player interaction, and so on. It can be reasonably managed for 32-64 players (i.e. battlefield-like games), but there's no chance for games like WoW .. except using some kind of trojan on the client (warden).
  16. WoW itself is not good in this respect. In fact, it's a choice between mostly-lag-free gameplay and strictly latency-based movement (like Enemy Territory); the former has much less overhead, at the expense of possible cheating.
  17. Use the default value of 0.0.0.0 in the config file.
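I.e. the relevant line (assuming the stock mangosd.conf / realmd.conf option name), which binds the server to all local interfaces:

```
BindIP = "0.0.0.0"
```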
  18. Please make a new thread for that question; I'll delete it here after that, just to keep things clean, thanks. Oh, and .. be more specific - I have no idea what kind of patch you want.
  19. First, using select() isn't the same as "using one select() in the whole project". The most effective way I know of is to create a (possibly dynamic) pool of threads that do select() / poll() and queue data for processing. There are several reasons for that, including bonus features like packet payload defragmentation (since no read() is guaranteed to return a full packet from a network socket). I guess mangos does it similarly; at least that's what mangosd.conf suggests:

     ```
     ###################################################################
     #    NETWORK CONFIG
     #
     #    Network.Threads
     #        Number of threads for network, recommend 1 thread per 1000 connections.
     #        Default: 1
     #
     #    Network.OutKBuff
     #        The size of the output kernel buffer used (SO_SNDBUF socket option, tcp manual).
     #        Default: -1 (Use system default setting)
     #
     #    Network.OutUBuff
     #        Userspace buffer for output. This is the amount of memory reserved per connection.
     #        Default: 65536
     #
     #    Network.TcpNoDelay
     #        TCP Nagle algorithm setting
     #        Default: 0 (enable Nagle algorithm, less traffic, more latency)
     #                 1 (TCP_NO_DELAY, disable Nagle algorithm, more traffic but less latency)
     #
     #    Network.KickOnBadPacket
     #        Kick player on bad packet format.
     #        Default: 0 - do not kick
     #                 1 - kick
     #
     ###################################################################
     ```

     Moreover, I remember Derex implementing /dev/epoll interaction (kernel-level poll()), perhaps it's already within ACE.
  20. Furthermore, the client doesn't send the password in plaintext anymore. At least AFAIK.
  21. Yes, I was able to read all that on the web - that the LAA flag is there to *not* negatively affect performance for apps that use less than 2G of RAM (and thus don't need 3/1). It's even possible to have a 4/4G "switch" - in fact there was a patch for the 2.3/2.4 linux kernel doing that, but removing the shared part (and thus needing context switches, TLB flushes, ... on every kernel entry) caused about a 30% slowdown. There are also ways to access more than 4G of RAM from within a single process, either using some fancy-named windows feature or mmap() hooks. What I didn't understand was why apps crashed with a forced LAA flag even when they were compiled without it - IIRC there was (on linux) also a 2/2 -> 3/1 switch around the 2.2-2.3 kernel and no apps needed recompilation. I'm not talking about the LAA flag itself - the flag (as I found out) is just an "allow 3/1 for me" thing; it has nothing to do with the actual binary code (i.e. it can be switched using a HEX editor). However, since most of the crash-sensitive applications were/are big ones like certain games, I guess the NT kernel puts the shared part (2GB) in the lower section of VAS, leaving the upper 2GB for app data. When the kernel tries to run the binary in 3/1 mode, there's a 1GB gap and a missing 1GB on top, which probably causes those crashes. That's just my theory; as said earlier, I'm no NT kernel memory specialist. Anyway, it all boils down to using a 64bit OS whenever possible - all major OSes are able to run both 64bit and 32bit code (given proper libraries, which is - again - automated). Having 4GB on a 32bit CPU is pretty rare, though my IBM R60 is living proof of it. So there's no reason to use a 32bit system on a 64bit-capable CPU ... other than saving a few MBs of used memory and some GBs of disk space.
  22. Yes, I've tried it with /PAE as well, though PAE shouldn't be related to the 2/2 or 1/3 "switch". Is there any particular reason why 2/2-compiled applications shouldn't run on 1/3 systems? Aside from the LAA thing (which is just a windows-specific binary flag, as I googled out), there's no particular reason I can think of - assuming the NT kernel puts the shared 1GB stuff on top, leaving the lower 3GB of VAS for the application itself. It would make sense for a 1/3 -> 2/2 "move", when the binary could have hardcoded addresses above the 2GB boundary, but not the other way around. Though I have to admit I'm no expert in NT kernel memory management. PS: On a related note - I found out that winXP is unable to address more than 4G of RAM even with /PAE, because of *possible* 3rd party driver issues. I understand that.
  23. 1) is nonsense, even on github - you can easily delete it and set a new "active" branch (i.e. the one displayed on github.com/username/reponame by default). 2) makes sense, though I'd name it "base" or something like that.