
How to Hurt the Hackers: The Scoop on Internet Cheating and How You Can Combat It

Ever wonder how to confront the never-ending struggle with cheaters who are determined to hack your game? Ensemble Studios' Matt Pritchard defines the rules of the cheating game and shares his insight on the methods and motivations of hackers, based largely on his experience with the Age of Empires franchise.

July 24, 2000

35 Min Read

by Matthew Pritchard

I had planned to begin this article by sharing my own true experiences with online cheating as it pertained to a particular game. But I think the long version of my story would cast an unnecessarily negative light on the game and the company that made it. And since the developers are good friends of ours, I'll stick to the short version that goes like this.

Last year I became hooked on a certain first-person shooter (FPS) game. After a couple of months of addictive online gaming, I became convinced that some players were cheating, and things suddenly changed for me that day: I was ready to walk away from the game in disgust and tell everyone else to do the same. Instead, I decided it was time to learn what I could about the alleged cheaters, their motivations, and most importantly their methods. In my case, I discovered at least three distinctly different methods of cheating that could explain what I experienced -- though as just a player I could not prove conclusively which methods, if any, were being used against me.

The aim of this article is to bring the subject of online/multiplayer cheating out of the shadows and talk about it in terms of real problems with real games and to help build a framework for classifying and understanding the various details. I will cover some of the ways that players are able to cheat at various games; at times I will go into the working details, ways to prevent those cheats, and limitations of various game architectures as they relate to multiplayer cheating. This is by no means a comprehensive and exhaustive tome on the issue, but it is a start. There is a serious lack of information on this subject, and paranoia among developers that talking about it will reveal secrets that will only make the problem significantly worse. Several individuals at various companies declined to talk to me about cheating and their games for this and other similar reasons. I respect that, but I think developers have everything to gain by sharing our knowledge about cheaters and how to combat them.

Just how seriously should you as a developer take the possibility of online cheating? If your game is single-player only, then you have nothing to worry about. But if your game is multiplayer only, the success of your entire product is at stake. If your game does both, you're somewhere in the middle. As more games are released with online play as an integral component, drawing ever-larger audiences (and the corollary development of online communities and sites based around the game), it becomes ever more important to ensure that each online player gets what he or she believes to be a fair and honest experience. I'm reminded of a quote from Greg Costikyan's excellent report, "The Future of Online Gaming" (http://www.costik.com): "An online game's success or failure is largely determined by how the players are treated. In other words, the customer experience -- in this case, the player experience -- is the key driver of online success." Our short version is, "Cheating undermines success."

Consider the well-known case of Blizzard's Diablo -- deservedly a runaway best-seller and great game that acquired a significant reputation for a horrible multiplayer experience because of cheaters. Many people I know either refused to play it online, or would only play over a LAN with trusted friends. Blizzard did their best to respond, patching it multiple times, but they were fighting an uphill battle.

Cheating hit closer to home for me while I was working on the final stages of Age of Empires II: The Age of Kings. Cheating online became a widespread problem with the original Age of Empires. Tournaments had to be cancelled due to a lack of credibility, the number of online players fell, and the reputation of my company took a direct hit from frustrated users. Unable to spare the resources to fix the game properly until after Age of Kings was done, we just had to endure our users turning their anger upon us -- probably the most personally painful thing I've experienced as a developer.

What about your next game? This is a good time to introduce my first two rules about online cheating:

Rule #1: If you build it, they will come -- to hack and cheat.

Rule #2: Hacking attempts increase with the success of your game.

Need more reasons to take online cheating seriously? Go onto eBay and type in the name of your favorite massively multiplayer game. Now look at the real money changing hands for virtual characters and items. What if those items being sold were obtained via some sort of cheat or hack? Let's not overlook the growth of tournaments and contests for online games. Consider the public relations nightmare that would ensue if the winner of a cash prize in a tournament had cheated. Enough to give you a headache, eh?

Understanding the Hackers and Cheaters

The sad truth is that the Internet is full of people that love to ruin the online experiences of others. They get off on it. A great many cheaters use hacks, trainers, bots, and whatnot in order to win games. But while some openly try to wreak havoc, many really want to dominate and crush opponents, trying to make other players think they are gods at the game -- not the cheaters they are. The only thing that seems to bother them is getting caught. Beyond that, no ethical dilemmas seem to concern them. The anonymity and artificiality of the Internet seems to encourage a moral vacuum where otherwise nice people often behave in the worst possible way. A big factor in this is a lack of consequences. If a player is caught, so what? Are they fined or punished? No. Are they rejected by the people they played against? Usually, but it's so easy to establish another identity and return to play that discovery and banishment are no barrier to those with ill intent.

Another interesting aspect of online cheating is the rise of clans and how cheats get propagated. If a member of a clan hacks a game or obtains a not-readily-available program for cheating, it will often be given to other members of the clan with the understanding that it's for clan use only and to be kept secret. The purpose being, of course, to raise the standing and prestige of the clan. If the cheater is not a clan member, odds are he will keep the secret to himself for a while and not advertise his advantage. The logic here is simple: If anyone goes public with a cheat, a) he will lose his advantage, b) he will probably be identified by his opponents as a cheater, and c) the developer can then patch the game, invalidating the cheat. As a result of this secretive behavior we get to rule number three.

Rule #3: Cheaters actively try to keep developers from learning their cheats.

Tools of the Hackers

So how do they discover the hacks and create the programs to cheat at your game? Consider rule number four:

Rule #4: Your game, along with everything on the cheater's computer, is not secure. The files are not secure. Memory is not secure. Services and drivers are not secure.

That's right, you gave them a copy of your game when they purchased it. The hackers have access to the same tools that you had while making the game. They have the compilers, disassemblers, debuggers, and utilities that you have, and a few that you don't. And they are smart people -- they are probably more familiar with the Assembly output of an optimized C++ file than you are. The most popular tool among the hackers I surveyed was NuMega's excellent debugger, SoftICE -- definitely not a tool for the wimpy. On another day, you just might be trying to hire these people. Many of them possess a true hacker ethic, doing it just to prove it can be done, but more do it specifically to cheat. Either way we get the same result: a compromised game and an advantage to the cheater.

Hacking games is nothing new; it's been going on as long as there have been computer games. For single-player games, it has never been an issue, since no matter what a player does with a game, he's only doing it to himself (and therefore must be happy about it). What's new is bringing the results of the hacking to other players, who never wanted or asked for it.

I've lost count of the number of developers I've encountered who thought that because something they designed was complicated and nobody else had the documentation, it was secure from prying eyes and hands. This is not true, as I learned the hard way. If you are skeptical, I invite you to look at the custom graphics file format used in Age of Empires. Last year, I received a demanding e-mail from a kid who wanted the file format for a utility he was writing. I told him to go away. Three days later he sent me the file format documentation that he had reverse-engineered, and asked if he had missed anything. He hadn't. It was a perfect example of rule number five. Yes, I've borrowed it from cryptography, but it applies equally well here.

Rule #5: Obscurity is not security.

Sometimes we do things, such as leaving debug information in the game's executable, that make the hacker's job easier. In the end, we cannot prevent most cheating. But we can make it tough. We don't want effective cheating to be a matter of just patching six bytes in a file. Ideally we want hacking a game to be so much work that it approaches the level of having to completely rewrite the game -- something that goes outside the realm of any reasonableness on the hacker's part.

One of the biggest things we often do that makes it easier for a hacker, and thus harder on us, is include Easter eggs and cheat codes in the single-player portion of our games. Considered to be practically a requirement, they expose extralegal capabilities of our game engines and make it much easier for the hackers to locate the data and code that controls that functionality.

Models of Multiplayer Communications

Most online games use one of two communication models: client-server and peer-to-peer. For our discussion, the deciding factor is where game event decisions are made. If only one player's (or a separate) computer makes game event decisions or has the game simulation data, it is client-server. If all players' computers make some or all of the game event decisions, or have the full game simulation, then it's peer-to-peer. Many of the cheating methods described here are applicable to both models. I've organized the various cheats, trainers, exploits, and hacks that I've learned about into the categories listed in Table 1.

Table 1. Cheating classifications

Reflex augmentation

Authoritative clients

Information exposure

Compromised servers

Bugs and design loopholes

Environmental weaknesses

 

Oh Look, It's the Terminator...

The first type of cheat is reflex augmentation, which is when a computer program replaces human reaction to produce superior results. This type of cheating is really only applicable to games where reflexes and reaction times matter, and thus is most applicable to action games.

During my FPS obsession, I believe that I encountered a form of reflex augmentation known as an aiming proxy. An FPS aiming proxy works like this: The proxy program is run on a networked computer and the player configures it with the address of the server they are going to play on. They then run the FPS game on another machine and connect to the proxy machine, which in turn connects the game to the server, acting just like an Internet packet router.

The only hitch is that the proxy monitors and attempts to decode all of the packets it is routing. The program keeps track of the movements and locations of all the players the server is reporting to the game, building a simple model. When the proxy sees a Fire Weapon command packet issued by the cheating player, it checks the locations and directions of all the players it is currently tracking and picks a target from them. It then inserts a Move/Rotate command packet into the stream going to the server in front of (or into) the Fire Weapon command packet that points the player straight at the selected target. And there you have it: perfect aim without all the mouse twisting.
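To make the mechanics concrete, here is a rough sketch of just the targeting step -- not taken from any real proxy, with every structure and function name invented for illustration. Given the shooter's position and the positions the proxy has decoded from the packets it routes, it picks the nearest tracked player and computes the yaw the inserted Move/Rotate packet would carry:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Positions as the proxy models them from decoded packets (2D for brevity).
struct Pos { float x, y; };

// Index of the nearest tracked player, or -1 if none are tracked.
int PickTarget(const Pos &self, const std::vector<Pos> &others)
{
    int best = -1;
    float bestDistSq = 1e30f;
    for (size_t i = 0; i < others.size(); ++i) {
        float dx = others[i].x - self.x;
        float dy = others[i].y - self.y;
        float distSq = dx * dx + dy * dy;
        if (distSq < bestDistSq) { bestDistSq = distSq; best = (int)i; }
    }
    return best;
}

// Yaw (radians) for the Move/Rotate packet inserted ahead of Fire Weapon.
float AimYaw(const Pos &self, const Pos &target)
{
    return std::atan2(target.y - self.y, target.x - self.x);
}
```

The early proxies essentially stopped here, which is why they shot players standing behind the cheater; the later refinements the next paragraph describes amount to filtering the candidate list by field-of-view and adding noise to the computed yaw.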

When aiming proxies for Quake first appeared a couple of years ago, their targeting wasn't too sophisticated and didn't take into account things such as the player's field-of-view (FOV) or lag. Giveaways, such as players shooting weapons out of their backs, tipped people off that something foul was afoot. One of the first countermeasures to be developed was a server add-on that statistically identified players whose aim was too good to be true, then kicked out and banned the perpetrators. This naturally proved controversial, since some people really are "railgun gods," and the issue of possibly falsely identifying a person as a cheater was raised (and has yet to go away). And of course, the aiming proxies evolved with time. Later versions were improved to consider only the player's current FOV and compensate for lag, and added just enough randomness in their aim to stay below a server's "too good to be legit" identification threshold.
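A minimal sketch of such a statistical server check might look like the following; the `AimStats` structure and both thresholds are my own invention, and tuning those two knobs is exactly where the false-positive controversy lives:

```cpp
#include <cassert>

// Per-player counters a server add-on might accumulate over a session.
struct AimStats {
    int shots;
    int hits;
};

// True if this player's accuracy over a meaningful sample is implausible.
// Too low a ratio bans genuine "railgun gods"; too high, and a proxy with
// a little added randomness stays just underneath the threshold.
bool LooksAugmented(const AimStats &s, int minShots, double maxHitRatio)
{
    if (s.shots < minShots)
        return false;                    // not enough evidence yet
    return (double)s.hits / (double)s.shots > maxHitRatio;
}
```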

This big vulnerability is summed up in rule number six. Since the proxy is not running on the same computer as the game client, definitive detection can be next to impossible. Making the development of the proxy extremely difficult then becomes a priority.

Rule #6: Any communication over an open line is vulnerable to interception, analysis, and modification.

One way to inhibit this form of cheating is to encrypt the command packets so that the proxies can't decode them. But there are limits to the extent that encryption can be used on communications. Most FPS games send and receive a couple of kilobytes of data or more per player per second, and have to allow for lost and out-of-order packets. The encryption therefore has to be fast enough not to impact frame rate, and a given packet's encryption cannot be dependent on any other packet unless guaranteed delivery is used. And once the encryption is cracked, the game is vulnerable until the encryption is revised, which usually involves issuing a patch. Then the hacking starts over.

Another way to make life more difficult for the proxy creator is to make the command syntax dynamic. Using something as simple as a seed number that's given to the game when it connects and a custom random number function, the actual opcodes used in the communication packets can be changed from game to game, or even more often. The seed itself doesn't have to be transmitted; it could be derived from some aspect of the current game itself. The idea here is that since a proxy sees all the communications, but only the communications, the random seed is derived from something not explicitly communicated. Foolproof? No. But it's far more difficult to hack, forcing the hackers to start from scratch.
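The opcode scramble above can be sketched in a few lines. This version uses the standard library's shuffle in place of the custom random function the text suggests, and all names are invented; both endpoints run it with the same seed to agree on a per-game mapping:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Build the per-game permutation: wireOpcode = table[logicalOpcode].
// The seed would ideally be derived from game state never sent on the wire.
std::vector<uint8_t> BuildOpcodeTable(uint32_t seed, int numOpcodes)
{
    std::vector<uint8_t> table((size_t)numOpcodes);
    std::iota(table.begin(), table.end(), (uint8_t)0);
    std::mt19937 rng(seed);                 // a custom RNG in a real game
    std::shuffle(table.begin(), table.end(), rng);
    return table;
}

// The receiver inverts the mapping to recover the logical opcode.
std::vector<uint8_t> InvertTable(const std::vector<uint8_t> &table)
{
    std::vector<uint8_t> inv(table.size());
    for (size_t i = 0; i < table.size(); ++i)
        inv[table[i]] = (uint8_t)i;
    return inv;
}
```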

If guaranteed delivery is used, another communications protection technique is to serialize each packet. Taking it a bit further, you could make a portion of the next serial number dependent on a checksum of the last packet. While there are speed issues with the delivery, it's an excellent way to make it difficult to insert or modify packets.
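A sketch of that chaining, assuming guaranteed in-order delivery; the checksum and serial layout here are illustrative, not from any shipped game. The low half of each serial is a checksum of the previous packet, so a packet inserted or altered mid-stream breaks the chain at the receiver:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A simple rolling checksum over a packet's bytes (illustrative only).
uint16_t PacketChecksum(const std::vector<uint8_t> &data)
{
    uint16_t sum = 0;
    for (uint8_t b : data)
        sum = (uint16_t)(sum * 31u + b);
    return sum;
}

// High half: running counter. Low half: checksum of the previous packet.
uint32_t MakeSerial(uint16_t counter, uint16_t prevChecksum)
{
    return ((uint32_t)counter << 16) | prevChecksum;
}

// Receiver-side check: does this packet's serial match what the last
// accepted packet predicts?
bool SerialValid(uint32_t serial, uint16_t expectedCounter,
                 const std::vector<uint8_t> &prevPacket)
{
    return serial == MakeSerial(expectedCounter, PacketChecksum(prevPacket));
}
```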

Though reflex augmentation seems to be exclusive to FPS games, the vulnerability extends to any game where quick reflexes can make a difference and game communications can be sniffed.

The Client Is Always Right

The next major class of cheats is exploiting authoritative clients. This is when one player's modified copy of an online game tells all the other players that a definitive game event has occurred. Examples of the communications would be "player 1 hit player 2 with the death-look spell for 200 points of damage," "player 2 has 10,000 hit points," and so on. The other players' games accept these as fact without challenging them and update their copy of the game simulation accordingly.

In this case, a hacked client can be created in many ways: The executables can be patched to behave differently, the game data files can be modified to change the game properties on the hacked client, or the network communication packets can be compromised. In any case, the result is the same - the game sends modified commands to the other players who blindly accept them. Games are especially vulnerable to this type of exploit when they are based on a single-player game engine that has been extended to support online multiplay in the most direct (read: quickest to develop) manner.

Fortunately there are several steps that a game developer can take to eliminate most problems with authoritative clients. A first step is to install a mechanism in the game that verifies that each player is using the same program and data files. This means going out and computing a CRC or similar identifier for all the data in question, not just relying on a value stored in the file or the file size. A nice side benefit is that this method also detects out-of-date files during the development process.
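The hashing half of that mechanism can be sketched with a standard CRC-32 over the file's actual bytes; reading the files from disk and exchanging the results with the other players is omitted here. The point of hashing the real bytes is that a hacked client can trivially leave a file's size or a stored header value untouched:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Standard CRC-32 (reflected 0xEDB88320 polynomial), bit-at-a-time for
// clarity; a table-driven version would be used in practice.
uint32_t Crc32(const std::vector<uint8_t> &data)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (uint8_t b : data) {
        crc ^= b;
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}
```

Each player computes this over the executable and data files in question and the results are compared before play begins; a mismatch means someone's files differ.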

For peer-to-peer games, cheating can be made difficult by changing from a game engine that issues commands to one that issues command requests. It's a subtle distinction but one that requires engineering changes throughout the game. It also requires that each player's machine run a full copy of the game simulation, operating in lockstep with the other players.

Command processing in a single-player game typically works in the manner shown in Figure 1. The player issues some sort of command via the game's user interface. The game then performs a validation check on the command to see if the player has the resource, the move is legal, and so on. The game then performs the command and updates its internal game simulation. Figure 2 shows game engine command processing extended to support multiple players in the most direct way possible. The process stays the same except for the addition of a communications packet that's sent out to inform the other players of what has taken place. The receiving players integrate the data directly into their world simulation.

 

With the shift to command requests, the order of events changes a bit, which is shown in Figure 3. After determining that the command is a legal one, a command request describing the command is sent out to other players and is also placed into the player's own internal command queue, which contains command requests from other players as well as his own requests. Then the game engine pulls command requests from the queue and performs another validation check, rejecting the request if it fails. The fundamental difference is that every player has a chance to reject every action in the game based solely on the information on that player's machine. No other machine provides the information to make the determination on what is right and wrong. A hacked game cannot reach out and alter what's on an honest player's machine with this approach. Note that such an architecture works equally well for a single-player game.
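The request flow of Figure 3 might be sketched like this, with a toy simulation in which a command's only validation is a resource cost; the structures and names are mine, not from any shipped engine. The essential property is that every machine validates every request, including its own, against its local simulation before executing it:

```cpp
#include <cassert>
#include <deque>

// A command request as it arrives in each player's queue.
struct Request {
    int player;
    int cost;     // stand-in for whatever resources/legality the command needs
};

// A toy stand-in for the full game simulation each machine runs in lockstep.
struct Simulation {
    int resources[4] = {100, 100, 100, 100};

    bool Validate(const Request &r) const { return resources[r.player] >= r.cost; }
    void Execute(const Request &r)        { resources[r.player] -= r.cost; }
};

// Drains the queue. Returns the player behind the first request that fails
// validation -- an honest client pre-checks before sending, so a failure
// here marks a suspected cheater -- or -1 if everything was legal.
int ProcessRequestQueue(Simulation &sim, std::deque<Request> &queue)
{
    while (!queue.empty()) {
        Request r = queue.front();
        queue.pop_front();
        if (!sim.Validate(r))
            return r.player;
        sim.Execute(r);
    }
    return -1;
}
```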

Preventing a dishonest command from being accepted on an honest player's machine is only half the task. The game also has to be able to determine whether someone is playing the same game and if not, it must do something about it. For instance, when a received command request is rejected for reasons that should have prevented it from being issued in the first place (remember, the issuer is supposed to have checked it for validity before passing it to the other players), all other players should assume that a cheater is in their midst, and take some sort of action.

Often though, due to design issues (such as posting command requests to a future turn), it is not possible to thoroughly ensure that all command requests passed to other players won't be rejected if a player is being honest. A good way to deal with this is to add synchronization checking to the game. At various points during the game, each player's machine creates a status summary of the entire game simulation on that computer. The status, in the form of a series of flags, CRCs, and checksums, is then sent to all the other players for comparison. All the status summaries should be the same, provided the game program and data files are the same for each machine. If it turns out that one player has a different status from all the rest, the game can take action (like drop the player from the game). The idea is that a hacked game should cause that player's game simulation to produce different results.

Alternatively, you can make life even more difficult for the hacker by easing up on the received command request evaluations. By allowing command requests to bypass the verification check only on the machine that issued it, you're deliberately allowing the game to go out of synch if the initial verification check or data has been hacked. Combine this with a synchronization check that occurs somewhat infrequently and you've presented the hacker with something of a mystery - on his machine the cheat worked, but then a while later the other players booted him out of the game.

This status synchronization has a huge benefit for the development process as well. Getting a complicated game engine to produce the same game simulation results while having different player views, inputs, and settings is a very difficult task. It's difficult to keep the simulation-independent code from accidentally impacting the simulation. For example, a compare against the current player number variable in the simulation code, or randomly playing a background sound based on an object in the player's view using the same random function used by the simulation, could cause future executions to produce different results on different machines. Judicious use of status synchronization allows a developer to quickly narrow down the portion of the game that isn't executing the same for all players.

Client-server games unfortunately can't benefit as much from these techniques, as they lack full game information and by design must rely on the authority of the server. We will look at this more a bit later.

The next major class of cheats is what I've dubbed "information exposure." The principle is simple: On a compromised client, the player is given access or visibility to hidden information. The fundamental difference between this and authoritative clients is that information exposure does not alter communications with the other players. Any commands sent by the cheater are normal game commands - the difference is that the cheater acts upon superior information.

The first-person-shooter cheats of modified maps and models arguably fall under this classification, as they let cheating players see things that they normally wouldn't be able to (in the case of modified maps), or see them more easily (in the case of a modified player model that glows in the dark). Any game whose game play relies on some information being hidden from a player has a lot to lose to these types of cheats.

The real-time strategy (RTS) genre suffers severely from this. The most obvious examples are hacks that remove the "fog of war" and "unexplored map" areas from the display. With a fully visible map, the cheating player can watch what other players are planning and head them off at the pass, so to speak.

There are a couple of ways the hacker accomplishes this. The hacker may go after the variables that control the display characteristics of the map. With the help of a good debugger and single-player cheat codes to reveal the whole map, finding the locations in memory that control the map display is fairly simple. Then either the game .EXE file is modified to initialize those map control values differently, or a program is made that attaches to the game's memory space and modifies the variable values while the game is running. To combat this, the values of those variables should be regularly reported to other players in the form of a checksum or CRC code. Unfortunately, that only raises the stakes; the hackers then just attack the code that reads those control values (easy enough to find quickly), inverting or NOP'ing out the instructions that act upon them.

Additional techniques are needed to detect the hacked game view. There are a couple of ways to take advantage of the fact that the full game simulation is run on all clients. One way is to borrow a technique from the "authoritative client" section and check each command request for the side effects of a hacked map on one of the players. We specifically ask the game simulation, which is separate from the screen display, the question, "Can that player see the object he just clicked on?" In doing this we are assuming ahead of time that such hacks will be attempted, making sure we consider the side effects by which they might be detected. Once again, easing up on checks of the player's own machine is very useful. The next time the game performs a synchronization check, all the other players will agree that the cheating client is "out of synch" with the rest of the game and can deal with him accordingly.

Another technique that avoids looking at the display control variables is to compile abstract statistics on what gets drawn to the screen. The statistics are derived from the game simulation data and just filed away. This doesn't immediately prevent the hacker from cheating; instead, you send the statistics around as part of the status synchronization and see what the other players think of them.

In the RTS map-hack case, it is necessary for some change to be made to the game; either the code or some data is in a modified state while the game is running. And if something has been modified, you can attempt to detect that.

But information exposure cheats can be totally passive. Consider a scenario where a program gains access to the memory space of an RTS game that is running. It then reads key values for each player in the game out of memory and sends them to an adjacent networked computer. An industrious hacker once raised that scenario with me regarding one of the Age of Empires games, saying he had figured out how to read out of memory the resource amounts for every player. At first we thought that this wasn't very serious. He then explained that if he polled the values a couple hundred times a second, he could identify nearly every discrete transaction. A simple Visual Basic program could then display a log window for each player, with messages for events such as the training of various units (to the extent they could be distinguished from others on the basis of cost), and messages for events such as building construction, tribute, and advancement to the next age. Basically, this cheating method was the next best thing to looking over the shoulders of his opponents.

Rule #7: There is no such thing as a harmless cheat or exploit. Cheaters are incredibly inventive at figuring out how to get the most out of any loophole or exploit.

Intrigued, I asked him how he could be sure he had found the correct memory locations each time, as they changed each game since they were stored in dynamically allocated classes. His answer was most interesting. He first scanned the memory space of a paused game looking for known values for things such as population, wood, gold, and other very significant game values that he knew about and believed were unique. He had a simple custom program that looked for the values in basic formats such as long ints and floats. After his program identified all the possible addresses with those values, he ran the game a bit more until the values had changed. He then reran the program, checking the prior list of locations for the new values, reducing the list of possible addresses until he was sure he had found the correct locations. He then put a read-access breakpoint on the value and looked at how it was accessed from various points in the code. At one of the breakpoints, the C++ code for accessing the wood amount looked something like this:

GAME_MASTER->GAME_WORLD->PLAYER[n].ResourceAmount[Wood_Index];

This is a pointer to a pointer to an object containing an array of integers, one of which contains the value of the player's current stockpile of wood, and all the objects are dynamically allocated. The hacker's point was that if you trace back through all the dynamic pointers, you eventually find a static variable or base pointer. The different spots where his breakpoints were triggered were from member functions at different levels in the class hierarchy, and even from outside the class hierarchy containing the data. And it was finding an instance of that latter access condition that was the jackpot. There it was in his debugger's disassembly window: a base address and the Assembly code to traverse through the classes and handle player and resource index numbers.
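The scan-and-narrow process he described is easy to reproduce in outline, which is worth seeing to appreciate how little protection raw values in memory have. Treating memory as a flat array of ints for simplicity (both function names are mine):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// First pass: every offset in 'memory' currently holding a known value,
// such as the player's starting wood count.
std::vector<size_t> ScanForValue(const std::vector<int> &memory, int value)
{
    std::vector<size_t> hits;
    for (size_t i = 0; i < memory.size(); ++i)
        if (memory[i] == value)
            hits.push_back(i);
    return hits;
}

// Later passes: after playing on until the value changes, keep only the
// prior candidates that now hold the new value.
std::vector<size_t> NarrowCandidates(const std::vector<int> &memory,
                                     const std::vector<size_t> &candidates,
                                     int newValue)
{
    std::vector<size_t> hits;
    for (size_t i : candidates)
        if (memory[i] == newValue)
            hits.push_back(i);
    return hits;
}
```

A couple of iterations of this usually leaves exactly one address, which is why the countermeasures below focus on making the sought-after value never appear in memory in plain form.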

Considering all this, I found a couple of strategies that can greatly reduce the likelihood of this sort of passive attack. Again, these tips cannot guarantee 100 percent security, but they make the hacker's job much harder.

The first strategy is to encrypt very significant values in memory at all times. Upon consideration, most game variables are not significant enough to warrant such protection - the hit points of a particular object don't tell anyone much, while a drop of 1,000 food and 800 gold from a player's resources does indicate that the player is advancing to the Imperial Age, which is an event of large strategic importance in our game. Simple encryption is relatively easy when all access to the variables goes through accessor functions. A self-inverse operation such as XOR is your friend here, as it alters values upon storing, restores them upon reading, and is extremely fast. The whole point is to make it very hard for the hacker to find the variables he is searching for in the first place. Values the hacker would know to look for are not left around so that a simple scan can find them. In C++, our encrypted accessor functions for game resources look something like what's shown in Listing 1.

Listing 1. Hiding the variables that tip off hackers to possible cheats.

void GameResource::SetResource(int Resource_Num, int Resource_Amount)
{
    GameResourceAmount[Resource_Num] = Resource_Amount ^ EncryptValue[Resource_Num];
}

int GameResource::GetResource(int Resource_Num)
{
    return( GameResourceAmount[Resource_Num] ^ EncryptValue[Resource_Num] );
}

//and more specific functions...

void GameResource::SetWood(int Wood_Amount)
{
    GameResourceAmount[RESOURCE_WOOD] = Wood_Amount ^ EncryptValue[RESOURCE_WOOD];
}

int GameResource::GetWood(void)
{
    return( GameResourceAmount[RESOURCE_WOOD] ^ EncryptValue[RESOURCE_WOOD] );
}

The second strategy for slowing down passive attacks is to never access very significant values from outside the class hierarchy. Assuming the values are located while using the debugger, try not to access them in a way that starts with a reliably fixed memory address. Combining this with small, randomly sized spacing buffer allocations during the main game setup ensures that the memory addresses for vital information will never be the same from one game to the next. A piece of C++ code you won't see in our next RTS game would be the following:

GLOBAL_GAME_POINTER->PLAYER_DATA[n]->RESOURCE_TABLE[gold] = SOME_UNIQUE_START_VALUE;
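The random spacing allocations mentioned above might look like the following sketch; the names are invented, and a real game would seed its RNG per session and free the buffers at shutdown. A few throwaway allocations of random size during setup shift the heap addresses of everything allocated afterward, so an address a hacker found in one game is useless in the next:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Kept only so the buffers can be freed at shutdown; never otherwise used.
static std::vector<void *> gSpacingBuffers;

void AllocateSpacingBuffers(int count)
{
    for (int i = 0; i < count; ++i) {
        // Random-sized gap between 16 bytes and ~4K.
        size_t padding = 16 + (size_t)(std::rand() % 4096);
        gSpacingBuffers.push_back(std::malloc(padding));
    }
}
```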

Information access isn't limited to games as complex as RTS games; it extends to something as simple as a card game. Consider an online card game such as poker. All it would take to ruin the game is for a player to see the values of the face-down cards in another player's hand. If the information is on the machine, hackers can go digging for it. This goes back to rule number four.

Who Do You Trust, Baby?

In client-server games, because so much is controlled by the server, the game is only as good as the trust placed in the server and those who run it.

Rule #8: Trust in the server is everything in a client-server game.

An issue arises here because some client-server games can be customized by the user running the server. Access and configurability are great for many games, as they allow the player community to extend and evolve a game. But some individuals will test the server to see what can be exploited in the name of cheating. This in itself is not the problem - the problem comes when honest but unaware players find their way to such a server and don't realize that they are not on a level playing field.

You really need to consider your audience here. A successful game will sell hundreds of thousands of copies, if not millions. You as a developer will be most in tune with the hard-core players - those who know the game inside and out. But it's easy to forget about the more casual players, who probably will be the majority of purchasers of your game once you pass a certain level of success. These are the people who don't know to check the status of the Cheats_Allowed flag before joining a server, or that game rule changes are transparently downloaded when they connect. All they probably know is the default game configuration, and when they see their ReallyBFG27K gun doing only 0.5 points of damage, they're going to cry foul. It doesn't matter that it was technically legal for the server operator to make the change; you still wind up with a user who is soured on the game and not likely to recommend it to his buddies anymore.

Naturally, people get a whole lot more unhappy with a game when they encounter modifications with malicious intent. What if a clan decided to add a tiny server mod to their FPS server that looked something like this snippet of C code:

if (strstr(Player.Name, "OUR_CLAN") != NULL)
Taken_Damage = Taken_Damage * 0.80;

Or what if the remote console was hacked to allow normal cheats to be toggled? Dishonest players in league with the server operator could make a key-bind that resembled this:

Access_Password on; Set Cheats_Allowed true; Give Big_Ass_Weapon; Give Big_Ass_Ammo; Set Cheats_Allowed false; Access_Password off;

The important point here is that with user-run servers and powerfully configurable game engines, these kinds of shenanigans will happen. While we as developers can't protect our more casual users from joining any game server they wish, we can do a better job of letting them know when they are encountering something that could be different from what they expect. Quake 3: Arena set a great example when it introduced the concept of a "pure" server. It's a simple idea that casual users can quickly grasp and set their gaming expectations by.

But why stop there? If we download data that includes a new set of weapon properties, why not put a message on the screen saying, "Weapon properties modified"? If single-player cheat commands are issued in the middle of a game, maybe we should send a message to every client notifying them of that fact, so even players who aren't near the issuer can be made aware. Empower players to easily determine whether the games are fair or not.
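The notification idea above can be sketched in a few lines. This is an illustrative example, not code from any real engine - the key point is that the notice goes to every connected client, not just players near the issuer:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch of Rule #9 in code: when any cheat command executes,
// every connected client receives a chat-line notice about it.
struct Client { std::vector<std::string> chatLines; };

void BroadcastCheatNotice(std::vector<Client>& clients, const std::string& command)
{
    for (std::size_t i = 0; i < clients.size(); ++i)
        clients[i].chatLines.push_back("NOTICE: cheat command issued: " + command);
}
```

A cheater can't quietly flip a flag, grab a weapon, and flip it back if every player in the game sees a line in the chat log the moment it happens.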

Rule #9: Honest players would love for a game to tip them off to possible cheating. Cheaters want the opposite.

Bugs & Design Issues

Technically, this category of cheats is one that we bring upon ourselves: bugs in our games can be discovered by users and used to disrupt game play. Most bugs don't enable cheating, but a few do.

A good example is the farm-stopping bug in the unpatched version of Age of Empires. When a user had both a villager and a farm selected, he could issue the Stop command. Because the command was valid for a villager, it was allowed to go through, but listed both objects as the target of the command. The villager would stop working as expected and reset its state. The farm would also reset itself, something it never did normally, and replenish its food supply. Once this was discovered by players, it drastically changed the game for them, giving them a huge advantage over those who didn't know about it.

I encountered another bug when playing Half-Life. I would get into a firefight with another player, both of us using the same weapon, but when it came time to reload our weapons, my opponent was able to reload much more quickly than I could. Sure enough, when the next patch came out, I saw in the release notes that a bug allowing fast reloads was fixed. There's really not much we can do about these types of bugs, other than fix them with a patch.

Environmental Weaknesses

My last category of cheats is something of a catchall for exploitable problems a game may have on particular hardware or operating conditions. A good example is the "construction-cancelled" bug that, amazingly, was found in both Age of Empires and Starcraft at about the same time. The element needed to make it work was extreme lag in network communications, to the point of a momentary disconnection. When this happened, the game engines stopped advancing to the next game turn while they waited for communications to resume. During this time, the user interface still functioned, so the player didn't think the game had locked up. While the game was in this state, a player could issue a command to cancel construction of a building, returning its resources to the player's inventory - only the player would issue the command over and over as many times as possible. Normally, a player could only issue one Cancel command per turn, but because the game simulation was in a holding state, multiple command requests went into the queue. Because of some necessities of RTS engine design, when an object is destroyed during a turn by something such as a Cancel command, the destruction is postponed until after all the commands for that turn have been processed. The result was that the Cancel command executed multiple times during one game update, refunding the building's resources each time.

Once discovered, this had a horrible impact on online games. People deliberately caused massive lags to take advantage of the cheat. To fix it in Age of Empires, we had to update the validation checks to see if a similar request was already pending on the current turn and reject duplicates.
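The validation fix just described can be sketched roughly as follows. These are illustrative names, not actual Age of Empires source - the idea is simply to remember which Cancel requests are already pending this turn and reject any repeats:

```cpp
#include <cassert>
#include <set>
#include <utility>

// Hedged sketch of duplicate-command rejection: each (player, building)
// Cancel request is recorded for the current turn; an identical request
// arriving again on the same turn is refused.
class CommandValidator {
    std::set<std::pair<int, int>> pendingCancels;   // (playerID, buildingID)
public:
    // Returns true if the command is accepted, false if it is a duplicate.
    bool ValidateCancel(int playerID, int buildingID)
    {
        // set::insert reports via .second whether the pair was newly added
        return pendingCancels.insert(std::make_pair(playerID, buildingID)).second;
    }
    void EndTurn() { pendingCancels.clear(); }      // a new turn starts clean
};
```

Clearing the pending set at the end of each turn keeps legitimate play unaffected: a single Cancel per turn still works exactly as before.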

Another bug of this type involved the game Firestorm and its interaction with the Windows clipboard. It seems a clever user found out that if he pasted text from the clipboard into his chats, and that text contained a certain character not normally used, the game would crash when it attempted to print it to the screen - on all players' machines. He then treated this knowledge as a personal nuclear bomb that he could spring on people when he found himself losing.

Yet another example taken from Age of Empires is what happens when a player's network connection is overloaded or ping-flooded by another player. When such an attack renders a game unable to communicate with its peers, the other players decide that something is wrong with that player and drop him from the game - a totally necessary capability, but one that can be exploited in a modern twist on scattering all the pieces on a game board when you are losing. This was one of the major reasons we added Multiplayer Save and Restore capabilities to Age of Empires II.

Some Final Thoughts

I hope these examples got you thinking about some of the problems and issues at stake when developers address the problem of online cheating. We certainly have a lot more ground to cover, from massively multiplayer games, open source, and consoles, to enabling the online communities to better police the situation. But we're out of space and time for now.

Matt Pritchard is busy trying to be a modern renaissance man. When not working hard on his latest game, he can be found spending time with his family or collecting antique videogames. Send e-mail to [email protected].
