Good AI has often been defined as "hard to beat", but I believe that a better definition is "interesting to engage". This leads to the counter-intuitive notion that good AI should be predictable.
[This blog post is adapted from a TIGSource forums devlog for my game, The Heist Guild. I have edited it to make it less specifically about my game.]
I believe that, for much of gaming's history, approaches to AI design have been misguided. The first misstep is that AI is often thought of as existing to pose a challenge to the player, and while that is an aspect of what AI does, focusing exclusively on it has led to poor design decisions. I believe that the purpose of AI is to engage the player, not simply to challenge. We have defined good AI as "difficult to defeat", when I believe a better definition is "interesting to deal with".
The second misstep is that AI is often thought of as existing outside the gamespace, as the player does. This is mostly the case for oppositional AI; non-combatant NPCs are generally thought of as belonging to the game in the same way that any environment object does. But, fundamentally, unless we're talking about a game in which AI is literally a stand-in for a human multiplayer opponent, AI is a part of the game environment, not something separate. From a design perspective, there isn't a meaningful difference between a spike pit and an enemy NPC, except that the rules governing the latter's behaviour are more complex.
Ultimately, AI is a tool, the primary use of which is to drive decision making and increase possibility space.
To take a basic example from Mario: the bottomless pit is a tool used to force the player to use skill in where to jump. The goomba presents a moving obstacle, forcing the player to use skill in when to jump. The goomba does not have goals in any true sense; there is no win state for the goomba. Its purpose is to increase the number of ways in which the player must be able to interact with the environment.
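To make the spike-pit comparison concrete, here is a minimal sketch of the idea. All names and numbers are hypothetical, purely for illustration, not taken from any actual game's code: a static hazard and a goomba-style moving hazard can share one interface, with the "AI" amounting to nothing more than a slightly more complex update rule. Neither one pursues a goal.

```python
# Sketch: a spike pit and a goomba are both just hazards with rules.
# The goomba's rule happens to involve movement; that's the only difference.

class Hazard:
    def update(self, dt: float) -> None:
        ...  # most hazards are static and do nothing here

    def overlaps(self, player_x: float, player_y: float) -> bool:
        raise NotImplementedError


class SpikePit(Hazard):
    def __init__(self, x: float, width: float):
        self.x, self.width = x, width

    def overlaps(self, player_x, player_y):
        # A fixed region of "don't be here": skill is in WHERE to jump.
        return self.x <= player_x <= self.x + self.width and player_y <= 0


class Goomba(Hazard):
    def __init__(self, x: float, speed: float = 30.0):
        self.x, self.speed = x, speed

    def update(self, dt):
        # The entire "AI": walk in a straight line, turn around at the edges.
        self.x += self.speed * dt
        if self.x < 0 or self.x > 200:
            self.speed = -self.speed

    def overlaps(self, player_x, player_y):
        # A moving region of "don't be here": skill is in WHEN to jump.
        return abs(player_x - self.x) < 8 and player_y <= 8
```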
Increasing the complexity from Mario to a stealth game, for example, is a difference of scale, not of kind. Even though the AI has much more sophisticated decision-making systems, it has no true goals in the way the player does; even if it catches the player, that isn't a win state for the AI because the AI is not working towards any kind of goal. The AI exists to increase the number of interesting ways in which the environment can be interacted with.
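A sketch of that scale-not-kind point, under the same hypothetical framing as above: a typical stealth guard can be expressed as a small finite state machine whose transitions are stimulus-driven rules, just like the goomba's turn-at-the-edge rule, only with more branches.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    INVESTIGATE = auto()
    CHASE = auto()

class Guard:
    def __init__(self, patrol_route):
        self.state = State.PATROL
        self.patrol_route = patrol_route
        self.last_known_pos = None

    def on_noise(self, position):
        # Stimulus -> rule. More branches than a goomba,
        # but there is still no goal being pursued.
        if self.state == State.PATROL:
            self.state = State.INVESTIGATE
            self.last_known_pos = position

    def on_player_seen(self, position):
        self.state = State.CHASE
        self.last_known_pos = position

    def on_search_exhausted(self):
        # "Catching" the player is not a win state; losing them
        # simply routes the guard back into its patrol loop.
        self.state = State.PATROL
        self.last_known_pos = None
```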
This focus on "interesting" being the primary virtue of AI design brings me to what will probably be my most controversial point: I believe that, in general, good AI is predictable. Good AI will do basically what the player expects it to. If this seems bizarre, it's because we're so used to thinking of good AI as being hard to beat, and predictability seems antithetical to that. My reasons for advancing this idea are as follows:
1) Predictable AI does not mean easy AI. To bring us back to the Mario example, the enemies in Mario are all 100% predictable. Despite this, the game is not trivially easy to beat, because there is more to a game's difficulty than simply outwitting the AI. You could counter that Mario is a bad example because it is also an action game, where skill is also about reflexes, whereas a strategy game is much more about outwitting the AI. However, just because you can predict what the AI will do next, that does not mean you will always be able to summon up a perfect response to that action. The player often does not have perfect knowledge of a given situation, leaving room for mistakes to be made.
2) Predictable AI does not incentivize conservative play in the way that unpredictable AI does. If the AI's response to an action is largely unpredictable, it is a risk the player must account for, and situations of risk incentivize conservative play. Conservative play here is defined as play in which the player is very cautious and deliberate with their actions, ensuring as small a window for failure as possible. Bluntly, conservative play is boring to most players, but players will almost always adopt the playstyle most likely to ensure success. If the player does not need to be conservative to account for the variable response of the AI, then a more aggressive playstyle becomes viable. Stealth games very often have designs which incentivize conservative play, and the two are often conflated, but this need not be the case. It is simply good design to incentivize playstyles which are the most fun, or at the very least, to avoid incentivizing playstyles which are unfun.
3) AI predictability increases possibility space. If the AI's actions can be predicted, then they can be manipulated. Tricking the AI, then, becomes a tool in the player's kit, and a very gratifying one to use successfully. It makes the player feel intelligent to manipulate the AI, assuming that the AI has already been established as intelligent. This is where predictable AI can become a weakness: if the player is able to manipulate the AI trivially (putting a bucket over its head in Skyrim), then manipulation doesn't make the player feel smart; it makes the AI look stupid. Thus, a design consideration must be the main ways in which the AI can be manipulated. It is not enough to make the AI predictable; the designer must anticipate (and discover through playtesting) the ways in which the player can use this predictability, and compensate for them so that the AI does not come off as foolish. For example, if the AI can be tricked by the player throwing a noise-making object to distract them, the player should not be able to do this as many times as they want, to the same enemy, with identical effect (see the sketch after this list).
4) Predictable AI actually seems smarter. Humans intuitively understand intelligence as "correct responses to stimuli". The dog who fails to go after a thrown tennis ball is stupid. The driver who fails to stop at the red light is stupid. The person unable to read words on a page is stupid. While all of these can have other explanations beyond lack of "intelligence", that is the first assumption, because the action represents a failed response to stimuli. When AI predictably react to stimuli, they appear intelligent, because their actions appear to be the result of thought processes rather than random chance. Even if there are complex reasons under the hood for the AI making the decisions it does, the reasoning should be clear at first glance, because that's about how long the player generally has to comprehend why the AI did what it did. If the AI responds to a noise by running away when it would normally investigate, then that AI should, beforehand, have in some way communicated cowardice or fear (this, too, appears in the sketch below). When AI perform actions which make sense for their character and their situation, they appear more intelligent and the game more immersive.
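Since points 3 and 4 both come down to concrete behaviour rules, here is one combined sketch. As before, every name and threshold is hypothetical; the point is only the shape of the logic: a distraction trick degrades on repetition, and a trait like cowardice, signaled in advance, changes the reaction without breaking predictability.

```python
class GuardAI:
    """Sketch of points 3 and 4: rules the player can read and
    manipulate, with a safeguard against trivial exploitation."""

    def __init__(self, is_coward: bool = False):
        self.is_coward = is_coward    # signaled beforehand via barks/animation
        self.times_distracted = 0     # habituation counter

    def hear_noise(self, noise_position):
        if self.is_coward:
            # Point 4: a different reaction, but still a predictable one,
            # because the trait was communicated before the noise happened.
            return ("flee_from", noise_position)

        # Point 3: the first noise works exactly as the player expects;
        # repeats become less reliable so the trick can't be spammed.
        self.times_distracted += 1
        if self.times_distracted == 1:
            return ("investigate", noise_position)
        if self.times_distracted == 2:
            return ("glance_at", noise_position)  # suspicious, holds post
        # Third time onward: the guard is wise to it and raises alertness.
        return ("raise_alert", noise_position)
```

The habituation counter is the important part: the first throw behaves exactly as predicted, rewarding the manipulation, while the repeats keep the guard from looking like the bucket-headed shopkeeper.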
Now, these principles don't necessarily apply to AI which is a substitute for a human player. Those AI should still be governed by logical processes, as humans are, but it's not as imperative that the logic behind their actions is communicated, especially since in some cases this could hamper their utility as a human substitute. The logic behind a Starcraft player's decisions will not always be totally obvious just from the way they play the game, nor would a Call of Duty player's. Ultimately, video game AI is an incredibly broad topic, and the important thing is to remember that it is broad: the same solutions do not make sense in every situation.