The implementation of artificial intelligence (AI) in games is certainly nothing new, but the art has matured greatly over the last decades. In this article, we will aim to take the best of modern ideas and fit them to the NES, a 1980s game system featuring an 8-bit CPU running at 1.6 MHz and 2 KB of RAM.
Concretely, we will discuss the new AI implementation in Super Tilt Bro, an NES homebrew in which two players fight on a platform, akin to Super Smash Bros. We will first describe and analyze the former AI implementation, then take inspiration from a modern concept, and finally adapt it to our system and game.
In this two-player game, the AI has only one purpose: to replace the second player when no human is at hand. This really matters: players tend to try the game alone before calling a friend, and hitting a dummy is not that fun.
The game follows a very simple main loop. Each frame, it captures the gamepads' state, then updates the game accordingly. The AI is simply hooked between these two steps, allowing it to override the buttons pressed by player two.
The AI's place in the main loop
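For illustration, here is a rough C sketch of that hook; the real game is written in 6502 assembly, and the names read_gamepads, ai_tick, and update_game are invented for the example:

#include <stdint.h>

static uint8_t gamepad_state[2];   /* one byte of button bits per player */

/* Stubs standing in for the real routines, which are 6502 assembly. */
static void read_gamepads(void) { /* read the controller hardware here */ }
static uint8_t ai_tick(void) { return 0; /* buttons the AI wants pressed */ }
static void update_game(void) { /* advance the game from gamepad_state */ }

/* One frame of the main loop: the AI sits between input capture and the
   game update, so it can only act by overriding player two's buttons. */
static void run_one_frame(void)
{
    read_gamepads();
    gamepad_state[1] = ai_tick();   /* override player two */
    update_game();
}

int main(void)
{
    run_one_frame();                /* a single frame, for the sketch */
    return 0;
}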
Hooking the AI code there has its pros and cons. On the pro side, it is independent from the game code: reworking the AI cannot introduce bugs into the game, and impossible moves cannot occur even if the AI is buggy. On the con side, the AI must express moves as button presses, which is more complex than changing bits in the game state directly, and rewriting the gamepad state feels like a hack. Anyway, this article is not about the integration of the AI code but about its implementation; it is just important to know how it is hooked in. Let's go with it.
The former implementation was super simple. Each frame, if an attack can hit the opponent, the AI presses the buttons associated with that attack. If no attack can hit, it simply presses the D-Pad in the opponent's direction.
if an attack can hit:
    press attack's keys
else:
    press D-Pad in opponent's direction
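To make this concrete, here is a deliberately simplified C sketch of one frame of that former AI; the button bits, positions, and attack range are invented for the example, and the real code considers several attacks, including survival moves:

#include <stdint.h>
#include <stdlib.h>

/* Invented button bits and game state for the sketch. */
enum { BTN_B = 0x40, BTN_LEFT = 0x02, BTN_RIGHT = 0x01 };
static int16_t bot_x = 100;              /* AI character's horizontal position */
static int16_t opponent_x = 200;         /* opponent's horizontal position */
static const int16_t ATTACK_RANGE = 24;  /* made-up reach of the attack */

/* One frame of the former AI: attack if in range, otherwise walk closer. */
static uint8_t former_ai_tick(void)
{
    if (abs(opponent_x - bot_x) <= ATTACK_RANGE)
        return BTN_B;                                /* press the attack button */
    return (opponent_x > bot_x) ? BTN_RIGHT : BTN_LEFT;  /* D-Pad toward opponent */
}

int main(void)
{
    return former_ai_tick() == BTN_RIGHT ? 0 : 1;    /* expects "chase right" */
}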
While this code is minimalist, it is surprisingly effective. The AI's character runs toward its opponent and hits them at any opportunity. It can even pull off pretty good moves in perilous situations, since it also tries to hit the player with survival moves. Finally, it uses very little CPU time, which is always nice.
While it is impressive for 70 lines of assembly, this implementation has some severe limitations. The three deal-breakers are listed below.
The first big problem is that it is overly aggressive. The CPU opponent is aimed at players who are discovering the game. You do not want to be harassed permanently by a zealous bot while trying to figure out the controls.
Secondly, it is easily exploitable. While full of emergent behaviors, it is still one of the dumbest bots in the universe, and some very easy tricks reveal it. For example, by standing still and repeatedly pressing the attack button, you can make the AI's character run continuously into your attack.
Finally, there is no room for a programmer to improve it. It is not simple to fix an exploitable behavior, it cannot be adapted to environmental hazards, and implementing difficulty levels would be a nightmare. The AI does not even know what it is doing, so making it do it better is kind of nonsensical.
In short, the AI is too strong for new players, too boring for experienced ones, and not flexible enough for programmers.
The behavior tree is an awesome tool for structuring decision-making in a game's AI. A behavior tree is an acyclic graph (yes, a tree) in which each node can be executed and returns SUCCESS, FAIL, or RUNNING. Executing the root node propagates the execution to its children according to each node's rules. Here is an example of a behavior tree for a man who wants to keep sleeping on Monday morning:
Wanna sleep more!
In this tree, there are three kinds of nodes. Selector nodes, notated “?”, return SUCCESS if any of their children succeeds. Sequence nodes, notated “->”, return SUCCESS if all of their children succeed. Leaves actually do something, be it checking the environment or interacting with it.
A behavior tree is meant to be executed each frame. So, with the tree above, each frame the man tries to go back to sleep; if that is impossible he handles the alarm; if that is impossible he avoids being woken up by his cat; and finally he wakes up if there is no other possibility.
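As a generic illustration of these execution rules (not code from the game), a minimal tick-based behavior tree engine could look like this in C:

#include <stddef.h>

typedef enum { FAIL, SUCCESS, RUNNING } Status;

typedef struct Node Node;
struct Node {
    Status (*tick)(Node *self);   /* how to execute this node */
    Node **children;              /* NULL-terminated list; leaves have none */
};

/* Selector ("?"): succeeds as soon as one of its children succeeds. */
static Status tick_selector(Node *self)
{
    for (Node **c = self->children; *c != NULL; ++c) {
        Status s = (*c)->tick(*c);
        if (s != FAIL)
            return s;             /* SUCCESS or RUNNING stops the scan */
    }
    return FAIL;
}

/* Sequence ("->"): succeeds only if all of its children succeed. */
static Status tick_sequence(Node *self)
{
    for (Node **c = self->children; *c != NULL; ++c) {
        Status s = (*c)->tick(*c);
        if (s != SUCCESS)
            return s;             /* FAIL or RUNNING stops the scan */
    }
    return SUCCESS;
}

/* Leaves check or act on the world; these stubs return a fixed status. */
static Status leaf_fail(Node *self)    { (void)self; return FAIL; }
static Status leaf_success(Node *self) { (void)self; return SUCCESS; }

int main(void)
{
    /* A tiny version of the Monday-morning tree. */
    Node try_sleeping  = { leaf_fail, NULL };     /* too noisy to sleep */
    Node alarm_ringing = { leaf_fail, NULL };     /* the alarm is silent */
    Node snooze_it     = { leaf_success, NULL };
    Node *alarm_children[] = { &alarm_ringing, &snooze_it, NULL };
    Node handle_alarm  = { tick_sequence, alarm_children };
    Node wake_up       = { leaf_success, NULL };
    Node *root_children[] = { &try_sleeping, &handle_alarm, &wake_up, NULL };
    Node root = { tick_selector, root_children };
    return root.tick(&root) == SUCCESS ? 0 : 1;   /* ends up waking up */
}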
Behavior trees are covered in depth on the internet, but here are some of their qualities. They are executed by ticks, like most things in games, while expressing logic for actions that take time. The failure of any action is handled, no matter the cause, which avoids an AI stalled by unexpected circumstances. They are a visual language, accessible to non-programmers and useful for communicating about the AI's logic. Finally, a behavior tree is entirely data, so it can be swapped on demand at run time.
That is a lot of great properties; it is no surprise that behavior trees became super popular. So, why not implement one in our fighting game? There are two reasons: RAM and CPU.
With 2 KB of memory in the NES, everything is on a budget. It is possible for a behavior tree implementation to use little RAM, but it needs a stack to store the state of each node being executed. A node can execute its children, which also have children, and so on: it is inherently recursive. The problem is guessing a hard limit for this stack's size, since the engine sets the limit while the actual usage is defined by data.
With a 1.6 MHz processor, we do not have tons of cycles to spare per frame for AI; there is also the actual game to run at the same time. Expressing a complex condition in a behavior tree requires many nodes, burning CPU cycles to push and pop states from a stack, all for something that can be trivially expressed in raw assembly.
Some strengths of behavior trees are particularly interesting to us. We want to split the logic into goal-oriented subtasks: it is easier to think about what to do when we know what we want to achieve, so we can develop smart reactions to specific cases. We also want to be able to adapt the logic to the context; a future stage may have a specific hazard that the bot must try to avoid. All this without a stack-based solution, and while retaining the power of assembly language.
On the other hand, there are properties we can compromise on. The whole logic does not have to be defined in data; as long as each unit of code handles a specific purpose, it is all right. It is also OK that a developer wanting to modify the AI has to know how to code: we do not need the visual aspect of behavior trees.
Here is the solution adopted in the latest version of the game. It is based on two concepts: actions and selectors. An action is a series of key presses that results in a move. A selector is a subroutine (assembly's equivalent of a function) that may or may not select an appropriate action.
The double jump action, six awesome bytes
Each tick, the AI engine runs its active action. If it has no active action, it executes selectors until one returns an action, which then becomes active. There are three implemented selectors: attack, recover, and chase. The attack selector chooses an attack action that can hit the opponent, or passes when out of range. The recover selector tries to go back on stage when at risk of falling. The chase selector always finds an action that moves in the opponent's direction.
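Here is a rough C sketch of that engine; the actual implementation is 6502 assembly, and the action encoding, button bits, and selector bodies below are simplified placeholders:

#include <stddef.h>
#include <stdint.h>

/* One possible encoding for an action: pairs of (buttons, number of frames),
   terminated by an entry with 0 frames. The real data layout may differ. */
typedef struct { uint8_t buttons; uint8_t frames; } ActionStep;

/* Invented button bits for the sketch. */
enum { BTN_A = 0x01, BTN_RIGHT = 0x02 };

static const ActionStep action_chase_right[] = {
    { BTN_RIGHT, 10 },               /* hold right for ten frames */
    { 0, 0 }                         /* end of the action */
};

/* A selector inspects the game state and returns an action, or NULL to pass. */
typedef const ActionStep *(*Selector)(void);

/* Placeholder selectors; the real ones look at positions, hitboxes and stage
   edges. They are tried in this order: attack, recover, chase. */
static const ActionStep *select_attack(void)  { return NULL; /* out of range */ }
static const ActionStep *select_recover(void) { return NULL; /* safely on stage */ }
static const ActionStep *select_chase(void)   { return action_chase_right; }

static const Selector selectors[] = { select_attack, select_recover, select_chase };

/* The whole AI state has a fixed size: the active action plus counters. */
static const ActionStep *active_action = NULL;
static uint8_t step_index = 0;
static uint8_t frames_left = 0;

/* Called once per frame; returns the buttons to press for player two. */
static uint8_t ai_tick(void)
{
    if (active_action == NULL) {
        for (size_t i = 0; i < sizeof(selectors) / sizeof(selectors[0]); ++i) {
            active_action = selectors[i]();
            if (active_action != NULL)
                break;
        }
        step_index = 0;
        frames_left = (active_action != NULL) ? active_action[0].frames : 0;
    }
    if (active_action == NULL)
        return 0;                    /* every selector passed: do nothing */

    uint8_t buttons = active_action[step_index].buttons;
    if (--frames_left == 0) {
        ++step_index;
        frames_left = active_action[step_index].frames;
        if (frames_left == 0)
            active_action = NULL;    /* action finished */
    }
    return buttons;
}

int main(void)
{
    return ai_tick() == BTN_RIGHT ? 0 : 1;   /* first frame: chase to the right */
}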
This implementation is somewhat equivalent to a very small behavior tree with only a selector node as root and leaves that can return RUNNING or FAIL, with RUNNING implying a future SUCCESS. In the default AI, all three selectors are present, in the order described above.
The default AI
By compromising on some features of behavior trees, we were able to achieve our goals at a very low cost. We can implement difficulty levels by having selectors sometimes return a "do nothing" action instead of an aggressive option, effectively slowing down the bot's reactions. We can fix an exploitable behavior in one selector without impacting the others. Finally, we can adapt the AI to specific conditions by changing the list of selectors. All this with a state of fixed size: the current action and a counter to know where we are in it.
Thanks for reading. To go further, you can now play against Super Tilt Bro's new AI or read the source.