
Thinking Out of the Box

How to improve an AI's navigation and decision making by implementing neural networks and creative actions.

Gabriel Lievano, Blogger

June 30, 2009


An Artificial Intelligence's navigation system today will probably consist of a system of path nodes that hold data on what a bot can or can't do between them.  Although this approach has proven efficient and produces good results, the AI's behavior ends up too rigid: it cannot get creative in situations where a more human response to certain events or conditions would be possible.  One may think that adding more intelligence would mean more expensive processing, but here I introduce a way to give bots some creativity without resorting too heavily to scripting.
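As a rough illustration of the kind of data such a path node might carry, here is a minimal sketch in C++; the structure and field names are my own assumptions, not taken from any particular engine:

#include <vector>

// Hypothetical path node in a waypoint-based navigation graph.
struct PathLink {
    int   targetNodeId;  // index of the connected node
    float distance;      // traversal cost along this link
    bool  canWalk;       // the bot may simply walk this link
    bool  canJump;       // the link requires or allows a jump
    bool  canCrouch;     // the link requires crouching
};

struct PathNode {
    float x, y, z;               // world position of the node
    std::vector<PathLink> links; // what a bot can or can't do between nodes
};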

The concept behind creativity involves learning, a good perception system, and a good way to associate what is happening (through perception) with past experiences (learning).  I have found that learning and association can be handled by a neural network without a lot of processing.  It is an approach that does not implement all the characteristics of a full-featured Neural Network, but it serves well in helping the AI learn from the tactics the player uses and avoid random actions in favor of more "intelligent" thinking.

The Neural Network I describe for an AI consists of only two layers, which makes every decision made by the computer very straightforward.  The first layer is a perception layer whose neurons activate when a certain perception is obtained, and the second layer is the decision layer.  Each neuron in the NN consists of a threshold and a set of weights, one for each neuron in the previous layer.
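A minimal sketch of such a two-layer network in C++ might look like the following; it assumes the perception layer simply produces a 0/1 activation per perception, and the type and function names are illustrative, not from the original article:

#include <cstddef>
#include <vector>

// One decision neuron: a threshold plus one weight per perception neuron.
struct DecisionNeuron {
    float threshold;
    std::vector<float> weights; // sized to the number of perception neurons
};

struct TwoLayerNN {
    std::vector<DecisionNeuron> decisions;

    // perceptions[i] is 1.0f if perception i is currently active, 0.0f otherwise.
    // Returns the index of the decision whose weighted sum most exceeds its
    // threshold, or -1 if no decision neuron fires.
    int Evaluate(const std::vector<float>& perceptions) const {
        int   best      = -1;
        float bestScore = 0.0f;
        for (std::size_t d = 0; d < decisions.size(); ++d) {
            float sum = 0.0f;
            for (std::size_t p = 0; p < perceptions.size(); ++p)
                sum += decisions[d].weights[p] * perceptions[p];
            float score = sum - decisions[d].threshold;
            if (score > bestScore) {
                bestScore = score;
                best      = static_cast<int>(d);
            }
        }
        return best;
    }
};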

Now, the unconventional thing about this method is that the Neural Networks won't be attached to each AI controller.  Instead, they will be implemented on specific path nodes and triggered by the AI's objective.  This way learning occurs collectively among all the AI controllers, and it also saves some processing by restricting which neurons can fire.  The weights of each neuron would be adjusted by a constant value depending on the success of each action.  This adjustment could also be variable, scaled by the degree of success or failure.  After that, the only thing left to do is to give the AI a creative set of actions to perform at each path node.
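As a sketch of that reinforcement step, building on the two-layer structure above and assuming that only the neuron which produced the chosen decision is adjusted, the update could look like this (the step value and function name are hypothetical):

// Nudge the weights of the decision neuron that fired, but only for the
// perceptions that were active, by a constant step up or down depending on
// whether the resulting action succeeded.
void Reinforce(TwoLayerNN& nn,
               int chosenDecision,
               const std::vector<float>& perceptions,
               bool succeeded,
               float step = 0.05f) // the constant adjustment value
{
    if (chosenDecision < 0) return;
    float delta = succeeded ? step : -step; // could instead scale by degree of success or failure
    DecisionNeuron& neuron = nn.decisions[chosenDecision];
    for (std::size_t p = 0; p < perceptions.size(); ++p)
        neuron.weights[p] += delta * perceptions[p]; // inactive perceptions are untouched
}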

Because each NN is attached to specific path nodes, one can adopt less common actions that the AI could use to approach its target.  For example, the AI could choose a longer path in order to avoid doing what its enemy is expecting.  This is especially useful in FPS, action/adventure, and stealth action games, where the AI's usual actions are easy to predict, which reduces the opportunity to add difficulty to the game.  Although this method works best for FPS and stealth games, I have also tested it in fighting games, where it has proven useful as well (though in that case the NN is better implemented on the AI controller than on path nodes).
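To make the idea of node-specific creative actions concrete, here is one possible mapping from the network's decision index to an action set attached to a path node; the action names and the default fallback are assumptions for illustration only:

// Each path node carries a small set of "creative" actions; the decision
// index returned by the node's network selects one of them.
enum class NodeAction { DirectRoute, FlankingRoute, WaitInAmbush };

NodeAction ChooseAction(const TwoLayerNN& nodeNet,
                        const std::vector<float>& perceptions)
{
    int d = nodeNet.Evaluate(perceptions);
    if (d < 0) return NodeAction::DirectRoute; // nothing fired: fall back to the expected behavior
    return static_cast<NodeAction>(d);         // assumes decision order matches the node's action list
}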
