Woodcock examines the state of AI in the game industry using input from the 1999 Game Developers Conference AI roundtable discussions. See what other AI experts are saying and find out what's on the horizon.
It's been nearly a year since my first article outlining the then-current trends in game AI ("Game AI: The State of the Industry," October 1998). Since that time, another Christmas season's worth of releases has come and gone, and another Game Developers Conference (GDC) has given AI developers an opportunity to exchange ideas. While polls taken at the 1999 GDC indicate that most developers (myself included) felt the last year saw incremental rather than revolutionary advances in the field of game AI, enough interesting developments have taken place to make an update to my previous article seem natural.
I'm very pleased to say that good game AI is growing in importance within the industry, with both developers and marketers seeing the value in building better and more capable computer opponents. Fears that multiplayer options in games would make good computer AIs obsolete appear to have blown over in the face of one very practical consideration — sometimes, you just don't have time to play with anybody else. The incredible pace of development in 3D graphics cards and game engines has made awesome graphics an expected feature, not an added one. Developers have found that one discriminator in a crowded marketplace is good computer AI.
As with last year's article, many of the insights presented here flow directly from the AI roundtable discussions at the 1999 GDC. This interaction with my fellow developers has proven invaluable in the past, and the 1999 AI roundtables were every bit as useful for gaining insight into what other developers are doing, the problems they're facing, and where they're going. I'll touch on some of the topics and concerns broached by developers at the 1999 roundtables. I'll also discuss which AI techniques and developments seem to be gaining favor among developers, the academic world's take on the game AI field, and where some developers think game AI will be headed in the coming year or two.
Is The Resource Battle Over?
Last year there were signs that development teams were beginning to take game AI much more seriously than they had in the past. Developers were getting involved in the design of the AI earlier in the design cycle, and many projects were beginning to dedicate one or more programmers exclusively to AI development. Polls from the AI roundtables showed a substantial increase in the number of developers devoted exclusively to AI programming (see Figure 1).
Figure 1. Resources dedicated to AI development.
It was very apparent at the 1999 GDC that this trend has continued at a healthy clip, with 60 percent of the attendees at my roundtables reporting that their projects included one or more dedicated AI programmers. This number is up from approximately 24 percent in 1997 and 46 percent in 1998 and shows a growing desire on the part of development houses to make AI a more important part of their game design. If the trend continues, we'll see dedicated AI developers become as routine as dedicated 3D engine or sound developers.
AI specialists continue to be a viable alternative for many companies that lack internal resources to dedicate developers exclusively to AI development. Several developers and producers present at the roundtables indicated that they had used independent contractors to handle the AI portions of their projects with varying degrees of success. The primary complaints about using contract help were perhaps the universal ones — you never really know what you're getting, and maintaining good communication is, at best, a chore.
The most interesting comments, however, concerned CPU resources available to the AI developers (Figure 1). None of the developers answering the poll questions regarding CPU resources felt that they had too little CPU available. Everybody felt they could use more if they had it, but nobody said they were having to fight tooth and nail for resources as they had in the past. This is an amazing turn of events, in stark contrast to previous years when AI developers complained often and bitterly of fighting the graphics engine guys for CPU cycles. The overall percentage of CPU cycles most developers felt they were getting didn't really change, but developers were feeling much less pinched than they had been in the past. When asked why this was the case, developers offered a variety of theories. Most felt it was, quite simply, that faster hardware is now standard on both PCs and consoles — 5 percent of a 400MHz Pentium III is a heck of a lot more horsepower than 5 percent of a 200MHz Pentium. Others thought that the availability of faster 3D hardware, combined with the greater expertise of the 3D engine manufacturers, had simply made 3D engines more efficient than they had been and thus freed up more CPU resources for other tasks. Whatever the reasons, everybody was happy about it, and they thought it would only get better as hardware got faster.
The one great problem mentioned by all was the impending-shipping-date-syndrome. Christmas hasn't moved from its place as an almost magical date for targeting new releases, and the increasing complexity of games in general hasn't made meeting deadlines any easier. While there are more programmers dedicated exclusively to the AI portion of game development now than there had been in the past, most developers felt that the task itself had become more difficult.
Part of the reason for this, of course, is the increasing importance of game AI itself — having made the case that good game AI is important in increasing the odds of a game's success, developers must now actually deliver better game AI. Quite simply, that takes time. When coupled with the fact that most AI testing can't really begin until substantial portions of the game's engine are up and running, you've got a situation wherein dedicated AI developers find themselves making compromises in the face of impending shipping dates.
Some developers also professed that part of the problem was the advances made in competing products. For example, after one real-time strategy (RTS) game introduced production queues, players started looking for all RTS games to do the same, and that means additional AI development for handling such things. There is also a desire on the part of most developers to avoid doing the "same old thing" in a new release.
Technologies in the Limelight
Exploring the AI technologies used by other developers in games has been a popular topic at past CGDC roundtables. Developers are increasingly turning to military and academic sources for new ideas and technologies (and those disciplines are turning their eyes on the game industry as well). Discussions with developers at the roundtables and at demo booths in the exposition hall yielded some interesting information about what technologies are in use today.
Rules-Based AI. Rules-based approaches to game AI, chiefly the Finite State Machine (FSM) and the Fuzzy State Machine (FuSM), continue to lead the pack as the most popular technologies among AI developers. The reasons for this remain the same (a minimal FSM sketch follows the list):
They're familiar to the developer, building on principles that are already well understood.
They're easy to test against, making it simpler for developers to "customize" behavior in various ways, if necessary (something that happens in more games than one might think).
They carry far less risk than more exotic AI technologies, such as neural networks and genetic algorithms, which remain harder to predict and tune.
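To make those bullet points concrete, here is a minimal FSM sketch in C++ for a single guard monster. The states, thresholds, and transitions are all invented for illustration; this is the shape of the technique, not code from any shipping game.

#include <cstdio>

// Hypothetical guard monster states -- invented for illustration.
enum GuardState { PATROL, ATTACK, FLEE };

struct Guard {
    GuardState state;
    int health;
};

// One FSM update: transitions are simple rules keyed off the world state.
void UpdateGuard(Guard &g, bool enemyVisible)
{
    switch (g.state) {
    case PATROL:
        if (enemyVisible) g.state = ATTACK;        // spotted the player
        break;
    case ATTACK:
        if (g.health < 25)      g.state = FLEE;    // badly wounded: run and hide
        else if (!enemyVisible) g.state = PATROL;  // lost the player
        break;
    case FLEE:
        if (!enemyVisible) g.state = PATROL;       // safe again, resume patrol
        break;
    }
}

int main()
{
    Guard g = { PATROL, 100 };
    UpdateGuard(g, true);    // sees the player: PATROL -> ATTACK
    g.health = 10;
    UpdateGuard(g, true);    // badly wounded: ATTACK -> FLEE
    printf("state: %d\n", (int)g.state);
    return 0;
}

Games get their apparent depth by layering several machines of this kind, as Unreal does (more on that below).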
While every game shipped in the past year makes use of rules-based AIs to one degree or another, a couple stood out in developers' minds as particularly interesting implementations. One of these was Epic Games' Unreal, a first-person 3D shooter that provided some excellent examples of the complexity of behaviors available using FSMs and FuSMs. Taking the advances of Valve Software's Half-Life one step further, the AI in Unreal makes heavy use of FSMs to control the behavior of the player's opponents to an often amazingly realistic degree. At higher levels, there is evidence of considerable intelligence on the part of monsters, which run away, hide when wounded, summon reinforcements, and can even lead the player into ambushes when possible. Herds of miscellaneous critters scuttle about the game levels using a fairly nice flocking algorithm, adding to the overall effect of the living world into which the player has been thrust. All of this was done by the developers by layering FSMs, which were built on top of an extensible scripting system called UnrealScript (more on that below).
Valve Software's Half-Life.
Epic Games' Unreal.
Another game making heavy use of FuSMs is Activision's Call to Power. Billed as using "over-arching potentialities" to guide its strategic thinking, Call to Power's AI actually makes heavy use of cascaded FuSMs throughout its design. The primary reason for this was straightforward enough: a number of different civilization personalities had to be accommodated in the design in order to reflect the differing governmental and militaristic bents of the various civilizations portrayed in the game. If the developers had used a strictly rules-based design to accomplish this, a considerable amount of special code would have been needed to handle each civilization. Using FuSM technology allowed the developers to build one core AI engine whose various decision-making thresholds could be modified by each civilization's unique personality and philosophical leanings.
This approach allows the game to accommodate a variety of different playing styles and technological research trees without bogging down the design in too many special cases. Every decision that a given civilization can make is based partly on the strategic situation, partly on that civilization's personality, and partly on the decisions it has made previously. Anytime something isn't terribly obvious, or not covered by a specific rule of some kind, the AI uses fuzzy logic (in the form of the FuSMs) to make a decision. This, in turn, results in an AI whose decisions are internally consistent and plausible, yet still leaves the chance for a surprise or two.
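As a rough illustration of how one engine can serve many personalities, here is a small C++ sketch in which a fuzzy "war desire" score is scaled by per-civilization personality values. The names, weights, and inputs are all invented; Call to Power's actual thresholds are obviously not public.

#include <cstdio>

// Hypothetical civilization personality -- invented values for illustration.
struct Personality {
    float aggression;   // 0.0 (pacifist) .. 1.0 (warmonger)
    float expansionism; // 0.0 (insular)  .. 1.0 (land-hungry)
};

// Fuzzy "should we declare war?" score: blend the strategic picture with
// the personality instead of hard-coding a rule per civilization.
float WarDesire(const Personality &p, float relativeStrength, float borderFriction)
{
    // Each input contributes a degree of desire in [0,1]; the personality
    // scales how much weight each contribution gets.
    float desire = 0.6f * p.aggression * relativeStrength
                 + 0.4f * p.expansionism * borderFriction;
    return desire > 1.0f ? 1.0f : desire;
}

int main()
{
    Personality warlike  = { 0.9f, 0.7f };
    Personality peaceful = { 0.2f, 0.3f };
    // Same strategic situation, different decisions by personality.
    printf("warlike: %.2f  peaceful: %.2f\n",
           WarDesire(warlike, 0.8f, 0.5f), WarDesire(peaceful, 0.8f, 0.5f));
    return 0;
}

The point is that the decision logic is written once; only the personality numbers change from civilization to civilization.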
Extensible AIs. A number of recently-released games have featured Extensible AIs (ExAIs) in one form or another, building on a trend that began a couple of years ago with the release of Quake. The success of that game's QuakeC scripting language, which permitted players to build their own computer opponents, assistants, and companions (known as "bots") and trade them over the web, has inspired a number of other developers to build similar capabilities into their releases. Several developers at the 1999 roundtables mentioned that they were at least exploring the possibility of ExAIs in their projects.
To date, most ExAIs have cropped up in the first-person 3D shooter genre. Last year's Unreal and Half-Life provided players with interfaces through which they could devise their own rules for computer opponents. However, there were differences in implementation. Unreal went with a general "directive-like" interface through which AI behavior is controlled using relatively simple commands, such as "Move forward until you see an enemy, then throw grenades." Half-Life used a more traditional "programming-like" approach that somewhat resembles Perl or JavaScript. Both approaches have proven extremely popular with players and led to legions of users trading scripts and bots online for games based on both engines.
More recently, however, ExAI technology has been finding its way into other genres of games. Interplay's Baldur's Gate, a role-playing game (RPG) based in part on the Advanced Dungeons & Dragons paper RPG, uses scripts to control non-player characters (NPCs) within the game — including those that can be in the player's own party. These scripts allow players to specify the basic reactions of their NPCs to a variety of combat situations, permitting them to adjust behavior either to accommodate a player's particular style (making mages more cautious than they are by default, for example) or to create entirely new NPC classes. Several aficionados of the game have already seized on this last capability to develop a number of NPC classes not present in the original game, creating thieves, warrior-mage combinations, elven archers, and so on.
Interplay's Baldur's Gate.
The AI scripts themselves are heavily rules-based in the Half-Life vein, operating in a strictly linear fashion from top to bottom within the script. Thus, rules "later" in a given script might or might not ever "fire" depending on the circumstances of the game and whether or not the player overrides any particular NPC action (an option always available). Responses can be weighted to control their probability of occurrence, though there is no provision for being able to modify the internals of the game's AI engine itself. There are some pre-defined, basic strategies available for the player-cum-AI-designer to use, and, of course, the existing NPC scripts are readily available as examples of what can be done. Documentation shipping with the game is necessarily sparse (probably to help avoid too many support hassles), but a few web sites have sprung up on which tinkerers can exchange information.
Listing 1 shows a snippet of a script, which was kindly provided by Baldur's Gate enthusiast Sean Carley for my game AI web page. It's from a warrior AI he developed, and as you can see, the scripting system is very English-like in syntax.
Listing 1. Sample Baldur's Gate AI script.
IF
    // If my nearest enemy is not within 3
    !Range(NearestEnemyOf(Myself),3)
    // and is within 8
    Range(NearestEnemyOf(Myself),8)
THEN
    // 1/3 of the time
    RESPONSE #40
        // Equip my best melee weapon
        EquipMostDamagingMelee()
        // and attack my nearest enemy, checking every 60
        // ticks to make sure he is still the nearest
        AttackReevaluate(NearestEnemyOf(Myself),60)
    // 2/3 of the time
    RESPONSE #80
        // Equip a ranged weapon
        EquipRanged()
        // and attack my nearest enemy, checking every 30
        // ticks to make sure he is still the nearest
        AttackReevaluate(NearestEnemyOf(Myself),30)
END
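The RESPONSE weights above are relative, so #40 and #80 work out to a one-third/two-thirds split. A weighted pick of this sort is simple to implement; here is my own C++ sketch of the idea (not Baldur's Gate's actual code):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Pick one of several weighted responses, in the spirit of the RESPONSE #40 /
// RESPONSE #80 weights in Listing 1 (40:80 = a 1/3 vs. 2/3 split).
int PickResponse(const int *weights, int count)
{
    int total = 0;
    for (int i = 0; i < count; ++i) total += weights[i];
    int roll = rand() % total;            // 0 .. total-1
    for (int i = 0; i < count; ++i) {
        if (roll < weights[i]) return i;  // landed in this response's band
        roll -= weights[i];
    }
    return count - 1;                     // unreachable; keeps compilers happy
}

int main()
{
    srand((unsigned)time(0));
    int weights[] = { 40, 80 };           // melee 1/3 of the time, ranged 2/3
    int melee = 0, trials = 9000;
    for (int i = 0; i < trials; ++i)
        if (PickResponse(weights, 2) == 0) ++melee;
    printf("melee chosen %d of %d times (~1/3 expected)\n", melee, trials);
    return 0;
}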
However, adding ExAI capabilities to a game isn't at all easy, and most developers at the 1999 AI roundtables agreed with the opinion from previous years that the trend wasn't likely to become widespread. There are significant design considerations that have to be worked out if one desires to add the ability for players to modify a game AI to suit their tastes, not to mention the problem of after-sale support. Developers have to decide how they're going to provide these hooks (code interface? scripts?), how they're going to document them (in the manual? online? HTML on the CD? not at all?), and just how far they should go to bullet-proof the whole interface in the first place. (Whose fault is it if some player distributes an AI script that erases somebody's hard drive?)
These very issues were, in part, the reason why Activision somewhat de-emphasized its much-touted interface to the AI engine in Call to Power. Originally, the development team had planned to provide full and total access to Call to Power's AI in such a fashion that players could have hypothetically replaced the game's AI with their own. The AI is completely encapsulated within a .DLL file, and it was planned to have players access it via header files that would have provided an interface to many of the internal functions (though the source itself was not going to have been released to the public). Users would have been completely on their own while using this interface — the support issues could have been nightmarish otherwise — and this approach would have allowed anybody who had the time and patience to replace Call to Power's AI completely with their own — a first in the industry.
Activision's Call to Power.
Unfortunately for budding developers, the pressure of shipping on time and the design complications encountered while trying to implement this rather unique feature made that goal unrealistic. Activision was forced to drop that part of the plan (oddly though, you can still find a .MAP file listing the various function interfaces on the CD). Still, a number of extensible features made their way into the game, enough so that, although Activision isn't advertising the fact much, a number of players have begun making variations and trading them online. Players can modify unit attributes (all maintained in flat text files) and have access to the fuzzy logic rules sets used by the AI to set priorities for the strategic-level AI. This allows you to create new unit types and civilizations, in much the same fashion as UnrealScript permits new bots. In a similar vein, Microsoft's Age of Empires provides much the same level of customization of units and civilizations, though emphasis is more on customization of the various personalities of each civilization type than on actual modification of their rules sets.
Learning and Strategic Thinking. Another trend that bubbled to the surface at the 1999 AI roundtables was experimentation with learning AIs in various games. While it was definitely a widely-held opinion that most games featuring learning AIs haven't really done a very good job of delivering, several developers had high hopes that they'd be able to incorporate some level of learning into their next round of releases.
Developers seem to be exploring a number of different approaches to simulate learning, most of which involve comparing the current strategic or tactical situation to similar past situations. Mythos Games, in their recently released Magic & Mayhem, noted that they were doing localized assault planning by continually building a data file describing how attacks had fared in previous scenarios. A proposed attack is compared against this database, and if similar attacks succeeded most of the time, the new one is actually carried out (the threshold is determined partly at random and partly by the personality of the AI player). A "winnowing" algorithm discards old lessons so the learning file doesn't become too large. The developer reported that this approach resulted in an AI that gradually tailored itself to the player's style of play — a feature that is certainly something of a Holy Grail to AI developers.
Mythos Games' Magic & Mayhem.
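A minimal C++ sketch of this history-comparison idea might look like the following. The record fields and the "similarity" test are invented for illustration, since Mythos' actual data format isn't public; a real implementation would also winnow old records, as described above.

#include <cstdio>
#include <cstdlib>
#include <vector>

// One remembered engagement -- fields invented for illustration.
struct AttackRecord {
    int attackerStrength;
    int defenderStrength;
    bool succeeded;
};

// Estimate a proposed attack's odds by looking at similar past attacks.
float HistoricalSuccessRate(const std::vector<AttackRecord> &history,
                            int atk, int def)
{
    int similar = 0, wins = 0;
    for (size_t i = 0; i < history.size(); ++i) {
        // "Similar" here = both strengths within 2 of the proposal.
        if (std::abs(history[i].attackerStrength - atk) <= 2 &&
            std::abs(history[i].defenderStrength - def) <= 2) {
            ++similar;
            if (history[i].succeeded) ++wins;
        }
    }
    // No precedent yet: assume even odds rather than refusing to act.
    return similar ? (float)wins / (float)similar : 0.5f;
}

int main()
{
    std::vector<AttackRecord> history;
    AttackRecord a = { 10, 8, true  }; history.push_back(a);
    AttackRecord b = {  9, 8, false }; history.push_back(b);
    printf("estimated odds: %.2f\n", HistoricalSuccessRate(history, 10, 8));
    return 0;
}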
Interestingly enough, some developers (roughly 20 percent of attendees) were experimenting with Artificial Neural Networks (ANNs) as a learning technology. ANNs have cropped up often in the AI roundtables as a potential solution to the learning-AI problem, but there are some interesting challenges in using the technology in games that have discouraged most developers to date. Historically, using ANNs within a game presents the developer with two particularly thorny problems: First, it can be very difficult to identify meaningful inputs and match them to outputs that make sense within the context of the game; and second, most ANNs learn through a technique called "supervised learning," which requires constant developer feedback. While it is possible to build ANNs that can learn unsupervised, there's no guarantee that they won't "go stupid" and become completely helpless players.
Most developers are trying to avoid these problems by training their ANNs exclusively during the development phase, then freezing them before the game actually ships. This allows them to let the AIs learn while playing against the development team and play testers without the risk that a shipping AI might wander off into some Rain Man universe of perception. The downside, of course, is that the game doesn't learn anything from the player, and so the whole effort boils down to an automated form of AI tuning (ultimately similar to using a genetic algorithm to tune various game AI parameters). A developer of an upcoming sports game announced that he was working on a way to integrate unsupervised learning ANNs into his game, although he planned to include an option to reset the AI should the player feel it had become feeble-minded (or too strong a player, as the case may be).
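For the curious, here is a toy C++ illustration of the train-then-freeze approach using a single perceptron, the simplest possible ANN. The task (learning a logical AND of two inputs) and all the numbers are invented; the point is only that the supervised update runs during development and is disabled before shipping.

#include <cstdio>

// A trained-then-frozen perceptron -- a toy stand-in for the "train during
// development, freeze before shipping" approach.
struct Perceptron {
    float w[2], bias;
    bool frozen;   // set true before the game ships

    float Output(const float in[2]) const {
        float sum = w[0]*in[0] + w[1]*in[1] + bias;
        return sum > 0.0f ? 1.0f : 0.0f;
    }
    // Supervised update: only runs while unfrozen (i.e., in development).
    void Train(const float in[2], float target, float rate) {
        if (frozen) return;
        float err = target - Output(in);
        w[0] += rate * err * in[0];
        w[1] += rate * err * in[1];
        bias += rate * err;
    }
};

int main()
{
    Perceptron p = { { 0.0f, 0.0f }, 0.0f, false };
    // Toy supervised task: "attack" only when both inputs favor it (AND).
    float X[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    float Y[4]    = {  0,     0,     0,     1    };
    for (int epoch = 0; epoch < 20; ++epoch)
        for (int i = 0; i < 4; ++i)
            p.Train(X[i], Y[i], 0.1f);
    p.frozen = true;   // "ship" it: no more learning from the player
    for (int i = 0; i < 4; ++i)
        printf("%g AND %g -> %g\n", X[i][0], X[i][1], p.Output(X[i]));
    return 0;
}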
One big problem with learning AIs that caused much amused discussion at the roundtables was the fact that a learning AI is, by definition, unpredictable. This leads to huge problems when it comes time to do quality assurance testing on your game — how can anything be tested reliably if it behaves differently from game to game? How can a developer fix a bug if it's impossible to recreate the conditions that led to a certain behavior?
In a closely related vein, several developers noted that they were attempting to find AI technologies that would do a better job at strategic-level thinking and planning. To date, most strategy games do an adequate job at the tactical level — identifying cities or units to attack, taking advantage of unprotected assets, and so on — but a lousy job at developing and implementing grand strategy. The problem, from a programmer's point of view, is basically one of optimization.
Most war games, whether real-time strategy or turn-based (ignoring for the moment first-person shooters and RPGs, since they are tactical in the extreme), do a much better job of optimizing small, tactical situations than larger, strategic ones. This leads to AIs that fight battles well but still manage to lose the war, often because they overlook solutions glaringly obvious to the human player. A large part of this situation is simply the result of the historical inclination of developers to build AIs at the unit level; in a Civil War game, for example, a cavalry unit might decide to attack an artillery unit without the presence of any other support. This in turn leads to an AI that often overlooks obvious attacks in favor of frittering away its forces. Adding an ability for a unit to call for help balances things out somewhat, but that's still a far cry from strategic-level thinking.
Additionally, there's the problem that strategic-level planning may be very good for the war effort overall, but very bad for the individual unit. One example of this might be a brigade ordered to hold a vital mountain pass in the face of overwhelming enemy attack — the war might be won because the delaying action bought the time necessary to get reinforcements to the area, but the unit itself isn't likely to survive. An AI built to handle only unit-level thinking is going to have a hard time making this kind of trade-off. Chess game AIs are perhaps the one exception to this rule, but they're cheating, since most chess programs draw upon databases of thousands of games and simply pick the highest-scoring move available at that moment.
Many developers present felt that the time had come to redress this imbalance and were looking to a number of AI-related technologies for help. Some were building on the same techniques used for learning algorithms by using databases of previously-successful moves to develop plans for similar future moves. Others were looking at tools such as Influence Maps (see "Influence Maps in a Nutshell") to provide ways for their AIs to "see" the grand strategic picture. A few were hoping simply to solve the problem the same way most chess programs do, which is to build large databases of opening strategic moves based on feedback from play testers and the development team.
Influence Maps in a Nutshell
Influence Maps (IMs) are an interesting AI technique with roots in the field of thermodynamics, of all things. The technique is known by a variety of other names, such as "attractor-repulsor" and "force fields".
The basic IM algorithm is refreshingly simple for something in the AI field. Imagine an array whose size corresponds to the size of a strategic-level map. For instance, a strategic map of the U.S. might have resolution down to the state level — in that case, the array might be five by ten (one value for each state). Set all values of the array to zero. Adjust the value of each array element upward by one for each friendly unit in that sector of the map, or downward by one for each enemy unit in that sector. Then look at each location of the array and adjust the value found there based on its neighbors: typically, values are increased by one for each adjacent friendly unit and decreased by one for each adjacent enemy unit.
Do this across the entire map and you have a "picture" of sorts that your AI can use to tell how much control each player has over the board. The sign of a value indicates which side has some control. Values near zero indicate areas where control is contested — the front. Large values (positive or negative) indicate strong control over an area.
There can be any number of variations on this basic algorithm depending on the needs of the game, of course, but the principle is the same regardless. This technique can be invaluable in providing all kinds of strategic disposition information to an AI, information which is often difficult to characterize otherwise.
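Here is a small C++ sketch of the algorithm just described, using a five-by-five grid and invented unit placements:

#include <cstdio>

const int W = 5, H = 5;

int main()
{
    // Unit counts per map sector -- placements invented for illustration.
    int friendly[H][W] = {0}, enemy[H][W] = {0};
    friendly[1][1] = 2;   // two friendly units
    enemy[3][3]    = 3;   // three enemy units

    // Influence = own units minus enemy units in this sector, plus or minus
    // one per unit in each of the four adjacent sectors, as described above.
    int inf[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int v = friendly[y][x] - enemy[y][x];
            if (y > 0)   v += friendly[y-1][x] - enemy[y-1][x];
            if (y < H-1) v += friendly[y+1][x] - enemy[y+1][x];
            if (x > 0)   v += friendly[y][x-1] - enemy[y][x-1];
            if (x < W-1) v += friendly[y][x+1] - enemy[y][x+1];
            inf[y][x] = v;
        }

    // Positive = friendly control, negative = enemy, near zero = contested.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) printf("%3d", inf[y][x]);
        printf("\n");
    }
    return 0;
}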
— Steven Woodcock
Interestingly enough, a vocal minority of developers felt the move towards developing better strategic AIs was primarily a waste of time, particularly in games in which players can't easily see the other side's forces. The theory they put forth was that if the player can't see what the computer is doing, why waste time on elaborate strategic AIs in the first place? A few well-placed but thoroughly plausible unit placements (via judicious cheating on the part of the AI) would go a long way towards providing the player with an enjoyable gaming experience. Many of this group felt that the mere appearance of a tank deep behind enemy lines would be ascribed a meaning by the player if the attack came at a particularly vulnerable time. They based this opinion on the reams of e-mail they had received from players that raved about the intelligence of the AIs in their games, when the AI was, in fact, cheating outrageously just to keep up.
Pathfinding. Pathfinding is a perennial favorite topic at the roundtables, but most developers this year were far more interested in finding ways to solve unusual pathfinding situations than in learning "how to." The A* algorithm (for more details, see Bryan Stout's excellent article, "Smart Moves: Intelligent Pathfinding," Game Developer, October/November 1996) has become the de facto solution to this problem for one very simple reason: It works, and it works well. A* has the added benefit of scaling well into newer games that feature 3D terrain, and it requires few tweaks and modifications.
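For readers who haven't implemented it, here is a compact C++ sketch of textbook A* on a small 2D grid, using a Manhattan-distance heuristic. This is the standard algorithm rather than any particular game's implementation; a real game would add terrain costs, tie-breaking, and path reconstruction.

#include <cstdio>
#include <cstdlib>
#include <functional>
#include <queue>
#include <vector>

// 0 = open terrain, 1 = impassable.
const int W = 8, H = 8;
int grid[H][W] = {0};

struct Node { int f, x, y; };
bool operator>(const Node &a, const Node &b) { return a.f > b.f; }

int Heuristic(int x, int y, int gx, int gy)
{
    return abs(gx - x) + abs(gy - y);   // Manhattan distance: admissible here
}

int AStar(int sx, int sy, int gx, int gy)
{
    std::vector<int> cost(W * H, -1);   // best known cost to reach each cell
    std::priority_queue<Node, std::vector<Node>, std::greater<Node> > open;
    cost[sy * W + sx] = 0;
    Node start = { Heuristic(sx, sy, gx, gy), sx, sy };
    open.push(start);
    int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return cost[n.y * W + n.x];
        for (int d = 0; d < 4; ++d) {
            int nx = n.x + dx[d], ny = n.y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || grid[ny][nx]) continue;
            int c = cost[n.y * W + n.x] + 1;
            if (cost[ny * W + nx] == -1 || c < cost[ny * W + nx]) {
                cost[ny * W + nx] = c;
                Node m = { c + Heuristic(nx, ny, gx, gy), nx, ny };
                open.push(m);
            }
        }
    }
    return -1;   // no path exists
}

int main()
{
    for (int y = 0; y < 6; ++y) grid[y][4] = 1;   // a wall to route around
    printf("path length: %d\n", AStar(0, 0, 7, 0));
    return 0;
}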
The 3D pathfinding issues presented by the latest generation of first-person 3D shooters turned out to be nowhere near the problem most developers had feared. Early implementations of A* for 2D games were adapted easily by most developers to the 3D environment, with most arriving at variations of the same solution: overlaying a system of nodes on the 3D environment against which paths are found. Some games generate the nodal network when the game map is loaded, while others simply load a pre-defined network as part of the map data itself. At least one upcoming first-person-shooter style game, The War In Heaven from Eternal Warriors, features an AI that uses a pre-defined node map for its basic pathfinding, but goes one better by dynamically generating new nodes for finer control based on the tactical situation.
Eternal Warriors' The War in Heaven.
Developers at the roundtables were very interested in exploring ways to handle special case pathfinding problems. Identifying and dealing with highly-restrictive terrain (such as bridges or mountain passes) was a hot topic, since these terrain types can lead to traffic jams that make an AI look extremely stupid to the player. Most developers simply marked these terrain features by hand in some fashion in order to make them easy for the AI to identify — although this greatly complicated things when the AI had to deal with randomly generated maps. Many developers said that they solved the problem in part by assigning a special AI agent to play traffic cop, thus side-stepping the issue of bogging down individual unit AIs with the details of crossing a bridge politely.
Another problem of keen interest to developers was how to handle changing terrain gracefully. One of the failings of the A* algorithm is that it assumes the terrain over which it has calculated a path doesn't change — an unfortunate assumption should the bridge you were planning to cross get blown up by an artillery round. To solve this problem, some developers were using D*, a dynamic A* variant tuned to handle changeable terrain, but none were happy with it due to the CPU hit required to recalculate paths. Others simply ignored the change until the unit in question reached the point where it couldn't move, but this approach leads to behavior that most players find objectionable. A few confessed that they didn't bother trying to fix it — if a unit got stuck, they just jumped it a few squares to get it going again.
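A minimal sketch of the middle ground (keep the current path until it is actually invalidated, then replan from the unit's present position) might look like the following C++. Everything here is invented for illustration, and a breadth-first search stands in for A* purely to keep the listing short.

#include <cstdio>
#include <queue>
#include <vector>

struct Point { int x, y; };

const int W = 8, H = 8;
int blocked[H][W] = {0};                 // terrain that can change mid-game

bool Passable(Point p) { return !blocked[p.y][p.x]; }

// Breadth-first search stands in for A* here to keep the sketch short.
std::vector<Point> FindPath(Point from, Point to)
{
    int prev[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) prev[y][x] = -1;
    prev[from.y][from.x] = from.y * W + from.x;
    std::queue<Point> q;
    q.push(from);
    int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
    while (!q.empty()) {
        Point p = q.front(); q.pop();
        if (p.x == to.x && p.y == to.y) {
            std::vector<Point> path;     // walk the parent links back to start
            while (p.x != from.x || p.y != from.y) {
                path.insert(path.begin(), p);
                int pr = prev[p.y][p.x];
                p.x = pr % W; p.y = pr / W;
            }
            return path;
        }
        for (int d = 0; d < 4; ++d) {
            Point n = { p.x + dx[d], p.y + dy[d] };
            if (n.x < 0 || n.x >= W || n.y < 0 || n.y >= H) continue;
            if (blocked[n.y][n.x] || prev[n.y][n.x] != -1) continue;
            prev[n.y][n.x] = p.y * W + p.x;
            q.push(n);
        }
    }
    return std::vector<Point>();         // no route exists
}

// Keep the current path until it is actually invalidated, then replan.
struct Unit {
    Point pos, goal;
    std::vector<Point> path;             // remaining waypoints

    void Step() {
        if (path.empty()) return;
        Point next = path.front();
        if (!Passable(next)) {           // the bridge just got blown up
            path = FindPath(pos, goal);  // replan only when forced to
            if (path.empty()) return;    // truly stuck: a higher-level AI
                                         // should pick a new goal instead
            next = path.front();
        }
        pos = next;
        path.erase(path.begin());
    }
};

int main()
{
    Unit u;
    u.pos.x = 0; u.pos.y = 0;
    u.goal.x = 7; u.goal.y = 0;
    u.path = FindPath(u.pos, u.goal);
    u.Step(); u.Step();                  // start marching along row 0
    blocked[0][4] = 1;                   // terrain changes under the plan
    while (!u.path.empty()) u.Step();    // the unit quietly detours
    printf("arrived at (%d,%d)\n", u.pos.x, u.pos.y);
    return 0;
}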
Technologies on the Wane
One interesting side discussion that cropped up at the roundtables dealt with AI technologies that developers had played with, but then discarded. Some of these will be familiar, since a year ago there was quite a bit of excitement over the possibilities offered by some of them.
Generally speaking, Artificial Life (A-Life) doesn't seem to have gained much use outside of the realm of RPGs and Creatures-style games. A-Life is a natural for RPGs in particular, since it gives developers a way to flesh out a game world using NPCs to do all the dozens of dull and mundane jobs that no player wants to do, but which are vital to the gaming experience. A good A-Life AI can make whole hordes of monsters and NPCs behave realistically with very little CPU overhead, which gives the player the feeling of being a part of a living, breathing world.
Last year, a number of developers were exploring different areas using A-Life technologies in everything from first-person shooters to RTS games, but when push came to shove, many ditched those plans in the face of the inherent difficulty of predicting exactly what a given unit would do in a given situation. Developers found, for example, that it really annoyed their producers when they created a 3D shooter level in which a guard was only "usually" at the bottom of the stairs to raise an alarm. Others found themselves wrestling with games in which a unit would ignore the commands given to it by the player — a realistic situation, perhaps, but hardly one the player is happy to be paying for.
However, some subsets of A-Life technology have found their way into various games. Several of the recent first-person shooters have used flocking algorithms to one degree or another to handle the movement of herds of monsters, birds, fish, and so on. Some RTS games were also making use of flocking variants for group unit movement, and at least one upcoming space combat game (Babylon 5 from Sierra Studios) plans to make use of flocking algorithms to control the movement of enemy fighter wings and fleets of enemy capital ships.
Flocking in Sierra Studios' Babylon 5.
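For reference, here is a bare-bones C++ sketch of a Reynolds-style flocking update with the three classic steering rules (cohesion, alignment, separation). All constants are invented; real games tune these endlessly.

#include <cstdio>

const int N = 20;
struct Boid { float x, y, vx, vy; };

void Flock(Boid b[])
{
    for (int i = 0; i < N; ++i) {
        float cx = 0, cy = 0, ax = 0, ay = 0, sx = 0, sy = 0;
        for (int j = 0; j < N; ++j) {
            if (j == i) continue;
            cx += b[j].x;  cy += b[j].y;      // cohesion: steer to flock center
            ax += b[j].vx; ay += b[j].vy;     // alignment: match flock heading
            float dx = b[i].x - b[j].x, dy = b[i].y - b[j].y;
            float d2 = dx*dx + dy*dy;
            if (d2 < 1.0f && d2 > 0.0001f) {  // separation: push away if too close
                sx += dx / d2; sy += dy / d2;
            }
        }
        cx = cx / (N - 1) - b[i].x;  cy = cy / (N - 1) - b[i].y;
        ax = ax / (N - 1) - b[i].vx; ay = ay / (N - 1) - b[i].vy;
        b[i].vx += 0.01f * cx + 0.05f * ax + 0.05f * sx;
        b[i].vy += 0.01f * cy + 0.05f * ay + 0.05f * sy;
    }
    for (int i = 0; i < N; ++i) { b[i].x += b[i].vx; b[i].y += b[i].vy; }
}

int main()
{
    Boid b[N];
    for (int i = 0; i < N; ++i) {
        b[i].x = (float)(i % 5); b[i].y = (float)(i / 5);
        b[i].vx = 0.1f; b[i].vy = 0.0f;
    }
    for (int t = 0; t < 100; ++t) Flock(b);
    printf("boid 0 at (%.1f, %.1f)\n", b[0].x, b[0].y);
    return 0;
}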
Genetic algorithms (GAs) also haven't found much use in games in the past year. Again, outside of the Creatures genre (which that game nearly owns outright), most attempts by developers to use this technology have fallen flat. The main reason most developers cited was the usual one — too much CPU was being taken up for adaptation and learning that happened at too slow a pace to be useful. After spending several months experimenting with GAs, developers found themselves abandoning the technology in favor of more traditional FSMs and FuSMs. Not only are these more traditional techniques easier to predict and tune, but they demand considerably fewer CPU resources.
A few developers did report success in efforts to adapt GAs as tools to aid in tuning their AIs, and they found them easy to adapt to this task. AI tuning is always something of a problem for developers, because by the time a game is near enough to completion to make tuning a meaningful activity, there can be hundreds of parameters that can affect the AI's style of play. Testing every combination is an impossible task, more so given the often tight deadlines looming towards the end of the development cycle. Using GAs to tune an AI lets the developer automate this process, making hundreds of runs of a game using various parameters for the computer opponents. The best variations can be saved out as the basis for the default AIs shipping with the game.
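A toy C++ sketch of a GA tuner in that spirit follows. Each genome is a small set of AI parameters, and the fitness function here is a made-up stand-in for what would really be the win rate over many automated game runs.

#include <cstdio>
#include <cstdlib>
#include <ctime>

const int POP = 20, GENES = 4, GENERATIONS = 50;

float Fitness(const float g[])
{
    // Invented target: pretend the AI plays best near these settings. In a
    // real tuner this would be the win rate over N automated games.
    const float ideal[GENES] = { 0.7f, 0.3f, 0.5f, 0.9f };
    float err = 0;
    for (int i = 0; i < GENES; ++i)
        err += (g[i] - ideal[i]) * (g[i] - ideal[i]);
    return -err;                               // higher is better
}

float Rand01() { return (float)rand() / (float)RAND_MAX; }

int main()
{
    srand((unsigned)time(0));
    float pop[POP][GENES];
    for (int i = 0; i < POP; ++i)
        for (int j = 0; j < GENES; ++j) pop[i][j] = Rand01();

    for (int gen = 0; gen < GENERATIONS; ++gen) {
        // Pick the two fittest genomes as parents.
        int best = 0, second = 1;
        if (Fitness(pop[1]) > Fitness(pop[0])) { best = 1; second = 0; }
        for (int i = 2; i < POP; ++i) {
            if (Fitness(pop[i]) > Fitness(pop[best]))        { second = best; best = i; }
            else if (Fitness(pop[i]) > Fitness(pop[second])) { second = i; }
        }
        // Refill the rest with mutated crossovers of the parents.
        for (int i = 0; i < POP; ++i) {
            if (i == best || i == second) continue;    // elitism: keep parents
            for (int j = 0; j < GENES; ++j) {
                pop[i][j] = (rand() % 2) ? pop[best][j] : pop[second][j];
                pop[i][j] += 0.1f * (Rand01() - 0.5f); // small mutation
            }
        }
    }

    int winner = 0;
    for (int i = 1; i < POP; ++i)
        if (Fitness(pop[i]) > Fitness(pop[winner])) winner = i;
    printf("tuned parameters:");
    for (int j = 0; j < GENES; ++j) printf(" %.2f", pop[winner][j]);
    printf("\n");
    return 0;
}

The best variations surviving this process become the defaults that ship with the game, as the developers above described.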
Academia and the Game Industry
One interesting development at the 1999 GDC AI roundtables was the attendance of several members of the research, or academic, AI profession. The primary reason for this was probably the close scheduling of the 1999 American Association for Artificial Intelligence (AAAI) Spring Symposium and the GDC (see the sidebar "AAAI Spring Symposium" for more information on the developments at the AAAI conference). This presented an interesting opportunity for many of the theorists in the field to meet some of the engineers.
Feedback from our academic brethren was fascinating, to say the least. Two guests in one of my roundtables, one a physics major dabbling in AI, and the other a formal AI professor, were adamant that the game industry appears to be light years ahead of academia in producing practical, working AI solutions to some very tough problems. This view was echoed by several others in Dr. John Laird's final-day lecture titled "Developing an Artificial Intelligence Behavior Engine." They greatly admired the game industry's rapid pace of development, noting that more formalized AI studies can often take years to formulate theories of behavior, examine possible solutions, and develop prototypes for testing. Of necessity, the game industry moves much faster (an order of magnitude was how one professor characterized it). The lack of rigorous methodology frustrated our guests somewhat because it makes many of the game industry's solutions unacceptable as support for formal AI studies. Despite this, the academic world was still very interested in studying the solutions game developers have engineered.
Several of the game developers present (including myself) were both flattered and astonished by this interest, since many of us have long looked upon the work being done in the academic realm as "real" AI. Both groups agreed that there were lots of things each could learn from the other, something which I hope this article may help facilitate.
AAAI Spring Symposium
Many AI game programmers probably returned home from the Game Developers Conference (GDC) unaware that the next week held another interesting gathering at nearby Stanford University. The American Association for Artificial Intelligence (AAAI) holds both Spring and Fall Symposia, and this year the Spring Symposium (March 22-24) included a session focused on AI in commercial computer games.
Overall, it was an enjoyable experience. The Symposium was small enough that all participants met together in one lecture hall for each session, and attendees from both academia and industry got fairly well acquainted with each other in those two-and-a-half days. There were both lectures and demos, but most of the sessions were panel discussions. The early sessions summarized game AI's past, looking at its successes and failures; sessions in the middle looked at current work. In demos and sessions on NPC design and NPC control, we saw work exploring techniques such as AI control architectures, hierarchical AI, explanation-based representations, pathfinding, natural language interfaces (speech), smart environments, and artificial life. (You can order symposium proceedings at the URL below.) Robotics received a fair amount of focus, which is worth noting by game developers for a couple of reasons. First, game companies may wish to branch into robotic toys (for example, Lego is designing programmable vehicles that kids can tinker with, and its entries at RoboCup soccer tournaments have performed respectably). Second, software techniques and architectures used for mobile robots are often applicable to computer game AI — even low-level movement calculations are useful as game physics simulation gets more realistic.
As interesting as these presentations were, I was even more excited by the discussions about possible future developments in the field of game AI. For example, one discussion session covered AI engines and toolkits, a topic of growing interest. A survey by one panelist revealed that the main reason game developers wanted an AI toolkit was to make a better product, rather than to reduce production cost or time. Many potential obstacles to toolkit use were named; the most common was the desire to understand exactly what the tools were doing. Other obstacles brought out in discussion included suspicion of outside code, a need to know that the technology works, a lack of knowledge of AI fundamentals among developers, potentially large licensing fees, and common demands such as speed, low memory use, flexibility, availability of source, ease of use, documentation, and support. Desired techniques for toolkits included pathfinding, rule-based expert systems (perhaps with fuzzy logic), finite state machines, inverse kinematics, resource allocation solvers, and perhaps natural language handling.
Two sessions focused on new directions for game AI, and potential killer applications for AI; these discussions were necessarily more speculative. Possible new areas for AI in entertainment included speech and camera input into almost any program or toy (such as a Tamagotchi or Furby, but more creative); genuine give-and-take conversation; intelligent physical interaction in museums or theme parks; artificial life (as Creatures and Petz are beginning to explore); real interactive stories; and more personality presence in artificial agents. Other suggestions for killer applications included a "god game" apprentice that could recognize plans and intentions; reliably smart AI for subordinates in strategy games or teammates in action games; variable-skill Quake bots; intelligent story development (causality propagation); and "Furby done right." There was some debate on what landmarks could show that AI has arrived (comments included "when AI is mentioned first in game hype," or "when AI is occasionally the lead cover story in magazines"). It was generally agreed that games and toys will be the vehicle to help familiarize and encourage acceptance of AI by the general public.
Perhaps the favorite topic of discussion was how game companies and academic AI researchers can work more closely together. In the opening session, John Laird of the University of Michigan outlined the mutual benefit: AI makes games more fun (a better challenge, more believability, better interaction), and AI helps sell more games; games help AI research by giving great demos, igniting student interest, and providing robust environments to work in and interesting research problems to solve. Further, he said the games community wants academia to provide more information on AI technology, fast, simple (and good) techniques, and more good AI programmers; academia in turn would like case histories of AI development in games, lists of important problems, interfaces to hook AI into real computer games, and funds to support research.
The symposium ended with a discussion of ways to build better bridges between the game companies and academia. Ideas included summer internships for AI students with game companies, reverse internships to send programmers back to school for a course or two, a peer-reviewed journal on game AI topics, cheap student rates for the GDC, and college degree programs in interactive/electronic entertainment. It was decided that there would be a similar symposium next year. I enjoyed this year's symposium so much that I hope to attend next year. See you there. —Bryan Stout
For further symposium information:
Check out the 1999 AAAI Spring Symposium proceedings at: http://www.aaai.org/Press/Reports/reports.html#spring
For information on next year's symposium on AI in interactive entertainment: http://www.cs.nwu.edu/~wolff/AIIE-2000.html
1999 Symposium on AI and computer games: http://www.cs.nwu.edu/~wolff/aicg99/index.html
What's Next?
As always at the AI roundtables, I asked my fellow developers for their opinions on a number of questions regarding the future of the industry. Where did developers think game AI was going in the next year or so? Will AI continue to be an important part of game design, or will multiplayer gaming render good computer AI moot? Where did developers feel the next big advance in game AI would come from?
Opinions on these questions were mixed, as one might expect. Any AI developer worth their salt, after all, is pretty darn sure that their next game will be the one to contribute something of particular value to the field. Most continued to feel that there would be a slow move away from rigid, rules-based AIs towards more flexible, fuzzy AIs that use a variety of technologies in combination with one another. Additionally, as noted above, most developers seemed to think there would continue to be a move towards opening up the AI to ever-greater levels of user interaction, mostly through a scripting interface of some kind. Everybody was hoping that somebody would manage to put out a game that actually provided programming-level hooks into the AI engine, though nobody at the roundtables volunteered.
Nearly every developer present felt strongly that good game AI would only increase in importance as a part of the finished product, whether multiplayer options were present or not. The reasons for this belief were much the same as they were last year — good game AI will become more of a discriminator as 3D technology levels out, and advances in that area become less spectacular. Learning AIs that can adapt to a given player's style are considered to have big potential, and many developers are concentrating their efforts in that area.
When it came to where developers felt the next big advance in game AI would come from, opinions varied widely across all genres. This was echoed by a poll recently posted on my game AI page, the results of which are shown in Figure 2.
Figure 2. Where do you think the next innovation in game AI will come from?
No particular conclusions can be drawn from the above, except perhaps that developers as a group seem to feel that turn-based strategy games and sports games just don't offer much opportunity for advancing the field. I can only speculate as to the reasons behind this, but I would hazard a guess that developers feel there won't be many more turn-based games released in the future, while sports games have a number of restrictions that make AI innovations a bit more difficult (if you get anything wrong, 100,000 angry fans will write the company to let you know).
There is no question that the game AI field continues to be one of the most dynamic and innovative areas of game development. CPU and memory constraints are (slowly) being lifted, freeing developers to experiment with much more interesting and aggressive AI techniques. We're figuring out what works and what doesn't, slowly building suites of tools to speed things along, and just generally getting better at the job. Better and more entertaining games will be the inevitable result.
For Further Info
Books: There are precious few books that discuss AI from a gaming perspective. Most are academic-oriented texts that go into theory more than practice. My favorite comprehensive reference is still Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (Prentice Hall, 1995).
In progress: Bryan Stout is working on a book dedicated to game AI, due out in early 2000 and tentatively titled Adding Intelligence to Computer Games (Morgan Kaufmann).
Newsgroups: Several Usenet newsgroups focus on artificial intelligence in general and game AI in particular. A few of the better ones in terms of noise-to-content ratio are comp.ai.games, comp.ai, and rec.games.programmer.
Web Sites:
The sister site to Game Developer magazine maintains an online roundtable of game AI that grew out of the GDC roundtables. Highly recommended.
The author's page, dedicated to all things game AI related, provides links to other AI resources and archives of various Usenet threads.
Craig Reynolds, known as the "father of flocking," has the best page on the web to start research into the theory and technology behind flocking and similar A-Life technologies.
James Swift has written a neat little utility that allows exploration of various 3D navigation algorithms.
http://www.geocities.com/ResearchTriangle/Facility/3773
The Artificial Intelligence Resources page maintained by the NRC/CNRC Institute for Information Technology is an excellent starting point for AI research.
http://ai.iit.nrc.ca/ai_point.html
Amit Patel's game programming page, crammed with information on pathfinding algorithms and pointers to other AI resources, is a good basic starting point for anything AI-related.
http://www-cs-students.stanford.edu/~amitp/gameprog.html
Steve's background in AI comes from over a decade of SDI-related work building massive real-time distributed war games for the Air Force at the Joint National Test Facility. When he's not saving the world, he does AI development on a contract basis and goes target shooting when he gets the chance. Steve lives in Colorado Springs, Colo., with a very understanding wife and an indeterminate number of ferrets. He maintains a web page on game AI at http://www.gameai.com, and can be reached via e-mail at [email protected].