
How Tom Clancy's The Division Manages AI Online

In part 2 of my series on The Division, I explain how the online systems manage the AI characters in-game.

Tommy Thompson, Blogger

December 10, 2018

10 Min Read

AI and Games is a crowdfunded YouTube series hosted on Patreon. If you enjoy reading this piece please consider supporting the show at www.patreon.com/ai_and_games

In part 1 of my case study on Tom Clancy's The Division - a 2016 RPG shooter by Ubisoft's Massive Entertainment - I looked at the overall structure of the game and its core AI antagonists. As players carve their way through the streets of New York City, they're faced by a variety of roaming and designer-placed AI characters, both friend and foe. An extensive behaviour tree implementation handled decision making, with many of the attributes used in these behaviours managed courtesy of systems that oversee character factions, as well as skill profiles and rankings.

But all of this fails to address the larger and more fundamental problem the game faces: unlike any game I've covered to date on AI and Games, The Division is an online game and requires players to be connected to a server in order to play it. This requires the AI to be executed in such a way that players who are online together have the same experience: fighting the same enemies in a shared world on individual devices. So in part 2 we're going to explore how the online infrastructure is built to support the non-player characters, how they are created and managed to deliver the same experience to all players, and how AI players were used to help test and deploy new parts of The Division during development.

How AI Works Online

If you're not familiar with online infrastructure for games, The Division operates in what we call a client-server model. Each player logs in from their own copy of the game on their PC or console (the client) to a server that is hosting the game. This server hosts what is happening right now in the world of The Division: the active daily and weekly challenges, the world events that are happening and the in-game economy.

This is where we need to consider the AI in the game's execution. There shouldn't be any discrepancy between what one player in a fireteam experiences compared to their friends: if you're being attacked by an enemy character up close but your friends don't see it, that makes for a horrible experience. To resolve this, all AI systems in the game run server-side. None of the behaviour tree systems I mentioned in part 1 are executed on your end; it all happens on the server. This ensures consistency for all players online.
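To make that split concrete, here's a minimal sketch of server-authoritative AI in Python. The class names and decision format are my own illustration rather than Massive's actual code; the point is simply that decisions are produced on the server and clients only replay them.

```python
from dataclasses import dataclass

@dataclass
class NpcDecision:
    npc_id: int
    action: str       # e.g. "move_to", "open_fire", "take_cover"
    target: tuple     # world-space position the action refers to

class Npc:
    def __init__(self, npc_id):
        self.npc_id = npc_id

    def evaluate_behaviour_tree(self):
        # Stand-in for the behaviour trees covered in part 1.
        return NpcDecision(self.npc_id, "move_to", (10.0, 5.0))

def server_tick(npcs):
    """Server side: run every NPC's decision making and broadcast results."""
    return [npc.evaluate_behaviour_tree() for npc in npcs]

def client_apply(decisions):
    """Client side: never decides anything, only plays out server decisions."""
    for d in decisions:
        print(f"NPC {d.npc_id}: {d.action} -> {d.target}")

client_apply(server_tick([Npc(1), Npc(2)]))
```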

Behaviour Execution

But this presents a problem: the behaviour trees only represent the decision-making aspect of play, and the player still needs to see it all happen in their game, which requires the characters in the world to execute the decisions made for them on the server. In an effort to minimise the amount of data passed between player and server so that the AI can execute smoothly, the animation of all characters is handled purely on the client side. The AI systems on the server have only a very limited understanding of how a given character's animations work - enough to know which ones to execute - while the client handles it all on their end.

Now this solves one problem, in that the data being sent between client and server is kept to a minimum, but it presents another in the form of NPC movement. All non-player characters in The Division use a system of animation-driven locomotion - a more intelligent process of character animation for movement that enables NPCs to move more fluidly and respect the kinematics of their animations. But if the server doesn't monitor the execution, how does it know that a character running towards a destination arrives in the correct place? To resolve this, the locomotion system on the server calculates a motion plan for characters to follow, then the client has to come up with a movement path, based on the available animations that character has, that fits the original plan. It sounds nasty, but it keeps the data overhead to an absolute minimum.
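As a hedged sketch of that division of labour, the snippet below uses invented animation clips and a straight-line plan in place of real pathfinding: the server produces coarse waypoints, and the client greedily picks the clips whose root-motion distance best covers each segment.

```python
import math

# Hypothetical clips: name -> distance the clip's root motion covers.
CLIPS = {"walk": 1.0, "jog": 2.5, "sprint": 4.0}

def server_motion_plan(start, goal, step=2.0):
    """Server side: a coarse list of waypoints towards the destination."""
    n = max(1, int(math.dist(start, goal) // step))
    return [(start[0] + (goal[0] - start[0]) * i / n,
             start[1] + (goal[1] - start[1]) * i / n) for i in range(1, n + 1)]

def client_fit_animations(plan, pos):
    """Client side: pick animations whose motion approximates the plan."""
    sequence = []
    for waypoint in plan:
        remaining = math.dist(pos, waypoint)
        while remaining > 0.1:
            # Choose the clip that most closely covers the remaining distance.
            clip = min(CLIPS, key=lambda c: abs(CLIPS[c] - remaining))
            sequence.append(clip)
            remaining -= CLIPS[clip]
        pos = waypoint
    return sequence

plan = server_motion_plan((0.0, 0.0), (10.0, 0.0))  # sent over the network
print(client_fit_animations(plan, (0.0, 0.0)))      # resolved locally
```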


One problem that did cause issues was that certain information and events were executed simultaneously on client and server, such as NPC turrets being deployed, grenades being thrown and healing taking place. This is done to cut down the passing of data between client and server, but it meant some information had to be synchronised between them at runtime. This synchronisation could explain why there were issues with exploits on the PC build of the game in the opening months after launch, where players modified values in memory to give themselves big health and damage boosts.
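A simple way to picture this trade-off: if both ends run the same deterministic code over the same inputs, only the inputs need to cross the wire. The function below is an invented example of such shared logic, not the game's actual damage model.

```python
def grenade_damage(base_damage, distance, blast_radius=5.0):
    """Deterministic falloff: client and server compute identical results
    from the same inputs, so only the throw itself needs to be sent."""
    if distance >= blast_radius:
        return 0.0
    return base_damage * (1.0 - distance / blast_radius)

# Only the throw (base_damage, position) crosses the network; each side
# recomputes the outcome locally. The catch: if a modified client lies
# about a value it holds locally, there's no authoritative server copy
# to validate against -- the class of exploit described above.
print(grenade_damage(100.0, 2.0))  # 60.0 on client and server alike
```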


Managing The World

Each server used for The Division typically houses several instances of the game map, with potentially up to 1,000 players expected to be playing on a given server during the busiest periods, and thousands more AI to boot. To accommodate the large number of players and AI characters, the servers have a tick rate of 10Hz, meaning all behaviours and locations are updated ten times a second. Now the servers themselves are pretty meaty for 2016 - 40-core machines hosting 256GB of memory - and with all the optimisations made to the game (including many I don't mention in this video) they could run an update tick of each game world in under 20ms typically, and up to 100ms in the worst case. Plus the reduction of continuous data being sent between players and servers meant that the average transfer rate between them is only around 50 kilobits per second. In theory you could run The Division on a 56K modem.
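A fixed-rate tick loop of this kind is straightforward to sketch. The structure below is generic, assuming only the 10Hz figure quoted above; World is a stand-in for the game's actual update systems.

```python
import time

TICK_HZ = 10
TICK_SECONDS = 1.0 / TICK_HZ  # a 100ms budget per tick

class World:
    def update(self):
        pass  # stand-in for behaviour trees, locomotion and world events

def run_server(world, ticks=10):
    for _ in range(ticks):
        start = time.perf_counter()
        world.update()
        elapsed = time.perf_counter() - start
        # Reported figures: under 20ms typically, up to 100ms at worst.
        # An update exceeding the budget means the world falls behind.
        time.sleep(max(0.0, TICK_SECONDS - elapsed))

run_server(World())
```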

So the funny thing is that the servers are not what limits the number of AI characters in the game. You are: or rather, the console or PC you play on. During development it was found that consoles created bottlenecks in the number of AIs executing at once, hence there is a limit to the number of NPCs that can be active in-game at any time. If you've played the story missions you may have noticed that the game frequently breaks up the fighting with periods of respite of varying length; much of this is addressing the console bottlenecks.
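The obvious way to enforce such a limit is a spawn budget. The sketch below is speculative: the cap and the deferral policy are invented, but they capture the idea of pacing encounters around a client-side ceiling.

```python
MAX_ACTIVE_NPCS = 40  # hypothetical ceiling; the real cap is unpublished

class SpawnManager:
    def __init__(self):
        self.active = []

    def try_spawn(self, npc):
        """Defer spawns over the cap -- the respite players notice mid-mission."""
        if len(self.active) >= MAX_ACTIVE_NPCS:
            return False
        self.active.append(npc)
        return True

    def despawn(self, npc):
        self.active.remove(npc)  # frees budget for the next encounter
```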

Meanwhile, the open world itself sets up zones within which an NPC is limited to move if it has been spawned as a roamer or placed encounter. This is largely to ensure NPCs don't wander too far from their original spawning point, meaning you're not going to bump into Last Man Battalion troops in Times Square, but it also prevents players from creating conga lines of thugs that chase them across the length and breadth of Manhattan.
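This "leashing" is easy to express as a clamp on an NPC's movement goals. The sketch below assumes a simple circular zone around the spawn point; the game's actual zones are authored areas, not circles.

```python
import math

class LeashZone:
    def __init__(self, centre, radius):
        self.centre, self.radius = centre, radius

    def clamp_destination(self, dest):
        """Pull any movement goal outside the zone back to its boundary,
        so a chase can never drag an NPC across the whole map."""
        dx, dy = dest[0] - self.centre[0], dest[1] - self.centre[1]
        dist = math.hypot(dx, dy)
        if dist <= self.radius:
            return dest
        scale = self.radius / dist
        return (self.centre[0] + dx * scale, self.centre[1] + dy * scale)

zone = LeashZone(centre=(0.0, 0.0), radius=50.0)
print(zone.clamp_destination((120.0, 0.0)))  # clamped to (50.0, 0.0)
```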

Tied into the navigation and behaviour tree systems are the sensory systems of the AI characters. AI characters have vision cone sensors as well as the ability to hear sounds at varying levels of intensity based on distance. They also make a tactical analysis of which of the 1.5 million cover positions they could move into should combat kick off. The rate at which these sensors update, and their accuracy, is dependent not only on the distance of human players to them, but also on the active weather system on the server: bad weather such as rain and snow will impact the range and accuracy of these sensors. Meanwhile, the rate at which senses are updated scales from every frame should the enemy be in combat, to every 2 seconds if the nearest player is between 50 and 150m away, with sensory updates disabled outright if no players are within 150m of a given character.
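Those update intervals amount to a level-of-detail scheme for perception. The sketch below uses the distances quoted above; the weather factors are invented numbers purely for illustration.

```python
def sensor_update_interval(distance_to_nearest_player, in_combat):
    """Seconds between sensor updates, or None when sensors are disabled."""
    if distance_to_nearest_player >= 150.0:
        return None   # no players within 150m: sensors switched off entirely
    if in_combat:
        return 0.0    # in combat: update every frame
    if distance_to_nearest_player >= 50.0:
        return 2.0    # players between 50m and 150m: update every 2 seconds
    return 0.0        # close by: stay fully responsive

def effective_sensor_range(base_range, weather):
    """Bad weather shortens the senses; these factors are my own guesses."""
    factors = {"clear": 1.0, "rain": 0.7, "snow": 0.5}
    return base_range * factors.get(weather, 1.0)

print(sensor_update_interval(80.0, in_combat=False))  # 2.0
print(effective_sensor_range(60.0, "snow"))           # 30.0
```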


Building and Testing Tools

Now the last thing I wanted to explore was how the non-player character and world systems were built and tested, but also how AI helped test The Division itself.

As alluded to in part 1, Massive built a substantial number of tools to enable designers to build and iterate upon their designs, but also to help the programmers isolate specific issues in the game as they arose. Whilst beyond the scope of this video, Jonas Gillberg's 2016 GDC talk about their tools and systems is required viewing for tools programmers and a valuable lesson in how to streamline development for production teams of all sizes.

The testing tools built enabled full debugging of a behaviour tree as it was operating, but the most impressive part was the potential to test these tools in simulated live instances. The debug toolchain enabled local servers to be deployed to simulate the behaviour of a live instance, and not just identify where issues arose, but roll back the timeline of behaviours during testing to understand when they occurred.
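Recording enough state to roll a behaviour timeline backwards is essentially an event log with a bounded history. The structure below is my own guess at the shape of such a tool, not Massive's implementation.

```python
from collections import deque

class BehaviourTimeline:
    """A bounded log of behaviour-tree events, replayable up to any tick."""

    def __init__(self, max_events=100_000):
        self.history = deque(maxlen=max_events)

    def record(self, tick, npc_id, node, result):
        self.history.append((tick, npc_id, node, result))

    def rewind_to(self, tick):
        """Everything that happened up to the tick where a bug surfaced."""
        return [event for event in self.history if event[0] <= tick]

timeline = BehaviourTimeline()
timeline.record(1, npc_id=7, node="SelectCover", result="success")
timeline.record(2, npc_id=7, node="FireFromCover", result="failure")
print(timeline.rewind_to(1))  # step back to before the failure
```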


But perhaps the most interesting, or rather amusing, aspect of this testing process was ensuring that each mission was complete, valid and stable. When building games of this scale, it's possible for small bugs to deny progression between mission segments or fail to trigger specific in-game events that are required. So Massive built testing facilities that enabled hundreds of AI characters to act as players and be dropped into testing servers to run around and kill everything.

1000 AI players would be dropped into the map, but they had all sorts of hacks and modifications enabled: they didn't respect physics, could walk and even shoot through walls if necessary, and could move between two locations in the world without any consideration for normal navigation. Either individually or in groups, they could fan out and complete whole segments of missions or just explore the open world. They had no real understanding of what they were doing other than wandering around and killing the things they should be killing, but this 1000-monkeys-with-typewriters approach helped profile and bug-test many aspects of server load, mission progression, world events, dynamic encounters and so much more.
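The shape of such a bot is simple precisely because it cheats. The sketch below is a guess at how a tool like this might look: the stub world and every method on it are invented, and the real bots drove live test servers rather than a toy loop.

```python
import random

class StubWorld:
    """A stand-in so the sketch runs; the real tool drove live servers."""
    points_of_interest = [(0, 0), (120, 40), (300, 250)]
    def teleport(self, bot_id, pos): pass
    def enemies_near(self, pos): return []
    def kill(self, enemy): pass

class TestBot:
    def __init__(self, bot_id, world):
        self.bot_id, self.world = bot_id, world

    def step(self):
        # No pathfinding, no collision: jump straight to a point of interest
        # and remove anything hostile nearby (through walls if need be).
        target = random.choice(self.world.points_of_interest)
        self.world.teleport(self.bot_id, target)
        for enemy in self.world.enemies_near(target):
            self.world.kill(enemy)

def stress_test(world, bot_count=1000, steps=100):
    """Hammer mission triggers, world events and server load en masse."""
    bots = [TestBot(i, world) for i in range(bot_count)]
    for _ in range(steps):
        for bot in bots:
            bot.step()

stress_test(StubWorld())
```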

Closing

With the release of Tom Clancy's The Division 2 not far off, it will be interesting to see how these systems have scaled up and improved between projects. It was evident as The Division continued to be updated throughout its lifecycle - with the likes of paid DLC such as The Underground, Survival and Last Stand, as well as the many free updates throughout the last year or so - that there were plenty of interesting opportunities still available in the core design. Naturally, many existing AI systems would expand and improve over this time, and some new ones even came to the fore. For now the secrets behind their development are still a mystery, but you can bet that The Division will be back here on AI and Games in the future.

Bibliography

  • Jonas Gillberg, “AI Behaviour Editing and Debugging in Tom Clancy’s The Division”, Game Developers Conference (GDC), 2016.

  • Drew Rechner & Philip Dunstan, “Blending Autonomy and Control: Creating NPCs for Tom Clancy’s The Division”, Game Developers Conference (GDC), 2016.

  • Philip Dunstan, “How Tom Clancy’s The Division Simulates Manhattan for Millions of Players”, Nucl.ai Conference, 2016.
