
Dynamic vs Static Rendering

To precompute, or not to precompute? Does the quality of precomputed illumination outweigh the ability to build dynamic, flexible scenes?

David Maletz, Blogger

January 8, 2011


Over the past decade, lightmapping and other precomputation methods have been used to simulate accurate lighting in games. Quake was the first computer game to use lightmapping, and the technique gave its levels good lighting (for 1996) from many light sources, with static shadows. Today, lightmapping is still used in many AAA titles and can simulate accurate global illumination throughout a level, letting developers build very photorealistic scenes.
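To make the idea concrete, here is a minimal sketch of what a lightmap baker does for a single texel: accumulate direct lighting from every static light, with a shadow ray for occlusion. This is an illustrative sketch, not any engine's real API; trace_shadow_ray is a hypothetical stub standing in for real scene intersection.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 position; Vec3 color; };

// Hypothetical stub: a real baker would intersect the scene geometry here.
bool trace_shadow_ray(const Vec3& p, const Vec3& light_pos) {
    (void)p; (void)light_pos;
    return false;  // pretend nothing is occluded
}

// Bake one lightmap texel at surface point p with normal n.
Vec3 bake_texel(const Vec3& p, const Vec3& n, const std::vector<Light>& lights) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const Light& l : lights) {
        Vec3 d{l.position.x - p.x, l.position.y - p.y, l.position.z - p.z};
        float dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
        float dist  = std::sqrt(dist2);
        float ndotl = (n.x * d.x + n.y * d.y + n.z * d.z) / dist;  // cosine term
        if (ndotl > 0.0f && !trace_shadow_ray(p, l.position)) {
            float atten = ndotl / dist2;  // Lambert cosine, inverse-square falloff
            sum.x += l.color.x * atten;
            sum.y += l.color.y * atten;
            sum.z += l.color.z * atten;
        }
    }
    return sum;  // stored in the lightmap; fixed until the scene changes
}

The last comment is the whole point of this article: that stored value is only valid for the exact scene it was baked from.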


However, lightmapping and precomputation techniques come at the cost of flexible and dynamic capabilities. The complex lighting computed for lightmaps can take many minutes to generate, sometimes even hours depending on level complexity and lightmapping settings. This computation, while very accurate, is fixed for the scene it was computed for. If the scene changes, the precomputed lightmap data is invalidated, and recomputing it is often impractical because lightmaps take so long to generate. In many games this problem is overlooked in favor of improved rendering quality. However, current trends in game development call for more dynamic scenes with more complex lighting, such as view-dependent specular reflections, which cannot be completely precomputed.


No rendering algorithm that is flexible, handles any kind of scene, and produces high-quality results currently runs in real time. As graphics cards become more powerful, however, those high-quality, dynamic renderings get faster and faster as well. It is realistic to think that in ten years such renderings will be feasible in real time. For instance, one of my own research papers (my profile picture is a screenshot from it) described a flexible, high-quality multi-bounce global illumination solver for diffuse and low-specular scenes that could fully converge within a few seconds. That timing is not exceptional either, as many global illumination papers report times ranging from interactive rates to a few seconds. While these algorithms are still too slow for games today, their performance cost goes down every year.
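As a rough illustration of what "converging within a few seconds" means in practice, here is a sketch of progressive accumulation, the pattern most interactive GI solvers share: each frame contributes one noisy lighting estimate, and the running average converges toward the true solution. This is a generic sketch, not my paper's algorithm; sample_gi_once is a hypothetical stub where a real solver would trace light paths.

#include <cstdlib>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical stub: one noisy global-illumination estimate for a pixel.
// A real solver would trace paths here; this just returns random noise.
Color sample_gi_once(int x, int y, unsigned seed) {
    (void)x; (void)y;
    std::srand(seed);
    float v = static_cast<float>(std::rand()) / RAND_MAX;
    return {v, v, v};
}

// Blend one new estimate per pixel into the running average. The image
// gets less noisy (converges) as frame_count grows.
void accumulate_frame(std::vector<Color>& accum, int width, int height,
                      int frame_count) {
    float w = 1.0f / (frame_count + 1);  // running-average weight
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            unsigned seed = static_cast<unsigned>(frame_count) * 7919u
                          + static_cast<unsigned>(y * width + x);
            Color s = sample_gi_once(x, y, seed);
            Color& a = accum[y * width + x];
            a.r += (s.r - a.r) * w;  // incremental mean: noise shrinks as
            a.g += (s.g - a.g) * w;  // frames accumulate, converging on the
            a.b += (s.b - a.b) * w;  // expected value
        }
}

The key property is that the scene can change at any time: throw away the accumulation buffer, and the solution re-converges over the next handful of frames.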


The dragon model in a Cornell box, converged in 2 seconds using my algorithm.


This does not mean that precomputation should be thrown out the window. Precomputation is useful for components of the scene that never need to change. Take atmospheric rendering, for example. Rendering a multi-bounce volumetric atmosphere is expensive. If the game is a space simulator featuring millions of planets with different atmospheric properties (like Infinity: TQFE), then precomputation does not make sense, and an approximate atmospheric scattering technique should be used. However, if the game takes place on only a handful of planets whose atmospheric properties do not change much, then precomputation, using techniques like that of Bruneton et al., can greatly improve the accuracy of the atmosphere for those planets. And if the relative sun position, the atmospheric properties, and the height of the camera within the atmosphere do not change much, then precomputing the entire atmosphere into a skybox is a cheap and accurate alternative. Games simply need to figure out how much of the scene needs to be dynamic, and how much quality or performance they are willing to sacrifice for that portion.
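The tradeoff fits in a few lines of code. Here is a hedged sketch of the two extremes, with evaluate_scattering and cubemap_texel_dir as hypothetical stubs standing in for a real scattering solve (e.g. in the style of Bruneton et al.) and the texel-to-direction mapping:

#include <vector>

struct Vec3 { float x, y, z; };
struct Cubemap { int size; std::vector<Vec3> texels; };

// Hypothetical stub: a real implementation would integrate multi-bounce
// scattering along the view ray here.
Vec3 evaluate_scattering(const Vec3& view_dir, const Vec3& sun_dir) {
    (void)view_dir; (void)sun_dir;
    return {0.3f, 0.5f, 0.9f};
}

// Hypothetical stub: maps every face like +Z for brevity.
Vec3 cubemap_texel_dir(int face, int u, int v, int size) {
    (void)face;
    return {2.0f * (u + 0.5f) / size - 1.0f,
            2.0f * (v + 0.5f) / size - 1.0f, 1.0f};
}

// Static case: sun and camera height never change, so pay the full cost
// once at load time, then sample the cubemap essentially for free.
Cubemap bake_sky(const Vec3& sun_dir, int size) {
    Cubemap sky{size, {}};
    sky.texels.reserve(6u * size * size);
    for (int face = 0; face < 6; ++face)
        for (int v = 0; v < size; ++v)
            for (int u = 0; u < size; ++u)
                sky.texels.push_back(evaluate_scattering(
                    cubemap_texel_dir(face, u, v, size), sun_dir));
    return sky;
}

// Dynamic case: the sun moves (a day/night cycle), so the same evaluation
// must run per frame, trading performance for flexibility.
Vec3 shade_sky_dynamic(const Vec3& view_dir, const Vec3& sun_dir) {
    return evaluate_scattering(view_dir, sun_dir);
}

The two functions compute the same sky; the only question is whether the inputs change at runtime, which is exactly the question this article is asking about the whole scene.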


An offline, raytraced sunset image I generated in half an hour, with a volumetric atmosphere and water.
Compare to Bruneton et al.'s work (semi-precomputed) and Infinity: TQFE (fully dynamic).


Why do we care about dynamic rendering algorithms when the majority of scenes in games are static? Because even scenes we think of as static don't have to be static, and could benefit from motion. Buildings should be able to collapse, explosions should create realistic craters, trees should be able to bend in the wind or in shockwaves from explosions, and light sources should be able to move (like the sun, or headlights on a car) and still contribute more to the scene than just direct lighting. Crytek's engine and the Crysis games are, to my mind, a good example of using dynamic lighting instead of lightmaps. Crytek developed Light Propagation Volumes, a real-time global illumination solver, for their games. Its quality cannot compete with offline precomputed lighting data that took hours to generate, but the effect is still convincing, and it allows their trees to billow and their bridges to break, creating scenes full of motion.
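To give a flavor of the idea behind Light Propagation Volumes, here is a heavily simplified sketch. The real technique injects reflective-shadow-map samples and propagates spherical-harmonics coefficients on the GPU; this toy version propagates a single scalar intensity through a coarse CPU grid, which is only the skeleton of the approach:

#include <vector>

struct Grid {
    int n;                     // cells per axis
    std::vector<float> cells;  // n*n*n scalar intensities
    explicit Grid(int n_) : n(n_), cells(n_ * n_ * n_, 0.0f) {}
    float& at(int x, int y, int z) { return cells[(z * n + y) * n + x]; }
};

// One propagation pass: every cell spreads its light evenly to its six
// axis-aligned neighbors. (Real LPV weights this directionally via SH.)
void propagate_once(Grid& g) {
    Grid next(g.n);
    const int dirs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = 0; z < g.n; ++z)
        for (int y = 0; y < g.n; ++y)
            for (int x = 0; x < g.n; ++x)
                for (const auto& d : dirs) {
                    int nx = x + d[0], ny = y + d[1], nz = z + d[2];
                    if (nx >= 0 && nx < g.n && ny >= 0 && ny < g.n &&
                        nz >= 0 && nz < g.n)
                        next.at(nx, ny, nz) += g.at(x, y, z) / 6.0f;
                }
    g.cells.swap(next.cells);
}

Usage: inject direct light into the grid each frame, run a few propagation passes, then sample the grid as indirect light when shading. Because everything is re-solved per frame, lights and geometry are free to move, which is exactly what lightmaps cannot offer.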


Keeping all of this in mind, everything in the game engine I've been developing has been designed with flexibility and dynamic scenes in mind. No more problems getting precomputed static objects and on-the-fly dynamic objects to fit together, and no hassle getting a door to break or turning static objects into dynamic ones. What can be precomputed (like the atmosphere) is precomputed, and what cannot (like global illumination for dynamic scenes) is not. Nevertheless, I am curious what you, my fellow game developers, think about the relative costs and benefits of static versus dynamic rendering. Are the benefits of dynamic rendering (such as increased interactive capabilities) worth the currently unavoidable cost in appearance? Or are precomputed scenes simply the way to go to wow the gaming world? I believe that dynamic rendering is integral to the future of gaming - especially as the quality and performance of those algorithms improve. What do you think?
