
The Revolution in 3D Cinematics: an Epitaph for Pre-rendering

Modern hardware and software have closed the gap between pre-rendered cinematics and real-time graphics, and in doing so have made pre-rendering itself unnecessary.

Adrian Gimate-Welsh, Blogger

June 8, 2022

4 Min Read

Deus Ex: Human Revolution was one of the most successful videogames of 2011, and its 3D art was beautiful and immersive. But play it again on current-generation hardware, ten years later, and its real-time gameplay graphics look and run noticeably better than the pre-rendered graphics in its cinematics. That should strike anyone as odd, because developers introduce cinematics precisely to show off the best graphics a game can achieve, not the other way around. So what happened?

The key lies in the difference between pre-rendered cinematics and real-time graphics, a difference that modern hardware and software have erased. It used to be necessary for developers to pre-render cinematics and insert them between gameplay segments as video files, because the graphics they wanted to showcase were ones that computers and consoles of the time couldn't render in real time. Lighting, shadows, polygon counts, post-processing shaders, and other effects overwhelmed most hardware when displayed live: machines crashed, froze, or the game simply wouldn't load. Because developers wanted to offer gamers the highest-quality cinematics to empower their stories, pre-rendering them as video became the preferred approach. Today, things are totally different.


Deus Ex: Human Revolution pre-rendered cinematic (left) versus real-time graphics (right)

Technical Advances

Previously, GPUs simply lacked the transistors, clock speed, and memory capacity to render in real time the best graphics developers wanted to show. The PlayStation 3's GPU, for instance, had 302 million transistors, ran at 550 MHz, and had 256 MB of memory. The next console generation saw a dramatic increase in GPU power: the PlayStation 4's GPU packed 5,700 million transistors (almost nineteen times as many), ran at 800 MHz, and had 8 GB of memory (32 times more). With that hardware, the GPU could render highly detailed graphics in the most complicated environments without much effort, making pre-rendered cinematics not just obsolete but, over time, visibly worse than gameplay. The PlayStation 5 has only cemented this.

Another factor is ray tracing, the technique that simulates how light behaves in real life. It was adopted first in the film industry and only reached videogames a few years ago, when Nvidia's RTX GPUs brought it to real-time rendering in 2018, closely followed by AMD's Radeon RX line. Before that breakthrough, pre-rendered cinematics always had better lighting effects; not anymore.
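To give a sense of what "simulating light" means at the lowest level, here is a minimal C++ sketch of the fundamental operation every ray tracer performs: testing whether a ray hits a piece of geometry, in this case a sphere. It is purely illustrative; RTX-class hardware accelerates enormous numbers of such intersection tests per frame in dedicated silicon, and no engine implements them this naively.

```cpp
// Minimal illustration of the core ray-tracing primitive: does a ray hit a sphere?
// A conceptual sketch only, not how RTX hardware or any engine implements it.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true (and the distance t along the ray) if the ray (origin, dir)
// intersects a sphere of radius r centered at c. Solves the quadratic
// |origin + t*dir - c|^2 = r^2 for the nearest positive t.
bool raySphere(Vec3 origin, Vec3 dir, Vec3 c, float r, float& t) {
    Vec3 oc = sub(origin, c);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float k = dot(oc, oc) - r * r;
    float disc = b * b - 4.0f * a * k;
    if (disc < 0.0f) return false;           // ray misses the sphere entirely
    t = (-b - std::sqrt(disc)) / (2.0f * a); // nearest of the two hit points
    return t > 0.0f;                         // hit must be in front of the origin
}

int main() {
    float t;
    // Camera at the origin, looking down +z at a unit sphere 5 units away.
    if (raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f, t))
        std::printf("hit at distance %.2f\n", t); // prints 4.00
    return 0;
}
```

A renderer repeats tests like this for every pixel, bouncing rays off surfaces to gather reflections and shadows, which is why real-time ray tracing had to wait for dedicated hardware.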

More recently, Unreal Engine 5 introduced Nanite, a virtualized geometry system that allows assets to carry a virtually unlimited number of polygons. This is a great development because it lets high-poly game assets closely resemble the visual quality of pre-rendered film assets. It works by recalculating the level of detail per cluster of geometry rather than per whole asset, while capping the number of polygons actually drawn on screen. Nanite enables in-engine cinematics of extraordinary quality, making pre-rendering utterly superfluous.
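To make the cluster idea concrete, here is a toy, hypothetical C++ sketch: a mesh split into clusters, each with several precomputed detail levels, and a per-frame triangle budget that coarsens the smallest-on-screen clusters first. Epic's actual Nanite implementation is vastly more sophisticated, streaming hierarchies of micro-triangle clusters; this only illustrates the "per-cluster detail under a cap" concept described above.

```cpp
// Toy sketch of cluster-based level-of-detail selection under a triangle budget.
// Nanite's real algorithm is far more advanced; this only illustrates adjusting
// detail per cluster instead of per whole asset.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Cluster {
    float screenSize;               // projected size of the cluster on screen
    std::vector<uint32_t> lodTris;  // triangle counts, finest LOD first
};

// Pick a LOD per cluster: start every cluster at its finest LOD, then coarsen
// the least visually important clusters (smallest on screen) until the total
// triangle count fits the frame budget.
std::vector<size_t> selectLods(const std::vector<Cluster>& clusters, uint64_t budget) {
    std::vector<size_t> lod(clusters.size(), 0);
    uint64_t total = 0;
    for (const auto& c : clusters) total += c.lodTris[0];

    // Visit clusters from smallest on screen to largest, coarsening as we go.
    std::vector<size_t> order(clusters.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return clusters[a].screenSize < clusters[b].screenSize;
    });

    for (size_t i : order) {
        while (total > budget && lod[i] + 1 < clusters[i].lodTris.size()) {
            total -= clusters[i].lodTris[lod[i]] - clusters[i].lodTris[lod[i] + 1];
            ++lod[i];
        }
    }
    std::printf("total triangles after selection: %llu\n",
                (unsigned long long)total);
    return lod;
}

int main() {
    // Hypothetical mesh: three clusters, three detail levels each.
    std::vector<Cluster> clusters = {
        {0.80f, {100000, 25000, 6000}},
        {0.20f, {100000, 25000, 6000}},
        {0.05f, {100000, 25000, 6000}},
    };
    selectLods(clusters, 150000);  // cap on-screen triangles at 150k this frame
    return 0;
}
```

The key design point is that a tiny background prop gives up detail long before the character filling the frame does, so the cap stays invisible to the player.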

Benefits of Working with In-Engine Rendered Cinematics

As technical advances have allowed better and better real-time graphics, real benefits have opened up for developers who choose real-time rendering over pre-rendered video for their cinematics. One of the biggest is disc space. Video files considerably inflate a game's storage footprint. For localization, real-time cinematics save space because the game simply swaps the audio track, a few specific textures, and the lip-sync data to match the player's language. With pre-rendered cinematics, developers must render a separate video for every supported language, occupying far more storage, or spend resources splitting the game into per-country builds. It saves money and time to ship a single SKU (Stock-Keeping Unit) per large region rather than separate discs for individual languages.
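In practice the saving comes from sharing one copy of the scene and resolving only a handful of locale-specific assets at load time. The C++ sketch below is hypothetical (every file name and path is invented for illustration), but it shows the shape of the idea: one cinematic, three languages, three small asset swaps instead of three full video renders.

```cpp
// Hypothetical sketch: a real-time cinematic keeps one copy of the scene and
// resolves only locale-specific assets (audio, textures, lip-sync data) at
// load time. All paths and names here are invented for illustration.
#include <cstdio>
#include <string>

struct CinematicAssets {
    std::string voiceTrack;   // dubbed dialogue for the player's language
    std::string signTexture;  // in-world signage with translated text
    std::string lipSyncData;  // per-language mouth animation curves
};

// One shared scene; only these three assets differ per language.
CinematicAssets resolveAssets(const std::string& locale) {
    return {
        "audio/intro_vo_" + locale + ".ogg",
        "textures/signs_" + locale + ".dds",
        "anim/intro_lipsync_" + locale + ".bin",
    };
}

int main() {
    for (const char* locale : {"en", "fr", "ja"}) {
        CinematicAssets a = resolveAssets(locale);
        std::printf("[%s] %s | %s | %s\n", locale,
                    a.voiceTrack.c_str(), a.signTexture.c_str(),
                    a.lipSyncData.c_str());
    }
    return 0;
}
```

With pre-rendered video, each of those lines would instead correspond to a complete re-rendered movie file, duplicated for every language on the disc.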

In terms of future-proofing, real-time cinematics automatically inherit any changes and improvements made to assets, because those assets are pulled directly from the game world, with no re-rendering required. This also allows character customization to carry into cinematics without contradicting the gameplay's artwork. The same applies when porting a videogame between platforms or remastering it for a new generation: graphical upgrades apply to in-engine cinematics automatically, giving videogames a longer lifespan.

The trend is clear: as computer technology advances, the need for pre-rendering has all but disappeared. Videogames now have the visual quality past developers only dreamed of, and in-engine real-time rendering enhances stories with sublime visual assets and environments in ways that were impossible years ago. But this is just the beginning of a new generation of graphically powerful videogames. With ray tracing and Nanite, as well as other technologies like MetaHuman, the level of realism will eventually let videogames match the artistry of the most sophisticated animated films. That future doesn't lie far off.

This article was written with the help of TagWizz’s group of experts.
