
Raytraced shadows implementation in The Riftbreaker

Implementing raytraced shadows in a game with an isometric point of view required us to find solutions to a lot of very specific problems. This article describes the process we went through and what we gained from implementing raytracing.

Piotr Bomak, Blogger

November 30, 2020


The Riftbreaker is an isometric strategy game combined with elements of survival, exploration, and hack’n’slash. Powered by our own technology, The Schmetterling Engine 2.0, The Riftbreaker allows us to make use of the newest available developments in the gaming world, one of which is real-time raytracing. In this article, we will describe our implementation of raytracing in the game, as well as explain what kind of problems we faced and how we solved them.


Dynamically changing time of day, a variety of weather effects and multiple biomes to be explored make real-time raytracing a great choice for a game like The Riftbreaker.

A brief technology introduction

The Riftbreaker’s world is fully dynamic and destructible. Practically all of the objects that are present in the environment can be modified by the player. Vegetation can be bent, burnt, and dissolved. Thousands of creatures can swarm the player and completely fill up the screen. This kind of gameplay premise calls for a specialized approach to rendering fully dynamic shadows.

Before we added raytracing features to The Schmetterling 2.0 engine, we used real-time shadow mapping (with no precomputed shadow maps) to generate shadows. This approach suited our fully dynamic scene geometry best: we could not rely on precomputed lightmaps, as they would never match the current state of the scene. Until real-time raytracing became viable, dynamic shadow mapping was therefore our only real choice. While shadow mapping is widely used throughout the industry, it comes with a range of limitations.



A short fragment of a boss fight. The boss creature has a shadow casting point light attached, adding a lot of visual fidelity to the scene.

The latest generations of GPUs are powerful enough to carry out raytracing calculations in real-time. The new hardware allowed us to finally introduce real-time raytraced shadows which provide superior results when compared to traditional shadow mapping techniques. 

The basic principle of raytraced shadows is that instead of looking at the scene from the point of view of the light source and searching for all possible shadow casters, as shadow mapping does, we simply shoot rays from each visible surface point toward the light source. If a ray hits an obstacle along the way, the surface is not directly lit. If the ray reaches the light source, there is no shadow to add. In principle, it is a much simpler algorithm that produces great results and solves typical shadow-mapping problems. However, it is very demanding on GPU performance.
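In its most minimal form, this visibility test can be sketched as follows. This is a conceptual CPU-side illustration, not our actual shader code; the Ray struct and the anyHit callback are hypothetical stand-ins for the scene traversal the GPU performs in hardware.

```cpp
#include <functional>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; float tMin, tMax; };

// Conceptual shadow test: cast one ray from the shaded surface point
// toward the light. Any obstruction means the point is in shadow;
// reaching the light unobstructed means it is directly lit.
bool isDirectlyLit(const std::function<bool(const Ray&)>& anyHit,
                   Vec3 surfacePos, Vec3 toLight, float lightDistance)
{
    Ray shadowRay;
    shadowRay.origin = surfacePos;
    shadowRay.dir    = toLight;        // normalized direction toward the light
    shadowRay.tMin   = 0.001f;         // small offset to avoid self-intersection
    shadowRay.tMax   = lightDistance;  // stop the search at the light source
    return !anyHit(shadowRay);         // no blocker found -> directly lit
}
```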


Each element of the dynamic scene geometry has the potential to cast a shadow. Our job was to make sure it’s all as accurate as possible while maintaining performance.

Adding a completely new rendering technique to a proprietary game engine is not an easy task. In our case, it was made possible by our cooperation with AMD. They provided us with their GPUOpen RT Shadows library, which contains the raycasting solution along with a denoiser that cleans up the results of a raytracing pass. However, before we could use the library, we had to develop a DirectX 12 renderer for our engine. The reason is the DirectX Raytracing API (also known as DXR), introduced with the DirectX 12 Ultimate standard. It is a new API that gives us access to the new shader stages and the hardware raytracing capabilities of modern GPUs.
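As a small, concrete example, this is roughly how an engine can ask DirectX 12 whether hardware raytracing is available before creating any DXR state. The D3D12 structures and enums here are the real API; the wrapper function is only an illustration, not our engine code.

```cpp
#include <d3d12.h>

// Query the device's raytracing tier through the standard feature-support
// mechanism. Tier 1.0 or above means DXR is usable on this adapter.
bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```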


The scale of encounters in The Riftbreaker ranges from small skirmishes to drawn-out battles against enemy hordes. Coming up with the right optimizations was key.

Another benefit of joining forces with AMD is the open-source nature of their solutions. This enables us to introduce technologies compatible with the latest gaming platforms on the market, including the next-generation consoles. It is also worth mentioning that platform compatibility influenced our choice of rendering API. The two options we had to consider were Vulkan and DirectX 12. While Vulkan does offer raytracing, the API is not available on Xbox or PlayStation, and at the time of writing only Nvidia hardware supports it on PC. By going with DirectX 12 we received native raytracing support on Xbox and Windows PC, and we are able to utilize hardware from all manufacturers.



As the weather conditions change, so does the shadow penumbra. In this example you can see the shadows becoming softer during rain and sharpening as the sunlight intensifies.

The benefits of implementing raytraced shadows can differ based on the implementation scenario. In The Riftbreaker’s case, the most important features include:
- “infinite” shadow resolution - the quality of a rendered shadow does not depend on an object's distance from the camera, as it does in traditional shadow mapping. Each pixel on the screen has individually computed shadowing, which results in much more precise and stable shadows without flickering artifacts.
- variable shadow penumbra - raytraced shadows allow us to dynamically simulate situations like the transition from an overcast sky during rain to a sharply lit noon.
- relatively low cost of additional shadow-casting lights - in our current raytraced shadows implementation we can calculate up to 4 shadow-casting lights at the same time without a big performance hit. Adding another shadow-casting light in traditional shadow mapping costs comparatively much more.

We presented the technological benefits of our cooperation with AMD in our recent The Riftbreaker RDNA 2 features presentation.

All of these benefits come at a significant performance cost. Even on the latest GPUs with hardware raytracing acceleration, enabling all raytracing effects can cut the framerate in half compared to the same scene rendered without raytracing.

Raytraced shadows implementation

Adding raytraced shadows to a scene in The Riftbreaker is a complex process that results in a very detailed map of lit and unlit pixels. During the first raytracing pass, we reconstruct the world space positions of all pixels in the visible screen area and cast rays from those positions toward all the light sources that affect them. We obtain the world space position from the depth buffer. If the ray reaches the light source, the surface is lit directly. If the ray is obstructed along the way, we are dealing with a shaded surface. Additionally, to optimize the process, we discard all raycasts from surfaces whose normals face away from the light source. The next pass determines the type of shader to be applied at the intersection of the ray with the surface. The DXR API handles that by checking the result of the raytrace against our shader table and selecting the appropriate shader.
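The first step - recovering a world space position from a depth buffer value - follows the standard pattern of unprojecting through the inverse view-projection matrix. The sketch below is a CPU-side illustration with a hypothetical row-major matrix type; in the engine this work naturally happens in a shader.

```cpp
#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>;  // row-major 4x4 matrix

static Vec4 mul(const Mat4& m, const Vec4& v)
{
    return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
             m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
             m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
             m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
}

// Reconstruct the world-space position of a pixel from its depth-buffer
// value. (u, v) is the pixel center in [0,1] texture space, 'depth' is
// the raw depth value, 'invViewProj' is the inverse view-projection.
Vec4 reconstructWorldPos(float u, float v, float depth, const Mat4& invViewProj)
{
    // Texture space -> normalized device coordinates (D3D convention:
    // y points up in NDC, depth is already in [0,1]).
    Vec4 ndc = { u * 2.0f - 1.0f, 1.0f - v * 2.0f, depth, 1.0f };
    Vec4 world = mul(invViewProj, ndc);
    // Perspective divide recovers the actual world-space position.
    world.x /= world.w; world.y /= world.w; world.z /= world.w; world.w = 1.0f;
    return world;
}
```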


An example of a typical gameplay scene with all raytraced effects enabled - soft shadows and ambient occlusion.

The Radeon RX 6000 Series cards that we used to develop our raytracing implementation are incredibly powerful, capable of casting millions of rays per second. However, to achieve an exact representation of how light behaves in the real world, you need even more information. In offline rendering, that usually means casting thousands of rays per pixel, in all directions. No present-day hardware is capable of carrying out these calculations in real-time, let alone doing it 60 times a second. This means we have to create an accurate shadow map from incomplete data.


This is the resulting image of the raytracing pass before denoising. Note the fuzzy shadows, as well as the edges of sharp objects - nothing is really clear here.

Not having accurate data is problematic and could degrade the visual quality of a scene. While we would have most of the information necessary to render the objects and apply their properties, details such as contours, edges, and soft shadows would become blurry and blend into each other. The limited number of rays that we can cast per frame results in a noisy shadow image. This is where the other GPUOpen library comes into play - the AMD FidelityFX Denoiser. Denoising is a complex process made possible by extensive use of temporal techniques that analyze past frames and combine them into a new one. The AMD denoiser allows us to quickly average the available data and decide what properties to apply to any given pixel, resulting in a sharp image without any visible compromises.
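To give a feel for the temporal side of this, here is the basic accumulation idea in its simplest form. This is not the FidelityFX Denoiser itself - its internals are far more sophisticated - just an illustration of how blending a reprojected history value with the current noisy frame converges toward a stable result.

```cpp
// Simplified temporal accumulation: blend this frame's noisy shadow
// value with the history value reprojected from previous frames.
float accumulateShadow(float noisyCurrent, float history,
                       bool historyValid, float blendFactor = 0.9f)
{
    if (!historyValid)      // disocclusion: no usable history, fall back
        return noisyCurrent;
    // Exponential moving average: the longer a pixel stays visible, the
    // more past samples effectively contribute to its shadow value.
    return blendFactor * history + (1.0f - blendFactor) * noisyCurrent;
}
```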


Here’s the same scene after the denoising step. Everything looks much better without the need for additional rays to be cast.


The denoiser utilizes a white noise pattern animated over time by means of the golden ratio. This visualization was provided by Alan Wolfe and has also been featured in his blog post on noise patterns: https://blog.demofox.org/2017/10/31/animating-noise-for-integration-over-time/
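The animation trick described in that post is compact enough to show here: add an irrational increment to the static noise value each frame and keep only the fractional part, which distributes the samples well over [0,1) across frames.

```cpp
#include <cmath>

// Animate a static white-noise value over time using the golden ratio,
// as described in Alan Wolfe's blog post linked above.
float animateNoise(float whiteNoise, unsigned frameIndex)
{
    const float goldenRatioConjugate = 0.61803398875f;  // 1/phi
    float value = whiteNoise + static_cast<float>(frameIndex) * goldenRatioConjugate;
    return value - std::floor(value);  // keep only the fractional part
}
```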

Raytracing a dynamically changing scene

Naturally, the implementation of raytraced shadows was not as simple as enabling a couple of ready-to-use libraries. The Riftbreaker presented its own unique challenges that required purpose-built solutions.

The first such issue was the massive number of dynamic objects present in our game world. The premise of The Riftbreaker is that you are a scientist exploring an exoplanet inhabited by numerous alien species of flora and fauna. The player is often attacked by hordes of alien creatures numbering in the thousands, each of which has to cast an individual shadow. Coupled with a dynamic vegetation system that reacts to wind, shockwaves, and bending forces applied by other entities, this was a real optimization challenge.


Thousands of entities interact with each other in real-time.

The main problem was caused by the way the top-level acceleration structure for raytracing processes data. It is a structure that stores information about entities in the scene for use during the raycasting pass, and it can only take in data in the form of pre-baked bottom-level acceleration structures that contain object vertex information. That is not an issue for rocks or buildings; however, units with skeletal, dynamically blended animation, as well as dynamic vegetation, do not fit the assumption of ‘pre-baked’ data at all.
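For readers unfamiliar with DXR, a top-level acceleration structure is fed an array of instance descriptors, each pointing at a prebuilt bottom-level structure. The D3D12 structs below are the real API; the SceneObject type and the indexing scheme are hypothetical simplifications of what an engine would actually store.

```cpp
#include <d3d12.h>
#include <vector>
#include <cstring>

// Engine-side stand-in: one entry per renderable object in the scene.
struct SceneObject
{
    float transform[3][4];                  // 3x4 row-major world matrix
    D3D12_GPU_VIRTUAL_ADDRESS blasAddress;  // prebuilt BLAS for this mesh
};

// Fill one TLAS instance descriptor per scene object.
std::vector<D3D12_RAYTRACING_INSTANCE_DESC>
BuildInstanceDescs(const std::vector<SceneObject>& objects)
{
    std::vector<D3D12_RAYTRACING_INSTANCE_DESC> descs(objects.size());
    for (size_t i = 0; i < objects.size(); ++i)
    {
        D3D12_RAYTRACING_INSTANCE_DESC& d = descs[i];
        std::memcpy(d.Transform, objects[i].transform, sizeof(d.Transform));
        d.InstanceID = static_cast<UINT>(i);
        d.InstanceMask = 0xFF;                  // visible to all ray types
        d.InstanceContributionToHitGroupIndex = static_cast<UINT>(i);
        d.Flags = D3D12_RAYTRACING_INSTANCE_FLAG_NONE;
        d.AccelerationStructure = objects[i].blasAddress;
    }
    return descs;
}
```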


All entities ‘bake’ their current state every frame. Since most objects are animated, we need their exact vertex positions at all times. Here you can see the slight differences in rotation between the buildings - if we didn’t take those into account, the results of the raytracing pass would be inaccurate.

In order to provide the acceleration structure with all the data it needs, we had to take drastic measures. Every dynamic object in the scene is individually baked into a completely new, static model that can be processed during the raytracing pass, and this process needs to be repeated every frame to maintain accuracy. What adds difficulty to this task is the fact that The Riftbreaker features a dynamic weather system. Because of that, the light source can be at an angle that makes an entity cast a shadow into the frame without the entity itself being visible on screen. This means that during the baking process we need to take into account not only the entities in the visible part of the screen but also those outside of it.
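Conceptually, the per-frame ‘bake’ for an animated unit amounts to evaluating its skinning and writing plain static positions that the BLAS build can consume. The CPU-side sketch below illustrates that idea with hypothetical types; in practice this kind of work runs on the GPU in a compute pass.

```cpp
#include <vector>

struct Vec3   { float x, y, z; };
struct Mat3x4 { float m[3][4]; };  // one blended bone transform

// Transform a point by a 3x4 affine matrix.
static Vec3 transformPoint(const Mat3x4& m, Vec3 p)
{
    return { m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3],
             m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3],
             m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3] };
}

struct SkinnedVertex
{
    Vec3  position;       // rest-pose position
    int   boneIndex[4];   // up to four bone influences
    float boneWeight[4];  // weights summing to 1
};

// 'Bake' skinned vertices into a static position buffer for the BLAS build.
void bakeSkinnedVertices(const std::vector<SkinnedVertex>& in,
                         const std::vector<Mat3x4>& bones,
                         std::vector<Vec3>& out)
{
    out.resize(in.size());
    for (size_t i = 0; i < in.size(); ++i)
    {
        Vec3 p = { 0.0f, 0.0f, 0.0f };
        for (int b = 0; b < 4; ++b)  // blend the bone influences
        {
            Vec3 t = transformPoint(bones[in[i].boneIndex[b]], in[i].position);
            p.x += t.x * in[i].boneWeight[b];
            p.y += t.y * in[i].boneWeight[b];
            p.z += t.z * in[i].boneWeight[b];
        }
        out[i] = p;
    }
}
```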


The image of an entire acceleration structure prepared for just one frame of rendering.

The process of preparing the acceleration structures is very computationally heavy for the CPU and could easily bottleneck the entire renderer. The Schmetterling Engine 2.0 reduces the CPU bottleneck through heavy parallelization of all the processes required to prepare a scene for raytracing. By sharing the operations between the CPU and the GPU, we have been able to find the computational power necessary to carry out all of these operations every frame. In our intense benchmarking scenario, which includes about six thousand creatures attacking the player’s base, parallelizing the CPU-dependent tasks reduced the rendering time from 60ms to 17ms (using an AMD Ryzen 9 3900X CPU and a pre-release version of the Radeon RX 6800 XT).
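Our engine uses its own job system for this, but the shape of the optimization can be expressed with a standard C++17 parallel algorithm: the per-mesh preparation work is independent, so it spreads naturally across all available cores. Everything below is a stand-in, not our engine's scheduler.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct DynamicMesh { /* vertex data, animation state, ... */ };

// Per-mesh CPU work: skinning, vegetation bending, vertex writes, etc.
void bakeMeshForRaytracing(DynamicMesh& mesh)
{
    /* placeholder for the actual baking work */
}

// Bake every dynamic mesh in parallel across all available cores.
void bakeAllMeshes(std::vector<DynamicMesh>& meshes)
{
    std::for_each(std::execution::par, meshes.begin(), meshes.end(),
                  [](DynamicMesh& mesh) { bakeMeshForRaytracing(mesh); });
}
```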


A snapshot of a single frame from The Riftbreaker. All the processes necessary for raytracing are marked with colored frames, with the legend in the bottom-right. Click the picture to view the full resolution version. The snapshot was taken using the Tracy Profiler. https://github.com/wolfpld/tracy#tracy-profiler

Alpha testing support for raytracing

Another unique challenge our engineers had to solve while implementing raytracing in The Riftbreaker was also connected to the vegetation system, albeit in a slightly different way. Textures used by foliage are mostly alpha-tested, and it is not uncommon for a texture in our game to have large areas that are fully transparent. That makes no difference to a ray, though. Once a ray we cast encounters a transparent texel, it still returns a regular ‘hit’. This does not necessarily mean that the pixel from which we are raycasting should be covered by shadow: if the texel we hit is transparent, the ray should continue tracing its path toward the light source. We make extensive use of such textures, so it was necessary for us to solve this by adding alpha-testing support to our AnyHit shader.


We make heavy use of the alpha channel to introduce jagged edges and complex shapes on our vegetation textures. Unfortunately, that introduced problems with raytracing hit recognition. 

Our approach was to essentially reintroduce the solution used in traditional rendering techniques. Upon hitting a surface, we retrieve the barycentric coordinates of the intersection point within the hit triangle. These coordinates alone are not sufficient to determine which texel was hit - only where the intersection lies within the triangle. However, at this point we can determine which vertices to take into consideration. Every vertex of a triangle has a set of UVW coordinates assigned to it by the artist during the texturing process. Knowing which triangle we hit, what part of the texture covers it, and where within that triangle the intersection lies, we can carry out an alpha test.


Polygons lying underneath the vegetation textures are not transparent to rays. We had to give our rays a method to check if they hit the spot on the texture that was actually opaque.

Before all of the above happens, however, we need to provide our raytracing pipeline with information about textures and their assignments. This is why we have prepared a shader table, which acts as a data bank for the GPU. It lists the textures assigned to scene instances, as well as those instances’ locations in the index and vertex buffers. Thanks to the shader table we can quickly retrieve all the model and texture data necessary to carry out the next steps of the shading operation. It’s quite an interesting process.
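One shader-table record per instance might look something like the struct below. The shader identifier size and record alignment are fixed by D3D12; the rest of the layout is entirely engine-specific, so the fields here are hypothetical examples of the kind of data involved, not our actual format.

```cpp
#include <cstdint>

// Hypothetical shader-table record. Alongside the shader identifier that
// DXR requires, each record stores the addresses and indices the hit
// shaders need to find the instance's geometry and texture data.
struct alignas(32) HitGroupRecord      // records must be 32-byte aligned
{
    uint8_t  shaderIdentifier[32];     // D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES
    uint64_t vertexBufferAddress;      // where this instance's vertices live
    uint64_t indexBufferAddress;       // index data for triangle lookup
    uint32_t albedoTextureIndex;       // texture used for the alpha test
    uint32_t materialFlags;            // e.g. whether the material is alpha-tested
};
```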


Since The Riftbreaker is set on an uninhabited, alien world, there is a lot of wildlife to be seen (and alpha-tested).

If the result of this operation is ‘opaque’, we apply the shading data and the process is over for this particular ray. In the case of non-opaque hits, we carry out an alpha test. As mentioned before, when the ray intersects a surface we get the barycentric coordinates of the hit point within the triangle. Next, the index buffer gives us the indices of that triangle’s three vertices. With those indices we read the vertices’ UVW coordinates from the vertex buffer and interpolate them at the hit point to find where it falls on the texture. Only after all these steps do we get a result - a hit with an opaque or a transparent surface. If the alpha value at the intersection point is lower than the transparency threshold, the ray continues traversing the scene.
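Put together, the lookup chain is short enough to sketch end to end. The real version runs inside an AnyHit shader and samples a GPU texture; this CPU-side illustration with a toy texture type only demonstrates the index-buffer, UV-interpolation, and alpha-threshold steps described above.

```cpp
#include <vector>
#include <cstdint>

struct Vec2 { float u, v; };

// Toy alpha texture with nearest-texel sampling, for illustration only.
struct Texture
{
    int width, height;
    std::vector<float> alpha;  // one alpha value per texel
    float sample(Vec2 uv) const
    {
        int x = static_cast<int>(uv.u * (width - 1));
        int y = static_cast<int>(uv.v * (height - 1));
        return alpha[y * width + x];
    }
};

// Given the hit triangle and the barycentric coordinates reported at the
// intersection: fetch the three vertex indices, read their UVs,
// interpolate, then compare the sampled alpha against the threshold.
bool isOpaqueHit(const std::vector<uint32_t>& indexBuffer,
                 const std::vector<Vec2>& vertexUVs,
                 uint32_t triangleIndex,
                 float baryU, float baryV,   // barycentrics at the hit point
                 const Texture& albedo,
                 float alphaThreshold = 0.5f)
{
    uint32_t i0 = indexBuffer[triangleIndex * 3 + 0];
    uint32_t i1 = indexBuffer[triangleIndex * 3 + 1];
    uint32_t i2 = indexBuffer[triangleIndex * 3 + 2];

    // Barycentric interpolation of the per-vertex UVs at the hit point.
    float w = 1.0f - baryU - baryV;
    Vec2 uv = { w * vertexUVs[i0].u + baryU * vertexUVs[i1].u + baryV * vertexUVs[i2].u,
                w * vertexUVs[i0].v + baryU * vertexUVs[i1].v + baryV * vertexUVs[i2].v };

    // Transparent texel: the ray should continue toward the light.
    return albedo.sample(uv) >= alphaThreshold;
}
```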

All the effort finally pays off when we get an accurate result every frame. Click the image for a full-res version.

What we learned in this process is that alpha-tested materials are much more computationally expensive for raytraced shadows than for traditional shadow mapping, and they are best avoided if possible. In our jungle scenario, the dispatch cost of a ray that hits an alpha-tested object is about 20% higher than that of a ray that hits an opaque object. We optimize our content by limiting the surface area of all transparent objects to reduce the number of ray hits on transparent surfaces. The Riftbreaker’s camera view is isometric, so the number of polygons visible at once is naturally limited and we can easily increase the polycount of most objects without affecting GPU performance. In some cases, it is actually more efficient for us to increase the polycount of an object if doing so lets us convert its material from alpha-tested to fully opaque.

Conclusion

We feel that The Riftbreaker improved a lot thanks to the implementation of the techniques mentioned above. The world presented in the game is much more believable now, increasing player immersion. The details we add through raytracing are not always big, but they definitely improve the experience. Seeing a comet streak across the sky, its shadow-casting light playing over the dynamic objects on the ground, is a great sight. On the other hand, cloudy days greet the player with soft shadows and a more toned-down color palette. We are confident that The Riftbreaker has earned its place among the productions intended for the next generation of gaming hardware.

The Riftbreaker, coming to PC and consoles in 2021.

In hindsight, implementing raytracing techniques into The Schmetterling Engine 2.0 was an incredibly valuable lesson for us. Working alongside talented engineers from AMD allowed us to share, compare, and verify our findings, as well as learn a great deal about cutting-edge rendering techniques and solutions. We believe that The Riftbreaker will greatly benefit from the addition of raytraced shadows, as well as other DirectX 12 techniques. We encourage you to come back in the coming weeks, as we are going to describe the process of implementing Raytraced Ambient Occlusion in The Riftbreaker.
