From Black River Studios, Finding Monsters VR was released alongside the consumer GearVR launch. This article covers the optimization process and tips used in the game.
Finding Monsters Adventure VR was released in the Oculus Store the week of the consumer GearVR launch. The game features Jake and Tess, two siblings who are sucked through a portal into a magical world full of cute monsters. The player needs to figure out how each monster and scenario behaves in order to take pictures of them and complete challenges. We have a VR and a smartphone version of the game, each with its own unique gameplay. Beto, the project's lead designer, discussed their similarities and differences in another post, and Gabi Thobias wrote an article about the user experience of the game.
Gameplay Screenshot
In order to give Finding Monsters Adventure VR a smooth gameplay experience, we had two engineers dedicated to optimization for a month and a half. We definitely learned a lot from it, and we'll share a few of these optimization tips and our process, but first, let's look at a breakdown of the game stats:
The game has a total of 13 levels + an Interactive Start Room. We reach at most 130 drawcalls and 180k visible triangles. We have anti-aliasing on (MSAA 2x) and up to 4 monsters can appear at the same time on screen. Monsters have up to 69 bones and we use up to 4 bone influences per vertex. All shaders are custom made in order to achieve the visuals defined by our Art Director.
All necessary runtime libraries are downloaded and installed on your phone as soon as you put the GearVR on for the first time. GearVR Service is one of them and comes with a developer mode option that makes profiling and testing games straightforward. When developer mode is enabled, the game will run in VR mode without the need to have the GearVR attached.
The option is hidden at first; to enable it, go to Application Manager -> All -> GearVR Service -> Manage Storage and tap VR Service Version 7 times. After that, Developer Mode will be available as a toggle option.
Gear VR Developer Mode
From our tests, GearVR Developer Mode didn’t add any noticeable overhead, which is great. The only drawback is that the gyroscope won’t work by default to simulate head tracking. However, this issue is easily overcome by creating a script to simulate it. In our case we used a simple swipe control to rotate the camera.
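A minimal sketch of such a script, attached to the camera rig, might look like the following (the class and field names here are ours, not the exact code from the game):

```csharp
using UnityEngine;

// Hypothetical sketch: rotates the camera with touch swipes so head
// tracking can be simulated while testing in GearVR Developer Mode.
public class SwipeLookSimulator : MonoBehaviour
{
    public float sensitivity = 0.2f; // degrees of rotation per pixel of swipe

    private Vector2 lastTouchPosition;

    void Update()
    {
        if (Input.touchCount == 0)
            return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
        {
            lastTouchPosition = touch.position;
        }
        else if (touch.phase == TouchPhase.Moved)
        {
            Vector2 delta = touch.position - lastTouchPosition;
            lastTouchPosition = touch.position;

            // Horizontal swipe -> yaw around world up; vertical swipe -> pitch.
            transform.Rotate(Vector3.up, delta.x * sensitivity, Space.World);
            transform.Rotate(Vector3.right, -delta.y * sensitivity, Space.Self);
        }
    }
}
```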
If you've worked on optimization for a while, you probably already know that you shouldn't assume anything. For instance, with the advent of high resolution displays, fillrate has become a major issue for mobile games. You might assume the same holds for VR; however, VR usually renders to 1024x1024 render textures, and driver overhead has become a more frequent issue. Paying attention to the game's drawcall count is a golden tip for VR. Oculus suggests keeping it below 100.
Our optimization process basically follows three steps: Profiling, Optimizing, Testing.
In the Profiling step we use a profiler to identify the game's bottleneck, i.e., the hot zones of our game. If you're not optimizing for the game's bottleneck, then you're wasting time. Once we identify the bottleneck, we measure the time we are spending to complete our frame. This will be used later for comparison against our optimized solution.
We then start the Optimization step which is basically reworking that particular part of the game to achieve the same result with less computational resources. Later in this article we'll show a couple of examples of what we did in Finding Monsters VR.
Finally, we need to Test our results. We need to know how much frame time we saved, if any. One extremely important idea to keep in mind is that we must reproduce the test in a scenario identical, or extremely similar, to the one we profiled before. Failing to do so may lead to incorrect results. Upon finishing testing, we go back and repeat these three steps until we reach the desired target framerate.
If you’re not optimizing for the game’s bottleneck then you’re wasting time.
In order to achieve a consistent 60 frames per second in VR you must be able to finish your frame in about 14ms. That’s because you should leave enough room for distortion and Timewarp to take place before VSync. Timewarp is a key feature for VR and it works really well for GearVR. It reduces a considerable amount of latency and can compensate for skipped frames smoothly.
Unity comes with a built-in profiler that will assist you in most of what you need. It can break down your frame and print timestamps of key parts. One can also timestamp specific scripting regions by using Profiler.BeginSample / Profiler.EndSample. I won't cover the details of it here since there are already many good posts about it available.
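Wrapping a scripted region looks like this (MonsterAI is a hypothetical component; in Unity 5.x the Profiler class lives directly in UnityEngine, while later versions moved it to UnityEngine.Profiling):

```csharp
using UnityEngine;

// Hypothetical example: the sample shows up by name in the Unity Profiler's
// CPU timeline, nested under the Update call that produced it.
public class MonsterAI : MonoBehaviour
{
    void Update()
    {
        Profiler.BeginSample("MonsterAI.Update");
        // ... AI logic being measured ...
        Profiler.EndSample();
    }
}
```

Samples only cost anything in development builds, so they can be left in place while iterating.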
Unity Profiler
Optimizing for Drawcalls
In some scenes of Finding Monsters Adventure VR we were getting about 7-8ms of driver overhead. Our drawcalls were high despite the fact that we were using static and dynamic batching with proper texture atlasing.
Fortunately, inspecting drawcalls in Unity 5 has become really easy. It comes with a built-in frame debugger in which you can step through and individually see each drawcall in the order it was issued. To open it, select Window -> Frame Debugger.
We soon realized that some objects that were supposed to be batched together weren't. This was due to the lightmap. We had a lightmap atlas size of 1024x1024, and from 4 to 7 lightmap textures were being used in our scenes. Since lightmap textures are not exposed in the material inspector, I believe this goes unnoticed by a lot of developers. In our case, we increased the atlas size to 2048x2048 and tweaked the lightmap resolutions so they would all fit in one atlas.
Lightmap Breaking Batch
In order to change the max atlas size select Window -> Lighting and under General GI click on the atlas size drop down menu. If you’re in Unity 4.x and you want to increase above 1024 you need to do it via script by setting MaxWidth and MaxHeight in LightmapEditorSettings.
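A rough sketch of the scripted route for Unity 4.x is below (an editor script; verify the exact property names and casing against your version's LightmapEditorSettings API):

```csharp
using UnityEditor;

// Hypothetical editor script: raises the lightmap atlas limit above the
// 1024 ceiling exposed in the Unity 4.x Lighting window UI.
public static class LightmapAtlasSize
{
    [MenuItem("Tools/Set Lightmap Atlas To 2048")]
    static void SetAtlasSize()
    {
        LightmapEditorSettings.maxAtlasWidth = 2048;
        LightmapEditorSettings.maxAtlasHeight = 2048;
    }
}
```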
Lightmap Atlas Size
Another place where we identified room for improvement in terms of drawcalls was the UI. At first we used many layers for the UI, which Unity renders as separate drawcalls.
Finding Monsters VR UI
We baked as many of those layers as we could and re-checked all UI elements to see that they were using the minimum number of layers possible.
Baked UI
Optimizing for Loading Time
If you’re getting high loading times, the first thing you should do is check your texture compression settings. Although all GearVR devices support ASTC, Unity will only support it with OpenGL 3 profiles. If you’re building to your device with an unsupported texture compression, Unity will decompress all textures prior to showing the splash screen causing huge loading times.
Another tip is to disable the Autoconnect Profiler option when you’re profiling the loading time. Autoconnect will spend a few seconds trying to connect to the Unity Editor Profiler. In our case this was causing a delay of 10s in loading time.
If you’re getting high loading times, the first thing you should do is check your texture compression settings.
If you've done this and are still getting long load times, one way to find the problem is to use ADB to timestamp all Debug.Log calls. You can put a Debug.Log in the Awake method of a script in your first scene. After that, issue adb logcat -v time -s Unity > startupTime.txt in the console. This will write to startupTime.txt a log of each Unity message along with the time it happened.
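The script side can be as small as this (a hypothetical component; its Awake timestamp in logcat marks when your first scene starts running, which you can compare against the engine startup messages that precede it):

```csharp
using UnityEngine;

// Hypothetical marker component: attach to any object in the first scene.
// The timestamp of this log line in `adb logcat -v time -s Unity` shows
// how long engine startup and asset loading took before the scene ran.
public class StartupTimestamp : MonoBehaviour
{
    void Awake()
    {
        Debug.Log("First scene Awake reached");
    }
}
```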
In our case, we were able to identify some delay due to the fact that a GameObject in the first scene was referencing way more sound assets than needed.
Optimizing for Overdraw
Overdraw occurs when a pixel is written to the render target more than once. Here are two views of the bedroom scene: one rendered in textured view and the other with an overdraw debug shader.
Optimizing for Overdraw
Overdraw View
In the overdraw view, the whiter the pixels, the more overdraw has occurred. The order in which the objects in the scene are rendered matters for overdraw, because the GPU can do an early depth test to check whether the fragment will be occluded. If it is occluded, it gets discarded and no write happens.
Major improvements in terms of overdraw can be done by sorting opaque objects front-to-back. Unity does this sorting based on the distance from an object’s pivot to the camera’s position. However, in some cases, this won’t work well.
For instance, in the particular example above we can see all objects are being rendered before the wall, despite the fact that the wall is further away from the camera. This happens because the wall is a single mesh and its pivot point is centered close to the camera as you can see in the image below.
Optimizing for Overdraw
The way we can improve this is to assign the wall to a different sorting layer. In Finding Monsters VR we created a custom material inspector that exposes Material.renderQueue to the Editor.
Custom Material
This allows us to configure layers just like one can do with Sprites. It is much more controllable and scalable than doing it in the shader.
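A sketch of what such an inspector might look like (ours, not the exact code from the game; a shader opts into it by declaring a CustomEditor directive naming this class):

```csharp
using UnityEngine;
using UnityEditor;

// Hypothetical custom material inspector: draws the default material GUI,
// then appends an editable field bound to Material.renderQueue so artists
// can reorder opaque draws without touching the shader.
public class RenderQueueMaterialEditor : MaterialEditor
{
    public override void OnInspectorGUI()
    {
        base.OnInspectorGUI();

        Material material = (Material)target;
        int queue = EditorGUILayout.IntField("Render Queue", material.renderQueue);
        if (queue != material.renderQueue)
        {
            Undo.RecordObject(material, "Change Render Queue");
            material.renderQueue = queue;
        }
    }
}
```

With this, pushing the wall a few queue values later than the rest of the opaque geometry makes it render last, letting the early depth test reject its occluded fragments.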
Now you must be thinking: it's great to know how I can reduce overdraw, but how do I know which parts of my game are most overdraw intense? Tools like Mali Graphics Debugger and Adreno Profiler can capture frames and provide overdraw images. Although this is useful, capturing one frame at a time is not practical. We wanted to be able to check it in realtime.
It’s great to know how I can reduce overdraw, but how do I know which parts of my game are most overdraw intense?
Our solution was to develop a custom shader to render the overdraw view. The shader is quite simple: it renders additively and outputs fixed4(0.1, 0.1, 0.1, 1.0) as color. We then use a camera shader replacement to apply it to all objects. This allows us to switch in realtime between textured view and overdraw view and immediately spot overdraw-intense scenes.
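A replacement shader along these lines might look like the following (a sketch; UnityObjectToClipPos exists from Unity 5.4 on, while older versions would use mul(UNITY_MATRIX_MVP, v) instead). ZTest Always ensures even occluded fragments accumulate, so every write shows up in the visualization:

```shaderlab
Shader "Debug/Overdraw"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        ZWrite Off
        ZTest Always
        Blend One One   // additive: each overlapping draw adds 0.1 brightness

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                return fixed4(0.1, 0.1, 0.1, 1.0);
            }
            ENDCG
        }
    }
}
```

Toggling the view at runtime is then a matter of calling Camera.SetReplacementShader with this shader and Camera.ResetReplacementShader to return to the textured view.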
Dynamic CPU and GPU Throttling
The Oculus Mobile SDK allows one to underclock the CPU and GPU in order to save power and thus reduce overheating. This is called throttling. By tuning your game to reasonable throttling levels you will increase play session times. This can also be done dynamically depending on your scene's needs.
Throughout the game we use a conservative CPU/GPU level. However, there are levels that can have 4 monsters appearing at the same time.
With the throttling settings we had, we were spending 5-6ms just updating those monsters (physics, AI, skinning), and this was the main reason we were going over our frame time budget. Removing monsters was not an option either, so we used a higher CPU throttling setting for those levels to account for the extra monster overhead.
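In Unity, the Oculus Utilities expose the Mobile SDK clock levels as static properties, so per-level tuning can be as simple as the sketch below (the component and its defaults are ours; verify OVRManager.cpuLevel / OVRManager.gpuLevel and their valid range, typically 0-3, against the SDK version you ship with):

```csharp
using UnityEngine;

// Hypothetical sketch: one of these per scene lets monster-heavy levels
// request a higher CPU clock level while the rest of the game stays at a
// conservative, battery-friendly setting.
public class ThrottlingController : MonoBehaviour
{
    public int cpuLevel = 1; // conservative default; raise for heavy scenes
    public int gpuLevel = 1;

    void Start()
    {
        OVRManager.cpuLevel = cpuLevel;
        OVRManager.gpuLevel = gpuLevel;
    }
}
```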
Dynamic CPU and GPU Throttling
Conclusion
Optimization needs to be considered from day one. Performance oriented teams evaluate and agree on a performance budget prior to game development. This greatly reduces the risk of reaching the project's end stuck with performance issues that would require a huge amount of rework on level design and art assets, which often leads to last-minute cuts in game quality.
The engine choice also plays a major role in VR development. Unity has made great progress towards VR development and optimization, and there are still many improvements to come. Our experience with it was great and we certainly recommend it for developers aiming at VR.