In this series I would like to share with you the impressions I had while porting our Coherent UI demo graphics framework and demo interfaces to the Oculus Rift dev kit.
In the first part I'll cover the rendering integration of the Rift while in the following posts I'll talk about the strictly UI-related issues in virtual reality.
Many of the problems I'll cover have been tackled by Valve while porting Team Fortress 2 to the Rift, as well as by their R&D team. Some extremely valuable resources on the current state of VR that helped me a lot while doing the port are listed in the references section below.
The Oculus Rift is a device that encompasses a head-mounted display and an array of sensors that track the orientation of the head. A good explanation of how to perform the integration, and of the details of the device, can be found in the Rift SDK. It can be freely downloaded after a registration and I encourage anybody who is interested in VR to take a look at it even if you don't have a Rift device yet. The dev kit is still a 'beta' version of the final product. The major issues I currently see are the somewhat low resolution of the display and a yaw drift that really makes the developer's life tough. Oculus is working on these issues and I'm positive that the final consumer product will be amazing. Other than that, the experience of playing and developing with the Rift is a great one and I'd encourage anyone who hasn't ordered the kit yet to hurry up and do it.
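To give an idea of what the basic integration looks like, here is a minimal sketch of reading the head orientation with the dev kit SDK (LibOVR ~0.2). The exact class names and setup may differ between SDK versions, and error handling and shutdown are omitted for brevity:

```cpp
#include "OVR.h"

void ReadRiftOrientation()
{
    OVR::System::Init();

    // Enumerate the devices and grab the HMD and its sensor
    // (NULL checks omitted - the calls fail if no Rift is attached).
    OVR::Ptr<OVR::DeviceManager> manager = *OVR::DeviceManager::Create();
    OVR::Ptr<OVR::HMDDevice> hmd =
        *manager->EnumerateDevices<OVR::HMDDevice>().CreateDevice();
    OVR::Ptr<OVR::SensorDevice> sensor = *hmd->GetSensor();

    // SensorFusion combines the raw sensor data into a head orientation.
    OVR::SensorFusion fusion;
    fusion.AttachToSensor(sensor);

    // The orientation is a quaternion in the SDK's right-handed coordinate system.
    OVR::Quatf orientation = fusion.GetOrientation();

    float yaw, pitch, roll;
    orientation.GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z>(&yaw, &pitch, &roll);

    // ... convert to the engine's handedness and feed into the camera every frame ...
}
```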
Here at Coherent Labs we have a small 3D application we use for some of our demos. It is based on DirectX 11 and an in-house framework designed for rapid prototyping of rendering tasks. It is not a complete engine, but it has a fairly good rendering backbone and a complete integration with Coherent UI - a lot of the functionality is exposed to JavaScript, and you can create Coherent UI views, as well as animate and record the camera movement for demos, through the script in the views themselves.
The task at hand was to add support for VR, implemented via the Oculus Rift.
I'll give a brief summary of what the process looked like for our framework. The Oculus SDK is very good at pointing out what should be done and the process is not very complicated either. If the graphics pipeline of the engine is written with VR in mind it is actually trivial. Ours was not, so modifications were necessary.
From this...
...to this
The pipeline of the framework we use is based on a list of render phases that get executed every frame in order. We use the light pre-pass (LPP) technique and have several post-processing effects.
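Just to make the structure concrete, a phase list of this kind boils down to something like the following - a purely illustrative sketch, with class and method names made up for the example rather than taken from our actual framework:

```cpp
#include <memory>
#include <vector>

struct RenderContext; // camera matrices, render targets, device context, ...

// One step of the frame: shadow maps, G-buffer fill, lighting, a post-effect, etc.
class RenderPhase
{
public:
    virtual ~RenderPhase() {}
    virtual void Execute(RenderContext& context) = 0;
};

class Renderer
{
public:
    void AddPhase(std::unique_ptr<RenderPhase> phase)
    {
        m_Phases.push_back(std::move(phase));
    }

    void RenderFrame(RenderContext& context)
    {
        // The phases run every frame in the order they were registered.
        for (auto& phase : m_Phases)
            phase->Execute(context);
    }

private:
    std::vector<std::unique_ptr<RenderPhase>> m_Phases;
};
```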
In order to support stereo rendering some phases must be done twice - once for the left eye and once for the right. When drawing for the eyes we simply render into the left and right halves of the RT, one half per eye, with different view and projection matrices.
The non-VR sequence of events looks like this:
1) Set View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Fill GBuffer
5) Fill lights buffer (draw lights)
6) Resolve lighting (re-draw geometry)
7) Draw UI Views in the world
8) Motion blur
9) HDR to LDR (just some glow)
10) FXAA
11) Draw HUD
12) Present
Of those, steps 4-7 must be done for each eye. LPP can be quite costly in terms of draw calls and vertex processing, even more so in the VR case. Our scenes are simple and we didn't have any problems, but that's something to be aware of.
I removed the motion blur outright because it really makes me sick in VR, and the Oculus documentation also points out that motion blur should be avoided. I also removed the HUD drawing step, as in VR the HUD is handled differently than with a full-screen quad, as I'll explain in the next posts.
The VR pipeline looks like this:
1) Set central View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Set left eye View & Projection
4.1) Fill GBuffer
4.2) Fill lights buffer (draw lights)
4.3) Resolve lighting (re-draw geometry)
4.4) Draw UI Views in the world
4.5) Draw HUD
5) Set right eye View & Projection
5.1) Fill GBuffer
5.2) Fill lights buffer (draw lights)
5.3) Resolve lighting (re-draw geometry)
5.4) Draw UI Views in the world
5.5) Draw HUD
6) HDR to LDR
7) FXAA
8) VR Distortion
9) Present
Conceptually it is not that much different or more complicated, but the post-effects in particular have to be modified to work correctly.
As I said, I draw the left and right eyes into the same RT.
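Here is a rough sketch (in D3D11 terms) of how the two per-eye passes can be driven. The EyeParams struct and the phase functions are placeholders standing in for our framework's actual code:

```cpp
#include <d3d11.h>
#include <DirectXMath.h>

// Placeholder for the per-eye camera data (see the view/projection discussion below).
struct EyeParams
{
    DirectX::XMMATRIX View;
    DirectX::XMMATRIX Projection;
};

// Hypothetical hooks into the framework's existing render phases.
void SetViewProjection(const DirectX::XMMATRIX& view, const DirectX::XMMATRIX& projection);
void FillGBuffer();
void FillLightsBuffer();
void ResolveLighting();
void DrawUIViews();
void DrawHUD();

void RenderEyes(ID3D11DeviceContext* context,
                float rtWidth, float rtHeight,
                const EyeParams& leftEye, const EyeParams& rightEye)
{
    const EyeParams* eyes[2] = { &leftEye, &rightEye };

    for (int i = 0; i < 2; ++i)
    {
        // Left eye renders into the left half of the RT, right eye into the right half.
        D3D11_VIEWPORT viewport = {};
        viewport.TopLeftX = i * rtWidth * 0.5f;
        viewport.TopLeftY = 0.0f;
        viewport.Width    = rtWidth * 0.5f;
        viewport.Height   = rtHeight;
        viewport.MinDepth = 0.0f;
        viewport.MaxDepth = 1.0f;
        context->RSSetViewports(1, &viewport);

        SetViewProjection(eyes[i]->View, eyes[i]->Projection);

        // Steps 4.1-4.5 / 5.1-5.5 from the list above.
        FillGBuffer();
        FillLightsBuffer();
        ResolveLighting();
        DrawUIViews();
        DrawHUD();
    }
}
```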
The render target before the distortion
The shared render target has several implications for any post-processing routines. The HDR-to-LDR routine in our application does a glow effect by smearing bright spots of the frame in a low-res texture that gets re-composed onto the main render target. This means that some smearing might cross the edge between the eyes and 'bleed' onto the other one. Imagine a bright light on the right side of the left eye (near the middle of the image) - if no precautions are taken, the halo of the light will cross into the right eye and appear on its left side. This is noticeable and unpleasant, looking like some kind of dust on the eye.
Post-process anti-aliasing algorithms might also suffer, as they usually look for edges as discontinuities in the image and will identify one where the left and right images meet. That edge is perfectly vertical, however, so no changes should be needed.
The VR Distortion routine is the one that creates the interesting 'goggle' effect seen in screenshots and videos for the Rift. The lenses of the HMD introduce a large pincushion distortion that has to be compensated for in software with a barrel distortion. The shader performing this is provided in the Oculus SDK and can be used verbatim. It also modifies the colors of the image slightly, because when viewing images through lenses the colors get distorted by a phenomenon called "chromatic aberration", and the shader compensates for that too.
An important point mentioned in the Oculus documentation is that you should use a bigger render target to draw the image and have the shader distort it down to the final size of the back-buffer (1280x800 on the current model of the Rift). If you use the same size, the image is correct but the FOV is limited. This is extremely important - at least for me, having the image synthesized from a same-size texture was very sickness-inducing, as I was seeing the 'end' of the image. The coefficient to scale the render target is provided by the StereoConfig::GetDistortionScale() method in the Rift library. In my implementation steps 4-8 are actually performed on the bigger RT.
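In code this amounts to something like the following sketch, assuming the StereoConfig helper from the dev kit SDK (the exact setup calls may differ between SDK versions):

```cpp
#include "OVR.h"
#include <cmath>

// Computes the size of the off-screen render target on which steps 4-8 are performed.
// hmdInfo is assumed to be filled from HMDDevice::GetDeviceInfo().
void ComputeStereoRenderTargetSize(const OVR::HMDInfo& hmdInfo, int& rtWidth, int& rtHeight)
{
    OVR::Util::Render::StereoConfig stereoConfig;
    stereoConfig.SetFullViewport(OVR::Util::Render::Viewport(0, 0, 1280, 800));
    stereoConfig.SetStereoMode(OVR::Util::Render::Stereo_LeftRight_Multipass);
    stereoConfig.SetHMDInfo(hmdInfo);

    // The scale is > 1.0, i.e. the RT is larger than the back-buffer; the distortion
    // pass then shrinks it back down to 1280x800, preserving the full FOV.
    const float distortionScale = stereoConfig.GetDistortionScale();

    rtWidth  = static_cast<int>(ceilf(1280.0f * distortionScale));
    rtHeight = static_cast<int>(ceilf(800.0f  * distortionScale));
}
```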
The StereoConfig helper class provided in the SDK is very convenient. The SDK works with a right-handed coordinate system while we use a left-handed one - this requires attention when taking the orientation of the sensor (the head) from the device, and also if you directly use the projection and view adjustment matrices provided by the helper classes. I decided to just calculate them myself from the provided parameters - the required projection matrix is documented in the SDK, and the view adjustment is trivial because it only involves moving each eye half the distance between the eyes to the left or right.
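For the view adjustment, the calculation looks roughly like this - a sketch with DirectXMath in a left-handed, row-major convention; the signs depend on your own conventions, and the IPD value is assumed to come from OVR::HMDInfo::InterpupillaryDistance:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Derives the per-eye view matrices from the central (head) view matrix.
void ComputeEyeViews(const XMMATRIX& centerView, float interpupillaryDistance,
                     XMMATRIX& leftView, XMMATRIX& rightView)
{
    const float halfIPD = 0.5f * interpupillaryDistance; // roughly 0.064m / 2 for an average person

    // Moving the eye half the IPD to the left is equivalent to shifting the
    // already-transformed world half the IPD to the right (and vice versa),
    // hence the translation is applied after the central view transform.
    leftView  = XMMatrixMultiply(centerView, XMMatrixTranslation(+halfIPD, 0.0f, 0.0f));
    rightView = XMMatrixMultiply(centerView, XMMatrixTranslation(-halfIPD, 0.0f, 0.0f));
}
```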
One small detail that kept me wondering for an hour is that if you plug the distortion parameters directly into the shader for both eyes (as given by StereoConfig::GetDistortionConfig()), the image will not be symmetric - the outline of the right eye will look like the left one. For the right eye you have to negate DistortionConfig::XCenterOffset. This is done in the Oculus demo, but not very prominently, and while there usually are parameter getters for both eyes, there is just one for the DistortionConfig, which leads you to think it might be the same for both eyes. If you carefully analyze the code in the shader you'll notice the discrepancy, but the API successfully puzzled me for some time.
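In other words - a small sketch, again assuming the SDK's StereoConfig/DistortionConfig helpers:

```cpp
#include "OVR.h"

// The SDK hands out a single DistortionConfig; only XCenterOffset differs per eye.
void GetPerEyeDistortionCenters(OVR::Util::Render::StereoConfig& stereoConfig,
                                float& xCenterOffsetLeft, float& xCenterOffsetRight)
{
    const OVR::Util::Render::DistortionConfig& distortion = stereoConfig.GetDistortionConfig();

    xCenterOffsetLeft  = distortion.XCenterOffset;
    xCenterOffsetRight = -distortion.XCenterOffset; // must be mirrored for the right eye

    // The rest of the parameters (Scale, the K[] coefficients, ...) are shared by both eyes.
}
```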
In the next posts I'll specifically talk about UI in the Rift.
References:
Michael Abrash's blog
Lessons learned porting Team Fortress 2 to Virtual Reality
John Carmack - Latency Mitigation Strategies
Oculus Developer Center