
Maximizing Youtubeability: Camera Jitter in Counterpart

Video encoding affects which aesthetics we can choose for YouTubeable games. We can optimize for it by controlling frame-to-frame detail and where that detail is located on screen.

Chris Johnson, Blogger

February 5, 2016


We all want to make games played by YouTubers.

That’s only reasonable, given the role that YouTube has in game publicity in 2016. But what technical and aesthetic choices can we make to get the most out of their play sessions?

Turns out there are many, and their effect is profound. It ranges from one extreme, the grayscale silhouette-based platformer Limbo, to something like Fallout 4's exterior scenes at the other: foliage turns to compressed-out fuzz because the detail density is so high that it steals bandwidth the encoder can't afford. Realism elements like foliage moving in the wind become a horrible blocky mess. By contrast, in Limbo an astonishing amount of subtlety and detail comes through, because large areas don't change frame to frame, and the stuff that does change is grayscale. Black-and-white footage compresses very efficiently: when the image is the same gray throughout the entire video, you can throw away all the chroma information…

Here’s my personal hell for Youtubeability: Counterpart. The aesthetic is a cyberspace wireframe world.

Raw Incompressible Wireframe (click for hi-res)

What are the problems with this? There's one huge problem that overwhelms all others. The game's playing field is drawn using the following hack, attached to a dedicated terrain camera:
// On the dedicated terrain camera: switch GL wireframe rendering
// on just before this camera draws, and off again right after.
public bool wireframe = true;

void OnPreRender ()
{
     if (wireframe) {
          GL.wireframe = true;
     }
}

void OnPostRender ()
{
     if (wireframe) {
          GL.wireframe = false;
     }
}

That means the terrain will be drawn in the color of the underlying polygons, edges only, with simple non-antialiased lines. It's potentially quite fast (pushing this aesthetic, I chose to have everything drawn as internally lit, with my brightly colored 'bots' in pure, high-intensity luminous color), but the contrast between undrawn areas and lines is extreme. There's an underlying skybox (actually three, two of which counter-rotate and sit as smaller boxes just outside the playing field's reachable area), but it's extremely low in value and just brings a dark purplish wash to things.

A side note on the skyboxes and optimizing for video compression: though I couldn't use it, one thing I tried was attaching the other two skyboxes to the camera instead of the world. The idea was to produce a slowly moving, colorful effect that tended to stay the same frame to frame, making it easier for YouTube to compress. It worked, but even set behind all the gameplay elements it still looked like stuff smeared on the screen, and I scrapped the concept.

That said, Counterpart's design features a semitransparent darker bar up top with some indicators displayed on it, and a matching lower bar that doesn't necessarily do anything. These overlays, especially the bottom one, help Youtubeability by killing contrast on fast-moving detail: you can still see some motion, but it forces the video compression to be less interested in that area. Making them entirely opaque would be even more effective. Remember that your HUD can also serve to make screen areas invariant and easy to video-compress. The fewer areas of intense detail moving in screen space, the better, and they should be attention focal points, not peripheral vision.
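As a rough illustration of such an overlay (the class and values here are my own, not Counterpart's code), a contrast-damping bar can be a plain screen-space UI Image parented under a Canvas:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: a semitransparent dark bar across the bottom of the screen
// to damp contrast in that region. Names and values are illustrative.
public class CompressionBar : MonoBehaviour
{
    [Range(0f, 1f)] public float opacity = 0.6f; // 1.0 = fully opaque, compresses best

    void Start ()
    {
        var image = gameObject.AddComponent<Image>();
        image.color = new Color(0f, 0f, 0f, opacity);

        // Anchor the bar to the bottom edge: full width, fixed height.
        var rt = image.rectTransform;
        rt.anchorMin = new Vector2(0f, 0f);
        rt.anchorMax = new Vector2(1f, 0f);
        rt.pivot = new Vector2(0.5f, 0f);
        rt.anchoredPosition = Vector2.zero;
        rt.sizeDelta = new Vector2(0f, 64f); // bar height in pixels
    }
}
```

The closer the overlay gets to opaque, the more of the screen becomes frame-to-frame invariant from the encoder's point of view.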

Back to my ridiculous cyberspace aesthetic. If you've tried this OpenGL trick, you know several things: the lines 'crawl' because they're not antialiased, and they're by definition made of one-pixel details at whatever resolution you use. Assuming we want this fineness of line but don't want to clobber YouTube with incompressible video, what can we do to improve this?

Motion Blurred Angle Jitter (click for hi-res)

This is angle jitter. You're seeing two images overlaid: the viewpoint, and the same viewpoint rotated very slightly, so distant scenery is slightly offset. We could position the focal point anywhere we like by also jittering the position of the camera, but what you're seeing here is angle jitter alone.

There are two ways to implement this. First, you could render two passes per frame and average them together (using the shader blend mode Blend SrcAlpha OneMinusSrcAlpha). I was interested in getting a bloom effect early on, so I balanced the brightnesses of the lines using the mode Blend One One instead. Alternatively, you could render both of the required angles into a single frame and composite them in a shader.
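A minimal sketch of that two-pass idea (the structure and names here are mine, not Counterpart's code): render the wireframe camera once normally, re-render it with a nudged rotation into a temporary target, and composite. The blend mode lives in the shader behind the assumed `blendMaterial`, either Blend One One (additive) or Blend SrcAlpha OneMinusSrcAlpha (averaging), as described above.

```csharp
using UnityEngine;

// Sketch: attach to a compositing camera; 'wireframeCamera' is a separate,
// manually rendered camera. All names here are illustrative assumptions.
public class TwoPassJitter : MonoBehaviour
{
    public Camera wireframeCamera;  // disabled in the scene; rendered manually below
    public Material blendMaterial;  // hypothetical composite material (Blend One One, etc.)
    public float jitter = 0.0005f;  // quaternion-component offset, tuned by eye

    void OnRenderImage (RenderTexture src, RenderTexture dst)
    {
        var pass = RenderTexture.GetTemporary(src.width, src.height);
        var baseRotation = wireframeCamera.transform.localRotation;

        // Second pass: nudge the camera slightly and re-render.
        var q = baseRotation;
        q.x += jitter;
        q.y += jitter;
        wireframeCamera.transform.localRotation = q;
        wireframeCamera.targetTexture = pass;
        wireframeCamera.Render();
        wireframeCamera.targetTexture = null;
        wireframeCamera.transform.localRotation = baseRotation;

        // First pass straight through, second pass blended on top.
        Graphics.Blit(src, dst);
        Graphics.Blit(pass, dst, blendMaterial);
        RenderTexture.ReleaseTemporary(pass);
    }
}
```

Rendering the scene twice per frame costs fill rate, which is part of why the one-frame motion-blur variant described next is attractive for a game built around high frame rates.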

Part of the gameplay of Counterpart involves zipping around fast enough to smash other bots, bouncing into the air where you can't steer, and racing to reach the bot you match before time runs out. The entire thing is designed to run at extremely high frame rates and be as fluid as possible. So another option presented itself: why not apply the angle jitter as a motion blur? That way, at high speeds the jitter quickly gets lost in the larger motion. Each frame displays itself, plus the ghost of the previous frame. It produces double lines, which serve as a kind of 'blur/speed effect'; they're about half the intensity of the overlaid lines, which contributes to the impression of speed: a kind of vignetting.

And so that’s how it worked out: a one-frame motion blur effect (in other circumstances you can build a simple bloom effect into this, using the Blend One One mode in the shader and brightening areas that exceed a certain range) and the angle jitter. Since it’s a one-frame motion blur, the jitter isn’t ‘dither’ as there’s no randomness. It’s a simple toggle like this:
if (SystemInfo.supportsRenderTextures) {
     // Alternate between the two jitter states every frame.
     blurHack += 1;
     if (blurHack > 1) blurHack = 0;

     // Writing quaternion components directly is a hack, but for values
     // this tiny it behaves like a small rotation offset.
     blurHackQuaternion = wireframeCamera.transform.localRotation;
     if (blurHack == 0) {
          blurHackQuaternion.y = jitter;
          blurHackQuaternion.x = jitter;
          wireframeCamera.transform.localPosition = positionOffset * -jitter;
     }
     if (blurHack == 1) {
          blurHackQuaternion.y = -jitter;
          blurHackQuaternion.x = -jitter;
          wireframeCamera.transform.localPosition = positionOffset * jitter;
     }
     wireframeCamera.transform.localRotation = blurHackQuaternion;
}

You can see that x and y are jittering back and forth every frame, and the position is reversed each time (positionOffset is Vector3(40, -40, 0), at least for now). How do you work out which axes to jitter? If you only jitter y, vertical elements get smoothed but any horizontal detail remains sharp. That turned out not to be appropriate for Counterpart, as dense areas of distant wireframe 'sparkled' when there was no jitter on x. But the real way I worked it out was much simpler…

Exaggerated Angle Jitter (click for hi-res)

All you have to do is set the jitter level far too high, and then you can see exactly what's being moved where. The idea is to arrive at an algorithm that blurs the right areas in the image while leaving other elements alone. You might want to combine the jitter with complementary motion of the camera's localPosition so that middle-field objects are sharply in focus: leaving the camera position fixed guarantees that everything in sight gets roughly the same amount of jitter. If you're doing this, scale the camera offset by the same jitter variable, as I'm doing, so the two remain consistent. In this exaggerated case you can see the double positions of objects, but with the correct amount of jitter (shown in the second picture) lines are softened and to some degree anti-aliased, and there's a slight depth-of-field effect, because nearby objects move less when the camera is turned.

If the OpenGL lines were 3D objects, this effect would be fully in place: since wireframe lines have no thickness, the nature of GL.wireframe resists this technique, yet you can still force depth of field out of it simply by counter-jittering the camera position. Any point in space where the angle and position offsets cancel out becomes perfectly in focus, no matter how exaggerated the jitter is. Then you dial it back until relevant areas are faintly blurred and don't show sparkling or line-crawling.
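That cancellation point can even be estimated with small-angle geometry (the naming below is mine, not the game's code): a rotation of θ radians shifts everything on screen by roughly θ in angular terms regardless of depth, while a camera translation of magnitude |t| shifts a point at distance d by about |t|/d radians, so the two cancel where d ≈ |t|/θ.

```csharp
using UnityEngine;

// Sketch: estimate the distance of the "perfectly in focus" plane where
// rotation jitter and the counter-jittered camera position cancel out.
// Small-angle approximation; names are illustrative, not Counterpart's.
public static class JitterFocus
{
    // rotationJitterRadians: angular jitter per frame, in radians.
    // positionOffsetScaled:  the camera position offset actually applied
    //                        (e.g. positionOffset * jitter in the code above).
    public static float FocalDistance (float rotationJitterRadians, Vector3 positionOffsetScaled)
    {
        // Rotation shifts all points by ~theta radians regardless of depth;
        // a translation of |t| shifts a point at distance d by ~|t|/d radians.
        // They cancel where |t|/d == theta, i.e. d == |t|/theta.
        return positionOffsetScaled.magnitude / rotationJitterRadians;
    }
}
```

With the numbers above — positionOffset of (40, -40, 0) scaled by jitter, and a quaternion-component offset of jitter (roughly 2·jitter radians for tiny values) — the focal plane lands near |(40, -40, 0)| / 2 ≈ 28 units out, independent of the jitter magnitude itself, which is consistent with the focal point surviving even exaggerated jitter.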

Because the terrain in question is a solid color lit by ambient lighting only, when the mass of churning wireframe combines into a solid floor you get a single color that subtly diffuses out into visible grid lines as they approach the viewpoint. The jittering so effectively blurs wireframe sparkling that the area of interest, where objects move, behaves like a simple flat-color backdrop. This compresses very well and focuses attention on the moving objects, where it should be.

Counterpart is currently on Steam Greenlight, trying to find its way to the marketplace.

Counterpart can be played in its current form if you're interested in seeing this jitter, complementary camera motion and depth of field effect in action.
