The Full Spectrum Warrior Camera System

In Full Spectrum Warrior, more attention than is typical was given to the camera system, imbuing it with a unique autolook system for peeking around corners, among other features. John Giors details the development process, including his debugging techniques and design decisions.

John Giors, Blogger

March 25, 2004


Camera systems are among the most important systems in any game. After all, the camera is the window through which the player interacts with the simulated world. Unfortunately, the camera system is often neglected during the development process, as it is easy to take for granted. When making Full Spectrum Warrior (FSW), special attention was given to camera system issues, resulting in unique solutions to several problems.

This article discusses specifics of the camera system as developed for FSW. A brief functional overview of the camera system will be followed by a description of the high-level architecture. Following that, the details of the motion system will be examined. The next section covers FSW's unique "autolook" feature. Then collision avoidance, the bane of every camera system, will raise its ugly head. Finally, debugging, tuning, and miscellaneous issues will be covered. To wrap it all up, the limitations of the FSW camera system will be discussed, along with some general recommendations that apply to most projects.

FSW System Functional Overview

In FSW, the user controls two teams of four soldiers each, and the primary camera system is under the control of the user (manipulated with the gamepad's right thumbstick). One of the soldiers on each team is the team leader. Pressing the Y button switches teams. The camera performs a fly-by to the other team's leader, assuming the same facing as the soldier. During the fly-by, the camera is not under user control. When the camera arrives at the team leader, camera control returns to the user.

There is a brief lockout period at the beginning of a fly-by during which inputs from the gamepad are not accepted. However, after the initial lockout period, the user may make inputs. In this situation, when an input is detected, the camera cuts immediately to the destination and the user regains control of the system.

The user may also select different soldiers on a team using the d-pad. The camera automatically attaches to the selected character, performing a mini-fly-by. This allows the user to get different vantage points on the current situation.

FSW also has in-game cinematics, which take over camera control. In addition to using the normal in-game view system, the cinematic camera may move between preset positions.

High-Level Architecture

Although only one camera can be viewed at any particular time, the camera system actually comprises several cameras, each of which may operate simultaneously and independently. Using multiple, independent cameras is useful because different viewing situations have different requirements. Attempting to make one general-purpose camera that does everything is likely to lead to an overly complex system.

Additionally, using multiple cameras allows blending camera positions and orientations during transitional cases. This technique is not currently used in FSW, though cinematic transitions use a filtering algorithm that behaves very much like a blend.

In FSW, each camera is derived from an abstract base class, which provides an interface for common functions, such as obtaining the camera world matrix or zoom factor. The "main camera" implements the primary user-controlled view and fly-by systems. There are several other cameras for cinematics, in-game events, and the playback system.

With several cameras in simultaneous operation, it can be difficult to coordinate their activities and their interactions with other game components. FSW has a "camera manager" object that tracks the cameras. Interactions with other game components are routed, at least in part, through the camera manager.
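
To illustrate this layered structure, the following is a minimal sketch of such an arrangement, not FSW's actual code; the class and method names are invented here. Each camera derives from a small abstract interface, and the manager updates every camera each frame while exposing only the active one to the rest of the game.

    // Minimal sketch of a layered camera architecture (hypothetical names).
    #include <vector>

    struct Matrix4 { float m[16]; };                // placeholder math type

    class Camera                                    // abstract base class
    {
    public:
        virtual ~Camera() {}
        virtual void    Update(float dt) = 0;       // advance motion and orientation
        virtual Matrix4 GetWorldMatrix() const = 0;
        virtual float   GetZoomFactor() const = 0;
    };

    class CameraManager                             // coordinates all cameras
    {
    public:
        void Register(Camera* cam)      { m_cameras.push_back(cam); }
        void SetActive(Camera* cam)     { m_active = cam; }
        const Camera* GetActive() const { return m_active; }

        void UpdateAll(float dt)                    // every camera runs each frame,
        {                                           // even when it is not being viewed
            for (Camera* cam : m_cameras)
                cam->Update(dt);
        }

    private:
        std::vector<Camera*> m_cameras;
        Camera*              m_active = nullptr;
    };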

Motion System

Instead of using FSW's actual coordinate system (it is a bit unorthodox), the following coordinate system will be used for the remainder of this paper:

  • The view coordinate system is a typical right-handed system. Relative to the screen: the positive x-axis points to the right, the positive y-axis points up, and the positive z-axis points into the screen (away from the viewer).

  • The world coordinate system is similar to the view coordinate system: the positive x-axis points east, the positive y-axis points up, and the positive z-axis points north.

  • Since the camera is always assumed to remain horizontal, the orientation can be specified with only two angles, pan and tilt. Pan rotates around the y-axis, while tilt rotates around the x-axis. Note that, when calculating camera matrices, tilt is applied first, then pan.
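
As a quick illustration of that ordering, the sketch below rotates a view-space direction into world space by applying tilt about the x-axis first, then pan about the y-axis. The math type and the sign conventions for positive pan and tilt are assumptions, not FSW's actual library.

    // Sketch: compose a camera rotation from pan and tilt (tilt applied first).
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // world = Pan(y-axis) * Tilt(x-axis) * v
    Vec3 PanTiltRotate(const Vec3& v, float pan, float tilt)
    {
        // Tilt: rotation about the x-axis.
        Vec3 t = { v.x,
                   v.y * std::cos(tilt) - v.z * std::sin(tilt),
                   v.y * std::sin(tilt) + v.z * std::cos(tilt) };
        // Pan: rotation about the y-axis.
        return { t.x * std::cos(pan) + t.z * std::sin(pan),
                 t.y,
                 -t.x * std::sin(pan) + t.z * std::cos(pan) };
    }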

It is also worth noting that some cinematic transitions interpolate the camera orientation using quaternions. However, quaternion interpolation is the exception rather than the rule in the FSW camera system.

To control the motion of a camera, it is often useful to move the camera using target points. A target point is the final position where the camera should arrive. The camera controller filters the camera position to move the camera to the target point with a smooth motion. Using target points helps separate the problem of choosing good viewing positions from the problem of providing smooth and predictable motion.

The Basic Proportional Controller (PC)

The PC is common in many camera systems. If you have worked on a camera system, you have probably used a PC, though it might have been called something else; terminology for this particular construct is not very consistent.

The typical implementation is to set the velocity vector, V, equal to a proportion, C, of the difference between the target position, Pt, and the current position, P:

V = C(Pt - P)

One advantage of the PC is its simplicity. In addition, the PC has a nice arrival characteristic. As the current point, P, approaches the target, Pt, the velocity asymptotically approaches zero, following an exponential decay. This gives the PC a "graceful" feeling when it is approaching its destination.
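
In code, the basic PC amounts to a one-line velocity assignment followed by an Euler step. The sketch below uses an invented vector type; C is the proportional gain, a tuning value.

    // Sketch of a basic proportional controller (PC) step.
    struct Vec3 { float x, y, z; };

    Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 operator*(const Vec3& a, float s)       { return { a.x * s, a.y * s, a.z * s }; }

    // Move P toward the target Pt; C is the proportional gain.
    void ProportionalStep(Vec3& P, const Vec3& Pt, float C, float dt)
    {
        Vec3 V = (Pt - P) * C;   // velocity is a proportion of the remaining offset
        P = P + V * dt;          // integrate the position for one frame
    }
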
Although the PC is simple and has a nice arrival characteristic, it also has two significant disadvantages.

1. The "exit" characteristic can be abrupt. When the target point, Pt, changes suddenly (as happens when the target is changed from one game object to another), the PC will change velocity instantaneously. This sudden change in velocity makes the camera system feel unnatural.
2. The current position, P, lags behind moving targets. When a target is moving, the PC lags behind by the following proportion, which can be found by rearranging the previous equation:

Lag = |Pt - P| = |V| / C

This lag behind moving points leads to tuning issues. When the PC is tuned for a nice "arrival" characteristic, the lag behind moving objects will often be very large. When the PC is then tuned to reduce lag, the arrival characteristic will often be too fast (and the exit characteristic can grow to excessive levels).

The Modified Proportional Controller (MPC)

To solve the problems encountered with the PC, the FSW camera system uses a modified proportional controller ("MPC" for short). The MPC addresses the two drawbacks of the basic PC in the following manner.

The exit characteristic is controlled.

The MPC limits the acceleration of the current position, P. This is a straightforward calculation.

Given a current velocity vector, V, and a desired velocity vector, Vk, calculate the difference:

Vdiff = Vk - V

Assume that Alim is the acceleration limit (a scalar, not a vector). The maximum change in velocity for one frame is:

dVmax = Alim * dt

Now, compare dVmax with Vdiff:

if (|Vdiff| < dVmax)
    dV = Vdiff
else
    dV = dVmax * Vdiff / |Vdiff|

Now, use dV to update the velocity.

Vnew = V + dV

Lag is compensated.

Lag is corrected by adding the target point's velocity, Vt, into the desired velocity:

Vk = C(Pt - P) + Vt

This desired velocity, Vk, is then used to determine the acceleration, which is limited as mentioned above.

As mentioned previously, camera orientation is parameterized by a pan and a tilt value. The two parameters are updated independently using a one-dimensional version of the MPC described above.
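
A minimal MPC step, combining the proportional term, the lag compensation, and the acceleration limit, might look like the sketch below. The helper types are invented; the same routine works for the one-dimensional pan and tilt updates by swapping the vectors for scalars.

    // Sketch of a modified proportional controller (MPC) step.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  Scale(const Vec3& a, float s)     { return { a.x * s, a.y * s, a.z * s }; }
    static float Length(const Vec3& a)             { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    void MpcStep(Vec3& P, Vec3& V,                  // camera position and velocity (updated in place)
                 const Vec3& Pt, const Vec3& Vt,    // target position and velocity
                 float C, float Alim, float dt)     // gain, acceleration limit, frame time
    {
        // Desired velocity: proportional term plus the target's velocity (lag compensation).
        Vec3 Vk = Add(Scale(Sub(Pt, P), C), Vt);

        // Limit the change in velocity to Alim * dt (controls the exit characteristic).
        Vec3  Vdiff = Sub(Vk, V);
        float dVmax = Alim * dt;
        float len   = Length(Vdiff);
        Vec3  dV    = (len <= dVmax) ? Vdiff : Scale(Vdiff, dVmax / len);

        V = Add(V, dV);              // Vnew = V + dV
        P = Add(P, Scale(V, dt));    // integrate the position
    }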

More About Target Points

Although the MPC helps solve many of the issues involved in making a camera system operate in a nice, smooth manner, it does not solve every problem, and it can still exhibit problems with certain types of inputs. In particular, when working on the motion system, the following problem arose.

When a moving target point is specified, it is expected that the camera will eventually track exactly onto the moving point, i.e., the camera may have to fly to catch up, but it should eventually be dead-on. That is a reasonable expectation. However, because the camera's acceleration is limited (to keep it moving smoothly), when a target point stops abruptly the camera cannot "stop on a dime" and has no choice but to overshoot. This leaves only two possible choices: either the motion system lags behind the target, giving it a "buffer space" for handling sudden decelerations, or the motion system tracks exactly onto target points but overshoots when the target point decelerates rapidly.

There are many potential ways that the MPC could be changed to handle this problem. One option is to ensure that the camera always lags behind moving target points. Another is to devise a "smart" algorithm that overshoots but then does not attempt to return to the target point.

The FSW Solution

There is a deceptively simple alternative to modifying the MPC: avoid the issue by making target points behave reasonably whenever possible. The MPC algorithm remains unchanged, but its inputs are improved. Examples from FSW are:

  • Target points generated from user controls are acceleration-limited. When the user pans or tilts the camera, the thumbstick inputs do not directly affect the velocity. Instead, the user controls the acceleration, and a resistance value limits the maximum velocity.

  • Soldier positions are filtered with a PC prior to target-point generation. The MPC is not used in this situation, since the target points will eventually feed into the MPC, anyway. In addition to keeping camera motion smooth, this creates a "lag behind" feature that the FSW designers specifically requested.
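
As an illustration of the first point above, the user-driven pan target might be generated along the lines of this sketch. The names and the use of a drag-style resistance term are assumptions; the point is that the stick drives acceleration, not velocity, and the resistance caps the top speed.

    // Sketch: thumbstick drives pan acceleration; resistance limits the maximum velocity.
    struct PanInputState
    {
        float panAngle    = 0.0f;   // target pan angle fed to the MPC
        float panVelocity = 0.0f;
    };

    void UpdatePanTarget(PanInputState& s,
                         float stickX,          // stick deflection in [-1, 1]
                         float accelPerUnit,    // tuning: acceleration per unit of deflection
                         float resistance,      // tuning: drag term that caps the top speed
                         float dt)
    {
        float accel = stickX * accelPerUnit - s.panVelocity * resistance;
        s.panVelocity += accel * dt;
        s.panAngle    += s.panVelocity * dt;    // this angle becomes the MPC's target
    }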

The FSW "Autolook" Feature

Autolook is a unique feature of the FSW camera system that automatically assists looking around corners. In most third-person games, the character is centered on the screen or is viewed at a constant offset (e.g., over the right shoulder). However, because of the nature of realistic urban combat, this does not work well for FSW. Soldiers tend to spend a lot of time near walls and the corners of buildings. Consequently, using a view centered on the soldier will often result in over 50% of the FOV being obstructed by a wall.

Autolook solves this problem by determining which part of the view is unobstructed. In this technique, horizontal rays (y value is constant) are cast from the camera center towards the left and right of the camera centerline. When rays intersect the environment, the distance to the intersection is multiplied by a weighting factor (which is identical for all rays) and added to (or subtracted from) the angular offset. Rays on the left side of the camera centerline add to the offset, causing the camera to pan right. Rays on the right side subtract from the offset, causing the camera to pan left.

Autolook only affects the orientation of the camera. Position is not modified, as changing the position in this case caused disturbing motion. After the autolook angle is calculated, the angle is filtered, preventing jarring motions.

The weighting factor for autolook depends upon the typical viewing distance. This factor is a tuning parameter in FSW and was adjusted until an acceptable result was obtained.
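
The sketch below captures the idea in code. The ray layout, the collision-query interface, and the sign conventions are assumptions based on the description above, not FSW's actual implementation.

    // Sketch of the autolook offset: symmetric horizontal rays; hits are weighted by distance.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Collision query supplied by the game (assumed interface): returns true on a hit and
    // writes the distance from the origin to the intersection into *hitDist.
    typedef bool (*RayCastFn)(const Vec3& origin, const Vec3& dir, float maxDist, float* hitDist);

    float ComputeAutolookOffset(const Vec3& camPos, float camPan,
                                int raysPerSide, float angleStep, float maxDist,
                                float weight, RayCastFn castRay)
    {
        float offset = 0.0f;
        for (int i = 1; i <= raysPerSide; ++i)
        {
            for (int side = -1; side <= 1; side += 2)   // -1 = left of the centerline, +1 = right
            {
                float pan = camPan + side * i * angleStep;
                Vec3  dir = { std::sin(pan), 0.0f, std::cos(pan) };   // horizontal ray (constant y)
                float hitDist = 0.0f;
                if (castRay(camPos, dir, maxDist, &hitDist))
                    offset += (side < 0 ? 1.0f : -1.0f) * hitDist * weight;   // left adds (pan right),
            }                                                                 // right subtracts (pan left)
        }
        return offset;   // filtered before being applied to the camera's pan
    }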


During playtesting, autolook was a well-received feature of FSW. But does this mean that it would be useful for other games? It isn't possible to give a definitive answer, but, since autolook changes the view angle outside the direct control of the user, it is reasonable to assume that any game which requires quick and precise screen-space aiming would not be well suited to this technique.

However, FSW uses a world-space reticle, which removes many of the issues involved with screen-space aiming. Since the reticle moves through world space, the precise position of the camera is not critical. It is only necessary that the camera keep the reticle in view. It remains to be seen if this technique is responsive enough for quick-paced action games, but the technique should work well for slower-paced games.

______________________________________________________

Collision Avoidance and Cuts

The FSW camera system uses various techniques to avoid collisions and to prevent obstruction of the view.

Normal View Collisions

The camera attempts to avoid collisions with the environment by casting several rays from the look-at point towards the general vicinity of the camera. Each ray is cast to points that have the same altitude (y value) as the camera. This creates an arc of an inverted cone of rays over the look-at object (see Figure 4 and Figure 5).

Each ray has a weight that determines how strongly it affects the view radius. The rays nearest the camera affect the view radius 100%. The view radius will never exceed the length of these rays. Rays further from the center have progressively smaller effect on the view radius. The farthest rays have a very weak effect.

The calculated view radius is used to generate the target point for the motion system, which smoothly interpolates the position, preventing jarring motion.
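
A sketch of that weighting scheme follows. The ray layout, the weights, and the exact way a weighted hit limits the radius are assumptions; the collision query is a placeholder.

    // Sketch: shrink the view radius based on weighted collision rays around the camera.
    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    typedef bool (*RayCastFn)(const Vec3& from, const Vec3& to, float* hitDist);

    struct CollisionRay
    {
        Vec3  endpoint;   // same altitude (y) as the camera, arranged in an arc
        float weight;     // 1.0 for rays nearest the camera, smaller toward the edges
    };

    float ComputeViewRadius(const Vec3& lookAt, float desiredRadius,
                            const std::vector<CollisionRay>& rays, RayCastFn castRay)
    {
        float radius = desiredRadius;
        for (const CollisionRay& r : rays)
        {
            float hitDist = 0.0f;
            if (castRay(lookAt, r.endpoint, &hitDist))
            {
                // Blend between the current radius and the hit distance by the ray's weight.
                float limited = hitDist * r.weight + radius * (1.0f - r.weight);
                radius = std::min(radius, limited);
            }
        }
        return radius;    // fed to the motion system as part of the camera's target point
    }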


Fly-By Collision Avoidance

Another algorithm helps the camera avoid walls when it is performing a "fly-by" between distant points. This algorithm also casts rays from the look-at point, a single ray to the left and a single ray to the right (see Figure 6). Note that the ray casts may not be horizontal, as they originate at the height (y-position) of the look-at point and end at the height of the camera.

When a ray intersects a wall, it applies a small sideways acceleration to the camera, nudging it away from the wall. The size of the side acceleration is proportional to the distance from the look-at point to the intersection.
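
A minimal sketch of that nudge, with the gain and sign conventions assumed:

    // Sketch: nudge a fly-by camera sideways when a side ray from the look-at point hits a wall.
    struct Vec3 { float x, y, z; };

    // sideDir: unit vector toward the side the ray was cast; hit/hitDist come from the ray cast.
    Vec3 SideNudge(const Vec3& sideDir, bool hit, float hitDist, float nudgeGain)
    {
        if (!hit)
            return { 0.0f, 0.0f, 0.0f };
        float accel = nudgeGain * hitDist;   // proportional to the look-at-to-wall distance
        return { -sideDir.x * accel, -sideDir.y * accel, -sideDir.z * accel };   // push away from the wall
    }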

Look-At Avoidance

A "look-at avoidance" algorithm prevents the camera from passing too close to the look-at point, which would otherwise result in rapid and disorienting camera pan. This algorithm analyzes the distance from the look-at point and the approach velocity. If the camera is in danger of passing too close to the look-at point, its target velocity is modified by reducing its magnitude and moving the vector away from the line between the camera and the look-at point.

Although look-at avoidance is an important consideration for many camera systems, its solution is game dependent. Rather than delving into a detailed discussion of the calculations used by the FSW algorithm (which is beyond the scope of this paper), the following summarizes some important things to consider when working on this problem.

First, determine if look-at avoidance is necessary. In general, look-at avoidance is most problematic during fly-bys. If your system cuts between viewing positions, then look-at avoidance may be a moot point. Even when your system has fly-bys, look-at avoidance may not be an issue if the camera always flies straight towards objects, since it does not need to pass to the opposite side of viewed objects.

When the camera motion controller has lag, it is possible that moving objects may catch up to the camera in certain circumstances, requiring a different type of look-at avoidance. This problem can usually be avoided in games like FSW, since the camera is attached to human characters whose maximum speed is a quick run. However, this can become a serious issue in games with fast-moving vehicles. A potential solution for this particular problem is to apply a repulsive force from the look-at point to the camera. The force grows in inverse proportion to the distance between the camera and the look-at point.
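
For instance, such a repulsive term might look like the sketch below; the gain and the minimum-distance clamp are invented values.

    // Sketch: repulsive acceleration pushing the camera away from the look-at point,
    // growing in inverse proportion to the separation.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 LookAtRepulsion(const Vec3& camPos, const Vec3& lookAt, float gain, float minDist)
    {
        Vec3  away = { camPos.x - lookAt.x, camPos.y - lookAt.y, camPos.z - lookAt.z };
        float dist = std::sqrt(away.x * away.x + away.y * away.y + away.z * away.z);
        if (dist < 1e-4f)
            return { 0.0f, 0.0f, 0.0f };                    // degenerate: no well-defined direction
        float strength = gain / std::max(dist, minDist);    // stronger when the target is close
        return { away.x / dist * strength, away.y / dist * strength, away.z / dist * strength };
    }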

If your system does need look-at avoidance, consider what general approach you want to use. The FSW system modifies the camera velocity, but there are other solutions. Your camera can maintain its orientation, fly past the viewed object, then turn to face it, preventing rapid-pan problems. Or, you might be able to move the camera's target position so that the camera no longer needs to pass beyond the viewed object.

Whatever your solution, it is important to take the environment into consideration, since look-at avoidance changes the camera position, which may unintentionally cause the camera to intercept environmental geometry.

Camera Cuts

In cases when the destination is on the other side of a wall or obstacle, the camera cuts past the obstruction. When this situation arises, the camera animates as it would for a fly-by. It pivots in place (if necessary) to face towards the new look-at point. Then, it begins to fly forward until it is close to an intervening obstacle (usually a wall). The camera cuts to an unobstructed point and continues its fly-by.

The camera also performs a cut instead of a fly-by whenever the destination is beyond a certain distance. This prevents overly lengthy fly-bys which would interrupt gameplay.

Characters And Tagged Objects

In FSW, the camera does not collide with characters. Any character that obstructs the view becomes translucent. Additionally, environmental objects can be tagged so that the camera does not collide with them. This is generally used for thin objects, such as telephone poles.

Debugging and Tuning Aids

FSW has over 100 parameters that affect the camera system. Many of these parameters require tuning by a designer or programmer. With such a large number of parameters, it can be difficult to predict how a change in a value will affect the camera system. Indeed, it can even be difficult to comprehend exactly what a value represents and what it is intended to change.

In order to facilitate the tuning process, FSW has debug displays that graphically display aspects of the camera system. Most of the displays indicate ray casts through the environment. The rays used for collision probing are displayed. Rays that intersect the environment are displayed in a different color from those that do not. The raycasts for the autolook system are also displayed.

An additional debug feature that is very useful is the ability to view the scene from any camera. This allows examining the dynamics of a camera by viewing it from another perspective. This is especially useful for examining raycasts, since raycasts often originate near or behind the eye point, where they are difficult or impossible to see from the normal perspective. It can also be useful to view the camera's motion from an alternate perspective, which can elucidate the source of motion glitches.

The camera system timescale can also be altered to allow closer examination of glitches. Sometimes a slow-motion examination will reveal details that are not visible at normal speed.

Other Considerations

Gameplay Recording And Playback

FSW can record and play back game sessions. This is done by recording the input stream and a minimal amount of other critical information (such as timestamps). To re-create the original gameplay, the game is started and the original input stream is fed back into the system. In order for this type of playback system to function properly, the gameplay components must be repeatable, behaving identically during a replay as during the original recorded session.
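
A minimal sketch of the recording side of such a scheme, with structures and names invented here:

    // Sketch: record the per-frame input stream plus timing for deterministic replay.
    #include <cstdint>
    #include <vector>

    struct InputFrame
    {
        std::uint32_t frameNumber;   // or a timestamp, as long as the sim consumes it identically
        float         stickX, stickY;
        std::uint16_t buttons;       // bitmask of pressed buttons
    };

    class InputRecorder
    {
    public:
        void Record(const InputFrame& f) { m_frames.push_back(f); }

        // During replay, feed the stored frames back in order instead of reading the gamepad.
        bool Playback(std::uint32_t frame, InputFrame& out) const
        {
            if (frame >= m_frames.size())
                return false;
            out = m_frames[frame];
            return true;
        }

    private:
        std::vector<InputFrame> m_frames;
    };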

However, the user can view from different angles in replay, which means that the camera will not be at the same position as it was during the original session. This could disrupt playback, since the camera affects the operation of UI components (e.g., the movement cursor operates relative to the main camera). This problem is solved by adding special replay cameras that function only during replays (they are inactive during normal gameplay). The original gameplay cameras repeat their same behaviors during replay even when the user is not viewing them.

Field Of View (FOV)

As with many features of a game camera system, the field of view (FOV) affects both aesthetics and usability. Note that, in the following, FOV is measured horizontally (the vertical FOV is determined by the aspect ratio of the screen).

In FSW, the typical gameplay FOV is 80 degrees wide. This is wider than in many games (it seems that 60 degrees is the "unwritten standard"). However, the wider FOV provides a better view of the actions and situations in the game, which is helpful in a game like FSW where the user must evaluate and plan strategies involving a large portion of the play area while viewing from a close third-person perspective. An even wider FOV might have been useful, but beyond 80 degrees, the distorted perspective is too unnerving for most players.
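
Since FOV is quoted horizontally here, the vertical FOV follows from the aspect ratio. The sketch below shows the standard conversion; for an 80-degree horizontal FOV on a 4:3 display it works out to roughly 64 degrees vertically.

    // Sketch: derive the vertical FOV from the horizontal FOV and the screen aspect ratio.
    #include <cmath>
    #include <cstdio>

    float VerticalFov(float horizontalFovDeg, float aspect /* width / height */)
    {
        const float kDegToRad = 3.14159265f / 180.0f;
        float halfH = horizontalFovDeg * 0.5f * kDegToRad;
        return 2.0f * std::atan(std::tan(halfH) / aspect) / kDegToRad;
    }

    int main()
    {
        std::printf("%.1f\n", VerticalFov(80.0f, 4.0f / 3.0f));   // prints about 64.4
        return 0;
    }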

FSW changes the FOV in two situations: (1) the player can zoom the camera to get a closer view of enemies, which reduces the FOV; and (2) the FOV is changed during cinematics, again to provide a zoom effect.

Limitations of the System and Potential Improvements

So now you've learned about the best parts of the FSW camera system. Unfortunately, like all things, it is not perfect. Here are some of the limitations of the FSW camera system and what has been done about them (if anything).

The biggest limitation of the FSW camera system is that the main camera can temporarily clip through walls in certain situations. These problems are always temporary; the camera never gets "hung up" on geometry. The situation is fairly rare, and usually occurs only during a fly-by. It would be nice to fix this for aesthetic reasons, but there is the risk of introducing worse problems in the process. In the end, we have decided to live with the issue.

A second limitation of the system is that it does not provide a good playing perspective in tight spaces. This is not a major concern for FSW, since tight spaces have been avoided by design for a multitude of reasons unrelated to the camera system.

Although the main camera satisfied the requirements of the gameplay camera system, it is also very complex. Part of its complexity is due to serving more than one purpose in the game. It handles the user-controlled pan/tilt operation, the fly-by functionality, and a portion of the cinematics system. Splitting the main camera along those functional lines to create multiple cameras would simplify the code and, more importantly, would simplify the tuning process.

Recommendations

  • When aesthetics and usability are in conflict, always choose usability. The camera system is too important to fundamental gameplay to allow a cool effect to take precedence over the user's ability to understand and interact with the game. Once the system is stable and functions acceptably for users, then you can experiment with aesthetics. Of course, you can pretty much go wild during cut scenes, so it would be wise to concentrate aesthetic concerns there.

  • Keep motions smooth. E.g., use the MPC described above.

  • Don't pan the camera quickly except under user control, since it can disorient the user.

  • Keep user-control separated from programmatic control as much as possible. When the camera is under user control, give the user as much freedom of motion as possible. Restricting motion feels very unnatural and can make players feel that something has gone wrong.

  • Use a layered design and multiple cameras.

  • Provide visual feedback, tuning, and debugging aids.

  • Always look for alternative solutions to problems. Often, issues with the camera can be resolved by means outside of the camera system itself, such as using translucency to prevent camera obstruction rather than colliding with characters. Sometimes, these alternate solutions are preferred by design, since they are straightforward and provide more information for players (e.g., players will see a translucent character whereas a collision system may move the camera in a way that the user does not understand, since he cannot see the character causing the collision).

Future Directions

  • In general, it would be nice to see more published papers on the topic of real-time camera systems, especially for games.

  • Integrating traditional cinematic camera ideas (from the film industry) into real-time systems. This is an interesting topic, since games often have different requirements from traditional film. However, there is over a century of filmmaking experience that we can tap into. At the same time, this must be approached with caution; implementing a cinematic camera system without proper concern for gameplay requirements could be devastating for gameplay.

  • Volumetric examination of the environment instead of linear ray-hits. This might improve the operation of various systems, such as collision avoidance and autolook.

  • Examine more techniques for providing feedback during the tuning process.

References

Bobick, Nick, "Rotating Objects Using Quaternions", Game Developer Magazine, April 1998

Drucker, Steven M., Zeltzer, David, "Intelligent Camera Control in a Virtual Environment", Proceedings of Graphics Interface '94

Hawkins, Brian, "Creating an Event-Driven Cinematic Camera" (two parts), Game Developer Magazine, October/November 2002

He, Li-wei, Cohen, Michael F., and Salesin, David H., "The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing", SIGGRAPH 96 Proceedings, pp 217-224.

Rabin, Steve, "Classic Super Mario 64 Third-Person Control and Animation", Game Programming Gems II, Charles River Media, 2001

Treglia II, Dante, "Camera Control Techniques", Game Programming Gems, Charles River Media, 2000

______________________________________________________


About the Author

John Giors

Blogger

John Giors has over five years of on-the-job game programming experience. In addition, he has five years of programming experience in other fields, including embedded systems and image processing. John has an MSEE from Cal Poly, Pomona. He currently works at Pandemic Studios in Westwood, CA, implementing the camera system and in-game user interfaces for Full Spectrum Warrior.
