Acting in a Helmet: New tech means more to play with.

Now that wrinkles and blushes can be captured, does that have an effect on the performer's choices? What about the animator's choices? The actor behind Ethan Mars in Quantic Dream's Heavy Rain explores.

Pascal Langlois, Blogger

July 17, 2012


Performing in a headcam...

...whether for full performance capture or just facial, is something an increasing number of actors will experience, until the technology is superseded by less invasive hardware. In the meantime, HMCs (head-mounted cameras) seem to be the least bad solution to the challenge of faithfully capturing the idiosyncrasies of human facial behaviour.

I should state my interest here. I represent a facial capture hardware and software provider (Dynamixyz). I do so, however, because I am always looking to promote technologies that best preserve the actor's source performance through to final animation.

Despite my evident ties with Dynamixyz, as an actor I will applaud any new technology that improves the delivery of the actor's performance to the final product. As much as I respect the talent and artistry of animators, I have always been a great believer in the added chaos that actors bring to the performance (see previous posts). At a certain level of faithful capture, this chaos becomes something that can only add personality and depth to digital characters.

There are still obstacles to the complete source performance/data making it through. Stabilization is an issue for most HMC systems: a lot of noise, extraneous or erroneous data, is captured whenever the face moves out of the desired fixed position relative to the camera. This can happen with sharp head movements, or on any occasion when the head moves within the helmet. Thankfully, Dynamixyz has developed a system to compensate for this lack of stability, meaning less filtering is required; and filtering is a technical culprit for removing faithful capture. It is as if the actor's performance has been put on depressants, or their face has suffered a few too many face-lifts. Filtering tends to reduce the life-like chaos globally, along with the noise.
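To make that trade-off concrete, here is a minimal toy sketch (my own illustration, not Dynamixyz's actual pipeline): it low-pass filters a synthetic blendshape channel containing both sensor jitter and a deliberate four-frame micro-expression, and the filter flattens the micro-expression along with the noise.

```python
import random

def moving_average(samples, window):
    """Naive low-pass filter: average each sample with its neighbours."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

random.seed(1)

# 120 frames of a mostly neutral brow channel (0.0 = relaxed, 1.0 = fully raised),
# with small random jitter standing in for capture noise.
curve = [0.1 + random.uniform(-0.02, 0.02) for _ in range(120)]
for f in range(60, 64):
    curve[f] += 0.25  # a four-frame micro-raise: the actor's "chaos"

smoothed = moving_average(curve, window=15)  # aggressive smoothing

peak_raw = max(curve[58:66]) - 0.1
peak_smoothed = max(smoothed[58:66]) - 0.1
print(f"micro-expression amplitude: raw {peak_raw:.2f}, filtered {peak_smoothed:.2f}")
```

Run it and the jitter disappears, but roughly three quarters of the micro-raise goes with it: the digital face-lift described above. Better stabilization means a narrower filter window, which keeps more of that detail.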

Another offender is the final render frame rate (especially in real time). You can capture at up to 120fps these days, but the render runs at 60fps or less. Tongue-twisters, particularly those that include lots of plosives (p's, b's, etc.), are great tests for the limits of the final frame rate. You could argue that animated characters aren't going to perform tongue-twisters, but I'm highlighting it as a test: a test of how faithfully the final digital version can match whatever the source performer is doing, without the help of animation. If anyone out there has a solution that doesn't involve rendering at the full 120fps, or never performing tongue-twisters, please get in touch...
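As a back-of-the-envelope illustration (my own numbers, not from any particular engine), consider a plosive lip closure that lasts only a frame or two at 120fps. Naively resampling that channel down to the render rate can skip the sealed frame entirely:

```python
CAPTURE_FPS = 120

# Lip-closure channel: 0.0 = open, 1.0 = sealed. At 120fps each frame
# is ~8ms, so a brief "p" closure occupies only a frame or two.
capture = [0.0] * 24
capture[13] = 1.0  # lips fully sealed
capture[14] = 0.6  # already releasing one frame later

def resample(frames, capture_fps, render_fps):
    """Naive nearest-frame resampling from the capture rate to the render rate."""
    step = capture_fps / render_fps
    count = int(len(frames) / step)
    return [frames[round(i * step)] for i in range(count)]

for render_fps in (120, 60, 30):
    rendered = resample(capture, CAPTURE_FPS, render_fps)
    print(f"{render_fps:>3}fps render sees peak lip closure: {max(rendered):.1f}")
```

At 120fps the render sees the full closure (1.0); at 60fps it catches only the release frame (0.6); at 30fps it can miss the event altogether (0.0). That is exactly why rapid plosives smear together at lower render rates.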

These obstacles mean that the source performance has to work with the technology, rather than attempt to ignore it. Capture has as many discomforts and performance challenges as film or theatre; many are similar, and some are entirely unique. Implying that the final performance is akin to an actor in heavy makeup is disingenuous at best, and plain offensive to many.

Here’s a shortlist of the main challenges that face a capture actor:

  • Physical discomfort

  • Challenge of imagining entire environments

  • Broken-up nature of the shoot itself

  • Unique demands of capturing for interactivity

  • Performing for a different morphology

  • Performance-altering adjustments that control for technical or morphological limitations (of which most actors are unaware)

  • The difference in "scale taste" between traditional film and captured animation (animators often demand "bigger" and/or "clearer" facial performances; film demands the opposite; animators can always adjust the parameters, or move those sliders)

New aspects to think about:

Now that wrinkles and blushes can be captured, does that have an effect on the performer's choices? What about the animator's choices?

I recently had a thought-provoking message exchange on LinkedIn, prompted by Quantic Dream's recent tech demo "Kara". The debate was over whether wrinkles and blushes are relevant only for 1:1 digital doubles.

Here’s what I wrote:

“Yep, **** I can see your point.

Kara was close to a digital double for actress Valorie Curry, so wrinkles and colours could be very useful additions. One could add that, being an android, she has an excuse for perfect skin, etc... However, almost all character animations define their behaviour against human behaviour. Even creatures with no eyebrows will often have an equivalent feature that adds emphasis to the total communicated message in a similar way. Wrinkles and skin colour variance could find parallels in, and add to, alien morphologies too.

On a more mundane note, engaging the zygomatic major often results in small crow's feet by the eyes, and … [the crow's feet wrinkles'] presence can signify a felt or unfelt smile (aka Duchenne or non-Duchenne). There are other, less empirical examples of how the muscles and wrinkles on a face during a deformation add to or mediate the [total] message.

As for wrinkles, I believe their presence can add to the sense of experience behind the expressions, and remove the risk of lifeless-looking skin.

Now, Marx said a lot of things, but how about what he said to his wife? "Where could I find a face whose every feature, even every wrinkle, is a reminder of the greatest and sweetest memories of my life".”

Not only does the animator have new data to parallel aspects of their (not necessarily human) model, but the actor has a new responsibility too.

The headcam means one actor can provide the source performance for multiple roles.

It is quite likely that actors will be asked to play characters that do not match them in age, accent, or even morphology. Making an effort to approach those characters vocally, physically, and facially will deliver performances that make the most of the model, and provide idiosyncratic choices that are also dramatically consistent within the total subjectively experienced performance. If my performance is driving an older character's face with more wrinkles, then my face can change to provide a different range of motion (ROM), and a more characterful neutral position, without the need to adjust the performance digitally further down the pipeline (see the sketch below). What's more, these choices are within the toolset I would deploy for film or theatre, but chosen with an awareness of what the technology can work with, and of the "frame size" of the capture. A wide shot can include full-body physicality and rely less on facial performance, or vice versa.
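For what it's worth, a common way facial pipelines map one face onto another is as a delta from neutral. Here is a generic sketch of the idea (not any particular vendor's system; the channel names and numbers are invented) showing why an actor-side change of neutral and ROM carries straight through to the character:

```python
# Delta retargeting: the capture drives the character as an offset from
# the actor's neutral pose, scaled per channel for a different ROM.

ACTOR_NEUTRAL = {"brow_raise": 0.10, "jaw_drop": 0.05}
CHARACTER_NEUTRAL = {"brow_raise": 0.30, "jaw_drop": 0.15}  # older, heavier-browed face
GAIN = {"brow_raise": 1.2, "jaw_drop": 0.8}  # per-channel range-of-motion scaling

def retarget(actor_frame):
    """Map a captured actor frame onto the character as neutral + scaled delta."""
    out = {}
    for channel, value in actor_frame.items():
        delta = value - ACTOR_NEUTRAL[channel]
        out[channel] = max(0.0, min(1.0, CHARACTER_NEUTRAL[channel] + GAIN[channel] * delta))
    return out

print(retarget({"brow_raise": 0.10, "jaw_drop": 0.05}))  # actor at neutral
print(retarget({"brow_raise": 0.60, "jaw_drop": 0.20}))  # raised brows, open jaw
```

Because the mapping starts from whatever neutral the actor establishes, a characterful neutral held on set shifts the whole output at the source, which is exactly the kind of adjustment that needs no later digital correction.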

I'm lucky enough to have a headcam I can play with, and a career that includes an ongoing practical education in a variety of performance capture set-ups (I've yet to try Xsens, incidentally...). It is abundantly clear to me that, with the current technology, if the source performance is to be valued, the actor must value it and help navigate the challenges that might compromise it. Only then can every part of the pipeline see the benefits, and the digital character arguably be considered the result of a creative collaboration.
