
Rendered #3: Meshes and Metahumans

On Epic's plans for digital humans in Unreal and Microsoft's plans for digital humans everywhere else.

Kyle Kukshtel, Blogger

March 8, 2021


Rendered is a monthly newsletter on 3D rendering technology, game engines, volumetric filmmaking, photogrammetry, and everything in-between. It's your guide to emerging realities.


News

Unreal Announces Metahumans

Two issues ago we talked about the then-just-announced Unreal Engine 5 and some of its aims to smooth out production workflows around high-density meshes. Since then, Epic has been on a kick building the case for itself and that goal, releasing various videos about virtual production and cinematic asset creation/lighting on their YouTube channel.

Epic has effectively no competition here on the software side, outside of people’s willingness to switch from offline tooling to Unreal, so their non-stop PR blitz about virtual production is commendable. A general theme of these videos is how Unreal fits into all aspects of the production stack, but the elephant in the room has been (and will likely always be), you know, humans.

Humans, probably the main reason we watch films in the first place, have been cut out of the conversation (or obscured) when talking about virtual production. Virtual production is largely about everything besides humans.

Virtual production, by Epic’s reasoning, is largely an efficiency mechanism. Not “cheaper” or “easier,” but efficient. Unreal’s own “Virtual Production Field Guide” goes to great pains to describe how virtual production’s primary goal is to “reduce uncertainty”:

All of these efficiencies and increased image quality offer a trickle-down effect to more modest and tightly scheduled productions. By leveraging virtual production techniques with a real-time engine, network series, streaming productions, and indies can all achieve very high quality imagery and epic scope. A real-time engine has the potential to eliminate many of the bottlenecks of budgeting, schedule, and development time that can prohibit smaller-scale productions from producing imagery on par with blockbusters.

This is at odds with the fact that humans are, always have been, and always will be the least efficient part of movie production. They run late, have to eat, sleep, be famous, etc. What if you could make a movie using humans, but… not really?

This, in my mind, is the ultimate promise of the recently announced Metahumans. Epic is, in part, saying, “Listen, everything you already do is in Unreal… why not have your humans in Unreal as well?”

Digital humans aren’t a particularly new idea, but the fact that Metahumans work in a realtime context is what differentiates them. We’ve had really great digital humans for the past few decades through robust offline rendering, but offline rendering doesn’t mesh with the virtual production ethos. All the same live micro-tweaking talked about with virtual production can now be applied to human performances as well.

The pitch is compelling, but at the same time you’re making a game engine the nucleus of a film production. Show of hands: how many people feel comfortable debugging UASSET errors in Unreal while talent is staring at you from the stage, a producer is whispering in your ear that this bug is costing them thousands, the studio manager just walked in to say the next production has arrived, and… oops, you missed your shot. Every virtual production shoot day is essentially live-coding a movie.

Don’t get me wrong, I’m a fan of virtual production and do think it’s what’s next, but I also think the talk around it, especially from Epic, often obscures the fact that you’re running a game engine in the background, with all the issues game engines normally have. Not only that, but because you’re likely renting space in someone else’s studio to shoot (you didn’t build your own LED wall, right?), you’re relying on the previous production to have cleaned up their Unreal workspace (lol) or the studio’s technical manager to have done it (lol). What happens when you boot up Unreal on shoot day and find that a Windows update broke it (it happens)?


Matt Workman on Twitter

I’m being a bit facetious here too — as good as Metahumans look (what a name, right?), nobody right now will assume these are actual humans… yet. Most of the copy on the main Metahumans page talks about using them in games, with virtual production getting only a passing mention:

Imagine […] digital doubles on the latest virtual production set that will stand up to close-up shots, virtual participants in immersive training scenarios you can’t tell from the real thing: the possibilities for creators are limitless.

These avatars are definitely AAA-game quality, a massive leap forward for any indie, but they are also a step toward realtime digital humans that are indistinguishable from the “real” thing.

Here we go again

Are we surprised that the most prominent Metahuman demos are an Asian man and a Black woman, and that most of the people demoing and puppeting their faces (in the closed beta of Metahumans — this tech isn’t publicly accessible right now) are men who don’t share their race?

Christian Andorade on Twitter

Rich Hemming on Twitter

StretchSense on Twitter

I spent all of the last issue of Rendered talking about this, saying “emerging technology can and will be used in ways that go beyond whatever cute scenario you’ve imagined for it, and as such we should be taking an active role in preventing these outcomes instead of accepting them as an inevitability.”

The fact that the #metahumans tag on Twitter is full of men trying on blackness is a massive issue in the same vein. I’m not a diversity and inclusion expert by any means, nor do I claim to have solutions — but what I have to ask is… what the hell? What sort of precedent are we setting by handing groundbreaking technology to what seems to be the same class of people that has abused technology in the past to reinforce the same social norms? We have to be better.

If you’re reading this and work at Epic — advocate for diversity and inclusion in even the smallest tests like these. Consider what message you’re sending to your prospective users and other non-white creators that rely on your tools to create their work. Bake this into your process.

Be better.

Microsoft Mesh Announced

At its annual Ignite conference, Microsoft announced a new cloud service, Mesh:

Microsoft Mesh enables presence and shared experiences from anywhere – on any device – through mixed reality applications.

That’s a pretty big pitch. A clickbait-y YouTube title might be something like “MICROSOFT ANNOUNCES API FOR THE METAVERSE??!?!?!”. But we’re far from that (like and subscribe anyways).

One thing to keep in mind right now, especially as this space (gestures at XR, spatial computing, game engines, volumetric capture, etc.) gets more mainstream traction, is that there is very little unique and new about it from an asset perspective. Acquisition, via tracking and depth capture, is the “new” part, but most software built in the space lives downstream of that process and assumes a fairly standard input of either 3D meshes or videos.

This is to say that most “new” technologies in the space are able to leverage pre-existing infrastructure to interact with these new mediums. Microsoft Mesh, put simply, is Microsoft (likely building off some of the cloud work they were already doing with Simplygon) expanding Azure’s offerings to be able to upload and download 3D data. That’s less sexy than “Microsoft Mesh”, but ¯\_(ツ)_/¯.
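To ground that a little: the plumbing for shipping 3D assets through Azure already exists and is fairly mundane. Below is a minimal sketch using the standard azure-storage-blob Python SDK. To be clear, this is not the Mesh API (which isn’t public); it’s just an illustration of the kind of existing cloud infrastructure a service like Mesh can sit on top of, and the connection string and container name are hypothetical placeholders.

```python
# Minimal sketch: round-tripping a 3D asset through Azure Blob Storage.
# NOT the Mesh API -- just the ordinary Azure plumbing that already exists
# for uploading/downloading data like meshes. Names below are placeholders.
from azure.storage.blob import BlobServiceClient

AZURE_CONN_STRING = "<your-storage-account-connection-string>"  # hypothetical

service = BlobServiceClient.from_connection_string(AZURE_CONN_STRING)
container = service.get_container_client("captures")  # hypothetical container

# Upload a glTF binary (a fairly standard downstream mesh format)...
with open("scan.glb", "rb") as mesh_file:
    container.upload_blob(name="scan.glb", data=mesh_file, overwrite=True)

# ...and pull it back down on another device.
with open("scan_downloaded.glb", "wb") as out_file:
    out_file.write(container.download_blob("scan.glb").readall())
```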

Obviously Microsoft Mesh is more than just a branding exercise, but part of the point of this newsletter is to cut through the relentless optimism and history re-writing of these spaces.

What is interesting about Mesh is less the actual technical side (we had our own realtime volumetric streaming demo with Depthkit at VFNYC nearly two years ago) and more their pitch on “immersive presence”:

A fundamental aspect of multiuser scenarios is to be able to represent participants in distinct forms depending on the device that they’re joining from. Mesh delivers the most accessible 3D presence with representative avatars via inside-out sensors of the devices. The Mesh platform comes with an avatar rig and a customization studio so you can use the out-of-the-box avatars. The platform is capable of powering existing avatar rigs too with its AI-powered motion models to capture accurate motions and expressions consistent with the user's action.

Alongside avatars, Mesh also enables the most photorealistic 360 holoportation with outside-in sensors. These outside-in sensors can be a custom camera setup like the Mixed Reality Capture Studio, which helps capture in 3D with full fidelity or it could be Azure Kinect that captures depth-sensed images to assist in producing the holographic representations.

Microsoft’s pitch is that avatars, driven by inside-out sensors on devices like the HoloLens, and volumetric captures (Microsoft calls it “holoportation”), driven by outside-in sensors like the Azure Kinect, can exist side-by-side.

Again, the technicals of this aren’t what’s interesting — what is interesting is that almost every other pitch for some form of embodied digital collaboration has been a totalizing vision: either everyone is an avatar or everyone is a volumetric capture. That “purity” has been a major thorn for everyone involved, because buying into either vision meant alienating the other side.

The vision of Mesh is one where those captures exist side-by-side. Obviously this is in Microsoft’s interest as well, as they own solutions to do both avatars (they acquired Altspace in 2017) and volumetric capture (via their Mixed Reality Studio program or aforementioned Azure Kinect), but I do hope other “metaverse”-esque solutions/platforms adopt the same thinking.

One other interesting part of this announcement is that it implies Microsoft will deploy its own solution for volumetric capture/streaming based on the Azure Kinect. Microsoft has long ignored volumetric capture as a first-class use case for the Kinect family (the Azure Kinect page now lists it in passing under a “Media” section but doesn’t provide further info), but the announcement of Mesh lights a bit of a fire under them to provide a sanctioned option. Whether that is a proper Microsoft product or a list of “Mesh compatible” vendors, we’ll have to wait and see.
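For a sense of what that raw ingredient looks like today, here’s a minimal sketch that grabs a single depth frame from an Azure Kinect, assuming the community pyk4a Python bindings around Microsoft’s Azure Kinect Sensor SDK. An actual volumetric capture/streaming product would fuse streams of these frames (plus color and calibration) into point clouds or meshes, which is exactly the layer Microsoft has so far left to third parties.

```python
# Minimal sketch: one depth frame off an Azure Kinect, via the community
# pyk4a bindings around Microsoft's Azure Kinect Sensor SDK (an assumption,
# not anything Mesh-specific). A volumetric pipeline would fuse streams of
# these frames, plus color and calibration, into point clouds or meshes.
import pyk4a
from pyk4a import Config, PyK4A

k4a = PyK4A(
    Config(
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        color_resolution=pyk4a.ColorResolution.OFF,
        camera_fps=pyk4a.FPS.FPS_30,
        synchronized_images_only=False,  # depth only, no color stream
    )
)
k4a.start()

capture = k4a.get_capture()
if capture.depth is not None:
    # capture.depth is a uint16 numpy array of per-pixel depth in millimeters
    print(f"Depth frame: {capture.depth.shape[1]} x {capture.depth.shape[0]} px")

k4a.stop()
```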

Rendered #2 Follow-up

Since sending out Rendered #2, a few things have happened that are relevant to the discussions there.

The feature-length documentary film Coded Bias was released to a few film festivals (and reviewed in The New York Times)

CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

The book Technocrats of the Imagination was published in early 2020 (missed it!) and talks about the relationship between emerging art movements and the military industrial complex:

John Beck and Ryan Bishop explore the collaborations between the American avant-garde art world and the military-industrial complex during the 1960s, in which artists worked with scientists and engineers in universities, private labs, and museums […] Beck and Bishop reveal the connections between the contemporary art world and the militarized lab model of research that has dominated the sciences since the 1950s.

I also found some more military-industrial complex <-> HoloLens stuff (link)

Resources

Naughty Dog’s SIGGRAPH 2020 video presentations are live.

Naughty Dog has been pushing the boundaries of realtime graphics for as long as they’ve been around. These videos are a bit of a swan song for the standard model of 3D rendering as raytracing seems on track to subsume these old methods.

Epic Games’ Crash Course in Unreal

Great overview here of Unreal for anyone interested in getting started with the tool. Walks through the engine front to back to show you the general workflow and pipeline concepts. Need some assets? What Remains of Edith Finch's assets have been added to the Unreal marketplace.

Valve adds Developer Commentary to Half-Life: Alyx

Alyx is one of the biggest (the biggest?) full-throated attempts at making a AAA VR game by one of the best studios around, so a developer commentary on that project is like a master class in VR interaction best practices.

Game Development in 2021 Report Released by Unity

Interesting insight on budgets, projects, platforms, etc. Also generally good information on player trends. It also has startling revelations like:

Creating new content to delight and engage players should still be a top priority for game developers.

To think that these insights are free!

Mobile Computational Photography: A Tour

Really great overview here of different methods of mobile computational photography. I know it’s technically a paper, but its usefulness is broader: it works as a general resource for anyone looking to get caught up on the space. One of the authors also has a video version of the paper that covers similar topics.

Unity Lighting Pipeline Guide

Unity’s documentation has always been hit or miss, so you’d be forgiven for not knowing there is now a giant guide on setting up render and lighting pipelines in the manual. This thing is full of amazing information, and should be required reading for anyone working in rendering or lighting in Unity.

Briefly Noted

Shutdowns

WaveVR is shutting down its app and focusing on “non-VR objectives”.
Poly is shutting down.
Cardboard is dead.

Releases and Briefly Noted (link dump)

DaVinci Resolve 17 is out.
Houdini Engine is now free in Unreal and Unity (!!!).
Balenciaga made a giant volumetric game.
EF EVE volumetric captures work in Notch.
Create your own neural radiance field image with Nerfies.
Record3D added a live-streaming volumetric capture (and export) solution.
Adobe releases a computational geometry library for free (thanks Dimitri!).
Omnivor raises nearly $3M.
IO Industries and 8i strike a partnership.

Fun

Lee Vermeulen (@Alientrap) made a video on scripting AI characters in Modbox using OpenAI GPT-3 + @replica_ai voices + speech services.

Thanks so much for reading the newsletter. If you enjoy Rendered, please share it around! We thrive on our community of readers, so the larger we can grow that pool of people, the better.

Subscribe Here
