
Making roomscale VR work: Devs with experience share tips

It's clear that while this panel of developers has a variety of insights into what works already, the sum total of new experiences, new designs, players new to the medium, and new technologies equals a lot more work to be done.

Christian Nutt, Contributor

March 14, 2016


The big takeaway from the Roomscale VR panel at GDC 2016 seems to be: We're still figuring this stuff out. This is nothing new, when it comes to developers discussing VR; everyone admits it's a wild frontier.

But listening to the developers talk, it's clear that while they have a variety of insights into what works already, the sum total of new experiences, new designs, players new to the medium, and new technologies equals a lot more work to be done.

That said, what have the panelists learned from their adventures in roomscale VR so far?

The most obvious, according to WEVR's Scott Stephan: It allows for a "deep emotional connection of using your whole body," where the player can "lose the abstraction" associated with more limited, fixed-perspective VR. It's a "really tight, very natural experience."

That effect is so strong that his studio has a rule of thumb: in a horror experience, no creature should be bigger than a small dog, "because after that point, it's not like a fun scare, it's like a survival scare."

But there are major practical challenges just working in roomscale that must be tackled. Owlchemy Labs' Job Simulator 2050 has three handcrafted versions of all of its game environments, which load depending on what hardware you're using (the Vive? PlayStation VR?) and the scale of your real-world environment: the team is "building the space to afford the available tracking space," said Owlchemy CEO Alex Schwartz.
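As a rough sketch of what that selection logic can look like -- the variant names, size thresholds, and play-area struct below are invented for illustration, not Owlchemy's actual code:

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Hypothetical play-area dimensions in meters, as reported by the
// tracking runtime (e.g., a chaperone/guardian bounds query).
struct PlayArea {
    float width;
    float depth;
};

// Pick one of three handcrafted environment variants based on the
// smaller dimension of the tracked space. Thresholds are invented.
std::string selectEnvironmentVariant(const PlayArea& area) {
    float usable = std::min(area.width, area.depth);
    if (usable >= 3.0f) return "environment_large";   // full roomscale (e.g., Vive)
    if (usable >= 2.0f) return "environment_medium";  // modest roomscale
    return "environment_small";                       // standing scale (e.g., PlayStation VR)
}

int main() {
    PlayArea area{2.5f, 3.5f};  // pretend the runtime reported this
    std::cout << "Loading: " << selectEnvironmentVariant(area) << "\n";
}
```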

But there's an advantage to roomscale VR for creators -- because players bring their real-world senses with them, they tend not to "want to walk through stuff," said Stephan. "It's all of these years of motor skill learning. And it is sort of uncomfortable to do it. It's less of a problem than we imagined it was."

But environments that players easily understand open up realistic interactions -- and these beget more interactions: "all these real-world human tendencies... our bug list now is not bugs, it's humans doing weird things we didn't expect and having to account for that," Schwartz said. "It's like the problem in an FPS where you can't put a doorknob on a door unless you'll let the player open it... times 10,000," he explained.

"You have to be very careful about what you are implying to people... and if you take on the work ... you have to really commit to that time," Stephan said.

Stephan gave an example from an early demo the team was working on: the player could pick up a ball, but if they threw it, it would just drop, because the code for that interaction hadn't been implemented yet. "It's so astoundingly disappointing in a very deep way," he said. "You can't bake as much stuff as you could in the past; you really need responsive systems that are meeting people's expectations."
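The usual cure for that kind of dead interaction is to hand the object the controller's tracked motion at the moment of release, so a throw actually flies. A minimal sketch of the idea, using invented types rather than any particular engine's API:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Minimal rigid body: position and velocity, stepped by simple physics.
struct RigidBody {
    Vec3 position{0, 0, 0};
    Vec3 velocity{0, 0, 0};
};

// While held, the object just follows the hand. The important part is
// release: the object inherits the controller's tracked velocity, so a
// throw sails away instead of dropping at the player's feet.
void releaseHeldObject(RigidBody& body, const Vec3& controllerVelocity) {
    body.velocity = controllerVelocity;
}

void step(RigidBody& body, float dt) {
    const float gravity = -9.81f;
    body.velocity.y += gravity * dt;
    body.position.x += body.velocity.x * dt;
    body.position.y += body.velocity.y * dt;
    body.position.z += body.velocity.z * dt;
}

int main() {
    RigidBody ball;
    releaseHeldObject(ball, Vec3{2.0f, 3.0f, 0.0f});    // a tracked flick of the wrist
    for (int i = 0; i < 10; ++i) step(ball, 1.0f / 90.0f);  // 90 Hz VR frame steps
    std::printf("ball at (%.2f, %.2f, %.2f)\n",
                ball.position.x, ball.position.y, ball.position.z);
}
```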

And when it comes to objects in the game world, "there has to be a very clear separation between what's in the inner circle, reachable, and what's not reachable," Schwartz said.
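One way to enforce that separation is to classify each object by whether it sits within arm's reach of anywhere the player can physically stand. A sketch, where the arm-reach estimate and play-area math are assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec2 { float x, z; };  // horizontal plane only

// Distance from a point to the nearest edge of an axis-aligned
// rectangular play area centered at the origin.
float distanceOutsidePlayArea(Vec2 p, float halfWidth, float halfDepth) {
    float dx = std::max(std::fabs(p.x) - halfWidth, 0.0f);
    float dz = std::max(std::fabs(p.z) - halfDepth, 0.0f);
    return std::sqrt(dx * dx + dz * dz);
}

// An object is "inner circle" (interactable) if it's within arm's reach
// of somewhere the player can stand. 0.7 m is an invented estimate.
bool isReachable(Vec2 objectPos, float halfWidth, float halfDepth) {
    const float armReach = 0.7f;
    return distanceOutsidePlayArea(objectPos, halfWidth, halfDepth) <= armReach;
}

int main() {
    // 2 m x 1.5 m play area -> half extents 1.0 and 0.75.
    std::printf("mug:   %s\n", isReachable({1.2f, 0.0f}, 1.0f, 0.75f) ? "reachable" : "set dressing");
    std::printf("clock: %s\n", isReachable({3.0f, 2.0f}, 1.0f, 0.75f) ? "reachable" : "set dressing");
}
```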

One thing the group reached consensus on is that HUDs are not desirable at roomscale. Job Simulator 2050 uses in-game objects to great effect (more details on that here). Said Schwartz, if you want a HUD, "it seems we need to create objects in the real world that contain all of that information." Trying for a HUD is "just trying to apply prior ideology to a new medium. We should just be inventing what's good in this medium."

"It's nice to be able to use a gaze mechanism, if you look at something for a second then something starts," said independent VR developer Tyler Hurd.

The way the interface works also implies who (or what) the player is, but there are tricks to sidestep that; Stephan renders the in-game controllers at 40 percent opacity, for example, to "dodge that question."

To get the player to interact with the totality of the environment, cues are necessary.

"Provocation on the Y-axis is important ... it's hard to get people to realize that that's a whole dimension they can use," said Stephan. People tend not to bend over or get low unless prompted to do so by the experience. Using sound to telegraph what might be approaching them from behind, for example, is also important.

One really interesting insight touched off by an audience question is that while social, shared VR experiences are on the way, they won't necessarily be meeting places that mimic the way the real world works.

Schwartz expects that what takes off will be a matter of "connecting their own roomscale experiences," effectively keeping each player at the center of their own experience. Curtis Hickman of The Void has experimented with multiplayer too, and from his description, what each player perceives is tailored to that player: at one point the experiences diverge, and each player sees the other players dragged away by monsters. Since everyone sees it happening to the others, it can't really be happening to anyone -- yet in a sense it happens to everyone.

Ramping up the experience is also key. Once you've given a player full interactivity with hand presence, you ought not to take it away later in the game; that's like "holding a toy in front of a child that they can't reach, and our inner child is screaming," Schwartz said.

