The Elegance of First-Time User Experience in Valve’s The Lab
A deep-dive into the first few minutes of Valve's VR playground The Lab, examining how it onboards players into room-scale VR invisibly, painlessly, and even joyfully.
Even almost three years post-release, Valve’s The Lab is still my go-to way of introducing people to virtual reality. Why? It’s polished, and funny, and fun, and there’s a ton of very different content to muck around in. But what makes it such an ideal gateway drug for room-scale VR is the total elegance of its first-time user experience.
In the last year, VR games have, by and large, gotten much better at making players fluent in their own controls and mechanics. Beat Saber is the phenomenon it is in part because it’s so easy to pick up the controllers and get cube-chopping. Google has shown continued improvement in Tilt Brush and Google Earth by expanding their tutorials to introduce players gradually to their full functionality. With Creed: Rise to Glory, Survios has finally developed an experience as accessible as it is innovative. This trend is great. But I can think of no game that introduces a new player to the full grammar of room-scale VR as quickly and invisibly as 2016’s The Lab.
So how does Valve do it? Let’s take it step-by-step:
When The Lab finishes loading, you find yourself in a drab, barren environment. A pair of slightly animated figures. A single wall. A spotlight. A few props. A menu. Subtle ambient sound. This isn’t exciting! This isn’t Half-Life VR! What gives?
Actually, the simplicity of this scene is of great benefit to new users. Just having an HMD over your eyes can be overwhelming for some, and almost anyone who hasn’t experienced VR is going to find the realistic parallax a lot to take in. Simple magic is still magic.
So the lack of overwhelming stimulus is a comfort, especially considering how many first-time demos take place in public. Trying to negotiate a bunch of flashing lights and moving parts while aware that there are people you can’t see looking at you... well, grandpa might find that a little intimidating! The simplicity here ensures that a new user isn’t going to have to worry they’re totally screwing up in front of friends and strangers.
But it’s also very functional. That all the detail of the room is consolidated in one direction means there is no confusion about where you should be looking...
...or what needs doing. But those buttons? They’re too far away to reach without physically walking forward. After a few abortive attempts at just pointing at them, players figure out they need to walk forward. So they walk forward, and just like that, room-scale tracking is understood.
Physically walking around a virtual space is not something that anyone was fluent in before room-scale VR. First-timers I’ve put in Beat Saber have about a 50% chance of needing to be told they can take a step. Most everybody I’ve started in Google Earth doesn’t move their body until they decide to lean down to look at a city.
The functionality of the menu ensures that you understand your controllers as ways of interacting directly with the world. There’s no feedback until you physically touch the menu, at which point the controller vibrates and the relevant controller button glows. Soon you’ll learn that the controller has all sorts of abstract functionality too--teleportation, level selection, game-specific functions--but the concept of controllers as hands is introduced first.
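To make that touch-gated idea concrete, here is a minimal sketch of the logic. None of these names (Vec3, Controller, MenuButton, pulse_haptic) come from Valve's code; they are stand-ins for the pattern of "nothing happens until the controller physically overlaps the button."

```python
import math
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def distance_to(self, other: "Vec3") -> float:
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))


class Controller:
    """Hypothetical stand-in for a tracked VR controller."""

    def __init__(self, tip_position: Vec3):
        self.tip_position = tip_position

    def pulse_haptic(self, duration_ms: int) -> None:
        print(f"haptic pulse: {duration_ms} ms")


class MenuButton:
    """A menu button that gives no feedback until it is physically touched."""

    def __init__(self, position: Vec3, touch_radius: float = 0.05):
        self.position = position
        self.touch_radius = touch_radius  # meters
        self.glowing = False

    def update(self, controller: Controller) -> None:
        touching = controller.tip_position.distance_to(self.position) < self.touch_radius
        if touching and not self.glowing:
            self.glowing = True                      # visual: light the button up
            controller.pulse_haptic(duration_ms=20)  # haptic: short buzz on first contact
        elif not touching:
            self.glowing = False                     # inert again -- no hover states, no rays


# Usage: the button stays silent until the controller tip actually overlaps it.
button = MenuButton(position=Vec3(0.0, 1.2, 0.5))
button.update(Controller(tip_position=Vec3(0.0, 1.2, 0.51)))  # within 5 cm -> glow + buzz
```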
After selecting “Play Intro,” the iconic little dudes demonstrate what you’re going to be doing: one of them picks up something and sticks his face in it. The motion blur suggests he’s been sucked into the globe. His friend rejoices, affirming this was the correct action to take. The sequence is basically a real-time three-panel comic.
Compare this approach to, say, the level of abstraction that text-based instructions for this would require: “Grab the mysterious orb. Try, then, to eat it. Instead, get sucked into the world it contains.” Not as clear, right? Text-based or verbal instructions would also need to be localized into dozens of languages.
The goal action is then repeated by the other little dude to reinforce the lesson. Then both controllers start vibrating to call your attention to the blinking button that’ll allow you to get over there.
Pressing the “Navigate” button gives you a lot of feedback: valid teleport locations are reinforced with color, a playspace box, an animated arc, an end-point icon, and a small cylinder that appears above that icon.
I’ve seen a lot of new users get disoriented by teleportation, but never in this room. It’s always very clear where you are, and also clear that there’s no rush to do anything (no music, no larger story, no big moving parts, etc.).
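As a rough illustration of how redundant those cues are, here is a hypothetical sketch (the names are mine, not Valve's) of a teleport-aim update that turns the whole feedback stack on only when the aimed-at spot is valid.

```python
from dataclasses import dataclass


@dataclass
class TeleportVisuals:
    """Redundant cues shown while aiming a teleport (hypothetical names)."""
    arc_visible: bool = False
    endpoint_icon_visible: bool = False
    playspace_box_visible: bool = False
    highlight_color: str = "none"


def update_teleport_aim(target_is_valid: bool, visuals: TeleportVisuals) -> None:
    """Show every cue together when the target is valid; hide them all otherwise.

    The redundancy is deliberate: the color, the arc, the playspace box, and
    the end-point icon all say the same thing, so a disoriented first-timer
    only needs to notice any one of them.
    """
    visuals.arc_visible = target_is_valid
    visuals.endpoint_icon_visible = target_is_valid
    visuals.playspace_box_visible = target_is_valid
    visuals.highlight_color = "teleport-blue" if target_is_valid else "none"


# Usage: a valid aim lights everything up at once.
visuals = TeleportVisuals()
update_teleport_aim(target_is_valid=True, visuals=visuals)
```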
The level orbs are, to me, one of the most beautiful designs in VR. The cubemap textures contrast starkly with the more cartoony Lab-world, and the perspective-shift within the orb invites curiosity, a curiosity which encourages bringing it closer for a better look--the very action you’re supposed to take to trigger its function.
This is elegant design. Exactly what you most want to do is exactly what you’re supposed to do.
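That "bring it closer to look, and looking closer is the trigger" loop boils down to a single proximity check. Here is a hedged sketch of it; the 25 cm threshold and the function name are invented for illustration, not taken from The Lab.

```python
import math


def should_enter_orb(orb_position: tuple, head_position: tuple,
                     trigger_distance: float = 0.25) -> bool:
    """True when the held level orb is close enough to the player's face
    to start loading its world. The inspect gesture and the activate
    gesture are intentionally the same motion."""
    return math.dist(orb_position, head_position) < trigger_distance


# Usage: holding the orb 10 cm from the headset triggers the transition.
print(should_enter_orb(orb_position=(0.0, 1.6, 0.1), head_position=(0.0, 1.6, 0.0)))
```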
When the orb-world loads, you find yourself among photo-real mountains bathing in sunlight, accompanied by the dopiest/cutest robo-dog this side of Aibo. The message here: there are surprises in VR. Good surprises.
You’re then reminded of the teleport buttons by haptic feedback, tooltips, and blinking, but as you look down to re-read the instructions, right in your line of sight are some sticks and a dog. So maybe you put two and two together...
But if you want to play fetch right this very minute, you’ll have to ignore the instructions. It’s a little subversive to do so, but it also reinforces The Lab’s goal to get you playing the way you want to play. The agency is yours.
And the world will respond. Shake the stick and the dog, like a dog will, runs over to play.
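How might the game decide the stick is being "shaken"? One plausible approach (a guess on my part, not a description of Valve's implementation) is to count rapid reversals in the held object's velocity:

```python
from collections import deque


class ShakeDetector:
    """Counts recent sign flips in vertical velocity; several flips in a
    short window reads as 'the player is waving this thing around.'"""

    def __init__(self, window_frames: int = 20, reversals_needed: int = 4,
                 min_speed: float = 0.5):
        self.samples = deque(maxlen=window_frames)
        self.reversals_needed = reversals_needed
        self.min_speed = min_speed  # m/s; ignore tiny tracking jitter

    def update(self, vertical_velocity: float) -> bool:
        self.samples.append(vertical_velocity)
        pairs = zip(self.samples, list(self.samples)[1:])
        reversals = sum(
            1 for prev, cur in pairs
            if prev * cur < 0 and abs(cur) > self.min_speed
        )
        return reversals >= self.reversals_needed


# Usage: feed in per-frame velocities; a vigorous up-and-down wave trips it.
detector = ShakeDetector()
waving = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(any(detector.update(v) for v in waving))  # True once enough reversals land
```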
Two minutes ago a first-time user was probably feeling a little out of their depth: getting strapped into the headset, cut off from the world by headphones, unfamiliar controllers in their hands.
Now they’re playing fetch on some scenic vista with a very eager companion, blissfully unaware that they’ve just gained proficiency in most everything they need to navigate immersive virtual worlds.
Two deeper VR design philosophies glimpsed in these first minutes are worth a closer look.
1. An Abundance of Feedback
The Lab never tells you anything just once. It reinforces what it’s teaching even as it’s teaching it. Most every action you take produces multiple forms of feedback--visual, audio, haptic--and sometimes, as we saw with the teleport, multiple reinforcements of that feedback.
This is a smart practice not because players are stupid, but because the experience of VR is so personal. I’ve put hundreds of people in VR for the first time and the #1 culprit for discomfort in VR is not simulator sickness but the fear that they are doing something incorrectly. It doesn’t help matters that there are so many different learning styles, and so many different ways we relate to our own bodies. The more information a VR experience gives, and the more ways that information is represented, the quicker a player can move from feeling like a student to being a full participant.
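A tiny sketch of that principle: route every interaction event through several feedback channels at once. The FeedbackBus class and the channel names below are illustrative, not anything from The Lab's code.

```python
from typing import Callable, List


class FeedbackBus:
    """Fan one gameplay event out to several feedback channels at once."""

    def __init__(self) -> None:
        self.channels: List[Callable[[str], None]] = []

    def add_channel(self, channel: Callable[[str], None]) -> None:
        self.channels.append(channel)

    def emit(self, event: str) -> None:
        # Every action produces visual, audio, and haptic confirmation, so
        # players with different attention styles each catch at least one.
        for channel in self.channels:
            channel(event)


# Usage: one "grabbed_orb" event highlights the hand, plays a chime, and buzzes.
bus = FeedbackBus()
bus.add_channel(lambda e: print(f"[visual] highlight: {e}"))
bus.add_channel(lambda e: print(f"[audio] chime: {e}"))
bus.add_channel(lambda e: print(f"[haptic] pulse: {e}"))
bus.emit("grabbed_orb")
```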
2. Interchangeability
Most people have a “dominant” hand, but, for the most part, the functionality of our actual human hands is consistent. I can grab my coffee with my left or right hand (or both!) and perform the desired action with it without too much of a problem. This mirroring of functionality might not be right for every VR experience, but it’s a good starting-point for designing interaction.
For one, it’s immersive. I don’t have to further map and metaphorize the controllers, and their most fundamental use--physically interacting with the environment--is the same as my hands. That’s also the first use that any player learns in The Lab.
Secondly, it’s accessible. For players who have use of only one hand, only one mini-game, Longbow, is unplayable. Designing this way also avoids the awkward “I can’t see my hands but I need to switch controllers” moment that new users often have difficulty with.
Third--and the subject of a future post about how expertly this is done in The Lab--designing this way means a lot of the UI has to be diegetic. This means that even when you’re spending time navigating menus, you’re interacting with the world of the experience, rather than just a window or screen.
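To ground the interchangeability idea above in something concrete, here is a hypothetical sketch (nothing here is Valve's API): the grab logic never asks which hand is doing the grabbing, so handedness and one-handed play need no special cases.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Hand:
    """One tracked controller; both hands expose the identical capability set."""
    name: str                        # "left" or "right" -- cosmetic only
    held_object: Optional[str] = None


def try_grab(hand: Hand, obj: str, obj_is_free: bool) -> bool:
    """Grab logic never branches on which hand is asking."""
    if obj_is_free and hand.held_object is None:
        hand.held_object = obj
        return True
    return False


# Usage: a left-handed or one-handed player gets exactly the same behavior,
# because nothing downstream cares which Hand instance did the grabbing.
left, right = Hand("left"), Hand("right")
orb_is_free = True
for hand in (right, left):           # order is irrelevant by design
    if try_grab(hand, "level_orb", orb_is_free):
        orb_is_free = False
```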
If there’s interest, I’d love to spend some more time investigating how the rest of The Lab does VR so well, whether in a more linear experience-by-experience fashion or looking more deeply into the underlying design philosophies. Please share with any designers or developers you know who might be interested, and let me know if you’d like to hear about anything in particular!