It's hard to find good playtesters. That's why we shouldn't even try. Instead, embrace the thought that every critic has valid points, and use emotional feedback to find them.
It's hard to find good playtesters. A playtester has to be a lot of things: subjective enough to give valuable feedback, yet objective enough to make it constructive. Skilled enough to spot what's wrong, yet innocent enough to see the game with fresh eyes. Set enough not to be swayed by minor graphics or trinkets, yet fluid enough not to try and push your game into something it's not. Yes, it's hard to find good playtesters.
That’s why I’m suggesting that we don’t.
That’s right. I’m saying that you shouldn’t try to find good playtesters. In fact, you should try to find the worst playtesters ever, the whiny, bitchy, annoying powergamers. Then ask them the right questions.
There's a saying I've come across in my writing: there are no bad critics, only bad critiques. What this means is that every critic has valid points. The hard thing is getting to them. And that's where we often do it wrong in playtesting.
What we do is try to train our playtesters in the ways of critiquing. If we’ve got a set playtesting group we tend to show them our games over and over, and explain the mechanics, how they’re supposed to work, what they influence and why. We discuss our games with our playtesters.
Nothing wrong with that – if what we want is a group of fellow game designers. And playtesting with designers is a great way to get valuable feedback and ideas. But it’s not player playtesting.
So we grab players and then we ask them questions: did this work? Did that work? Was this broken? What did you expect from that action? We're asking them the same types of questions that we want the answers to. And that's wrong.
Because what we’re doing is attempting to train them into game designers. Players aren’t game designers. Most designers are players, but most players aren’t designers and never will be. To ask them game design questions is like teaching a cat to herd sheep. It’s the wrong thing for the wrong reasons.
In fact, there’s only a single question we should ask our playtesters: how did the game make you feel?
A game is an experience. Without the experience there is no game. There might be work (although even that is often some kind of experience), there might be a competition, there might be a rock wall, but there is no game. Games are all about experiences. And experiences are based in emotions.
Quick, tell me the last time you did something that you didn’t have any emotional reaction about. Tell me the details.
If you're anything like me, you'll only be able to recall the very general sense of having done it, if even that. Try to recall the breakfast you ate three days ago – chances are that, unless there was a fight with your significant other, you were at a fancy restaurant, or you're a gourmet, you won't even remember it, or you'll remember only a general sense of "breakfastness", that breakfast mixed with the memories of every other breakfast you've internalized.
Asking our players what they feel when they play our games enables us to zero in on their experience and tailor it to what we want it to be. It also tells us what’s wrong with our games.
If we envision a tense game and the players say that it’s confusing, or boring, or slow, then we’ve failed. Their emotions tell us how close we come to our ideal game design, all without us having to train them to understand squat.
Decoding those emotions, what they mean in terms of game design, that’s a completely different problem, one that we as game designers are tasked with solving. Our playtesters shouldn’t need to concern themselves with it.
That's the great part about looking for emotions: every player knows what they feel. They may not understand why they feel the way they do, and it may be difficult or embarrassing for them to express their feelings, but they do have those feelings.
It's easy to get defensive when you get emotional feedback. Talking about strategy or mechanics is objective; there is a filter of logic and cognition that lies between the playtesting session and our own reactions to it. When we ask for a player's feelings, that filter is gone. We're digging into something that reflects directly upon our work, and we tend to get emotional about it ourselves.
A key thing to remember here is that a player’s emotions are never wrong. The player feels what she feels and if that isn’t what we want her to feel then it’s us that have failed, not the player. Our job here is to dig down into why she feels that way.
One trap that we set when asking for emotions is that we get value words instead of emotions. Value words are things like “good”, “bad”, “I liked it”, “I didn’t like it”.
Value words are difficult because they seem to answer our question: "oh, they liked my game". But that's not what we're asking. Players liking or disliking our game gives us an emotional reaction; it doesn't do jack for our understanding of what's going through their minds. While it might be nice to know that a player liked our game, that's not what we need to know. Liking or not liking an experience doesn't say anything about what that experience is: some people love to be scared, others hate it (that's why you've got horror movies on one shelf and rom-coms on another).
So when you get a value word in return, you need to acknowledge it (“OK, thanks, glad you liked it”), which enables the playtester to get it out of the way, and then ask them for their emotions (“so what did you feel when playing?”).
Another trap is to accept the surface emotions as the entire experience. A player can say that they felt tense, which is an emotion, when the real reason they felt tense is that they were afraid they'd miss out (see the FOMO article). Or they may say that they're bored when they're actually confused (see the article on large decision spaces).
So we need to dig down into their emotions, to find out what it is that causes them. That’s when we start to ask questions about timing (“when did you feel that way/when did you start to feel that way”), dynamics (“what part of the game made you feel that way the most”) and interaction (“did something the other players/the game did make your feeling stronger”).
Note that we're not asking about specific mechanics or resolutions. We're still asking about feelings, and we're doing our damnedest to avoid leading questions ("did this part make you feel good/bad") or closed questions (questions that can be answered with a single word, mostly yes or no).
Ok, so we get a bunch of emotions. And some of them are what we expect them to be, while others aren’t. And we look at it all and nothing makes sense.
That's because unless we're very sure of who our target audience is, and playtest with only them, we're going to get a bunch of noise. We're going to get feedback from people who won't like our game no matter what.
To get rid of that we need a means to remove the wrong audience replies. That’s where bucketing or clustering comes into the picture.
Try to sort your answers into buckets based on what people felt and why they felt it. You’ll likely find that some people will get your game immediately, even if it’s imperfect, while others will have definite problems. For example, if someone feels frustrated because the game felt too fast for them, and your game is a racing game, then it’s pretty obvious that they’re not your target audience. But if someone feels frustrated because they failed to overtake the leader no matter how fast they went then there might be a problem with the way your game works.
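To make that bucketing concrete, here's a minimal sketch in Python. The `Response` structure and the racing-game answers are invented for illustration, not data from a real playtest: the point is simply that grouping by feeling plus stated cause separates wrong-audience replies from actual design problems.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Response:
    player: str
    feeling: str  # the emotion the player reported ("frustrated", "tense", ...)
    cause: str    # why they felt it, summarized in a few words

# Hypothetical answers from a racing-game playtest (invented for illustration)
responses = [
    Response("A", "frustrated", "the game felt too fast"),
    Response("B", "frustrated", "couldn't overtake the leader"),
    Response("C", "frustrated", "couldn't overtake the leader"),
    Response("D", "tense", "the final lap was close"),
]

# Bucket by (feeling, cause): each bucket collects the players who gave that answer
buckets = defaultdict(list)
for r in responses:
    buckets[(r.feeling, r.cause)].append(r.player)

for (feeling, cause), players in buckets.items():
    print(f"{feeling} because '{cause}': {len(players)} player(s)")
```

A bucket like "frustrated because the game felt too fast" points at the wrong audience for a racing game; a bucket like "frustrated because they couldn't overtake the leader" points at a design problem worth digging into.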
Getting at players' emotions isn't easy. Going from emotions to problems with your game is harder. I've found that breaking it down into steps makes it easier.
First I try to cluster the answers and look for patterns. If there’s a bunch of players who say “tense” and then scattered answers of other things, then I can surmise that the game is tense and start digging into what makes it tense (to increase the tension). It also allows me to, in later playtests, ask players who don’t find it tense if they felt any tension outright and then follow up with the leading “why not?” (which is totally legitimate here, I’m looking to lead them into thinking about why they didn’t find the game tense).
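That first clustering pass doesn't need anything fancy. A sketch, again with invented answers: counting how often each reported emotion shows up is enough to separate the dominant pattern from the scattered ones.

```python
from collections import Counter

# Hypothetical emotions reported across one playtest round (invented for illustration)
feelings = ["tense", "tense", "tense", "bored", "confused", "tense"]

# Count how often each reported emotion shows up
emotion_counts = Counter(feelings)

# The most common answer is the pattern; the scattered ones are follow-up material
dominant, hits = emotion_counts.most_common(1)[0]
print(f"Dominant emotion: {dominant} ({hits} of {len(feelings)} players)")
```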
Then I try to look at the elements that come to light. Sometimes everything works like I thought it would. Sometimes there are elements that I immediately see need to be tweaked or removed. Mostly there’s a general feeling of “here’s a problem” which I can’t figure out what to do with.
And that’s when I take the game to my game designer buddy playtesting group. Spotting problems is a playtester job, fixing them is for game designers.
This post previously appeared on Wiltgren.com - Helping Writers and GameDevs be Productive. New updates every Monday.