An overview of choosing participants for focus groups, betas, and usability tests, from defining requirements to screening potential participants.
It is important to test your game with representative users. Talk to them, watch them play your game. You need their independent feedback; after all, you are too close to your own game.
It is just as important to get the right participants for your user tests. You will be using their feedback to change or improve your game, so make sure the participants are the right audience for your game; otherwise you waste time and money.
Would you trust anyone's feedback on your game? It is important to match the feedback with the type of person who gave it. For example: Who said the combat is boring? Is it someone who isn't a fan of your game's genre? Is it a super fan who has spent $1000 in a competitor's title?
You might think that anyone is appropriate for UX or First Time Experience (FTE) testing because you expect the game to be intuitive enough for everyone. In reality, most tests call for a specific audience.
Quick! What is the target audience for your game? If you answered, “Males 15-25,” then you lose!
A demographic is not your audience. If this is how you select your participants, you are unlikely to get useful feedback. Simple demographic information is not enough to tell whether a person will be a fan of your game. Defining and categorizing participants is key to getting trusted feedback from user testing.
A person’s interests and past behavior are much better indicators of whether they will like your game. So, we can define an audience through two major factors: 1. interest, 2. behavior. Interest means the person likes this type of game (genre, theme, platform, etc.). Behavior means their experience or level of familiarity with this type of game.
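As a rough sketch (not from the original article), the two factors can be captured in a simple participant record; the field names below are hypothetical and only illustrate the kind of interest and behavior data you might collect.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantProfile:
    """Illustrative record of the two screening factors for one participant."""
    # Interest: comparable titles this person reports having played.
    comparable_titles_played: list[str] = field(default_factory=list)
    # Behavior: how deeply they engaged with those titles.
    highest_level_reached: int = 0   # progression in the comparable title
    money_spent_usd: float = 0.0     # spend, relevant for F2P titles
    sessions_per_week: int = 0       # how often they play

    def shows_interest(self) -> bool:
        """Interest check: has played at least one comparable title."""
        return len(self.comparable_titles_played) > 0
```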
Here are a few ways of categorizing players based on their behavior, and the types of user tests that are appropriate for each category.
Average users make up the majority of your players. They have experience playing several similar games, or at least one title deeply.
Even though they are called average, they are not one homogeneous group.
Games are generally designed for this audience.
Average users are good for most types of user testing.
Expect participants from this group to require extra prompting to give feedback. They might not want to say anything negative.
Power users are the hardcore super fans of this type of game. They have played at least one similar game deeply and competitively. They have probably spent money in a F2P game.
Power users are focused on the fastest route to power in your game. They want to constantly play and advance.
Power users are best used for tests focusing on meta game, progression, competition, or monetization.
The novice user does not have much experience playing this type of game. They may lack an understanding of genre-specific conventions or mechanics, such as energy systems, random boxes, or turn-based combat.
Since they don’t have the same base level of experience as an average user, they will experience issues that other players won’t.
Novice users are best used for First Time Experience tests or usability tests.
When testing with novice users, expect confusion that is not representative of your main audience.
Influencers are early adopters who play lots of games, stay up to date on gaming news, and talk to lots of people.
They understand what other influencers like and hate. So, they can prevent you from making a big mistake with your audience.
Influencers are best used to give you general feedback about your game. Keep in mind they do not speak for the average user.
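As a quick reference, the recommendations above can be summarized in a small lookup table; this is just my shorthand for the guidance in this section, not an official taxonomy.

```python
# Shorthand summary of the player categories above and the user tests
# each is best suited for (category keys are my own naming).
TESTS_BY_PLAYER_TYPE = {
    "average":    ["most test types (expect to prompt them for candid feedback)"],
    "power":      ["meta game", "progression", "competition", "monetization"],
    "novice":     ["first time experience (FTE)", "usability"],
    "influencer": ["general feedback (not representative of average users)"],
}
```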
Let’s review some practical tips for screening your participants based on their interests and behaviors.
Focus on what comparable titles the participant has played. A comparable title is a game like yours that participants might have played. When someone has played a comparable title, it is a strong indicator that they like that type of game. A good comparable title:
Shares some main gameplay concepts or features with your game
Optionally, has a similar theme or art style to your game
Has a large enough player base that you can find people who have played it
When screening for player behavior, you should focus on what their play habits were with the comparable title. You may want to know how far they advanced, how much money they spent, how often they played, and how competitive or social they were. For example:
How far did they advance: e.g. what level did they reach
How much real money have they spent (for a F2P title)
Avoid being too restrictive with your requirements, or you won't have enough participants. In particular, avoid requiring that participants have played too many comparable titles.
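For example, a screener can accept anyone who has played at least one title from your comparable list instead of requiring all of them; the titles and helper below are hypothetical.

```python
# Hypothetical list of comparable titles for the game being tested.
COMPARABLE_TITLES = {"Comparable RPG A", "Comparable RPG B", "Comparable RPG C"}

def passes_interest_screen(titles_played: set[str], minimum: int = 1) -> bool:
    """Accept respondents who played at least `minimum` comparable title(s),
    rather than rejecting everyone who has not played all of them."""
    return len(titles_played & COMPARABLE_TITLES) >= minimum
```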
Before we run a user test with potential participants, we need to determine their suitability for our test. We do this by asking them questions to see if they meet our requirements. The process has three steps:
1. Identify participant requirements
2. Screen participants with a screening survey
3. Perform follow-up screening
As discussed previously, the participant requirements are: the type of player you want for your test, the comparable titles or interests, and the details of player behavior that identify the player type.
Use these requirements to create a screening survey. Be sure the requirements are concrete. For example, know what level the player should have reached in each comparable title.
Participants who pass the first screening survey are asked follow-up questions. This additional screening step verifies that the participants are members of the target audience and that they are not lying about or misrepresenting their experience. Follow-up screening should double-check key answers from the screening survey; it is OK if the answers are only roughly similar. Be sure to ask another question that shows the participant has knowledge of the games they said they played. Skipping follow-up screening may result in some of your participants being "duds".
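As a minimal sketch of that "roughly similar" check, assuming the survey and the follow-up both asked for the level reached in a comparable title (the tolerance value is arbitrary):

```python
def follow_up_consistent(survey_level: int, follow_up_level: int,
                         tolerance: float = 0.2) -> bool:
    """Rough consistency check between the screening survey and the follow-up:
    the two reported levels only need to be within a tolerance of each other,
    not identical, before we trust the participant's answers."""
    if survey_level <= 0:
        return False
    return abs(survey_level - follow_up_level) <= tolerance * survey_level

# follow_up_consistent(35, 38) -> True; follow_up_consistent(35, 10) -> False
```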
Once you have identified your requirements, created the screening survey, and written the follow-up questions, you need a way of driving traffic to your screening survey. Sourcing users will be covered in a future topic.
Let’s run through the process with an example RPG game. For this example we will focus on one comparable title, Marvel: Contest of Champions. This title was picked because of its similarity to our example RPG game: 1. similar reaction-based combat, 2. similar meta game of collecting and upgrading heroes. We want to run the user test with average users and power users.
Our chosen follow-up question is about the meta game: "What is your top champion's prestige?" Players in Marvel: Contest of Champions spend most of their time upgrading heroes, so prestige is central to the game, and all non-novice players should be able to answer.
The following target levels and champion prestige values were appropriate at the time of writing, in mid-2017.
For average users, we want people who have played for about three months. We have chosen level as the main screening criterion, which gives a target level of 30-40. For follow-up screening, we expect an average user's top champion prestige to be 2500+, or 3500+ for paying users.
For power users, we want people who have reached near-end game. We have chosen a target level of 55+; as of writing, the level cap is 60 and takes about a year or more to reach. For follow-up screening, we expect a top champion prestige of 3500+.
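Putting the example numbers together, a sketch of the resulting screen might look like this; the helper name is mine, and the thresholds are the mid-2017 values quoted above.

```python
def classify_mcoc_respondent(level: int, top_prestige: int, is_payer: bool) -> str:
    """Classify a Marvel: Contest of Champions respondent against the
    mid-2017 example thresholds above (illustrative helper)."""
    # Power users: near end game, with a deeply upgraded roster.
    if level >= 55 and top_prestige >= 3500:
        return "power user"
    # Average users: roughly three months of play; prestige expectations
    # are higher for paying players.
    if 30 <= level <= 40 and top_prestige >= (3500 if is_payer else 2500):
        return "average user"
    return "not a match for this test"
```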
Learn from my mistakes! Here are a few lessons I learned.
In my early screening surveys, I asked players to select the genres they played, instead of asking about a direct comparable title. Asking about genres did not work well: the genres had different meanings and definitions to different people, and the lack of clear, enforced app-store categorization did not help either, with puzzle games being categorized as RPGs.
In further screening survey iterations I gave example games to help clarify the genres. The added examples didn’t seem to help much.
I had always asked what games the participants were currently playing. This consistently produced a wide range of responses, from one of the many games someone happened to be playing to a laundry list of games the person had merely heard of over the past few years. So this question did not always help with participant selection.
It wasn't until later iterations of the screening survey that I started to feel more in control of who the participants were. These surveys asked directly about comparable titles, with follow-up questions clarifying the participants' play habits in those titles. The clarifying questions about play habits were important; they yield much better results than a yes/no answer or a last-played date.
A lack of rigorous follow-up questions was also an early problem, one that resulted in participants who were clearly unfamiliar with the game genre we were testing.
When conducting user tests, we need the right participants so we can trust their feedback. There are two main factors to focus on when screening potential participants. The first is the participant's interests: are they interested in games similar to yours? Have they played games comparable to yours? The second is the person's behavior in those comparable titles: does their level of familiarity with the comparable title match the type of user test you will run?
When screening potential participants, you will have them take a survey to collect enough information to determine whether they are a good fit for your user test. You will then ask follow-up questions to double-check the responses of the participants you are interested in.
I hope these guidelines will make your next user test more productive. Get out there and test your damn game!