A New Approach to Games Testing

Many video game studios tend to have inefficient and outdated QA practices. I want to encourage a new approach to Quality Assurance in video games.

Michael Hart, Blogger

April 11, 2016

Video games are a multi-billion dollar industry, and the games themselves are complex pieces of software made up of many parts. I'm always disappointed, however, to hear how many video game studios treat their QA department. This doesn't apply to all studios of course; there are plenty out there that do it well, but there are also many that don't. Typically, minimal testing by qualified testers is done during the bulk of the game's development, then once the game hits a certain milestone late in the process, a large number of temporary staff are brought in to test the game under a very strict deadline. Many overtime hours are spent by programmers, content producers and testers alike, as all of a sudden the floodgates have opened and they are copping a barrage of bugs that need to be fixed before going gold. Once the game has shipped, the majority of those temps have their contracts ended.

This is obviously inefficient, and it surprises me that video games still use this waterfall type of model when the majority of other software development has moved on to other methodologies. What I want to do with this post is encourage a different approach to video game QA.

QA is a discipline

Firstly, the QA department should be treated just like any other. QA, especially in video games, is often seen as a less skilled, entry-level position. I want to put a stop to that right now. QA is not an easy job that just anyone can do. QA is a discipline just like programming, or art, or animation, or design, or sound, and it requires its own specific set of skills (not the least of which is a working understanding of all of the disciplines listed above). Your internal testers should be permanent, full-time employees, and be on the development floor(s) with easy access to speak to anyone they need to, not locked away in a basement where the only interaction they have with developers is when they respond to a bug report. They should be trained, qualified and experienced Test Analysts, with degrees or diplomas and/or software testing qualifications (there are several software-testing-specific certifications out there), and their salaries should reflect that. It's just as hard to find talented, experienced test analysts as it is to find talented, experienced programmers and artists. Don't treat QA as a default stepping stone into other disciplines.

QA from the beginning

Your testers should be involved in the development process almost from the beginning, not brought in only when there is something "physical" to test. It all starts with the game design document. Your QA staff should be looking over this document even before development proper begins in order to find potential defects. Don't consider it some kind of criticism of your design; rather, embrace it. Qualified and experienced testers will find potential problem areas in the design just like they will find bugs in the code and art, and these are the kind of things you can easily fix before anything is implemented. It's a lot harder, not to mention a lot more expensive, to discover there were issues with the design after the code has been written and you realise you need to rewrite the code, change the design or ditch the feature entirely. After all, this design document is supposed to be the source of truth for your staff. They are going to need something to refer back to so they know exactly how something is supposed to work or look. If it is inaccurate, those inaccuracies multiply down the line. Giving your testers this early access to the design document also allows them to start designing their test cases from an early stage.

Note that the testers are not there to suggest design changes or critique the design document; they are there to spot defects. Exactly how those defects get fixed is entirely up to the senior designers.

While it's true that many games tend to deviate from what was originally in the design document, this is another process that needs to change. If the design is modified, then the design document must be modified with it to reflect the new functionality. Your design document is your source of truth and your staff, especially your testers, should be relying on it as a reference point so they can easily know if what they are noticing is actually a bug or not. Don't neglect your design document! Keep it up to date!

QA as part of development

Once full development actually starts, your QA staff are right there working side by side with everyone else on whatever the feature happens to be. They are designing the test cases they will need to execute on the feature while it is still in development, working in conjunction with the programmers and content producers (who are providing their input along the way) to make sure everything is adequately covered. There should be QA sub-tasks created under the main User Story/Task/Feature Request/whatever you want to call them, the testing is carried out and tracked, and that ticket cannot be closed off until QA has given it their wax seal of approval. Once it's complete, QA will then need to determine whether they should add regression tests for the feature to regular manual test runs, or whether the feature may be a candidate for automation. Regular manual test runs and automated tests are performed to spot regressions as soon as possible.
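
To make the automation side concrete, here's a minimal sketch of what one of those automated regression tests might look like in Python, using the standard unittest module. The Inventory class is a hypothetical stand-in for the feature under test rather than a real engine API; the point is simply that the test cases QA designed alongside the feature live on as regression checks that run against every build.

    import unittest


    class Inventory:
        """Hypothetical stand-in for the feature under test (not a real engine API)."""

        def __init__(self, capacity=20):
            self.capacity = capacity
            self.items = []

        def add(self, item):
            # Reject the item if the inventory is already full.
            if len(self.items) >= self.capacity:
                return False
            self.items.append(item)
            return True


    class InventoryRegressionTests(unittest.TestCase):
        """Test cases designed alongside the feature, kept as regression checks."""

        def test_add_item_within_capacity(self):
            inv = Inventory(capacity=2)
            self.assertTrue(inv.add("health potion"))

        def test_add_item_rejected_when_full(self):
            inv = Inventory(capacity=1)
            inv.add("sword")
            # Guards against a regression where a full inventory silently accepted items.
            self.assertFalse(inv.add("shield"))


    if __name__ == "__main__":
        unittest.main()

A suite like this runs automatically against every new build, so a regression on the feature is flagged the day it's introduced rather than months later.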

This should be happening with every feature that is developed, not just in the game itself but in any internal tools that may be required, whether those are simple Maya plugins or a complex suite of internally developed content tools. It's especially important for QA to cast their eyes over these because they are being rolled out to the rest of the development team, and if there is an unspotted critical bug that prevents people from working, you are losing a lot of productivity while your tools team frantically tries to track the issue down and fix it. QA in this case should be the gatekeeper, responsible not only for the testing but also for the rollout.
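
As a rough illustration of that gatekeeper role, the sketch below shows a release gate that refuses to publish a new build of an internal tool unless its automated test suite passes first. The paths, the tool name and the pytest command are assumptions made for the example, not a real studio pipeline.

    import shutil
    import subprocess
    import sys
    from pathlib import Path

    # Hypothetical locations: the freshly built tool and the share it gets deployed to.
    TOOL_BUILD = Path("build/animation_visualiser")
    DEPLOY_DIR = Path("//studio/tools/animation_visualiser")


    def tests_pass() -> bool:
        # Run the tool's automated test suite; a non-zero exit code means failures.
        result = subprocess.run([sys.executable, "-m", "pytest", "tests/"], check=False)
        return result.returncode == 0


    def roll_out() -> None:
        # QA owns this gate: the new version is only published once the suite is green.
        if not tests_pass():
            sys.exit("Rollout blocked: the regression suite failed. Triage the bugs first.")
        shutil.copytree(TOOL_BUILD, DEPLOY_DIR, dirs_exist_ok=True)
        print(f"Rolled out {TOOL_BUILD} to {DEPLOY_DIR}")


    if __name__ == "__main__":
        roll_out()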

Especially when developing content tools, please don't fall into the trap of judging a bug based on when it may have been introduced. It's all too easy to pass a bug off if it's already in the current build of the tools, and perhaps has been for some time, and the content producers either haven't complained too loudly about it or have been working around it. Judge the bug on its own merit, based on its severity and its impact on the development staff. It doesn't matter if the bug has been in there since version 0.01 and has only just been reported now; it should still be fixed if it's causing your content producers headaches. Remember, QA are speaking to and working closely with everyone under this model, and they know what is and isn't causing issues. If they don't believe the next version of the animation visualiser should be rolled out without a fix for a bug that's been around for a few versions and costs animators ten minutes a day to work around, pay attention to them. Content producers tend not to want to stir the pot too much unless it's an obvious critical blocker (i.e. the tool doesn't work), so you need to rely on your QA staff to tell you exactly how severely the bug is affecting them. If the bug is having a negative impact on someone else's work, it should be fixed; if the bug isn't causing any real loss of time, put it into the backlog. But please don't automatically shove a bug into the "maybe in a future version" pile based on how long it's been occurring.

With every feature that is developed, QA is there every step of the way. Some teams may decide to adopt a more "Agile" approach and actually embed the testers into their respective development squads. There's nothing wrong with doing that, especially if your studio is a heavily Agile environment, but I don't believe it's entirely necessary. Either way, having QA working beside everyone else ensures quality on the features as they are being built, and regular test runs and automation spot any regressions that may occur. Ultimately it means far fewer bugs, less overtime and less hair pulling towards the end of the project. Your test analysts still report to your test leads and managers, who remain their direct line managers and task coordinators, but who also support their test analysts, remove QA blockers, stay on top of the overall testing goals and objectives, liaise regularly with the development and content leads, prioritise testing tasks, keep their teams motivated and focused, and even assist in the testing effort if need be.

The end of the tunnel

So your project has reached the point where you need more eyes looking at it, whether you consider this phase alpha, beta, a public/private test or whatever else you decide to call it. Note that I did not say that adopting the above approach means you won't need this period; you absolutely still do. Your internal QA team can only be so large and could not possibly spot everything a larger team of testers or players could uncover. However, the role of your internal team doesn't stop here.

This is the phase where they will be keeping on top of the bugs as they come in from the larger (often external) team, effectively triaging them before they ever reach the development leads. They are the ones most familiar with the bugs that have previously been entered into the database, and if they spot duplicates or reports that are not bugs, they can close them off on the spot. When genuine reports do come in, your internal team then works quickly to reproduce the problem within your environment, so your programmers or content producers can fix them more efficiently. These kinds of bugs are often difficult to reproduce, so you need your internal QA team jumping onto them and trying to track them down as fast as possible.

Under a traditional development methodology, because minimal testing has been done, this is often the point where the proverbial excrement hits the fan. Suddenly there is an avalanche of hundreds or thousands of bugs and you are buried so deep in the reports that you and the rest of your team can only start digging and hope the direction you are digging in is up. In reality this shouldn't be a surprise; if you're leaving all of your testing until the end, that's when you should expect all of the bugs to come in. But it seems this is a mistake that's repeated over and over, because there's never enough time allocated at the end of the project to test properly, to fix all of the bugs that do get raised, and to properly verify those fixes. More and more games are either being delayed (sometimes multiple times) because the amount of time needed during this period was misjudged, or are shipping with major bugs - some of them known, but the scarier part is that a lot of them are unknown until the paying customer gets their hands on them. Testing is treated as an afterthought - "Oh yeah, we should probably test this now" - rather than as a key part of the development process every step of the way. The result is a poor quality product.

However, since your QA team has been testing from the very beginning and working alongside the rest of the development team as features were implemented, the overall volume of bugs coming in during this phase is a lot lower and a lot more manageable. This ultimately means less overtime spent during the typical crunch period at the end of the project, less stressed and happier staff, and a much higher chance of the project actually shipping on time. There are also fewer bugs thrown into the "won't fix" pile, or the dreaded "day 1 patch" pile at the end of the project simply because you ran out of time to fix them. This is because many of these bugs would have already been raised and fixed during the project's development, when it was far easier, faster and less costly to fix them.

Metrics

I know some studios put KPIs on their testers that effectively tell them to raise a certain number of bugs per day. With all due respect, I find that a nonsensical metric, as it ultimately leads to inaccuracies. If you impose this kind of thing, testers will deliberately hold onto bugs once they have reached their daily quota so they can log them the next day, deliberately raise duplicates at the end of the day if they don't have enough, or focus on raising frivolous minor bugs in low-priority areas when they should be concentrating first on the more critical ones that may be harder to reproduce. You don't ask your programmers to write a certain number of lines of code per day, or your artists to create a certain number of triangles per day, after all. The number of bugs a tester finds is in no way an indication of the amount or quality of work they do.

Under the model I propose above, your testers' time is actually tracked in a similar way to your other developers' time, through the tickets that are used to create features. The testers will be logging their time (including time spent designing and writing test cases) against these tickets just like everyone else. Any time the testers spend not working on those tickets - which will mostly be when they are carrying out test runs or exploratory (ad hoc) testing - can be tracked in almost all modern test case management tools.

A metric I prefer to use is the "customer satisfaction" metric. If the customer (which can be anyone from the content producers using the tools to senior management or the publisher reviewing the game, or even the end users) is happy with the quality we have delivered, then we have done our job. If they spot zero or very few bugs, we have also done our job. I don't like saying "how many bugs did you find today?"; I prefer to say "Are our customers pleased with what we gave them?". This is a more positive approach that puts responsibility on your testers. They aren't under pressure or wasting time trying to raise an arbitrary amount of bugs, but they do know they are responsible for any major bugs that might slip through the cracks, so they take extra care to make sure all of their bases are covered. They know that if a major bug pops up on a system or feature that they were responsible for testing, there will be some explaining to do.

The "quality" part of Quality Assurance does not mean "quantity" of bugs. Quality is not measured by the number of bugs found and fixed, it is measured by the number of bugs not present when the customer uses the product. It is measured by how satisfied the customer is when they use the product. They don't care if there were 50,000 bugs fixed during the development of the game, they only care about whether the game works the way they expect it to when they play it. That is what quality is.

Acceptance Criteria

Your QA leads and managers should be required to give sign-off on the delivery of any major milestone. They base their sign-off on the acceptance criteria - a previously agreed list of quality requirements a delivery must meet. They will compose a test summary report, or TSR, that details the results of the most recent test runs, making specific note of the tests that failed and any bugs that are still unresolved. They then use this as the basis for deciding whether the acceptance criteria have been met. The acceptance criteria can change from milestone to milestone, so it's important that the expected quality standard is agreed with all of the development and QA leads well in advance.
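
To show what that sign-off decision might look like mechanically, here is a small sketch that checks a test summary against agreed acceptance criteria. The fields and thresholds (pass rate, open blockers, open criticals) are illustrative assumptions; the actual criteria would be whatever the development and QA leads agreed for that particular milestone.

    from dataclasses import dataclass


    @dataclass
    class TestRunSummary:
        """Results pulled from the most recent test runs for the TSR (illustrative fields)."""
        total: int
        passed: int
        open_blockers: int
        open_criticals: int


    def meets_acceptance_criteria(run: TestRunSummary,
                                  min_pass_rate: float = 0.95,
                                  max_open_criticals: int = 0) -> bool:
        # The thresholds themselves are agreed with the leads before the milestone.
        pass_rate = run.passed / run.total if run.total else 0.0
        return (pass_rate >= min_pass_rate
                and run.open_blockers == 0
                and run.open_criticals <= max_open_criticals)


    if __name__ == "__main__":
        # Example: a 96.9% pass rate, but one open critical bug blocks sign-off.
        summary = TestRunSummary(total=420, passed=407, open_blockers=0, open_criticals=1)
        print("Milestone sign-off:", meets_acceptance_criteria(summary))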

Senior designers/producers/tech leads/management would ultimately have the final say in whether a milestone ships with known issues. However, it is understood that they, not QA, are the ones who will be held responsible for any ramifications of that decision. Under this model of testing, however, the number and severity of these kinds of known issues are drastically reduced.

Your QA team are professionals

In short, you should not try to cut corners with QA on your game project. You might consider working the way I propose above to be a bit of a waste of time and money, but I will say that this extra time and money is well spent. If you run into these issues late in the project instead of early, which is so very often the case, it will cost you a lot more time and money. Your internal QA team is an important resource, just as important as any of your other teams, and should not be treated as some kind of unskilled or entry-level purgatory where hopeful programmers or content producers go when they can't land a job in their preferred discipline. Your QA team is not a group of hopefuls you can hire for minimum wage for a few months and then let go. Your QA team are full-time, trained professionals and should be treated and paid as such.

Your QA team is there to help you. However, you must restructure your development methodology, rethink your strategies and possibly change your mindset for them to do their job properly. I want to encourage studios to drop the methods video games have traditionally used and to widely adopt a new approach to QA. It will ultimately mean a better quality product at the end of the day. Are you willing to take the plunge?
