A QA Tester makes a case for the unification of Automated & Manual testing.
One of the frequent conversations in our industry right now is Automated versus Manual Testing. While both have strengths and weaknesses, I believe that unifying the two results in a better product than relying on either one alone.
Manual Testing is the practice of having a QA Tester/Engineer/Ninja go through the product by hand: clicking every link, testing every input, and exercising each part of the product themselves. The tester works through use cases, applies positive and negative testing methods, and crawls the full product in an effort to find things that break, or that could simply be better.
What Automated Testing brings to the table is the ability to hand that work to a script or other program that performs the mouse clicks, input, navigation, and more. Many of these tests can run simultaneously in the background and cover a wide array of use cases, reporting issues as they complete or fail their criteria. These automated tests free up time from repetitive tasks that a tester can apply elsewhere.
Both sound great on paper, but what’s the catch? The catch for Manual Testing is time. It takes a lot of time (and therefore money) to manually test a product, and that time grows rapidly with the size and complexity of the product. From single testers to dedicated teams, time is the concern that always comes up with manual testing, because it is an investment you have to make over and over again.
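As a purely illustrative sketch of what such a script might look like, here is a minimal browser-driven check written with Python and Selenium. The URL, element IDs, and expected text are hypothetical placeholders, not anything from a real product; the point is simply that the clicks, keystrokes, and pass/fail reporting are handled by code rather than by hand.

```python
# Minimal illustrative automated UI check using Selenium (Python).
# The URL and element IDs below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_form_accepts_valid_input():
    driver = webdriver.Chrome()
    try:
        # Navigate to the page under test.
        driver.get("https://example.com/login")

        # Simulate the clicks and keystrokes a manual tester would perform.
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("correct-horse")
        driver.find_element(By.ID, "submit").click()

        # Fail automatically if the expected result never appears.
        banner = driver.find_element(By.ID, "welcome-banner")
        assert "Welcome" in banner.text
    finally:
        driver.quit()


if __name__ == "__main__":
    test_login_form_accepts_valid_input()
    print("PASS: login flow behaved as expected")
```

A check like this can run in the background on every build while a human tester spends their time on the things a script would never think to try.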
Automated Testing is much quicker once the scripts are written (which takes time in and of itself); once properly configured, it can cover more cases in a shorter amount of time. The main issue facing Automated Testing is one critical to its core: its automatic nature. An automated test can only test what it is coded to test. To get full coverage of a particular use case, you still need manual testing to inform the automation, refining and modifying each test to be more inclusive in the future, and that process requires continuous updating. An automated test lacks the “insight” of a tester who deviates from a use case because they sense there may be an issue when they do. It is this deviation, this need to dig deeper than the acceptance criteria, that gives manual testing its strength.
The weaknesses of each method are covered by the strengths of the other, so it stands to reason that we should use Manual and Automated testing in tandem. Being able to run automated scripts that report in the background, while a tester manually investigates the product for edge cases, negative testing, or potential improvements, sounds pretty much like what we in the QA business call “the dream.”
This dual approach offers coverage that is impressive not only in breadth, but also in depth. Alleviating some of the time pressure of Manual Testing lets the tester focus more on ways the product could be improved, rather than just making sure it doesn’t break.
The quick, repeatable nature of automated tests also allows for consistent testing across many areas, which provides broad coverage and higher overall code confidence. Automation can be triggered each time a new build is created, or on a set schedule, creating a degree of flexibility that is not always available with manual testing (due to time constraints, resourcing, etc.).
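As a hedged illustration of that repeatability, the sketch below uses pytest parametrization to run the same check against many inputs. The function and values are invented for the example, but a suite built this way can be pointed at every new build, or run on a schedule, and will behave identically each time.

```python
# Illustrative only: a tiny hypothetical "damage calculation" checked
# consistently across many inputs with pytest parametrization.
import pytest


def apply_armor(damage: int, armor: int) -> int:
    """Hypothetical game rule: armor absorbs damage, never below zero."""
    return max(damage - armor, 0)


@pytest.mark.parametrize(
    "damage, armor, expected",
    [
        (10, 3, 7),    # typical case
        (10, 10, 0),   # armor fully absorbs the hit
        (10, 15, 0),   # negative results are clamped to zero
        (0, 5, 0),     # no damage dealt at all
    ],
)
def test_apply_armor(damage, armor, expected):
    # Each case runs the same way on every build, every time.
    assert apply_armor(damage, armor) == expected
```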
In the not-too-distant future, we’ll also see more use of Machine Learning, an increasingly important component of Automation. With the potential not only to monitor users in order to generate test scripts, but also to create “self-healing” scripts, Machine Learning can give automation the intelligence it needs to overcome its current limitations. By monitoring user behaviour, you can accumulate the data needed to assign risk and look for anomalies. Heatmaps help find bottlenecks or problem areas, which in turn determine which tests are needed, where they are needed, and any changes in scope suggested by how users actually interact with the product. These benefits integrate naturally with a combined Manual & Automated Testing process to drive the product’s quality even higher.
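To make that concrete, here is a deliberately simple sketch of the kind of analysis such a system might start from: flagging screens whose interaction counts deviate sharply from the norm so testers know where to focus. The data and threshold are invented for illustration, and a basic statistical check like this is only a stand-in; a real system would use far richer signals and proper ML models.

```python
# Illustrative sketch: flag "anomalous" areas of a game from simple
# interaction counts, so testing effort can be focused there.
# The data and the z-score threshold are invented for illustration.
from statistics import mean, stdev

# Hypothetical heatmap-style data: interactions recorded per screen.
interactions_per_screen = {
    "main_menu": 1200,
    "inventory": 1150,
    "settings": 1100,
    "crafting": 1180,
    "trading_post": 4800,  # unusually high traffic -- a potential hotspot
}

counts = list(interactions_per_screen.values())
avg, spread = mean(counts), stdev(counts)

# Screens more than 1.5 standard deviations from the average become
# candidates for extra (or newly generated) automated tests.
for screen, count in interactions_per_screen.items():
    z_score = (count - avg) / spread
    if abs(z_score) > 1.5:
        print(f"Investigate '{screen}': interaction count {count} "
              f"(z-score {z_score:.1f}) deviates from the norm")
```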
Automation’s recent explosion in demand highlights the increased focus companies are placing on building higher-quality products, which is great for our industry; but I believe even greater products come from using manual and automated testing together.
With quality as our chief concern, it’s hard to ignore the benefits and coverage that come from integrating Automated and Manual testing, allowing QA to help polish the product to an even greater shine.