Beta testing/replication on video game cognition research

An examination of three studies that failed to replicate past findings on the cognitive benefits of video games.

Wai Yen Tang, Blogger

June 30, 2015

People learn about recent science from the news, and the studies covered are often presented as breakthroughs. Sometimes a study 'contradicts' long-held beliefs or another study, or fascinates us with a novel idea or perspective. These 'breakthrough' studies often get press attention, although when examined closely their claims may be overstated. This happens because academic journals are more likely to accept studies that found a breakthrough than studies that could not find one or that report 'old news' science. This is called publication bias.

Science self-corrects by repeating studies to reaffirm or disconfirm the original findings. This is quite similar to game testing: once a bug is identified, game testers repeatedly recreate the conditions leading to it in order to figure out how it happened and where it can be remedied in the code. Unfortunately, beta testing scientific findings is not on every scientist's mind, as they are pressed to push the boundaries of the unknown. The result is an audience that trusts scientific findings without knowing they have not been beta tested enough. Without beta testing a video game, game-breaking bugs are quickly spotted in the final product. In science, the 'bugs' in the findings are not obvious, so it is up to scientists to replicate them.

It is commonly thought that videogames improve hand-eye coordination, among other cognitive benefits, and the seminal findings were published by Dr. Daphne Bavelier's lab at the University of Geneva. However, Walter Boot and his colleagues (2011) critiqued these findings' methodological shortcomings, putting the conclusions into question. Recently, three studies were published with the purpose of beta testing the original findings. Their results did not come out as advertised by the original findings.

Gamers vs. non-gamers

Fernand Gobet and his UK colleagues compared gamers' and non-gamers' performance on two cognitive tests: the Eriksen Flanker Task (see Wikipedia) and the Change Detection Task. There were no differences in performance between gamers and non-gamers on either test. Let us break down the methodological procedure to understand how they got these results, just as a programmer reads the code to find out why a program produces the output it does. The study had a larger sample size, with 92 participants compared to 35 in Clark et al. (2011); a larger sample increases confidence in the results and lessens the impact of extreme data points (aka outliers) skewing them. They also compared action gamers with strategy gamers, and no difference between them was found either. Their operational definition of gaming is worth examining. They defined gamers based on three questions:

  1. "How many hours a week, on average, do you play video games for?". The responses are categorical from: 0-1, 2-5, 6-10, 11-15, 16-20, 21+. Any participants who played over 0-1 were categorized as gamers.

  2. "On average, what percentage of the games that you play do you complete? (By completed, we mean attaining the highest in-game 'level' or 'rank' or completing the game's storyline." (excluding DLC, sidequests). They categorized 76-100% complete as "experts", 51-75% as "intermediates", 26-50% as "novices" and 0-25% as "control".

  3. "Would you identify yourself predominantly as an action or strategy video gamer?"

These types of questions are typical in cognitive videogame research. Nevertheless, the authors argued that they are also the study's weaknesses. The questions rely on self-report, so we don't know how "skillful" the participants actually are in the game; it is possible they play videogames on easy mode. Second, gamers tend to play many genres, so it is quite rare to find someone who predominantly plays only one. Most importantly, the participants were categorized into two groups (gamers vs. non-gamers), and this grouping does not capture the full range of videogame experience; past studies often compared participants with less than 1 hour of videogame play per week against participants with at least 4 hours per week.
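To make that grouping concrete, here is a minimal sketch (in Python, with made-up responses) of how a dichotomous classification based on question 1 collapses the range of playtime; the data and threshold below are illustrative assumptions, not the study's materials.

```python
# Minimal sketch: dichotomizing self-reported playtime, in the spirit of the
# gamer vs. non-gamer grouping described above. The response bins follow
# question 1; the participants are made up for illustration.
PLAYTIME_BINS = ["0-1", "2-5", "6-10", "11-15", "16-20", "21+"]

participants = [
    {"id": 1, "hours_per_week": "0-1"},
    {"id": 2, "hours_per_week": "2-5"},
    {"id": 3, "hours_per_week": "21+"},
]
assert all(p["hours_per_week"] in PLAYTIME_BINS for p in participants)

def classify(participant):
    # Anyone above the lowest bin counts as a "gamer" -- note that a person
    # reporting 2-5 hours lands in the same group as one reporting 21+ hours,
    # which is exactly the loss of information the authors criticize.
    return "non-gamer" if participant["hours_per_week"] == "0-1" else "gamer"

for p in participants:
    print(p["id"], classify(p))
```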

Full spectrum gamers

Nash Unsworth and his American team addressed the discrete-group problem by comparing results from dichotomous groups against the full range of videogame experience, using a battery of cognitive tests. These tests were grouped into three types: working memory, fluid intelligence, and attentional control. In their first study, they initially analyzed the data by comparing participants who played first-person shooters for more than 5 hours per week against participants who had not played first-person shooters and had played less than one hour per week of other videogame genres. This data restriction excluded nearly 75% of the full data set; in effect, they cut out the middle and looked at the top and bottom of a specific genre. What they found supports previous studies: gamers outperformed non-gamers on a variety of cognitive tests. However, when they analyzed the full data set, the results were much less rosy, with fewer significant results, and those that remained were small. In their second study, they repeated the cognitive testing across multiple universities and with some non-student populations, with a much bigger sample size of 586. Yet again, their results revealed few and small significant effects. Let us break down their operational definition of gaming:

  1. How many hours per week have you played video games over the past year? The responses are categorical: never, 0+ to 1, 1+ to 3, 3+ to 5, 5+ to 10, 10+.

  2. Participants indicated their expertise level on a scale of 1 to 7 for first-person shooters, action games, real-time strategy, puzzle games, role-playing games and music games.

In their second study, they refined the questions into the following:

  1. How many hours per week they played each genre; they were told that, with 24 hours per day and 7 days per week, the maximum possible number of hours is 168. The genres included first-person shooters, action games, real-time strategy, role-playing games and others (including sports).

Their measurement of videogame experience is quite similar to that of Gobet and colleagues and to previous studies, yet their study yielded results not as wonderful as past studies. Again, these questions are based on self-report, so the researchers only have estimates of what kind of experience the participants actually have. Second, they have no data regarding genre expertise except the participants' word. Gobet et al. mentioned earlier in their paper the Elo rating used for chess players. Videogames record players' statistics, allowing players to gauge their performance and abilities at a fine level. If cognitive researchers and game designers could work together to formulate an Elo rating for each genre, it would allow a more refined measurement of videogame expertise. The first challenge in formulating such a rating system is deciding what gets calculated in a game: is the kill/death ratio an appropriate measure of videogame expertise? Does the kill/death ratio, or the to-be-formulated Elo rating, relate to cognitive performance?
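For readers unfamiliar with it, here is a minimal sketch of the standard Elo update rule from chess; a genre-specific videogame rating along these lines is only the speculative idea floated above, and the K-factor and starting ratings below are illustrative assumptions.

```python
# Minimal sketch of the standard Elo update rule used for chess ratings.
# Applying something like this per genre is the speculative idea discussed
# above; the K-factor and starting ratings here are assumptions.
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a, rating_b, score_a, k=32):
    """Return updated ratings; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Example: a 1500-rated player beats a 1600-rated player.
print(update_elo(1500, 1600, 1))  # -> roughly (1520.5, 1579.5)
```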

So far, these results are correlational; the evidence is not enough to render a verdict that videogames improve cognitive performance, nor does it provide evidence for a verdict that videogames do not improve cognitive performance. Their results are thus inconclusive, which means researchers need to gather data and run a statistical analysis that can render a verdict against the idea of videogames improving cognitive performance.
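To see why the extreme-groups restriction described above can make a modest relationship look impressive, here is a minimal simulation sketch; all numbers are made up for illustration and have nothing to do with the actual data sets.

```python
# Minimal simulation sketch (made-up parameters): a weak relationship between
# playtime and a test score produces a larger group difference when only the
# extremes of the playtime distribution are compared.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
playtime = rng.normal(0, 1, n)                  # standardized weekly hours
score = 0.1 * playtime + rng.normal(0, 1, n)    # weak true relationship

def cohens_d(a, b):
    """Standardized mean difference between two groups."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

# Median split: everyone is kept, groups are adjacent.
median_d = cohens_d(score[playtime > np.median(playtime)],
                    score[playtime <= np.median(playtime)])

# Extreme groups: top vs. bottom quartile, the middle half is thrown away.
extreme_d = cohens_d(score[playtime > np.quantile(playtime, 0.75)],
                     score[playtime < np.quantile(playtime, 0.25)])

print(f"full-sample correlation: {np.corrcoef(playtime, score)[0, 1]:.2f}")
print(f"median-split d:          {median_d:.2f}")
print(f"extreme-groups d:        {extreme_d:.2f}")
```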

Training non-gamers

Don van Ravenzwaaij and his Dutch team conducted two experiments examining how training non-gamers on videogames affected their cognitive abilities over a period of days. The cognitive test was the moving dots task (see video). The results from both experiments revealed that training with videogames did not provide a boost in performance on the moving dots task.

Let us break down the methodological procedure. Their first training experiment had 20 participants divided between playing a first-person shooter (Unreal Tournament 2004) and The Sims 2. The participants trained in six sessions; each session started with testing on the moving dots task, followed by two hours of play. Their second experiment was the same as the first except that they doubled the number of participants and doubled the training time per session, from 2 hours to 4 hours of video game time. They also included a control group who did not play any videogames. Yet their findings revealed no differences between groups.

Their statistical analysis included Bayesian inference, allowing the researchers to render a verdict for or against the idea that videogames affect participants' performance on the moving dots task. The Bayesian verdict went against the idea that action video games provide a better boost in performance on the moving dots task than Sims 2 or not playing any videogames.
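To illustrate how a Bayesian analysis can express evidence in favor of "no difference" rather than merely fail to reject it, here is a minimal sketch using a common BIC-based approximation to the Bayes factor for a two-group comparison; this is not the authors' actual analysis (they modeled the task in far more detail), and the scores below are made up.

```python
# Minimal sketch of a BIC-based approximation to the Bayes factor for a
# two-group comparison: a common shortcut for quantifying evidence in favor
# of "no difference". NOT the authors' actual analysis; data are made up.
import numpy as np

def bic_gaussian(residuals, n_params):
    """BIC of a Gaussian model from its residuals (terms shared by both
    models are dropped, which is fine because only the difference matters)."""
    n = len(residuals)
    sse = np.sum(residuals ** 2)
    return n * np.log(sse / n) + n_params * np.log(n)

def bf01_two_groups(a, b):
    """Approximate Bayes factor in favor of H0 (same mean in both groups)."""
    pooled = np.concatenate([a, b])
    bic_h0 = bic_gaussian(pooled - pooled.mean(), n_params=2)   # grand mean + variance
    bic_h1 = bic_gaussian(np.concatenate([a - a.mean(), b - b.mean()]),
                          n_params=3)                           # two means + variance
    return np.exp((bic_h1 - bic_h0) / 2)

rng = np.random.default_rng(1)
action = rng.normal(0.0, 1.0, 20)    # made-up task scores, action-game trainees
control = rng.normal(0.0, 1.0, 20)   # made-up task scores, control group
print(f"BF01 ~ {bf01_two_groups(action, control):.1f} (values above 1 favor the null)")
```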

The authors addressed the limitations of their experiments. Regarding the amount of time spent training with videogames, it is possible that effects may emerge with more time, but they pointed out that past studies found effects with as many or fewer training hours than they used. Regarding whether repeatedly taking the moving dots test might itself affect performance, they pointed out that their Bayesian analyses would have revealed such an effect.

Discussion

The take-home message from these three studies is that failed replications of the cognitive benefits of videogames raise questions about videogames' true potential benefit. Researchers should replicate and repeatedly test these effects, much like GLaDOS' obsession with testing because she can. Aside from repeating tests, improving videogame measurement should help fine-tune the results. Furthermore, researchers should identify and obtain data on key aspects of players' gaming sessions that relate to cognitive abilities; examples could be heat maps of the mouse cursor for strategy players, or measuring their actions per minute. I should note that I picked these three studies from a pool of other cognitive studies that have found significant results, such as Chisholm and Kingstone (2015) or Oei and Patterson (2015). Nevertheless, you should be aware that not all scientific findings get published in academic journals, and failed replication studies are not as sexy and exciting as the next big novel breakthrough you often hear about in the news.
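As one concrete, entirely hypothetical example of such telemetry, here is a minimal sketch of computing actions per minute from a log of input-event timestamps; the log format and function are assumptions for illustration, not any real game's API.

```python
# Minimal sketch: computing actions per minute (APM) from a list of input-event
# timestamps, in seconds since the start of a session. The log format is a
# made-up assumption, not any particular game's telemetry API.
def actions_per_minute(event_times_s):
    """Average APM over the whole session."""
    if len(event_times_s) < 2:
        return 0.0
    duration_min = (max(event_times_s) - min(event_times_s)) / 60.0
    return len(event_times_s) / duration_min if duration_min > 0 else 0.0

# Example: 5 clicks/keypresses logged over 12 seconds of play.
session = [0.0, 2.5, 4.1, 7.8, 12.0]
print(f"{actions_per_minute(session):.0f} APM")  # -> 25 APM
```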

References

Boot, W. R., Blakely, D. P., & Simons, D. J. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00226

Chisholm, J. D., & Kingstone, A. (2015). Action video games and improved attentional control: Disentangling selection- and response-based processes. Psychonomic Bulletin & Review, (pp. 1-7). DOI: 10.3758/s13423-015-0818-3

Clark, K., Fleck, M. S., & Mitroff, S. R. (2011). Enhanced change detection performance reveals improved strategy use in avid action video game players. Acta Psychologica, 136 (1), 67-72. DOI: 10.1016/j.actpsy.2010.10.003

Gobet, F., Johnston, S. J., Ferrufino, G., Johnston, M., Jones, M. B., Molyneux, A., Terzis, A., & Weeden, L. (2014). "No level up!": No effects of video game specialization and expertise on cognitive performance. Frontiers in Psychology, 5. DOI: 10.3389/fpsyg.2014.01337

Oei, A. C., & Patterson, M. D. (2015). Enhancing perceptual and attentional skills requires common demands between the action video games and transfer tasks. Frontiers in Psychology, 6. DOI: 10.3389/fpsyg.2015.00113

Unsworth, N., Redick, T. S., McMillan, B. D., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (2015). Is playing video games related to cognitive abilities? Psychological Science, 26 (6), 759-774. DOI: 10.1177/0956797615570367

van Ravenzwaaij, D., Boekel, W., Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2014). Action video games do not improve the speed of information processing in simple perceptual tasks. Journal of Experimental Psychology: General, 143 (5), 1794-1805. DOI: 10.1037/a0036923
