GDC: Riot Experimentally Investigates Online Toxicity

A summary of Jeffrey Lin's (Lead Designer of Social Systems for League of Legends) standing-room-only talk at GDC. Dr. Lin described the empirically based research his team has done investigating the nature of toxic player behavior and how to mitigate it.

Jim Cummings, Blogger

March 31, 2013

6 Min Read

This and other posts can be viewed at Motivate Play.

 

Riot Games has gotten a fair amount of press in recent months regarding their empirically based research on the nature of “toxic” player behavior in League of Legends.  As a result, I wasn’t surprised to find standing room only for Jeffrey Lin’s (Lead Designer of Social Systems for LoL) talk on research-informed measures for managing toxic behavior in online games.

Lin opened with the common notion that online gameplay has an inherently toxic element that must simply be accepted.  However, as this assumption is costly (players leave due to toxicity), Lin and his team – including several Ph.D.s in fields ranging from cognitive neuroscience to human factors, who are themselves gamers – have sought to challenge it.  Working with designers, marketing, UI, and other production staff, the team of research specialists has conducted a series of empirical in-game studies investigating the nature of toxicity and features that might mitigate it.

Riot first constructed “behavior profiles” for individual players, examining the severity of toxic offenses across game sessions.  They found that severe toxicity in a given player is rare; rather, many games seem toxic because a single player is having an uncharacteristically “bad day”.  This led the team to infer that toxicity may perpetuate through a ripple effect, as negativity fleetingly spreads from one player to others.
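To make the idea concrete, here is a minimal sketch of what such a per-player behavior profile might look like. This is my own illustration, not Riot's pipeline: the field names, the 0-1 severity score, and the "bad day" thresholds are all invented for the example.

```python
# Hypothetical sketch of a per-player "behavior profile": aggregate
# toxicity severity across sessions and flag whether a player's worst
# session is an outlier ("bad day") relative to their typical behavior.
# Field names and thresholds are assumptions, not Riot's actual schema.
from collections import defaultdict
from statistics import mean

def build_profiles(reports):
    """reports: iterable of (player_id, session_id, severity) tuples,
    where severity is a 0-1 toxicity score for that session."""
    sessions = defaultdict(dict)
    for player_id, session_id, severity in reports:
        # keep the worst severity observed per session
        prev = sessions[player_id].get(session_id, 0.0)
        sessions[player_id][session_id] = max(prev, severity)

    profiles = {}
    for player_id, per_session in sessions.items():
        scores = list(per_session.values())
        profiles[player_id] = {
            "sessions": len(scores),
            "mean_severity": mean(scores),
            "max_severity": max(scores),
            # a max far above a low mean suggests a one-off "bad day"
            # rather than a chronically toxic player
            "bad_day": max(scores) > 0.8 and mean(scores) < 0.2,
        }
    return profiles

if __name__ == "__main__":
    demo = [("p1", "s1", 0.05), ("p1", "s2", 0.9), ("p1", "s3", 0.1),
            ("p2", "s1", 0.7), ("p2", "s2", 0.75)]
    for pid, prof in build_profiles(demo).items():
        print(pid, prof)
```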

To investigate this idea the researchers conducted an experiment in which cross-team chat – one of the main venues for negative interactions – was made optional for individual players.  And indeed, they observed a significant decrease in all measures of toxicity (offensive language, obscenity, and displays of negative attitudes).  Moreover, the overall use of chat remained essentially unchanged (46-47% of games included no chat, both before and after), suggesting the drop was not simply due to players chatting less.  Lin and team therefore concluded that shielding players from toxic behavior can in fact prevent it from spreading.
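For readers curious how such a before/after difference in toxic-game rates might be sanity-checked, below is a sketch of a standard two-proportion z-test. The counts are invented placeholders, and the talk did not specify which statistical methods Riot actually used.

```python
# A minimal sketch of testing a before/after difference in the rate of
# games flagged as toxic, via a two-proportion z-test. The counts below
# are invented placeholders, not Riot's data or Riot's actual analysis.
from math import sqrt, erfc

def two_proportion_z(toxic_a, n_a, toxic_b, n_b):
    """Return (z, two-sided p) for the difference in toxic-game rates."""
    p_a, p_b = toxic_a / n_a, toxic_b / n_b
    pooled = (toxic_a + toxic_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# Placeholder counts: toxic games out of total, before vs. after opt-in chat.
z, p = two_proportion_z(toxic_a=5200, n_a=40000, toxic_b=4300, n_b=40000)
print(f"z = {z:.2f}, p = {p:.4g}")
```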

Following this, the team wondered if toxicity and attitudes about it could be changed by engaging players regarding their negative behavior.  This led to the introduction of LoL’s Tribunal – a system by which the player community votes on whether a given player’s behavior should be sanctioned and on the severity of any punishment (usually in terms of how many days a ban should last).  Lin noted that as of two weeks ago, the Tribunal had registered 105 million votes and, perhaps more impressive, had led to 280,000 reformed players (those who have been punished previously but are currently in positive standing).  With regard to the “accuracy” of these social sanctions, Lin also noted approximately 80% agreement between the community and Riot’s in-house team (with the in-house team actually being the more severe of the two parties).
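As a toy illustration of that agreement figure, the sketch below compares Tribunal majority verdicts against in-house verdicts on the same cases, also counting how often the in-house team was the stricter party. The case data is fabricated for the example.

```python
# A toy sketch of the ~80% agreement figure Lin cited: compare Tribunal
# majority verdicts with in-house reviewer verdicts on the same cases.
# The case list is illustrative fake data, not Riot's.
def agreement_stats(cases):
    """cases: list of (tribunal, riot) verdict pairs, 'punish' or 'pardon'."""
    agree = sum(1 for t, r in cases if t == r)
    # Disagreements where the in-house team punished but the Tribunal
    # pardoned, i.e. the in-house team was the stricter party.
    riot_stricter = sum(1 for t, r in cases if t == "pardon" and r == "punish")
    return agree / len(cases), riot_stricter

cases = [("punish", "punish")] * 8 + [("pardon", "punish"), ("pardon", "punish")]
rate, stricter = agreement_stats(cases)
print(f"agreement: {rate:.0%}, in-house stricter in {stricter} cases")
```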

Lin then explained that, up to that point, players punished with toxicity-related account bans in LoL were typically given vague notifications – messages that stated the length of the ban but offered little detail on why the sanction was being imposed.  To address this, the team conducted a third experiment in the series, investigating whether explicit feedback on past behavior would increase the reform rate.  All banned players were sent Tribunal reform cards providing greater detail on the player’s offense.  Not only did reports of toxic behavior decrease afterward, but when offenders went to the forums to complain about how a particular behavior had led to a ban, the community generally agreed with the punishment.  What’s more, according to Lin, penalized players have written in to the moderators, apologizing for their behavior and asking for guidance on how to reform and prevent future transgressions.

Sample League of Legends forums comments shared by Dr. Jeffrey Lin of Riot Games. No copyright of slide contents claimed by Motivate Play.

Finally, Lin closed with a quick summary and partial report on one of the team’s most recent efforts, a study dubbed the “Optimus Experiment”. Reflecting on the psychological literature on priming effects, Lin and colleagues wondered if it might be possible to prime players so as to reduce toxic behavior.  To explain the concept to the non-academic audience, he noted that brief exposure to the color red can cause people to perform relatively worse on an exam and that exposure to words related to the elderly can result in people walking more slowly (likely referring to this work on color association and Bargh’s renowned experiments on stereotype priming).

The experiment was a 5 (information category) x 3 (color) x 4 (information location) factorial design, in which players received different types of game-related information in different game screens.  Category types included positive player behavior stats, negative player behavior stats, self-reflection notes, fun facts and a control (general gameplay tips).  The font colors for these messages included red, blue (thought to be associated with creativity) and white (control).  Message display location conditions included the loading screen, in-game, both, and none.
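As a rough illustration, the sketch below enumerates the 60 cells of that 5 x 3 x 4 design and assigns players to conditions deterministically. The condition names follow the talk; the hashing scheme is purely an assumption on my part, not Riot's method.

```python
# A sketch of how the 5 x 3 x 4 Optimus design could be enumerated and
# players stably assigned to one of its 60 cells. Condition names follow
# the talk; the hash-based assignment is an assumption for illustration.
from itertools import product
from hashlib import sha256

CATEGORIES = ["positive_stats", "negative_stats", "self_reflection",
              "fun_facts", "control_tips"]
COLORS = ["red", "blue", "white"]
LOCATIONS = ["loading_screen", "in_game", "both", "none"]

CONDITIONS = list(product(CATEGORIES, COLORS, LOCATIONS))
assert len(CONDITIONS) == 5 * 3 * 4  # 60 cells

def assign(player_id: str):
    """Stable assignment: hash the player id into one of the 60 cells."""
    digest = sha256(player_id.encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

print(assign("summoner#1234"))
```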

Lin briefly shared, for the first time, a few of the results from the Optimus study.  One interesting finding was that showing a red message about negative behavior during the loading screen led to a much larger decrease in toxic behaviors (in terms of attitude displays, abuse, and offensive language) than did the exact same message in a white font.  Additionally, showing a blue message about positive, cooperative behavior during the loading screen also led to a decrease in negative behavior, while no effect was observed for the same message in white.  And, interestingly, when the question “Who will be the most sportsmanlike?” (a positive behavior message) was presented in red, the toxic behavior metrics all actually increased.

Lin was quick to note that these studies are just the beginning, with a number of potential questions about the nature of toxic player behavior still to be examined. For instance, he briefly mentioned that the observed changes in behavior may be due to the spotlight effect, an assumption that further research could test more precisely.

Together, these results have led Lin and colleagues to conclude that players are not innately toxic and that context is key for shaping behavior in online gameplay. To this end, he suggested to the other developers in the audience that it is their responsibility to help their players – to provide the information and mechanics necessary for stepping back from negative behavior or bad choices, rather than to simply remove offensive players from the game.  Altogether, I was quite impressed with his team’s inclination to conduct these studies, as well as with the conclusions they drew from the collective results.  It would seem that Riot, in turning to its scientists for answers, seeks to rout toxicity by understanding and refining the player experience (bottom-up) as opposed to simply extracting offenders wholesale (top-down) – that is, user-centric, psychology-based design and policy rather than a blind “War on Toxicity” lacking nuance.  Indeed, in not only giving scientists a seat at the table but also relying on their insights for important development decisions, Riot is positioning itself as one of the most well-informed designers of player experience, with likely long-term implications for both player retention and revenue.
