FACEIT and Google-designed 'Admin AI' bans 20,000 toxic CS:GO players in six weeks

An artificial intelligence designed to address toxicity in competitive games has banned 20,000 players from Counter-Strike: Global Offensive (CS:GO) as part of its first live implementation.

Chris Kerr, News Editor

October 28, 2019


Christened 'Minerva,' the 'Admin AI' was designed by FACEIT with the help of Google Cloud and Jigsaw, and was trained through machine learning to address toxic behavior at scale. 

As revealed in a post on the FACEIT blog, the AI's first practical implementation focused on identifying and acting on toxic messages in the text chat of CS:GO matches. 

After months of training to minimize the likelihood of false positives, the AI was able to weed out harmful messages "without manual intervention" and act on them by issuing offending players a warning for verbal abuse.

Similar messages in a chat were then marked as spam, while the punishment for repeat offenders became increasingly severe. For those of you with a taste for all things numerical, Minerva analyzed over 200,000,000 chat messages over the past few months, resulting in 7,000,000 being marked as toxic. 
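FACEIT hasn't published Minerva's internals, but the escalation pattern the article describes -- a warning for a first offense, then increasingly severe punishments for repeat offenders -- can be sketched as a simple penalty ladder. Everything below (the penalty tiers, function names) is illustrative, not FACEIT's actual policy:

```python
# Hypothetical sketch of an escalating-penalty tracker.
# The penalty ladder below is an assumed example, not Minerva's real policy.
from collections import defaultdict

PENALTIES = ["warning", "24h ban", "7d ban", "permanent ban"]

offense_counts = defaultdict(int)  # player id -> number of prior offenses

def act_on_toxic_message(player_id: str) -> str:
    """Return the sanction for a player's latest flagged message."""
    count = offense_counts[player_id]
    offense_counts[player_id] += 1
    # Once the ladder is exhausted, stay at the most severe penalty.
    return PENALTIES[min(count, len(PENALTIES) - 1)]
```

A first flagged message would draw a "warning", a second a "24h ban", and so on up the ladder.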

In its first month and a half of activity, the AI dished out 90,000 warnings and 20,000 bans for verbal abuse and spam, with the number of toxic messages falling from 2,280,769 in August to 1,821,723 in September -- a decrease of 20.13 percent.

"In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI," wrote FACEIT, commenting on its future hopes for Minerva.

"We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time. In the coming weeks we will announce new systems that will support Minerva in her training."

You can find out more about the toxicity-quashing AI by checking out the full FACEIT blog post.

About the Author

Chris Kerr

News Editor, GameDeveloper.com

Game Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.
