
Behavioural Shadowbotting?

Shadowbanning is where an anti-social individual is allowed to continue to communicate, but is the only person who can see their communications. This works well for online posts and chats. But could it be applied to behaviour as well?

Ben Lewis-Evans, Blogger

July 20, 2015


At GDC 2015 Jeffrey Lin from Riot Games gave his second GDC talk on the science of behaviour and reducing toxic behaviour in online communities. I have already summarised the talk elsewhere (and it is free to watch on the GDC Vault), but one of the methods he talked about caught my attention at the time: shadowbanning.

Shadowbanning is a cyberpunk-sounding name for something I used to know as "gagging" back in my days as an RPG chat room moderator on Talkcity (let's hear it for The Lighthouse!). A shadowban is where, instead of banning someone outright from using a service, like posting on Reddit or Twitter, or chatting in a game, you make whatever they post visible only to them. It looks like their message is going through, but it is only visible to them, and, importantly, they don't know that only they can see the messages they are posting. So they can continue to see everyone else's content and their own, yet they are not getting any feedback, any reaction, from others to their posts.
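
To put the mechanism in concrete terms, here is a toy sketch (in Python, with invented names) of the visibility rule a shadowban implies: the banned user still sees their own posts, while everyone else silently does not. This is an illustration, not any real platform's code.

```python
# Toy sketch of shadowban visibility. All names here are hypothetical.

def visible_to(post_author, viewer, shadowbanned_users):
    """Return True if `viewer` should see a post written by `post_author`."""
    if viewer == post_author:
        return True                          # the banned user still sees their own posts
    return post_author not in shadowbanned_users

shadowbanned = {"troll42"}
print(visible_to("troll42", "troll42", shadowbanned))  # True: everything looks normal to them
print(visible_to("troll42", "alice", shadowbanned))    # False: hidden from everyone else
```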

The fact that the poster isn't getting any response, but doesn't know why, means that the shadowbanned individual may start feeling that what they are posting is not worth posting anymore, given that nobody appears to be reacting to it. In other words, they are not getting any reaction, so why bother?

In the best case this may lead them to start posting more positively, at least for a little bit (certainly this is what I saw in my chat moderator days: when I would sometimes pretend to gag someone, suddenly they became very friendly). But another power of a shadowban is that the banned individual doesn't go off and just make a new account - as they might if they were simply banned - and their negative content is not being seen by anyone. This is powerful because, according to Jeffrey's data, a significant amount of toxicity in online communities is instigated by a relatively small percentage of users, who then cause other users to react while also sending the message that it is ok to communicate in this negative fashion.

In this way anti-social behaviour in games is a bit like user generated content in games: performed by a minority, but impacting a majority. So if a negative minority can be quarantined via a shadowban, the flow-on effects on the tone of the community can be larger than you would expect.

Another impact of shadowbans is their uncertainty. Once you know shadowbans exist, it can create some uncertainty around your own online behaviour. Have you been shadowbanned, or are people just not talking to you? What impact this uncertainty has is not clear, but it may increase the perceived likelihood of enforcement for negative communications, potentially reducing how often they occur. Even if it doesn't, at least nobody else is seeing the negative posts, though it would also be nice if there were some reformative effect, as reforming is generally more desirable than simply punishing. In fact, Jeffrey specifically stated that in a study Riot ran on the LoL subreddit, 79% of those who had received a shadowban reformed, which is impressive.

So shadowbans sound interesting, and are apparently effective, but what if the problem is behavioural, rather than (or alongside) communication abuse? What if the player griefs, feeds, team kills, uses hacks/cheats, or otherwise consistently displays anti-social behaviour? Can we take a shadowban approach to this?

SHADOWBOTTING?

Maybe we can? Or maybe someday we can? Maybe we could shadowbot these players. Which is to say, not give them access to one brand of "undetectable" cheat software (which is apparently called a "Shadowbot"), but rather, without telling the anti-social players, just start matchmaking them into games where their teammates and opponents are all bots.

The bots should be given realistic names (maybe taken, and modified, from the real player population - but not from the shadowbotted player's friends list) and even have fake profiles (if profiles can be checked). The bots should also be relatively good team players, which is to say they not only ignore any toxic communication and behaviour from the shadowbotted player, but are also helpful and forgiving teammates (although not too obviously so). For example, in a MOBA they call missing when appropriate, ping the map, make positive use of communication channels, and try to help their team. In an FPS they help out, playing defence or offence as needed, and "play their class".
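
As a rough sketch of that disguising step, something like the following could route a flagged player into a bot-filled lobby with human-looking bot names drawn from (and lightly modified versions of) the real player population, skipping anything on their friends list. Every class, field, and name in this snippet is invented for illustration; it is not any real matchmaking system.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch: route a shadowbotted player into a lobby of disguised bots.

REAL_NAME_POOL = ["DragonSlayer", "MidOrFeed", "QuietStorm", "LaneGhost"]

@dataclass
class Player:
    name: str
    friends: set = field(default_factory=set)
    shadowbotted: bool = False

@dataclass
class Bot:
    name: str

def make_bot_name(friends):
    """Pick a plausible name from the real-player pool, lightly modified,
    avoiding anything the shadowbotted player would recognise."""
    while True:
        candidate = random.choice(REAL_NAME_POOL) + str(random.randint(1, 99))
        if candidate not in friends:
            return candidate

def build_lobby(player, team_size=5):
    """If the player is flagged, fill both teams with disguised bots."""
    if not player.shadowbotted:
        raise NotImplementedError("ordinary matchmaking not sketched here")
    bots = [Bot(make_bot_name(player.friends)) for _ in range(team_size * 2 - 1)]
    return [player] + bots

lobby = build_lobby(Player("Gank3r", shadowbotted=True))
print([p.name for p in lobby])
```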

I am not sure what difficulty the shadowbots would best be set at. Perhaps some kind of gameplay bot Turing test is needed here, but my gut says as close to, or above, the skill level of the player being shadowbotted. The bots' behaviour would of course have to look realistic. But I think it might be surprising how long it takes for a shadowbotted player to work out what has happened if all the other signs point to the bots being other players (e.g. the shadowbotted individual entered matchmaking as usual, the bots have a large variety of human-looking names, etc.).
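
For what it's worth, that gut feeling could be expressed as something like the sketch below: centre bot skill a touch above the shadowbotted player's rating, with a little per-bot noise so the team doesn't feel uniformly scripted. The "MMR" number and the tuning constants are pure assumptions for the example.

```python
import random

# Hypothetical skill targets for the shadowbots, based on the player's rating.

def bot_skill_targets(player_mmr, n_bots=9, bump=0.05, noise=0.08):
    """Centre bot skill slightly above the player's MMR, jittered per bot."""
    targets = []
    for _ in range(n_bots):
        factor = 1.0 + bump + random.uniform(-noise, noise)
        targets.append(round(player_mmr * factor))
    return targets

print(bot_skill_targets(1500))   # e.g. nine targets clustered around ~1575
```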

This shadowbotting approach would also differ from low priority "prisoner's island/jail" type queues, because in those queues negative people are being placed with other negative people. That can be unpleasant, but it also risks simply reinforcing "this is how others behave, so it is ok that I do". Whereas if the shadowbots are good teammates and opponents, then the player is actually seeing positive modelling of how they could interact.

Ideally shadowbots would also be combined with some kind of automatic enforcement based on learning algorithms, as also outlined by Jeffrey in his GDC talk. This automatic enforcement could continue to monitor the shadowbotted player to keep them botted if they continue to be negative or return them to the general population if they significantly reform.
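
A hedged sketch of what that monitoring loop might look like: keep scoring the player's recent matches with whatever learned behaviour model did the flagging in the first place, and quietly lift the shadowbot status after a sustained clean streak. The thresholds, the ShadowbotState fields, and the score_behaviour hook are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical release logic for a shadowbotted player.

TOXICITY_THRESHOLD = 0.2      # assumed score in [0, 1] from the behaviour model
CLEAN_GAMES_TO_RELEASE = 10   # assumed number of consecutive clean matches

@dataclass
class ShadowbotState:
    shadowbotted: bool = True
    clean_streak: int = 0

def update_status(state, match_report, score_behaviour):
    """Run after each bot match; score_behaviour stands in for the learned model."""
    if score_behaviour(match_report) < TOXICITY_THRESHOLD:
        state.clean_streak += 1
    else:
        state.clean_streak = 0           # any relapse resets the streak
    if state.clean_streak >= CLEAN_GAMES_TO_RELEASE:
        state.shadowbotted = False       # quietly return to the general population
    return state

# Toy usage: a fake scoring function that always reports clean behaviour.
state = ShadowbotState()
for report in range(12):
    update_status(state, report, score_behaviour=lambda r: 0.0)
print(state.shadowbotted)   # False once the clean streak is long enough
```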

This is all theory of course. It reminds me of citizens in The Culture novels (by the sadly departed Iain M. Banks) who couldn't accept the utopia they were born into and are instead placed into flawlessly realistic virtual universes in which they can be as nasty and horrible as they like without hurting anyone real. High sci-fi for sure, but I think it would be interesting to take the idea out of theorising and sci-fi and see what would happen in a real ("real") game environment. Or, maybe, just maybe, it already has been done...
