If the hype is to be believed, the metaverse will completely revolutionise gaming, but when it arrives, can companies make it safe?
It’s been just a few months since Meta unveiled its bold but slightly cringe-worthy video outlining its vision for the metaverse. If you’ve watched it, you’d be forgiven for thinking that the metaverse is already here, such has been the level of hype in the media.
If the metaverse is to be fully realised, it’s essential that it’s a safe space where users actually want to spend time. So moderation needs to be built into the foundations of the metaverse. The question is, how?
Despite Mark Zuckerberg’s pledge that “open standards, privacy and safety need to be built into the metaverse from day one,” we’ve already had one high-profile story of children being able to access a VR strip club available from the Meta app store, whilst Roblox continues to struggle with its own moderation issues, such as users creating games based on terrorist events like the 2019 mosque shootings in Christchurch, New Zealand.
UK charity the NSPCC (the National Society for the Prevention of Cruelty to Children) has already branded many apps positioning themselves as metaverse experiences as ‘dangerous by design’.
There has also been a litany of bad press about poor user experiences in Meta’s flagship VR project, Horizon Worlds. The Center for Countering Digital Hate conducted an experiment that found users encounter sexual or explicit content approximately every seven minutes. Similarly, BuzzFeed recently published a story in which it deliberately built a space filled with conspiracy theories and inappropriate content; the space stayed up even after it was reported to the company.
To its credit, Meta responded by quickly implementing new safety measures. Players can now opt for a 2-foot boundary around their avatar, and community guides now inhabit introductory areas, known as ‘plazas’, to give new players tips and tricks whilst also doing some basic moderation. It remains to be seen if these incremental measures will actually have any bearing on user experience, or if this safety-first posturing is just designed to placate critics.
Roblox - another supposed frontrunner in the metaverse race - has made negative headlines because of extreme content hosted on the platform. Despite its best efforts to moderate the experiences creators can publish using the game-building tools, there continues to be a problem with in-game ‘condos’, innocuously named private spaces where players meet to talk about and engage in virtual sexual acts.
If this is taking place in an environment that is, for all intents and purposes a kids’ game, then what can we expect when these kinds of virtual worlds become a plaything for adults rather than pre-teens?
Our own research into the experiences of online gamers in the US found that 70% of gamers have experienced some form of toxicity or abuse online, and the Anti-Defamation League’s more substantive research came to similar conclusions. It’s abundantly clear that many in the industry are already struggling with moderation, so ensuring user safety in an ecosystem that’s made far more complex by the convergence of text, voice, gestures and content creation - across thousands of third-party apps - is the moderation equivalent of running before you can walk.
VR is the cornerstone of many companies’ visions for the metaverse. But VR comes with its own unique moderation challenges, as players use voice chat and gestures rather than text - meaning what we consider to be unsafe or toxic behaviour becomes more nuanced and harder to filter.
For example, if a player makes an obscene gesture, will it be possible to moderate that? What if it’s a sexual gesture towards a child, or a Nazi salute? What if a player dresses their avatar in an outfit that is culturally insensitive? New approaches and technologies will need to be developed if VR worlds are to be safe spaces for players.
We must also define what we actually mean when we say something is ‘safe’. Should we remove any cause of offence or upset, or only the most egregious forms? How can we moderate speech and online behaviour whilst still respecting people’s rights to freedom of expression? Again, the struggles current social networks have had with this show how hard it is to come up with workable solutions.
There are measures companies could implement that would make a difference to how we moderate VR. For example, adding a teleport function that allows users to completely disappear from view if someone begins to misbehave is one way of dealing with issues around proximity. Equally, it should be technically possible to automatically track avatars’ actions and ban users who violate community standards or engage in abusive behaviours, although it seems nobody is doing that at present.
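To make the idea concrete, here is a minimal sketch of what server-side tracking of avatar behaviour could look like. The violation categories, strike thresholds and enforcement actions are purely illustrative assumptions, not a description of any existing platform’s system.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from enum import Enum, auto


class Violation(Enum):
    """Illustrative violation categories a platform might track."""
    PERSONAL_SPACE = auto()   # e.g. repeatedly entering another avatar's boundary
    OBSCENE_GESTURE = auto()  # flagged by a gesture-recognition model
    ABUSIVE_VOICE = auto()    # flagged by voice-chat moderation


# Hypothetical escalation thresholds: strikes within a session -> action.
THRESHOLDS = {
    3: "warn",
    5: "teleport_away",   # remove the offender from the victim's view
    8: "session_ban",
}


@dataclass
class ModerationTracker:
    """Tracks per-user strikes and decides on an enforcement action."""
    strikes: dict = field(default_factory=lambda: defaultdict(list))

    def record(self, user_id: str, violation: Violation) -> str | None:
        """Log a violation and return the strongest action now warranted."""
        self.strikes[user_id].append(violation)
        count = len(self.strikes[user_id])
        actions = [action for threshold, action in THRESHOLDS.items() if count >= threshold]
        return actions[-1] if actions else None


if __name__ == "__main__":
    tracker = ModerationTracker()
    for _ in range(5):
        action = tracker.record("player_42", Violation.PERSONAL_SPACE)
    print(action)  # -> "teleport_away" after the fifth strike
```

A real system would obviously need per-category weighting, decay over time and a human-review queue, but the basic strike-and-escalate loop is well within reach of today’s technology.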
Safeguarding measures in the metaverse certainly shouldn’t be left to the victim or to other users, as is the case with many moderation strategies right now. According to research by Unity, only 40% of online players experiencing toxicity use the available reporting tools, which shows how inadequate this kind of user-led moderation really is.
Maybe there needs to be something akin to an emergency number that players in virtual worlds can use to instantly alert an authority that’s independent of the game or platform, and which has the power to immediately intervene and investigate? This might sound heavy-handed, but behaviours which are tolerated online would often be illegal if repeated in the real world.
Anonymity is often cited as one of the main causes of online toxicity, so de-anonymising accounts would bring a new level of accountability. Implementing facial recognition or fingerprint authentication would ensure players are tied to a specific account and cannot hide behind a username. Removing anonymity and adding additional layers of account security would also mean underage users can’t create accounts to access spaces intended for adults. It also benefits the platforms themselves, as they won’t have to dilute their offering to cater to such broad demographics.
The crux of the moderation challenge is that prevention is only half the battle. No moderation strategy can prevent 100% of toxic content or behaviour, so platforms need to also consider how they police and penalise players. It’s too easy for serial offenders to simply create a new account and carry on - so there must also be a discussion around more substantive punishments for the worst offenders, and how such bans or restrictions can be balanced against an individual’s rights.
The ability to moderate text-based chat is well understood, but real-time moderation of voice chat remains some way off. It’s possible to moderate voice chat now by converting the audio into text and running it through existing moderation solutions. This gives an accurate result, but only after the fact - once the damage has already been done. Advances in audio processing mean that the moderation delay is decreasing, but it’s still a way off being anything like the real-time solution that will be needed for the metaverse.
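As a rough illustration of that transcribe-then-filter approach, the sketch below chains a speech-to-text step to a simple keyword filter. The `transcribe_chunk` placeholder and the blocklist terms are stand-in assumptions; a production system would call a real speech-recognition service and use far more sophisticated text classification.

```python
import re
from typing import Iterable

# Purely illustrative blocklist; a real system would use trained classifiers,
# context and severity scoring rather than bare keyword matching.
BLOCKLIST = {"examplebadword", "anotherbadword"}


def transcribe_chunk(audio_chunk: bytes) -> str:
    """Stand-in for a real speech-to-text call (e.g. a cloud ASR service).

    This placeholder just treats the audio bytes as UTF-8 text so the
    pipeline is runnable end to end.
    """
    return audio_chunk.decode("utf-8", errors="ignore")


def flag_toxic(text: str, blocklist: Iterable[str] = BLOCKLIST) -> list[str]:
    """Return any blocklisted terms found in a transcript chunk."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(words & set(blocklist))


def moderate_voice_stream(chunks: Iterable[bytes]) -> None:
    """Transcribe each audio chunk, then run the text filter over it.

    Because the check happens per chunk, this design is only ever
    near-real-time: the offending words have already been heard by
    the time they are flagged.
    """
    for i, chunk in enumerate(chunks):
        transcript = transcribe_chunk(chunk)
        hits = flag_toxic(transcript)
        if hits:
            print(f"chunk {i}: flagged {hits} - escalate to enforcement")


if __name__ == "__main__":
    fake_stream = [b"hello everyone", b"that was a examplebadword moment"]
    moderate_voice_stream(fake_stream)
```

The structure makes the latency problem obvious: every step after the audio is captured adds delay, which is why genuinely real-time voice moderation remains so difficult.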
There is also the complication that voice chat can be decoupled from the game or social community platform that the player is in. For example, the voice chat function in a game like Call of Duty is implemented by the platform holders Sony and Microsoft, rather than by the game’s publisher, Activision. Or players might use Discord or WhatsApp as a totally separate chat option. Beyond the technology challenge is the lack of standards or a consistent approach between platforms and publishers.
Riot Games showed just what a problem the moderation of voice chat is when it recently made changes to its policies and practices in order to tackle bad behaviour in Valorant, a game highlighted by the Anti-Defamation League as the most toxic gaming community. The changes led to 400,000 voice and text mutes and 40,000 game bans. Despite the impressive-sounding figures, Riot’s spokesperson was forthright in admitting the changes didn’t have any tangible effect on how regularly players encounter toxicity. I believe this will continue to be the case until they can get a handle on voice chat.
Companies with metaverse ambitions need to remind themselves that, first and foremost, the metaverse needs to be a safe and positive experience for users. Aside from the clear need for a duty of care for users, good moderation can be a big value driver. A study conducted by Riot Games found that first-time League of Legends players who encountered toxicity were 320% more likely to churn and never pick up the game again.
So companies need to start viewing moderation as vital to their bottom line, rather than simply a ‘nice to have’. In a highly competitive marketplace, players can and will simply move on to the next best thing.
In the near future, online safety standards may be backed by legislation, taking the decision out of companies’ hands. In the UK, proposed legislation threatens online platforms with financial liability if they fail to better manage inappropriate content. There are already stringent regulations in place in France and even more so in Germany, and similar discussions are taking place in the US. So everyone with an online presence needs to be prepared for legal changes to their duty of care, and the financial liability that’ll likely come with it.
For many, solving the stark technical challenges that lie in stitching together diverse technologies and systems to create a working metaverse will be the priority. But equal attention must be paid to building safety into its foundations. For decades, quality moderation has been hampered by the limits of technology; nowadays, however, companies are limited only by their priorities.