My critique of a game studies paper analyzing audio in WoW Battlegrounds.
Over on Game Studies, there's an interesting analysis of game audio in World of Warcraft, with specific focus on the Battlegrounds. Here's my critique, which centers on the sort of simplifying distinctions a lot of game studies people make (especially in assuming a sort of average player when referring to "the player"). Part of my difficulty in critiquing this piece may be that I have limited World of Warcraft experience and therefore cannot contribute deeper examples of my own; however, I have played it enough to understand the basics.
“Players do not need to know how a specific named spell sounds like — it is enough to understand whether one is being attacked, or a friendly avatar is attacking an opponent (Jørgensen, 2007a)”
That may be true of the average player's experience, but I'd bet there are WoW players out there who gain an advantage from being able to tell exactly such things. And if not audible, then certainly the visual cues of what spell is being cast are identifiable by players. The difference between such spells is integral to countering them (dispel, heal, remove curse, etc.), or at least to understanding how they will change the combat situation. It is this ignorance of the depths of gameplay and player experience that often annoys me about academic game studies.
“When a fanfare signals the assault of a base in AB at the same time as the player is engaged in defending another base, the player automatically groups and filters all present sounds according to relevance. The player is therefore allowed to ignore the fanfare and attend to the most urgent sounds, which in this situation are responsive sounds from his avatar and the interface, and urgency sounds related to the enemies he is fighting. However, if the player is in a less urgent situation defending a base, the fanfare may move into the foreground compared to responsive interface sounds. This also explains why the functional roles of sounds are judged with different urgency in different situations even though the sound is exactly the same.”
Here again the intelligent player will likely register, even in the midst of heavy fighting, the sound that means another base is being assaulted, update his mental model of the game with the knowledge that enemies are assaulting the other base, and respond with appropriate tactics (perhaps by running over to the other base after finishing his current fight, perhaps by running over now to help, knowing the other situation is more critical).
It's possible, of course, that he may not register the sound amid all the other sounds he's paying attention to; here the foreground-background distinction makes more sense (although I'd argue it has as much to do with fundamental attentional limits as with conscious choice).
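The context-dependent urgency the paper describes can be pictured as a simple scoring function. This is purely a toy model of my own for illustration; the sound names, scores, and weights are invented, not taken from the paper or from WoW's actual audio engine. The point it captures is that the same fanfare sound receives a different priority depending on the player's situation:

```python
# Toy model of context-dependent sound prioritization.
# All sound names and numeric weights are illustrative assumptions,
# not anything from the paper or from World of Warcraft itself.

def sound_priority(sound: str, in_combat: bool) -> int:
    """Return a priority score for a sound given the player's situation."""
    base = {
        "enemy_attack": 90,          # urgency sound: an enemy acting on you
        "interface_response": 70,    # responsive sound from avatar/interface
        "base_assault_fanfare": 50,  # notification: another base is attacked
        "wood_chopping": 10,         # gameworld-generated ambience
    }[sound]
    # In combat the fanfare is pushed into the background;
    # out of combat it moves toward the foreground.
    if sound == "base_assault_fanfare":
        base += -30 if in_combat else 30
    return base

# The exact same sound scores differently depending on context:
print(sound_priority("base_assault_fanfare", in_combat=True))   # 20
print(sound_priority("base_assault_fanfare", in_combat=False))  # 80
```

My objection in the paragraphs above amounts to saying that even a low score is not zero: the fanfare still carries tactical information the good player acts on.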
“Gameworld generated sounds, however, are non-dynamic by having no such direct relevance. Interpreting a sound as generated by the gameworld, the player dismisses the sound as having secondary relevance for his choice of actions. An example of gameworld generated sounds in Battlegrounds is the sounds of wood chopping at the lumber mill resource. The sources of these are non-interactable non-player characters, and the inclusion of the sound has no operational function besides identifying the lumber mill and supporting the sense of presence in the gameworld. It is, however, important for the player to be able to identify these as gameworld generated sounds in order to understand that they have no proactive or reactive relevance for his actions. Having no direct relevance for gameplay, gameworld generated sounds are left out of the analysis.”
What a surprise! These sounds, like every other property of the game, matter. They contribute to background noise, which may make the above-mentioned interpretation of other sounds more difficult (even if they are selectively ignored, some processing capacity is probably taken up by the act of ignoring them, though perhaps at less cost than distinguishing them).
There are also gameworld sounds which do have relevance: the plane flying over in de_train in Counter-Strike 1.6, which indicates locational shifts (or the lightning in de_aztec, which does the same). I am nitpicking here, but there are perhaps situations in which even the wood chopping has direct relevance to players' actions. A blinded character (or player) could work out his location in relation to it, as he can with any repeating sound that fades with distance and comes from a fairly definite direction.
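That orientation trick can be made concrete: with any sound source whose loudness falls off predictably with distance, a listener can invert the falloff to recover range. A minimal sketch, assuming an idealized inverse-distance attenuation law (not the actual attenuation curve of WoW or Counter-Strike):

```python
# Idealized inverse-distance attenuation: amplitude = reference / distance.
# This law is an assumption for illustration, not any engine's real model.

REFERENCE_AMPLITUDE = 1.0  # loudness of the source heard at distance 1.0

def heard_amplitude(distance: float) -> float:
    """Loudness of the wood chopping as heard at a given distance."""
    return REFERENCE_AMPLITUDE / distance

def estimated_distance(amplitude: float) -> float:
    """Invert the falloff law to recover distance from perceived loudness."""
    return REFERENCE_AMPLITUDE / amplitude

# A blinded player hearing the chopping at quarter volume can infer
# he is about 4 units from the lumber mill:
print(estimated_distance(heard_amplitude(4.0)))  # 4.0
```

Which is all just to say: "no gameplay relevance" is a stronger claim than the sound design supports.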
And I do think, however selective hearing and memory may be, that the player remembers locations in relation to all their properties; the lumber mill without the wood chopping would not be quite as well remembered, and thus not as well played in.
In this case the usability is not fixed: every one of these possible consequences depends again on very specific details of the game situation, which are interpreted by the player. While these may be relatively realistic in a very general case, there are often situations where a DoT means death and direct damage does not, or where the other player being attacked is more important to the outcome of the fight than you are (or where you are healing him, and thus keeping him alive is what matters), and so on.
“The priority levels should not be understood as absolutes, but as different points of a continuum, not least because context decides the interpretation of a signal’s degree of urgency. Negative notifications could therefore also be seen as part of this continuum. However, notifications distinguish themselves from urgency messages by not demanding an evaluation on part of the player.”
She saves a little face here in her explanation of the table, but again underestimates how much player interpretation is going on. If the fire mage hits your teammate for 11,000 damage instead of the 6,000 he has been hitting for, something is up. And that something is again a relevant piece of information which may be critical in winning the fight against that fire mage.
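The point about the surprise 11,000 hit carrying information can be framed as simple anomaly detection. Again, this is my own toy illustration, not the paper's model; the threshold is an arbitrary assumption:

```python
# Toy anomaly detector: flag a hit that deviates sharply from what
# this caster has been hitting for. The 1.5x threshold is an
# arbitrary illustrative assumption.

def damage_is_anomalous(observed: int, recent_hits: list[int],
                        threshold: float = 1.5) -> bool:
    """Return True when a hit far exceeds the caster's recent average."""
    expected = sum(recent_hits) / len(recent_hits)
    return observed > expected * threshold

print(damage_is_anomalous(11000, [6000, 6200, 5900]))  # True: something is up
print(damage_is_anomalous(6100, [6000, 6200, 5900]))   # False: business as usual
```

Players run something like this constantly and unconsciously; the evaluation the paper says notifications don't demand is happening anyway.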
“Ally generated sounds have no proactive or reactive relation to player actions, and provide therefore notification information to the player. Even though allies are important in accomplishing the final goal of a Battleground, ally actions rarely have direct influence on an individual player’s actions.”
The author doesn't really get it, does she? Teamwork is all about paying attention to your allies' actions and reacting appropriately in order to win, and teamwork is what wins games in many cases. Maybe Kristine Jørgensen plays Battlegrounds selfishly, unaware of her teammates, but despite the large population of players who do play that way, there are many for whom teamwork is a virtue constantly practiced.
“An example is audio produced when allies mount their horses or move around. The most important informative role of such sounds is to provide spatial information, by informing that allies are present.”
Nope. The difference in capability between a mounted and an unmounted character is huge. An ally mounting thus indicates an important state change which will register in the player's actions, if he is a good teammate. Nor do I see how this is merely spatial information, as the presence or absence (and specific location) of allies is also key to what sort of actions a player undertakes.
“Another important point is that while visual information can be shut out by closing the eyes, audio has no equivalent shut-down mechanism.”
Turning off the speakers, playing music over the game, and so on. Many players even change their in-game names to signal that they don't have the advantage of audio (e.g., [nosound]playername or [music]playername).
The conclusion of the article drives home some of the fundamental distinctions she makes that I don't agree with.
“Thus, learning what generates a specific sound is learning important gameplay elements, in the same way as learning how to play a game is learning what the different auditory signals mean.”
Separating these two functions is strange. Learning a game is a continuous process that involves associating audio and visuals. Continued association of sounds and visuals leads to a model of results which is applied when playing; learning the meaning of a sound is not independent from learning what generates it (and there are intermediate steps in the continuous process of learning which sounds mean what). Essentially I think this whole article could be much more nuanced, but it presents an interesting overview of how game audio works.