
Of Headshots and Player Agency: Part 2

How can we introduce moral directionality without breaking player agency?

Taekwan Kim, Blogger

February 22, 2011


This is part two of my conversation (reproduced here verbatim) with Dutch game designer Jack Hoefnagel based on the topic question, “If you're in a game, holding a gun in your hand and pointing it at someone's head, is there a way that a game designer can make every single player not shoot this person?” Please refer to last week’s post for context and continuity.



JH:

I'd like to elaborate on your take on the question I asked you ("How do we make play feel consequential?”). One question that comes to mind is: "How important is technology in this matter?" Has our visual representation reached the level of being able to fully realize empathy in players, if this is even possible?

For instance, the upcoming game L.A. Noire features cutting-edge facial animation which makes the in-game characters feel extremely life-like when interacting with each other. But I wonder if this level of realism is actually necessary for the player to relate and sympathize with the characters and come closer to feeling real, personal emotions when playing this game.

In Abe's Oddysee, there isn't a single human being in the game, and the main character's graphics are nowhere near photo-realistic. Yet many players, including myself, related to this character and felt some genuinely real emotions while playing. The sound, the vocals and the music, the amazing atmosphere built by the graphic artists, character designers and world designers, the narrative (whose main theme is oppression), and the great game mechanic of saving your equally harmless and friendly buddies by trying not to get them killed all combine to make this game a great example of how a game does not have to portray life itself for a player to relate emotionally.

In other words, Abe's Oddysee uses some strong symbolization, which may even prove more effective than if the game had tried to look as realistic and life-like as the technology would allow. It seems as though when a game visually tries to be as life-like as possible, the player gets thrown out of that 'magic circle', out of immersion, when these 'realistic' characters walk against invisible walls or don't lip-sync entirely correctly. But this is just the visual part of the story.

On the ludological side, considering our "gun-to-head" example, this might be an interesting question: "Are game rules able to make a player feel all the consequences felt in real life?" To this question I'd like to add the consideration that game rules are inherently a simplification of life's rules. The point I'm trying to make with this consideration will become clear in what follows.

I've emphasized the words 'feel' and 'all' separately in this question.

- 'Feel', because I wonder to what extent games, as opposed to other art forms, are able to make a human being feel emotions. Have we already met the limit, or have we not yet developed the techniques to cause someone to fully feel love, anger, sadness or happiness when interacting with a multimedia application? Or does the very fact that it is a multimedia application prevent people, on an instinctual, subconscious level, from feeling this?

- 'All', because life, luckily, contains such an incredible number of considerations when you're about to kill a person. There's criminal law (which in turn carries many consequences, like prison time and a criminal record), your reputation among family and friends, a guilty conscience, the psychological trauma of witnessing a person's death, the motivation and justification for the murder, maybe even religious considerations, and many, many more to be found.

The interesting question here is how a game designer chooses the 'usable' consequences, the ones which can be simulated in the game and fit within the context of the world design. In GTA, there's no need for a criminal record, because it would have no effect on getting a job, nor for increased prison time for repeat offenses, because the designers chose not to simulate or 'gamify' prison time at all.
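To make that selection process a little more concrete, here is a minimal, purely hypothetical sketch (the names and structure are my own, not anything from GTA or any other game) of how a designer might enumerate the real-world consequences of a violent act and flag which ones the game actually simulates:

```python
# Hypothetical sketch: enumerating real-world consequences of a violent act
# and marking which ones the game chooses to simulate. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Consequence:
    name: str
    simulated: bool   # does the game model this at all?
    rationale: str    # designer's note on why it is in or out

CONSEQUENCES = [
    Consequence("police_wanted_level", True,  "drives the core chase gameplay"),
    Consequence("criminal_record",     False, "no job system, so a record has no effect"),
    Consequence("prison_time",         False, "not gamified; arrest simply resets the player"),
    Consequence("guilt_and_trauma",    False, "left to narrative cutscenes, not systems"),
]

def simulated_consequences():
    """Return only the consequences the game will actually enforce via rules."""
    return [c for c in CONSEQUENCES if c.simulated]

if __name__ == "__main__":
    for c in simulated_consequences():
        print(f"{c.name}: {c.rationale}")
```

The point of the sketch is only that each omission is a deliberate design decision with a rationale, not an oversight.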

Another point I'd like to make about GTA IV is that it's a self-contained world with simplified consequences. What I mean by that is that none of the consequences you experience in this game ever relate to your own life, only to Niko's and your persona's (your 'other of play', as you described in "Understanding Gameplay"). The choices you make are based on your knowledge of games, and on learning from the rewarding or punishing events you experience during the game.

What I'm trying to say is that besides the very (maybe even too) obvious "kill/save, or don't" moments, there are no choices in this game that you consciously make as a rational and emotional human being, as an individual with your own personality. You don't use your car to drive over senior citizens because you are such a rebel in real life, and you also don't do it because you feel you are not one in real life, trying to compensate by playing games that do make you feel like one.

This brings us to an interesting subject, which could answer part of our consequentialism matter. Taking a step back from narratology/ludology, and looking at games as culture, here's another question I'd like to propose: "In what way do 'ludic literacy' and convention make for desensitization?"

Your writing about 'ludic literacy' was a really interesting point of view, which I used to consider how regular game players are less and less affected by the interactive events occurring on the screen. The following might look a lot like Plato's allegory of the cave, but a person who has never played a game, or even heard of one, might be shocked to see a digitalized human being shooting another and blowing him or her apart into a million very graphic pieces. Yet an average gaming person, maybe even a 'casual gamer', to use a trendy name tag, will not even flinch at seeing this displayed on a screen while somebody is playing a shooter game.

Of course, our cultural knowledge of movies and TV shows is also partly responsible for this desensitization to the display of violence in (multi)media. But that's exactly the point. If we are so used to seeing death, the most extreme emotional event for a human being, on our screens, then how are we supposed to let our player feel the actual impact of a murder? And one committed by the player, even!

Your theory in "Socratic Method Applied to Games" states that playing games is about identity, and following your writing in "Validation Theory", the game should validate that identity/persona ("the other of play"). So one of the most important questions in this writing might be: "How can a game designer validate the player's identity in the game to implicitly reward him/her for not shooting the person in our example?"



TK:

Consequence and Meaningful Play

I think a great place to begin is your question, "Are game rules able to make a player feel all the consequences felt in real life?” It highlights a sort of unstated given: that consequence is required for meaningful experiences. This is more or less true, but—at least when it comes to gameplay—perhaps not in the way we might at first assume.

I apologize for covering familiar ground, but consider: what is the worst possible consequence that can happen to a player in a game? To be proven wrong, in his abilities, his choices, his use of time, etc. But that's really a rather substantial thing—few things internal to us exist in which we invest as much (a point I have argued many times before).

To repeat myself once more (sorry!), ludic failure itself, and the consequent sense of personal invalidation, is appreciably painful and material. The severity of such pain is enough to require us to give the player the chance to correct himself or repair the damage in some way. This requirement, in turn, creates serious, inherent limitations to ludic consequences.

When we say something breaks a game (e.g., a game-breaking bug), we mean that it stops progress without any means of recourse, through no real fault of the player. (Conversely, we might also mean that it renders the player so powerful as to remove him from the possibility of ludic failure.) Similarly, a player loses engagement/investment (are these two the same? probably) if an opponent in competitive play is so dominating and beyond the reach of the player that any sort of struggle becomes entirely meaningless.

We should here observe how the terms "broken" and "meaningless" are so intrinsically related to the loss of one's ability to affect a ludic outcome. Indeed, the act and pact of gameplay is meant to be a guarantee against such occurrences. I believe this is what Huizinga really meant when he said play is "free." It is not so much free from consequence as it is free from irredeemable decisions. If you fail, we understand that you can try again.

In short, the rather counter-intuitive (and perhaps controversial) conclusion is that meaningful play is changeable play. This is the contract that play within the magic circle represents—that all choices can eventually be fixed. When this implicit agreement is broken, so too is player engagement.

I think you hit upon this nicely when you distinguished “feel” and “all”. The distinction to be drawn is that between psychological and ludic consequence. And the important point here is that the experience of the former does not rely on how strictly, but how responsively, the latter is enforced. Piling on one-sided punishments does little to nothing to create relatable psychological consequence.

Security, Constructed Reality, Empathy

I am afraid I am now going to have to progress onto some highly speculative territory, supported by not much more than personal conjecture and anecdotal evidence. The hope is that, at least, the reasoning will hold up. Anyway, here goes.

The fear of failure is, at its core, intimately tied to our sense of security. We want the world to reflect our understanding of it (and our desires for it), and any time we are proven invalid is a small chip away from the foundation of our sense of security. Having to reconstruct our worldview to account for inconsistencies is a difficult and time-consuming thing—it's not something one can really do piecemeal.

Worse, when our desires fail, it shows our predictions, built upon our understanding, to be inaccurate. It reveals how little control we have over our lives, and how incapable we are of actually grasping any real understanding.

To cut to the relevant matter, nobody wants to live in a world they can't believe in. Which is to say, the audience will hold on to their desired outcomes for a narrative even when everything about its directionality seems to point against them. It is this distance between "what is" and "what should be" that creates the motivation to engage, to care about any given character. (We might call this "expectation fulfillment distance".)

How often have we discounted the killing off of a main character thinking, “That just can’t be! How are they going to make it right after that?” We can’t conceive of a plot path through which the inconsistencies will be satisfactorily reconciled, which causes us to simply disbelieve the direction of the plot.

When we fear for a protagonist in a movie, then, we are, in a sense, fearing for our own security, our own understanding. We care (expressed through empathy) because the stakes are, in this sense, quite personal. The problem is that the challenge to this security in a game is actually a challenge to the player’s agency.

Let me put this another way. In order for the audience to care about a character’s life, the audience needs to feel it is “not right” for that character to die. But how can we fairly encourage this attitude towards the very possible obstacles to a player’s agency?

Insecurity Disconnect

There’s a good reason why we stop caring about running over innocent bystanders in GTA. As you noted, “You don’t use your car to drive over senior citizens because you are such a rebel in real life,” you do it because, well frankly, it’s pretty freaking hard not to sometimes! If the player takes psychological responsibility for every such occurrence, the game quickly becomes too burdensome for flow to occur.

The conscious efforts to dehumanize in games therefore serve a real engagement purpose. There’s a terrific article that came out on Kotaku some time ago from Mr. Chris Breault in which he recounts his experiences writing some of the secondary dialogue for The Punisher. (For context, it should be explained that the game featured such torture sequences as the player feeding victims into wood chippers or piranha fish tanks, etc.)

Concern arose after I had written some of the many, many "interrogation" lines in The Punisher that play as [you] torture people. I would sometimes write personas who really couldn't handle the outlandish shit they were being subjected to — I'm a human being too, look into your heart, who will feed my cat when I'm gone, etc. etc. It was something that came up in the comics all the time. Bad guys beg for their lives, Castle don't care. These interrogation lines were meant to be darkly humorous, as the player would kill everyone no matter what they said.

I was told to rewrite the lines where anyone expressed a strong desire not to die. It was ‘sadistic’ to kill people who directly asked you not to kill them. This sort of sadism is exactly the stuff that gets us a red flag from the ESRB. I felt pretty bad about this — I had written sadistic material! — before I thought about it. The thinking was, it wasn't sadistic to create elaborate torture sequences as a heavily marketed feature; it was sadistic for the people being tortured to death to raise objections. It was sadistic to suggest that the individuals you killed had resembled human beings, that they were afraid to die.

You know, the almost bizarre conclusion to come to from all this is that there's actually such a plethora of pathos involved in the in-game act of killing a non-player character that we need to remove a lot of it just to keep the game flowing.

But if that’s the case, all we have to do is just put some back in, right? Well, not exactly.

Bringing It Home

Since a player isn’t just a spectator of outcomes, the way we handle expectation distance in games is somewhat different than in non-interactive media. Instead of discounting the sincerity of the direction given by the medium, we discount the sincerity/severity or agency of the actions we ourselves take. When a game forces us to question this distancing, it can create a sense of unfairness in the player.

(Ed. note: To prevent misunderstanding, it should be noted that such distancing from one’s actions in a game is only possible because of the player’s awareness of the ludic contract. It goes without saying that as soon as the player leaves the magic circle to enter the social contract, our expectations and consequent behaviors change significantly [i.e. please don’t mistake the above to support the idea that somehow games cause people to become murderers, etc.].)

Consider the situations (though there aren't many) in Oblivion in which the player needs to steal an item to progress in otherwise "morally upright" quests (one such incident even occurs as part of the main quest). These obligatory actions are then tagged in the player's record—quite an annoyance for players who make an effort to keep that record completely clean. Similarly, the player can be erroneously flagged for assault against certain hostiles if the player attacks before they do.

Unfortunately, such blemishes against the player record are more than mere semantics—they are serious matters of whether or not the game recognizes player intent. It’s the same reason that bad camera controls can destroy a game. A game that consistently ignores player input generates a level of frustration that goes beyond pathos.
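To illustrate what "recognizing player intent" could mean in rules terms, here is a small, purely hypothetical sketch of my own (not Oblivion's actual logic, and all names are invented): a crime-flagging check that only blemishes the record when the player strikes first against a non-hostile character.

```python
# Hypothetical sketch of intent-aware crime flagging (not how Oblivion works).
# Idea: only record an assault when the player initiates unprovoked aggression,
# not when retaliating against an NPC that is already hostile.

from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    hostile: bool = False   # has this NPC already committed to attacking the player?

@dataclass
class PlayerRecord:
    assaults: list = field(default_factory=list)

def register_attack(record: PlayerRecord, attacker_is_player: bool, target: NPC) -> None:
    """Flag an assault only if the player attacked a non-hostile target first."""
    if attacker_is_player and not target.hostile:
        record.assaults.append(target.name)   # genuine, unprovoked assault
    # Retaliation against a hostile NPC leaves the record untouched.

record = PlayerRecord()
bandit = NPC("bandit", hostile=True)
merchant = NPC("merchant", hostile=False)

register_attack(record, True, bandit)     # self-defense: no flag
register_attack(record, True, merchant)   # unprovoked: flagged
print(record.assaults)                    # ['merchant']
```

The design point is simply that the flagging rule consults what the player was responding to, rather than treating every swing of the sword identically.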

I believe, then, it is exactly this assumption of intention recognition and an intrinsic amount of fairness in gameplay that provides the (or at least, a) solution to our problem.

Specifically, take, for example, a game in which the player is framed as a murderer (cf. this movie). If we can communicate that the correction of such an unjust perception and the proving of one's innocence (and identity) is eventually possible through play—that the game recognizes the player's intentions even if the gameworld does not—we provide a strong incentive that keeps the player from ever shooting and actually becoming such a murderer. The same indignation which frustrates the player will be the exact thing that ultimately keeps the player clean.

This scenario allows us to guarantee the ludic contract through game language on the one hand while simultaneously violating the player’s sense of fairness through social cues on the other, thus creating the expectation distance required for players to care about the lives they have control over.

Taking this line of thought to its conclusion, we can say that social cues in games are more useful to us as a source of challenge than as a source of morality. We need to make the primary challenge—the challenge that can be beaten and won through the rules of the game—about changing social opinion to recognize and honor the player's integrity and morality.
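As a purely illustrative sketch of that framing (my own construction, not drawn from any existing game), the game might keep the player's true record and the gameworld's perception of the player as two separate pieces of state, with the winnable challenge being to pull public perception back in line with the truth:

```python
# Hypothetical sketch: the game tracks what the player actually did separately
# from what the gameworld believes. The ludic contract is kept by always
# recording the truth; the social cues (a false accusation) supply the
# challenge, which the player beats by gathering proof and shifting perception.

from dataclasses import dataclass

@dataclass
class WorldState:
    player_committed_murder: bool = False   # ground truth the game always honors
    believed_guilty: bool = True            # the framing: the world starts out wrong
    evidence_of_innocence: int = 0

def present_evidence(world: WorldState, pieces: int) -> None:
    """Core loop of the 'clear your name' challenge."""
    world.evidence_of_innocence += pieces
    if not world.player_committed_murder and world.evidence_of_innocence >= 3:
        world.believed_guilty = False       # social opinion finally matches intent

def shoot_accuser(world: WorldState) -> None:
    """The tempting shortcut: it makes the accusation true and impossible to refute."""
    world.player_committed_murder = True
    world.believed_guilty = True

world = WorldState()
present_evidence(world, 3)
print(world.believed_guilty)   # False: innocence proven through play, not violence
```

The shortcut function is there to show why the incentive works: the moment the player shoots, the gap between truth and perception closes in the wrong direction and can never be corrected.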

Last Thoughts: On Artifice

I’d like to close by finally going back to the question of how much technology is required to create empathy. When we stop to consider this question, the real crux of the matter might be said to be, “how much of the underlying machinery can show without disturbing suspension of disbelief?” After all, isn’t the final goal of photorealism to hide mechanical artifacts as much as possible?

Ironically, photorealism in games actually draws considerable attention to the machinery. Indeed, our level of technology is still such that when graphical advancements are visibly noticeable, it is exceedingly hard to escape from their direct relation to the underlying hardware. We’re simply not advanced enough yet that photorealism is universal and taken for granted. However, perhaps this is not as detrimental as we might assume.

I must confess, this question was exceedingly difficult for me, and I’m not sure that I can really contribute to any sort of real answer. I think the best I can do here is to submit these two scenes from two different films for consideration. The first is a bunraku performance from the opening scene of Takeshi Kitano’s Dolls. The second (of which, unfortunately, I have no video link) is the Romeo and Juliet scene from Shakespeare in Love.

In both of these examples, little to no effort is made to hide the machinery behind the performances. However, it is, perhaps, the very visibility of their respective substrates that makes these examples truly work. When we full well know the artifice of a performance, and yet still experience the full weight of it, we can’t help but be moved. It’s an expectation fulfillment all of its own: the realization of powerful artistry being achieved.

I propose, then, that the actual effect of artifice visibility ultimately comes down to how much we conceptually connect the artistry to the artist—and this can really go either way. That is to say, the amount of empathy we draw from technology has less to do with its level than with our valuation of the artist's abilities in operating through that technology.

With film, the connection between the artist and his art is much more direct and apparent as the responsibility for the portrayal of each character appears more obviously tied to a single artist. The equation between an actor and his character is harder to avoid, which causes our respect for the artists’ talents to bleed into the empathy we feel for their portrayed characters. Part of the security/insecurity tension we feel therefore results from our level of confidence in the artist’s skills.

In games, however, the responsibility is conceptually more diverse—it's much harder (at least, in the minds of the audience) for an actor to own a character in a game the way he can in a play or movie. Still, we are willing to give this credit when we have respect for a given studio, developer, or individual voice actor, for example. Hence we are happily willing to disregard weaker graphics (for instance, in indie games) if the auteurship is more heartfelt and appreciable. Once again, our respect for the work colors our experience with it.

To the extent, then, that we are willing to allow for weaker technology as something which does not reflect on the competence of the artists, we can come to the conclusion that it has very little impact on achieving player empathy. Equally, to the extent that a game demonstrates mastery over its own technology, it can impact player empathy significantly.


This series will be concluded next week with a minor epilogue, of sorts.
