Dynamic user interfaces: adapting to changing situations in games to increase player performance

Higher-quality user interfaces need to be able to adapt to the changing situations within a game. To do this, context is broken down into categories and a number of practical problems are overcome.

Bjorn Buchner, Blogger

September 20, 2014

13 Min Read

Introduction

One way to describe a user interface is as a tool: it enables players to perform actions within a game and, with it, accomplish the tasks they have at hand. The thing about tools in the real world is that there are a lot of different ones out there. You've got big pliers that excel at gripping things, knives that make it easy to slice through materials, or something like a hammer that is useful for driving nails into wood. Each tool is different because it is designed to excel at a specific task. A hammer will never be as good as a knife at cutting things, for example. Unlike these real-life tools, a user interface is expected to be effective at a wide variety of jobs instead of just one.

Of course, in real life there are tools that try to be useful for multiple tasks as well, such as multi-tools or Swiss army knives. To do this they have to make compromises, which makes them a jack of all trades, master of none. Multi-tools are useful for a lot of tasks, but they are simply less effective at each of them than their dedicated counterparts. A professional woodworker can cut down a large tree much more easily with a heavy-duty saw than with the thin folding saw on a multi-tool. But what if you had a tool that doesn't try to be several tools at once, but rather is a dedicated tool that changes into another dedicated tool based on the task you need it to perform? Would that lead you, as a user, to be more effective?

This question was the start of my recent graduation thesis and is the core concept of what a dynamic user interface is. A dynamic user interface will add, alter, or remove interface elements depending on the situation (context) a player is currently in. This ensures they have the right amount of information or interface elements at their disposal to complete their goals. I can put this into context a little (pun intended) with an example of how this concept would work in a game.

Example

Suppose there is such a thing as 'Ultimate Fantasy Online', a typical MMORPG. One of the players is called Gyle, who is killing boars to collect their flanks, as is mandatory for any decent MMORPG, obviously. Killing boars is not very easy as they are quite vicious. For Gyle to deal with this threat he is given the necessary information, such as his spell icons and their cooldowns, his own health bar, and the health bar of the enemy. Once Gyle has all the succulent boar flanks he can possibly stuff in his bag, he sets off on his trusty steed back to town. Because it is not possible for players to fight while riding a horse, all of Gyle's combat-related elements gracefully transition away so he can fully take in the scenery as he gallops along muddy roads. After a brief trip Gyle finally arrives at his destination and walks onto the village square. The square is where most of the trading in the area takes place. As soon as Gyle approaches a merchant, a few trade-related interface elements automatically appear on screen that allow Gyle to focus on selling his loot.
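
To make this example a little more concrete, below is a minimal sketch of how situations could be mapped to the interface elements a player needs in them. The state names, element names and the `uiForState` mapping are hypothetical and not taken from any real game or engine; a real implementation would drive this from the game's own state.

```typescript
type PlayerState = "combat" | "riding" | "trading";

type UiElement =
  | "spellBar"
  | "playerHealthBar"
  | "enemyHealthBar"
  | "inventoryPanel"
  | "tradeWindow";

// Each situation maps to exactly the elements the player needs in it.
const uiForState: Record<PlayerState, UiElement[]> = {
  combat: ["spellBar", "playerHealthBar", "enemyHealthBar"],
  riding: [], // nothing but the scenery
  trading: ["inventoryPanel", "tradeWindow"],
};

// Called whenever the game detects that the player's situation has changed.
function onStateChanged(previous: PlayerState, next: PlayerState): void {
  const before = uiForState[previous];
  const after = uiForState[next];

  // Elements no longer needed transition out...
  for (const element of before) {
    if (!after.includes(element)) console.log(`transition out: ${element}`);
  }
  // ...and newly relevant elements transition in.
  for (const element of after) {
    if (!before.includes(element)) console.log(`transition in: ${element}`);
  }
}

// Gyle finishes the boar hunt and mounts up: combat elements fade away.
onStateChanged("combat", "riding");
```

The interesting part is the diff between the two element sets: everything that drops out of the mapping transitions away, everything new transitions in.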

Value

Adapting the interface in the manner outlined above solves a number of problems.

  1. Removing interface elements as soon as they are no longer needed ensures there is less extraneous information for players to filter out. This makes it easier for them to process the necessary information because there is less 'noise'. It also has the simple benefit of fewer interface elements covering up the game world. An example where this is a particular issue can be seen in Battlefield 4, where it can become very difficult to aim at enemies because the UI obscures them.

   

Visual clutter in Battlefield 4 (DICE, 2013)

  2. The second problem that dynamic interfaces solve is a shortage of information in certain situations. Specific situations in particular tend to get overlooked because they do not happen very often, but that does not mean they cannot be valuable. If we go back to the tool analogy, you will find there are some tools you really don't use all that often, but when you do get to use them they make the job so much easier. Offering information or interface elements during these valuable but specific situations keeps a player better informed and allows them to make conscious decisions in more situations in the game. Solving these two problems should lead to an overall increase in a player's performance.

Alright, personally I thought that made sense in theory, but one of the challenges I faced was how to actually put this into practice. Ideas alone are not worth much if you don't apply them to reality.

Context

The first thing I needed to know was exactly what a context is. If you don't know what a context consists of, you cannot know when it has changed, nor adapt the interface accordingly. In my thesis I compare several taxonomies that experts from the field of context-aware design use to define a context. After this analysis I ended up with five categories that make up a context and that seemed well suited to game worlds.

These five categories are:

Entity, Activity, Time, Location and Reason

Entity

The entity category is much like the subject of a context: it consists of living creatures or objects. In most cases this is the player who is playing the game and using the interface. Relevant information about the player can be his skill level (beginner, intermediate or veteran), what class he is playing, what weapon he has equipped, how much health he currently has, and so on. Another example of an entity could be something like 'a tank'. In the case of a tank, relevant information could be the amount of ammo it currently has or its health points. These are things a player needs to know when he hops into one.

Activity

The activity category contains all of the actions; it describes what is going on in a situation. This can range from an objective being captured to a goal being scored or an enemy reloading.

Time

Time is the category that specifies when the situation occurs. This can be absolute, e.g. 'at night' or 'at 11:20'. It is more likely, however, that it will be specified relative to the other categories, which gives you something like 'after the player dies' or 'while standing in the square'.

Location

Location denotes the area of the game world the player is currently in. This can be defined in an absolute manner, ranging from 'in an elevator' to 'in a city'. Location can also be defined relatively, much like the previous category; doing so gives you situations such as 'next to a crate' or 'behind a teammate'. Some locations have rules that apply to them, and knowing the location can impact the interface drastically. For instance, a player who is underwater could need a breath indicator.

Reason

The last category is reason. While the other categories describe the who, what, where and when of a situation, the reason explains why the situation occurs. This will most often be either the result of a player's goal or a result of the game's design. Imagine two players standing opposite each other. One of the players draws his sword. In one scenario you could conclude this is an act of hostility, and the reason the player drew his sword was to attack the other player; you'd offer combat elements in that case. However, if the intent of the player was to trade the weapon, you'd have to offer him trade elements instead.

In reality it's unlikely that all categories are equally important in every situation. In an open-world game, for example, a category such as location will likely be more important than in, say, Tetris. Also worth mentioning is that a situation can contain more than one instance of each category; depending on the complexity of the situation it is not uncommon for several entities to be present at once.

All of the categories combined make up a complete context. To make them workable I like to write situations down as a sentence. An example of a context would be: 'The player walks into the enemy base to capture their flag whilst his teammate next to him gets shot.' Using this sentence, you know that a context changes when one of its elements changes. It is up to the designer to decide whether the change in situation warrants a change in UI.
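
As a rough illustration, the five categories could be captured as data so the game can tell when a context has changed. The field names and the simple string descriptions below are my own assumptions; the thesis defines the categories themselves, not a concrete format.

```typescript
// A context made up of the five categories: entity, activity, time,
// location and reason. Strings are used here purely for readability.
interface GameContext {
  entities: string[];   // who/what is involved, e.g. "player", "teammate"
  activities: string[]; // what is happening, e.g. "capturing the flag"
  time: string;         // when, absolute or relative to the other categories
  location: string;     // where in the game world
  reason: string;       // why the situation occurs
}

// 'The player walks into the enemy base to capture their flag
//  whilst his teammate next to him gets shot.'
const current: GameContext = {
  entities: ["player", "teammate"],
  activities: ["walking into the enemy base", "teammate gets shot"],
  time: "while entering the base",
  location: "enemy base",
  reason: "capture the flag",
};

// A context has changed when any of its parts has changed.
function contextChanged(a: GameContext, b: GameContext): boolean {
  return JSON.stringify(a) !== JSON.stringify(b);
}
```

Whether a detected change actually warrants a UI change remains, as noted above, a design decision.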

Difficulties that need to be ironed out

It's quite difficult for players to play a game and pay attention to the user interface at the same time. An experiment I performed for my thesis showed that players (both male and female) have a tendency to exhibit tunnel vision and focus on a specific area of the screen. Because of this extreme focusing they tend to miss changes in the UI as they happen. This is a problem for game user interfaces in general, but it becomes an even bigger problem when you want to change that UI on the go. A player who just saw weapon elements in the bottom-right corner of the screen can get confused if they are suddenly no longer there because they were removed while he was not looking. This confusion requires extra thinking to figure out what happened and would negate the benefits in performance. For dynamic user interfaces to increase player performance, players should not have to think about the interface but rather about the task at hand and the relevant information they need in order to complete it.

Visual cues

After researching player attention I found numerous sources claiming it is possible to steer players' attention to critical elements right before they change. This is done by utilizing visual cues, a well-documented phenomenon in psychology. A visual cue is an abrupt change in shape, size, luminosity, position, hue or opacity. Most people have experienced this when trying to read a text with an animated advertisement next to it: it draws your attention away from the text. It can be put to good use as well, though!

A visual cue in Guild Wars 2 (ArenaNet, 2012)
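
As a sketch of how such a cue could be driven in code, the snippet below pulses an element's scale for a fraction of a second and only then applies the real change. The `CueTarget` shape, the timings and the `cueThenChange` helper are assumptions made for illustration, not an existing API.

```typescript
// A visual cue: a short, abrupt pulse in scale that pulls the player's eye
// to an element right before it is altered or removed.
interface CueTarget {
  id: string;
  scale: number;
  opacity: number;
}

// Pulse the element for `durationMs`, then apply the actual UI change once
// the player's attention is (hopefully) on it.
function cueThenChange(
  element: CueTarget,
  durationMs: number,
  applyChange: (element: CueTarget) => void,
): void {
  const start = Date.now();

  const tick = (): void => {
    const t = Math.min((Date.now() - start) / durationMs, 1);
    // Grows from 1.0 to 1.3 and back to 1.0 over the duration of the cue.
    element.scale = 1 + 0.3 * Math.sin(Math.PI * t);

    if (t < 1) {
      setTimeout(tick, 16); // roughly once per frame at 60 fps
    } else {
      element.scale = 1;
      applyChange(element); // now perform the real transition
    }
  };

  tick();
}

// Example: cue the weapon panel for 200 ms, then fade it out.
cueThenChange({ id: "weaponPanel", scale: 1, opacity: 1 }, 200, (el) => {
  el.opacity = 0;
});
```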

The second difficulty that became apparent to me is that simply noticing a change in the UI is not enough. If a player sees a change happen but doesn't understand it, they can still be confused. Players need to be able to instinctively grasp changes as they happen, without having to think about them. Luckily, I found numerous sources stating that changes in UI can be clarified through animation. Animation does this by providing extra information through how elements behave as they transition between states. Based on this finding I analysed several animation principles that are specific to transitions. I purposefully excluded animation principles that can benefit UI in other ways, for instance by conveying a metaphor, but that do not necessarily help to clarify a change, as those were outside the scope of my thesis.

Animations

The animation principles that I ended up with are the following:

Motion blur, Arrivals and departures, Anticipation, Follow through and Hierarchical timing

Motion blur

Adding motion blur to interface elements that move at high speed helps users keep track of where they are going. This principle is more about leaving behind a trail and doesn't necessarily have to be an actual blur; depending on the art style of the game it can just as well be a painterly line or something similarly stylized.
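
One way this trail could be approximated without a real motion-blur shader is to spawn short-lived, fading ghost copies of the element along its path, as in the hypothetical sketch below; the fade speed and data shapes are assumptions.

```typescript
// A fading ghost copy left behind by a fast-moving interface element.
interface TrailGhost {
  x: number;
  y: number;
  opacity: number;
}

const ghosts: TrailGhost[] = [];

// Call each frame while the element is moving quickly.
function leaveTrail(x: number, y: number): void {
  ghosts.push({ x, y, opacity: 0.5 });
}

// Call each frame to fade and clean up the trail behind the element.
function updateTrail(deltaSeconds: number): void {
  for (const ghost of ghosts) {
    ghost.opacity -= 2 * deltaSeconds; // a ghost lasts roughly a quarter second
  }
  for (let i = ghosts.length - 1; i >= 0; i--) {
    if (ghosts[i].opacity <= 0) ghosts.splice(i, 1);
  }
}
```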

Arrivals and departures

An animation that follows the principle of arrivals and departures makes clear where an element came from and where it is going. If the element originates from a point already on the screen, it can grow out from that point. If the element has no logical point of origin on the screen, it can simulate coming in from outside the screen by sliding in, or simply by fading in. Of course, you can use a custom animation that gets the same job done.
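
Sketched in code, a departure that keeps its direction readable might look like the snippet below, where an element slides toward the screen edge while fading out. The element shape and the linear easing are assumptions; any easing curve that keeps the path legible would do.

```typescript
// An interface element that slides away and fades during its departure.
interface SlidingElement {
  x: number;       // horizontal position in pixels
  opacity: number; // 0 (invisible) to 1 (fully visible)
}

// Call once per frame with the time elapsed since the departure began.
function departToRight(
  element: SlidingElement,
  startX: number,
  screenWidth: number,
  elapsedSeconds: number,
  durationSeconds: number,
): void {
  const progress = Math.min(elapsedSeconds / durationSeconds, 1);
  element.x = startX + (screenWidth - startX) * progress; // slide off-screen
  element.opacity = 1 - progress;                         // fade out as it goes
}
```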

Anticipation

An anticipation animation starts briefly before a change occurs and gives the player a hint of what is going to happen next. It is worth noting that while visual cues are animations in their own right, they are extremely short and take place even before the anticipation animation does.

Follow through

A follow-through animation is mostly the opposite of an anticipation animation. It takes place after a change in the UI has transpired and functions as extra feedback for what has happened. This also helps players who have missed a change by giving them an extra chance to notice it.
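
Put together with the visual cue and the anticipation principle above, the whole transition becomes a short timeline: cue, anticipation, the change itself, then follow-through. The sketch below sequences these phases with hypothetical timings and a stubbed `playAnimation` helper.

```typescript
type Phase = "cue" | "anticipation" | "change" | "followThrough";

// Stand-in for whatever animation system the game actually uses.
function playAnimation(phase: Phase): void {
  console.log(`playing ${phase} animation`);
}

const wait = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Each phase starts when the previous one ends.
async function transitionElement(): Promise<void> {
  playAnimation("cue");           // abrupt, very short attention grab
  await wait(100);
  playAnimation("anticipation");  // hints at what is about to happen
  await wait(250);
  playAnimation("change");        // the actual addition, removal or alteration
  await wait(300);
  playAnimation("followThrough"); // extra feedback after the change
}

transitionElement();
```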

Hierarchical timing

In some situations there is not just a single element that needs changing; oftentimes several elements on screen will need to be replaced. When multiple similar elements need to perform an action you can offset their timing so that one element starts, after which the rest of the elements follow one by one. Doing so communicates to the player that the element that started the change first is the one highest in the hierarchy, and it will subsequently draw their attention.
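
A minimal sketch of such staggered timing: the element highest in the hierarchy starts first and the rest follow at a fixed offset. The 80 ms offset and the `startTransition` stub are assumptions for illustration.

```typescript
// Stand-in for kicking off whatever transition the element performs.
function startTransition(elementId: string): void {
  console.log(`${elementId} starts transitioning`);
}

// Elements are passed in order from highest to lowest hierarchy.
function staggerTransitions(elementIds: string[], offsetMs = 80): void {
  elementIds.forEach((id, index) => {
    setTimeout(() => startTransition(id), index * offsetMs);
  });
}

// The ability bar leads; the minor panels follow and draw less attention.
staggerTransitions(["abilityBar", "buffIcons", "minimapOverlay"]);
```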

Conclusion

Summing everything up, you end up with three main pillars, or steps if you will, of dynamic user interfaces. The first is that the information shown in the interface is made relevant to the current situation. The second pillar concerns player attention and using visual cues to direct it to changing elements. The third and last pillar is about using animation to clarify changes. Applying these principles led to an increase in performance for participants in a user test I held. I would love to hear whether it does for your games as well, if you decide to pursue a similar path. In striving to keep this article succinct I haven't gone very in-depth on each of the main points, but if you have any questions I would be happy to answer them. I also uploaded a poster that visually summarises these steps for you to download here.

 

Important sources

Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a better understanding of context and context-awareness. Paper presented at Handheld and Ubiquitous Computing.

Eriksen, C. W., & St. James, J. D. (1986). Visual attention within and around the field of focal attention: A zoom lens model. Perception & Psychophysics, 40(4), 225-240.

Pashler, H. E. (1999). The Psychology of Attention. MIT Press.

Schlienger, C., Conversy, S., Chatty, S., Anquetil, M., & Mertz, C. (2007). Improving users' comprehension of changes with animation and sound: An empirical assessment. In Human-Computer Interaction – INTERACT 2007 (pp. 207-220). Springer.

Chang, B.-W., & Ungar, D. (1995). Animation: From cartoons to the user interface.
