What cues exist at the surface that make us believe the AI we are interacting with is alive and showing human-like qualities? Can these same cues be used to make an AI seem alien?
A couple of months ago I was at the annual conference of the European Human Factors and Ergonomics Society, where I attended the keynote presentation by Professor John Lee. The keynote was about "Making the human-technology marriage work", and during the presentation Professor Lee outlined a few of the surface-level cues (or, if you prefer, "stuff you can observe") that make technology appear more human, or, to be more accurate, make humans treat technology as if it has human features. I would like to share those with you and discuss them in terms of AI characters in computer games.
Colour Palette
The first was the colour palette used. Basically, if the colour palette is more naturalistic, more in the range of "life", then people are more likely to react to technology as if it were alive. This is easily done in modern computer games, and I would guess it is a pretty basic step in art design.
Opacity of Process
The next is Opacity of Process. This I found a bit more surprising: Professor Lee claimed that the less opaque (i.e. the more transparent) a technological actor's processes are, the more likely we are to treat it as if it were alive and assign human attributes to it. In other words, if by observing the technology you can see a reason behind its actions, and see why it chooses to do the things it does, then you are more likely to believe it is alive. The reason I found this a little surprising as a general cue is that in game AI, if an opponent's processes are too transparent, that ruins the illusion of life for me.
For instance, the Call of Duty games, especially the recent ones, are well known for making heavy use of scripted events. This works some of the time, and obviously (based on the popularity of the series) the experience is fun for many people. For me, however, it has always been far too obvious what the triggers are, and once I know that "if I do X, soldier Z does Y" every single time, the soldier is reduced to a lifeless automaton in my eyes.
I remember playing one of the Star Trek games, Elite Force II (I think), and feeling quite nervous and tense during a section where you creep around an abandoned ship. Every so often things would happen, like something moving just before you could see it, which added to the tension. That feeling of tension came to a crashing halt for me when, while inching forward inside a duct (tense and nervous, remember), I suddenly found an alien monster in front of me, not moving… and, as I discovered, immune to my fire.
When I inched forward just a little more, the creature predictably darted away in that scary "what did you just see?" way. But it was too late: the bad scripting had ruined the tension by making the situation too transparent.
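The "if I do X, soldier Z does Y" problem above can be sketched in code. This is a minimal, hypothetical illustration (none of these trigger or action names come from any actual engine): a fixed script that always maps a trigger to the same action, next to a variant that draws from a pool of plausible reactions so repeated playthroughs don't expose the script.

```python
import random

def scripted_response(trigger: str) -> str:
    """A fixed script: the same trigger always yields the same action."""
    script = {"player_crosses_doorway": "soldier_kicks_in_window"}
    return script.get(trigger, "idle")

def varied_response(trigger: str, rng: random.Random) -> str:
    """The same trigger drawn from a pool of sensible reactions, so the
    player cannot learn a one-to-one trigger/action mapping."""
    pool = {
        "player_crosses_doorway": [
            "soldier_kicks_in_window",
            "soldier_flanks_left",
            "soldier_calls_for_backup",
        ]
    }
    return rng.choice(pool.get(trigger, ["idle"]))

# The fixed script is fully predictable on every playthrough:
assert all(scripted_response("player_crosses_doorway") == "soldier_kicks_in_window"
           for _ in range(5))
```

Even this small amount of variation keeps the trigger itself hidden, which is the transparency problem the Elite Force duct scene ran into.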
But I get the feeling I am being too picky here. What Opacity of Process is actually saying is that AI should act in an observable fashion similar to how we expect life, or a human actor, to act. In other words, what it does should generally match our expectations of how we view the world around us. Opacity of Process refers to the fact that we believe we understand why humans do certain things, because we ourselves would act in that fashion. Therefore, if something appears to act like us, it must internally be like us.
Responsiveness
Talking about Opacity of Process moves easily into the next surface cue: Responsiveness. If an AI is to be treated as human, it should react to us, and to its environment, as a human would. A lack of responsiveness is what can lead to the scripting situations I mentioned above, which in turn cause problems with Opacity of Process.
The idea is pretty basic: if I throw a grenade at an enemy, and the enemy sees it, the enemy should react and try to avoid a horrible grenade-related death. It also means the AI should be adaptive and not react the same way every time. This is slightly in tension with Opacity, because reducing Opacity relies on you being able to predict what the technology will do. So the AI must react, but in a way that makes sense: neither randomly, nor in a way that gives the player the impression the AI is omniscient.
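The grenade case can be sketched as a tiny decision rule. This is a hypothetical sketch, not real shooter code: the enemy reacts only if it can plausibly perceive the grenade (no omniscience), and when it does react it picks among several sensible evasions (no single predictable response).

```python
import math
import random

def can_perceive(enemy_pos, grenade_pos, view_range=10.0):
    """Crude perception check: the enemy notices the grenade only if it
    lands within a fixed view range of its position."""
    return math.dist(enemy_pos, grenade_pos) <= view_range

def react_to_grenade(enemy_pos, grenade_pos, rng: random.Random) -> str:
    if not can_perceive(enemy_pos, grenade_pos):
        return "no_reaction"  # unseen grenade: reacting would look omniscient
    # Seen grenade: any of these reads as a sensible, lifelike response.
    return rng.choice(["dive_to_cover", "sprint_away", "kick_grenade_back"])
```

The perception gate handles the omniscience half of the problem, and the random choice among sensible options handles the predictability half.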
The AI in the Halo games is usually great at responsiveness, clearly responding not only to you as the player but also to other actors and factors in its environment. It is certainly what I would point to as some of the best AI in shooters on the market today, and it generally does so in a transparent fashion. But sometimes it achieves responsiveness by trading off a little opacity, making the mechanism behind its reactions a little too transparent, at least to me.
In my opinion this occurs in targeting, especially when sniping at Halo AI. Halo AI knows when you are targeting it (before you fire), and this sometimes makes it appear omniscient, moving to avoid your aim even when you are sniping from a hidden location. While I am using this as an example, I personally don't feel it is as "reality breaking" as bad scripting, but it can still leave me feeling the AI is less lifelike and more of a computer process. Perhaps a more grievous example of this over-responsiveness was in earlier RTS games, where the AI player was effectively running a map hack that let it see exactly what you were doing and react to it.
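One common fix for the map-hack flavour of omniscience is to route the AI's decisions through a perception filter rather than the raw game state. The sketch below is hypothetical (names and structure are mine, not from Halo or any RTS): the AI only ever sees units it currently has line of sight to, plus stale memories of units it actually saw earlier.

```python
def perceived_state(all_units, has_line_of_sight, memory):
    """Return (visible, remembered): what this AI can see right now,
    plus everything it has ever legitimately seen. It never learns
    about units it has no line of sight to."""
    visible = {u for u in all_units if has_line_of_sight(u)}
    memory |= visible  # remember only what was actually observed
    return visible, set(memory)

# A sniper the AI has never had line of sight to stays invisible to it:
units = {"marine", "hidden_sniper"}
visible, remembered = perceived_state(
    units, lambda u: u != "hidden_sniper", set())
```

Deciding from `perceived_state` instead of `all_units` is what keeps the AI from dodging shots fired from a hiding spot it could never have seen.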
Facial features
Moving on, Professor Lee also mentioned facial features. This one is a given, much like the colour palette: if something has a human-like face that responds in a human-like manner, we tend to think of it as human. This even covers quite basic human-like faces on things that are obviously robotic. What seems to matter most to us, according to Professor Lee, are the eyes and the mouth. This is also tied into lessening the Opacity of Process and increasing Responsiveness: not only should the face appear human-like, but its eyes and mouth should act in the fashion we expect and respond to us and the environment. So, if the creature is hurt or surprised, its eyes should widen; perhaps they should close briefly if there is a bright flash. We all know how an otherwise human-looking face can be made less believable by bad lip-syncing when it talks.
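Those eye and mouth reactions amount to a small stimulus-to-expression table. As a hypothetical sketch (stimulus and expression names invented for illustration), it might look like this, with a neutral default for anything the face has no rule for:

```python
# Map game stimuli to the two features Professor Lee singles out:
# the eyes and the mouth.
FACE_REACTIONS = {
    "hurt":         {"eyes": "widen", "mouth": "grimace"},
    "surprised":    {"eyes": "widen", "mouth": "open"},
    "bright_flash": {"eyes": "close", "mouth": "neutral"},
}

def facial_response(stimulus: str) -> dict:
    """Fall back to a neutral face for unrecognised stimuli, rather
    than freezing or reacting inappropriately."""
    return FACE_REACTIONS.get(stimulus, {"eyes": "neutral", "mouth": "neutral"})
```

The point of the table is that the face responds to the environment at all; a real implementation would blend animations rather than switch discrete states.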
Voice Interaction
Talking of talking brings us to the last point: Voice Interaction. This is one of the stronger cues for treating technology as human; if it talks to you, you are quite likely to start giving it human qualities. For example, my GPS uses a NZ male voice called "Paul", and all it takes to get people talking to my GPS and thanking it (him) for its (his) instructions is for me to sit them in the car and say "this is Paul"; his voice does the rest. Again I will point to Halo as a good example of this, with both the Covenant and human forces talking to you as the player, but also to each other as things happen in the environment (also helping with the impression of Responsiveness and reduced Opacity).
Personally, Voice Interaction is where I see some possibility for using the technology in Kinect to advance immersion in traditional controller-based games. When talking about Fable 3, Peter Molyneux was enthusiastic about the "touch" mechanic being introduced into the game. He correctly pointed out that we get quite attached to things we can touch and hold. Unfortunately, "touching" and "holding" in Fable 3 really just amounts to pressing a button, and while it creates a visual of contact, in my opinion it didn't really create a feeling of connection with whoever I was holding (although seeing someone struggle against you as you pull them along did add to the illusion).
Voice communication through Kinect has greater potential, I believe. It doesn't have to be a big part of the game, but imagine if, as well as reacting to your actions, AI in games could react to what you said. Milo was of course a tech demo of this kind of idea, but I can't help thinking of trading trash talk with the Marines in Halo, or shouting threats at the Covvies as I charge towards them. I mean… I do that already, but how much more immersed would I be if the AI talked back? Perhaps this would just take too much CPU, or simply too much work, but it would certainly be interesting, and perhaps more feasible in an adventure-game type of situation.
Voice interaction can also go wrong, however, and break the illusion of reality. When I enter Aurora in my save of Fable 3, I am greeted with a cacophony of voices talking about how their shop is owned by a hero, or exclaiming that I am a great king. This is again a responsiveness problem: the scripted interactions between me and the inhabitants of Aurora, which are supposed to make them seem more alive, have all fired at once, breaking the illusion as everyone talks over top of each other and repeats canned phrases.
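The Aurora cacophony is exactly the problem a "bark manager" is meant to solve. The sketch below is hypothetical (the class and its rules are mine, not Lionhead's): only one NPC may speak at a time, and each canned line has a cooldown before it can repeat.

```python
class BarkManager:
    """Gate NPC voice lines so greetings don't all fire at once."""

    def __init__(self, cooldown: int = 3):
        self.cooldown = cooldown  # ticks before the same line may repeat
        self.last_played = {}     # line -> tick it last played
        self.busy_until = -1      # no one may start speaking before this tick

    def try_bark(self, speaker: str, line: str, tick: int) -> bool:
        """Return True and claim the channel if this line may play now."""
        if tick < self.busy_until:
            return False  # someone else is already talking
        if tick - self.last_played.get(line, -self.cooldown) < self.cooldown:
            return False  # this canned phrase played too recently
        self.last_played[line] = tick
        self.busy_until = tick + 1  # assume the line takes one tick to say
        return True
```

With this in place, entering Aurora would produce one greeting now and the rest staggered over the following ticks, instead of everyone repeating the same phrases over top of each other.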
In Summary
So those are the surface cues for making an AI seem human. One last comment, though: it also seems to me that if you want to make your AI appear less human, you could take these same concepts and basically reverse them. Make the colour palette alien. Make their actions a little more random (or a little more predictable, if they happen to be robots or zombies). Have them react to the environment in strange ways that go against what a human would do, but show their alienness. Have their faces move and look alien; humans react especially positively to mammalian features in faces, and negatively to facial features that are non-mammalian (insectile, reptilian, piscine?). Have them be eerily silent as they move through the environment, or have them talk in an alien fashion, in ways that break the rules of expected grammar.
That last sentence, I feel, is really the heart of the whole matter. Technology we perceive as human acts, or performs its roles, in a way that matches what we would expect a human to do. Technology that is alien acts in a way that breaks or perverts those expectations. Although I would guess you can only go so far: what might be most alien, or most uncomfortable, is an AI that acts mostly as we expect something living to act, but breaks just one or two of those expectations in a slightly unexpected way.