This is the first in a series of posts about designing and developing character customization systems for 3D games, based on the experience acquired while developing four of them. This first post covers the requirements and use cases of such systems.
This is the first of a series of posts discussing the problem of character customization in computer games. I am writing this with several goals:
to put in order all the thoughts I have been piling up during the design and implementation of four character customization systems for commercial games.
to try to help anyone facing the development of such a system, by providing some analysis of its requirements.
Please note that while I use “character customization system” all the time, these systems may be used to configure people, cars, butterflies or zorks (whatever they are) depending on the game context.
The target audience is mainly technology programmers, but it may be useful to technical 3D artists as well.
I have been developing graphics technology for games for about 10 years now. That wouldn’t be enough to make me an expert in anything, except for the fact that I have had to develop four different character customization systems from scratch during those years.
The first one was developed for a game called “One: Become a legend“, which became history pretty fast together with its target platform: the Nokia N-Gage. Despite being developed for a non-smartphone with an integer-only ARM processor, the game managed to offer 3D characters with motion-capture animation, normal mapping, and directional+environment lighting, which was pretty cool. The system offered swappable parts with several color layers for every part, and fixed-location decals to add logos to t-shirts, etc. All the characters in the game were designed with the system.
The second one I participated in is the one in “All Points Bulletin” (APB). I only worked on it for a year, but that was enough to complete pre-production and start production. The game used Unreal Engine 3, which was heavily under development at that time and didn’t include enough features to implement the ambitious system that the company’s designers wanted. I think it is one of the best character customization systems ever developed for games, and most of the merit should go to the team that stayed there for years developing it. Some years have passed, but you can still see it in action in the re-spawned “APB Reloaded” game. It is a pity that the game didn’t succeed in its initial release, because the plans to grow it over time were promising.
Some years later, I worked at Blueside Inc. on the game “Kingdom Under Fire 2” and developed the first iteration of its character system. The game is not out yet, so I cannot really explain much about it. However, I can say this much: this game has the potential to be awesome, so take my advice and keep an eye on it.
Finally, I decided to take some time and develop “the ultimate” character system, and the result is Mutable: what we are offering here at Anticto. I have used this system to develop all the ideas that I had to discard from the other systems due to time and budget constraints.
If you are reading this, you probably don’t need an explanation of what a character customization system is, but maybe a slightly deeper analysis can be useful.
A character customization system is used to let players create their own avatars in games. This usually consists of an artist-driven set of modifiers applied to 3D meshes and textures, with some parameters exposed to the players. These parameters usually control the general shape of the character and details like facial features, skin colors, hairstyles, clothing and equipment.
How far you want to go with this is a matter of game and art design. You can go as far as Second Life and let your players import their own assets into the game, or you can keep tight control of the aesthetics by restricting the color palettes and mesh combinations. With more freedom comes more responsibility, and your players probably won’t care about what your artists envisioned for your game: they will try to make the ugliest possible characters, and if they have enough freedom in the decal design, you will have to hire a crew of censors to avoid offensive (and even illegal) designs.
The character system affects two branches of the game development pipeline: asset production and engine development. You need to prepare the assets for your system, so until the design and features are closed and verified, asset production may be subject to changes, which your artists won’t like. But we will focus on the technical side in these posts.
A character customization system may need to build characters in several scenarios of a finished game:
Loading time: When you are starting the game or entering a new level. In this scenario, you have most resources available to build the characters, including the GPU if necessary, since there isn’t any heavy real-time action going on. You want to build the most optimized version of your in-game data, and you can take some time to do it.
Customization lobby: With this, I refer to the scenes where the user is changing parameters in real time to configure their avatar (or any other content). In this scenario you usually have a lot of resources available, since the action is focused on the character being customized. You are still rendering a scene in real time, and you want the system to reflect updates to the 3D model as fast as possible. Often you want to use assets of higher quality than the ones used in-game, and you can afford denser meshes, uncompressed textures and more rendering calls. This means the generated model doesn’t need to be fully optimized.
In-game: This happens when the player is in the middle of real-time action and other players join. This is the most difficult use case since you have to build characters without stalling the CPU or the GPU, and with severe memory constraints.
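To make these scenarios concrete, a build request could carry the scenario it serves together with the budgets that scenario allows. Here is a minimal C++ sketch; the type names and the numbers are illustrative assumptions, not values from any particular engine.

```cpp
#include <cstdint>

// Hypothetical description of which scenario a character build is serving.
// The scenario decides how much time, memory and quality we can afford.
enum class BuildScenario
{
    LoadingScreen,      // plenty of time, GPU available, want fully optimized data
    CustomizationLobby, // interactive updates, higher-quality assets, partial optimization
    InGame              // streaming in the background, strict CPU/GPU/memory limits
};

struct BuildBudget
{
    float    maxMillisecondsPerFrame; // how much of a frame the build may steal
    uint32_t maxTransientMemoryBytes; // scratch memory allowed during the build
    bool     allowGpuWork;            // can image operations run on the GPU?
    bool     requireOptimizedOutput;  // must the result be packed/compressed for gameplay?
};

// Illustrative budgets only; real values depend on platform and game.
inline BuildBudget BudgetFor(BuildScenario scenario)
{
    switch (scenario)
    {
    case BuildScenario::LoadingScreen:
        return { 1000.0f, 256u << 20, true,  true  };
    case BuildScenario::CustomizationLobby:
        return { 8.0f,    128u << 20, true,  false };
    case BuildScenario::InGame:
    default:
        return { 1.0f,    16u  << 20, false, true  };
    }
}
```

The important point is that the same customization description has to be buildable under all three budgets.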
There are many requirements in tension in a character customization system. They will depend on the game that is going to use it, of course, but to some degree you will always need:
Performance in the construction process. It cannot take long to build a character and it cannot require a lot of memory.
Optimized data generation. You will want your data to be as optimal as data generated directly by your artists. Optimized geometry: with only the required triangles, to avoid overdraw and z-fighting. Optimized textures: without wasting space or channels, and using compressed formats. Optimized draw calls: you cannot use more draw calls for your customized character than you would use for a static one.
Flexibility in the range of modifiers that your artists can use to define the customization of characters. These modifiers will probably include mesh merging, morphing and removal, and various image effects to change colours, blend in normal map effects, apply projections, etc.
Reusability is not a usual requirement, since developers tend to focus on single projects when developing customization systems. However in the case of a general game engine, or a middleware like the one we develop, it is a key element.
In APB we had a long pre-production stage, where two programmers and two artists worked together defining what it would be possible to customize in the game and how. This included the skin color effects; the skin layers for scars, moles, tattoos, etc.; and how these would affect the normals, specular and other material properties. It also included how we would model the clothing accessories, the morphs of the body and the face, the hairstyles, etc. Then we did the same for the customization of the cars.
After that long phase, we threw away all the test assets, produced a many-page document for artists, developed a tool to define and preview all this data, and implemented the system in the game engine with those effects in mind. It sounds short now, but it was a huge task in terms of man-months. The system was set in stone, and any change in the customization features, like adding an extra layer to the skin or a different morphing parameter, would have serious implications on the programming side.
With time, I realized that it is very important to give control of what can be customized to the artists, so that they can define the entire construction process of the assets without requiring additional programming work. The only way to do this is with a data-driven process: by turning the construction process of the objects into data itself. A little bit like what happened with programmable shading in the GPU: instead of adding stages to the rendering pipeline, at some point the GPU designers realized it was much better to give us shaders.
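To picture what “turning the construction process into data” can mean, the construction graph authored by the artists can be stored as plain node records that the runtime interprets, instead of hard-coding each effect. The node types below are hypothetical and only sketch the idea; they are not Mutable’s actual format.

```cpp
#include <string>
#include <vector>

// Hypothetical node types an artist-facing tool might expose.
enum class NodeType
{
    MeshMerge,     // combine body parts into one mesh
    MeshMorph,     // apply a morph target driven by a parameter
    TextureLayer,  // blend an image layer (tattoo, scar, logo) over a base
    ColorModify,   // tint a region from a player-chosen color
    Parameter      // a value exposed to the player (slider, color picker, option)
};

struct GraphNode
{
    NodeType                 type;
    std::string              name;   // e.g. "SkinTone", "TorsoTattooLayer"
    std::vector<int>         inputs; // indices of upstream nodes in the graph
    std::vector<std::string> assets; // meshes and textures referenced by this node
};

// The whole customization definition is just data: a flat list of nodes.
// Adding a new skin layer or morph means adding nodes, not writing engine code.
using CustomizationGraph = std::vector<GraphNode>;
```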
In an MMO you may have many characters on-screen, but only a few will be close enough to require many pixels in the final rendered frame. The traditional approach to reducing the cost of complex scenes is to use several levels of detail (LOD) for an object and switch to cheaper ones when it is far away. Cheaper objects have simpler meshes and smaller textures. In the case of customizable characters it is necessary to build these LODs specifically.
Imagine the case of a necklace. In the highest LOD you probably want to model it with a mesh and a special metal material. In the next LOD it may be enough to model it as a morph of the mesh and a blended patch on the torso color and normal maps. In the last LOD you may want to ignore it completely. Supporting LODs adds complexity to the customization system, but it can greatly improve the performance of the resulting data and the build process.
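Continuing the necklace example, the same accessory can be described once with a different representation per LOD, so the builder can pick the cheapest one that still reads well at distance. The structures and asset names below are illustrative assumptions:

```cpp
#include <optional>
#include <string>

// How an accessory is realized at one level of detail.
enum class AccessoryRepresentation
{
    SeparateMesh,  // LOD 0: its own geometry and material (e.g. a metal shader)
    BakedIntoBody, // LOD 1: a small morph plus a patch blended into the torso textures
    Dropped        // LOD 2+: not built at all
};

struct AccessoryLodVariant
{
    AccessoryRepresentation    representation;
    std::optional<std::string> meshAsset;   // only for SeparateMesh
    std::optional<std::string> colorPatch;  // only for BakedIntoBody
    std::optional<std::string> normalPatch; // only for BakedIntoBody
};

// Example: a necklace described for three LODs (asset names are made up).
const AccessoryLodVariant kNecklaceLods[] = {
    { AccessoryRepresentation::SeparateMesh,  "necklace_lod0.mesh", {}, {} },
    { AccessoryRepresentation::BakedIntoBody, {}, "necklace_color.png", "necklace_normal.png" },
    { AccessoryRepresentation::Dropped,       {}, {}, {} },
};
```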
Imagine the case in the customization lobby when the player is changing the skin color of a complex character. The player is moving a slider handle, and he or she is looking at the 3D model, expecting real-time visual feedback. What is going on under the hood?
In this case you are using the maximum-detail character and the highest resolution textures, maybe a couple of materials with 2048×2048 texture sets including color, normal and specular maps. Whatever method you choose to customize the color, it will involve per-pixel operations like interpolations, soft-light or hard-light effects, etc. Moreover, you probably have additional layers on top of the skin, like moles, hair, tattoos, and garments modeled as texture effects (socks, tight t-shirts), that you need to bake. This adds up to millions of arithmetic and memory operations that you need to do in a few milliseconds to sustain the frame rate.
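To get a feel for the numbers: a single 2048×2048 layer is about 4.2 million pixels, so compositing a handful of layers over color, normal and specular maps quickly reaches tens of millions of per-pixel operations. The snippet below shows one common formulation of the soft-light blend for a single channel; different tools and engines use slightly different variants.

```cpp
#include <cmath>

// One common formulation of the soft-light blend for a single channel,
// with 'base' and 'blend' in [0, 1]. Treat this as illustrative rather
// than as any specific engine's definition.
float SoftLight(float base, float blend)
{
    if (blend <= 0.5f)
        return 2.0f * base * blend + base * base * (1.0f - 2.0f * blend);
    return 2.0f * base * (1.0f - blend) + std::sqrt(base) * (2.0f * blend - 1.0f);
}

// Baking a tattoo layer into a 2048x2048 skin texture means evaluating this
// (or something like it) per channel, per pixel: roughly 4.2M pixels times
// 3 channels per layer, before normal and specular maps are even considered.
```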
What can you do? Well, the answer is obvious in the 21st century: use the GPU. It is not difficult to move these operations to a shader and just update its parameters while the player changes the skin color. Of course, you would only use this shader in the customization lobby, and you would bake everything when using the character in-game. But if you have complex customization, it will not be possible to move all of it to the shader, so you will have to create several shaders depending on which parameters of your model are being edited. Moreover, you will have to specifically encode the process that generates the “partially baked” resources those shaders need, for every case.
This is what we did in some of the systems in the past, and it worked great. But any change in the customizable features of an object implied a lot of work to adjust all these processes and shaders, which makes this approach incompatible with giving control to the artists, as discussed above.
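To make the idea of partial baking a bit more concrete, here is a hypothetical sketch of splitting the parameter set into the ones kept “live” in the preview shader and the ones folded into intermediate textures up-front; the function and names are made up for illustration.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical split of the customization parameters for a lobby preview:
// the parameters being edited stay "live" as shader constants, everything
// else is folded into partially baked intermediate resources once.
struct PreviewPlan
{
    std::vector<std::string> liveParameters;  // updated every frame in the preview shader
    std::vector<std::string> bakedParameters; // baked up-front into intermediate textures
};

PreviewPlan PlanPreview(const std::vector<std::string>& allParameters,
                        const std::vector<std::string>& beingEdited)
{
    PreviewPlan plan;
    for (const std::string& p : allParameters)
    {
        const bool live =
            std::find(beingEdited.begin(), beingEdited.end(), p) != beingEdited.end();
        (live ? plan.liveParameters : plan.bakedParameters).push_back(p);
    }
    return plan;
}
```

The hard part, as noted above, is producing the matching partially baked resources and shaders for every such split.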
When you are in-game, you are probably using all of your resources, trying to push the quality to the maximum. Suddenly requiring 2048×2048 pixels × 4 bytes × 3 images to apply an image effect between two images onto a third one, for a character you need to build in the background because they are joining the area, may be a problem. On a PC, requesting too much memory is not that terrible: you have a thick OS that will virtualize and swap memory in and out for you, but it will still be slow. On some consoles and smaller devices, though, you will crash if you exceed the available memory.
You have to split all the operations into smaller tasks and organize your code and data to use the minimum amount of memory. This can take some time and will slow down the object construction, but it is not especially difficult. However, again, it depends on what operations you require for each object, and when these change, you may need to review these tasks as well.
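One straightforward way to bound peak memory is to process the images in strips rather than as whole 2048×2048 buffers: each strip is fetched (for example, decompressed), blended and committed before the next one is touched. A minimal sketch, assuming hypothetical fetch and commit callbacks:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Fetch one horizontal strip of a source image into 'out' (e.g. decompress it
// on demand). The callback is expected to resize 'out' to rows * width * 4 bytes.
using FetchStrip  = std::function<void(uint32_t firstRow, uint32_t rows,
                                       std::vector<uint8_t>& out)>;
// Hand one finished strip back (e.g. re-compress it or upload it to the GPU).
using CommitStrip = std::function<void(uint32_t firstRow, uint32_t rows,
                                       const std::vector<uint8_t>& data)>;

// Blend two RGBA8 images strip by strip, so peak transient memory is two strips
// rather than two (or three) full-size textures. The multiply blend is just a
// placeholder for whatever image effect the object actually requires.
void BlendInStrips(uint32_t width, uint32_t height, uint32_t stripRows,
                   const FetchStrip& fetchBase, const FetchStrip& fetchLayer,
                   const CommitStrip& commit)
{
    std::vector<uint8_t> base, layer;
    for (uint32_t y = 0; y < height; y += stripRows)
    {
        const uint32_t rows = (y + stripRows <= height) ? stripRows : (height - y);
        fetchBase(y, rows, base);
        fetchLayer(y, rows, layer);
        for (size_t i = 0; i < size_t(rows) * width * 4; ++i)
            base[i] = static_cast<uint8_t>((base[i] * layer[i]) / 255);
        commit(y, rows, base);
    }
}
```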
My latest attempt to resolve this tension between requirements is to use a kind of virtual machine approach. The artists define a diagram with blocks connecting player-controlled parameters, meshes and textures to create an object hierarchy. This is compiled into a set of operations and constant data. This “program” can then be reorganized automatically for the several scenarios described in this post: to achieve maximum performance (trying to generate shader fragments automatically), to use minimum memory, and to optimize for the cases where only subsets of parameters are modified at run-time.
The virtual machine runs this program in different ways for different scenarios, and it has operations like texture packing, image layer effects with small blocks, etc. It can easily run tasks in parallel and it can automatically apply memory constraints to the program execution.
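As an illustration of what such a compiled “program” could look like, the sketch below uses a flat list of instructions over virtual registers that hold meshes and images. The opcodes are hypothetical and do not reflect Mutable’s actual instruction set.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical opcodes produced by compiling the artist-authored graph.
enum class OpCode : uint8_t
{
    LoadMesh,       // load a constant mesh into a register
    LoadImage,      // load a constant image into a register
    MergeMesh,      // merge two mesh registers into one
    MorphMesh,      // apply a morph weighted by a runtime parameter
    BlendImage,     // composite one image register over another
    PackImage,      // pack image registers into an atlas / final texture set
    SwizzleChannels // move channels between images (e.g. a mask into alpha)
};

struct Instruction
{
    OpCode   op;
    uint16_t dst;     // destination register
    uint16_t args[3]; // source registers, constant indices or parameter ids
};

// A compiled customization "program": constant assets plus a linear instruction
// list that can be re-ordered, split into parallel tasks, run under a memory
// budget, or partially re-executed when only a subset of parameters changes.
struct CompiledObject
{
    std::vector<std::string> constantAssets; // meshes and images referenced by Load* ops
    std::vector<Instruction> code;
};
```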
In future posts I will try to discuss the specifics of the common modifiers like mesh merge, texture pack and image effects, as well as discussing some open problems. Sometimes I will focus on the context of our approach, but in many cases the information may be useful for general development.