In today's Gamasutra cover feature, programming veteran Ben Campbell (Heavenly Sword, Creatures series) boils his tool development experience down to snippets of wisdom, useful for developers of all experience levels!
This article is an attempt to boil the author's tool development experience down into some reusable snippets of wisdom - a kind of first pass "best practices" document. Some of it falls under the realm of common sense, some of it might even conflict with your own experiences, but hopefully you'll find something useful!
Every time someone comes to you with a problem, you have to:
Interpret their observations ("It just stopped working", "I didn't do anything", "It just broke")
Extract an accurate set of symptoms
Form a diagnosis
Figure out if it's a new problem or just another manifestation of a known issue
Come up with a fix
That's a lot of work. You don't want to be doing any more of that than you have to. So when errors crop up, you want to fix them before they occur again and force you to repeat the whole tiresome process.
In short, you should aspire to laziness. But like all good things, the attainment of true laziness requires some work.
There is also a bit of a conflict here. When users encounter problems, they just want quick workarounds to let them get on with their work. But tracking down an issue and fixing it properly once and for all might take you a while.
A good compromise is to provide the user with an immediate kludge (if there is one), but make them give you a postmortem snapshot of the data involved so that you can replicate, examine and fix the problem at leisure. The last thing you want is agitated users breathing down your neck while you work. Maybe the error-handling code in your tools could automatically provide some kind of postmortem data snapshot mechanism to make things really easy.
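As a minimal sketch of what that might look like in a Python-based tool (the function, paths and layout here are all hypothetical), the top-level error handler could copy the input data and a traceback off to a shared location:

    import os, shutil, time, traceback

    def snapshot_for_postmortem(datafiles, destdir=r"\\server\tool_postmortems"):
        # Squirrel away the offending input data, plus the traceback that
        # got us here, so the problem can be reproduced later at leisure.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        dest = os.path.join(destdir, stamp)
        os.makedirs(dest)
        for f in datafiles:
            shutil.copy(f, dest)
        with open(os.path.join(dest, "traceback.txt"), "w") as out:
            traceback.print_exc(file=out)
        return dest

Called from your top-level exception handler, something like this hands you a self-contained case to dig into once the user is happily back at work.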
So kludges have their place - they let the user get on with their job. But whatever you do, don't allow multiple kludges to accumulate to the point that certain tasks become voodoo. As soon as a proper fix is in place, make sure the kludge is killed off!
You should also treat user mistakes as bugs. If one person made the mistake, then others probably will too. And they'll come and hassle you every time it happens. So try and figure out how you can bulletproof things to prevent human error.
Don't add a feature without knowing what problem it is solving. If a user has requested a new feature, step back and make sure that it's not really just a workaround in disguise.
This relies on you having a good grasp of what task the user is actually trying to perform. All too often, programmers add features that address what they think the user is trying to do, but which end up as superficial and ineffectual solutions.
Smile when you get bug reports! Encourage them! If people are reluctant to come to you with bugs, a culture of voodoo workarounds is likely to develop. Rather than telling you that there is a problem, your users will just come up with crackpot workarounds that seem to work, but are more likely than not to cause massive problems elsewhere.
But it gets worse. These workarounds will propagate around the team until they are accepted as the official way to do things. And nobody will question them any more. In extreme cases, this voodoo can infect new projects, even ones using completely different tools.
For example, on a project I once worked on, artists got into the habit of routinely using the "Freeze Transforms" tool in Maya to work around a particular unsolved issue in the exporter. The next project used a different export pipeline, but the practice persisted. And we ended up with silly, silly geometry, such as 1-metre-square objects offset from their origin by 2 kilometres. That made for some big bounding spheres and culling that was... umm... non-optimal. And the worst thing was that it wasn't visually obvious, so nobody would immediately realise why things were running slower than they should be.
So make sure you bite your tongue, grit your teeth and smile when people come to you with bug reports. You'll save yourself pain in the long run.
As the saying goes: when all you've got is a hammer, everything looks like a nail. C++ is the main hammer employed in game development and, inevitably, it gets called into service in all sorts of inappropriate situations.
Make sure you know some scripting languages. Python, Perl, Ruby, Lua, PHP (it's not just for web apps!), whatever. They're all fantastic in various ways.
Try and attain at least a passing familiarity with some of the unix-style tools and environments, particularly unix shell scripting. It's pretty hard to find a more versatile system when you want to glue disparate programs together into an automated process. There are a number of packages which provide Windows ports of such tools (Cygwin being the most prominent example).
The big argument you hear (especially from non-technical management types) is that not all the programmers know how to use language X or tool Y. This really isn't an issue. If you can handle working on something as complex as a game in C++, then you're not going to have too much trouble picking up a modern scripting language or tool or three as you go along.
Anything that needs to be done more than once is a candidate for automation. How many late-night final candidate discs have been borked by someone who has had too much coffee and pizza and too little sleep?
Scripting (particularly unix-style shell scripting) was built for this kind of stuff - moving things about, incrementing version numbers, checking files, collating disc images, performing sanity checks, starting burns, invoking test suites... all boring drudge work that humans shouldn't be trusted with.
Think beyond build processes, though. Step back for a moment and look at the various production pipeline paths in your project. Where do you regularly see human error creeping in? Where do the stalls occur? Where are the bottlenecks? Are there any steps you could decouple from each other?
Do your artists require programmer help to add new assets into the game? Figure out how to bypass the programmer.
Don't trust your artists to keep within memory budgets? Make sure there's something that automatically sets off flashing red lights and alarms if budgets are exceeded.
Do your designers have to manually pull files from various places around the network to try out levels? Hack up a noddy little script with a simple text menu: "Press 1 to install 'Lava world', 2 for 'Ice world', 3 for 'Jungle world'"...
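Something along those lines really can be a few minutes' work. A Python sketch (the level names are from the example above; the paths are made up):

    import shutil, sys

    LEVELS = {
        "1": ("Lava world",   r"\\server\levels\lava"),
        "2": ("Ice world",    r"\\server\levels\ice"),
        "3": ("Jungle world", r"\\server\levels\jungle"),
    }

    for key, (name, src) in sorted(LEVELS.items()):
        print("Press %s to install '%s'" % (key, name))
    choice = input("> ").strip()
    if choice not in LEVELS:
        sys.exit("Unknown option: %r" % choice)
    name, src = LEVELS[choice]
    print("Installing '%s'..." % name)
    # Pull the level data down from the network into the local game dir.
    shutil.copytree(src, r"C:\game\levels\current", dirs_exist_ok=True)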
The other great benefit of automation is that it provides inherent documentation along the way. The source code for the automation tool is effectively a recipe that lists the steps required to implement the process. And like all good code, there'll be comments there that explain the rationale behind each step. Right?
Well... obviously. Text formats deserve the same consideration: the big, obvious benefit is that humans can read them, but there are other benefits too. To begin with, they can also contain meta-data and annotation.
For example, a geometry export tool could always start an output file with a header block of comments saying which version of the tool was used, the source file the data came from, a timestamp, the name of the machine it was run on and the username of the person who kicked it off. This is great for assigning blame when things go pear-shaped.
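In a Python exporter, stamping that information at the top of each output file is only a few lines (the function name and header layout here are just one possibility):

    import datetime, getpass, platform

    def write_provenance_header(out, toolname, version, srcfile):
        # Record who produced this file, from what, when and where -
        # invaluable when a bad asset turns up weeks later.
        out.write("# generated by %s v%s\n" % (toolname, version))
        out.write("# source:  %s\n" % srcfile)
        out.write("# date:    %s\n" % datetime.datetime.now().isoformat())
        out.write("# machine: %s\n" % platform.node())
        out.write("# user:    %s\n" % getpass.getuser())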
Text formats are much easier to debug. Debugging binary formats always seems to take lots of painstaking examination using a hex viewer, pencil and paper to note down all the offsets you'll need to keep track of. For particularly knotty binary formats, you tend to end up writing special tools just to debug them. No such problems with text formats - just fire up your favourite text editor!
If you can, prefer line-oriented text formats. Many existing text processing tools work best line by line, particularly the ones with a unix heritage. For example, want a list of the textures used by all the exported ".mesh" files in a directory?
$ cat *.mesh | sed -n "s/^texture '\(.*\)'/\1/p" | sort | uniq
...would do the trick nicely. This scans all the ".mesh" files in the current directory for lines of the form texture '<filename>', extracts the filenames, sorts them alphabetically, removes any duplicates and outputs the resulting list to stdout, one filename per line. The 'sed' command in the above example could easily be replaced by grep, perl or awk according to taste (or lack thereof). Regular expressions are great, even if they do tend toward being write-only code!
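And if shell one-liners aren't your thing, a few lines of Python do the same job:

    import glob, re

    textures = set()
    for fname in glob.glob("*.mesh"):
        for line in open(fname):
            m = re.match(r"texture '(.*)'", line)
            if m:
                textures.add(m.group(1))
    for tex in sorted(textures):
        print(tex)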
Another benefit of using text files is that they get you up and running quickly. You can always lay nicer interfaces on top, but users can hit the ground running with nothing more than a text editor.
Incidentally, it's probably worth encouraging all the non-programmers on the team to install a reasonable text editor. There are lots of great free ones out there, and they are all much better than Notepad. For extra points, make up some syntax-highlighting files for any of your file formats that you might reasonably expect to be viewed or edited on a user's machine.
Don't overestimate the readability of XML. It really isn't all that readable, even with a good XML editor. The verbosity gets in the way. If your production processes require that members of your team regularly have to manually read or edit XML, they will hate you for it. There are lots of good reasons to use XML, but the limited readability it provides shouldn't be your deciding factor. If your files are designed to be machine-generated and machine-read, then XML provides a great structure to build within. But if they are designed to be written or read by people, steer clear.
At the end of your production pipelines you'll probably be wanting to cook things down into platform-specific binary formats. But try and keep your intermediate formats text-based as much as is reasonable.
A tool to let you create binary files from a text description can be useful (one I wrote can be found at http://binify.sf.net). The idea is that the files can easily be annotated with comments and other meta-data, making an otherwise unreadable binary format quite easy to follow. The final text-to-binary conversion is generic, so you don't need a new tool to handle each new file format (thus reducing potential new sources of error). It's also a great way to manually hack up binary files for testing. The downside is that such a generic description language doesn't really provide much semantic information about the file format, making it less amenable to further automated processing (as opposed to using, say, XML). So this approach is usually only good for that final text-to-binary step.
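To give a flavour of the idea, such a description might read something like this (purely illustrative - this is not binify's actual syntax):

    # header for a cooked level file (hypothetical format)
    u32  0x4C45564C   # magic number 'LEVL'
    u16  3            # format version
    u16  0            # padding
    str  "lava_world" # level name
    f32  123.5        # target completion time, seconds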
There's something really satisfying about centralised servers. But things tend to go wrong from time to time, and great big central servers just become great big single points of failure. So think twice before you centralise things and make sure the benefit outweighs the risk.
The version control system you've already got in place can probably shoulder the burden of synchronising data across multiple users in a lot of cases. Not to mention all sorts of other useful goodies it provides - branches, merging, check-in comments, possible issue-tracker integration, an existing backup strategy and of course...version history! So if you've got data you need to share, consider just using plain files under version control.
The other drawback of centralisation is the increased complexity of your project infrastructure. It makes things more brittle, it's harder to set up new users, it's harder to diagnose and fix when things go wrong and it's harder to duplicate the development environment at a different site if the need arises. For example, how easy would it be to set up another team working on a separate project but using the same tools and technology? How about an off-site developer or team? Could you provide your publisher with enough of a development environment to do full localisation (and corresponding testing) without your involvement? Does your infrastructure discourage prototyping? What if your central server blows up? Can people work offline until a replacement is in place?
This is not to say centralised services don't have their place - they do. Just make sure you think carefully about their goals and possible ramifications first.
Most experienced people in the games industry these days probably have a great story about some amazing tool or uber-editor that spent months in development, lovingly crafted byte by byte by a programmer who really cares. And when it finally arrives it's a thing of beauty - slick, smooth, featuring the loveliest modern user-interface widgets known to mankind and packed with a million useful features the programmer thoughtfully added.
And it's totally unsuited to the task.
There are three big problems:
Users don't really know what they want.
Programmers don't really listen to users.
The nature of the task the tool addresses will change over time anyway.
So just don't bother. Hack something up quickly and get it in use as soon as you can. Use feedback from users to direct your development. Note down which particular atrocities people yell about the loudest and fix them first. You'll end up with a much better tool in the long run.
There are probably some Agile practices which are relevant here, but I don't really have enough experience to comment further. But please feel free to insert your favourite methodology-related buzzwords here.
Make your release process as simple as possible. Automate everything, so that all you have to do is hit a button to pack up a new build of your tools and email everyone who needs to know. Use a packager so your users don't have to do anything manually except run the setup program. For Windows, Inno Setup or the Nullsoft installer (NSIS) are nice, easy options.
Be careful to increment your version numbers and make them easy to check. When problems occur, there's nothing more annoying than not knowing what version of a tool the user has installed.
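One cheap approach, sketched in Python: keep the version in a single constant, report it on demand, and stamp it into everything the tool writes (the tool name here is made up):

    import sys

    TOOL_VERSION = "1.4.2"   # bump this with every release

    if "--version" in sys.argv:
        print("meshexport %s" % TOOL_VERSION)
        sys.exit(0)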
Don't force upgrades. It's tempting to just push tool updates out automatically, but there are often valid reasons for users to retain older versions. For example, a user working on a critical piece of work under a tight deadline won't want to risk introducing potential problems by installing a new version of the tool. If the old tool does the job, let them delay their update until later.
Maintain change logs so users can find out what the new version fixes (and what it might potentially break). Maybe you can generate change logs directly from your source control check-in comments.
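For example, with Subversion you could dump the check-in comments between the previous release and this one straight into a change log (the repository URL and revision numbers here are placeholders):

    import subprocess

    # Grab the check-in comments between the last release and this one.
    log = subprocess.check_output(
        ["svn", "log", "-r", "1500:1560", "http://svnserver/tools/trunk"])
    with open("CHANGES.txt", "wb") as f:
        f.write(log)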
When something goes wrong, users don't really care what went wrong, or why. All they really want to know is how to fix it as quickly as possible so they can get on with whatever it is they do.
Your error messages should be designed to be helpful to the user, not to you. You've got a debugger; they don't. Try and phrase them in such a way that they will help a user figure out how to fix things. Consider linking the error to a separate helpfile with more information. If you've got a wiki, create a page for each error, and add a button to your error dialog which pops up a browser. Give each error message a unique code to aid linking (eg "ERR015 - mesh has non-manifold geometry").
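Wiring the error code through to the wiki is then trivial - a sketch, assuming one wiki page per error code (the base URL is made up):

    import webbrowser

    ERROR_WIKI = "http://intranet/wiki/ToolErrors/"   # hypothetical wiki location

    def show_error_help(code):
        # e.g. show_error_help("ERR015") brings up the page for that error.
        webbrowser.open(ERROR_WIKI + code)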
Batch up your error reporting if you can. During a process, don't bail out on the first error if you can sensibly keep going. Try and turn up as many further errors as you can before admitting defeat. For example, if you're exporting a scene from a 3D app, the user will be pretty annoyed if they have to rerun the export 50 times, fixing one trivial little mistake each time. Edit, export, DOH! Edit, export, DOH! Edit, export, beat up tools programmer...
Consider doing a preliminary sanity-checking pass over your data to catch common problems. Don't force the user to wait for 10 minutes of processing before telling them they've named something wrong. Of course, you should also allow users to cancel lengthy operations.
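The shape of the code is simple enough: accumulate problems rather than bailing out. A sketch - the mesh attributes, checks and budget here are all hypothetical:

    POLY_BUDGET = 10000   # made-up per-mesh polygon limit

    def sanity_check(meshes):
        # Collect every problem we can find before giving up, rather
        # than stopping at the first one.
        errors = []
        for mesh in meshes:
            if not mesh.name.islower():
                errors.append("ERR003 - '%s': names must be lower-case" % mesh.name)
            if mesh.polycount > POLY_BUDGET:
                errors.append("ERR007 - '%s': over the polygon budget" % mesh.name)
        return errors   # report the whole lot in one go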
Using exception handling is generally frowned upon in game code, but it can be great in tools. It's a nice easy way to let lower-level code throw-and-forget. Higher-level code generally has a much better sense of context, and is much better placed to construct useful error messages for the user.
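In Python terms, the pattern looks something like this (the file format and function names are hypothetical) - low-level code raises without worrying about presentation, and the top level translates:

    def read_chunk(f):
        # Low-level code: throw-and-forget. It has no idea (and doesn't
        # care) how the error will be presented.
        header = f.read(8)
        if len(header) < 8:
            raise IOError("truncated chunk header")
        return header

    def load_mesh(filename):
        with open(filename, "rb") as f:
            return read_chunk(f)

    # The top level knows the context, so it can phrase something useful:
    try:
        mesh = load_mesh("lava_world.mesh")
    except Exception as e:
        print("Couldn't load 'lava_world.mesh': %s" % e)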
Don't bother to output warnings. Despite all best intentions, users will ignore them and they'll just accumulate over time to form a big nasty unreadable mess which obscures actual information. For example, the cryptic green-falling-letters effect in "The Matrix" is actually just the warning message output from their renderfarm tools. True story.
It's perfectly valid to have a '-debug' or '-verbose' option in your tool to help you track down problems, just don't burden your users with lots of rubbish output. They don't need to know the names of all 5000 files that your tool loaded successfully, they only need to know the name of the one that failed.
The more often users come and hassle you the less time you've got for important programmer stuff. You've probably got lots of important meetings scheduled. Long meetings. In the cafe across the road from the office. By yourself.
So it's in your interest to have a good body of documentation available to point users at. And it's great fun nurturing your contemptuous-glaring skills when they come to you with simple problems which are plainly covered by the docs. They soon learn.
The main thing is to avoid the build-up of rule-of-thumb oral history and superstition. This kind of voodoo always comes back to bite you in the end, so you're much better off nipping it in the bud with documentation.
Consider giving users write access to documentation, or use a wiki or some other user-editable system. Offload as much as you can onto the users themselves. They'll do a better job than you anyway.
HOWTOs and tutorials covering common tasks are particularly useful, and are nicely informal and easy to write.
If you can, check the master documentation for a tool in alongside its source code. Firstly, it'll help you keep the docs in sync with the tool as they both evolve. Secondly, it'll reduce the likelihood that the docs will get lost in the mists of time. This tends to happen a lot when transferring tools and tech to other teams - the recipient team just assumes there aren't any docs.
Many tools end up with documentation consisting entirely of redundant menu-item descriptions. For example:
File->Open - open a file
This should be interpreted as a warning sign. Or a cry for help.
It's usually worth sacrificing total technical control for ease of use. For example, when setting up GUIs for custom shader parameters, consider abstracting away from the hardware a little bit. If you just expose every parameter or flag available in the render pipeline, the artist will begin to feel that they need a degree in advanced maths to work it all out. Even quite a simple shader model can feel pretty overwhelming when all the parameters are exposed. Instead, find out what concepts they are already familiar with and see if you can make your tool present itself in a similar fashion.
Find out the terminology your users are familiar with, and use that where appropriate. Don't invent your own weird and wonderful terms for existing concepts (unless there is comedy value in doing so).
If you expose every little detailed option, users will miss the ones that really matter.
Don't add options where an automatic decision could be made. Adding options to a tool is all too often a weasely cop-out in disguise. Remember: every extra configuration option is an extra thing that the user could get wrong.
During the course of a project, you'll probably find yourself (and others) hacking up lots of little tools, plugins and scripts to help out with odd jobs here and there.
It's worth collecting these microtools and checking them in alongside all the proper grown-up tools. You never know when one of them might come in handy. Of course a lot of them will only work in very specific circumstances, so make sure you add some notes outlining their intended purpose and limitations. A concise comment block at the top of the script or source code usually suffices.
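Something like this at the top of the script is plenty (a made-up example):

    #!/usr/bin/env python
    # fix_texture_paths.py
    #
    # One-off hack to rewrite absolute texture paths in .mesh files
    # into relative ones. Assumes our current directory layout - it
    # WILL need tweaking for anything else. Check before reusing.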
The chances are that someone else will need something similar to an existing microtool, and the code can be generalised far enough to cover it. Some may even evolve far enough to be considered 'proper' tools in their own right.
At the very least these microtools provide documentation, of a sort, on how particular tasks can (or should) be performed.
Don't forget to collect tools created by non-programmer team members (eg MEL or MaxScript from artists). These often get overlooked, but they exist because they are useful.
When it comes down to it, good tools are really a collaboration between the tool developers and the tool users. If the communication isn't there, you're going to end up with tools that suck.
Try actually using your own tools. An obvious point, but it's amazing how often this gets overlooked. If you're working on a level editor, try actually creating a working level with it.
Learn what your users actually do. For example, if you're working on tools for your art team, spend a day going through the tutorials for Max or Maya or whatever other major apps they use. Gain some empathy!
Sit down and watch over your users' shoulders as they work. Don't say anything, just watch for a while. You'll be amazed at how much pain users will endure over little things that could easily be fixed.