Opinion: Putting The Audio Back In Audio Programmer

Nicolas Fournel, principal programmer at Sony Computer Entertainment, talks about the rarity of audio programmers and clears up misconceptions about the role in this opinion piece reprinted from #altdevblogaday (http://altdevblogaday.org/).

Nicolas Fournel, Blogger

June 15, 2011

6 Min Read

Hi, my name is Nicolas and I'm an audio programmer… In the game industry, there are not too many of us. You will always find graphics, gameplay or physics programmers on a team, but audio programmers are a rare breed. Actually, small studios sometimes don't even have one, and it is often the new guy who will inherit the task of dealing with a middleware sound engine (frontend coding comes a close second as a mandatory rite of passage for new programmers). Even bigger companies may only have one audio programmer who is shared between various projects and teams, or who is assigned to audio part-time only.

Supply And Demand

In a sense, this scarcity is relatively nice for us, as there is usually a lot more demand than supply. On the other hand, it leads to quite a few misconceptions about the role and about what an audio programmer could potentially bring to a team.

It starts with the audio programmer's interview itself. Since there are so few of us, a candidate is rarely interviewed by his/her peers. Usually he/she will end up talking to two different groups of people: the other game programmers and the sound designers / audio director. Both groups will ask questions about what they are familiar with (and it is important, don't get me wrong, to have strong basic programming skills, or to know how to use a DAW), but very rarely about audio programming itself. During my career, I have answered numerous questions about 3D maths, optimisation, multithreading and C++ tricks, yet no one has ever asked me to give the pros and cons of the various algorithms for writing a reverberation, or to say when I should use polyphase filters, for example.

Side note: it becomes even more confusing when recruiters throw the term "audio engineer" into the mix and you start receiving applications from both Bob Katz wannabes and software engineers at the same time… Interestingly, I have also seen a few "sound programmer" positions advertised in Japan which actually referred to synthesizer programming (i.e. creating patches on synthesizers in the audio department) and not to coding.

All that being said, what is the role of an audio programmer, then? A recent post by Brian Schmidt (from the Xbox audio hall of fame) on the Video Game Musicians' mailing list gives us a few hints. In the game industry, he said, you can currently find three types of audio-related programming positions: implementation programmer, engine programmer and DSP programmer.

And Then They Were Three

Let's see what they do in a bit more detail, shall we?

The implementation programmer will add audio to the various game components. Typically, he will load and unload banks of audio assets, and trigger background music, dialog lines or sound effects (hopefully a lot of that will be data-driven too). He will also update all these elements when needed (e.g. change their volume, pitch, pan, etc.). In order to do all that, he will call functions from the in-house audio engine or from some audio middleware such as FMOD. Obviously there is no need for him to be an audio wizard to do so. However, it is often a very good position in which to familiarize yourself with the game's code base, since you will most likely have to dive a bit into all of the sub-systems.
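To make that concrete, here is a minimal sketch of the kind of call-level code an implementation programmer writes day to day. The AudioEngine class, SoundHandle type and every method name here are hypothetical stand-ins for whatever in-house engine or middleware API a team actually uses (a real API such as FMOD's looks different); the printf stubs just make the sketch runnable.

```cpp
// Hypothetical illustration of implementation-level audio code.
// AudioEngine and its methods are stand-ins for an in-house engine
// or middleware API, not any real library's interface.
#include <cstdio>
#include <string>

struct SoundHandle { int id; };

class AudioEngine {
public:
    void loadBank(const std::string& bank)    { std::printf("load bank %s\n", bank.c_str()); }
    SoundHandle play(const std::string& name) { std::printf("play %s\n", name.c_str()); return SoundHandle{next_++}; }
    void setVolume(SoundHandle, float vol)    { std::printf("  volume %.2f\n", vol); }
    void setPitch(SoundHandle, float ratio)   { std::printf("  pitch %.2f\n", ratio); }
private:
    int next_ = 0;
};

// Typical implementation-programmer duties: load a bank when the level
// starts, trigger a sound event from gameplay code, update its parameters.
int main() {
    AudioEngine audio;
    audio.loadBank("level3_sfx.bank");
    SoundHandle h = audio.play("sfx/explosion");
    audio.setVolume(h, 0.8f);   // e.g. attenuated by distance to the listener
    audio.setPitch(h, 0.95f);   // slight variation to reduce repetition
}
```

Useful work, certainly, but nothing in it requires any real knowledge of audio.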
The engine programmer, based on Schmidt's post, will create a layer that sits on top of the lower-level audio system. He may be responsible for building the interactive music system that intelligently crossfades between tracks, for dealing with sound effect instance limiting, dialog queuing, etc. I would rather call that working on various audio components than developing an engine, but you get the idea. This position requires a deeper understanding of system programming than the implementer job: in the case of the interactive music mentioned above, for example, he will usually have to deal with streaming, multithreading, etc. The programmer is also more likely to encounter some sound or musical terminology, but again, an extensive knowledge of audio is hardly needed at this level.

Finally, at the lowest level, DSP programmers will deal with the audio samples themselves: they will write highly optimized mixing routines, filters, reverberations, etc. These guys are the ones who will know more about audio: Fast Fourier Transforms, FIR and IIR filters, convolution, generation of waveforms without aliasing, and so on.
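To give a flavour of that sample-level work, here is a sketch of a one-pole low-pass filter, about the simplest IIR filter there is. The class and method names are mine, and the coefficient uses the common exponential-smoothing approximation a = 1 - exp(-2π·fc/fs); a production filter would of course be more sophisticated.

```cpp
// One-pole IIR low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1]).
// A minimal sketch; the names are illustrative, not from any real engine.
#include <cmath>
#include <cstdio>

class OnePoleLowPass {
public:
    // sampleRate and cutoffHz in Hz; a_ grows with the cutoff frequency.
    OnePoleLowPass(float sampleRate, float cutoffHz)
        : a_(1.0f - std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRate)) {}

    float process(float x) {
        y_ += a_ * (x - y_);   // smooth the output towards the input
        return y_;
    }

private:
    float a_;        // smoothing coefficient, 0 < a_ <= 1
    float y_ = 0.0f; // previous output sample (the filter's state)
};

int main() {
    OnePoleLowPass lp(48000.0f, 1000.0f); // 1 kHz cutoff at 48 kHz
    // Feed a unit step: the output rises exponentially towards 1.
    for (int n = 0; n < 5; ++n)
        std::printf("y[%d] = %f\n", n, lp.process(1.0f));
}
```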
Of course, sometimes a single programmer will be able to do it all (don't let him go!) or to handle a combination of the tasks described above. Also, this is a very runtime-oriented list: writing audio tools (e.g. to package sound effects into banks, or to make them less repetitive with scripting) is another very important aspect of game audio development.

And The Audio In All That?

As you can see, these three job descriptions show various levels of involvement with audio. Since there are so few game audio programmers available, why not let them work on audio, rather than on vaguely audio-related matters? The implementer and the mid-level audio system developer described above don't necessarily have to be audio experts. Calling the sound API to trigger sound effects can be done by any game programmer; so can writing a system that streams files based on some logic (for interactive music or dialog).

I strongly believe the audio programmer's main role and focus should be to imagine new audio-centric solutions to the problems of the game teams and the sound department. This is not the case nowadays. New game designs based on audio analysis, sound effects generated at runtime with procedural audio, perceptual voice management, spectrally-informed mixing, audio shaders, content-aware audio tools: these are only a few examples of what an audio programmer could be working on, and they are almost totally absent from our games and pipelines today. This is in part why game sound has not evolved as much as its graphical counterpart. If you have a programmer with a deep knowledge of audio and have him work exclusively on triggering sound effects, streaming music tracks or some basic .NET tools, you are missing a world of opportunities.

This will be the connecting thread of my future posts, as I try to put the audio back into audio programming and to show all the wonderful things we can do for you, your sound team's workflow and your games. ;)

[This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions.]