Interactive music sequencer design for games is a real mouthful -- or rather an earful -- and a pretty large topic to boot. Scott Patterson addresses some of the design issues in making a computer music language, and the issues related to providing interactive music functions for game control.
I'll begin with a summary of the motivations for making interactive music and then get into the design discussion. To make the design discussion a bit more manageable, I will assume a lot of familiarity with music and synthesizer details and put greater focus on design options and programming methods. First, I will talk about the design issues in making a computer music language. Second, I will consider the controls available with audio synthesis. Third, I will add the issues related to providing interactive music functions for game control. Throughout, I will focus on concepts that directly influence interactivity, and point out approaches that make the complexity more manageable and help to make implementation practical.
Motivations for Interactive Music
Most forms of electronic entertainment include music. Games, movies, and TV shows are seen as incomplete without music. We hear music at the introduction to news programs, on our mobile phone rings, in department stores, and at coffee shops.
Music is its own form of entertainment. We listen for styles, attitudes, technology, improvisation, composition, and skilled performances. Our memories associate music with past situations, friends, places. We associate music with love and hate.
Games include music for many reasons: To identify with a particular audience. To establish attitude, tension, and mood. To let players hear the pride and glory of success or the shame and ridicule of defeat. To march them off to the drums of war. To take them to a magical place. To set them in the past. To set them in the future. To take them to alternate worlds. To bring them back to reality. Music instantly adds definitions and associations beyond what the visuals can do.
Games are interactive. This means a player has control over the game in some way and the game asks the player to interact in some way. This control and interaction are the basis for how a game becomes immersive and entertaining. The quality of control and interaction are the basis of successful games.
Therefore, it is natural to want to mix the immersive quality of control and interaction in computer games with the immersive qualities of music. How do we control music? How can we create musical interaction? This is the motivation for this article.
The reasons for developing your own interactive music sequencer code are the same as the reasons for any code development. You may want standard features across many platforms. There may not be systems available that meet your needs. You may want an implementation that you can optimize for delivering the particular features you need. You may want control over your own code to provide improvements, enhancements, and reliability in line with internal scheduling requirements.
Making a Computer Music Language
Event Blocks
Music can be described as events or commands occurring over time. We can build our music command language with "event blocks" composed of three elements: time, event type, and event details. The time is stored relative to previous events and is called a delta-time, the event type is identified with a number, and the event details are zero or more parameters that are defined by the event type.
Figure 1 - Event Block
Delta-Time | Event Type | Event Parameters |
With this basic design we can describe any groupings of events and time intervals. Building our computer music language is now a task of choosing how to store the delta-time, what event types, and what details are needed for each event type.
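As a concrete example of delta-time storage, standard MIDI files encode delta-times as variable-length quantities: seven bits per byte, with the high bit set on every byte except the last. A minimal decoder for that encoding might look like this (the function and parameter names are my own):

```cpp
#include <cstdint>
#include <cstddef>

// Decode a MIDI-style variable-length delta-time. Each byte carries
// seven bits of the value; the high bit flags a continuation byte.
uint32_t ReadDeltaTime(const uint8_t *p, size_t *pBytesRead)
{
    uint32_t value = 0;
    size_t i = 0;
    uint8_t byte;
    do {
        byte = p[i++];
        value = (value << 7) | (byte & 0x7F);
    } while (byte & 0x80);
    *pBytesRead = i;
    return value;
}
```

A delta-time of 200, for example, is stored as the two bytes 0x81 0x48.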
MIDI (Musical Instrument Digital Interface) Music
The MIDI specification has been around for a long time. Most composition and sequencing software provides compatibility with MIDI input and output ports, and these products also provide options to save and load MIDI files.
Since the MIDI specification defines a delta-time storage format and a useful set of event types and parameters, we should start with this model as a reference. This will make it easier to convert the MIDI language to our custom music language.
Keep in mind that our goal is to create music directly on a machine with audio capabilities, so we want to take what is useful from the MIDI design, but not restrict ourselves to its features. We are also likely to get music from composers in the MIDI file format, so we will want to convert the MIDI file music language to our own custom language.
MIDI Events
The MIDI specification defines several events called the channel voice messages, and there are also meta-events defined in the MIDI file format 1.0 specification. Some of the meta-events simply contain text string data. The MIDI events and meta-events we are interested in are summarized in Tables 1, 2, and 3.
Table 1 - MIDI Channel Voice Messages
Event Type | Event Parameters |
---|---|
Note Off | Note Number, Release Velocity |
Note On | Note Number, Attack Velocity |
Pitch Wheel | Pitch Bend LSB, Pitch Bend MSB |
Control Change | Controller ID, Controller Value |
Program Change | Program Number |
Poly Key Pressure | Note Number, Pressure Value |
Channel Pressure | Pressure Value |
Table 2 - Basic MIDI Event Types
Meta Event Type | Event Parameters |
---|---|
End of Track | |
Set Tempo | Microseconds per Quarter Note |
Time Signature | Numerator, Denominator, MIDI Clocks per Metronome Click, 32nd Notes in a MIDI Quarter Note |
Key Signature | Number of Sharps or Flats, Major or Minor Indicator |
Table 3 - Text MIDI Event Types
Meta Event Type |
---|
Text Event |
Sequence/Track Name |
Instrument Name |
Lyric |
Marker |
Cue Point |
Of the MIDI channel voice messages, the control change event is unique in that it specifies an additional event identifier, the controller ID. Examples of common controller ID numbers are 7, which refers to volume, and 10, which refers to pan position. Many controller ID numbers are not commonly used, so we can have musicians insert control change messages with those controller IDs to mark the music in special ways.
The basic meta-events listed give us ways to get time organization details and tempo changes. The text meta-events give us ways to get custom data embedded in strings. Composers can put special track settings in the track name strings or special kinds of playback commands in lyric strings. These methods will save us from having to write custom tools for composers when deadlines are looming.
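As a sketch of the string-embedded-commands idea, the MIDI converter could map track-name or lyric strings to command numbers in our custom language. The command names here ("loop_start", "loop_end") are hypothetical examples, not part of any specification:

```cpp
#include <cstring>

// Hypothetical command IDs for strings a composer might embed in
// text meta-events (names are illustrative only).
enum EmbeddedCommand {
    EMBEDDED_NONE,
    EMBEDDED_LOOP_START,
    EMBEDDED_LOOP_END
};

// Map a text meta-event string to a command ID, so the converter can
// emit a custom event block instead of plain text.
EmbeddedCommand ParseEmbeddedCommand(const char *pText)
{
    if (strcmp(pText, "loop_start") == 0) return EMBEDDED_LOOP_START;
    if (strcmp(pText, "loop_end") == 0) return EMBEDDED_LOOP_END;
    return EMBEDDED_NONE; // ordinary text, no special meaning
}
```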
MIDI Channel and Tracks
The "channel" in MIDI channel voice messages refers to the fact that these messages are sent with a channel number from 1 to 16 (0 to 15). The channel number is used by MIDI devices to route commands. We are not routing commands to MIDI gear, so we won't talk about channels anymore. Type 1 MIDI files have any number of tracks where each track contains a series of event blocks. It is the track concept that is useful for our data organization.
Sequences, Tracks, Events, Instruments, and Voices
This is where we leave the world of MIDI and talk about our custom music sequencer language. A quick summary of the terminology used in the rest of this paper is in order: A Sequence is a collection of Tracks that run simultaneously. Each Track is a sequence of Events that control the current Instrument. Certain types of Track Events will turn on and off Voices of the current Instrument. Some types of Track Events will modify all active Track Voices. You can think of an Instrument as the default settings given to a Voice that is turned on with a NoteOn Event. I will present a list of class definitions that display this structure in the implementation section.
Event Type Possibilities
Table 4 lists many event type ideas so you can get a taste of the possibilities. There are a lot of details that could be discussed as to how these event types might be implemented, but I will just mention the options here and move on to the other parts of this discussion.
Table 4 - Event Type Ideas
Event Type | Notes |
---|---|
Basic | These are like traditional MIDI events. |
Note Off | Note Number, Release Velocity (Optional) |
Note On | Note Number, Attack Velocity |
SetTrackVolume | Value |
SetTrackPitchBend | Value |
SetTrackPan | Value |
SetTrackEffect | Any Effect Type, Value |
SetInstrument | Program Change |
Track End | Loop to Start if the Sequence is Flagged |
Test Values | We can have an array of test values at the sequence level or track level (the value could have a name and have the string mapped to a lookup number). |
Set Test Value | Sequence or Track Test Value, Test Value ID, Value |
Inc Test Value | Sequence or Track Test Value |
Dec Test Value | Sequence or Track Test Value |
DecZ Test Value | Sequence or Track Test Value, Do Not Decrement Below Zero |
Advanced | |
SetSequenceVolume | Value |
SetSequenceEffect | Any Effect Type, Value |
SetListenerWorldPosition | Sets 3D Pan and Volume |
SetTrackWorldPosition | Sets 3D Pan and Volume |
SetInstrument If | Sequence or Track Test Value |
Position Marker | Marker Name (String Mapped to Number) |
Jump to Position Marker | Marker Name (String Mapped to Number) |
Jump to Position Marker If | Sequence or Track Test Value, Marker Name (String Mapped to Number) |
State Change | Fades, Ducking, Mute Groups |
Change Sequence State | Set immediate state or set a target state and interpolation time |
Change Sequence State If | Sequence or Track Test Value |
Change Track State | Set immediate state or set a target state and interpolation time |
Change Track State If | Sequence or Track Test Value |
Change Voice State | Set immediate state or set a target state and interpolation time |
Change Voice State If | Sequence or Track Test Value |
Arrangement | |
Jump to Track | Jump to New Track Data |
Jump to Track If | Sequence or Track Test Value |
Gosub to Track | Jump to New Track Data, Returns When Done |
Gosub to Track If | Sequence or Track Test Value |
Callback | |
Callback | Calls Game Code, Could Change Test Values |
Callback If | Sequence or Track Test Value |
Sequencer Data Structures
The main data structures that we could use for our music sequencer are listed in Code Example 1.
Code Example 1 - Music Sequencer Data Structures
#include <list>
using std::list;

// forward declarations for the classes defined below
class Sequence_t;
class Track_t;
class Voice_t;

typedef list< Sequence_t * > SequencePtrList_t;
typedef list< Track_t * > TrackPtrList_t;
typedef list< Voice_t * > VoicePtrList_t;
class MusicSequencer_t {
MusicSequencerState_t State; // sequencer-wide state, defined elsewhere
SequencePtrList_t ActiveSequencePtrList;
SequencePtrList_t FreeSequencePtrList;
TrackPtrList_t ActiveTrackPtrList;
TrackPtrList_t FreeTrackPtrList;
VoicePtrList_t ActiveVoicePtrList;
VoicePtrList_t FreeVoicePtrList;
};
class SequenceState_t {
Tempo_t Tempo;
Volume_t Volume;
};
class Sequence_t {
SequenceState_t State;
SequenceState_t BeginState; // Interactive feature
SequenceState_t EndState; // Interactive feature
SequenceInterpolator_t Interpolator; // Interactive feature
TimeUnit_t TimeElapsed;
TimeUnit_t TimeStep;
CallbackFunc_t *pCallback; // Interactive feature
TrackPtrList_t TrackPtrList;
};
class TrackState_t {
Volume_t Volume;
PitchBend_t PitchBend;
Pan_t Pan;
Effect_t Effect;
};
class Track_t {
TrackState_t State;
TrackState_t BeginState; // Interactive feature
TrackState_t EndState; // Interactive feature
TrackInterpolator_t Interpolator; // Interactive feature
Sequence_t *pOwner;
char *pEvent;
Instrument_t *pInstrument;
VoicePtrList_t VoicePtrList;
};
class VoiceState_t {
SynthVolume_t Volume;
SynthPitch_t Pitch;
SynthPan_t Pan;
SynthEffect_t Effect;
};
class Voice_t {
VoiceState_t CurrentState;
VoiceState_t BeginState; // Interactive feature
VoiceState_t EndState; // Interactive feature
VoiceInterpolator_t Interpolator; // Interactive feature
Track_t *pOwner;
char nKey;
};
Event Data Structures
To implement the event type commands we can have the event type command numbers correspond to an array lookup that holds the relevant function pointer and the byte length of the event type and parameters. Code Example 2 shows this code.
The function pointers give us a quick way to get to the code associated with each event type. The byte lengths give us a quick way to step to the next event block.
Code Example 2: Event Type Data Structures
// Example Note Off Event Block
typedef struct {
char nEventType;
char nKey;
// no release velocity
}NoteOff_EventBlock_t;
void NoteOff_Function( Track_t *pTrack )
{
// the track's pEvent member is pointing at our event block
NoteOff_EventBlock_t *pNoteOffEB = (NoteOff_EventBlock_t *)pTrack->pEvent;
// walk through this track's voices and turn off
// any that have pVoice->nKey == pNoteOffEB->nKey
}
// Example Note On Event Block
typedef struct {
char nEventType;
char nKey;
char nVelocity;
}NoteOn_EventBlock_t;
void NoteOn_Function( Track_t *pTrack )
{
// the track's pEvent member is pointing at our event block
NoteOn_EventBlock_t *pNoteOnEB = (NoteOn_EventBlock_t *)pTrack->pEvent;
// try to get a voice from the free list or
// try to get a voice from the active list if possible
// if we have a voice, turn it on with the pNoteOnEB->nKey
// and pNoteOnEB->nVelocity and other state information
}
enum enumEventType
{
EVENT_TYPE_NOTEOFF,
EVENT_TYPE_NOTEON,
// ...
EVENT_TYPE_COUNT
};
typedef void (*EventFuncPtr_t)(Track_t *);
typedef struct {
EventFuncPtr_t pFunc; // pointer to command function
int nLength; // byte length of command
}EventTypes_t;
static EventTypes_t aET[EVENT_TYPE_COUNT] = {
{ NoteOff_Function, sizeof(NoteOff_EventBlock_t) },
{ NoteOn_Function, sizeof(NoteOn_EventBlock_t) },
// ...
};
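The dispatch loop that consumes event blocks using such a table can be sketched in a simplified, self-contained form. The two toy event types, their byte layouts, and the handlers below are illustrative stand-ins for the structures above:

```cpp
#include <cstdint>

typedef void (*Handler)(const uint8_t *pParams);

static int g_noteOnCount = 0; // toy handler state, for illustration

static void OnNoteOn(const uint8_t *) { ++g_noteOnCount; }
static void OnNoteOff(const uint8_t *) { }

struct EventTypeEntry {
    Handler pFunc; // handler for this event type
    int nLength;   // total byte length: type byte + parameters
};

static const EventTypeEntry kTable[2] = {
    { OnNoteOff, 2 }, // type byte + key
    { OnNoteOn,  3 }, // type byte + key + velocity
};

// Walk a buffer of packed event blocks, dispatching each through the
// table and stepping by the stored byte length; returns events handled.
int DispatchAll(const uint8_t *p, const uint8_t *pEnd)
{
    int count = 0;
    while (p < pEnd) {
        const EventTypeEntry &et = kTable[p[0]];
        et.pFunc(p + 1);
        p += et.nLength;
        ++count;
    }
    return count;
}
```

The real loop would also consume a delta-time per block and stop when the next event's time has not yet arrived.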
Timing
Different computer systems will have different ways of providing timing callbacks or threads that wake up at specific intervals. I will simply assume that we can have a function called at a specific interval that I will call the "audio frame callback". We can think of the time between the callbacks as the audio frame. During each callback we need to update our notion of how much time has passed and we need to send out all of the commands that have "timed out".
In the sequence data structure listed in Code Example 1 there is the TimeStep quantity that should be set based on the tempo, the delta-time parts per quarter note, and callback timing. We add the TimeStep to the TimeElapsed on each audio frame to keep track of the time. Since these time parameters are in the sequence structure, we can only change the tempo for the whole sequence. If we wanted to change tempo for each track individually we could put these parameters and the tempo setting in the track structure.
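The TimeStep computation described above can be sketched as follows; the units are an assumption (tempo in beats per minute, delta-time resolution in ticks per quarter note, callback interval in seconds):

```cpp
// How many sequencer ticks elapse per audio frame callback, given the
// tempo, the delta-time parts per quarter note, and the callback timing.
double ComputeTimeStep(double tempoBPM, double ticksPerQuarter,
                       double frameSeconds)
{
    double quartersPerSecond = tempoBPM / 60.0;
    return ticksPerQuarter * quartersPerSecond * frameSeconds;
}
```

At 120 BPM, 480 ticks per quarter note, and a 10 ms callback, this yields 9.6 ticks per audio frame, so TimeElapsed advances fractionally and events fire when their accumulated delta-times are covered.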
Audio Synthesis Control
Control
Certain interactive music effects are possible if sequencer commands can be tied to audio synthesis parameters. The SetTrackEffect event type could represent any effect parameter that the audio synthesis system provides. Some audio synthesis parameters are simply set before a note is played and may not respond to updates until the next note is played. Some audio synthesis parameters can be altered during notes.
The behavior of the audio synthesis system that we are using will determine what kind of control we can have, and music will have to be written with the control issues in mind.
By defining immediate or target state changes for sequences, tracks, or voices, we can manage what could be many control changes with simple ChangeState commands.
Connecting Synth to Sequencer
The key to connecting our audio synthesis to our music sequencer is the SetInstrument event type. This command performs a lookup into a table of instrument definitions and sets the pInstrument field of our Track data structure. When a NoteOn command occurs, the parameters from the pInstrument are transferred to the voice to be started.
We will need to manage the available audio synthesis voices to provide good voice stealing logic. When we have used all of the available voices, we may want to end an active voice because a new note-on request is evaluated to have a higher priority. The priority system could use several kinds of weights. These weights could be based on the time since note-on, time until note end, envelope stage, volume levels, count of this type of voice active, instrument priority number, track priority number, and sequence priority number.
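A voice-stealing priority score using a few of the weights above might be sketched like this; the particular fields and weighting constants are illustrative choices, not prescribed by any one system:

```cpp
#include <vector>
#include <cstddef>

struct VoiceInfo {
    double Volume;             // current synth volume, 0..1
    double SecondsActive;      // how long the note has sounded
    int    InstrumentPriority; // higher means more important
};

// Score a voice for stealing: lower scores are stolen first.
// Weights are illustrative: loud, high-priority, recent voices survive.
double VoiceStealScore(const VoiceInfo &v)
{
    return v.Volume * 10.0 + v.InstrumentPriority * 5.0 - v.SecondsActive;
}

// Pick the index of the voice to steal (lowest score), or -1 if none.
int PickVoiceToSteal(const std::vector<VoiceInfo> &voices)
{
    int best = -1;
    double bestScore = 0.0;
    for (size_t i = 0; i < voices.size(); ++i) {
        double s = VoiceStealScore(voices[i]);
        if (best < 0 || s < bestScore) { best = (int)i; bestScore = s; }
    }
    return best;
}
```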
Interactive Music
Interpolation
Game programmers are very familiar with interpolation. I mention it here because our ability to interpolate between sequence states, track states, and voice states provides us with interactive control of music.
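As a minimal sketch, linear interpolation between a Track's begin and end states (reduced here to volume and pan) looks like this:

```cpp
// A cut-down track state for illustration; the real TrackState_t in
// Code Example 1 also carries pitch bend and effect settings.
struct TrackStateLite {
    double Volume;
    double Pan;
};

// Blend from begin to end state; t runs from 0.0 to 1.0 over the
// transition's time interval, advanced each audio frame.
TrackStateLite InterpolateTrackState(const TrackStateLite &begin,
                                     const TrackStateLite &end,
                                     double t)
{
    TrackStateLite out;
    out.Volume = begin.Volume + (end.Volume - begin.Volume) * t;
    out.Pan    = begin.Pan    + (end.Pan    - begin.Pan)    * t;
    return out;
}
```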
Transitions
Transitions can be defined as one or more changes occurring over a time interval. A transition might mean an interpolation of some kind of state data done over a specific time interval. Or a transition might mean a new track section, musical cadence, key change or other compositional technique. A transition could be a combination of these things.
Transitions may be triggered by a combination of game logic and music language logic. It is useful to provide API functions for game code to directly set target states. These kinds of implicit and explicit controls over transitions are another key element of the interactive control of music.
Meanings
Some of the meanings that we might want to attach to the control of music are categorized in Table 5.
Table 5 - Music Meanings
Category/Type | Description |
---|---|
Self | What State is the Player In? |
Health | Confidence in the Music |
Power | Strength in the Music |
Skill | Sharpness and agility in the Music |
Mood | Heaviness or Lightness in the Music |
Familiar | Music That is Familiar in the Game |
Unknown | Music That is Unknown and of a Foreign Style |
Others | What State are the NPC's In? |
Friends | Pleasing Attitude in the Music |
Enemies | Harsh Attitude in the Music |
Love | Sweetness in the Music |
Hate | Violence in the Music |
Familiar | Music that is Familiar in the Game |
Unknown | Music That is Unknown and of a Foreign Style |
Location | What is the Current Location Like? |
Secrets | Occasional Secret Melodies or Instruments Play |
Hints | Sudden Burst When Looking the Correct Way |
Safety | Even, Predictable Music |
Danger | Irregular, Ominous Music |
Magic | Chimes, Echoes, and Sprinkles in the Music |
Familiar | Music that is Familiar in the Game |
Unknown | Music That is Unknown and of a Foreign Style |
Situation | What Kind of Situation Are We In? |
Safety | Even, Predictable Music |
Danger | Irregular, Ominous Music |
Magic | Chimes, Echoes, and Sprinkles in the Music |
Preparation for Battle | Drums of War, Mechanized Beats |
Tension | Sharp Tones and Dynamic Changes |
Adrenaline | Tempo is Up, Mechanized Beats |
Time is Running Out | Tempo is Up, Chaotic Passages |
Reward | Triumphant Music |
Failure | Whimpering Music |
Familiar | Music that is Familiar in the Game |
Unknown | Music That is Unknown and of a Foreign Style |
Transition Types
Some of the many transition types are mentioned in Table 6.
Table 6 - Music Transition Types
Type | Description |
---|---|
Quick | Music that Stomps on the Previous Music |
Slow | Subtle Alterations and State Changes |
Fading | Fading Whole Sequences or Just Some Tracks |
Intensity | Instrument Dynamics |
Effects | Any Synthesis Parameter Changing |
Key | Compositional Changes |
Chord | Compositional Changes |
Harmony | Compositional Changes |
Melody | Compositional Changes |
Accompaniment | Compositional Changes |
Percussion | Compositional Changes |
Transposing | Compositional Changes |
Layering | Compositional Changes |
Fills | Enter from any beat position, push music decoration events into a queue. |
Rhythmic | Lagging, Ahead, Modified Swing |
Randomness | Controlled Randomness of Varying Parameters |
Instrument | Switching Instrument |
Timing | Switching Tempo |
Design Influences
There are four important factors in the discussion of interactive music: game design, game programming, music design, and music programming. Music programming is influenced by the other factors in the following ways:
Game design will influence music design.
Music design will influence music programming.
Game design will influence game programming.
Game programming will influence music programming.
To point out these influences I will present some hypothetical examples.
Design Example #1
Game Design: Through player skill, a character can achieve a powered-up state. This state can last a very long time, and a sound effect might get monotonous. We want to hear the character's energy in the music.
Music Design: Transition the melody and percussion track instruments to add DSP effects that add color and depth to the instruments.
Programming Design: Two sequence states are created and the game can choose when to set each target state.
Design Example #2
Game Design: When our player goes near a location in a level we want to hint that there is danger using the music.
Music Design: Fade down the main melody track and fade up the danger melody track.
Programming Design: Based on distance from the location, set the track target states for volume.
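A minimal sketch of that distance-to-volume mapping, assuming a hypothetical 10-unit audible radius around the danger location:

```cpp
// Target volume for the danger melody track: full volume at the
// location, fading to silence at the (assumed) 10-unit radius.
double DangerTrackTargetVolume(double distance)
{
    const double kRadius = 10.0; // illustrative radius
    if (distance >= kRadius) return 0.0;
    return 1.0 - distance / kRadius;
}
```

The game would feed this value into a Change Track State command as the target volume, letting the sequencer's interpolator smooth the fade.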
Design Example #3
Game Design: Let's say we have a game design where we change from day to night. Let's say that the player's role is more offensive in the day and more defensive at night. We want energetic music during the day and tense, scary music at night.
Music Design: To keep it simple, we will describe three tracks of the music: melody, accompaniment, and percussion. We will define "energy", "mellow", and "creepy" versions of each of the three tracks. Again keeping it simple, we will define a "hard" and "soft" version of each of the instruments for each track.
Game Time | 12 Noon | 3 PM | 6 PM | 9 PM | 12 Midnight |
---|---|---|---|---|---|
Control Value | 0.0 | 1.0 | 2.0 | 3.0 | 4.0 |
Melody Track | Energy | Energy | Mellow | Mellow | Creepy |
Melody Instrument | Hard | Soft | Soft | Soft | Hard |
Accompaniment Track | Energy | Mellow | Mellow | Creepy | Creepy |
Accompaniment Instrument | Hard | Hard | Soft | Soft | Soft |
Percussion Track | Energy | Energy | Energy | Mellow | Creepy |
Percussion Instrument | Hard | Soft | Soft | Soft | Hard |
Programming Design: We generate our control value based on the game time. This control value is used to interpolate the instrument states and initiate transitions for the track states.
So when our game time reaches 3 PM, the melody instrument transitions to the "soft" state. When our game time reaches 6 PM, the "energy" melody track fades down and the "mellow" track fades up.
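The control-value-to-fade mapping for the melody tracks in the table can be sketched as follows. The linear ramp between control values 1.0 (3 PM) and 2.0 (6 PM) is my assumption, and the mellow-to-creepy crossfade later in the day would be handled analogously:

```cpp
#include <algorithm>

// Derive fade volumes for the "energy" and "mellow" melody tracks from
// the day/night control value (0.0 at noon through 4.0 at midnight).
void MelodyFades(double control, double *pEnergyVol, double *pMellowVol)
{
    // 0.0 before 3 PM, 1.0 after 6 PM, linear in between
    double t = std::min(1.0, std::max(0.0, control - 1.0));
    *pEnergyVol = 1.0 - t;
    *pMellowVol = t;
}
```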
Conclusions
This article covered many motivations and implementation ideas for interactive music, and presented some core programming concepts that can provide the flexibility and control interactive music requires, along with the design influences, meanings, and transition types involved. The next step is to implement your own interactive music sequencer and make interactive music part of your game's design.