The original Sound Blaster combined a digital-to-analog converter (DAC) with an FM synthesizer. When it came out, game programmers finally had access to decent music and decent sound effects, but people had trouble programming the thing. In this article, you'll find the information you need to start developing your own digitized sound code.
June 1, 1997
By Erik Lorenzen
In the early days of PC sound, we had the PC speaker and its lesser-known sidekick, the Programmable Interval Timer (PIT). You could program the frequency of this hardware timer, and hook the timer up to the speaker. By changing this frequency, you could produce a surprising variety of sound effects and music.
There was only one problem--gamers and programmers hated it. So the world moved onto generation two: the AdLib FM Synthesizer. You could program it to produce decent music and even certain sound effects.
There was still a problem. The sound effects stunk.
Along came the Sound Blaster. It combined a digital-to-analog converter (DAC) with an FM synthesizer. Now, game programmers had access to decent music and decent sound effects. Finally.
There was much rejoicing. Until the question arose as to how to program the thing.
In developing DiamondWare's Sound ToolKit, we discovered scarce and poorly written documentation on the Sound Blaster. In this article, we'll give you the information you need to start developing your own digitized sound code.
Theory of Digitized Sound
As with most areas of computer programming, it is good to know some general background theory. It's often the case that the "easy" pitfalls are far from obvious. With sound, there are a number of possible problems that will cause audible glitches.
When digitized sounds are produced from an analog source, signals travel down an interconnect cable, go through a preamplifier, and are eventually processed by an analog-to-digital converter. We mention this because each step will have an effect on sound quality and overall volume.
Figure 1 shows the difference between continuous (analog) and discrete (digitized) waves. Unlike analog waves, digitized sounds are quantized, meaning that the amplitude at each point can have only one of a finite number of values. In addition, the sounds are discretely sampled; that is, the value is sampled periodically. Both facts raise important considerations. We'll discuss the amplitude problem first, then the time problem.
The range of values for each sample is finite. In a typical game playback system, this is 8 bits, or -128 to +127. The softest sound that can be represented is +/-1, and any sound softer than this threshold will be lost at 0. The loudest sound is about +/-127. Any sound louder than this is clipped, producing a harsh and nasty noise.
When you're mixing two or more sounds together, this clipping limit applies to the total mix. Thus, you want to produce your sounds at low volume levels so you can mix several of them without problems. If you are stuck with preproduced sounds, there are two ways to cut their dynamic range. The first is straightforward--you divide each sample by a constant. This is often implemented as a shift-right operation. It's fast and easy, but if you divide too much you will lose the softer parts of your sounds.
The other method is to dynamically compress your sounds. Basically, this will soften the louder parts and raise the softer parts. This is what pop radio stations do. After this process, the entire sound is equally loud. The implementation of this is beyond the scope of this article, but we mention it here for completeness. Quality sound processing software has the ability to compress dynamic range.
A digitized sound is considered a discretely sampled waveform because it is not continuous in the time domain. In other words, you're sampling the waveform at T=1 and T=2, but there's no information for T=1.5. When it's played back, there's obviously sound at every moment--even in between the sample points--which is the crux of the problem.
To understand this dilemma, let's look at analog waveforms. It turns out that every possible analog waveform can be represented as a sum of pure sine waves (in general, infinitely many of them).
The plot of a sound on an oscilloscope is a time-domain plot, as shown in Figure 2a. That is, while the Y-axis represents amplitude, the X-axis represents time. It's possible to transform a wave into the frequency domain. A plot after such a transformation, shown in Figure 2b, uses the X-axis to represent frequency; each point farther to the right is a higher frequency. You could read the graph to determine, for example, how loud the 1KHz component is. The highest-frequency sine wave in this series is no higher than the highest frequency in the original wave. Figure 2c illustrates a square-wave pulse, which comprises an infinite number of frequency components, as you can see in its frequency-domain plot, shown in Figure 2d.
Now, let's get back to digital sound. It turns out that to "capture" a given frequency, you need to sample the wave at a rate at least twice as high as that frequency. For example, if you had a sound at 1KHz, you should sample it at 2KHz or higher. This is known as the Nyquist rate.
Figure 3 demonstrates the effects of sampling a wave above and below the Nyquist rate. In Figure 3a, we see the original analog wave and its spectrum in the frequency domain. If this signal is sampled above the Nyquist rate, as shown in Figure 3b, it can be correctly reconstructed. If the signal is sampled below the Nyquist rate, as shown in Figure 3c, there is not enough information to properly reconstruct the wave and aliasing occurs.
How can we capture a sine wave with only two points per cycle? We know in advance that it is a sine wave. There's only one possible sine wave that can be formulated, given two points during each cycle. Any waveform can be broken down into sine waves, so we have a way of discretely capturing (digitizing) analog sounds.
There's one last issue we must cover--playback. Because the waves are discretely sampled and are represented by few points, they're going to look quite square, as shown in Figure 4. Square-looking signals mean high-frequency content. If you play them back as is, they're going to have lots of noise--in fact, you'll hear harmonic overtones of your sample. This lends a nasty metallic sound. The solution is to filter the output of the DAC in the analog domain, eliminating all frequencies above half the sampling rate.
The Sound Blaster Family
We've seen 10 models of Sound Blaster from Creative Labs and dozens of clones from third parties. Fortunately, they all share the common architecture first presented in the Sound Blaster 1.5. By far, the Sound Blaster 16, Sound Blaster PRO2, and Sound Blaster 2.1 sold the most units, and they're still selling today.
This article will focus on the Sound Blaster 1.5, because it's the lowest common denominator. The higher models all support the Sound Blaster 1.5's modes.
Detecting the Sound Blaster
You can find the Sound Blaster by parsing the BLASTER environment variable. It's the easiest method, and it's recommended by Creative Labs and other manufacturers, especially because it can't crash the user's machine. This is how we'll do it here.
The BLASTER environment variable comprises several sections. A typical BLASTER variable might look like this:
BLASTER=A220 I5 D1 H5 P330 M250 T6
where:
* A is the base port address (here, it's 220h). Values may be 210h to 280h.
* I is the IRQ (interrupt request) level. Values may be 2, 3, 5, 7, or 10.
* D is the 8-bit DMA channel. Values may be 0, 1, or 3.
* H, if present, is the 16-bit DMA channel. Values may be 5, 6, or 7.
* P, if present, is the port for MPU401, either 300h or 330h.
* T is the type of Sound Blaster card.
You can program the Sound Blaster in 14 easy steps:
* Reset the DSP (digital signal processor) and put it into a known state.
* Set up your interrupt service routine (ISR).
* Enable the IRQ the Sound Blaster card is using.
* Program the DAC speaker.
* Program the DMA controller for a single-cycle transfer.
* Program the playback rate (time constant).
* Program the DSP output/input for single-cycle transfer.
Transfer begins immediately after step 7. At this point, you can go draw some graphics, read a disk, and get some coffee. When the buffer is done, the Sound Blaster will generate an interrupt. This will transfer control to the ISR.
Next:
* Acknowledge the DSP.
* Send the programmable interrupt controller (PIC) an end of interrupt (EOI).
* Program the Sound Blaster to play another buffer or set a flag to show that we are done playing.
After we have finished all data transfers, we need to:
* Disable the DAC speaker.
* Disable the IRQ.
* Unhook the ISR.
* Reset the Sound Blaster DSP, leaving it in a good state, ready to work with other applications.
Detect and Reset
Before we assume that the presence of the BLASTER variable means we have a Sound Blaster in the system, we can check to see if a DSP really does exist at the specified port, shown in Table 1, and attempt to send it a reset by doing the following:
* Write a 1 to the sb_RESET port.
* Wait three microseconds.
* Write a 0 to sb_RESET.
* Read sb_READ_STATUS (up to 65,535 times), waiting for the msb (most significant bit) to be set.
* If the msb never gets set, no Sound Blaster card is present.
* Read sb_READ_DATA. If the return value is AAh, a Sound Blaster card is present.
* If the return value is not AAh, repeat Steps 4 to 6 until the count runs out, or a Sound Blaster card is found.
Reading and Writing
To read data from the DSP, we must read from sb_READ_STATUS until the msb is set. Then, read from sb_READ_DATA:
unsigned sb_ReadDSP(unsigned baseport)
{
   while (!(0x80 & inp(baseport + sb_READ_STATUS)))
      ;                                  // wait for the MSB to be set

   return ((unsigned)inp(baseport + sb_READ_DATA));
}
To write to the DSP, read from sb_WRITE_STATUS until the msb is clear. Then, write the desired command (or command data) to sb_WRITE_COMMAND:
void sb_WriteDSP(unsigned baseport, unsigned value)
{
   while (0x80 & inp(baseport + sb_WRITE_STATUS))
      ;                                  // wait for the MSB to be clear

   outp(baseport + sb_WRITE_COMMAND, (int)value);
}
Handling DSP Interrupts
The DSP will generate an interrupt whenever it's done recording or playing a DMA buffer. To keep the system from crashing and keep the Sound Blaster playing, we need to set up an ISR. Each interrupt must be acknowledged by reading sb_ACKIRQ. This tells the DSP that you have received the interrupt, and that it can stop pulling the line.
Using DSP commands
A DSP revision of 1.xx accepts 20 commands, as shown in Table 2. We will only need five commands to get up and running. To simplify the following discussion, we will use the example functions sb_ReadDSP and sb_WriteDSP.
The DAC speaker controls what we hear (and what the Sound Blaster hears). With the speaker on, we can hear the digitized playback, but the Sound Blaster can't hear us (record), and vice versa.
To turn on the speaker, send sb_DACSPKRON to the DSP and wait 112 msec for the DSP to complete the operation:
sb_WriteDSP(baseport, sb_DACSPKRON);
Turning the speaker off is similar; send sb_DACSPKROFF to the DSP and wait even longer (220 msec)--no one said this hardware was fast:
sb_WriteDSP(baseport, sb_DACSPKROFF);
The sb_SETTIMECONST command sets how many samples per second the DSP will record or play back, but it doesn't take a sampling rate directly. We must convert from Hz to the Sound Blaster time constant. The time constant is always an unsigned byte:
tc = 256 - (1000000 / (num_channels * sampling_rate));

sb_WriteDSP(baseport, sb_SETTIMECONST);
sb_WriteDSP(baseport, tc);
To play a sound, send the DSP one of the output sound commands. We'll use sb_PLAY8BITMONO. Follow this command with 2 bytes representing the size of the buffer. The buffer can be between 1 and 65,536 bytes. No one would want to program the Sound Blaster to transfer 0 bytes, so 0 means 1 byte, and 65,535 means 65,536 bytes:
lowbyte  = (unsigned char)(buffsize - 1);
highbyte = (unsigned char)((buffsize - 1) >> 8);

sb_WriteDSP(baseport, sb_PLAY8BITMONO);
sb_WriteDSP(baseport, lowbyte);
sb_WriteDSP(baseport, highbyte);
Interrupt Programming
No discussion of Sound Blaster DSP programming would be complete without mention of the 8259A PIC. Integrally related is the 80x86 processor's interrupt mechanism, including the vector table. Let's go over the steps involved in an IRQ and its handling.
The Sound Blaster must go through nine steps to build an IRQ:
* The Sound Blaster DSP signals the PIC that it wants to interrupt the CPU.
* The PIC checks the interrupt mask register (IMR) to see if this is cool.
* If so, the PIC checks to see if any higher-priority IRQs are being serviced.
* If so, it waits until they send an end of interrupt (EOI) to the PIC.
* If not, the PIC sends a signal to the CPU over a dedicated line.
* If the interrupt flag is set, the CPU replies with an interrupt acknowledge (INTA).
* The PIC then sends an IRQ and the IRQ level.
* The CPU pushes the flags, CS, and IP registers--in that order--onto the current stack.
* The CPU jumps to the address specified in the vector table for this IRQ.
To respond to an IRQ (for ISRs only):
* Tell the hardware (the Sound Blaster DSP) to stop pulling the interrupt line.
* Send an EOI to the PIC.
* Set a global variable--a flag--for the main program loop (this step is optional).
* Prepare the next sound buffer.
* Return from interrupt (IRET instruction).
There's a simple INT 21h (DOS) call to hook the interrupt vector, which is even easier in C. We also need to enable our interrupt in the PIC itself. To do this, read the IMR, reset the bit corresponding to the IRQ level to which the Sound Blaster DSP is set, and write the IMR:
#define dig_IMRPORT 0x21

temp = inp(dig_IMRPORT);         // read the IMR
temp &= dig_onmask[irqlevel];    // enable our channel
outp(dig_IMRPORT, temp);         // write the IMR
Programming the DMA Controller
The 8237A high-performance programmable direct memory access (DMA) controller provides a way to transfer data between memory and the I/O bus without using the CPU. If you program it properly, it allows for easy and nearly overhead-free data transfers. Make a mistake, however, and you've as good as sent a garbage truck to dump a pile of trash in memory!
An AT-class machine has two DMA controllers and eight DMA channels. The DMA controllers have 44 I/O ports and four modes of operation.
We'll only discuss channels 0, 1, and 3 (channel 2 is used by the floppy controller). These are the 8-bit channels. Channels 4 to 7 are 16-bit channels and aren't used by 8-bit Sound Blasters.
We're interested in single-cycle DMA mode, which means one byte is transferred by the DMA controller for each data request (DREQ) it receives from the Sound Blaster DSP.
There are nine steps to programming a DMA controller. Steps 2 through 8 employ either shared registers--used for all channels--or channel specific registers:
* Disable interrupts.
* Disable our DMA channel (shared register).
* Reset the flip-flop (shared register).
* Set our channel's mode (channel-specific register).
* Program the address register (channel-specific register).
* Program the page register (channel-specific register).
* Program the count register with one less than the actual transfer count (channel-specific register).
* Enable the DMA channel (shared register).
* Enable interrupts.
The DMA controller works with physical addresses, not with segment:offset addresses, not with selectors, and so on. In real mode, it's easy to translate from segment:offset to a physical address (protected-mode selectors can be translated as well). The DMA controller works with a page number, which is literally the physical address divided by 65,536. A DMA buffer cannot cross such a physical page boundary; you must verify that your buffer meets this criterion! Within the physical page, the DMA controller increments an offset. Code to translate from segment:offset to physical page and offset is:
off  = *((unsigned _far *)&sound);         // first word: offset
seg  = *((unsigned _far *)&sound + 1);     // second word: segment
padd = ((unsigned long)seg << 4) + off;    // calc physical address
page = padd >> 16;                         // calc page number
Don't be put off by our method of obtaining the segment and offset of a pointer. This may seem complex, but we're simply taking a pointer to sound; the first word this points to is the offset, and the second is the segment.
Explanations of the workings of the DMA controller tend to be very lengthy. Fortunately, the DMA controller is very well documented. We refer you to two books that provide in-depth explanations for further reading: Developer Kit for Sound Blaster Series, 2nd Ed. (Creative Labs, 1993); and The Indispensable PC Hardware Book (Addison-Wesley, 1993), by Hans-Peter Messmer.
The Code
The code that accompanies this article compiles with Microsoft C/C++ 7, Borland C/C++ 3.1 and 4.0. It should port easily to other DOS C environments. We used the large memory model when we compiled it. We tested it with a Sound Blaster 1.5, 2.1, Pro 2, 16, and AWE32. The code is available on CompuServe in the SDFORUM in the GDMag Library.