Friday, 27 January 2012

Handling multi-channel audio in NAudio

One of the recurring questions on the NAudio support forums is to do with how you can route different sounds to different outputs in a multi-channel soundcard setup. For example, can you play one MP3 file out of one speaker and a different one out of the other? If you have four outputs, can you route a different signal to each one?

The first issue to deal with is that just because your soundcard has multiple outputs, it doesn’t necessarily mean you can open WaveOut with more than two channels. That depends on how the writers of the device driver have chosen to present the card’s capabilities to Windows. For example, a four-output card may appear as though it were two separate stereo soundcards. The good news is that if you have an ASIO driver, you ought to be able to open it and address all the outputs.
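As a rough sketch of the ASIO route (the provider passed to Init is an assumption here – it could be any IWaveProvider, such as the multi-channel one built later in this post):

```csharp
using NAudio.Wave;

// enumerate the installed ASIO drivers and open the first one
var driverNames = AsioOut.GetDriverNames();
var asioOut = new AsioOut(driverNames[0]);
asioOut.Init(multiChannelWaveProvider); // e.g. a four-channel IWaveProvider
asioOut.Play();
// ... when finished: asioOut.Stop(); asioOut.Dispose();
```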

Having got that out of the way, in NAudio it is possible for audio streams to have any number of channels. The WaveFormat class has a channel count, and though this is normally set at 1 or 2, there is no reason why you can’t set it to 8.

What would be useful is an implementation of IWaveProvider that allows us to connect different inputs to particular outputs, kind of like a virtual patch bay. For example, if you had two Mp3FileReaders, and wanted to connect the left channel of the first to output 1 and the left channel of the second to output 2, this class would let you do that.

So I’ve created something I’ve called the MultiplexingWaveProvider (if you can think of a better name, let me know in the comments). In the constructor, you simply provide all the inputs you wish to use, and specify the number of output channels you would like. By default the input channels are mapped directly onto the outputs (wrapping round if there are fewer input channels than outputs – so a single mono input would automatically be copied to every output), but these mappings can be changed.
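For instance, here is a minimal sketch of the default wrap-around mapping (the mono WAV filename is hypothetical):

```csharp
using NAudio.Wave;

// a one-channel input feeding a two-channel output
var mono = new WaveFileReader("mono.wav");
var stereo = new MultiplexingWaveProvider(new IWaveProvider[] { mono }, 2);
// default mapping: input 0 -> output 0, then wraps round so input 0 -> output 1 too,
// giving the same mono signal on both output channels
```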

Creating and Configuring MultiplexingWaveProvider

In the following example, we create a new four-channel WaveProvider, so the first two outputs will play left and right channel from input1 and the second two outputs will have the left and right channels from input2. Note that input1 and input2 must be at the same sample rate and bit depth.

var input1 = new Mp3FileReader("test1.mp3");
var input2 = new Mp3FileReader("test2.mp3");
var waveProvider = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 4);

Then you can configure the outputs, which is done using ConnectInputToOutput:

waveProvider.ConnectInputToOutput(2, 0);
waveProvider.ConnectInputToOutput(3, 1);
waveProvider.ConnectInputToOutput(1, 2);
waveProvider.ConnectInputToOutput(1, 3);
The numbers used are zero-based, so connecting inputs 2 and 3 to outputs 0 and 1 means that test2.mp3 will now play out of the first two outputs instead of the second two. In this example I have connected input 1 (i.e. the right channel of test1.mp3) to both outputs 2 and 3. So you can copy the same input to multiple output channels, and not all input channels need a mapping.

Implementation of MultiplexingWaveProvider

The bulk of the work to achieve this is performed in the Read method of MultiplexingWaveProvider. The first task is to work out how many “sample frames” are required. A sample frame is a single sample in a mono signal, a left and right pair in a stereo signal, and so on. Once we have worked out how many sample frames we need, we then attempt to read that many sample frames from every one of the input WaveProviders (irrespective of whether they are connected to an output – we want to keep them in sync). Then, using our mappings dictionary, we work out whether any of the channels from this input WaveProvider are needed in the output. Since samples are interleaved in both input and output WaveProviders, we can’t do just one Array.Copy – we must copy each sample across individually and put it into the right place.

A well-behaved Read method will always return count unless it has reached the end of its available data (after which it should return 0 from every subsequent call). The way we do this is to work out the maximum number of sample frames read from any of the inputs, and use that to report back the count that was read. This means that we will keep going until we have reached the end of all of our inputs. Because buffers might be reused, it is important that we zero out the output buffer wherever there was no available input data.

Here’s the implementation as it currently stands:

public int Read(byte[] buffer, int offset, int count)
{
    int sampleFramesRequested = count / (bytesPerSample * outputChannelCount);
    int inputOffset = 0;
    int sampleFramesRead = 0;
    // now we must read from all inputs, even if we don't need their data, so they stay in sync
    foreach (var input in inputs)
    {
        int bytesRequired = sampleFramesRequested * bytesPerSample * input.WaveFormat.Channels;
        byte[] inputBuffer = new byte[bytesRequired];
        int bytesRead = input.Read(inputBuffer, 0, bytesRequired);
        sampleFramesRead = Math.Max(sampleFramesRead, bytesRead / (bytesPerSample * input.WaveFormat.Channels));

        for (int n = 0; n < input.WaveFormat.Channels; n++)
        {
            int inputIndex = inputOffset + n;
            for (int outputIndex = 0; outputIndex < outputChannelCount; outputIndex++)
            {
                if (mappings[outputIndex] == inputIndex)
                {
                    int inputBufferOffset = n * bytesPerSample;
                    int outputBufferOffset = offset + outputIndex * bytesPerSample;
                    int sample = 0;
                    while (sample < sampleFramesRequested && inputBufferOffset < bytesRead)
                    {
                        Array.Copy(inputBuffer, inputBufferOffset, buffer, outputBufferOffset, bytesPerSample);
                        outputBufferOffset += bytesPerSample * outputChannelCount;
                        inputBufferOffset += bytesPerSample * input.WaveFormat.Channels;
                        sample++;
                    }
                    // clear the end of the output buffer if this input ran out of data
                    while (sample < sampleFramesRequested)
                    {
                        Array.Clear(buffer, outputBufferOffset, bytesPerSample);
                        outputBufferOffset += bytesPerSample * outputChannelCount;
                        sample++;
                    }
                }
            }
        }
        inputOffset += input.WaveFormat.Channels;
    }
    return sampleFramesRead * bytesPerSample * outputChannelCount;
}


Looking at the code above, you will probably notice that this could be made more efficient if we knew in advance whether we were dealing with 16, 24 or 32-bit input audio (it currently makes lots of calls to Array.Copy that move just 2, 3 or 4 bytes). I might make three versions of this class at some point to ensure it performs a bit better. Another weakness in the current design is that it creates new buffers on every call to Read, which is something I generally avoid since it gives work to the garbage collector (update – this is fixed in the latest code).
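As a hypothetical sketch of what a 16-bit fast path might look like: treating the buffers as short arrays, each sample becomes a single assignment instead of a two-byte Array.Copy. The inputShorts and outputShorts arrays here are assumed to have been converted from the byte buffers (e.g. via Buffer.BlockCopy):

```csharp
// hypothetical 16-bit specialisation of the inner copy loop
int inputSampleIndex = n;                         // channel n within the first input frame
int outputSampleIndex = offset / 2 + outputIndex; // channel position in the output frame
for (int sampleFrame = 0; sampleFrame < sampleFramesRead; sampleFrame++)
{
    outputShorts[outputSampleIndex] = inputShorts[inputSampleIndex];
    inputSampleIndex += input.WaveFormat.Channels; // advance one interleaved input frame
    outputSampleIndex += outputChannelCount;       // advance one interleaved output frame
}
```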

I have written a full suite of unit tests for this class, so if it does need some performance tuning, there is a safety net to ensure nothing gets broken along the way.


NAudio 1.5 also has an ISampleProvider interface, which is a much more programmer-friendly way of dealing with 32-bit floating-point audio. I have also made a MultiplexingSampleProvider for the next version of NAudio. One interesting possibility would then be to build on that to create a kind of bus matrix, where every input can be mixed by different amounts into each of the output channels.
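A sketch of how that might look, assuming MultiplexingSampleProvider mirrors the wave provider’s API (SampleChannel is used here to expose the file as 32-bit floats):

```csharp
using NAudio.Wave;

var reader = new Mp3FileReader("test1.mp3");
var samples = new SampleChannel(reader); // ISampleProvider of 32-bit floats
var multiplexer = new MultiplexingSampleProvider(new ISampleProvider[] { samples }, 2);
multiplexer.ConnectInputToOutput(0, 1); // patch the left input to the right output
```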


This class actually has uses beyond supporting soundcards with more than 2 outputs. You could use it to swap left and right channels in a stereo signal, or provide a simple switch that selects between several mono inputs.
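The channel-swap case, for example, is just a two-in, two-out patch (filename hypothetical):

```csharp
using NAudio.Wave;

// swap the left and right channels of a stereo file
var reader = new Mp3FileReader("test.mp3");
var swapped = new MultiplexingWaveProvider(new IWaveProvider[] { reader }, 2);
swapped.ConnectInputToOutput(0, 1); // left in  -> right out
swapped.ConnectInputToOutput(1, 0); // right in -> left out
```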

You also don’t need to output to the soundcard. The WaveFileWriter will happily write multi-channel WAV files. However, there are no guarantees about how other programs will deal with WAVs that have more than two channels in them.
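Writing the four-channel signal from earlier to disk might look something like this (filenames are hypothetical; note that this reads until every input is exhausted):

```csharp
using NAudio.Wave;

var input1 = new Mp3FileReader("test1.mp3");
var input2 = new Mp3FileReader("test2.mp3");
var fourChannel = new MultiplexingWaveProvider(new IWaveProvider[] { input1, input2 }, 4);

using (var writer = new WaveFileWriter("fourchannel.wav", fourChannel.WaveFormat))
{
    var buffer = new byte[fourChannel.WaveFormat.AverageBytesPerSecond]; // one second at a time
    int bytesRead;
    while ((bytesRead = fourChannel.Read(buffer, 0, buffer.Length)) > 0)
    {
        writer.Write(buffer, 0, bytesRead);
    }
}
```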


I’ve already checked in the initial version to the latest codebase, so expect this to be part of NAudio 1.6. The only caution is that I might change the class name if I come up with a better idea.
