How to Properly Align Sound Data In An Audio Buffer (e.g. 20bit to 24bit)

I'm just getting started with WASAPI. My program creates a custom signal (ranging from -1.0f to 1.0f) from frequency, time, and amplitude, and stores these parameters in a class for later use. I then want to load that data into an audio buffer compatible with the mixer format WASAPI reports in a WAVEFORMATEXTENSIBLE, so that when WASAPI requests more buffer bytes I can simply copy my audio buffer into the WASAPI buffer.

I am fairly certain I am aligning the channels properly inside the buffer; it's the byte order within each channel's slot that worries me. I use a memcpy call to copy the converted integer values into the byte buffer, and I'm not sure this puts the bytes in the correct order for the Windows audio mixer to play them back correctly. This is the function I use to copy my signal's integer value into the buffer for the selected channels:
template<> void WASAPI::copy_value_to_selected_channels<int>(const int valueToCopy, BYTE* buffer, unsigned int bufferIdx,
    const unsigned int bitsPerChannel, const unsigned int channelCount,
    unsigned long channelBufferMask, unsigned long channelActiveMask)
{
    unsigned int channelsLeft = channelCount;
    unsigned int byteIdxWithinBlock = 0;
    const unsigned int bytesPerChannel = bitsPerChannel / 8;

    // Walk the channel masks one bit at a time; also stop when the buffer
    // mask runs out of set bits, so a short mask cannot cause an infinite loop.
    while (channelsLeft > 0 && channelBufferMask != 0)
    {
        if ((channelBufferMask & 1) == 1)      // channel present in the buffer layout
        {
            if ((channelActiveMask & 1) == 1)  // channel selected to receive the value
            {
                // Copies the low-order bytes of valueToCopy. This produces the
                // little-endian byte order PCM WAVE data expects, but only on a
                // little-endian host (which x86/x64 Windows is).
                memcpy(&buffer[bufferIdx + byteIdxWithinBlock], &valueToCopy, bytesPerChannel);
            }
            else
            {
                // Inactive channel: fill its slot with zeros (silence).
                memset(&buffer[bufferIdx + byteIdxWithinBlock], 0, bytesPerChannel);
            }
            channelsLeft--;
            byteIdxWithinBlock += bytesPerChannel;
        }
        channelBufferMask >>= 1;
        channelActiveMask >>= 1;
    }
}


This is the debug output that I get from the program. I picked numbers near the 20-bit maximum so that all the bytes are exercised. You can verify the float-to-integer conversion is working correctly with these equations.

The "signal_audiobufferfriendly_int" number shown in the debugged output is the "valueToCopy" parameter in the function above.

audio_signal_max = ( (2^activeBits)/2 ) - 1
signal_audiobufferfriendly_int = signal_float * audio_signal_max
signal_audiobufferfriendly_int = int(499179.8603629374) = 499179

I copied the value to the left and right channels only for simplicity.

sample_index=00087904, byte_index=00027072, signal_float=0.9521118402, signal_audiobufferfriendly_int=00499179
bytes_in_hexadecimal= EB.9D.07. EB.9D.07. 00.00.00. 00.00.00. 00.00.00. 00.00.00.
bytes_in_decimal = 235.157.007. 235.157.007. 000.000.000. 000.000.000. 000.000.000. 000.000.000.
bytes_in_binary = 11101011.10011101.00000111. 11101011.10011101.00000111. 00000000.00000000.00000000. 00000000.00000000.00000000. 00000000.00000000.00000000. 00000000.00000000.00000000.

Here's the binary debug string generator function, in case you are curious:
TSTRING binary_string(BYTE* buffer, const unsigned int bufferBytesToConvertToString,
                      const unsigned int bytesPerDelimiter, const TCHAR delimiter)
{
    TSTRING tStrBinaryOutput = _T("");

    for (unsigned int bIdx = 0; bIdx < bufferBytesToConvertToString; bIdx++)
    {
        // Build the binary representation of this byte, least significant
        // bit extracted first, then zero-pad on the left to 8 digits.
        TSTRING tStrBinaryByte = _T("");
        unsigned char byteChar = buffer[bIdx];
        while (byteChar != 0)
        {
            tStrBinaryByte = (byteChar % 2 == 0 ? _T("0") : _T("1")) + tStrBinaryByte;
            byteChar /= 2;
        }
        while (tStrBinaryByte.length() < 8)
        {
            tStrBinaryByte = _T("0") + tStrBinaryByte;
        }

        // Insert the group delimiter in front of every bytesPerDelimiter-th byte.
        if (bIdx != 0 && bIdx % bytesPerDelimiter == 0)
        {
            tStrBinaryOutput += delimiter;
        }
        tStrBinaryOutput += tStrBinaryByte;
        tStrBinaryOutput += _T(".");
    }

    return tStrBinaryOutput;
}

The WAVEFORMATEXTENSIBLE structure in this case is:
Format.wFormatTag=WAVE_FORMAT_EXTENSIBLE
Format.nChannels=6
Format.nSamplesPerSec=48000
Format.nAvgBytesPerSec=864000
Format.nBlockAlign=18
Format.wBitsPerSample=24
Format.cbSize=22
wValidBitsPerSample=20
dwChannelMask=0x0000003f
SubFormat=KSDATAFORMAT_SUBTYPE_PCM

My debug print functions print each byte separately, in ascending memory-address order.

Does it look like the bytes are aligned properly?