@toad: why would he? o_O What does windows.h have to do with anything?
@OP:
One potential problem I see is that you might not produce enough audio in your audio handler.
Example:
If SoundHandler produces 3000 samples but SDL is requesting 4000 samples, the last quarter of the output will be silence, because you only use what's remaining in the buffer.
Also... if SoundHandler produces 6000 samples but SDL is requesting 4000, you'll play all 4000 samples, but you'll be left with only 2000 in your sound, which means the NEXT time the callback is called, you'll only fill the buffer halfway.
So unless SDL is requesting the exact number of samples (or an exact multiple) that SoundHandler is generating, this code will have periodic gaps of silence.
The best option I can see in this setup is to not double-buffer the audio. That's kind of a wasted step anyway. There's no point (that I can see) in loading the audio into soundchannels[i].sound.samples only to copy it to the output buffer. Just generate it into the output buffer directly. It'll be faster, it'll use less memory, and you can generate, on demand, exactly as much audio as you need.
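To illustrate, here's a minimal sketch of generating samples straight into the callback's output buffer. The generator type and its fields are made up for the example (your SoundHandler would take their place); the only real SDL detail assumed is that the callback hands you a raw buffer and a byte length:

```c
#include <stdint.h>

/* Hypothetical generator state -- illustrative only, not the OP's code. */
typedef struct {
    double phase;  /* current position in the waveform, 0..1 */
    double step;   /* phase increment per sample (freq / sample_rate) */
} SquareGen;

/* Generate exactly `nsamples` samples of a square wave directly into
   the output buffer -- no intermediate sound buffer to copy from. */
static void fill_output(SquareGen *g, int16_t *out, int nsamples)
{
    for (int i = 0; i < nsamples; i++) {
        out[i] = (g->phase < 0.5) ? 3000 : -3000;
        g->phase += g->step;
        if (g->phase >= 1.0) g->phase -= 1.0;
    }
}
```

Inside the real SDL callback, `void cb(void *userdata, Uint8 *stream, int len)`, you'd call something like `fill_output(userdata, (int16_t *)stream, len / 2)` for 16-bit audio, so you always produce exactly as many samples as SDL asked for.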
EDIT:
A "quick fix" without changing how your soundchannels work would be to put all this audio generation in a while loop... so it continues until the soundchannel isn't producing any more audio, or until the output buffer has been completely filled. As of right now you're only giving each channel one pass.
i.e., you're doing this:
if( need_to_output_sound_data )
{
    output_as_much_audio_as_we_have();
}
When you should do something like this:
while( audio_is_needed )
{
    output_as_much_audio_as_we_have();
}
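A concrete sketch of that loop, assuming a made-up channel type that can only produce a fixed-size chunk per pass (standing in for your soundchannels): keep pulling until the output buffer is full or the channel runs dry, then pad the rest with silence.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical channel -- produces at most `chunk` samples per pass. */
typedef struct {
    int remaining;  /* samples this channel can still produce */
    int chunk;      /* max samples generated in one pass */
} Channel;

/* One pass: write up to `want` samples, capped at chunk/remaining. */
static int channel_pull(Channel *c, int16_t *dst, int want)
{
    int n = want;
    if (n > c->chunk)      n = c->chunk;
    if (n > c->remaining)  n = c->remaining;
    for (int i = 0; i < n; i++) dst[i] = 1;  /* stand-in sample data */
    c->remaining -= n;
    return n;
}

/* Loop until the buffer is full or the channel is exhausted,
   instead of giving the channel a single pass. */
static int fill_callback_buffer(Channel *c, int16_t *out, int len)
{
    int filled = 0;
    while (filled < len) {
        int got = channel_pull(c, out + filled, len - filled);
        if (got == 0) break;  /* channel ran dry */
        filled += got;
    }
    /* pad whatever is left with silence */
    memset(out + filled, 0, (size_t)(len - filled) * sizeof(int16_t));
    return filled;
}
```

With a 3000-sample chunk and a 4000-sample request, the loop makes two passes (3000 + 1000) instead of leaving the last quarter of the buffer silent.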
On a side note... software audio mixing sucks. This is one of the many reasons I dislike SDL.