Suppose for the sake of argument the IAudioClient::Initialize format was
stereo, 16-bit integer, 44.1 kHz sampling rate.
You will need to feed the IAudioRenderClient an average of 44100 "frames" of
data every second. These will be in packets of approximately 10 milliseconds -
that is, approximately 441 frames.
Due to alignment considerations, some hardware will insist on slightly more -
perhaps packets of 448 frames, or roughly 10.16 milliseconds.
Each frame consists of two "samples" which are 16-bit integers - one for the L
channel and one for the R channel.
So, yes. If GetBuffer() hands you 441 frames, you have a buffer size of 441 *
2 * 16 / 8 = 1764 bytes - or more generically, GetBufferSize() *
WAVEFORMATEX.nBlockAlign bytes, where WAVEFORMATEX.nBlockAlign =
WAVEFORMATEX.nChannels * WAVEFORMATEX.wBitsPerSample / BITS_PER_BYTE.
-----Original Message-----
From: wdmaudiodev-bounce@xxxxxxxxxxxxx
[mailto:wdmaudiodev-bounce@xxxxxxxxxxxxx] On Behalf Of Jerry Evans
Sent: Monday, September 14, 2015 12:09 PM
To: wdmaudiodev@xxxxxxxxxxxxx
Subject: [wdmaudiodev] Re: WASAPI exclusive mode audio render issue (2)?
Ach. Typo.
If GetBuffer() claims to have filled 441 16-bit frames
- do we mean that the buffer contains 441 * n-channels samples of data,
i.e. we've a total buffer size of 441 * 2 * 16? with data interleaved as
[L0][R0][L1][R1][...][Ln][Rn]