[wdmaudiodev] Problem in AVStream Capture and renderer filter synchronisation

  • From: swapnil kamble <swap.kamble@xxxxxxxxx>
  • To: wdmaudiodev@xxxxxxxxxxxxx
  • Date: Thu, 12 Feb 2009 16:32:38 +0530

Hi All,

        I have two separate filters, each with its own DispatchProcess
routine; my current driver is based on the "avssamp" sample. Both filters
currently use one single data format: 44100 Hz, 16-bit, 2 channels.

Capture Process :-

         In the capture filter's DispatchProcess, I read from a file and
copy the data onto the capture pin, depending, obviously, on the capture
pin's BytesAvailable. Here is how I calculate how much data to deliver:


    // Compute the number of bytes of audio data necessary to move the
    // clock forward TimeDelta time.  Remember that TimeDelta is in units of
    // 100 ns.
    ULONG Samples = (ULONG)( (m_WaveFormat.nSamplesPerSec * TimeDelta) /
        10000000 );
    ULONG Bytes = Samples * (m_WaveFormat.wBitsPerSample / 8) *
        m_WaveFormat.nChannels;
This comes out to 5884 bytes, and ProcessPin->BytesAvailable is 8192, so
5884 bytes are copied to the pin.

Render Process :-

    In the render process, where the same file is rendered with the same
format (44100, 16, 2), ProcessPin->BytesAvailable is only 1764. I checked
the data: it is received correctly, just in smaller chunks than the capture
side's request (8192), and this slower rate may be causing the problems. I
found that the DPC timer interval is based on
VideoInfoHeader->AvgTimePerFrame, which is 333667. My conjecture is that,
since the sample is capture-only and has all of its data ready to send in
CaptureProcess, Microsoft used the same interval value for audio as for
video.
             This looked even worse when I checked: an 11-second file takes
40 seconds to render at this rate (333667) through the render filter in
GraphEdit. By trying different, lower values I could get it down to roughly
11-12 seconds, but I am sure that approach will not be accurate. Do you know
what the standard values are for the DPC timer interval in AVStream? Do I
need different timer values for capture and render?

So can I increase the amount of data received per pass in RenderProcess? Or
is there another way of getting both filters to work together correctly?


|| Hare Krishna Hare Krishna Krishna Krishna Hare Hare ||
|| Hare Rama    Hare Rama   Rama   Rama    Hare Hare ||
