[wdmaudiodev] Re: Issue with AVStream audio renderer

  • From: Mike Horgan <mhorgan@xxxxxxxxx>
  • To: "wdmaudiodev@xxxxxxxxxxxxx" <wdmaudiodev@xxxxxxxxxxxxx>
  • Date: Tue, 12 Apr 2011 10:03:27 -0700

Tim,

Thx for the response.

The capture filter is not doing any timestamping.  It is reading from the 
device (48 kHz audio).  As the output pin on the capture filter transitions 
from "acquire" to "run", it synchronizes with the input audio from the device.  
When the pin's Process() method is called, if there is no audio buffered yet 
from the device, the method doesn't adjust the offsets on the stream pointer, 
but it does return success.  It doesn't block.  I see its Process() method get 
called a number of times at the beginning before any audio is available.  It 
seems that the Process() method gets called about once every ppq interval.  In 
my case this defaults to 2 ms, so from then on it has 96 samples of input 
audio (48 kHz x 2 ms) every time it is called.
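
Roughly, the Process() logic has the shape below (a simplified sketch, not 
the real code; ReadDeviceAudio() is a stand-in for the actual device read 
path):

#include <ntddk.h>
#include <ks.h>

/* Stand-in helper: copies up to MaxBytes of buffered 48 kHz audio into
   Buffer and returns the byte count (0 if the device has nothing yet). */
ULONG ReadDeviceAudio(PVOID Buffer, ULONG MaxBytes);

NTSTATUS CapturePinProcess(PKSPIN Pin)
{
    PKSSTREAM_POINTER Leading =
        KsPinGetLeadingEdgeStreamPointer(Pin, KSSTREAM_POINTER_STATE_LOCKED);

    while (Leading != NULL)
    {
        ULONG Copied = ReadDeviceAudio(Leading->OffsetOut.Data,
                                       Leading->OffsetOut.Remaining);
        if (Copied == 0)
        {
            /* No audio buffered yet: don't adjust the offsets, don't
               block, just return success. */
            break;
        }

        /* Advance the leading edge by what was produced; eject the frame
           once it is full so it travels downstream. */
        KsStreamPointerAdvanceOffsets(Leading, 0, Copied,
            (BOOLEAN)(Leading->OffsetOut.Remaining == Copied));

        Leading = KsPinGetLeadingEdgeStreamPointer(Pin,
                      KSSTREAM_POINTER_STATE_LOCKED);
    }

    return STATUS_SUCCESS;
}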

The render filter doesn't expose a clock, and its Process() method doesn't do 
anything with the timestamps on the incoming stream buffers.  I haven't 
actually hooked the renderer up to the hardware yet, so it just dumps the data 
by adjusting the stream pointer offsets and then returning.
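
The render side is literally just this shape right now (again, a simplified 
sketch):

#include <ntddk.h>
#include <ks.h>

NTSTATUS RenderPinProcess(PKSPIN Pin)
{
    PKSSTREAM_POINTER Leading =
        KsPinGetLeadingEdgeStreamPointer(Pin, KSSTREAM_POINTER_STATE_LOCKED);

    while (Leading != NULL)
    {
        /* Swallow the remaining bytes of this frame; Eject = TRUE returns
           the completed frame to the allocator. */
        KsStreamPointerAdvanceOffsets(Leading,
                                      Leading->OffsetIn.Remaining, 0, TRUE);

        Leading = KsPinGetLeadingEdgeStreamPointer(Pin,
                      KSSTREAM_POINTER_STATE_LOCKED);
    }

    return STATUS_SUCCESS;
}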

The logging would indicate that once steady state is achieved (after all the 
allocator frames are pre-filled), the capture filter's Process() method gets 
called and advances the leading edge by 96 samples until the frame is full, 
and then a call is made to the Process() method on the render filter.

It doesn't strike me as a situation where it is just spinning as fast as 
possible in kernel space, moving data from the capture filter to the render 
filter.  It should be gated by the data coming in, and I can move the mouse 
around and do other things on the desktop while in this mode.  I just can't 
seem to access GraphEdit's UI.

As to why the device isn't class compliant: right or wrong, the hw has never 
been class compliant.  The perception has always been that the driver does a 
lot of DSP processing, which requires a custom driver.  I suspect (though I've 
never actually done it) that one could perform that processing in the context 
of a filter driver under usbaudio.  That might be more complex than what we 
are trying to achieve.  The driver must also support ASIO.  This is currently 
done via a custom driver interface, so the AVStream aspects aren't involved.  
I don't think the same thing could be done with a filter driver under 
usbaudio, so ASIO would need to be built on the system audio stack?

Mike Horgan

From: wdmaudiodev-bounce@xxxxxxxxxxxxx 
[mailto:wdmaudiodev-bounce@xxxxxxxxxxxxx] On Behalf Of Tim Roberts
Sent: Tuesday, April 12, 2011 12:24 PM
To: wdmaudiodev@xxxxxxxxxxxxx
Subject: [wdmaudiodev] Re: Issue with AVStream audio renderer

Mike Horgan wrote:
I am developing an AVStream miniDriver (WDM) which is exposing two filters:

Audio capture filter
Audio renderer filter

The audio device itself is a USB device.

The capture filter seems to be functioning.  I can drop that filter into 
GraphEdit and render its output pin to the Default DirectSound Device and play 
the graph.  I have a signal generator driving the input of my audio device, 
and when the graph is running I can hear the tone on the speakers.  The state 
of everything seems fine, since I can use the GraphEdit buttons to 
start/stop/pause/start/stop the graph.

The render filter I worked on next.  Right now its Process() method just dumps 
any audio it gets on the floor.  It does this by 
KsStreamPointerAdvanceOffsets()'ing the leading edge every time it gets 
called.

I can drop both filters into GraphEdit, and if I render the output pin of the 
capture filter, it connects right up to the input pin of my render filter.  
However, if I start the graph, the start button stays depressed and I don't 
see the stop button in the UI go red.  From debugging (and logging) I see the 
capture filter preload all the allocator frames with audio from the device, 
and then the render filter moves from "pause" to "run".  The capture filter 
moved through the states to run at the beginning, and the render filter moved 
all the way to "pause".  The logs show that the render filter is continually 
fed stream pointers, and the addresses match the order in which the capture 
filter filled them.  The capture filter continues to get called and fills 
buffers, so it seems at one level the graph is running.

How are you handling the pacing and timestamping here?  Is your capture 
filter actually reading from your device?  (And if so, why is your device not 
USB Audio Class compliant, so you could use usbaudio.sys?)  Are you blocking 
to wait for the capture device to deliver data?

Here's an example.  If you were producing fake data in your capture filter 
with no 
timing considerations, and you were sucking up the data in your render filter 
with no timing considerations, the two filters would communicate as fast as 
possible, in kernel mode, using 100% CPU, while doing absolutely nothing.

Is your capture filter putting timestamps on the packets?  Is your render 
filter handling timestamps?
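
As a sketch of what I mean (illustrative only, written off the top of my 
head): a capture pin can stamp each frame's KSSTREAM_HEADER before ejecting 
it, with a presentation time derived from a running sample count, something 
like this:

#include <ntddk.h>
#include <ks.h>

/* SamplesDelivered is bookkeeping the pin would maintain itself; the
   48000 is your sample rate. */
VOID StampCaptureFrame(PKSSTREAM_POINTER Leading,
                       ULONGLONG SamplesDelivered,
                       ULONG SamplesInFrame)
{
    PKSSTREAM_HEADER Header = Leading->StreamHeader;

    /* KS presentation times are in 100 ns units. */
    Header->PresentationTime.Time =
        (LONGLONG)((SamplesDelivered * 10000000ULL) / 48000);
    Header->PresentationTime.Numerator = 1;
    Header->PresentationTime.Denominator = 1;
    Header->Duration =
        (LONGLONG)(((ULONGLONG)SamplesInFrame * 10000000ULL) / 48000);
    Header->OptionsFlags |= KSSTREAM_HEADER_OPTIONSF_TIMEVALID |
                            KSSTREAM_HEADER_OPTIONSF_DURATIONVALID;
}

The render side then has something to compare against a clock, instead of 
consuming data as fast as it arrives.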


--

Tim Roberts, timr@xxxxxxxxx

Providenza & Boekelheide, Inc.
