My first guess is that all the capture buffers are ending up queued to the rendering device. This is relatively easy to determine: just reduce the number of rendering buffers to ensure it is less than the number of capture buffers. If this is the issue, then the solution lies in how the process dispatch manages the buffer depth for the rendering filter. The process dispatch may limit the number of queued buffers by returning STATUS_PENDING once enough buffers are queued, then explicitly requesting a process dispatch from the interrupt handler. There are other techniques for managing this situation as well.

Hary

________________________________
From: wdmaudiodev-bounce@xxxxxxxxxxxxx [mailto:wdmaudiodev-bounce@xxxxxxxxxxxxx] On Behalf Of Dale Hill
Sent: Friday, April 22, 2011 2:39 PM
To: wdmaudiodev@xxxxxxxxxxxxx
Subject: [wdmaudiodev] Full duplex audio filter

All,

I'm working on an audio filter that uses the AVStream minidriver model. The minidriver actually creates two separate filters: a filter-centric filter for rendering and a pin-centric filter for capture. If the filter is only rendering or only capturing, it works just fine. However, if the filter is running in full-duplex mode, the rendering seems to dominate the processing such that the capture only occurs after the rendering has completed.

Questions:

1. Has anyone experienced this sort of thing before? Are there some things about full-duplex rendering/capturing that I need to account for? Are AVStream minidrivers inherently single-threaded?

2. If this is a matter of slowing down the rendering side, can someone recommend some techniques for doing so?

Any additional insight would be appreciated.

TIA,
Dale
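
The buffer-depth technique Hary describes (return STATUS_PENDING from the render filter's process dispatch once enough buffers are queued, then restart processing from the interrupt path) can be sketched as follows. This is a simplified, user-mode illustration, not actual driver code: the NTSTATUS values are plain-C stand-ins for the kernel definitions, the function names and the depth limit are hypothetical, and the real driver would restart processing by queuing a DPC that calls KsFilterAttemptProcessing() on the render filter.

```c
#include <stdio.h>

/* Plain-C stand-ins for the kernel NTSTATUS definitions (ntstatus.h). */
typedef long NTSTATUS;
#define STATUS_SUCCESS ((NTSTATUS)0x00000000L)
#define STATUS_PENDING ((NTSTATUS)0x00000103L)

/* Hypothetical cap on render buffers held by the filter at once;
 * keep this below the number of capture buffers so the capture
 * side can always make progress. */
#define MAX_QUEUED_RENDER_BUFFERS 4

static int g_queuedRenderBuffers = 0; /* buffers currently owned by "hardware" */

/* Sketch of the filter-centric render process dispatch: accept
 * buffers until the depth limit is reached, then return
 * STATUS_PENDING so KS stops calling the dispatch until the
 * driver explicitly requests processing again. */
NTSTATUS RenderFilterProcess(void)
{
    if (g_queuedRenderBuffers >= MAX_QUEUED_RENDER_BUFFERS) {
        return STATUS_PENDING;      /* defer: queue is deep enough */
    }
    g_queuedRenderBuffers++;        /* take ownership of one buffer */
    return STATUS_SUCCESS;
}

/* Called when the (simulated) interrupt handler reports that the
 * hardware finished a buffer; in a real driver this is where you
 * would queue a DPC that calls KsFilterAttemptProcessing() to
 * re-enter the process dispatch. */
void RenderBufferCompleted(void)
{
    if (g_queuedRenderBuffers > 0) {
        g_queuedRenderBuffers--;
    }
}
```

With a limit of 4, the fifth consecutive dispatch returns STATUS_PENDING, and a completion from the interrupt path frees one slot so the next dispatch succeeds again.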