pete c wrote:
>>> I do not do the synchronization. On receiving either a microphone
>>> packet or a speaker packet, I simply forward it to their processing
>>> library.
>
> With your WDM driver (capture) / GFX APO (render) implementation, do
> you make sure that the render packets come into the processing library
> before the capture packets?

How could I possibly do that? The endpoints start at different times. I
can't control that. It's up to the operating system.

> So you're making sure that the render side is always ahead of the
> capture side? Or does this not matter to the processing algorithm
> library, and render packets can come in whenever with respect to
> capture packets?

That's up to the library. I feed in the data; they do the buffering.

As I said, it's a tricky problem. You can't just assume that you'll get
one input buffer and one output buffer and process them together. You
have to accumulate the data in some kind of circular buffering scheme to
make sure you have enough. If the circular buffers start to get out of
whack (that is, one is running empty), then you have to decide how to
handle it. Do you insert silence? Do you repeat a section?

> In the case of your client, or anyone else who wants to design an AEC
> that is applied globally to any generic off-the-shelf Vista audio
> application (Audition, Audacity, CoolEdit, or even the generic Windows
> Sound Recorder that you mentioned in your previous posts), do you have
> no choice but to go with the design you implemented for your client
> (WDM capture / GFX APO render)? Or am I wrong here? Could we take the
> red pill with the recommended Microsoft DMO approach and apply the AEC
> system-wide?

No, there is no choice. The Microsoft DMO approach applies to one
application at a time. If you choose to buck that trend, then you must
do something else.
--
Tim Roberts, timr@xxxxxxxxx
Providenza & Boekelheide, Inc.