I wrote a virtual audio device driver based on the MSVAD SIMPLE sample. It basically does what the original sample does (saving 'play' output to a .wav file), but now I would like to change that behavior to send the output to a user-mode process instead - and not the same process that is using this virtual audio device via the WaveOut/WaveIn interface.

So the first question that comes to mind, before I waste days finding out that an audio device driver cannot communicate with more than one process at a time, is: can an audio device driver based on MSVAD be "opened" by both a regular multimedia application (using either the WaveIn/WaveOut or the DirectSound API) *and* an application that opens it via CreateFile() and communicates with it via DeviceIoControl()?

In other words, can I take the MSVAD SIMPLE sample, add something like this to its DriverEntry():

DriverObject->MajorFunction[IRP_MJ_CREATE] = CreateHandler;
DriverObject->MajorFunction[IRP_MJ_CLOSE] = CloseHandler;
DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = IoctlHandler;

and expect the communication between the audio driver and the user-mode program (the one that opened the device via CreateFile()) to work? I am assuming, of course, that I implement CreateHandler(), CloseHandler() and IoctlHandler() the same way I implemented them when I wrote "raw" WDM device drivers (i.e. ones that did not need to comply with the port/miniport scheme).

To better explain the question, let me describe the goal: instead of simply saving the audio output to a .wav file (as MSVAD does in CMiniportWaveCyclicStreamMSVAD::CopyTo()), I would like to copy it into a mapped memory buffer that is shared between the driver and a user-mode process.
To the best of my understanding of Microsoft's white paper "User-Mode Interactions: Guidelines for Kernel-Mode Drivers", a shared, mapped buffer is the recommended way of sending data from a kernel-mode driver to a user-mode application (and vice versa): http://www.microsoft.com/whdc/driver/kernel/KM-UMGuide.mspx

The WDK documentation, on the other hand, suggests a method that - while based on the same principle (IRP_MJ_DEVICE_CONTROL) - seems to require following a different protocol and set of rules. I am pretty confused here, since I am not sure I fully understand the terminology: http://msdn2.microsoft.com/en-us/library/ms794752.aspx

Any idea which approach or direction to take? Is there a sample out there (or even just a code snippet) that demonstrates this kind of communication between the driver and a user-mode process while the driver is simultaneously being used by another application (via the DirectSound API)? The goal is to keep using the MSVAD-based driver I already wrote, without resorting to a complete rewrite as a KS AVStream driver.

Many thanks in advance,
Don