[wdmaudiodev] Re: WHQL for virtual audio driver

  • From: "Vincent Burel \(VB-Audio\)" <vincent.burel@xxxxxxxxxxxx>
  • To: <wdmaudiodev@xxxxxxxxxxxxx>
  • Date: Tue, 3 Nov 2015 19:40:12 +0100

Yes, but making a virtual audio driver with a callback to user mode is a big
job (especially the validation) …

So, I would like to mention that Voicemeeter can do it for you: “Voicemeeter
Banana” features a virtual insert with an ASIO interface.

Through this virtual ASIO insert, a user application can then take advantage
of the 2 virtual I/Os and the aggregation of 3 physical devices (through 22
synchronized channels).

Note also that the next Voicemeeter version will be usable as an audio engine
thanks to the VoicemeeterRemote API …



For people looking for such a solution to process audio in a simple user
application, with all the connectivity and audio interfaces, you can join our
forum and start playing with the current pre-release version.

VoicemeeterRemote API forum:
http://vbaudio.jcedeveloppement.com/forum/viewforum.php?f=8
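
To give an idea of the kind of code a client application would need, here is a
simplified C++ sketch of driving Voicemeeter through the remote DLL. The DLL
name and exported function names shown (VoicemeeterRemote.dll, VBVMR_Login,
VBVMR_SetParameterFloat, VBVMR_Logout) are only indicative; the SDK posted on
the forum defines the actual interface.

// Simplified, indicative sketch only: the DLL name and the exported symbols
// used here must be checked against the SDK from the forum.
#include <windows.h>
#include <cstdio>

typedef long (__stdcall *T_VBVMR_Login)(void);
typedef long (__stdcall *T_VBVMR_Logout)(void);
typedef long (__stdcall *T_VBVMR_SetParameterFloat)(char *szParamName, float value);

int main()
{
    HMODULE hLib = LoadLibraryA("VoicemeeterRemote.dll");   // assumed DLL name
    if (hLib == NULL) { printf("remote DLL not found\n"); return 1; }

    T_VBVMR_Login             pLogin  = (T_VBVMR_Login)GetProcAddress(hLib, "VBVMR_Login");
    T_VBVMR_Logout            pLogout = (T_VBVMR_Logout)GetProcAddress(hLib, "VBVMR_Logout");
    T_VBVMR_SetParameterFloat pSetF   = (T_VBVMR_SetParameterFloat)GetProcAddress(hLib, "VBVMR_SetParameterFloat");
    if (pLogin == NULL || pLogout == NULL || pSetF == NULL) { printf("unexpected exports\n"); return 1; }

    if (pLogin() < 0) { printf("Voicemeeter is not running\n"); return 1; }

    // Example: set the gain of the first input strip (parameter names are
    // those of the pre-release documentation and may change).
    pSetF((char*)"Strip(0).Gain", -6.0f);

    pLogout();
    FreeLibrary(hLib);
    return 0;
}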



Regards

Vincent Burel

www.voicemeeter.com



From: wdmaudiodev-bounce@xxxxxxxxxxxxx [mailto:wdmaudiodev-bounce@xxxxxxxxxxxxx]
On Behalf Of Tim Roberts
Sent: Tuesday, November 3, 2015 7:05 PM
To: wdmaudiodev@xxxxxxxxxxxxx
Subject: [wdmaudiodev] Re: WHQL for virtual audio driver



Matthew van Eerde wrote:



Tim, when you say a “pipe to user mode”, do you have in mind:

1. an audio playback endpoint that can be used by a dumb app (like
Virtual Audio Cable does it), or

2. a custom interface which the driver would expose to “in the know”
user-mode code?



The closest thing to audio routing that Windows supports today is the WASAPI
audio loopback interface. In principle a driver could be packaged with a
user-mode service which includes a WASAPI loopback client; this could deliver
the audio back to the driver via a custom mechanism, and the driver could do
whatever it liked with it (e.g., deliver it up on a recording endpoint.)
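
For reference, the loopback client half of that user-mode service is just
documented WASAPI; a bare-bones capture loop would look roughly like this
(error handling stripped, and the hand-off back to the driver is whatever
private mechanism you invent):

// Bare-bones WASAPI loopback capture: grabs whatever is being rendered to the
// default output device. Error handling is omitted for brevity.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *pEnum = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

    IMMDevice *pDevice = NULL;
    pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);   // render endpoint, tapped via loopback

    IAudioClient *pClient = NULL;
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pClient);

    WAVEFORMATEX *pwfx = NULL;
    pClient->GetMixFormat(&pwfx);

    // AUDCLNT_STREAMFLAGS_LOOPBACK turns a render endpoint into a capture source.
    pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                        10000000 /* 1 s buffer, in 100-ns units */, 0, pwfx, NULL);

    IAudioCaptureClient *pCapture = NULL;
    pClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCapture);
    pClient->Start();

    for (;;)
    {
        Sleep(10);
        UINT32 packetFrames = 0;
        pCapture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE *pData; UINT32 frames; DWORD flags;
            pCapture->GetBuffer(&pData, &frames, &flags, NULL, NULL);

            // ... hand 'frames' frames at 'pData' to the driver/service by
            // whatever private mechanism you choose (IOCTL, shared memory, ...)

            pCapture->ReleaseBuffer(frames);
            pCapture->GetNextPacketSize(&packetFrames);
        }
    }
    // (cleanup of the COM objects and pwfx omitted)
}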


People simply want to do system-wide audio effects processing, for reasons that
vary from silly to fascinating. As a consulting driver writer, I have had 3 or
4 such requests in the last few years. Ideally, they'd like to write an
effects processor that can be slipped into the data stream just like an APO,
at either the capture or render end, but generically -- not associated with a
specific piece of hardware.

Now, whether I agree with it or not, I do understand the justifications that
led to the tight coupling between GFX APOs and drivers, and that coupling makes
the APO path unworkable in the generic case. As a fallback, the next design
choice is to have a fake audio device to which all of the applications can be
routed. The fake device's audio stream can then be routed to a user-mode
service, where it can be massaged and written to real speakers.

The WASAPI loopback interface lets me intercept the stream, but as far as I
know that's a read-only path -- there's no way for me to manipulate it and spew
it back out. I suppose if there were a "null" audio sink, I could use the
WASAPI loopback hooks on that, and redirect it to another audio sink. Hmm,
that option never occurred to me before.
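
For completeness, the "redirect it to another audio sink" half would also be
ordinary WASAPI. A rough sketch, assuming the two endpoints happen to share a
mix format (in real life you'd convert/resample, and you'd pace writes by
checking IAudioClient::GetCurrentPadding first):

// Sketch of the redirect half: an ordinary shared-mode render client on the
// real output device; captured loopback frames are copied into its buffer.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <string.h>

IAudioClient       *g_pClient = NULL;
IAudioRenderClient *g_pRender = NULL;
WAVEFORMATEX       *g_pwfx    = NULL;

// pSpeakers: IMMDevice for the real output endpoint, obtained from IMMDeviceEnumerator.
bool OpenRenderSink(IMMDevice *pSpeakers)
{
    if (FAILED(pSpeakers->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&g_pClient)))
        return false;
    g_pClient->GetMixFormat(&g_pwfx);
    if (FAILED(g_pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                                     10000000 /* 1 s buffer */, 0, g_pwfx, NULL)))
        return false;
    g_pClient->GetService(__uuidof(IAudioRenderClient), (void**)&g_pRender);
    return SUCCEEDED(g_pClient->Start());
}

// Called with each block of (possibly massaged) frames from the loopback side.
void PushFrames(const BYTE *pData, UINT32 frames)
{
    BYTE *pOut = NULL;
    if (SUCCEEDED(g_pRender->GetBuffer(frames, &pOut)))
    {
        memcpy(pOut, pData, (size_t)frames * g_pwfx->nBlockAlign);
        g_pRender->ReleaseBuffer(frames, 0);
    }
}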



--
Tim Roberts, timr@xxxxxxxxx
Providenza & Boekelheide, Inc.
