Windows entirely relies on you to set the pace.
The exact mechanism for this varies by driver model/miniport. WaveCyclic uses
GetPosition.
If the GetPosition call indicates that there is room for more audio data,
Windows will get it from the apps and then write it into the buffer.
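As a rough illustration of that polling loop, here is a user-mode sketch (all names here are illustrative assumptions, not the actual engine code) of how the amount of writable room in a cyclic buffer follows from the position the driver reports:

```cpp
#include <cassert>
#include <cstdint>

// Bytes the engine can safely write into the cyclic buffer: the region
// from its own write cursor forward to the play cursor the driver
// reported via GetPosition, keeping a one-byte gap so "full" and
// "empty" remain distinguishable.
uint32_t writable_bytes(uint32_t buffer_size,
                        uint32_t play_pos,   // byte offset from GetPosition
                        uint32_t write_pos)  // engine's last write offset
{
    return (play_pos + buffer_size - write_pos - 1) % buffer_size;
}
```

If the reported position advances slowly, the writable region grows slowly and Windows hands the driver less data per pass; advance it quickly and Windows feeds more. That is the sense in which the driver sets the pace.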
From: Matthieu Collette<mailto:matthieu.collette@xxxxxxx>
Sent: Wednesday, April 27, 2016 2:30 AM
To: Matthew van Eerde<mailto:Matthew.van.Eerde@xxxxxxxxxxxxx>
Cc: wdmaudiodev@xxxxxxxxxxxxx<mailto:wdmaudiodev@xxxxxxxxxxxxx>
Subject: Re: [wdmaudiodev] How to perform adaptative resampling ?
You are definitely right, writing an audio driver is hard: interesting, but hard.
The remote playback device sends me data feedback, such as current frame number
being played and how full the playback buffer is.
Using this feedback data and other data such as the current sample rate and the
remote playback buffer size, I have the kind of architecture you were referring
to in a previous mail.
Without using a DMA buffer on the driver side, isn't there a way to
implement GetPosition and compute some kind of fake position based on the
audio feedback received from the remote playback device?
How is the value returned by GetPosition used by the system to adjust the pace?
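For what it is worth, one way to sketch such a fake position (the names and units below are assumptions for illustration, not a real miniport implementation) is to extrapolate from the last remote feedback at the nominal sample rate:

```cpp
#include <cassert>
#include <cstdint>

struct Feedback {
    uint64_t frames_played;   // frame counter reported by the remote device
    uint64_t timestamp_100ns; // local time when the feedback arrived
};

// Estimate the current byte position in the cyclic buffer: frames known
// to be played as of the last feedback, plus frames presumed played
// since then at the nominal rate, converted to bytes modulo the buffer.
uint64_t estimate_position(const Feedback& fb,
                           uint64_t now_100ns,
                           uint32_t sample_rate,
                           uint32_t frame_bytes,
                           uint32_t buffer_bytes)
{
    uint64_t elapsed = now_100ns - fb.timestamp_100ns;
    uint64_t frames  = fb.frames_played + elapsed * sample_rate / 10000000ull;
    return (frames * frame_bytes) % buffer_bytes;
}
```

Between feedback packets the estimate free-runs on the local clock, so late or lost feedback degrades the estimate gradually rather than stalling it.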
I can't limit myself to Windows 10 at the moment.
Thanks for your answers
2016-04-26 19:38 GMT+02:00 Matthew van Eerde
<Matthew.van.Eerde@xxxxxxxxxxxxx<mailto:Matthew.van.Eerde@xxxxxxxxxxxxx>>:
The documentation is all going to be about how to write an audio driver that
feeds a digital to analog converter (DAC) that is wired to a speaker, or is fed
by an analog to digital converter (ADC) that is wired to a microphone.
Writing an audio driver is, to be honest, kind of hard.
What you’re doing – writing a virtual audio driver that pretends to be feeding
a DAC, but is actually doing much more complicated things – is much harder.
You’ll have to figure out how to *pretend* to be wired to a DAC (so you don’t
confuse Windows), while still actually servicing your downstream client, all
without underrunning or overrunning your buffer.
After the drift problems are solved, there’s also A/V sync to consider. That
will be very challenging to address within the WaveCyclic constraints.
If you can limit yourself to Windows 10, I would suggest going with
IMiniportWaveRTOutputStream instead – that gives you nice SetWritePacket and
GetOutputStreamPresentationPosition methods, which are much more flexible.
https://msdn.microsoft.com/en-us/library/windows/hardware/dn946534(v=vs.85).aspx
From: Matthieu Collette<mailto:matthieu.collette@xxxxxxx>
Sent: Tuesday, April 26, 2016 10:08 AM
To: Matthew van Eerde<mailto:Matthew.van.Eerde@xxxxxxxxxxxxx>
Cc: wdmaudiodev@xxxxxxxxxxxxx<mailto:wdmaudiodev@xxxxxxxxxxxxx>
Subject: Re: [wdmaudiodev] How to perform adaptative resampling ?
Hi Matthew !
Thanks for your answer.
I have more questions :)
According to the documentation here:
https://msdn.microsoft.com/en-us/library/windows/hardware/ff536716(v=vs.85).aspx
the position returned represents the miniport driver's best estimate of the
byte position of the data currently in the DAC or ADC.
Does "in the DAC or ADC" mean in my remote playback device?
In my first attempt, I used a circular buffer and experienced some odd
behaviour, such as noise from time to time, due to a bad estimation I
presume, so I decided to get rid of that buffer management and send audio
data directly from kernel to user space using a socket. The only method I am
interested in is IDmaChannel::CopyTo, to get the audio data. I would prefer
not to use a circular buffer.
Computing this estimation precisely seems to be a problem for me, because I
have to rely on the feedback data periodically sent by the remote playback
device, and I have no guarantee of receiving this feedback on time, or of
receiving it at all.
2016-04-26 18:05 GMT+02:00 Matthew van Eerde
<Matthew.van.Eerde@xxxxxxxxxxxxx<mailto:Matthew.van.Eerde@xxxxxxxxxxxxx>>:
The Windows audio clock is driven by the audio endpoint – that is to say, you.
You in turn are feeding an application, which is feeding a remote device. So to
avoid drift issues, you need to be driven by the remote device.
Once you’ve got that architected, implement IMiniportWaveCyclic::GetPosition so
as to keep pace with the remote device. Then it will all just work.
From: Matthieu Collette<mailto:matthieu.collette@xxxxxxx>
Sent: Tuesday, April 26, 2016 5:04 AM
To: wdmaudiodev@xxxxxxxxxxxxx<mailto:wdmaudiodev@xxxxxxxxxxxxx>
Subject: [wdmaudiodev] How to perform adaptative resampling ?
Hi,
I have modified the MSVAD driver sample to be able to send audio data to a
remote device.
This driver sends audio data to a local application using a connection-oriented
socket; this application then sends the audio data to the remote device using a
datagram-oriented socket. As much as possible, the audio data must not be
altered, to preserve a bit-perfect audio path from the driver to the remote
playback device.
It works almost as expected, apart from the clock synchronisation problem I am
currently facing and still struggling with.
When both the source (computer + driver) and the remote playback device are
configured to use a specific audio format (sample rate, bit depth, channels,
...), I can see that after a few minutes of continuous playback, a buffer
underrun occurs on the remote device. Less often it is a buffer overrun, but in
both cases the problem is the same: the two devices have distinct clocks,
which I cannot synchronise.
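That timescale is consistent with ordinary crystal clock tolerances. As a back-of-envelope sketch (the numbers below are illustrative assumptions, not measurements):

```cpp
#include <cassert>
#include <cmath>

// With a clock offset of ppm_offset parts per million, the remote buffer
// drains (or fills) at sample_rate * ppm_offset / 1e6 frames per second,
// so headroom_frames of slack is gone after this many seconds.
double seconds_until_underrun(double sample_rate,
                              double ppm_offset,
                              double headroom_frames)
{
    double drift_frames_per_sec = sample_rate * ppm_offset / 1e6;
    return headroom_frames / drift_frames_per_sec;
}
```

At 48 kHz, a 100 ppm offset drifts 4.8 frames per second, so 4800 frames (100 ms) of headroom lasts roughly 1000 seconds: minutes, as observed.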
I do not have access to the local streaming application, nor do I have access
to the remote playback device. The only feedback I have during playback is how
full the remote device's playback buffer is, which is sent by the remote
playback device.
Using this feedback, I want to maintain a constant amount of audio data in that
buffer; to achieve that, I need a way to speed up or slow down the pace at
which I send audio data from the driver to this remote device.
My driver implements both the IDmaChannel and IMiniportWaveCyclicStream
interfaces. I wonder how, using these interfaces, I can slow down or speed up
the pace at which I send audio data to the remote device in order to provide
it with a constant amount of samples.
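One common shape for that adjustment (the gain and clamp values below are illustrative assumptions, not tuned numbers) is a small proportional trim on the effective rate, driven by the fill-level error:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Nudge the effective rate so the remote buffer fill converges on a
// target: below target -> consume/send faster, above target -> slower.
// The trim is clamped to +/-1% so the correction stays small.
double adjust_rate(double nominal_rate,
                   double fill_frames,    // current remote buffer fill
                   double target_frames)  // desired steady-state fill
{
    const double gain = 0.01; // proportional gain (assumption)
    double error = (target_frames - fill_frames) / target_frames;
    double trim  = std::clamp(error * gain, -0.01, 0.01);
    return nominal_rate * (1.0 + trim);
}
```

Feeding the adjusted rate back into whatever clocks the driver's data delivery (for example, the rate used to advance a reported position) closes the loop: a draining remote buffer speeds the source up, a filling one slows it down.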
Would it be possible to use IMiniportWaveCyclicStream::SetNotificationFreq
and/or the IPortWaveCyclic::Notify method to do what I need?
Assuming that I can rely only on the remote device's audio feedback, is it
relevant to use a WaveCyclic driver?
Thanks in advance for any tips !
Cheers
Matt