[wdmaudiodev] Re: WaveRT model conceptual flaw

  • From: "Jose Catena" <jc1@xxxxxxxxxxx>
  • To: <wdmaudiodev@xxxxxxxxxxxxx>
  • Date: Thu, 4 Jun 2009 21:18:39 +0200

Hi again,

The issue is not the CPU overhead (although that objection is also valid), but
the timing relationship, which translates directly into added latency. I see
that the real implications are not being understood.
The latency is doubled if we use a timer instead of the audio interrupt, not
because of processing time, but because of synchronization. We'll see it in detail.
This kind of problem is very familiar to me and I have a lot of experience
with it. I have developed systems that had to comply with very hard timing
requirements, sometimes even below 20 us (maximum response time), or 1 ms
across a complex distributed system with communication lines in between. I really
know what works, what doesn't, and why. Not because I'm anything special;
this is proven and rather basic computing technology. It's only that people
who never had to meet hard rt requirements may ignore it, and I couldn't.
And don't think that Windows is different: the principles are valid for
any situation; in the end it is just math. So, if you want to understand why
the implications are much more important than you think, please read on.
At the risk of being boring, I need to start from the beginning to clear up the
misunderstanding I'm seeing here.

An advanced audio application wants to be notified when the number of
samples ahead of the playback position falls below a specified threshold, or,
equivalently, when playback reaches a given position. The minimum audio
latency achievable is given by the maximum time the system may delay the
notification after that condition occurs, and the practical optimum is that
minimum multiplied by two, to allow for near-100% processing time. This
extends to any "real time" processing, that is, anything that must be done
within a given deadline after an *external* event happens. Any delay in
such notifications translates directly into latency.

Note the definition of system rt latency; I rephrase: the maximum time to
process an EXTERNAL event. For us, that event is playback reaching a given
position.
The audio interrupt is synchronous with the playback buffer position; it is
not simply a timer. It signals exactly when the playback position reaches the
specified point, so only the system rt latency counts toward the delay until
the relevant code is executed.
The timer interrupt is not synchronous with the audio playback position; it is
just a periodic timer. So, to begin with, we have to add the timer period
to the max system rt latency. An example: if we estimate the max system
latency to be 2 ms and we want to be woken up at t + 4 ms, we set the timer
resolution to 2 ms, and we will be signaled anytime between t + 2 and t + 4 ms,
plus the system rt latency.
Ideal audio processing period = max_sys_latency = 2 ms
a) for audio interrupt: audio_latency = sys_rt_latency * 2 = 4 ms
b) for timer: audio_latency = (max_sys_latency + timer_period) * 2 = (2 + 2) * 2 = 8 ms
This is like 2 + 2 = 4 for people who have worked with rt requirements.
If you don't know why the optimal audio latency (how far ahead of the current
position we must write every period) is 2 * the overall notification latency,
disregard it; it scales the same in both cases, so the proportion between the
results is the same:
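The arithmetic above can be checked with a small sketch. The function names
and the 2 ms worst-case figure are mine, chosen for illustration; they are not
part of any Windows API.

```python
# Sketch of the latency arithmetic above; all times in milliseconds.

MAX_SYS_RT_LATENCY = 2.0  # assumed worst-case system response to an event


def interrupt_latency(sys_rt_latency):
    """The audio interrupt fires exactly at the playback position, so only
    the system rt latency delays us, doubled for near-100% processing load."""
    return sys_rt_latency * 2


def timer_latency(sys_rt_latency, timer_period):
    """A periodic timer is not synchronized with the playback position, so
    its full period adds to the worst-case delay before the doubling."""
    return (sys_rt_latency + timer_period) * 2


print(interrupt_latency(MAX_SYS_RT_LATENCY))    # 4.0 ms
print(timer_latency(MAX_SYS_RT_LATENCY, 2.0))   # 8.0 ms
```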

We may reduce this unnecessarily added latency in the timer case (but
never eliminate it) by selecting a higher resolution for the timer. But then we
multiply the frequency of interrupts, which is not good either. A lose/lose
for the timer option either way.
For example, if we use a 1 ms period, the latency scales by 1.5 while the
interrupt frequency doubles.
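The trade-off can be tabulated numerically (again a sketch with my own
variable names, assuming the same 2 ms worst-case system rt latency):

```python
# Timer period vs. latency and interrupt rate, under the model above.
max_sys_latency = 2.0                      # ms, assumed worst case
interrupt_case = max_sys_latency * 2       # 4.0 ms audio-interrupt baseline

for period in (2.0, 1.0):                  # timer periods in ms
    timer_case = (max_sys_latency + period) * 2
    scale_vs_interrupt = timer_case / interrupt_case
    interrupts_per_sec = 1000.0 / period
    print(period, timer_case, scale_vs_interrupt, interrupts_per_sec)
# 2.0 ms period -> 8.0 ms latency (2.0x the interrupt case),  500 ticks/s
# 1.0 ms period -> 6.0 ms latency (1.5x the interrupt case), 1000 ticks/s
```

So halving the period only brings the latency penalty from 2x down to 1.5x,
while doubling the tick rate.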
And please note, for advanced audio apps like multi-track editors, where low
latency and performance are most needed, doubling the timer interrupt rate
also doubles the overall audio processing rate, so the overhead is doubled
along the whole path (i.e. all DMO/DirectShow objects, mixing, etc. will
process blocks half the size at double the rate). And still the overall
latency is always larger than with the audio interrupt.
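To see why the overhead doubles, consider a simple cost model (hypothetical
numbers, not measurements): each pass over a block carries a fixed per-block
cost for calls, mixing setup, and so on, regardless of block size.

```python
# Hypothetical fixed-cost-per-block model of the processing path.
PER_BLOCK_OVERHEAD_US = 50.0   # assumed fixed cost per block, in microseconds


def overhead_per_second(period_ms):
    """Fixed per-block cost times the number of blocks per second."""
    blocks_per_second = 1000.0 / period_ms
    return PER_BLOCK_OVERHEAD_US * blocks_per_second


print(overhead_per_second(2.0))  # 25000.0 us of overhead per second at 2 ms
print(overhead_per_second(1.0))  # 50000.0 us of overhead per second at 1 ms
```

Halving the period halves the block size but doubles the block rate, so the
fixed per-block overhead is paid twice as often.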

Regarding the difference between the timer and other interrupts, I think I
was more than clear; please see my earlier post. Just a brief summary:
At any interrupt other than the timer, the system only needs to enumerate
the tasks associated with that specific event, most probably only one. Of
those, the one with the highest priority will be switched to if it is higher
than the current task. THAT'S ALL.
Nothing else. Here we DO NOT need to: update the current task's execution
timers, update a list of software timers, process the whole system task list, etc.

Anyway, after all, interrupts are there for a reason.
There is no advantage in using timed polling instead, and there are
important disadvantages.
This has been a very big mistake, unforgivable if made in the most widely
used OS, and the one for which most advanced audio software is available.
Nothing I have explained is an opinion; these are facts I know as well as I
know that 2 + 2 = 4. If you still don't understand, I'm very sorry; I know I'm
not a good teacher, and I tend to assume the audience knows what I consider
the prerequisites to the issue, but I tried my best.

Jose Catena


WDMAUDIODEV addresses:
Post message: mailto:wdmaudiodev@xxxxxxxxxxxxx
Subscribe:    mailto:wdmaudiodev-request@xxxxxxxxxxxxx?subject=subscribe
Unsubscribe:  mailto:wdmaudiodev-request@xxxxxxxxxxxxx?subject=unsubscribe
Moderator:    mailto:wdmaudiodev-moderators@xxxxxxxxxxxxx

