[wdmaudiodev] Re: WaveRT model conceptual flaw

  • From: "Daniel E. Germann" <deg@xxxxxxxx>
  • To: <wdmaudiodev@xxxxxxxxxxxxx>
  • Date: Thu, 4 Jun 2009 08:35:09 -0500

Jose Catena and Tim Roberts wrote...
...
(a lot of stuff about WaveRT, interrupts and timers)
...

You're probably right, Jose: there is probably a little more overhead in maintaining a list of timers than in handling a device interrupt. And if Windows ran no timer interrupt in the absence of WaveRT, you might have a point. But Windows already has a periodic timer interrupt that runs whether WaveRT is present or not, and I have a feeling that decrementing timers, setting events, and picking the next task to run are small things compared to everything involved in a full context switch -- which happens in both the timer and the interrupt scenarios. What's the old saying -- something about straining out a gnat and swallowing a camel? ;-) Just my 2 cents.
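To put some shape on "decrementing timers and setting events": here is a hypothetical sketch (in Python for brevity -- the real thing would be C inside the kernel, and this is not Windows code) of the marginal work an already-running periodic tick does to drive software timers. It is a short walk over a list, which is the point -- cheap next to the context switch that follows in either scenario.

```python
class SoftTimer:
    """Hypothetical software timer driven by an existing periodic tick."""
    def __init__(self, ticks_left):
        self.ticks_left = ticks_left
        self.signaled = False        # stands in for "set the event"

def timer_tick(timers):
    """Called from the periodic timer interrupt: decrement, compare, signal."""
    for t in timers:
        if t.ticks_left > 0:
            t.ticks_left -= 1
            if t.ticks_left == 0:
                t.signaled = True    # wake the waiting audio thread

timers = [SoftTimer(3), SoftTimer(1)]
timer_tick(timers)                   # tick 1: the one-tick timer fires
print([t.signaled for t in timers])  # → [False, True]
timer_tick(timers)
timer_tick(timers)                   # tick 3: the three-tick timer fires
print([t.signaled for t in timers])  # → [True, True]
```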

And I don't know if it works this way, but here's another thought. From the scheduler's point of view, device interrupts are not predictable; timer interrupts are. The scheduler knows when the next timer interrupt will happen, and could take advantage of that information when deciding how to schedule tasks. But the scheduler does not know when the next interrupt from a device (in this case, a sound card) will happen. The audio stack might have a good guess, but the scheduler doesn't have a clue, so it can't use that information to help schedule tasks. A "smart" scheduler could potentially avoid many extra context switches in the timer scenario that it could not avoid in the interrupt scenario.
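The same predictability helps the audio client itself. With a free-running DMA engine, the hardware position is essentially a pure function of elapsed time, so a timer-driven client can predict both the cursor and its own next deadline in advance -- something an interrupt's arrival time never gives you. A minimal sketch, with an assumed 48 kHz format and a hypothetical 10 ms ring buffer (neither is from the original post):

```python
SAMPLE_RATE = 48_000     # frames per second (assumed format)
BUFFER_FRAMES = 480      # 10 ms ring buffer (hypothetical size)

def position_at(elapsed_ns):
    """Predicted read cursor, in frames, within the ring buffer,
    computed purely from elapsed time -- no device interrupt needed."""
    frames = elapsed_ns * SAMPLE_RATE // 1_000_000_000
    return frames % BUFFER_FRAMES

# 5 ms in: the cursor should be halfway around the 10 ms buffer.
print(position_at(5_000_000))   # → 240
# 12 ms in: wrapped past the end, 2 ms into the second pass.
print(position_at(12_000_000))  # → 96
```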

I think another advantage of WaveRT is that more of the work happens in user mode. The less time drivers spend in kernel mode, the less chance they have of crashing the system. And my experiences with WinUSB (user mode USB drivers) have also been good. I really like the concept of doing "driver stuff" from user mode whenever possible. It's safer, easier to develop, and a better experience for the end user if something goes awry.

Now, the other changes that came along for the ride -- those I take issue with. I can see why the "default endpoint" concept can be a good thing. But taking sample rate and bit depth control away from applications -- the only ones that know what the "right" rate is -- is a disaster for anyone who cares, and irrelevant for anyone who doesn't, because it provides absolutely no benefit. And then there's the whole mixer thing... but I digress.

-Dan

******************

WDMAUDIODEV addresses:
Post message: mailto:wdmaudiodev@xxxxxxxxxxxxx
Subscribe:    mailto:wdmaudiodev-request@xxxxxxxxxxxxx?subject=subscribe
Unsubscribe:  mailto:wdmaudiodev-request@xxxxxxxxxxxxx?subject=unsubscribe
Moderator:    mailto:wdmaudiodev-moderators@xxxxxxxxxxxxx

URL to WDMAUDIODEV page:
http://www.wdmaudiodev.com/
