[haiku-bugs] Re: [Haiku] #7285: BSoundPlayer sends buffers late, with ever-increasing latency

  • From: "Pete" <trac@xxxxxxxxxxxx>
  • Date: Fri, 29 Apr 2011 00:03:35 -0000

#7285: BSoundPlayer sends buffers late, with ever-increasing latency
------------------------------+-----------------------
   Reporter:  Pete            |      Owner:  axeld
       Type:  bug             |     Status:  new
   Priority:  normal          |  Milestone:  R1
  Component:  Kits/Media Kit  |    Version:  R1/alpha2
 Resolution:                  |   Keywords:
 Blocked By:                  |   Blocking:
Has a Patch:  0               |   Platform:  All
------------------------------+-----------------------

Comment (by Pete):

 Replying to [comment:8 bonefish]:
 >
 > I don't understand your analysis.
 Possibly because I'm not sure I did, either... (:-))
 I took my time responding because I want to make sure I'm covering all the
 subtle possibilities this time.  There seem to be a lot of confounding
 factors involved.  Forgive me if a lot of the following is what you
 already expected.

 First, let's break down the original problem in a bit more detail, which
 in fact is basically as you described -- using the lateness computed by
 the ControlLoop after waking up is wrong.  If the wake-up is late due to
 a blackout, the loop will always report that lateness, whatever the
 current EventLatency (which is what the MediaEventLooper both computes
 the waiting time from and checks lateness against).  The mixer would
 always respond
 to that by notifying the producer, which would correspondingly increase
 its own latency a bit more.  This would happen every time there was a
 blackout at the 'wrong' time, until the latency got longer than could be
 covered by the available buffer pool, and the sound would go to pieces.
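 To put it concretely, here is a minimal sketch (my own names, not the
 actual MediaEventLooper code) of why the post-wakeup measurement can't
 work: the loop sleeps until (eventTime - latency), so if the wake-up
 itself is delayed, the measured lateness equals the delay no matter how
 large the latency already is -- and every blackout triggers another
 increase.

```cpp
#include <cstdint>

using bigtime_t = int64_t;  // Haiku's microsecond time type

// Hypothetical illustration of the failure mode: the lateness measured
// after a delayed wake-up is exactly the blackout, independent of the
// latency the loop slept with.
bigtime_t MeasuredLateness(bigtime_t eventTime, bigtime_t latency,
    bigtime_t blackout)
{
    bigtime_t wakeupTarget = eventTime - latency;      // when we asked to wake
    bigtime_t actualWakeup = wakeupTarget + blackout;  // delayed wake-up
    // Lateness is checked against the same latency used for the sleep:
    return actualWakeup - (eventTime - latency);       // == blackout, always
}
```

 Note that `latency` cancels out entirely, which is why raising any of
 the latencies in the chain couldn't help.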

 And this is what I meant by the 'wrong time'.  If the blackout occurred,
 say, while the loop was sleeping, it would not make anything late.  It had
 to happen actually at the wakeup time (or between then and when lateness
 was calculated) to cause the problem.  And changing any of the latencies
 in the chain (downstream, internal, or upstream) couldn't change the
 situation in any way, because both the 'expected' time and the wakeup time
 were based on the same latency.

 So, your suggestion to move the control loop's test for lateness seemed
 rational, but in practice it doesn't help much.  I put the calculation at
 the top of the loop, where the head of the queue gets checked, but -- for
 shorter buffers anyway -- the glitches still happened about as often.  I
 realized that this is because it ''is'' a loop.  If the buffer duration is
 longer than the latency, things actually work, because there will normally
 be no buffers kept in the queue.  When each buffer is sent for processing,
 the loop goes into infinite timeout until it is woken by the next input
 buffer.  If the input is late (or is signalled late), the upstream latency
 will be increased as it should be, and it won't be late next time.

 If the buffer duration is ''shorter'' than the latency, though, buffers
 will spend some time in the queue, and will be accessed there by the loop
 at the moment the previous buffer is handled (rather than there being an
 idle interval).  If there is a delay here, the situation is the same as
 before -- whatever the current latency value, the event will be determined
 to be 'late', and we're back to the endless increase.
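 The two regimes can be summed up by a back-of-the-envelope count
 (illustrative only, not code from the patch): roughly
 ceil(latency / buffer duration) buffers are in flight at once.  At 1 the
 loop idles between buffers and recovery works; above 1 buffers sit in
 the queue, the loop never idles, and any hiccup at hand-off time reads
 as 'lateness'.

```cpp
#include <cstdint>

using bigtime_t = int64_t;  // microseconds

// With buffers covering `duration` microseconds flowing through a chain
// with total `latency`, roughly this many buffers are queued or in
// flight at any moment (ceiling division).
int BuffersInFlight(bigtime_t latency, bigtime_t duration)
{
    return (int)((latency + duration - 1) / duration);
}
```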

 Hope that makes more sense this time.  And in general, yes, you're right.
 The "seamless" latency handling is broken.  But actually I don't think
 it's really broken in the control loop.  In fact I've ended up putting it
 back the way it was (excepting an early fix for a sign error in the
 initial lateness tests).  I think the real point is that the mixer was
 using the information wrongly.

 Taking lateness information from the control loop is never going to work,
 as I think I've shown.  What I had to do was move the (upstream) lateness
 check (and NotifyLateProducer call) into the mixer's BufferReceived()
 method, and replace the original check in HandleInputBuffer with a limited
 adjustment for the mixer's own EventLatency to handle the delays happening
 in its control loop.  The 'lateness' parameter itself is now just ignored.
 [I'm not sure what BeOS meant the 'lateness' in the HandleInputBuffer call
 to represent.  It's hard because under no circumstances can I make BeOS
 actually generate a LateNoticeReceived()!]
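 A rough sketch of the scheme I ended up with (the names here are mine,
 not the actual patch): upstream lateness is measured where the buffer
 arrives, in BufferReceived(), which is what would justify a
 NotifyLateProducer() call; HandleInputBuffer() then only nudges the
 mixer's own event latency by a bounded step, so internal control-loop
 delays can no longer feed an unbounded increase.

```cpp
#include <algorithm>
#include <cstdint>

using bigtime_t = int64_t;  // microseconds

// Hedged sketch of the reworked latency handling, under assumed names.
struct MixerLatencySketch {
    bigtime_t eventLatency = 5000;
    static constexpr bigtime_t kMaxStep = 1000;  // clamp per adjustment

    // How late the buffer actually arrived relative to its deadline;
    // measured at arrival (BufferReceived), not after the control loop.
    bigtime_t UpstreamLateness(bigtime_t arrival, bigtime_t deadline) const
    {
        return std::max<bigtime_t>(0, arrival - deadline);
    }

    // Internal delays grow our own latency, but only by a capped amount,
    // so a one-off blackout can't run the latency away.
    void AdjustOwnLatency(bigtime_t internalDelay)
    {
        eventLatency +=
            std::min(std::max<bigtime_t>(0, internalDelay), kMaxStep);
    }
};
```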

 With these mods, I no longer get runaway latency, and I can play live MIDI
 input through SqueekySynth.  There is still the occasional audible hiccup
 -- with a corresponding "MAJOR Glitch!!" report -- but it is tolerable,
 and overall latency stays under 30 msec.

 I still have no clue as to the basic cause of the blackouts.  Is it
 possible that a task switch could occur, even though the thread is high
 priority?

 Anyway, I'll have to tidy up the code -- removing all the extraneous
 printfs etc. -- and then post the patch.

-- 
Ticket URL: <http://dev.haiku-os.org/ticket/7285#comment:9>
Haiku <http://dev.haiku-os.org>
Haiku - the operating system.
