[haiku-commits] Re: haiku: hrev49508 - src/kits/media

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Tue, 18 Aug 2015 03:48:10 +0200

On 08/17/2015 11:52 PM, Dario Casalinuovo wrote:

But even if they didn't use the enqueue time... is that by itself a
reason to not use it?

The BeOS one worked well; I'm also wondering if it's a false-positive
issue. Anyway, at this point, why not use the dequeue_time as I'm
doing, or both?

As Julian tried hard to explain: Because the lateness value you're computing now is not useful and computing it based on the enqueuing time would actually solve problems.

Using it makes the lateness handling implementation simply more
resilient to error cases. It doesn't make anything worse, but
improves the results in edge cases. Improvements over BeOS have been
done in many other parts of Haiku as well, so that's not unusual.

There are cases where similar problems occur even when using the
enqueue_time, since we are not considering how much time passed between
enqueue and dequeue. I find the "enqueue - event_time" lateness
calculation just bad; it's more logical to define it as "event_time -
RealTime()". I think the enqueue_time could still be used to make things
more precise and avoid problems, but without creating wrong logic.

Unfortunately the lateness parameter of HandleEvent() is not documented properly. Maybe some Be Newsletter or sample code sheds some light on what it is actually supposed to mean (probably not really -- at least the sample code I've worked with was fairly buggy). However, there are obvious issues with the status quo, which a lateness computed based on the buffer arrival time would not have:

1. What you compute now definitely is not a particularly insightful value, since it could be computed just as well by the HandleEvent() implementation.

2. By ignoring the enqueuing time, information is lost to the implementation of a derived class.

3. (Really just a consequence of the above.) Existing code expects the lateness to be the buffer arrival lateness. Cf. e.g. the audio mixer's AudioMixer::HandleInputBuffer() implementation [1]. The lateness value is passed to NotifyLateProducer(), which notifies the producer that its buffer arrived late (*). However, ATM the buffer arrival time is ignored, so there is obviously something askew. As Julian already explained, it is important to assign the blame for the lateness correctly. If the buffer arrived in time, but something causes the wake-up from the wait in ControlLoop() to be delayed, assigning the blame to the upstream node is simply wrong. It will produce buffers earlier, so they will arrive earlier, but the wake-up time will not change, so that, unless the cause of the delayed wake-up has disappeared, the buffer will still appear late. If, however, the node receiving the buffer were assigned the blame (if it actually finished processing the buffer late), its own latency would be increased, compensating for the delays.

So, long story short: It simply makes sense to define HandleEvent()'s lateness parameter as the lateness of buffer arrival and to compute it accordingly.
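To make that concrete, the computation could look roughly like this (just a sketch: MyNode is a hypothetical class derived from BMediaEventLooper, and queuedTime stands for the real time at which the event was pushed into the queue -- media_timed_event doesn't carry that information ATM, so the queue would have to record it):

#include <MediaEventLooper.h>
#include <TimedEventQueue.h>
#include <TimeSource.h>

// Sketch only: lateness as lateness of buffer arrival. Meant to live in
// a class derived from BMediaEventLooper; "queuedTime" is the real time
// at which the event was enqueued (not recorded by the current code).
bigtime_t
MyNode::_ArrivalLateness(const media_timed_event& event, bigtime_t queuedTime)
{
	// The real time by which the buffer should have been enqueued:
	// early enough to leave room for this node's event (processing)
	// latency and its scheduling latency.
	bigtime_t deadline = TimeSource()->RealTimeFor(event.event_time, 0)
		- EventLatency() - SchedulingLatency();

	// Positive if the buffer actually arrived late. A delayed wake-up
	// from the wait in ControlLoop() does not inflate this value, since
	// only the enqueuing time enters the computation.
	return queuedTime - deadline;
}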

For example, I'm considering adding a new function such as
BMediaEventLooper::SuggestSchedulingLatency() to calculate the
scheduling latency, also using the enqueue value.

I've done some code hacks to try to adjust the scheduling latency, and
the nodes seemed more reactive and smooth than now. As said before, the
nodes can work with the different latencies in various portions of the
code and try to adjust things once the load changes or when we have a
pathological lateness. In this case the mentioned function will take
care never to increase the scheduling latency but to use an averaging
formula, taking into account various limits such as the
estimate_max_scheduling_latency value.

It would appear helpful to have a way to get a (fairly) accurate scheduling latency estimate and use it to dynamically adjust the value the node uses. However, that requires kernel support that we don't have ATM, and one would at least need another value: how much longer processing the buffer takes depending on the system load. In theory that would allow us to dynamically adjust the node's latency according to the system load. In practice, we're not only lacking the required kernel support; the implementation would also be more complicated, and it probably wouldn't work anyway, since the total latency of the node chain wouldn't allow us to make the adjustments quickly enough on load changes.

So I'd just leave things as they are: Use a poor guesstimate for the scheduling latency and adjust (i.e. increase) the internal latency according to the actual lateness values.
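For completeness: the guesstimate is readily available via estimate_max_scheduling_latency() from MediaDefs.h. A node could use it like this (sketch only; MyNode and fProcessingLatency are made up):

#include <MediaDefs.h>
#include <MediaEventLooper.h>

// Sketch only: where the scheduling latency guesstimate could come from.
void
MyNode::NodeRegistered()
{
	// Rough upper bound for how long the calling thread may have to
	// wait until it is scheduled -- a guesstimate, since the kernel
	// gives no hard guarantees.
	bigtime_t schedulingLatency = estimate_max_scheduling_latency();

	// Start out with the processing latency plus the guesstimate;
	// increased later whenever buffers actually turn out to be late.
	SetEventLatency(fProcessingLatency + schedulingLatency);

	Run();
}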

I mean... you say that there is no decision of who is late. But what
would you do with the lateness value then?

It's up to the BBufferConsumer/BBufferProducer to decide this.

Unless there is an exact definition what the lateness value means, it is not possible to make any decision based on it. As explained above, there's really only one meaning that is useful -- lateness of buffer arrival -- so we should define that as the intended meaning and compute it accordingly.
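For illustration, with that definition a consumer's buffer handler could act on the value roughly like this (a sketch in the spirit of AudioMixer::HandleInputBuffer(), not the actual code; fInput, fInternalLatency and fDownstreamLatency are placeholders):

#include <Buffer.h>
#include <BufferConsumer.h>
#include <MediaEventLooper.h>
#include <TimeSource.h>

// Sketch only: assigning the blame based on the lateness of buffer
// arrival. MyConsumer is a hypothetical node derived from both
// BBufferConsumer and BMediaEventLooper.
void
MyConsumer::HandleInputBuffer(BBuffer* buffer, bigtime_t arrivalLateness)
{
	const bigtime_t kAcceptableJitter = 1000;	// 1 ms, arbitrary

	// How late we actually got around to handling the buffer, relative
	// to the same deadline the arrival lateness is computed against.
	bigtime_t handlingLateness = TimeSource()->RealTime()
		- (TimeSource()->RealTimeFor(buffer->Header()->start_time, 0)
			- EventLatency() - SchedulingLatency());

	if (arrivalLateness > kAcceptableJitter) {
		// The buffer already arrived late: the upstream node (or the
		// path to it) is to blame. Ask the producer to produce -- and
		// thus send -- its buffers earlier.
		NotifyLateProducer(fInput.source, arrivalLateness,
			TimeSource()->Now());
	} else if (handlingLateness > kAcceptableJitter) {
		// The buffer arrived in time, but we are running behind
		// (delayed wake-up, system load, ...): the blame is ours, so
		// increase our own latency to compensate.
		fInternalLatency += handlingLateness;
		SetEventLatency(fDownstreamLatency + fInternalLatency);
	}

	// ... actual processing of the buffer ...
}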

How do you propose a node would later on find out who was
responsible for the lateness, i.e. who should get the late notice, given
only the value, without any definition? It does not seem possible to me
to use it meaningfully then.

Starting from the fact that blackouts shouldn't happen,

Not sure why you're so hung up on the "blackouts". Yes, there's obviously an issue that should be fixed, but the same effect could be caused by heavy system load. So this is something that the media kit should be able to handle gracefully.

and if it's
happening, the problem is elsewhere; we may provide additional facilities
so that the underlying layer decides what to do.

[I assume you actually mean the "layer" implemented on top of (aka class derived from) BMediaEventLooper and BMediaNode and friends.]

As Julian and I explained, using a different semantics of lateness would already do the trick (modulo bugs in other places like AudioMixer::HandleInputBuffer(), of course).

[...]
Hard real-time is only possible if the operating system (i.e. kernel
and everything around it) supports it, which is not the case with Haiku.

I don't see anything that would prevent running a different kind of
media_kit based on handling semaphores and areas instead of relying on
ports and areas.

Me neither. However, I also don't really see a reason to do that.

CU, Ingo

(*) AudioMixer::HandleInputBuffer() guards the lateness handling block with a check against fLastLateness. This behavior is incorrect. E.g. when a lateness of 1s has been handled and the producer notified, it will start (and finish) producing buffers 1s earlier. If thereafter a buffer is, say, .5s late, this lateness will simply be ignored and no further latency adjustment will happen. Instead every late buffer (beyond an acceptable jitter) should be reported to the producer. The producer in turn must track late notices, so that it e.g. doesn't apply the late notices of two subsequent buffers both, when the second buffer was already produced before the first latency adjustment could take effect. Thus, latencies will be adjusted until the buffers no longer arrive late.
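A producer-side sketch of what such tracking could look like (member names made up; this is not existing code):

#include <BufferProducer.h>
#include <MediaEventLooper.h>
#include <TimeSource.h>

// Sketch only: remember up to which performance time the last latency
// adjustment already applies, and ignore notices for buffers that were
// produced before that adjustment could take effect.
void
MyProducer::LateNoticeReceived(const media_source& what, bigtime_t howMuch,
	bigtime_t performanceTime)
{
	if (what != fOutput.source)
		return;

	// This buffer was produced before the previous adjustment took
	// effect; applying the notice again would double the correction.
	if (performanceTime < fAdjustmentEffectiveTime)
		return;

	fInternalLatency += howMuch;
	SetEventLatency(fDownstreamLatency + fInternalLatency);

	// Buffers for performance times up to roughly now + the new latency
	// have already been produced (or scheduled) with the old latency;
	// don't adjust for them again.
	fAdjustmentEffectiveTime = TimeSource()->Now() + EventLatency();
}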

[1] http://cgit.haiku-os.org/haiku/tree/src/add-ons/media/media-add-ons/mixer/AudioMixer.cpp?id=92a3fa86dbc1f99f2c063a967fc650da499bad34#n322


