[haiku-development] Re: Ideas related to the media_kit

  • From: Julian Harnath <julian.harnath@xxxxxxxxxxxxxx>
  • To: <haiku-development@xxxxxxxxxxxxx>
  • Date: Tue, 17 Nov 2015 13:10:47 +0100

Hey Dario,

On 17.11.2015 12:25, Dario Casalinuovo wrote:

> The final thing is, where's the guarantee that the producer hasn't
> already scheduled other buffers, so that our late notice will only take
> effect after n buffers?

Well, first, I wouldn't expect that n in this case is large. Detecting that a buffer is late at the consumer happens quickly, and the documentation says that a producer is to handle a lateness notice immediately.
And even if a few late buffers pile up, the producer will (and rightfully so) get lateness notices for them. As soon as it has adjusted its latency (or done whatever its run mode dictates), things will run in order again.
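For illustration, the consumer-side check could look roughly like this. This is only a sketch: "MyConsumer" and "fInternalLatency" are made-up names for a BBufferConsumer subclass and its own processing latency, declared elsewhere; NotifyLateProducer() and the media_header fields are the actual API:

#include <Buffer.h>
#include <BufferConsumer.h>
#include <TimeSource.h>

// Sketch: detect a late buffer as soon as it arrives and tell the
// producer immediately. MyConsumer and fInternalLatency are placeholders.
void
MyConsumer::CheckLateness(BBuffer* buffer, const media_source& producer)
{
    bigtime_t now = TimeSource()->Now();
    // The buffer should be handled by start_time; on top of "now" we
    // still need our own fInternalLatency microseconds of processing.
    bigtime_t lateBy = now + fInternalLatency - buffer->Header()->start_time;
    if (lateBy > 0) {
        // howMuch = how late we are, performanceTime = when we noticed
        // it - the latter is what lets the producer group notices.
        NotifyLateProducer(producer, lateBy, now);
    }
}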

> How can we ensure that the n buffers which the producer already scheduled
> before we sent the LateNotice won't cause further notices themselves,
> making things even more complex to solve? We may run into ever-increasing
> latency by definition.

Why would we? In this example there are two possible cases:
1) The producer gets several lateness notices before it gets the chance to increase its latency.
2) The producer gets a lateness notice, increases its latency, but then gets several more notices, which are now outdated.

Neither case is a problem, because both methods, BBufferConsumer::NotifyLateProducer() and BBufferProducer::LateNoticeReceived(), have an additional parameter "performanceTime" which contains the time at which the lateness notification happened. By looking at that, the producer can see...
in 1) that the notices belong together and only one adjustment is necessary,
in 2) that the second set of notices is outdated and can be ignored.

So, no run-away latency happens.
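To make that concrete, the producer only needs to keep one extra timestamp around. A sketch only: "MyProducer", "fOutput" and "fLastLatenessAdjustment" are invented members of a node deriving from BBufferProducer and BMediaEventLooper, while LateNoticeReceived(), RunMode(), EventLatency()/SetEventLatency() are the real hooks:

#include <BufferProducer.h>
#include <MediaEventLooper.h>
#include <TimeSource.h>

// Sketch: ignore lateness notices that are older than our last latency
// adjustment, so repeated notices never cause repeated increases.
void
MyProducer::LateNoticeReceived(const media_source& what, bigtime_t howMuch,
    bigtime_t performanceTime)
{
    if (what != fOutput.source)
        return;

    // Case 2): the notice was generated before our last adjustment took
    // effect, so it is outdated and can be ignored.
    if (performanceTime <= fLastLatenessAdjustment)
        return;

    if (RunMode() == B_INCREASE_LATENCY) {
        // Case 1): all notices newer than the last adjustment belong
        // together - adjust once and remember when we did it.
        SetEventLatency(EventLatency() + howMuch);
        fLastLatenessAdjustment = TimeSource()->Now();
    }
    // Other run modes would react differently (e.g. skip data instead).
}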

> Where's the guarantee that our LateNotice won't get blocked, possibly
> blocking us from receiving the next buffers? The LateNotice is sent
> through the same port which is used to read the BUFFER_RECEIVED message.

I don't see what you mean - who would block what here?

> On the other hand, we avoid the possibility of conflicting reads/writes.
> If the buffer is late at node X in the chain, it can behave differently
> depending on the various run_modes. It could be skipped, and in this
> case the chain returns immediately to the server.

How's this different from the way it currently works? If the run-mode is B_DROP_DATA, and there is lateness at node X, the node will drop the buffer and that's it.
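In code, the existing behaviour at node X amounts to a couple of lines in the node's event handling. A sketch, assuming a BMediaEventLooper-based node class "MyNode" (a made-up name); the lateness value is the one HandleEvent() is already given:

#include <Buffer.h>
#include <MediaEventLooper.h>
#include <TimedEventQueue.h>

// Sketch: "drop the buffer and that's it" under B_DROP_DATA.
void
MyNode::HandleEvent(const media_timed_event* event, bigtime_t lateness,
    bool realTimeEvent)
{
    if (event->type == BTimedEventQueue::B_HANDLE_BUFFER) {
        BBuffer* buffer = static_cast<BBuffer*>(event->pointer);
        if (lateness > 0 && RunMode() == B_DROP_DATA) {
            // Late and told to drop data: hand the buffer back to its
            // group and stop - nothing else has to be coordinated.
            buffer->Recycle();
            return;
        }
        // ...otherwise process the buffer and pass it on as usual...
    }
}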

> We could be in B_INCREASE_LATENCY; then we log what we need, let the
> buffer flow through the chain, and when the media_server evaluates the
> current status, it will increase the latency for this chain.

What would the media server do to the latencies differently from what the nodes currently do by themselves?

> Also, why should we receive the buffer, then schedule it, then handle it?

Well, why not? :-P Isn't that what nodes are supposed to do?
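Receiving, scheduling and handling is exactly the pattern BMediaEventLooper already gives us. Again a sketch, with "MyNode" as a placeholder for a node deriving from BBufferConsumer and BMediaEventLooper; the handling side then looks like the HandleEvent() in the sketch above:

#include <Buffer.h>
#include <MediaEventLooper.h>
#include <TimedEventQueue.h>

// Sketch: receive a buffer and schedule it in the node's own event queue;
// the looper thread hands it to HandleEvent() when it is due.
void
MyNode::BufferReceived(BBuffer* buffer)
{
    media_timed_event event(buffer->Header()->start_time,
        BTimedEventQueue::B_HANDLE_BUFFER, buffer,
        BTimedEventQueue::B_RECYCLE_BUFFER);
    EventQueue()->AddEvent(event);
}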

> Wouldn't it be better if the media_server scheduled it to be executed at
> some cycle, without any complication?

Asking your own question back: why?

Can you outline any specific situation where the server-centric approach would really improve things? Where it works better than the current approach (in principle, not because of bugs in our current implementation)? Any special load that it can handle but the current approach can't?


Best regards,
Julian
