[haiku-development] Re: Ideas related to the media_kit

  • From: Dario Casalinuovo <b.vitruvio@xxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Tue, 17 Nov 2015 15:16:13 +0100

Well, first, I wouldn't expect that n in this case is large. Detecting
that a buffer is late at the consumer happens quickly, and the
documentation says that a producer is to handle a lateness notice
immediately.
And even if a few late buffers pile up, then the producer will (and
rightfully so) get lateness notices for them. As soon as it tunes up its
latency (or whatever its run-mode dictates), things will run in order again.
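(For illustration, a rough sketch of what that consumer-side detection could
look like; the class MyConsumer and the member fInput are assumptions for the
example, not code from the tree:)

// MyConsumer derives from BBufferConsumer and BMediaEventLooper;
// fInput is the media_input stored when the connection was made.
void
MyConsumer::BufferReceived(BBuffer* buffer)
{
    bigtime_t now = TimeSource()->Now();
    bigtime_t performanceTime = buffer->Header()->start_time;

    if (performanceTime < now) {
        // The buffer missed its performance time: tell the producer how
        // late it is, tagged with the current performance time.
        NotifyLateProducer(fInput.source, now - performanceTime, now);
    }

    // Queue the buffer to be handled at its performance time.
    EventQueue()->AddEvent(media_timed_event(performanceTime,
        BTimedEventQueue::B_HANDLE_BUFFER, buffer,
        BTimedEventQueue::B_RECYCLE_BUFFER));
}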


This is why the media_kit can be defined as soft real time. The latency could
be set to the right value as early as the next cycle, so why add the
possibility that more delay can happen? Even if the implementation tries to
recover, there's an undefined amount of behavior to take into account.


Why would we? In this example there are two possible cases:
1) The producer gets several lateness notices before it gets the chance to
increase its latency.

2) The producer gets a lateness notice, increases its latency, but then
gets several more notices, which are now outdated.


There's no certainty that we can predict this. Why do we want to leave open
the possibility that this happens?


Both cases are not a problem, because both methods
BBufferConsumer::NotifyLateProducer() and
BBufferProducer::LateNoticeReceived() have an additional parameter
"performanceTime" which contains the time at which the lateness
notification happened. By looking at that, the producer can see...
in 1) that the notices belong together and only one adjustment is
necessary
in 2) that the second set of notices is outdated and can be ignored.
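(A rough sketch of such a producer-side check; the class MyProducer and the
members fOutput, fLatency and fLastLatencyChange are assumptions for the
example, not code from the tree:)

// Collapse a burst of late notices into a single latency adjustment by
// ignoring notices that predate the last change.
void
MyProducer::LateNoticeReceived(const media_source& what, bigtime_t howMuch,
    bigtime_t performanceTime)
{
    if (what != fOutput.source)
        return;

    // Notices generated before our last adjustment describe a problem
    // that has already been corrected, so ignore them.
    if (performanceTime < fLastLatencyChange)
        return;

    switch (RunMode()) {
        case B_INCREASE_LATENCY:
            fLatency += howMuch;
            SetEventLatency(fLatency);
            fLastLatencyChange = TimeSource()->Now();
            break;

        case B_DROP_DATA:
            // Nothing to adjust; late buffers are simply dropped.
            break;

        default:
            // Other run modes react differently (e.g. lowering quality
            // in B_DECREASE_PRECISION); omitted in this sketch.
            break;
    }
}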


Which parameter will you take into account to skip the next late notices? What
will allow you to decide which notice is the right one?


How's this different from the way it currently works?


The difference is that this decision is taken server side, and there's a
guarantee that any status change will *always* be taken into account before
the next cycle is executed.


If the run-mode is B_DROP_DATA, and there is lateness at node X, the node
will drop the buffer and that's it.
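(For comparison, a minimal sketch of that dropping behaviour; MyNode and its
HandleBuffer() helper are hypothetical, the lateness value would come from
BMediaEventLooper::HandleEvent():)

void
MyNode::HandleBuffer(BBuffer* buffer, bigtime_t lateness)
{
    if (lateness > 0 && RunMode() == B_DROP_DATA) {
        // The buffer is already late: recycle it instead of processing it.
        buffer->Recycle();
        return;
    }

    // ...process or forward the buffer as usual...
}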


What I've been trying to point out is that there's nothing in the current
implementation that can't be done in the cycle-based one, while there's a lot
of undefined behavior in the node-centric one.


We could be in B_INCREASE_LATENCY, then log what we need, let the buffer flow
through the chain, and when the media_server evaluates the current status it
will increase the latency for this chain.


What would the media server do differently to the latencies than what the
nodes currently do by themselves?


As I said, the difference is that we can adjust the behavior of every single
node while being sure that any decision we take to solve the problem will take
effect at the next cycle. This, for example, removes by design the need for a
BMediaRoster::RollNode() call. Beyond any detail we can discuss, I hope you
can see how this is bad from a conceptual point of view, even if "it works".
It can work better.



Also, why should we receive the buffer, then schedule it, then handle it?


Well, why not? :-P Isn't that what nodes are supposed to do?


Why not just handle this server side and let the media_server decide at which
moment the thread will be activated to process the buffer?


Wouldn't it be better if the media_server scheduled it to be executed at some
cycle, without any complication?


Asking your own question: why?

Can you outline any specific situation where the server-centric approach
would really improve things?


I've already pointed out how making everything work in an atomic way can
improve the design by excluding certain situations from happening.


Where does it work better than the current approach (in principle, not
because of bugs in our current implementation)?


I've previously pointed out the case of buffers that are already scheduled,
and how flawed the LateNotice protocol is; even if you can work around it, it
looks to me like a hack in itself. This is just a small part of what happens
behind the scenes in the media_kit. Since this is the pattern used across the
entire media_kit, you can expect situations like this to happen more
frequently than you might predict.

Any special load that it can handle but the current approach can't?


JACK, which uses a design like this, can achieve zero-latency processing with
negligible throughput loss. The media_kit, for the reasons we already
discussed, can't achieve this.

--
Best Regards,
Dario
