[haiku-commits] Re: haiku: hrev49508 - src/kits/media

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Fri, 21 Aug 2015 00:22:29 +0200

On 19.08.2015 15:58, Dario Casalinuovo wrote:

From what I can see, Be's released nodes expect that lateness is the
dequeue lateness...

If you have concrete pointers, feel free to post them. However...

The first example that comes to my mind is the calculation done in the
LoggingConsumer. It ignores lateness but does the same calculation I'm
doing in my implementation of BMediaEventLooper to check whether it's late.

If anything, that only contradicts your point. Why compute a value that is already passed in as an argument?

The same formula is shown more than once in the BeBook.

Unless one of these occurrences also mentions that this is how the lateness value passed to HandleEvent() is computed, that doesn't help either.

That's right, but I still can't see why we should have a scheduling
latency if we don't use it in a meaningful way.

The thing is, the scheduling latency is not particularly useful,
save for guessing some initial total node latency. Afterward only
the total node latency is relevant. You can't really separate its
components -- scheduling and event latency -- anyway. E.g. heavy
system load may cause the thread to wake up slower, be preempted
more often, and generally do computations more slowly due to
processor cache effects. It is a fairly useless exercise to try and
separate the two conceptual latency components.

Well, at this point I would change the lateness parameter to a clearer
name, such as enqueueLatency.


I don't think *Latency is a fitting name. Since the whole queue thing is an implementation detail, I'd rename it to "arrivalLateness" if I had to. It would be even better if we could actually get the time of arrival at the port instead of the time the thread read the message from the port.

It also doesn't matter which component you increase, since the net
effect will be exactly the same (handling an event will start that
much earlier).

Let's consider for simplicity that all nodes have the same jitter, say X,
in such situations. I'm observing that instead of increasing every node's
latency by X, the nodes can recover better if they use some fraction of
it, taking other things into account as well.

That doesn't sound plausible. If a node detects that it is too slow (i.e. it took more time to process a buffer than its current latency allows for) and you distribute that lateness over all nodes, you will give the node to blame only a fraction of the additional time it needs, while the other nodes get additional time that they possibly don't need. In particular, all additional latency given to upstream nodes will evaporate, since each node waits until the time it is supposed to process the buffer (according to the node's (total downstream) latency) anyway.

Using a useful lateness semantics will 1. fix the actual problem and
2. be a fairly simple change. It will not require introducing
additional API. The "flexibility improvements" you mention are very
vague. At least the automatic scheduling adjustment part you
mentioned won't improve anything. It will just make things more
complicated, since you'll have to additionally manipulate the
lateness value to avoid double adjustments of the total node latency.

Well, not really. If we adjust the scheduling latency using the enqueue
lateness, then we can subtract this value and try to report only the
part needed for the late notice.

Increasing the latency of a node when the previous (upstream) node was too slow doesn't make much sense. The previous node will continue to be too slow, since its latency remains the same.

Comparing the time difference between actual and intended wake-up time with the scheduling latency and using it to adjust the latter does make some sense, but ... <cf. quoted paragraph from my previous mail>

Sorry, this is all so vague that it's hard to say anything about it.
You imply that we deal with complex graphs.

Isn't the actual media_kit something that supports complex graphs? It can
theoretically have any number of nodes with any kind of N:N connections.
The fact that Haiku doesn't have applications using it extensively
doesn't mean that we shouldn't go in the direction that almost all
general-purpose realtime processing systems are going. We have to
attract developers; otherwise what's the point of Haiku?

AFAIK the media kit supports arbitrary DAGs and I don't see why it shouldn't work fine with those as is (modulo bugs). I don't think it can deal with cycles. TBH I have no clue what requirements professional audio workstations would have in this respect. I believe the professional tools on other platforms use their own application-internal framework anyway, so they probably would only have a few media nodes to interact with the hardware nodes (and maybe the mixer node).

However, the most complex thing I see is the audio mixer node, which
has multiple inputs and multiple outputs. Pretty much everything
else is basically just simple chains of nodes. I don't see what
graph algorithms you intend to use and for what (concrete) purpose.

Since the media_kit isn't as smart at such things as JACK or
AudioUnits, I doubt we'll see any. Just remember how many companies such
as Steinberg were interested in using BeOS for DAW processing, but
gave up. Isn't that a reason to leave the media_kit as something for
desktop purposes (MediaPlayer etc.) but provide an improved kit for

I believe the most commonly accepted reading of BeOS history is that Be's "focus shift" drove those companies away, not a lacking media kit.

That aside, I have to admit that I don't have the faintest idea how the popular OSS audio frameworks compare to the media kit, nor do I know what is required (and currently missing) to enable which type of audio applications. If you do and you see insurmountable limitations, feel free to start a new media kit -- or, to save time, just port an OSS framework that satisfies your needs.

CU, Ingo
