[haiku-development] Media Kit design, BeOS message passing and future of it

  • From: Barrett <b.vitruvio@xxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Sun, 5 Jan 2014 14:54:42 +0100

Hello, while studying how the media_kit works I read an old thread about
BeOS design flaws. In that thread a BeOS developer (JBQ) explained that,
for professional audio, the BeOS kernel added extra latency because of its
message passing, and that this latency was a cause of glitches. I'm
currently reading the media kit and media server code, trying to
understand how it's implemented in Haiku.
From what I've read, in Haiku buffers are never copied when they are
passed between media_nodes: only a reference to the underlying area, which
is shared between apps, is sent over the port. So I suspect this isn't the
source of that latency.
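
To make that concrete, here is a minimal sketch of the scheme as I
understand it; this is not the actual media_kit code, and the descriptor
layout and function names are mine, only the kernel calls (write_port,
read_port, clone_area) are real:

#include <OS.h>

// Assumed descriptor layout, for illustration only: the payload never
// crosses the port, only this small record does.
struct buffer_descriptor {
    area_id area;    // shared area holding the payload
    size_t  offset;  // offset of this buffer inside the area
    size_t  size;    // payload size in bytes
};

// Producer side: the data was already written into the area; the kernel
// only copies the few bytes of the descriptor.
status_t
SendBuffer(port_id port, area_id area, size_t offset, size_t size)
{
    buffer_descriptor descriptor = { area, offset, size };
    return write_port(port, 'bufr', &descriptor, sizeof(descriptor));
}

// Consumer side: clone the area on first use, then reuse the mapping for
// every later descriptor that references the same area.
void*
ReceiveBuffer(port_id port, void** mappedBase)
{
    buffer_descriptor descriptor;
    int32 code;
    if (read_port(port, &code, &descriptor, sizeof(descriptor)) < B_OK)
        return NULL;
    if (*mappedBase == NULL && clone_area("buffer clone", mappedBase,
            B_ANY_ADDRESS, B_READ_AREA, descriptor.area) < 0)
        return NULL;
    return (uint8*)*mappedBase + descriptor.offset;
}

If the kit really works like this, the per-buffer cost is one small,
fixed-size port message rather than a copy of the audio data itself.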

My question is: how does the Haiku implementation behave in this regard?
When the media_kit was developed, was any solution considered to remove
the cause of this latency by design?

I'm doing this research because I'm trying to find a good topic for my
university thesis. This is just an idea, and I may end up with something
totally different, but it would be interesting to see whether the general
design of the kit could be maintained while changing the way it works, so
that it can support any kind of data and rely on a real-time message
passing model [1] [2].

Another addition would be to remove all those messy input/output,
source/destination and media_format structs and introduce classes such as
BMediaEndPoint, BMediaInput, BMediaOutput (both derived from the first)
and BStreamFormat (see the sketch below). The idea comes from the EndPoint
code sample from the Be era, but the result would be to modify the entire
API to rely on these classes and remove the structs, unlike what is done
in that sample. At that point, part of the negotiation and callback
protocol could be automated and moved into the EndPoint class. The reality
is that there are already a lot of such classes out there; the first two
that come to mind are Cortex's Connection class and the system mixer's
Input/Output classes. I'm sure I've seen more, and they are all basically
the same thing, expressed differently depending on the app's needs. So in
my personal vision, a hypothetical future kit should supply something like
that.
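
To give an idea of what I mean, here is a purely hypothetical sketch of
such an API; none of these classes exist today, and the names and methods
are only my guess at what the EndPoint idea could look like (BBuffer is
the existing Media Kit class):

#include <Buffer.h>

class BStreamFormat {
public:
    // Would replace the raw media_format struct, carrying the media
    // type, encoding and negotiated constraints behind accessors.
};

class BMediaEndPoint {
public:
    virtual status_t    SetFormat(const BStreamFormat& format);

    // The connection/negotiation protocol that every node currently
    // re-implements by hand would live here, once, for all nodes.
    virtual status_t    Connect(BMediaEndPoint* other);
    virtual status_t    Disconnect();
};

class BMediaInput : public BMediaEndPoint {
public:
    // Called when a buffer arrives, instead of each node decoding
    // port messages itself.
    virtual void        BufferReceived(BBuffer* buffer) = 0;
};

class BMediaOutput : public BMediaEndPoint {
public:
    // Hands a buffer to the connected input.
    virtual status_t    SendBuffer(BBuffer* buffer);
};

With something like this, Cortex's Connection and the mixer's Input/Output
classes could become thin wrappers around the kit's own endpoints, or
disappear entirely.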

What I know is that Be Inc., before going under, was developing a new
version of the kit. Unfortunately I can't find any information about it,
but based on the words of the previously mentioned developer and some
comments I found in the newsletters and code samples, I think they were
moving past this limitation. The only thing I can imagine is that they
were going with a callback-based system and shared memory.

I would just like to hear what you think about it, and if you could give
me some advice, I'd be glad.

Best Regards

[1] www4.in.tum.de/publ/papers/broyGrosuKleinFME97.pdf
[2] www.cse.msstate.edu/~yogi/dandass-mpirt-2004.pdf
