[haiku-commits] Re: haiku: hrev49503 - src/kits/media src headers/os/media
- From: Julian Harnath <julian.harnath@xxxxxxxxxxxxxx>
- To: <haiku-commits@xxxxxxxxxxxxx>
- Date: Mon, 3 Aug 2015 18:43:08 +0200
Hey Dario,
On 03.08.2015 13:51, Dario Casalinuovo wrote:
> The algorithm flow is far more understandable in my version than
> before, due to its linearity.
I don't find the old algorithm particularly non-linear. It also doesn't
make much sense IMO to reason in big-O notation here -- it's an event
loop, and most of the time it will simply be sleeping. Also, the inner
loop in the old implementation is broken out of whenever an event needs
attention.
>> You even misuse variables like "lateness", which should be a delta,
>> in whichever way you feel.
> What do you mean by that?
You assign the future timestamp from waitUntil to lateness, and only at
the end actually compute the lateness from it by subtracting the
current time. That doesn't make much sense conceptually and is
confusing to the reader.
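To make that concrete, here is a minimal sketch (the names are mine,
not the actual kit code) of keeping the absolute deadline and the
lateness delta separate, and computing the delta only at dispatch time:

```cpp
#include <cstdint>

typedef int64_t bigtime_t; // microseconds, as in the Haiku headers

// Keep the absolute wake-up deadline in its own variable (waitUntil)
// and derive the lateness delta only at the moment we actually
// dispatch. Positive result: we woke up after the deadline.
bigtime_t LatenessAt(bigtime_t waitUntil, bigtime_t now)
{
	return now - waitUntil;
}
```

So e.g. a deadline of 1000 and a wake-up at 1250 gives a lateness of
250; waking up early gives a negative delta.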
> Because the code tried to solve a linear problem with nested loops.
> The O notation denotes an upper bound, not the average case. If we
> are pathologically late, the function may go and handle events
> without polling the port first; then at the next iteration we have
> to redo the queue calculations, but there are no events, so we read
> the port. In the meantime more events may have arrived, and we end
> up in a vicious circle doing a quadratic number of iterations where
> a linear number would do. In my algorithm, if there are both
> messages to read and events queued, the ratio is 1:1: we always
> read the port and handle one event.
For reference, an alternative solution to this problem is in this patch
I wrote some time ago:
https://github.com/orangejua/haiku/commit/e34a7bb1583ace314e9c9132c6ec1534f94bca9c
It simply checks for messages via WaitForMessage(0) in case of late events.
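The idea boils down to something like this toy simulation (illustrative
names and data structures only -- the real code talks to a port, not a
std::queue): when the next event is already late, still poll for
messages with a zero timeout, so neither side starves the other.

```cpp
#include <cstdint>
#include <queue>

typedef int64_t bigtime_t;

struct Loop {
	std::queue<int> events;    // queued media events (illustrative)
	std::queue<int> messages;  // pending port messages (illustrative)
	int handledEvents = 0;
	int handledMessages = 0;

	// Stand-in for WaitForMessage(timeout): with a timeout of 0 it
	// only drains a message that is already waiting, never blocks.
	bool WaitForMessage(bigtime_t timeout)
	{
		(void)timeout;
		if (messages.empty())
			return false;
		messages.pop();
		++handledMessages;
		return true;
	}

	// One iteration when the front event is already late: poll the
	// port first (non-blocking), then handle exactly one event --
	// a 1:1 ratio between port reads and event handling.
	void HandleLateIteration()
	{
		WaitForMessage(0);
		if (!events.empty()) {
			events.pop();
			++handledEvents;
		}
	}
};
```

With three late events and three pending messages, three iterations
drain both queues completely; the loop never goes quadratic.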
> I'm sorry, but actually this way we do fewer calculations; if there
> is another performance opportunity, it is to reverse the current
> ordering of instructions: wait for messages first, then do the
> calculations.
While it's always good to aim for performance, I think a few CPU cycles
more or less in this loop won't make much of a difference. There's not
much work done in it anyway. Timing-wise it will certainly be dominated
by the scheduling latency and by the work done by the node itself.
> This might also be intended as a continuation of the previous
> arguments. From the BeBook, BMediaEventLooper::HandleEvent:
> "lateness indicates how late the event is"
> It's no longer the lateness from when the client called AddEvent, as
> before; it's how late we dispatched the event.
> Also, on a side note, this field wasn't present in BeOS -- why?
Not sure why BeOS didn't have it -- but I agree with Stephan that the
enqueue time does make sense to use. When events are running late, the
event producer must start enqueueing them earlier. If there is
additional jitter between the enqueue and the dispatch it's better to
have that as part of the lateness, so the producer can compensate for it
by producing earlier.
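As an illustration of that compensation, here is a hypothetical helper
(this is not an actual BMediaNode API, just a sketch of the idea)
showing how a producer could fold reported lateness back into its
enqueue lead time:

```cpp
#include <cstdint>

typedef int64_t bigtime_t;

// Hypothetical: a producer that is told how late its events arrived
// can enqueue that much earlier from then on, so jitter between
// enqueue and dispatch is absorbed on the producing side.
bigtime_t AdjustedLeadTime(bigtime_t currentLeadTime,
	bigtime_t reportedLateness)
{
	// Only grow the lead time on positive lateness; events that
	// arrived early need no adjustment.
	return reportedLateness > 0
		? currentLeadTime + reportedLateness
		: currentLeadTime;
}
```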
I recommend taking a look at bug ticket #7285 [0], especially bonefish's
comments about the seamlessness constraint which needs to be met here.
(You can also see from the comments there that I was initially
skeptical about the enqueue time as well, but later changed my mind.)
> Let me explain:
> (AddEvent, time 1, performance time 5) -> we record the event queued
> at time 1
> (HasEvent(), time approx 1) -> wait until 5 - latency
> (WaitMessage(), time approx 5 - latency) -> we'll dispatch the message
> At this point, if we were using the enqueue time, what value would we
> return?
Well, in your example there is no lateness, so we'd return a lateness
of 0. Not using the enqueue time becomes problematic when the
BMediaEventLooper gets preempted in the wrong place, as explained by
bonefish in comment 8 of [0].
[0]
https://dev.haiku-os.org/ticket/7285
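As a rough sketch of the timeline discussed above -- assuming (my
assumption, not the real DispatchEvent formula) that lateness is
measured against the intended dispatch point, i.e. performance time
minus latency:

```cpp
#include <cstdint>

typedef int64_t bigtime_t;

// Illustrative only: how late we are relative to the point where the
// event was supposed to be dispatched (performanceTime - latency).
bigtime_t Lateness(bigtime_t performanceTime, bigtime_t latency,
	bigtime_t now)
{
	return now - (performanceTime - latency);
}
```

In the example above (performance time 5, latency 1) a wake-up at 4 is
right on time, lateness 0. If the looper is preempted and only runs at
7, the lateness is 3 -- and that is exactly the jitter the producer
should be able to see and compensate for.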
Regards,
--
Julian