[haiku-commits] Re: r38439 - haiku/trunk/src/apps/mediaplayer

  • From: Stephan Assmus <superstippi@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Mon, 30 Aug 2010 12:53:48 +0200

On 30.08.2010 12:34, Axel Dörfler wrote:
> Stephan Assmus <superstippi@xxxxxx> wrote:
>> On 30.08.2010 11:59, Axel Dörfler wrote:
>>> superstippi@xxxxxx wrote:
>>>> Log:
>>>> For videos that are longer than 240 frames, we anticipate
>>>> that the graphical precision of the seeking slider is not
>>>> enough to seek to individual frames on purpose. So we filter
>>>> the requested seeking position to keyframes if there is a
>>>> video track. The difference in snappiness when seeking is
>>>> like night and day.
>>> Shouldn't you check how far away the keyframe is from the
>>> original target? I mean, it's only important for bad quality
>>> movies with larger time spans between keyframes, but the
>>> difference might be noticeable.
>> What would be the gain? If the seek slider is imprecise anyway,
>> why make the code more complicated and potentially slower?
>
> If I have a movie that has a keyframe about every four seconds, I
> still might want to seek between them, and I might easily be able
> to do so via the slider; the 240 frame heuristic is pretty weak at
> that, I think.

Granted, it should be smarter. It's a quick way to make sure one can still seek short animation clips by single-frame stepping... :-) Pretty much the only situation where that's useful. The frame count should really be derived from the actual pixel width of the seek slider...
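For illustration, a minimal sketch of what I mean: snap the requested frame to the nearest keyframe, but only when the video has more frames than the slider has pixels, so exact seeking is preserved for short clips. All names here (SnapToKeyframe, the keyframe list) are hypothetical, not the actual MediaPlayer code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch, not MediaPlayer API: snap a requested frame to
// the nearest keyframe, but only when the slider cannot address
// individual frames anyway (more frames than slider pixels).
int64_t SnapToKeyframe(int64_t requestedFrame, int64_t frameCount,
	int32_t sliderPixelWidth, const std::vector<int64_t>& keyframes)
{
	if (frameCount <= sliderPixelWidth || keyframes.empty())
		return requestedFrame;
			// slider is precise enough; seek to the exact frame

	// Find the first keyframe at or after the requested frame...
	auto it = std::lower_bound(keyframes.begin(), keyframes.end(),
		requestedFrame);
	if (it == keyframes.begin())
		return *it;
	if (it == keyframes.end())
		return keyframes.back();

	// ...and pick whichever neighboring keyframe is closer.
	int64_t after = *it;
	int64_t before = *(it - 1);
	return (requestedFrame - before <= after - requestedFrame)
		? before : after;
}
```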

> If I understood the change correctly, you will now be able to seek
> to keyframes only if the video has more than 240 frames, which
> usually corresponds to about 10 seconds of video.
> Can't we use the number of keyframes vs. the number of actual
> graphical seek positions as a base for this (useful) optimization
> instead?

At the moment, seeking to a position that requires many frames to be decoded to reach it doesn't work very well (since MediaPlayer tries to avoid the artifacts of seeking to non-keyframes). The problem is that the advertised node latency is completely violated in this situation. When watching a movie, one or even two seconds of imprecision don't really matter much at all, but sudden huge delays (user noticeable), during which nodes run out of buffers (an implementation defect), are a problem. So this really is quite a useful solution for the moment.

To fix the media node problems, one would have to introduce yet more threads for decoding, so that fetching the next frame or chunk of audio from the media node producer threads has a guaranteed maximum latency. I don't really like to complicate things even more; there are enough threads running as it is.
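That said, the density check you suggest would be cheap to compute. A minimal sketch, assuming the frame and keyframe counts are already known (the function name is hypothetical): snap only when one slider pixel already spans at least one average keyframe interval, so snapping loses no precision the slider actually offers.

```cpp
#include <cstdint>

// Hypothetical sketch of the density-based heuristic from the thread:
// compare keyframe spacing against the slider's pixel resolution
// instead of using a fixed 240-frame cutoff.
bool ShouldSnapToKeyframes(int64_t frameCount, int64_t keyframeCount,
	int32_t sliderPixelWidth)
{
	if (keyframeCount <= 0 || sliderPixelWidth <= 0)
		return false;

	// Frames covered by a single pixel of the seek slider.
	double framesPerPixel = double(frameCount) / sliderPixelWidth;
	// Average distance between keyframes.
	double framesPerKeyframe = double(frameCount) / keyframeCount;

	// If one pixel spans at least one keyframe interval, the slider
	// cannot target frames between keyframes anyway, so snapping
	// costs the user no precision they actually have.
	return framesPerPixel >= framesPerKeyframe;
}
```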

Best regards,
-Stephan
