[haiku-development] Re: MediaPlayer in latest build

  • From: David McPaul <dlmcpaul@xxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Tue, 15 Dec 2009 10:45:14 +1100

2009/12/15 Axel Dörfler <axeld@xxxxxxxxxxxxxxxx>:
> David McPaul <dlmcpaul@xxxxxxxxx> wrote:
>> 2009/12/15 Axel Dörfler <axeld@xxxxxxxxxxxxxxxx>:
>> >> The performance problem is the constant memcpy from reader buffer
>> >> to chunk buffer.  In theory it would be better if the cache could
>> >> just take ownership of each buffer given.
>> > Indeed, that's an API fault. I guess I will rework this as well,
>> > then.
>> That is rather a big change.  Is there a particular problem you are
>> trying to solve?  A situation where performance is an issue?
>
> A bad API that requires needless copies?

I look at it more from the perspective of how much time we have.  I
don't like the copies taking place, but I am not sure it is causing an
issue.  I think all the readers would need changing.
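For illustration, an ownership-transfer API along these lines would avoid the copy entirely. This is only a sketch in modern C++ terms (the names, and the use of std::unique_ptr, are mine, not the actual MediaExtractor interface, which predates C++11):

```cpp
// Sketch: a chunk cache that takes ownership of the reader's buffer
// instead of memcpy()ing it into a separately allocated chunk.
// All names are hypothetical.
#include <cstdint>
#include <deque>
#include <memory>
#include <vector>

struct Chunk {
	std::vector<uint8_t> data;	// owned outright, never copied again
};

class ChunkCache {
public:
	// The reader hands its buffer over; the cache just moves it in.
	void Push(std::unique_ptr<Chunk> chunk)
	{
		fChunks.push_back(std::move(chunk));
	}

	// The consumer takes ownership back out; returns nullptr when empty.
	std::unique_ptr<Chunk> Pop()
	{
		if (fChunks.empty())
			return nullptr;
		std::unique_ptr<Chunk> chunk = std::move(fChunks.front());
		fChunks.pop_front();
		return chunk;
	}

private:
	std::deque<std::unique_ptr<Chunk> > fChunks;
};
```

The price is exactly the API change discussed above: every reader would have to allocate a fresh buffer per chunk rather than reusing one internal buffer.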

>> > The only case you can (and should) try to improve is the latency
>> > involved with I/O.
>> Having a separate reader thread is what improves I/O latency, even
>> with a chunk cache size of 1.
>>
>> More cached chunks help when the reader is sometimes too slow but is
>> no help when the reader is always too slow.
>
> Sure, but that's not the issue I'm trying to solve. AFAIK there is no
> logic anywhere that would automatically wait with playback until enough
> buffers are precached.
> Before I rework the reader API, I will think about this some more, and
> present any plans on this list before I'm going to implement them.

Sure.

>> > In the case of audio, it also makes sense to read
>> > ahead the whole file in order to let the disk rest and eventually
>> > save
>> > some power.
>> Media files are too big to buffer in memory.
>
> Huh? Media files in general are too big? I could cache hundreds of mp3
> files in RAM if that would make any sense.

In general, the files I play are in the hundreds of MB.  1080p video runs into the GB.
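Back-of-the-envelope arithmetic (with assumed typical bitrates, not measured figures) illustrates the gap between the two cases:

```cpp
// Rough arithmetic with assumed bitrates: whole-file caching is cheap
// for mp3 but not for 1080p video.
double MediaFileBytes(double seconds, double bitsPerSecond)
{
	return seconds * bitsPerSecond / 8;
}

// A 90-minute album at 192 kbit/s mp3: ~130 MB -- fits in RAM easily.
// A 2-hour 1080p movie at ~8 Mbit/s: ~7.2 GB -- not realistically
// cacheable in memory on the machines being discussed.
```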

>> > Ie. it makes sense to batch several I/Os together - no
>> > matter if they end up as one large I/O or several smaller ones.
>> > Of course, at least in the case of hard drives, the kernel should
>> > do
>> > the precaching. However, that doesn't help you in low memory
>> > situations, or when doing network I/O.
>> I am thinking such things should be done in front of the reader by
>> the file layer or network layer.
>
> That would have pretty much the same effect, and could be done as well.
> It would have the added advantage that not everything would be
> invalidated by seeking, but only those parts which actually need to be
> refetched would need to be loaded.
> However, since there could be more than one track in the file, one
> wouldn't know exactly what data to prefetch - for media files that are
> made for streaming, this wouldn't pose a problem, but for those that
> aren't, different pieces of a file might need to be prefetched at the
> same time.
>
> In any case, the idea to put the cache in front of the reader sounds
> worth further investigation, and would completely negate the need of
> the MediaExtractor thread.
>
>> >> The problem of the mp3 reader needs to be solved at
>> >> the mp3 reader level.  Right now the mp3 reader reads a frame at a
>> >> time but really should do a larger read and break it up into
>> >> frames.
>> > That would indeed be a good idea, but wouldn't solve any latency
>> > problems at all.
>> But it would reduce I/O, which is the slowest part of the process
>> right now.
>> Can you describe the problem you are trying to solve?
>
> I'm trying to redesign a critical part of the media playback experience
> to ensure no dropped audio frames even if the device/system is
> (extremely) busy, as much as possible (video isn't as important).

My current feeling is that the audio driver is unable to feed the
audio device in time, and this is where we get glitches in the sound.

I am not sure whether the reading and decoding process is the problem.

But I mentioned caching based on time rather than chunk count or memory
size, so that we can guarantee, say, 1 second of audio/video playback
before playback fails.  A feedback mechanism could then increase or
decrease that window if the decode thread is working faster than the
reader thread.
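A minimal sketch of such a time-based cache (all names are hypothetical; bigtime_t stands in for Haiku's microsecond time type):

```cpp
// Sketch: gate playback on buffered *duration* rather than chunk count.
// Names are illustrative, not the actual MediaExtractor API.
#include <deque>

typedef long long bigtime_t;	// microseconds, as in Haiku

struct CachedChunk {
	bigtime_t duration;		// presentation time covered by this chunk
};

class TimedChunkCache {
public:
	TimedChunkCache(bigtime_t target)
		: fTarget(target), fBuffered(0) {}

	void Push(const CachedChunk& chunk)
	{
		fChunks.push_back(chunk);
		fBuffered += chunk.duration;
	}

	bool Pop(CachedChunk& chunk)
	{
		if (fChunks.empty())
			return false;
		chunk = fChunks.front();
		fChunks.pop_front();
		fBuffered -= chunk.duration;
		return true;
	}

	// Playback waits until e.g. 1 second is buffered; a feedback
	// mechanism could grow or shrink fTarget depending on how the
	// decode thread keeps up with the reader thread.
	bool ReadyForPlayback() const { return fBuffered >= fTarget; }

private:
	std::deque<CachedChunk> fChunks;
	bigtime_t fTarget;
	bigtime_t fBuffered;
};
```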

> Also, I want to reduce the power consumption during media replay, and
> batching I/Os as much as possible makes a measurable difference
> (especially for audio where you could park the disk for several
> minutes).

Ok.
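To make the batching idea concrete: a read-ahead layer in front of the parser could fetch one large block per disk access and serve frame-sized reads from memory, so the disk only spins up once per block. A sketch (class name and block size are made up for illustration):

```cpp
// Sketch: batch disk I/O by reading a large block at once and serving
// small frame-sized reads from the in-memory copy.
#include <algorithm>
#include <cstdio>
#include <cstring>
#include <vector>

class BatchingReader {
public:
	BatchingReader(FILE* file, size_t blockSize = 256 * 1024)
		: fFile(file), fBlock(blockSize), fPos(0), fFill(0) {}

	// Small reads (e.g. one mp3 frame) usually hit the buffered block;
	// fread() is only called once per blockSize bytes.
	size_t Read(void* buffer, size_t size)
	{
		size_t copied = 0;
		while (copied < size) {
			if (fPos == fFill) {
				fFill = fread(fBlock.data(), 1, fBlock.size(), fFile);
				fPos = 0;
				if (fFill == 0)
					break;		// EOF or error
			}
			size_t chunk = std::min(size - copied, fFill - fPos);
			memcpy((char*)buffer + copied, fBlock.data() + fPos, chunk);
			fPos += chunk;
			copied += chunk;
		}
		return copied;
	}

private:
	FILE* fFile;
	std::vector<char> fBlock;
	size_t fPos;
	size_t fFill;
};
```

The same shape would work for the network layer, where the latency win per batched request is even larger.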

>> Your original commit message mentioned that the hiccups still occur?
>
> Yes, this has little to do with the hiccups that occur all the time.
> It's more about those that would still happen in critical situations
> even if the system works otherwise fine.

What was the effect of my changes?  Did it make the hiccups worse?

-- 
Cheers
David
