[haiku-development] Re: MediaPlayer in latest build

  • From: David McPaul <dlmcpaul@xxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Mon, 14 Dec 2009 07:18:28 +1100

2009/12/14 Axel Dörfler <axeld@xxxxxxxxxxxxxxxx>:
> David McPaul <dlmcpaul@xxxxxxxxx> wrote:
>> Ok I have committed a fix for the ChunkCache to make video work again.
>>  I am surprised audio worked.
>
> Didn't see this mail before answering your commit message; I guess
> you can consider most questions answered, thanks!
>
>> b) even if we modified the code to search for a chunk large enough, we
>> still might not find one, and then again no memory would be available.
>>
>> I understand you were trying to allow the cache to use all the memory
>> pool assigned to it but unless we want to make the cache a lot more
>> complicated I don't see this working very well.
>>
>> The cache is there to smooth over the reader being unable to keep up
>> with the decoder. I would prefer the limit to be based on time (i.e.
>> we cache x seconds of chunks), but a limited number works fine.
> [...]
>> I might also remove the 2048 rounding on the buffers, as I don't think
>> it is valid anymore; the buffers can just match what is read in.
>
> I've merged this part together, as I think this is where the problem in
> your implementation lies now: the performance-critical parts of the
> chunk cache (besides the disk I/O itself) are now in rather bad shape,
> as they do (unchecked) memory allocations, and are using non-realtime
> memory.

It's a queue of 10 constantly changing pointers; the buffers
themselves are in rtm memory.  STL containers do allow you to pass in
your own allocator, so we could put the queue itself in the rtm pool
as well, but it hardly seems worth it.

I would prefer to use STL containers rather than private kernel structures.
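
Just to illustrate what "put it in the rtm pool" would look like, here
is a rough sketch (I am assuming the rtm_alloc()/rtm_free() calls from
RealtimeAlloc.h behave as I remember, the allocator, pool and chunk
names are invented, and it is written against a modern, C++11-style
STL):

#include <cstddef>
#include <deque>
#include <new>

#include <RealtimeAlloc.h>

// Minimal allocator that hands out memory from an rtm pool, so an STL
// container using it would live in realtime memory as well.
template<typename T>
struct RTMAllocator {
	typedef T value_type;

	RTMAllocator(rtm_pool* pool = NULL) : fPool(pool) {}

	template<typename U>
	RTMAllocator(const RTMAllocator<U>& other) : fPool(other.fPool) {}

	T* allocate(size_t count)
	{
		void* memory = rtm_alloc(fPool, count * sizeof(T));
		if (memory == NULL)
			throw std::bad_alloc();
		return static_cast<T*>(memory);
	}

	void deallocate(T* pointer, size_t /*count*/)
	{
		rtm_free(pointer);
	}

	rtm_pool* fPool;
};

template<typename T, typename U>
bool operator==(const RTMAllocator<T>& a, const RTMAllocator<U>& b)
{
	return a.fPool == b.fPool;
}

template<typename T, typename U>
bool operator!=(const RTMAllocator<T>& a, const RTMAllocator<U>& b)
{
	return !(a == b);
}

// The queue of chunk pointers would then be declared along the lines of:
// std::deque<chunk_buffer*, RTMAllocator<chunk_buffer*> >
//	queue((RTMAllocator<chunk_buffer*>(fRealTimePool)));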

The performance problem is the constant memcpy from the reader buffer
into the chunk buffer.  In theory it would be better if the cache could
simply take ownership of each buffer it is given.
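
To make the copy vs. ownership difference concrete, a sketch (here
"StreamReader" just stands in for the real reader class so the snippet
is self-contained, and GetNextChunkOwned() does not exist today; it is
the kind of call I would like to have):

#include <string.h>

#include <MediaDefs.h>
#include <RealtimeAlloc.h>

struct StreamReader {
	// roughly the shape of the existing call: the reader keeps ownership
	virtual status_t GetNextChunk(void* cookie, const void** chunkBuffer,
		size_t* chunkSize, media_header* mediaHeader) = 0;
	// hypothetical: the buffer becomes the caller's to keep and free
	virtual status_t GetNextChunkOwned(void* cookie, void** chunkBuffer,
		size_t* chunkSize, media_header* mediaHeader) = 0;
};

// What the cache does now, conceptually: one allocation plus one memcpy
// for every chunk before it can be queued.
static void*
CopyNextChunk(StreamReader* reader, void* cookie, rtm_pool* pool,
	size_t* _size, media_header* _header)
{
	const void* readerBuffer;
	if (reader->GetNextChunk(cookie, &readerBuffer, _size, _header) != B_OK)
		return NULL;

	void* copy = rtm_alloc(pool, *_size);
	if (copy == NULL)
		return NULL;
	memcpy(copy, readerBuffer, *_size);
	return copy;
}

// With ownership transfer the copy disappears: the cache only stores the
// pointer and frees it once the decoder has consumed the chunk.
static void*
TakeNextChunk(StreamReader* reader, void* cookie, size_t* _size,
	media_header* _header)
{
	void* buffer;
	if (reader->GetNextChunkOwned(cookie, &buffer, _size, _header) != B_OK)
		return NULL;
	return buffer;
}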

> I think the solution is to only try to cover half (or two thirds or
> something like that) of the complete chunk buffer at any time. A fixed
> number of chunks doesn't really make any sense, since a chunk can be
> arbitrarily sized.
> Furthermore, a time based cache doesn't really make much sense either,
> as the thing you are most interested in is avoiding disk I/O.

Hmm, I don't see the purpose of the ChunkCache as buffering as much
of the stream in memory as it can.

I think it is just there in case of a drop in I/O performance (less so
for disk I/O, but more useful for network I/O).  Once the cache is
filled, each decoder GetNextBuffer call is matched by a reader
GetNextBuffer to replace the removed chunk.  It is the same amount of
disk I/O, just x chunks ahead.
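
In other words, the steady state looks roughly like this (a sketch
with invented names; locking and error handling left out):

#include <cstddef>
#include <deque>

#include <MediaDefs.h>

struct cached_chunk {
	void*			buffer;
	size_t			size;
	media_header	header;
};

class ChunkCacheSketch {
public:
	// fixed depth: the cache only ever runs this many chunks ahead
	static const size_t kCachedChunkCount = 10;

	status_t GetNextChunk(const void** _buffer, size_t* _size,
		media_header* _header)
	{
		if (fChunks.empty())
			return B_ERROR;			// the reader could not keep up

		cached_chunk chunk = fChunks.front();
		fChunks.pop_front();
		*_buffer = chunk.buffer;
		*_size = chunk.size;
		*_header = chunk.header;

		// Replace the chunk just handed out: the same amount of disk
		// I/O as without the cache, just kCachedChunkCount chunks
		// ahead of the decoder.
		_ReadOneChunkFromReader();
		return B_OK;
	}

private:
	void _ReadOneChunkFromReader();	// appends one chunk to fChunks

	std::deque<cached_chunk> fChunks;
};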

>> The FindKeyFrame method does not change the buffer position, so there
>> is no need to empty the cache.
>
> Ah, makes sense again. We might still want to lock there, or are readers
> supposed to do their own locking?

If they need to.  The readers I am familiar with implement this call
without doing any I/O or affecting the stream in any way.

I think you are seeing the ChunkCache as a mechanism to buffer up
small chunks in memory and somehow reduce disk I/O, and I don't think
that will work.  The mp3 reader's problem needs to be solved at the
mp3 reader level: right now it reads one frame at a time, but it
really should do a larger read and break it up into frames.
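
Something along these lines is what I have in mind (a sketch;
BPositionIO::ReadAt() is the only real call here, the block size and
the struct/helper names are made up):

#include <DataIO.h>

// Read a large block in one go and slice individual frames out of it in
// memory, instead of issuing one ReadAt() per mp3 frame.
static const size_t kReadBlockSize = 64 * 1024;

struct frame_block {
	uint8	data[kReadBlockSize];
	size_t	length;		// valid bytes in data
	size_t	offset;		// parse position of the next frame
	off_t	position;	// stream position this block was read from
};

static status_t
FillFrameBlock(BPositionIO* source, frame_block& block)
{
	ssize_t bytesRead = source->ReadAt(block.position, block.data,
		kReadBlockSize);
	if (bytesRead <= 0)
		return B_ERROR;

	block.length = bytesRead;
	block.offset = 0;
	return B_OK;
}

// GetNextChunk() would then scan block.data + block.offset for the next
// frame sync, work out the frame length from the header, return a pointer
// into the block, and only call FillFrameBlock() again (after advancing
// block.position) once the remaining bytes no longer hold a whole frame.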

-- 
Cheers
David
