[haiku-commits] Re: haiku: hrev47576 - src/add-ons/media/plugins/ffmpeg

  • From: Colin Günther <coling@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Tue, 29 Jul 2014 18:42:34 +0200

Hi Stephan,

> 
> these changes are highly appreciated. Pointed out some style violations
> below:

thank you for taking the time to read through the diff. It is highly 
appreciated. I corrected and implemented most of your comments, so below I will
only answer those remarks that I didn't address in commit hrev47595.

 
> >   #ifdef TRACE_AV_CODEC
> >       char buffer[1024];
> >       string_for_format(*inOutFormat, buffer, sizeof(buffer));
> > @@ -700,24 +710,35 @@ AVCodecDecoder::_DecodeVideo(void* outBuffer, 
> > int64* outFrameCount,
> >   status_t
> >   AVCodecDecoder::_DecodeNextVideoFrame()
> >   {
> > +    assert(fTempPacket.size >= 0);
> 
> Wrong assert. Conflicts with first if-check in the while-loop.
The assert ensures that we never work with negative packet sizes. A negative 
size can actually occur when fTempPacket wasn't correctly initialized with zero 
values. The if-statement in the while-loop doesn't account for a negative size 
value, which is why avcodec_decode_video2() would be fed the negative size and 
crash. So when the assert() fires, it really points towards a bug in the 
FFmpeg plugin. With that in mind, do you still think this assert is wrong?
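For completeness, this is roughly how fTempPacket has to be set up before the
decode loop ever runs, so that the assert can only fire on a genuine bug (just
a sketch: apart from fTempPacket and the FFmpeg calls, the method name and
placement are my own invention here):

    // Hypothetical init helper, only to illustrate the point. Needs the usual
    // extern "C" { #include <libavcodec/avcodec.h> } for AVPacket.
    void
    AVCodecDecoder::_ResetTempPacket()
    {
        // av_init_packet() fills in FFmpeg's default field values; clearing
        // data and size afterwards marks the packet as "empty", so the first
        // loop iteration fetches a fresh chunk and size can never go negative.
        av_init_packet(&fTempPacket);
        fTempPacket.data = NULL;
        fTempPacket.size = 0;
    }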

Note: The code quoted right below follows the assert immediately in the source; 
I quoted it so it is easier to see which if-statement I mean.
> 
> >       bool firstRun = true;
> >       while (true) {
> >           media_header chunkMediaHeader;
> > -        status_t err = GetNextChunk(&fChunkBuffer, &fChunkBufferSize,
> > -            &chunkMediaHeader);
> > -        if (err != B_OK) {
> > -            TRACE("AVCodecDecoder::_DecodeVideo(): error from "
> > -                "GetNextChunk(): %s\n", strerror(err));
> > -            return err;
> > -        }
> > +
> > +        if (fTempPacket.size == 0) {
> > +            // Our packet buffer is empty, so fill it now.
> > +            status_t getNextChunkStatus    = GetNextChunk(&fChunkBuffer,
> > +                &fChunkBufferSize, &chunkMediaHeader);
> > +            if (getNextChunkStatus != B_OK) {
> > +                TRACE("AVCodecDecoder::_DecodeNextVideoFrame(): error 
> > from "
> > +                    "GetNextChunk(): %s\n", strerror(err));
> > +                return getNextChunkStatus;
> > +            }
> > +
> > +            fTempPacket.data = static_cast<uint8_t*>(const_cast<void*>(
> > +                fChunkBuffer));
> 
> I wonder about the const_cast. Should fChunkBuffer be declared mutable?
The cast is the way it is because fTempPacket.data is declared as uint8_t* 
(without a const qualifier). As it is an FFmpeg-provided structure definition, 
we can't do anything about that.
Leaving the const declaration of fChunkBuffer as it is would be my 
recommendation. This way it conveys the meaning "leave my data alone" 
precisely. FFmpeg doesn't even mess with the content of fTempPacket.data, so it 
would actually make sense to change their declaration and add the const 
attribute.
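Spelled out, the cast has to strip the constness first and only then adjust the
pointer type, because const_cast is not allowed to change the pointee type
(sketch only; the helper and its name are made up, the field names match the
diff):

    extern "C" {
        #include <libavcodec/avcodec.h>
    }

    // Hypothetical helper, just to show the two-step cast in isolation.
    static void
    fill_packet_from_chunk(AVPacket& packet, const void* chunkBuffer,
        size_t chunkBufferSize)
    {
        // const_cast only removes the const qualifier; the static_cast then
        // converts void* into the uint8_t* that AVPacket::data expects.
        packet.data = static_cast<uint8_t*>(const_cast<void*>(chunkBuffer));
        packet.size = chunkBufferSize;
    }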


> > ##########################################################################
> > ##
> >
> > Commit:      97f5a12f3692b17cb93edeb0f37939ee0290fc7d
> 
> > @@ -730,6 +748,31 @@ AVCodecDecoder::_DecodeNextVideoFrame()
> >                   fChunkBuffer));
> >               fTempPacket.size = fChunkBufferSize;
> >
> > +            fContext->reordered_opaque = chunkMediaHeader.start_time;
> > +                // Let ffmpeg handle the relationship between start_time 
> > and
> > +                // decoded video frame.
> > +                //
> > +                // Explanation:
> > +                // The received chunk buffer may not contain the next 
> > video
> > +                // frame to be decoded, due to frame reordering (e.g. 
> > MPEG1/2
> > +                // provides encoded video frames in a different order 
> > than the
> > +                // decoded video frame).
> > +                //
> > +                // FIXME: Research how to establish a meaningful 
> > relationship
> > +                // between start_time and decoded video frame when the 
> > received
> > +                // chunk buffer contains partial video frames. Maybe 
> > some data
> > +                // formats contain time stamps (ake pts / dts fields) 
> > that can
> > +                // be evaluated by FFMPEG. But as long as I don't have 
> > such
> > +                // video data to test it, it makes no sense to implement 
> > it.
> > +                //
> > +                // FIXME: Implement tracking start_time of video frames
> > +                // originating in data chunks that encode more than one 
> > video
> > +                // frame at a time. In that case on would increment the
> > +                // start_time for each consecutive frame of such a data 
> > chunk
> > +                // (like it is done for audio frame decoding). But as 
> > long as
> > +                // I don't have such video data to test it, it makes no 
> > sense
> > +                // to implement it.
> > +
> >   #ifdef LOG_STREAM_TO_FILE
> >               if (sDumpedPackets < 100) {
> >                   sStreamLogFile.Write(fChunkBuffer, fChunkBufferSize);
> > @@ -740,13 +783,6 @@ AVCodecDecoder::_DecodeNextVideoFrame()
> >   #endif
> >           }
> >
> > -        if (firstRun) {
> > -            firstRun = false;
> > -
> > -            fHeader.start_time = chunkMediaHeader.start_time;
> > -            fStartTime = chunkMediaHeader.start_time;
> > -        }
> > -
> >   #if DO_PROFILING
> >           bigtime_t startTime = system_time();
> >   #endif
> > @@ -857,12 +893,13 @@ AVCodecDecoder::_UpdateMediaHeaderForVideoFrame()
> >       fHeader.type = B_MEDIA_RAW_VIDEO;
> >       fHeader.file_pos = 0;
> >       fHeader.orig_size = 0;
> > +    fHeader.start_time = fRawDecodedPicture->reordered_opaque;
> >       fHeader.u.raw_video.field_gamma = 1.0;
> >       fHeader.u.raw_video.field_sequence = fFrame;
> >       fHeader.u.raw_video.field_number = 0;
> >       fHeader.u.raw_video.pulldown_number = 0;
> >       fHeader.u.raw_video.first_active_line = 1;
> > -    fHeader.u.raw_video.line_count = fContext->height;
> > +    fHeader.u.raw_video.line_count = fRawDecodedPicture->height;
> >
> >       TRACE("[v] start_time=%02d:%02d.%02d field_sequence=%lu\n",
> >           int((fHeader.start_time / 60000000) % 60),
> 
> I don't quite understand this commit. How exactly is the start time now
> handled by FFmpeg? It just seems to take a start time which is already
> established and stores it in "reordered_opaque". This is then somehow
> transferred by FFmpeg to the decoded picture, so that we match the start
> times to pictures in case they have been re-ordered. Right? But why is
> the start-time correct in the first place?
Well, my use of the word "handling" might have been somewhat over the top. 
As you correctly assume, all that FFmpeg does is assign the reordered_opaque 
to the correct video frame. (By the way: the Google Chrome folks do it the same 
way, and that is where I realized what the reordered_opaque field means.)
Correctness of the start time is assumed by the code. The start time really is 
delivered by the call to GetNextChunk(), so whatever implements GetNextChunk() 
is responsible for delivering the correct start_time. You might think about it 
this way:
"Hey, FFmpeg decoder, I have an encoded video frame here, and I know exactly 
at what time it should be played. Please decode it for me and leave its 
start time as it is. And by the way, you are allowed to reorder frames as you 
like; I will just assume that the video frame you return was actually encoded 
in the data chunk I've just passed to you."
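In code, the round trip looks roughly like this (my own sketch with made-up
names; only reordered_opaque and avcodec_decode_video2() are the real FFmpeg
parts):

    extern "C" {
        #include <libavcodec/avcodec.h>
    }

    // Hypothetical helper to illustrate the reordered_opaque round trip.
    static int64_t
    decode_frame_keeping_start_time(AVCodecContext* context, AVPacket* packet,
        AVFrame* frame, int64_t chunkStartTime)
    {
        // Hand the chunk's start time to FFmpeg before decoding it.
        context->reordered_opaque = chunkStartTime;

        int gotPicture = 0;
        avcodec_decode_video2(context, frame, &gotPicture, packet);

        if (gotPicture != 0) {
            // The frame carries the reordered_opaque value of the packet it
            // was decoded from, so start time and picture stay matched even
            // when frames come back in a different order.
            return frame->reordered_opaque;
        }

        return -1;
    }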

Does this colloquial explanation help to answer your question?

-Colin
