[haiku-commits] haiku: hrev47576 - src/add-ons/media/plugins/ffmpeg

  • From: coling@xxxxxx
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sat, 26 Jul 2014 16:39:34 +0200 (CEST)

hrev47576 adds 6 changesets to branch 'master'
old head: a07d97ee3bad8c8223de804cd21a397fbaca5347
new head: f78313627450a8c919f7ddb70f79f9f0662ee661
overview: http://cgit.haiku-os.org/haiku/log/?qt=range&q=f783136+%5Ea07d97e

----------------------------------------------------------------------------

0adda4f: FFMPEG plugin: Refactor video decoding function.
  
  - Factor out the deinterlacing and color converting part to make the code more
    readable. This makes it easier to understand which code belongs to the actual
    decoding process and which code to the post processing.
  
  - There seems to be no performance impact involved (I just looked at the spikes
    of the process manager) in factoring out this part, but one can always inline
    the method if a closer performance assessment (e.g. by enabling the existing
    profiling code) suggests so.
  
  - Document the _DecodeVideo() method a little bit. Maybe someone can document
    the info parameter, as I'm a little bit clueless here.
  
  - No functional change intended.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>
  (cherry picked from commit c5fa095fa73d47e75a46cfc138a56028fcc01819)

70a9edb: FFMPEG plugin: Tell the FFMPEG library to handle incomplete data.
  
  - It is just one flag that needs to be set, so that streaming video data can be
    handled by the FFMPEG library.
  
  - For reference: This flag is based on FFMPEG's 0.10.2 video decode example
    (doc/example/decoding_encoding.c).
  
  - The _DecodeNextVideoFrame() method needs to be adjusted (still to come), to
    take streamed data into account. So the flag on its own doesn't help, but it
    is a reasonable step in that direction.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>

db59a66: FFMPEG-Plugin: Implement decoding of streamed video data.
  
  - This commit makes the mpeg2_decoder_test successfully decode the test video
    into 84 consecutive PNG images, yeah :)
  
  - If this commit broke playing video files for you please file a bug report.
    I've tested only one video file (big_buck_bunny_720p_stereo.ogg) to confirm
    that everything still works.
  
  - The implementation has some shortcomings though, that will be addressed with
    some later commits:
      1. Start time of media header is wrongly calculated. At the moment we are
         using the start time of the first encoded data chunk we read via
         GetNextChunk(). This works only for chunks that contain exactly one
         frame, but not for chunks that contain the end or middle of a frame.
      2. Fields of the media header aren't updated when there is a format change
         in the middle of the video stream (for example the pixel aspect ratio
         might change in the middle of a DVB video stream (e.g. switch from 4:3
         to 16:9)).
  
  - Also fix a potential bug, where the CODEC_FLAG_TRUNCATED flag was always
    set, due to missing brackets.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>

254a534: FFMPEG-Plugin: Refactor out update of media_header.
  
  - Main purpose is to make reading the function DecodeNextFrame() easier on the
    eyes, by moving out auxiliary code.
    Note: The media_header update code for the start_time is still left in
    DecodeNextFrame(). This will be addressed in a later commit specifically
    targeted at handling start_time calculations for incomplete video frames.
  
  - Also updated / added some documentation.
  
  - No functional change intended.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>

97f5a12: FFMPEG-Plugin: Simplify start time calculation of video frame.
  
  - We let FFMPEG keep track of the correct relationship between presentation
    start time of the encoded video frame and the resulting decoded video frame.
    This simplifies our code, meaning fewer lines of code to maintain :)
  
  - Update documentation and point out some corner cases when calculating the
    correct presentation start time of a decoded video frame under certain
    circumstances.
  
  - Fix doxygen: Use doxygen style instead of javadoc style.
  
  - No functional change intended.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>

f783136: FFMPEG-Plugin: Fix doxygen style and typo.
  
  - No functional change intended.
  
  Signed-off-by: Colin Günther <coling@xxxxxx>

                                           [ Colin Günther <coling@xxxxxx> ]

----------------------------------------------------------------------------

2 files changed, 283 insertions(+), 151 deletions(-)
.../media/plugins/ffmpeg/AVCodecDecoder.cpp      | 428 ++++++++++++-------
.../media/plugins/ffmpeg/AVCodecDecoder.h        |   6 +-

############################################################################

Commit:      0adda4f68fd08e07ecd04df298e61974888e886a
URL:         http://cgit.haiku-os.org/haiku/commit/?id=0adda4f
Author:      Colin Günther <coling@xxxxxx>
Date:        Tue Jul 15 21:15:55 2014 UTC

FFMPEG plugin: Refactor video decoding function.

- Factor out the deinterlacing and color converting part to make the code more
  readable. This makes it easier to understand which code belongs to the actual
  decoding process and which code to the post processing.

- There seems to be no performance impact involved (I just looked at the spikes
  of the process manager) in factoring out this part, but one can always inline
  the method if a closer performance assessment (e.g. by enabling the existing
  profiling code) suggests so.

- Document the _DecodeVideo() method a little bit. Maybe someone can document
  the info parameter, as I'm a little bit clueless here.

- No functional change intended.

Signed-off-by: Colin Günther <coling@xxxxxx>
(cherry picked from commit c5fa095fa73d47e75a46cfc138a56028fcc01819)

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index 326e72d..9db9929 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -637,6 +637,23 @@ AVCodecDecoder::_DecodeAudio(void* _buffer, int64* outFrameCount,
 }
 
 
+/*! Fills the outBuffer with an already decoded video frame.
+
+       Besides the main duty described above, this method also fills out the other
+       output parameters as documented below.
+
+       @param outBuffer Pointer to the output buffer to copy the decoded video
+               frame to.
+       @param outFrameCount Pointer to the output variable to assign the number of
+               copied video frames (usually one video frame).
+       @param mediaHeader Pointer to the output media header that contains the 
+               decoded video frame properties.
+       @param info TODO (not used at the moment)
+
+       @return B_OK Decoding a video frame succeeded.
+       @return B_LAST_BUFFER_ERROR There are no more video frames available.
+       @return other error codes
+*/
 status_t
 AVCodecDecoder::_DecodeVideo(void* outBuffer, int64* outFrameCount,
        media_header* mediaHeader, media_decode_info* info)
@@ -757,109 +774,14 @@ AVCodecDecoder::_DecodeNextVideoFrame()
 //     fContext->frame_rate);
 
                if (gotPicture) {
-                       int width = fOutputVideoFormat.display.line_width;
-                       int height = fOutputVideoFormat.display.line_count;
-                       AVPicture deinterlacedPicture;
-                       bool useDeinterlacedPicture = false;
-
-                       if (fRawDecodedPicture->interlaced_frame) {
-                               AVPicture rawPicture;
-                       rawPicture.data[0] = fRawDecodedPicture->data[0];
-                       rawPicture.data[1] = fRawDecodedPicture->data[1];
-                       rawPicture.data[2] = fRawDecodedPicture->data[2];
-                       rawPicture.data[3] = fRawDecodedPicture->data[3];
-                       rawPicture.linesize[0] = fRawDecodedPicture->linesize[0];
-                       rawPicture.linesize[1] = fRawDecodedPicture->linesize[1];
-                       rawPicture.linesize[2] = fRawDecodedPicture->linesize[2];
-                       rawPicture.linesize[3] = fRawDecodedPicture->linesize[3];
-
-                       avpicture_alloc(&deinterlacedPicture,
-                               fContext->pix_fmt, width, height);
-
-                       if (avpicture_deinterlace(&deinterlacedPicture, &rawPicture,
-                                       fContext->pix_fmt, width, height) < 0) {
-                               TRACE("[v] avpicture_deinterlace() - error\n");
-                               } else
-                                       useDeinterlacedPicture = true;
-                       }
-
 #if DO_PROFILING
                        bigtime_t formatConversionStart = system_time();
 #endif
 //                     TRACE("ONE FRAME OUT !! len=%d size=%ld (%s)\n", len, size,
 //                             pixfmt_to_string(fContext->pix_fmt));
 
-                       // Some decoders do not set pix_fmt until they have decoded 1 frame
-#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
-                       if (fSwsContext == NULL) {
-                               fSwsContext = sws_getContext(fContext->width, fContext->height,
-                                       fContext->pix_fmt, fContext->width, fContext->height,
-                                       colorspace_to_pixfmt(fOutputVideoFormat.display.format),
-                                       SWS_FAST_BILINEAR, NULL, NULL, NULL);
-                       }
-#else
-                       if (fFormatConversionFunc == NULL) {
-                               fFormatConversionFunc = resolve_colorspace(
-                                       fOutputVideoFormat.display.format, fContext->pix_fmt,
-                                       fContext->width, fContext->height);
-                       }
-#endif
+                       _DeinterlaceAndColorConvertVideoFrame();
 
-                       fDecodedDataSizeInBytes = avpicture_get_size(
-                               colorspace_to_pixfmt(fOutputVideoFormat.display.format),
-                               fContext->width, fContext->height);
-
-                       if (fDecodedData == NULL)
-                               fDecodedData
-                                       = static_cast<uint8_t*>(malloc(fDecodedDataSizeInBytes));
-
-                       fPostProcessedDecodedPicture->data[0] = fDecodedData;
-                       fPostProcessedDecodedPicture->linesize[0]
-                               = fOutputVideoFormat.display.bytes_per_row;
-
-#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
-                       if (fSwsContext != NULL) {
-#else
-                       if (fFormatConversionFunc != NULL) {
-#endif
-                               if (useDeinterlacedPicture) {
-                                       AVFrame deinterlacedFrame;
-                                       deinterlacedFrame.data[0] = deinterlacedPicture.data[0];
-                                       deinterlacedFrame.data[1] = deinterlacedPicture.data[1];
-                                       deinterlacedFrame.data[2] = deinterlacedPicture.data[2];
-                                       deinterlacedFrame.data[3] = deinterlacedPicture.data[3];
-                                       deinterlacedFrame.linesize[0]
-                                               = deinterlacedPicture.linesize[0];
-                                       deinterlacedFrame.linesize[1]
-                                               = deinterlacedPicture.linesize[1];
-                                       deinterlacedFrame.linesize[2]
-                                               = deinterlacedPicture.linesize[2];
-                                       deinterlacedFrame.linesize[3]
-                                               = deinterlacedPicture.linesize[3];
-
-#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
-                                       sws_scale(fSwsContext, deinterlacedFrame.data,
-                                               deinterlacedFrame.linesize, 0, fContext->height,
-                                               fPostProcessedDecodedPicture->data,
-                                               fPostProcessedDecodedPicture->linesize);
-#else
-                                       (*fFormatConversionFunc)(&deinterlacedFrame,
-                                               fPostProcessedDecodedPicture, width, height);
-#endif
-                               } else {
-#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
-                                       sws_scale(fSwsContext, fRawDecodedPicture->data,
-                                               fRawDecodedPicture->linesize, 0, fContext->height,
-                                               fPostProcessedDecodedPicture->data,
-                                               fPostProcessedDecodedPicture->linesize);
-#else
-                                       (*fFormatConversionFunc)(fRawDecodedPicture,
-                                               fPostProcessedDecodedPicture, width, height);
-#endif
-                               }
-                       }
-                       if (fRawDecodedPicture->interlaced_frame)
-                               avpicture_free(&deinterlacedPicture);
 #ifdef DEBUG
                        dump_ffframe(fRawDecodedPicture, "ffpict");
 //                     dump_ffframe(fPostProcessedDecodedPicture, "opict");
@@ -897,3 +819,120 @@ AVCodecDecoder::_DecodeNextVideoFrame()
                }
        }
 }
+
+
+/*! This function applies deinterlacing (only if needed) and color conversion
+    to the video frame in fRawDecodedPicture.
+
+       It is assumed that fRawDecodedPicture wasn't deinterlaced and color
+       converted yet (otherwise this function behaves in unknown manners).
+
+       You should only call this function in _DecodeNextVideoFrame() when we
+       got a new picture decoded by the video decoder.
+
+       When this function finishes the postprocessed video frame will be available
+       in fPostProcessedDecodedPicture and fDecodedData (fDecodedDataSizeInBytes
+       will be set accordingly).
+*/
+void
+AVCodecDecoder::_DeinterlaceAndColorConvertVideoFrame()
+{
+               int width = fOutputVideoFormat.display.line_width;
+               int height = fOutputVideoFormat.display.line_count;
+               AVPicture deinterlacedPicture;
+               bool useDeinterlacedPicture = false;
+
+               if (fRawDecodedPicture->interlaced_frame) {
+                       AVPicture rawPicture;
+                       rawPicture.data[0] = fRawDecodedPicture->data[0];
+                       rawPicture.data[1] = fRawDecodedPicture->data[1];
+                       rawPicture.data[2] = fRawDecodedPicture->data[2];
+                       rawPicture.data[3] = fRawDecodedPicture->data[3];
+                       rawPicture.linesize[0] = fRawDecodedPicture->linesize[0];
+                       rawPicture.linesize[1] = fRawDecodedPicture->linesize[1];
+                       rawPicture.linesize[2] = fRawDecodedPicture->linesize[2];
+                       rawPicture.linesize[3] = fRawDecodedPicture->linesize[3];
+
+                       avpicture_alloc(&deinterlacedPicture,
+                               fContext->pix_fmt, width, height);
+
+                       if (avpicture_deinterlace(&deinterlacedPicture, &rawPicture,
+                                       fContext->pix_fmt, width, height) < 0) {
+                               TRACE("[v] avpicture_deinterlace() - error\n");
+                       } else
+                               useDeinterlacedPicture = true;
+               }
+
+               // Some decoders do not set pix_fmt until they have decoded 1 frame
+#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
+               if (fSwsContext == NULL) {
+                       fSwsContext = sws_getContext(fContext->width, fContext->height,
+                               fContext->pix_fmt, fContext->width, fContext->height,
+                               colorspace_to_pixfmt(fOutputVideoFormat.display.format),
+                               SWS_FAST_BILINEAR, NULL, NULL, NULL);
+               }
+#else
+               if (fFormatConversionFunc == NULL) {
+                       fFormatConversionFunc = resolve_colorspace(
+                               fOutputVideoFormat.display.format, fContext->pix_fmt,
+                               fContext->width, fContext->height);
+               }
+#endif
+
+               fDecodedDataSizeInBytes = avpicture_get_size(
+                       colorspace_to_pixfmt(fOutputVideoFormat.display.format),
+                       fContext->width, fContext->height);
+
+               if (fDecodedData == NULL)
+                       fDecodedData
+                               = static_cast<uint8_t*>(malloc(fDecodedDataSizeInBytes));
+
+               fPostProcessedDecodedPicture->data[0] = fDecodedData;
+               fPostProcessedDecodedPicture->linesize[0]
+                       = fOutputVideoFormat.display.bytes_per_row;
+
+#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
+               if (fSwsContext != NULL) {
+#else
+               if (fFormatConversionFunc != NULL) {
+#endif
+                       if (useDeinterlacedPicture) {
+                               AVFrame deinterlacedFrame;
+                               deinterlacedFrame.data[0] = deinterlacedPicture.data[0];
+                               deinterlacedFrame.data[1] = deinterlacedPicture.data[1];
+                               deinterlacedFrame.data[2] = deinterlacedPicture.data[2];
+                               deinterlacedFrame.data[3] = deinterlacedPicture.data[3];
+                               deinterlacedFrame.linesize[0]
+                                       = deinterlacedPicture.linesize[0];
+                               deinterlacedFrame.linesize[1]
+                                       = deinterlacedPicture.linesize[1];
+                               deinterlacedFrame.linesize[2]
+                                       = deinterlacedPicture.linesize[2];
+                               deinterlacedFrame.linesize[3]
+                                       = deinterlacedPicture.linesize[3];
+
+#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
+                               sws_scale(fSwsContext, deinterlacedFrame.data,
+                                       deinterlacedFrame.linesize, 0, fContext->height,
+                                       fPostProcessedDecodedPicture->data,
+                                       fPostProcessedDecodedPicture->linesize);
+#else
+                               (*fFormatConversionFunc)(&deinterlacedFrame,
+                                       fPostProcessedDecodedPicture, width, height);
+#endif
+                       } else {
+#if USE_SWS_FOR_COLOR_SPACE_CONVERSION
+                               sws_scale(fSwsContext, fRawDecodedPicture->data,
+                                       fRawDecodedPicture->linesize, 0, fContext->height,
+                                       fPostProcessedDecodedPicture->data,
+                                       fPostProcessedDecodedPicture->linesize);
+#else
+                               (*fFormatConversionFunc)(fRawDecodedPicture,
+                                       fPostProcessedDecodedPicture, width, height);
+#endif
+                       }
+               }
+
+               if (fRawDecodedPicture->interlaced_frame)
+                       avpicture_free(&deinterlacedPicture);
+}
diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
index 527da4cd..292541d 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
@@ -65,6 +65,7 @@ private:
                                                                        media_decode_info* info);
 
                        status_t                        _DecodeNextVideoFrame();
+                       void                            _DeinterlaceAndColorConvertVideoFrame();
 
 
                        media_header            fHeader;

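A note on the performance claim in this commit message: the DO_PROFILING
blocks in this file time sections with Haiku's system_time(), which returns
microseconds as bigtime_t. A minimal standalone sketch of that kind of
measurement (the helper name is made up for illustration; only <OS.h>'s
system_time() and bigtime_t are real API):

    #include <OS.h>
    #include <stdio.h>

    static void
    MeasureConversionStep()
    {
        bigtime_t conversionStart = system_time();

        // ... run the deinterlacing / color conversion step here ...

        bigtime_t elapsed = system_time() - conversionStart;
        printf("conversion took %lld usecs\n", (long long)elapsed);
    }

Comparing such numbers before and after the refactoring would back up the
"no performance impact" observation more firmly than process manager spikes.
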
############################################################################

Commit:      70a9edbb1181302caff7f4d1147e60bfa124568a
URL:         http://cgit.haiku-os.org/haiku/commit/?id=70a9edb
Author:      Colin Günther <coling@xxxxxx>
Date:        Tue Jul 15 21:31:54 2014 UTC

FFMPEG plugin: Tell the FFMPEG library to handle incomplete data.

- It is just one flag that needs to be set, so that streaming video data can be
  handled by the FFMPEG library.

- For reference: This flag is based on FFMPEG's 0.10.2 video decode example
  (doc/example/decoding_encoding.c).

- The _DecodeNextVideoFrame() method needs to be adjusted (still to come), to
  take streamed data into account. So the flag on its own doesn't help, but it
  is a reasonable step in that direction.

Signed-off-by: Colin Günther <coling@xxxxxx>

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index 9db9929..8d52cc5 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -430,6 +430,11 @@ AVCodecDecoder::_NegotiateVideoOutputFormat(media_format* inOutFormat)
        fContext->extradata = (uint8_t*)fExtraData;
        fContext->extradata_size = fExtraDataSize;
 
+       if (fCodec->capabilities & CODEC_CAP_TRUNCATED)
+               // Expect and handle video frames to be splitted across consecutive
+               // data chunks.
+               fContext->flags |= CODEC_FLAG_TRUNCATED;
+
        TRACE("  requested video format 0x%x\n",
                inOutFormat->u.raw_video.display.format);
 

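For readers unfamiliar with CODEC_FLAG_TRUNCATED: it lets the decoder accept
packets that do not end on a frame boundary. The decode loop it enables looks
roughly like the one in the referenced FFMPEG 0.10 example; a simplified
sketch follows (inputBuffer, inputBufferSize, context and frame are
placeholder names, not the plugin's members):

    AVPacket packet;
    av_init_packet(&packet);
    packet.data = inputBuffer;      // raw, possibly truncated video data
    packet.size = inputBufferSize;

    while (packet.size > 0) {
        int gotPicture = 0;
        int len = avcodec_decode_video2(context, frame, &gotPicture, &packet);
        if (len < 0)
            break;                  // decode error

        if (gotPicture) {
            // frame now holds one complete decoded picture
        }

        // Consume only the bytes the decoder accepted; with
        // CODEC_FLAG_TRUNCATED the remainder may be a partial frame that a
        // later call, fed with more data, will complete.
        packet.size -= len;
        packet.data += len;
    }
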
############################################################################

Commit:      db59a667044f697d0bb8fdff23597105cd7df4b2
URL:         http://cgit.haiku-os.org/haiku/commit/?id=db59a66
Author:      Colin Günther <coling@xxxxxx>
Date:        Thu Jul 17 07:27:06 2014 UTC

FFMPEG-Plugin: Implement decoding of streamed video data.

- This commit makes the mpeg2_decoder_test successfully decode the test video
  into 84 consecutive PNG images, yeah :)

- If this commit broke playing video files for you please file a bug report.
  I've tested only one video file (big_buck_bunny_720p_stereo.ogg) to confirm
  that everything still works.

- The implementation has some shortcomings though, that will be addressed with
  some later commits:
    1. Start time of media header is wrongly calculated. At the moment we are
       using the start time of the first encoded data chunk we read via
       GetNextChunk(). This works only for chunks that contain exactly one
       frame, but not for chunks that contain the end or middle of a frame.
    2. Fields of the media header aren't updated when there is a format change
       in the middle of the video stream (for example the pixel aspect ratio
       might change in the middle of a DVB video stream (e.g. switch from 4:3
       to 16:9)).

- Also fix a potential bug, where the CODEC_FLAG_TRUNCATED flag was always
  set, due to missing brackets.

Signed-off-by: Colin Günther <coling@xxxxxx>

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index 8d52cc5..bdf9662 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -15,6 +15,7 @@
 
 #include <new>
 
+#include <assert.h>
 #include <string.h>
 
 #include <Bitmap.h>
@@ -251,8 +252,12 @@ AVCodecDecoder::SeekedTo(int64 frame, bigtime_t time)
 {
        status_t ret = B_OK;
        // Reset the FFmpeg codec to flush buffers, so we keep the sync
-       if (fCodecInitDone)
+       if (fCodecInitDone) {
                avcodec_flush_buffers(fContext);
+               av_init_packet(&fTempPacket);
+               fTempPacket.size = 0;
+               fTempPacket.data = NULL;
+       }
 
        // Flush internal buffers as well.
        fChunkBuffer = NULL;
@@ -430,10 +435,11 @@ AVCodecDecoder::_NegotiateVideoOutputFormat(media_format* inOutFormat)
        fContext->extradata = (uint8_t*)fExtraData;
        fContext->extradata_size = fExtraDataSize;
 
-       if (fCodec->capabilities & CODEC_CAP_TRUNCATED)
+       if (fCodec->capabilities & CODEC_CAP_TRUNCATED) {
                // Expect and handle video frames to be splitted across consecutive
                // data chunks.
                fContext->flags |= CODEC_FLAG_TRUNCATED;
+       }
 
        TRACE("  requested video format 0x%x\n",
                inOutFormat->u.raw_video.display.format);
@@ -516,6 +522,10 @@ AVCodecDecoder::_NegotiateVideoOutputFormat(media_format* inOutFormat)
        inOutFormat->require_flags = 0;
        inOutFormat->deny_flags = B_MEDIA_MAUI_UNDEFINED_FLAGS;
 
+       av_init_packet(&fTempPacket);
+       fTempPacket.size = 0;
+       fTempPacket.data = NULL;
+
 #ifdef TRACE_AV_CODEC
        char buffer[1024];
        string_for_format(*inOutFormat, buffer, sizeof(buffer));
@@ -700,24 +710,35 @@ AVCodecDecoder::_DecodeVideo(void* outBuffer, int64* outFrameCount,
 status_t
 AVCodecDecoder::_DecodeNextVideoFrame()
 {
+       assert(fTempPacket.size >= 0);
+
        bool firstRun = true;
        while (true) {
                media_header chunkMediaHeader;
-               status_t err = GetNextChunk(&fChunkBuffer, &fChunkBufferSize,
-                       &chunkMediaHeader);
-               if (err != B_OK) {
-                       TRACE("AVCodecDecoder::_DecodeVideo(): error from "
-                               "GetNextChunk(): %s\n", strerror(err));
-                       return err;
-               }
+
+               if (fTempPacket.size == 0) {
+                       // Our packet buffer is empty, so fill it now.
+                       status_t getNextChunkStatus     = GetNextChunk(&fChunkBuffer,
+                               &fChunkBufferSize, &chunkMediaHeader);
+                       if (getNextChunkStatus != B_OK) {
+                               TRACE("AVCodecDecoder::_DecodeNextVideoFrame(): error from "
+                                       "GetNextChunk(): %s\n", strerror(err));
+                               return getNextChunkStatus;
+                       }
+
+                       fTempPacket.data = static_cast<uint8_t*>(const_cast<void*>(
+                               fChunkBuffer));
+                       fTempPacket.size = fChunkBufferSize;
+
 #ifdef LOG_STREAM_TO_FILE
-               if (sDumpedPackets < 100) {
-                       sStreamLogFile.Write(fChunkBuffer, fChunkBufferSize);
-                       printf("wrote %ld bytes\n", fChunkBufferSize);
-                       sDumpedPackets++;
-               } else if (sDumpedPackets == 100)
-                       sStreamLogFile.Unset();
+                       if (sDumpedPackets < 100) {
+                               sStreamLogFile.Write(fChunkBuffer, fChunkBufferSize);
+                               printf("wrote %ld bytes\n", fChunkBufferSize);
+                               sDumpedPackets++;
+                       } else if (sDumpedPackets == 100)
+                               sStreamLogFile.Unset();
 #endif
+               }
 
                if (firstRun) {
                        firstRun = false;
@@ -746,24 +767,28 @@ AVCodecDecoder::_DecodeNextVideoFrame()
                bigtime_t startTime = system_time();
 #endif
 
-               // NOTE: In the FFmpeg code example I've read, the length returned by
-               // avcodec_decode_video() is completely ignored. Furthermore, the
-               // packet buffers are supposed to contain complete frames only so we
-               // don't seem to be required to buffer any packets because not the
-               // complete packet has been read.
-               fTempPacket.data = (uint8_t*)fChunkBuffer;
-               fTempPacket.size = fChunkBufferSize;
+               // NOTE: In the FFMPEG 0.10.2 code example decoding_encoding.c, the
+               // length returned by avcodec_decode_video2() is used to update the
+               // packet buffer size (here it is fTempPacket.size). This way the
+               // packet buffer is allowed to contain incomplete frames so we are
+               // required to buffer the packets between different calls to
+               // _DecodeNextVideoFrame().
                int gotPicture = 0;
-               int len = avcodec_decode_video2(fContext, fRawDecodedPicture,
-                       &gotPicture, &fTempPacket);
-               if (len < 0) {
-                       TRACE("[v] AVCodecDecoder: error in decoding frame %lld: %d\n",
-                               fFrame, len);
-                       // NOTE: An error from avcodec_decode_video() seems to be ignored
-                       // in the ffplay sample code.
-//                     return B_ERROR;
+               int decodedDataSizeInBytes = avcodec_decode_video2(fContext,
+                       fRawDecodedPicture, &gotPicture, &fTempPacket);
+               if (decodedDataSizeInBytes < 0) {
+                       TRACE("[v] AVCodecDecoder: ignoring error in decoding frame %lld:"
+                               " %d\n", fFrame, len);
+                       // NOTE: An error from avcodec_decode_video2() is ignored by the
+                       // FFMPEG 0.10.2 example decoding_encoding.c. Only the packet
+                       // buffers are flushed accordingly
+                       fTempPacket.data = NULL;
+                       fTempPacket.size = 0;
+                       continue;
                }
 
+               fTempPacket.size -= decodedDataSizeInBytes;
+               fTempPacket.data += decodedDataSizeInBytes;
 
 //TRACE("FFDEC: PTS = %d:%d:%d.%d - fContext->frame_number = %ld "
 //     "fContext->frame_rate = %ld\n", (int)(fContext->pts / (60*60*1000000)),

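The SeekedTo() hunk above pairs two resets that belong together once packets
are buffered across calls to _DecodeNextVideoFrame(). Reduced to a standalone
sketch (the helper and parameter names are made up; avcodec_flush_buffers()
and av_init_packet() are the real calls from the diff):

    static void
    ResetDecoderAfterSeek(AVCodecContext* context, AVPacket* pendingPacket)
    {
        // Drop the decoder's internal reference frames, so no stale
        // pictures leak into the output after the seek.
        avcodec_flush_buffers(context);

        // Forget any partially consumed chunk data as well; otherwise the
        // next decode call would continue from before the seek position.
        av_init_packet(pendingPacket);
        pendingPacket->data = NULL;
        pendingPacket->size = 0;
    }
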
############################################################################

Commit:      254a53409eaa75e6f21d56b4f5f0bfcdd427c6b3
URL:         http://cgit.haiku-os.org/haiku/commit/?id=254a534
Author:      Colin Günther <coling@xxxxxx>
Date:        Thu Jul 24 21:08:48 2014 UTC

FFMPEG-Plugin: Refactor out update of media_header.

- Main purpose is to make reading the function DecodeNextFrame() easier on the
  eyes, by moving out auxiliary code.
  Note: The media_header update code for the start_time is still left in
  DecodeNextFrame(). This will be addressed in a later commit specifically
  targeted at handling start_time calculations for incomplete video frames.

- Also updated / added some documentation.

- No functional change intended.

Signed-off-by: Colin Günther <coling@xxxxxx>

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index bdf9662..275a1dd 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -661,7 +661,7 @@ AVCodecDecoder::_DecodeAudio(void* _buffer, int64* outFrameCount,
                frame to.
        @param outFrameCount Pointer to the output variable to assign the number of
                copied video frames (usually one video frame).
-       @param mediaHeader Pointer to the output media header that contains the 
+       @param mediaHeader Pointer to the output media header that contains the
                decoded video frame properties.
        @param info TODO (not used at the moment)
 
@@ -743,24 +743,8 @@ AVCodecDecoder::_DecodeNextVideoFrame()
                if (firstRun) {
                        firstRun = false;
 
-                       fHeader.type = B_MEDIA_RAW_VIDEO;
                        fHeader.start_time = chunkMediaHeader.start_time;
                        fStartTime = chunkMediaHeader.start_time;
-                       fHeader.file_pos = 0;
-                       fHeader.orig_size = 0;
-                       fHeader.u.raw_video.field_gamma = 1.0;
-                       fHeader.u.raw_video.field_sequence = fFrame;
-                       fHeader.u.raw_video.field_number = 0;
-                       fHeader.u.raw_video.pulldown_number = 0;
-                       fHeader.u.raw_video.first_active_line = 1;
-                       fHeader.u.raw_video.line_count
-                               = fOutputVideoFormat.display.line_count;
-
-                       TRACE("[v] start_time=%02d:%02d.%02d field_sequence=%lu\n",
-                               int((fHeader.start_time / 60000000) % 60),
-                               int((fHeader.start_time / 1000000) % 60),
-                               int((fHeader.start_time / 10000) % 100),
-                               fHeader.u.raw_video.field_sequence);
                }
 
 #if DO_PROFILING
@@ -810,6 +794,7 @@ AVCodecDecoder::_DecodeNextVideoFrame()
 //                     TRACE("ONE FRAME OUT !! len=%d size=%ld (%s)\n", len, size,
 //                             pixfmt_to_string(fContext->pix_fmt));
 
+                       _UpdateMediaHeaderForVideoFrame();
                        _DeinterlaceAndColorConvertVideoFrame();
 
 #ifdef DEBUG
@@ -851,6 +836,42 @@ AVCodecDecoder::_DecodeNextVideoFrame()
 }
 
 
+/*! Updates relevant fields of the class member fHeader with the properties of
+       the most recently decoded video frame.
+
+       It is assumed tat this function is called in _DecodeNextVideoFrame() only
+       when the following asserts hold true:
+               1. We actually got a new picture decoded by the video decoder.
+               2. fHeader wasn't updated for the new picture yet. You MUST call this
+                  method only once per decoded video frame.
+               3. This function MUST be called before
+                  _DeinterlaceAndColorConvertVideoFrame() as the later one relys on an
+                  updated fHeader.
+               4. There will be at maximumn only one decoded video frame in our cache
+                  at any single point in time. Otherwise you couldn't tell to which
+                  cached decoded video frame te properties in fHeader relate to.
+*/
+void
+AVCodecDecoder::_UpdateMediaHeaderForVideoFrame()
+{
+       fHeader.type = B_MEDIA_RAW_VIDEO;
+       fHeader.file_pos = 0;
+       fHeader.orig_size = 0;
+       fHeader.u.raw_video.field_gamma = 1.0;
+       fHeader.u.raw_video.field_sequence = fFrame;
+       fHeader.u.raw_video.field_number = 0;
+       fHeader.u.raw_video.pulldown_number = 0;
+       fHeader.u.raw_video.first_active_line = 1;
+       fHeader.u.raw_video.line_count = fContext->height;
+
+       TRACE("[v] start_time=%02d:%02d.%02d field_sequence=%lu\n",
+               int((fHeader.start_time / 60000000) % 60),
+               int((fHeader.start_time / 1000000) % 60),
+               int((fHeader.start_time / 10000) % 100),
+               fHeader.u.raw_video.field_sequence);
+}
+
+
 /*! This function applies deinterlacing (only if needed) and color conversion
     to the video frame in fRawDecodedPicture.
 
@@ -858,7 +879,8 @@ AVCodecDecoder::_DecodeNextVideoFrame()
        converted yet (otherwise this function behaves in unknown manners).
 
        You should only call this function in _DecodeNextVideoFrame() when we
-       got a new picture decoded by the video decoder.
+       got a new picture decoded by the video decoder and the fHeader variable was
+       updated accordingly (@see _UpdateMediaHeaderForVideoFrame()).
 
        When this function finishes the postprocessed video frame will be available
        in fPostProcessedDecodedPicture and fDecodedData (fDecodedDataSizeInBytes
diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
index 292541d..87082d1 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.h
@@ -63,12 +63,15 @@ private:
                                                                        int64* outFrameCount,
                                                                        media_header* mediaHeader,
                                                                        media_decode_info* info);
-
                        status_t                        _DecodeNextVideoFrame();
+                       void                            _UpdateMediaHeaderForVideoFrame();
                        void                            _DeinterlaceAndColorConvertVideoFrame();
 
 
                        media_header            fHeader;
+                                                                       // Contains the properties of the current
+                                                                       // decoded audio / video frame
+
                        media_format            fInputFormat;
                        media_raw_video_format fOutputVideoFormat;
 

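The TRACE line in _UpdateMediaHeaderForVideoFrame() decomposes the
microsecond start_time into minutes, seconds and centiseconds. The same
arithmetic as a standalone sketch (hypothetical helper; for example
83500000 usecs prints as "01:23.50", i.e. 1 minute, 23 seconds and 50
centiseconds):

    #include <stdio.h>

    static void
    PrintStartTime(long long startTimeInUsecs)
    {
        printf("%02d:%02d.%02d\n",
            int((startTimeInUsecs / 60000000) % 60),   // minutes
            int((startTimeInUsecs / 1000000) % 60),    // seconds
            int((startTimeInUsecs / 10000) % 100));    // centiseconds
    }
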
############################################################################

Commit:      97f5a12f3692b17cb93edeb0f37939ee0290fc7d
URL:         http://cgit.haiku-os.org/haiku/commit/?id=97f5a12
Author:      Colin Günther <coling@xxxxxx>
Date:        Sat Jul 26 12:53:33 2014 UTC

FFMPEG-Plugin: Simplify start time calculation of video frame.

- We let FFMPEG keep track of the correct relationship between presentation
  start time of the encoded video frame and the resulting decoded video frame.
  This simplifies our code, meaning fewer lines of code to maintain :)

- Update documentation and point out some corner cases when calculating the
  correct presentation start time of a decoded video frame under certain
  circumstances.

- Fix doxygen: Use doxygen style instead of javadoc style.

- No functional change intended.

Signed-off-by: Colin Günther <coling@xxxxxx>

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index 275a1dd..0856331 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -705,14 +705,32 @@ AVCodecDecoder::_DecodeVideo(void* outBuffer, int64* outFrameCount,
     To every decoded video frame there is a media_header populated in
     fHeader, containing the corresponding video frame properties.
 
-       @return B_OK, when we successfully decoded one video frame
+       Normally every decoded video frame has a start_time field populated in the
+       associated fHeader, that determines the presentation time of the frame.
+       This relationship will only hold true, when each data chunk that is
+       provided via GetNextChunk() contains data for exactly one encoded video
+       frame (one complete frame) - not more and not less.
+
+       We can decode data chunks that contain partial video frame data, too. In
+       that case, you cannot trust the value of the start_time field in fHeader.
+       We simply have no logic in place to establish a meaningful relationship
+       between an incomplete frame and the start time it should be presented.
+       Though this     might change in the future.
+
+       We can decode data chunks that contain more than one video frame, too. In
+       that case, you cannot trust the value of the start_time field in fHeader.
+       We simply have no logic in place to track the start_time across multiple
+       video frames. So a meaningful relationship between the 2nd, 3rd, ... frame
+       and the start time it should be presented isn't established at the moment.
+       Though this     might change in the future.
+
+       \return B_OK, when we successfully decoded one video frame
  */
 status_t
 AVCodecDecoder::_DecodeNextVideoFrame()
 {
        assert(fTempPacket.size >= 0);
 
-       bool firstRun = true;
        while (true) {
                media_header chunkMediaHeader;
 
@@ -730,6 +748,31 @@ AVCodecDecoder::_DecodeNextVideoFrame()
                                fChunkBuffer));
                        fTempPacket.size = fChunkBufferSize;
 
+                       fContext->reordered_opaque = chunkMediaHeader.start_time;
+                               // Let ffmpeg handle the relationship between start_time and
+                               // decoded video frame.
+                               //
+                               // Explanation:
+                               // The received chunk buffer may not contain the next video
+                               // frame to be decoded, due to frame reordering (e.g. MPEG1/2
+                               // provides encoded video frames in a different order than the
+                               // decoded video frame).
+                               //
+                               // FIXME: Research how to establish a meaningful relationship
+                               // between start_time and decoded video frame when the received
+                               // chunk buffer contains partial video frames. Maybe some data
+                               // formats contain time stamps (ake pts / dts fields) that can
+                               // be evaluated by FFMPEG. But as long as I don't have such
+                               // video data to test it, it makes no sense to implement it.
+                               //
+                               // FIXME: Implement tracking start_time of video frames
+                               // originating in data chunks that encode more than one video
+                               // frame at a time. In that case on would increment the
+                               // start_time for each consecutive frame of such a data chunk
+                               // (like it is done for audio frame decoding). But as long as
+                               // I don't have such video data to test it, it makes no sense
+                               // to implement it.
+
 #ifdef LOG_STREAM_TO_FILE
                        if (sDumpedPackets < 100) {
                                sStreamLogFile.Write(fChunkBuffer, fChunkBufferSize);
@@ -740,13 +783,6 @@ AVCodecDecoder::_DecodeNextVideoFrame()
 #endif
                }
 
-               if (firstRun) {
-                       firstRun = false;
-
-                       fHeader.start_time = chunkMediaHeader.start_time;
-                       fStartTime = chunkMediaHeader.start_time;
-               }
-
 #if DO_PROFILING
                bigtime_t startTime = system_time();
 #endif
@@ -857,12 +893,13 @@ AVCodecDecoder::_UpdateMediaHeaderForVideoFrame()
        fHeader.type = B_MEDIA_RAW_VIDEO;
        fHeader.file_pos = 0;
        fHeader.orig_size = 0;
+       fHeader.start_time = fRawDecodedPicture->reordered_opaque;
        fHeader.u.raw_video.field_gamma = 1.0;
        fHeader.u.raw_video.field_sequence = fFrame;
        fHeader.u.raw_video.field_number = 0;
        fHeader.u.raw_video.pulldown_number = 0;
        fHeader.u.raw_video.first_active_line = 1;
-       fHeader.u.raw_video.line_count = fContext->height;
+       fHeader.u.raw_video.line_count = fRawDecodedPicture->height;
 
        TRACE("[v] start_time=%02d:%02d.%02d field_sequence=%lu\n",
                int((fHeader.start_time / 60000000) % 60),

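The mechanism this commit relies on, reduced to its essentials:
AVCodecContext::reordered_opaque is sampled by FFMPEG when input enters the
decoder and handed back in the matching output frame's reordered_opaque
field, so the value survives frame reordering (B-frames and the like). A
sketch with placeholder names (context, picture, packet and chunkStartTime
are illustrative; reordered_opaque and avcodec_decode_video2() are real):

    // Stamp the context with the start time of the chunk about to be fed in.
    context->reordered_opaque = chunkStartTime;

    int gotPicture = 0;
    avcodec_decode_video2(context, picture, &gotPicture, &packet);

    if (gotPicture) {
        // Read back the start time belonging to *this* output picture; due
        // to reordering it may stem from an earlier input chunk.
        int64_t presentationStartTime = picture->reordered_opaque;
        // ... this is what _UpdateMediaHeaderForVideoFrame() assigns to
        // fHeader.start_time ...
    }
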
############################################################################

Revision:    hrev47576
Commit:      f78313627450a8c919f7ddb70f79f9f0662ee661
URL:         http://cgit.haiku-os.org/haiku/commit/?id=f783136
Author:      Colin Günther <coling@xxxxxx>
Date:        Sat Jul 26 13:03:09 2014 UTC

FFMPEG-Plugin: Fix doxygen style and typo.

- No functional change intended.

Signed-off-by: Colin Günther <coling@xxxxxx>

----------------------------------------------------------------------------

diff --git a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
index 0856331..7fa1aab 100644
--- a/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
+++ b/src/add-ons/media/plugins/ffmpeg/AVCodecDecoder.cpp
@@ -652,22 +652,22 @@ AVCodecDecoder::_DecodeAudio(void* _buffer, int64* outFrameCount,
 }
 
 
-/*! Fills the outBuffer with an already decoded video frame.
+/*! \brief Fills the outBuffer with an already decoded video frame.
 
        Besides the main duty described above, this method also fills out the other
        output parameters as documented below.
 
-       @param outBuffer Pointer to the output buffer to copy the decoded video
+       \param outBuffer Pointer to the output buffer to copy the decoded video
                frame to.
-       @param outFrameCount Pointer to the output variable to assign the number of
+       \param outFrameCount Pointer to the output variable to assign the number of
                copied video frames (usually one video frame).
-       @param mediaHeader Pointer to the output media header that contains the
+       \param mediaHeader Pointer to the output media header that contains the
                decoded video frame properties.
-       @param info TODO (not used at the moment)
+       \param info TODO (not used at the moment)
 
-       @return B_OK Decoding a video frame succeeded.
-       @return B_LAST_BUFFER_ERROR There are no more video frames available.
-       @return other error codes
+       \returns B_OK Decoding a video frame succeeded.
+       \returns B_LAST_BUFFER_ERROR There are no more video frames available.
+       \returns other error codes
 */
 status_t
 AVCodecDecoder::_DecodeVideo(void* outBuffer, int64* outFrameCount,
@@ -689,7 +689,7 @@ AVCodecDecoder::_DecodeVideo(void* outBuffer, int64* outFrameCount,
 }
 
 
-/*! Decode next video frame
+/*! \brief Decode next video frame
 
     We decode exactly one video frame into fDecodedData. To achieve this goal,
     we might need to request several chunks of encoded data resulting in a
@@ -872,10 +872,10 @@ AVCodecDecoder::_DecodeNextVideoFrame()
 }
 
 
-/*! Updates relevant fields of the class member fHeader with the properties of
+/*! \brief Updates relevant fields of the class member fHeader with the properties of
        the most recently decoded video frame.
 
-       It is assumed tat this function is called in _DecodeNextVideoFrame() only
+       It is assumed that this function is called in _DecodeNextVideoFrame() only
        when the following asserts hold true:
                1. We actually got a new picture decoded by the video decoder.
                2. fHeader wasn't updated for the new picture yet. You MUST call this
@@ -909,7 +909,7 @@ AVCodecDecoder::_UpdateMediaHeaderForVideoFrame()
 }
 
 
-/*! This function applies deinterlacing (only if needed) and color conversion
+/*! \brief This function applies deinterlacing (only if needed) and color conversion
     to the video frame in fRawDecodedPicture.
 
        It is assumed that fRawDecodedPicture wasn't deinterlaced and color

