[haiku-commits] r38807 - haiku/trunk/src/add-ons/media/plugins/ffmpeg

  • From: superstippi@xxxxxx
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Fri, 24 Sep 2010 18:54:42 +0200 (CEST)

Author: stippi
Date: 2010-09-24 18:54:42 +0200 (Fri, 24 Sep 2010)
New Revision: 38807
Changeset: http://dev.haiku-os.org/changeset/38807

Modified:
   haiku/trunk/src/add-ons/media/plugins/ffmpeg/AVFormatReader.cpp
   haiku/trunk/src/add-ons/media/plugins/ffmpeg/AVFormatReader.h
Log:
Rewrote keyframe finding and seeking. The problem was that,
in many situations, FindKeyframe() was unable to reliably
predict which frame Seek() would actually be able to seek to.
 * Refactored a new base class StreamBase from the old
   StreamCookie, renamed StreamCookie to just Stream.
 * In FindKeyframe(), Stream will create a "ghost" StreamBase
   instance, which is used to actually seek in the stream
   without modifying the AVFormatContext of the real Stream.
   From that we can tell what position we can /really/ seek
   to. Mostly for AVIs, it is important to still use
   av_index_search_timestamp(): for many AVIs I tested,
   reading the next packet after seeking did not produce a
   timestamp, while the index entry contained the correct
   one. If the next packet does contain a PTS, it will still
   override the index timestamp, though.
 * Contrary to my previous belief, there was still a locking
   problem with how MediaPlayer used the BMediaTracks: the video
   decoding thread and the playback manager both used
   FindKeyframe() without holding the same lock. We now support
   this by using one BLocker per Stream. (The source BDataIO is
   still protected by another, single lock.) With the new ghost
   stream, the locking problem became much more pronounced;
   previously, FindKeyframe() had a much rarer race condition,
   which would only trip when the decoding thread caused new
   index entries to be inserted into the index.
 * Use the same ByteIOContext buffer size that avformat would be
   using if it initialized the ByteIOContext through its other API.
 * Don't leak the probe buffer in case of error.
 * Don't leak the ByteIOContext buffer in the end.
 * Do not discard other stream packets anymore. This makes the
   ASF demuxer happy, and ASF files can now be seeked just as
   well as in ffplay itself.
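The "ghost" stream idea above can be sketched independently of the Haiku and FFmpeg APIs. In this minimal illustration, every name is a hypothetical stand-in: DemuxState models the AVFormatContext, SeekToKeyframe() models av_seek_frame(), and std::mutex stands in for BLocker. FindKeyFrame() answers "where would Seek() actually land?" by seeking a disposable copy of the demuxer state:

```cpp
#include <algorithm>
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical stand-in for the demuxer state that a real seek mutates.
struct DemuxState {
    std::vector<int64_t> keyframes;  // sorted keyframe timestamps
    int64_t position = 0;            // current read position
};

// Seek to the nearest keyframe at or before `time`, mutating the state
// (this plays the role of av_seek_frame() on an AVFormatContext).
int64_t SeekToKeyframe(DemuxState& state, int64_t time)
{
    auto it = std::upper_bound(state.keyframes.begin(),
        state.keyframes.end(), time);
    if (it != state.keyframes.begin())
        --it;
    state.position = *it;
    return state.position;
}

// Predict where Seek() would land without disturbing the real stream:
// seek a throw-away copy ("ghost") of the state instead.
int64_t FindKeyFrame(const DemuxState& real, std::mutex& streamLock,
    int64_t time)
{
    std::lock_guard<std::mutex> _(streamLock);
    DemuxState ghost = real;  // the real context stays untouched
    return SeekToKeyframe(ghost, time);
}
```

Because the ghost is a copy, any number of candidate positions can be probed without moving the packet reader that other threads depend on.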
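The two-level locking scheme from the list above (one BLocker per Stream, plus a single lock shared by all streams for the source BDataIO) can be sketched like this. Source, Stream, and ReadChunk() are illustrative assumptions, not the actual Haiku classes, and std::mutex again stands in for BLocker:

```cpp
#include <cstdint>
#include <mutex>

// Models the shared BDataIO: one lock guards its position and I/O
// for all streams that read from it.
struct Source {
    std::mutex lock;
    int64_t position = 0;
};

class Stream {
public:
    explicit Stream(Source* source) : fSource(source) {}

    // The video decoding thread and the playback manager may call this
    // concurrently; the per-stream lock serializes them, while the
    // source lock is held only for the brief shared access.
    int64_t ReadChunk(int64_t size)
    {
        std::lock_guard<std::mutex> streamLocker(fStreamLock);
        fBytesRead += size;  // per-stream state, per-stream lock
        std::lock_guard<std::mutex> sourceLocker(fSource->lock);
        fSource->position += size;  // shared state, shared lock
        return fBytesRead;
    }

    int64_t BytesRead()
    {
        std::lock_guard<std::mutex> _(fStreamLock);
        return fBytesRead;
    }

private:
    Source*     fSource;
    std::mutex  fStreamLock;  // one lock per Stream
    int64_t     fBytesRead = 0;
};
```

Splitting the locks this way lets two streams make progress independently while still serializing their accesses to the one underlying file position.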

With these changes, all my MPEG test streams work. Some could be seeked
before, but would show bad artifacts. Some streams would completely lose
video after seeking once. My MPEG2 test stream works much better now,
although audio is slightly out of sync, unfortunately. All my test AVIs
work as well as before, and MP4 and MKV still work perfectly. The single
test ASF I got is now perfectly seekable.


Modified: haiku/trunk/src/add-ons/media/plugins/ffmpeg/AVFormatReader.cpp
===================================================================
--- haiku/trunk/src/add-ons/media/plugins/ffmpeg/AVFormatReader.cpp	2010-09-24 16:40:56 UTC (rev 38806)
+++ haiku/trunk/src/add-ons/media/plugins/ffmpeg/AVFormatReader.cpp	2010-09-24 16:54:42 UTC (rev 38807)
@@ -91,21 +91,20 @@
 }
 
 
-// #pragma mark - AVFormatReader::StreamCookie
+// #pragma mark - StreamBase
 
 
-class AVFormatReader::StreamCookie {
+class StreamBase {
 public:
-                                                               StreamCookie(BPositionIO* source,
-                                                                       BLocker* streamLock);
-       virtual                                         ~StreamCookie();
+                                                               StreamBase(BPositionIO* source,
+                                                                       BLocker* sourceLock, BLocker* streamLock);
+       virtual                                         ~StreamBase();
 
        // Init an indivual AVFormatContext
                        status_t                        Open();
 
        // Setup this stream to point to the AVStream at the given streamIndex.
-       // This will also initialize the media_format.
-                       status_t                        Init(int32 streamIndex);
+       virtual status_t                        Init(int32 streamIndex);
 
        inline  const AVFormatContext* Context() const
                                                                        { return fContext; }
@@ -115,31 +114,18 @@
        inline  int32                           VirtualIndex() const
                                                                        { return fVirtualIndex; }
 
-                       status_t                        GetMetaData(BMessage* data);
-
-       inline  const media_format&     Format() const
-                                                                       { return fFormat; }
-
                        double                          FrameRate() const;
                        bigtime_t                       Duration() const;
 
-       // Support for AVFormatReader
-                       status_t                        GetStreamInfo(int64* frameCount,
-                                                                       bigtime_t* duration, media_format* format,
-                                                                       const void** infoBuffer,
-                                                                       size_t* infoSize) const;
-
-                       status_t                        FindKeyFrame(uint32 flags, int64* frame,
-                                                                       bigtime_t* time) const;
-                       status_t                        Seek(uint32 flags, int64* frame,
+       virtual status_t                        Seek(uint32 flags, int64* frame,
                                                                        bigtime_t* time);
 
                        status_t                        GetNextChunk(const void** chunkBuffer,
                                                                        size_t* chunkSize,
                                                                        media_header* mediaHeader);
 
-private:
-       // I/O hooks for libavformat, cookie will be a StreamCookie instance.
+protected:
+       // I/O hooks for libavformat, cookie will be a Stream instance.
        // Since multiple StreamCookies use the same BPositionIO source, they
        // maintain the position individually, and may need to seek the source
        // if it does not match anymore in _Read().
@@ -160,42 +146,38 @@
                        int64_t                         _ConvertToStreamTimeBase(bigtime_t time) const;
                        bigtime_t                       _ConvertFromStreamTimeBase(int64_t time) const;
 
-private:
+protected:
                        BPositionIO*            fSource;
                        off_t                           fPosition;
                        // Since different threads may read from the source,
                        // we need to protect the file position and I/O by a lock.
+                       BLocker*                        fSourceLock;
+
                        BLocker*                        fStreamLock;
 
                        AVFormatContext*        fContext;
                        AVStream*                       fStream;
                        int32                           fVirtualIndex;
 
+                       media_format            fFormat;
+
                        ByteIOContext           fIOContext;
 
                        AVPacket                        fPacket;
                        bool                            fReusePacket;
 
-                       media_format            fFormat;
-
-       mutable bool                            fStreamBuildsIndexWhileReading;
                        bool                            fSeekByBytes;
-
-                       struct KeyframeInfo {
-                               bigtime_t               time;
-                               int64                   frame;
-                               int64                   streamTimeStamp;
-                       };
-       mutable KeyframeInfo            fLastReportedKeyframe;
+                       bool                            fStreamBuildsIndexWhileReading;
 };
 
 
-
-AVFormatReader::StreamCookie::StreamCookie(BPositionIO* source,
+StreamBase::StreamBase(BPositionIO* source, BLocker* sourceLock,
                BLocker* streamLock)
        :
        fSource(source),
        fPosition(0),
+       fSourceLock(sourceLock),
+
        fStreamLock(streamLock),
 
        fContext(NULL),
@@ -204,42 +186,48 @@
 
        fReusePacket(false),
 
-       fStreamBuildsIndexWhileReading(false),
-       fSeekByBytes(false)
+       fSeekByBytes(false),
+       fStreamBuildsIndexWhileReading(false)
 {
+       // NOTE: Don't use streamLock here, it may not yet be initialized!
+       av_new_packet(&fPacket, 0);
        memset(&fFormat, 0, sizeof(media_format));
-       av_new_packet(&fPacket, 0);
-
-       fLastReportedKeyframe.time = 0;
-       fLastReportedKeyframe.frame = 0;
-       fLastReportedKeyframe.streamTimeStamp = 0;
 }
 
 
-AVFormatReader::StreamCookie::~StreamCookie()
+StreamBase::~StreamBase()
 {
+       av_free(fIOContext.buffer);
        av_free_packet(&fPacket);
        av_free(fContext);
 }
 
 
 status_t
-AVFormatReader::StreamCookie::Open()
+StreamBase::Open()
 {
+       BAutolock _(fStreamLock);
+
        // Init probing data
+       size_t bufferSize = 32768;
+       uint8* buffer = static_cast<uint8*>(av_malloc(bufferSize));
+       if (buffer == NULL)
+               return B_NO_MEMORY;
+
        size_t probeSize = 2048;
-       uint8* probeBuffer = (uint8*)malloc(probeSize);
        AVProbeData probeData;
        probeData.filename = "";
-       probeData.buf = probeBuffer;
+       probeData.buf = buffer;
        probeData.buf_size = probeSize;
 
        // Read a bit of the input...
        // NOTE: Even if other streams have already read from the source,
        // it is ok to not seek first, since our fPosition is 0, so the necessary
        // seek will happen automatically in _Read().
-       if (_Read(this, probeBuffer, probeSize) != (ssize_t)probeSize)
+       if (_Read(this, buffer, probeSize) != (ssize_t)probeSize) {
+               av_free(buffer);
                return B_IO_ERROR;
+       }
        // ...and seek back to the beginning of the file. This is important
        // since libavformat will assume the stream to be at offset 0, the
        // probe data is not reused.
@@ -249,12 +237,12 @@
        AVInputFormat* inputFormat = av_probe_input_format(&probeData, 1);
 
        if (inputFormat == NULL) {
-               TRACE("AVFormatReader::StreamCookie::Open() - "
-                       "av_probe_input_format() failed!\n");
+               TRACE("StreamBase::Open() - av_probe_input_format() failed!\n");
+               av_free(buffer);
                return B_NOT_SUPPORTED;
        }
 
-       TRACE("AVFormatReader::StreamCookie::Open() - "
+       TRACE("StreamBase::Open() - "
                "av_probe_input_format(): %s\n", inputFormat->name);
        TRACE("  flags:%s%s%s%s%s\n",
                (inputFormat->flags & AVFMT_GLOBALHEADER) ? " AVFMT_GLOBALHEADER" : "",
@@ -266,25 +254,23 @@
 
        // Init I/O context with buffer and hook functions, pass ourself as
        // cookie.
-       if (init_put_byte(&fIOContext, probeBuffer, probeSize, 0, this,
+       memset(buffer, 0, bufferSize);
+       if (init_put_byte(&fIOContext, buffer, bufferSize, 0, this,
                        _Read, 0, _Seek) != 0) {
-               TRACE("AVFormatReader::StreamCookie::Open() - "
-                       "init_put_byte() failed!\n");
+               TRACE("StreamBase::Open() - init_put_byte() failed!\n");
                return B_ERROR;
        }
 
        // Initialize our context.
        if (av_open_input_stream(&fContext, &fIOContext, "", inputFormat,
                        NULL) < 0) {
-               TRACE("AVFormatReader::StreamCookie::Open() - "
-                       "av_open_input_stream() failed!\n");
+               TRACE("StreamBase::Open() - av_open_input_stream() failed!\n");
                return B_NOT_SUPPORTED;
        }
 
        // Retrieve stream information
        if (av_find_stream_info(fContext) < 0) {
-               TRACE("AVFormatReader::StreamCookie::Open() - "
-                       "av_find_stream_info() failed!\n");
+               TRACE("StreamBase::Open() - av_find_stream_info() failed!\n");
                return B_NOT_SUPPORTED;
        }
 
@@ -293,7 +279,7 @@
                = (inputFormat->flags & AVFMT_GENERIC_INDEX) != 0
                        || fSeekByBytes;
 
-       TRACE("AVFormatReader::StreamCookie::Open() - "
+       TRACE("StreamBase::Open() - "
                "av_find_stream_info() success! Seeking by bytes: %d\n",
                fSeekByBytes);
 
@@ -302,16 +288,18 @@
 
 
 status_t
-AVFormatReader::StreamCookie::Init(int32 virtualIndex)
+StreamBase::Init(int32 virtualIndex)
 {
-       TRACE("AVFormatReader::StreamCookie::Init(%ld)\n", virtualIndex);
+       BAutolock _(fStreamLock);
 
+       TRACE("StreamBase::Init(%ld)\n", virtualIndex);
+
        if (fContext == NULL)
                return B_NO_INIT;
 
        int32 streamIndex = StreamIndexFor(virtualIndex);
        if (streamIndex < 0) {
-               TRACE("  Bad stream index!\n");
+               TRACE("  bad stream index!\n");
                return B_BAD_INDEX;
        }
 
@@ -324,278 +312,23 @@
        // Make us point to the AVStream at streamIndex
        fStream = fContext->streams[streamIndex];
 
-       // Discard all other streams
-       for (unsigned i = 0; i < fContext->nb_streams; i++) {
-               if (i != (unsigned)streamIndex)
-                       fContext->streams[i]->discard = AVDISCARD_ALL;
-       }
+// NOTE: Discarding other streams works for most, but not all containers,
+// for example it does not work for the ASF demuxer. Since I don't know what
+// other demuxer it breaks, let's just keep reading packets for unwanted
+// streams, it just makes the _GetNextPacket() function slightly less
+// efficient.
+//     // Discard all other streams
+//     for (unsigned i = 0; i < fContext->nb_streams; i++) {
+//             if (i != (unsigned)streamIndex)
+//                     fContext->streams[i]->discard = AVDISCARD_ALL;
+//     }
 
-       // Get a pointer to the AVCodecContext for the stream at streamIndex.
-       AVCodecContext* codecContext = fStream->codec;
-
-#if 0
-// stippi: Here I was experimenting with the question if some fields of the
-// AVCodecContext change (or get filled out at all), if the AVCodec is opened.
-       class CodecOpener {
-       public:
-               CodecOpener(AVCodecContext* context)
-               {
-                       fCodecContext = context;
-                       AVCodec* codec = avcodec_find_decoder(context->codec_id);
-                       fCodecOpen = avcodec_open(context, codec) >= 0;
-                       if (!fCodecOpen)
-                               TRACE("  failed to open the codec!\n");
-               }
-               ~CodecOpener()
-               {
-                       if (fCodecOpen)
-                               avcodec_close(fCodecContext);
-               }
-       private:
-               AVCodecContext*         fCodecContext;
-               bool                            fCodecOpen;
-       } codecOpener(codecContext);
-#endif
-
-       // initialize the media_format for this stream
-       media_format* format = &fFormat;
-       memset(format, 0, sizeof(media_format));
-
-       media_format_description description;
-
-       // Set format family and type depending on codec_type of the stream.
-       switch (codecContext->codec_type) {
-               case AVMEDIA_TYPE_AUDIO:
-                       if ((codecContext->codec_id >= CODEC_ID_PCM_S16LE)
-                               && (codecContext->codec_id <= CODEC_ID_PCM_U8)) {
-                               TRACE("  raw audio\n");
-                               format->type = B_MEDIA_RAW_AUDIO;
-                               description.family = B_ANY_FORMAT_FAMILY;
-                               // This will then apparently be handled by the (built into
-                               // BMediaTrack) RawDecoder.
-                       } else {
-                               TRACE("  encoded audio\n");
-                               format->type = B_MEDIA_ENCODED_AUDIO;
-                               description.family = B_MISC_FORMAT_FAMILY;
-                               description.u.misc.file_format = 'ffmp';
-                       }
-                       break;
-               case AVMEDIA_TYPE_VIDEO:
-                       TRACE("  encoded video\n");
-                       format->type = B_MEDIA_ENCODED_VIDEO;
-                       description.family = B_MISC_FORMAT_FAMILY;
-                       description.u.misc.file_format = 'ffmp';
-                       break;
-               default:
-                       TRACE("  unknown type\n");
-                       format->type = B_MEDIA_UNKNOWN_TYPE;
-                       break;
-       }
-
-       if (format->type == B_MEDIA_RAW_AUDIO) {
-               // We cannot describe all raw-audio formats, some are unsupported.
-               switch (codecContext->codec_id) {
-                       case CODEC_ID_PCM_S16LE:
-                               format->u.raw_audio.format
-                                       = media_raw_audio_format::B_AUDIO_SHORT;
-                               format->u.raw_audio.byte_order
-                                       = B_MEDIA_LITTLE_ENDIAN;
-                               break;
-                       case CODEC_ID_PCM_S16BE:
-                               format->u.raw_audio.format
-                                       = media_raw_audio_format::B_AUDIO_SHORT;
-                               format->u.raw_audio.byte_order
-                                       = B_MEDIA_BIG_ENDIAN;
-                               break;
-                       case CODEC_ID_PCM_U16LE:
-//                             format->u.raw_audio.format
-//                                     = media_raw_audio_format::B_AUDIO_USHORT;
-//                             format->u.raw_audio.byte_order
-//                                     = B_MEDIA_LITTLE_ENDIAN;
-                               return B_NOT_SUPPORTED;
-                               break;
-                       case CODEC_ID_PCM_U16BE:
-//                             format->u.raw_audio.format
-//                                     = media_raw_audio_format::B_AUDIO_USHORT;
-//                             format->u.raw_audio.byte_order
-//                                     = B_MEDIA_BIG_ENDIAN;
-                               return B_NOT_SUPPORTED;
-                               break;
-                       case CODEC_ID_PCM_S8:
-                               format->u.raw_audio.format
-                                       = media_raw_audio_format::B_AUDIO_CHAR;
-                               break;
-                       case CODEC_ID_PCM_U8:
-                               format->u.raw_audio.format
-                                       = media_raw_audio_format::B_AUDIO_UCHAR;
-                               break;
-                       default:
-                               return B_NOT_SUPPORTED;
-                               break;
-               }
-       } else {
-               if (description.family == B_MISC_FORMAT_FAMILY)
-                       description.u.misc.codec = codecContext->codec_id;
-
-               BMediaFormats formats;
-               status_t status = formats.GetFormatFor(description, format);
-               if (status < B_OK)
-                       TRACE("  formats.GetFormatFor() error: %s\n", strerror(status));
-
-               format->user_data_type = B_CODEC_TYPE_INFO;
-               *(uint32*)format->user_data = codecContext->codec_tag;
-               format->user_data[4] = 0;
-       }
-
-       format->require_flags = 0;
-       format->deny_flags = B_MEDIA_MAUI_UNDEFINED_FLAGS;
-
-       switch (format->type) {
-               case B_MEDIA_RAW_AUDIO:
-                       format->u.raw_audio.frame_rate = (float)codecContext->sample_rate;
-                       format->u.raw_audio.channel_count = codecContext->channels;
-                       format->u.raw_audio.channel_mask = codecContext->channel_layout;
-                       format->u.raw_audio.byte_order
-                               = avformat_to_beos_byte_order(codecContext->sample_fmt);
-                       format->u.raw_audio.format
-                               = avformat_to_beos_format(codecContext->sample_fmt);
-                       format->u.raw_audio.buffer_size = 0;
-
-                       // Read one packet and mark it for later re-use. (So our first
-                       // GetNextChunk() call does not read another packet.)
-                       if (_NextPacket(true) == B_OK) {
-                               TRACE("  successfully determined audio buffer size: %d\n",
-                                       fPacket.size);
-                               format->u.raw_audio.buffer_size = fPacket.size;
-                       }
-                       break;
-
-               case B_MEDIA_ENCODED_AUDIO:
-                       format->u.encoded_audio.bit_rate = codecContext->bit_rate;
-                       format->u.encoded_audio.frame_size = codecContext->frame_size;
-                       // Fill in some info about possible output format
-                       format->u.encoded_audio.output
-                               = media_multi_audio_format::wildcard;
-                       format->u.encoded_audio.output.frame_rate
-                               = (float)codecContext->sample_rate;
-                       // Channel layout bits match in Be API and FFmpeg.
-                       format->u.encoded_audio.output.channel_count
-                               = codecContext->channels;
-                       format->u.encoded_audio.multi_info.channel_mask
-                               = codecContext->channel_layout;
-                       format->u.encoded_audio.output.byte_order
-                               = avformat_to_beos_byte_order(codecContext->sample_fmt);
-                       format->u.encoded_audio.output.format
-                               = avformat_to_beos_format(codecContext->sample_fmt);
-                       if (codecContext->block_align > 0) {
-                               format->u.encoded_audio.output.buffer_size
-                                       = codecContext->block_align;
-                       } else {
-                               format->u.encoded_audio.output.buffer_size
-                                       = codecContext->frame_size * codecContext->channels
-                                               * (format->u.encoded_audio.output.format
-                                                       & media_raw_audio_format::B_AUDIO_SIZE_MASK);
-                       }
-                       break;
-
-               case B_MEDIA_ENCODED_VIDEO:
-// TODO: Specifying any of these seems to throw off the format matching
-// later on.
-//                     format->u.encoded_video.avg_bit_rate = codecContext->bit_rate;
-//                     format->u.encoded_video.max_bit_rate = codecContext->bit_rate
-//                             + codecContext->bit_rate_tolerance;
-
-//                     format->u.encoded_video.encoding
-//                             = media_encoded_video_format::B_ANY;
-
-//                     format->u.encoded_video.frame_size = 1;
-//                     format->u.encoded_video.forward_history = 0;
-//                     format->u.encoded_video.backward_history = 0;
-
-                       format->u.encoded_video.output.field_rate = FrameRate();
-                       format->u.encoded_video.output.interlace = 1;
-
-                       format->u.encoded_video.output.first_active = 0;
-                       format->u.encoded_video.output.last_active
-                               = codecContext->height - 1;
-                               // TODO: Maybe libavformat actually provides that info
-                               // somewhere...
-                       format->u.encoded_video.output.orientation
-                               = B_VIDEO_TOP_LEFT_RIGHT;
-
-                       // Calculate the display aspect ratio
-                       AVRational displayAspectRatio;
-                   if (codecContext->sample_aspect_ratio.num != 0) {
-                               av_reduce(&displayAspectRatio.num, &displayAspectRatio.den,
-                                       codecContext->width
-                                               * codecContext->sample_aspect_ratio.num,
-                                       codecContext->height
-                                               * codecContext->sample_aspect_ratio.den,
-                                       1024 * 1024);
-                               TRACE("  pixel aspect ratio: %d/%d, "
-                                       "display aspect ratio: %d/%d\n",
-                                       codecContext->sample_aspect_ratio.num,
-                                       codecContext->sample_aspect_ratio.den,
-                                       displayAspectRatio.num, displayAspectRatio.den);
-                   } else {
-                               av_reduce(&displayAspectRatio.num, &displayAspectRatio.den,
-                                       codecContext->width, codecContext->height, 1024 * 1024);
-                               TRACE("  no display aspect ratio (%d/%d)\n",
-                                       displayAspectRatio.num, displayAspectRatio.den);
-                   }
-                       format->u.encoded_video.output.pixel_width_aspect
-                               = displayAspectRatio.num;
-                       format->u.encoded_video.output.pixel_height_aspect
-                               = displayAspectRatio.den;
-
-                       format->u.encoded_video.output.display.format
-                               = pixfmt_to_colorspace(codecContext->pix_fmt);
-                       format->u.encoded_video.output.display.line_width
-                               = codecContext->width;
-                       format->u.encoded_video.output.display.line_count
-                               = codecContext->height;
-                       TRACE("  width/height: %d/%d\n", codecContext->width,
-                               codecContext->height);
-                       format->u.encoded_video.output.display.bytes_per_row = 0;
-                       format->u.encoded_video.output.display.pixel_offset = 0;
-                       format->u.encoded_video.output.display.line_offset = 0;
-                       format->u.encoded_video.output.display.flags = 0; // TODO
-
-                       break;
-
-               default:
-                       // This is an unknown format to us.
-                       break;
-       }
-
-       // Add the meta data, if any
-       if (codecContext->extradata_size > 0) {
-               format->SetMetaData(codecContext->extradata,
-                       codecContext->extradata_size);
-               TRACE("  extradata: %p\n", format->MetaData());
-       }
-
-       TRACE("  extradata_size: %d\n", codecContext->extradata_size);
-//     TRACE("  intra_matrix: %p\n", codecContext->intra_matrix);
-//     TRACE("  inter_matrix: %p\n", codecContext->inter_matrix);
-//     TRACE("  get_buffer(): %p\n", codecContext->get_buffer);
-//     TRACE("  release_buffer(): %p\n", codecContext->release_buffer);
-
-#ifdef TRACE_AVFORMAT_READER
-       char formatString[512];
-       if (string_for_format(*format, formatString, sizeof(formatString)))
-               TRACE("  format: %s\n", formatString);
-
-       uint32 encoding = format->Encoding();
-       TRACE("  encoding '%.4s'\n", (char*)&encoding);
-#endif
-
        return B_OK;
 }
 
 
 int32
-AVFormatReader::StreamCookie::Index() const
+StreamBase::Index() const
 {
        if (fStream != NULL)
                return fStream->index;
@@ -604,7 +337,7 @@
 
 
 int32
-AVFormatReader::StreamCookie::CountStreams() const
+StreamBase::CountStreams() const
 {
        // Figure out the stream count. If the context has "AVPrograms", use
        // the first program (for now).
@@ -620,7 +353,7 @@
 
 
 int32
-AVFormatReader::StreamCookie::StreamIndexFor(int32 virtualIndex) const
+StreamBase::StreamIndexFor(int32 virtualIndex) const
 {
        // NOTE: See CountStreams()
        if (fContext->nb_programs > 0) {
@@ -637,17 +370,8 @@
 }
 
 
-status_t
-AVFormatReader::StreamCookie::GetMetaData(BMessage* data)
-{
-       avmetadata_to_message(fStream->metadata, data);
-
-       return B_OK;
-}
-
-
 double
-AVFormatReader::StreamCookie::FrameRate() const
+StreamBase::FrameRate() const
 {
        // TODO: Find a way to always calculate a correct frame rate...
        double frameRate = 1.0;
@@ -662,8 +386,10 @@
                                frameRate = av_q2d(fStream->r_frame_rate);
                        else if (fStream->time_base.den && fStream->time_base.num)
                                frameRate = 1 / av_q2d(fStream->time_base);
-                       else if (fStream->codec->time_base.den && fStream->codec->time_base.num)
+                       else if (fStream->codec->time_base.den
+                               && fStream->codec->time_base.num) {
                                frameRate = 1 / av_q2d(fStream->codec->time_base);
+                       }
+                       }
                        
                        // TODO: Fix up interlaced video for real
                        if (frameRate == 50.0f)
@@ -679,7 +405,7 @@
 
 
 bigtime_t
-AVFormatReader::StreamCookie::Duration() const
+StreamBase::Duration() const
 {
        // TODO: This is not working correctly for all stream types...
        // It seems that the calculations here are correct, because they work
@@ -696,205 +422,40 @@
 
 
 status_t
-AVFormatReader::StreamCookie::GetStreamInfo(int64* frameCount,
-       bigtime_t* duration, media_format* format, const void** infoBuffer,
-       size_t* infoSize) const
+StreamBase::Seek(uint32 flags, int64* frame, bigtime_t* time)
 {
-       TRACE("AVFormatReader::StreamCookie::GetStreamInfo(%ld)\n",
-               VirtualIndex());
+       BAutolock _(fStreamLock);
 
-       double frameRate = FrameRate();
-       TRACE("  frameRate: %.4f\n", frameRate);
-
-       #ifdef TRACE_AVFORMAT_READER
-       if (fStream->start_time != kNoPTSValue) {
-               bigtime_t startTime = _ConvertFromStreamTimeBase(fStream->start_time);
-               TRACE("  start_time: %lld or %.5fs\n", startTime,
-                       startTime / 1000000.0);
-               // TODO: Handle start time in FindKeyFrame() and Seek()?!
-       }
-       #endif // TRACE_AVFORMAT_READER
-
-       *duration = Duration();
-
-       TRACE("  duration: %lld or %.5fs\n", *duration, *duration / 1000000.0);
-
-       #if 0
-       if (fStream->nb_index_entries > 0) {
-               TRACE("  dump of index entries:\n");
-               int count = 5;
-               int firstEntriesCount = min_c(fStream->nb_index_entries, count);
-               int i = 0;
-               for (; i < firstEntriesCount; i++) {
-                       AVIndexEntry& entry = fStream->index_entries[i];
-                       bigtime_t timeGlobal = entry.timestamp;
-                       bigtime_t timeNative = _ConvertFromStreamTimeBase(timeGlobal);
-                       TRACE("    [%d] native: %.5fs global: %.5fs\n", i,
-                               timeNative / 1000000.0f, timeGlobal / 1000000.0f);
-               }
-               if (fStream->nb_index_entries - count > i) {
-                       i = fStream->nb_index_entries - count;
-                       TRACE("    ...\n");
-                       for (; i < fStream->nb_index_entries; i++) {
-                               AVIndexEntry& entry = fStream->index_entries[i];
-                               bigtime_t timeGlobal = entry.timestamp;
-                               bigtime_t timeNative = _ConvertFromStreamTimeBase(timeGlobal);
-                               TRACE("    [%d] native: %.5fs global: %.5fs\n", i,
-                                       timeNative / 1000000.0f, timeGlobal / 1000000.0f);
-                       }
-               }
-       }
-       #endif
-
-       *frameCount = fStream->nb_frames;
-//     if (*frameCount == 0) {
-               // Calculate from duration and frame rate
-               *frameCount = (int64)(*duration * frameRate / 1000000LL);
-               TRACE("  frameCount calculated: %lld, from context: %lld\n",
-                       *frameCount, fStream->nb_frames);
-//     } else
-//             TRACE("  frameCount: %lld\n", *frameCount);
-
-       *format = fFormat;
-
-       *infoBuffer = fStream->codec->extradata;
-       *infoSize = fStream->codec->extradata_size;
-
-       return B_OK;
-}
-
-
-status_t
-AVFormatReader::StreamCookie::FindKeyFrame(uint32 flags, int64* frame,
-       bigtime_t* time) const
-{
        if (fContext == NULL || fStream == NULL)
                return B_NO_INIT;
 
-       TRACE_FIND("AVFormatReader::StreamCookie::FindKeyFrame(%ld,%s%s%s%s, "
-               "%lld, %lld)\n", VirtualIndex(),
-               (flags & B_MEDIA_SEEK_TO_FRAME) ? " B_MEDIA_SEEK_TO_FRAME" : "",
-               (flags & B_MEDIA_SEEK_TO_TIME) ? " B_MEDIA_SEEK_TO_TIME" : "",
-               (flags & B_MEDIA_SEEK_CLOSEST_BACKWARD) ? " B_MEDIA_SEEK_CLOSEST_BACKWARD" : "",
-               (flags & B_MEDIA_SEEK_CLOSEST_FORWARD) ? " B_MEDIA_SEEK_CLOSEST_FORWARD" : "",
-               *frame, *time);
-
-       double frameRate = FrameRate();
-       if ((flags & B_MEDIA_SEEK_TO_FRAME) != 0)
-               *time = (bigtime_t)(*frame * 1000000.0 / frameRate + 0.5);
-
-       int64_t timeStamp = _ConvertToStreamTimeBase(*time);
-
-       TRACE_FIND("  time: %.2fs -> %lld (time_base: %d/%d)\n", *time / 1000000.0,
-               timeStamp, fStream->time_base.num, fStream->time_base.den);
-
-       int searchFlags = AVSEEK_FLAG_BACKWARD;
-       if ((flags & B_MEDIA_SEEK_CLOSEST_FORWARD) != 0)
-               searchFlags = 0;
-
-       int index = av_index_search_timestamp(fStream, timeStamp, searchFlags);
-       if (index < 0) {
-               TRACE("  av_index_search_timestamp() failed.\n");
-               // Best is to assume we can somehow seek to the time/frame
-               // and leave them as they are.
-       } else {
-               if (index > 0) {
-                       const AVIndexEntry& entry = fStream->index_entries[index];
-                       timeStamp = entry.timestamp;
-               } else {
-                       // Some demuxers use the first index entry to store some
-                       // other information, like the total playing time for example.
-                       // Assume the timeStamp of the first entry is alays 0.
-                       // TODO: Handle start-time offset?
-                       timeStamp = 0;
-               }
-                       
-               bigtime_t foundTime = _ConvertFromStreamTimeBase(timeStamp);
-               // It's really important that we can convert this back to the
-               // same time-stamp (i.e. FindKeyFrame() with the time we return
-               // should return the same time again)!
-               if (_ConvertToStreamTimeBase(foundTime) < timeStamp)
-                       foundTime++;
-               bigtime_t timeDiff = foundTime > *time
-                       ? foundTime - *time : *time - foundTime;
-       
-               if (timeDiff > 1000000
-                       && (fStreamBuildsIndexWhileReading
-                               || index == fStream->nb_index_entries - 1)) {
-                       // If the stream is building the index on the fly while parsing
-                       // it, we only have entries in the index for positions already
-                       // decoded, i.e. we cannot seek into the future. In that case,
-                       // just assume that we can seek where we want and leave time/frame
-                       // unmodified. Since successfully seeking one time will generate
-                       // index entries for the seeked to position, we need to remember
-                       // this in fStreamBuildsIndexWhileReading, since when seeking back
-                       // there will be later index entries, but we still want to ignore
-                       // the found entry.
-                       fStreamBuildsIndexWhileReading = true;
-                       TRACE_FIND("  Not trusting generic index entry. "
-                               "(Current count: %d)\n", fStream->nb_index_entries);
-               } else
-                       *time = foundTime;
-       }
-
-       TRACE_FIND("  found time: %.2fs (%lld)\n", *time / 1000000.0, timeStamp);
-       if ((flags & B_MEDIA_SEEK_TO_FRAME) != 0) {
-               *frame = int64_t(*time * frameRate / 1000000.0 + 0.5);
-               TRACE_FIND("  found frame: %lld\n", *frame);
-       }
-
-       fLastReportedKeyframe.frame = *frame;
-       fLastReportedKeyframe.time = *time;
-       fLastReportedKeyframe.streamTimeStamp = *time;
-
-       return B_OK;
-}
-
-
-status_t
-AVFormatReader::StreamCookie::Seek(uint32 flags, int64* frame,
-       bigtime_t* time)
-{
-       if (fContext == NULL || fStream == NULL)
-               return B_NO_INIT;
-
-       TRACE_SEEK("AVFormatReader::StreamCookie::Seek(%ld,%s%s%s%s, %lld, "
+       TRACE_SEEK("StreamBase::Seek(%ld,%s%s%s%s, %lld, "
                "%lld)\n", VirtualIndex(),
                (flags & B_MEDIA_SEEK_TO_FRAME) ? " B_MEDIA_SEEK_TO_FRAME" : "",
                (flags & B_MEDIA_SEEK_TO_TIME) ? " B_MEDIA_SEEK_TO_TIME" : "",
-               (flags & B_MEDIA_SEEK_CLOSEST_BACKWARD) ? " B_MEDIA_SEEK_CLOSEST_BACKWARD" : "",
-               (flags & B_MEDIA_SEEK_CLOSEST_FORWARD) ? " B_MEDIA_SEEK_CLOSEST_FORWARD" : "",
+               (flags & B_MEDIA_SEEK_CLOSEST_BACKWARD)
+                       ? " B_MEDIA_SEEK_CLOSEST_BACKWARD" : "",
+               (flags & B_MEDIA_SEEK_CLOSEST_FORWARD)
+                       ? " B_MEDIA_SEEK_CLOSEST_FORWARD" : "",
+               (flags & B_MEDIA_SEEK_CLOSEST_BACKWARD)
+                       ? " B_MEDIA_SEEK_CLOSEST_BACKWARD" : "",
+               (flags & B_MEDIA_SEEK_CLOSEST_FORWARD)
+                       ? " B_MEDIA_SEEK_CLOSEST_FORWARD" : "",
                *frame, *time);
 
-       int64_t timeStamp;
-
-       // Seeking is always based on time, initialize it when client seeks
-       // based on frame.
        double frameRate = FrameRate();
        if ((flags & B_MEDIA_SEEK_TO_FRAME) != 0) {
+               // Seeking is always based on time, initialize it when client seeks
+               // based on frame.
                *time = (bigtime_t)(*frame * 1000000.0 / frameRate + 0.5);
-               if (fLastReportedKeyframe.frame == *frame)
-                       timeStamp = fLastReportedKeyframe.streamTimeStamp;
-               else
-                       timeStamp = *time;
-       } else {
-               if (fLastReportedKeyframe.time == *time)
-                       timeStamp = fLastReportedKeyframe.streamTimeStamp;
-               else
-                       timeStamp = *time;
        }
 
-       TRACE_SEEK("  time: %.5fs -> %lld, current DTS: %lld (time_base: %d/%d)\n",
-               *time / 1000000.0, timeStamp, fStream->cur_dts, fStream->time_base.num,
-               fStream->time_base.den);
+       int64_t timeStamp = *time;
 
        int searchFlags = AVSEEK_FLAG_BACKWARD;
        if ((flags & B_MEDIA_SEEK_CLOSEST_FORWARD) != 0)
                searchFlags = 0;
 
        if (fSeekByBytes) {
-               searchFlags = AVSEEK_FLAG_BYTE;
-               BAutolock _(fStreamLock);
+               searchFlags |= AVSEEK_FLAG_BYTE;
+
+               BAutolock _(fSourceLock);
                int64_t fileSize;
                if (fSource->GetSize(&fileSize) != B_OK)
                        return B_NOT_SUPPORTED;
@@ -903,6 +464,11 @@
                        return B_NOT_SUPPORTED;
 
                timeStamp = int64_t(fileSize * ((double)timeStamp / duration));
+               if ((flags & B_MEDIA_SEEK_CLOSEST_BACKWARD) != 0) {
+                       timeStamp -= 65536;
+                       if (timeStamp < 0)
+                               timeStamp = 0;
+               }
 
                bool seekAgain = true;
                bool seekForward = true;
@@ -982,34 +548,77 @@
                                return B_ERROR;
                        }
                }
-
        } else {
-               int64_t minTimeStamp = timeStamp - 1000000;
-               if (minTimeStamp < 0)
-                       minTimeStamp = 0;
-               int64_t maxTimeStamp = timeStamp + 1000000;
+               // We may not get a PTS from the next packet after seeking, so
+               // we try to get an expected time from the index.
+               int64_t streamTimeStamp = _ConvertToStreamTimeBase(*time);
+               int index = av_index_search_timestamp(fStream, streamTimeStamp,
+                       searchFlags);
+               if (index >= 0) {
+                       if (index > 0) {
+                               const AVIndexEntry& entry = fStream->index_entries[index];
+                               streamTimeStamp = entry.timestamp;
+                       } else {
+                               // Some demuxers use the first index entry to store some
+                               // other information, like the total playing time for example.
+                               // Assume the timeStamp of the first entry is always 0.
+                               // TODO: Handle start-time offset?
+                               streamTimeStamp = 0;
+                       }
+                       bigtime_t foundTime = _ConvertFromStreamTimeBase(streamTimeStamp);
+                       bigtime_t timeDiff = foundTime > *time
+                               ? foundTime - *time : *time - foundTime;
+               
+                       if (timeDiff > 1000000
+                               && (fStreamBuildsIndexWhileReading
+                                       || index == fStream->nb_index_entries - 1)) {
+                               // If the stream is building the index on the fly while parsing
+                               // it, we only have entries in the index for positions already
+                               // decoded, i.e. we cannot seek into the future. In that case,
+                               // just assume that we can seek where we want and leave
+                               // time/frame unmodified. Since successfully seeking one time
+                               // will generate index entries for the seeked to position, we
+                               // need to remember this in fStreamBuildsIndexWhileReading,
+                               // since when seeking back there will be later index entries,
+                               // but we still want to ignore the found entry.
+                               fStreamBuildsIndexWhileReading = true;
+                               TRACE_FIND("  Not trusting generic index entry. "
+                                       "(Current count: %d)\n", fStream->nb_index_entries);
+                       } else {
+                               // If we found a reasonable time, write it into *time.
+                               // After seeking, we will try to read the sought time from
+                               // the next packet. If the packet has no PTS value, we may
+                               // still have a more accurate time from the index lookup.
+                               *time = foundTime;
+                       }
+               }
 
-               if (avformat_seek_file(fContext, -1, minTimeStamp, timeStamp,
-                       maxTimeStamp, searchFlags) < 0) {
-                       TRACE("  avformat_seek_file() failed.\n");
+               if (avformat_seek_file(fContext, -1, INT64_MIN, timeStamp, INT64_MAX,
+                               searchFlags) < 0) {
+                       TRACE("  avformat_seek_file()%s failed.\n", fSeekByBytes
+                               ? " (by bytes)" : "");
                        return B_ERROR;
                }
        
-               // Our last packet is toast in any case. Read the next one so we
-               // know where we really seeked.
+               // Our last packet is toast in any case. Read the next one so
+               // we know where we really sought.
+               bigtime_t foundTime = *time;
+       
                fReusePacket = false;
                if (_NextPacket(true) == B_OK) {
-                       if (fPacket.pts != kNoPTSValue) {
-                               *time = _ConvertFromStreamTimeBase(fPacket.pts);
-                               TRACE_SEEK("  seeked time: %.2fs\n", *time / 1000000.0);
-                               if ((flags & B_MEDIA_SEEK_TO_FRAME) != 0) {
-                                       *frame = *time * frameRate / 1000000LL + 0.5;
-                                       TRACE_SEEK("  seeked frame: %lld\n", *frame);
-                               }
-                       } else
+                       if (fPacket.pts != kNoPTSValue)
+                               foundTime = _ConvertFromStreamTimeBase(fPacket.pts);
+                       else
                                TRACE_SEEK("  no PTS in packet after seeking\n");
                } else
                        TRACE_SEEK("  _NextPacket() failed!\n");
+       
+               *time = foundTime;
+               TRACE_SEEK("  sought time: %.2fs\n", *time / 1000000.0);
+               if ((flags & B_MEDIA_SEEK_TO_FRAME) != 0) {
+                       *frame = *time * frameRate / 1000000.0 + 0.5;
+                       TRACE_SEEK("  sought frame: %lld\n", *frame);
+               }
        }
 
        return B_OK;
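[Editor's note] The byte-based branch of Seek() above estimates a seek position by scaling the target time linearly into the file, and backs off 64 KiB for B_MEDIA_SEEK_CLOSEST_BACKWARD so the demuxer can resync on a packet boundary before the wanted position. A minimal self-contained sketch of that estimate; the helper name is hypothetical, not part of the commit:

```cpp
#include <cstdint>

// Hypothetical helper mirroring the seek-by-bytes estimate in
// StreamBase::Seek(): map the target time linearly into the file and,
// when seeking backward, subtract 64 KiB of slack (clamped at 0) so the
// demuxer can resync before the requested position.
static int64_t
EstimateByteOffset(int64_t fileSize, int64_t time, int64_t duration,
	bool seekBackward)
{
	int64_t offset = int64_t(fileSize * ((double)time / duration));
	if (seekBackward) {
		offset -= 65536;
		if (offset < 0)
			offset = 0;
	}
	return offset;
}
```

Note that this only yields a byte position to hand to avformat_seek_file() with AVSEEK_FLAG_BYTE; the code above still verifies the landing spot by reading the next packet.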
@@ -1017,11 +626,13 @@
 
 
 status_t
-AVFormatReader::StreamCookie::GetNextChunk(const void** chunkBuffer,
+StreamBase::GetNextChunk(const void** chunkBuffer,
        size_t* chunkSize, media_header* mediaHeader)
 {
-       TRACE_PACKET("AVFormatReader::StreamCookie::GetNextChunk()\n");
+       BAutolock _(fStreamLock);
 
+       TRACE_PACKET("StreamBase::GetNextChunk()\n");
+
        // Get the last stream DTS before reading the next packet, since
        // then it points to that one.
        int64 lastStreamDTS = fStream->cur_dts;
@@ -1087,6 +698,23 @@
                }
        }
 
+//     static bigtime_t pts[2];
+//     static bigtime_t lastPrintTime = system_time();
+//     static BLocker printLock;
+//     if (fStream->index < 2) {
+//             if (fPacket.pts != kNoPTSValue)
+//                     pts[fStream->index] = _ConvertFromStreamTimeBase(fPacket.pts);
+//             printLock.Lock();
+//             bigtime_t now = system_time();
+//             if (now - lastPrintTime > 1000000) {
+//                     printf("PTS: %.4f/%.4f, diff: %.4f\r", pts[0] / 1000000.0,
+//                             pts[1] / 1000000.0, (pts[0] - pts[1]) / 1000000.0);
+//                     fflush(stdout);
+//                     lastPrintTime = now;
+//             }
+//             printLock.Unlock();
+//     }
+
        return B_OK;
 }
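[Editor's note] Both the removed FindKeyFrame() body and its replacement in Seek() convert between microseconds and the stream time base, and the removed code nudged the found time up by one microsecond when truncation made the round trip land one tick early ("It's really important that we can convert this back to the same time-stamp"). A standalone sketch of that round-trip, with hypothetical stand-ins for _ConvertFromStreamTimeBase()/_ConvertToStreamTimeBase() assuming plain truncating rational conversion:

```cpp
#include <cstdint>

// Hypothetical stand-ins for the stream time-base conversion helpers,
// assuming a rational time base num/den and truncating integer math.
struct TimeBase { int num; int den; };

static int64_t
FromStreamTimeBase(int64_t timeStamp, const TimeBase& tb)
{
	// stream ticks -> microseconds (truncates)
	return timeStamp * 1000000LL * tb.num / tb.den;
}

static int64_t
ToStreamTimeBase(int64_t time, const TimeBase& tb)
{
	// microseconds -> stream ticks (truncates)
	return time * tb.den / (1000000LL * tb.num);
}

// Reproduce the keyframe fix-up: make sure the time handed back to the
// client maps onto the same stream timestamp again.
static int64_t
RoundTripTime(int64_t timeStamp, const TimeBase& tb)
{
	int64_t foundTime = FromStreamTimeBase(timeStamp, tb);
	if (ToStreamTimeBase(foundTime, tb) < timeStamp)
		foundTime++;
	return foundTime;
}
```

For example, with a 1/30 time base, tick 7 converts to 233333 µs, which truncates back to tick 6; the one-microsecond nudge restores the round trip.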
 
@@ -1095,15 +723,14 @@
 
 
 /*static*/ int
-AVFormatReader::StreamCookie::_Read(void* cookie, uint8* buffer,
-       int bufferSize)
+StreamBase::_Read(void* cookie, uint8* buffer, int bufferSize)
 {
-       TRACE_IO("AVFormatReader::StreamCookie::_Read(%p, %p, %d)\n",
+       TRACE_IO("StreamBase::_Read(%p, %p, %d)\n",
                cookie, buffer, bufferSize);
 
-       StreamCookie* stream = reinterpret_cast<StreamCookie*>(cookie);
+       StreamBase* stream = reinterpret_cast<StreamBase*>(cookie);
 
-       BAutolock _(stream->fStreamLock);
+       BAutolock _(stream->fSourceLock);
 
        if (stream->fPosition != stream->fSource->Position()) {
                off_t position
@@ -1123,14 +750,14 @@
 
 
 /*static*/ off_t
-AVFormatReader::StreamCookie::_Seek(void* cookie, off_t offset, int whence)
+StreamBase::_Seek(void* cookie, off_t offset, int whence)
 {

[... truncated: 746 lines follow ...]
