[haiku-development] Re: GSoC EHCI

  • From: Gabriel Hartmann <gabriel.hartmann@xxxxxxxxx>
  • To: Philippe Houdoin <philippe.houdoin@xxxxxxxxx>, Jérôme Duval <jerome.duval@xxxxxxxxx>, haiku-development <haiku-development@xxxxxxxxxxxxx>
  • Date: Tue, 2 Aug 2011 23:01:24 +0000

Hi Philippe and Jérôme and the Development mailing list,

Ok, so I've been digging around in ehci and I don't understand much of
it at all.  Dealing with KDL crashes etc. could similarly take up all
my time and accomplish little.  There are fewer than three weeks left,
and I think I'd probably just start understanding that stuff and then
run out of time.  So I'd like to do something substantive instead of
just teaching myself about ehci and kernel debugging: I'd like to try
to interpret what little payload data I'm getting.  We spoke briefly
about this in a previous e-mail.  See below:

>> If I didn't miss something, these results indicates that the payload
>> data after the payload header needs to be deframed and passed to the
>> VideoProducer before hoping seeing anything but a black frame :-)
>>
>> For that, you'll need to implement UVCCamDevice::FillFrameBuffer()
>> and/or UVCCamDevice::GetFrameBitmap().
>> You may find it easier to move the UVC deframing in a dedicated
>> object, and thus create an UVCDeframer class.

>I've been looking at the Sonix GetFrameBitmap and FillFrameBuffer
>code.  Could you give me a little more detail about how deframers
>work?  It looks like a CamDeframer makes reference to a BList of
>frames.  How is this list populated?  Otherwise the two methods which
>need implementing look pretty straightforward.


I think this approach is a good idea, so that if ehci is fixed in the
future, at least the payload data can be interpreted easily.  In any
case, my questions about deframers still stand.  I believe I should be
creating a UVCDeframer class which extends CamDeframer and implements
the virtual functions defined there.  Is there any documentation about
what exactly these functions are supposed to do?  Then I implement
FillFrameBuffer() or GetFrameBitmap() using the deframer, and we
should be pretty close to showing pictures on screen... at least for
20 seconds, right?
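To make my current understanding concrete, here is the rough shape I'm
imagining for the deframing step.  This is purely a sketch: the class
and method names are illustrative, not the actual CamDeframer
interface, and it's standalone rather than a CamDeframer subclass.
The header layout it parses is the one from the UVC 1.1 spec (byte 0 =
bHeaderLength, byte 1 = bmHeaderInfo, with FID = 0x01 toggling on each
new frame and EOF = 0x02 marking a frame's last payload):

```cpp
#include <cstdint>
#include <cstddef>
#include <deque>
#include <vector>

// Illustrative UVC deframer sketch (not the Haiku CamDeframer API).
// Strips the per-payload UVC header and assembles image frames.
class UVCDeframer {
public:
	UVCDeframer() : fCurrentFID(-1) {}

	// Feed one USB transfer's payload (header + image data).
	void Write(const uint8_t* data, size_t length)
	{
		if (length < 2 || data[0] > length)
			return; // malformed payload, drop it

		uint8_t headerLength = data[0];
		uint8_t flags = data[1];
		int fid = flags & 0x01;

		// FID toggled without an EOF: previous frame ended implicitly.
		if (fCurrentFID != -1 && fid != fCurrentFID && !fCurrent.empty())
			_FinishFrame();
		fCurrentFID = fid;

		// Append everything after the payload header to the frame.
		fCurrent.insert(fCurrent.end(), data + headerLength, data + length);

		if (flags & 0x02) // EOF: this payload completes the frame
			_FinishFrame();
	}

	// Pop the oldest completed frame; false if none is ready.
	bool GetFrame(std::vector<uint8_t>& frame)
	{
		if (fFrames.empty())
			return false;
		frame = std::move(fFrames.front());
		fFrames.pop_front();
		return true;
	}

private:
	void _FinishFrame()
	{
		fFrames.push_back(std::move(fCurrent));
		fCurrent.clear();
		fCurrentFID = -1;
	}

	std::vector<uint8_t> fCurrent;            // frame being assembled
	std::deque<std::vector<uint8_t> > fFrames; // completed frames
	int fCurrentFID;
};
```

If something like this is right, then FillFrameBuffer() would just
Write() incoming payloads and hand each completed frame to the
producer, and the "BList of frames" would be the real CamDeframer's
counterpart to fFrames here.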

I'm trying to figure this out on my own, but I just keep climbing up
the inheritance tree into less and less familiar territory.  I guess
the key questions are: what are the overridden functions in the
deframer supposed to do, and who populates the BList of frames
(fFrames)?

-- Gabriel
