[haiku-development] Re: A tale of two accelerant API's

  • From: looncraz <looncraz@xxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Tue, 12 Feb 2013 15:47:24 -0800

On 2/12/2013 15:04, Axel Dörfler wrote:
> What's basically missing is a MuxHWInterface that divides the virtual screen area among different backends. That also reminds me of something of relevance to the original thread topic: it's probably best to create an accelerant HWInterface factory that creates a different HWInterface for each head. I don't think a single HWInterface can deal with more than one screen yet, and having a common approach (i.e. the aforementioned MuxHWInterface) would probably be the most flexible solution.


Okay, I knew there was a missing piece of the puzzle. It would be best if the frame buffer were a simple rect and the DrawingEngines merely needed to handle a single frame buffer - whether it is split behind the scenes across multiple outputs is immaterial. This would then be handled on a card-by-card basis.
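
To make that concrete, roughly something like the sketch below - MuxHWInterface is only the proposed name, and everything else here (Frame(), Invalidate(), AddHead()) is hypothetical, not existing app_server API:

    // Hypothetical sketch only - none of these names exist in app_server
    // today. A factory hands out one HWInterface per head, and a
    // MuxHWInterface presents them to the DrawingEngines as one plain
    // rectangular frame buffer, splitting the work behind the scenes.
    #include <memory>
    #include <vector>

    struct IntRect { int left, top, right, bottom; };

    class HWInterface {
    public:
        virtual ~HWInterface() {}
        // The part of the virtual screen this backend covers.
        virtual IntRect Frame() const = 0;
        virtual void Invalidate(const IntRect& region) = 0;
    };

    class MuxHWInterface : public HWInterface {
    public:
        // The accelerant's factory would be asked once per head.
        void AddHead(std::unique_ptr<HWInterface> head)
            { fHeads.push_back(std::move(head)); }

        IntRect Frame() const override
        {
            // The union of all head frames: the DrawingEngines only
            // ever see this one simple rect.
            IntRect frame = { 0, 0, 0, 0 };
            for (const auto& head : fHeads) {
                IntRect f = head->Frame();
                if (f.right > frame.right)
                    frame.right = f.right;
                if (f.bottom > frame.bottom)
                    frame.bottom = f.bottom;
            }
            return frame;
        }

        void Invalidate(const IntRect& region) override
        {
            // Route the update to every head it touches (clipping the
            // region against each head's frame is omitted here).
            for (const auto& head : fHeads)
                head->Invalidate(region);
        }

    private:
        std::vector<std::unique_ptr<HWInterface>> fHeads;
    };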

> It would be annoying to finally have 3D drivers and then have to rewrite that part once again.

There is really little choice in the matter... When a 3D stack is ready and working, everything will need to be given a 3D context and the Desktop will need to be a scene. WindowBuffers would be treated as textures and the 2D drawing into them can remain unchanged, but the CompositeEngine would merely act as an intermediary with the video card rather than a multithreaded rendering director.
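
As a rough illustration of that mapping - every name below (Scene, Present(), the placeholder GPU calls) is invented for the sketch, not an actual API:

    // Hypothetical sketch: the Desktop as a scene of window textures.
    // The 2D drawing into each WindowBuffer is untouched; only the
    // final composite is handed to the card.
    #include <cstdint>
    #include <vector>

    struct IntRect { int left, top, right, bottom; };

    struct WindowBuffer {
        uint32_t textureID;  // the buffer, bound as a GPU texture
        IntRect frame;       // where the window sits on the Desktop
        float opacity;       // alpha mask for the composite pass
    };

    struct Scene {
        std::vector<WindowBuffer*> windows;  // back-to-front order
    };

    class CompositeEngine {
    public:
        // No multithreaded rendering director anymore: just describe
        // the scene and let the card composite it.
        void Present(const Scene& scene)
        {
            for (WindowBuffer* window : scene.windows)
                _DrawTexturedQuad(window->textureID, window->frame,
                    window->opacity);
            _SwapBuffers();
        }

    private:
        // Stand-ins for whatever the eventual 3D stack provides.
        void _DrawTexturedQuad(uint32_t texture,
            const IntRect& destination, float opacity) {}
        void _SwapBuffers() {}
    };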

The real question I have is whether or not there is any real benefit to this... 2D acceleration isn't going to go away, and you can perform pretty much all the same tricks in 2D as with 3D in terms of the Desktop environment... and nothing prevents you from mixing the two, either...


>> [...]
>> added while the current frame is being rendered. A group of threads,
>> one per core by default, use specialized DrawingEngines attached to
>> the frame buffer for the screen in question - this is where hardware
>> acceleration comes in. The only action these threads perform is
>> calling into DrawingEngine::DrawWindowBuffer(WindowBuffer* buffer,
>> IntRect source, IntRect destination).

> With 3D this would apparently happen earlier, as the actual compositing would not need to happen - the card would do that pretty much automatically given the scene (i.e. the window textures, their alpha mask, and where to draw them with what scale/perspective).


It is also possible for windows to not have buffers at all: everything paints into the back buffer in window order, and then the buffers flip... but we lose performance - sometimes drastically - that way... which doesn't mean per-window buffers are automatically the right way, either.
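
For reference, the buffered path quoted above might look roughly like the following - a worker pool, one thread per core, each owning a DrawingEngine and draining a queue of DrawWindowBuffer() jobs. Only the DrawWindowBuffer() signature comes from the description above; the rest, including the use of std::thread in place of Haiku's native threads, is invented to keep the sketch self-contained:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct IntRect { int left, top, right, bottom; };
    struct WindowBuffer { /* the window's private backing store */ };

    class DrawingEngine {
    public:
        // The one call the worker threads make.
        void DrawWindowBuffer(WindowBuffer* buffer, IntRect source,
            IntRect destination)
        {
            // The hardware-accelerated blit into the frame buffer
            // would happen here.
        }
    };

    struct BlitJob {
        WindowBuffer* buffer;
        IntRect source;
        IntRect destination;
    };

    class CompositeEngine {
    public:
        explicit CompositeEngine(unsigned threadCount)
        {
            // One worker per core by default; each owns a
            // DrawingEngine attached to the screen's frame buffer.
            for (unsigned i = 0; i < threadCount; i++)
                fWorkers.emplace_back([this] { _WorkerLoop(); });
        }

        ~CompositeEngine()
        {
            {
                std::lock_guard<std::mutex> lock(fLock);
                fQuitting = true;
            }
            fCondition.notify_all();
            for (std::thread& worker : fWorkers)
                worker.join();
        }

        // Dirty regions can keep arriving while the current frame is
        // being rendered.
        void Enqueue(const BlitJob& job)
        {
            {
                std::lock_guard<std::mutex> lock(fLock);
                fJobs.push(job);
            }
            fCondition.notify_one();
        }

    private:
        void _WorkerLoop()
        {
            DrawingEngine engine;
            for (;;) {
                BlitJob job;
                {
                    std::unique_lock<std::mutex> lock(fLock);
                    fCondition.wait(lock, [this] {
                        return fQuitting || !fJobs.empty();
                    });
                    if (fQuitting && fJobs.empty())
                        return;
                    job = fJobs.front();
                    fJobs.pop();
                }
                engine.DrawWindowBuffer(job.buffer, job.source,
                    job.destination);
            }
        }

        std::vector<std::thread> fWorkers;
        std::queue<BlitJob> fJobs;
        std::mutex fLock;
        std::condition_variable fCondition;
        bool fQuitting = false;
    };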

3D desktops sound nice, but they offer precious little, IMHO. I'm not sure using 3D for something as simple as compositing is really worthwhile... not to mention you will still need a fallback mode.

>> If the buffers can exist in VRAM, though, then we see a rather low
>> memory requirement... and it is entirely feasible to hold one buffer
>> in video RAM and the other in system RAM. (mmap, FTW!)

> Only on systems where this actually makes a difference, of course :-)

It would be up to the accelerant... if it had a way to allocate memory for the WindowBuffer, it could decide from where to get that memory... for now I just malloc(). It would be trivial to implement HWInterface::AllocateWindowBuffer(...), but then accelerants will need memory management... so it may be a good idea to create a generic system for them to use...
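
Something like the following is all the hook would take - AllocateWindowBuffer() is the hypothetical method just described, not an existing HWInterface member:

    // Hypothetical sketch of the generic allocation hook. The base
    // class keeps the current malloc() behaviour; an accelerant with
    // its own memory management could override it to hand out VRAM
    // (perhaps mapped with mmap(), as suggested above) and fall back
    // to system RAM when that runs out.
    #include <cstdlib>

    class HWInterface {
    public:
        virtual ~HWInterface() {}

        virtual void* AllocateWindowBuffer(std::size_t size)
        {
            return std::malloc(size);
        }

        virtual void FreeWindowBuffer(void* buffer)
        {
            std::free(buffer);
        }
    };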

--The loon
