[haiku-appserver] Re: user needs to know about the clipping region?

  • From: Stephan Aßmus <superstippi@xxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Thu, 24 Nov 2005 21:20:58 +0100

Hi all,

it is funny that this discussion is taking off right when I have also 
been thinking about it a lot lately. I will try to keep this short...

What I think we want:
- the current level of hardware acceleration that is possible with the API 
and implemented in the existing drivers
- transparency for some situations, but without flickering please
- low CPU usage for drawing
- small memory footprint

The different concepts:

1) Current R5 (and Haiku) concept with "hard" clipping regions
 Pros:
 - very small memory footprint
 - using all HW acceleration currently available
 Cons:
 - CPU overhead when having to paint exposed parts of windows
 - no transparency, and flickering when it absolutely has to be used
   (like when dragging a transparent bitmap while stuff is painted
   underneath it)
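
Just to make 1) more concrete, here is a toy sketch of what "hard" 
clipping means (all names made up, regions simplified to plain rect 
lists - this is not our actual code):

#include <algorithm>
#include <cstdio>
#include <vector>

struct Rect { int left, top, right, bottom; };

// Intersect two rects; returns false if they do not overlap.
static bool Intersect(const Rect& a, const Rect& b, Rect& out)
{
    out.left = std::max(a.left, b.left);
    out.top = std::max(a.top, b.top);
    out.right = std::min(a.right, b.right);
    out.bottom = std::min(a.bottom, b.bottom);
    return out.left <= out.right && out.top <= out.bottom;
}

// "Hard" clipping: the window's visible region is a list of rects, and
// every drawing command is split against it before it touches the
// frame buffer, so covered pixels are never written.
static void FillRectClipped(const Rect& r, const std::vector<Rect>& visible)
{
    for (const Rect& c : visible) {
        Rect part;
        if (Intersect(r, c, part)) {
            printf("fill %d,%d - %d,%d (HW accelerated)\n",
                part.left, part.top, part.right, part.bottom);
        }
    }
}

int main()
{
    // visible region of a window partially covered on the lower right
    std::vector<Rect> visible = { { 0, 0, 99, 49 }, { 0, 50, 59, 99 } };
    FillRectClipped({ 40, 30, 80, 70 }, visible);
    // When the covering window goes away, app_server cannot restore
    // the pixels itself - it has to ask the client to redraw them,
    // which is the CPU overhead mentioned above.
    return 0;
}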


2) Using one buffer in main memory for every window
 Pros:
 - little CPU usage when repainting exposed parts
 Cons:
 - uses lots of RAM
 - uses the CPU to draw everything, no HW acceleration at all
 - if not done smartly, lots of unnecessary CPU usage for drawing
   stuff into windows which are covered anyway
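
A toy sketch of 2), again with made-up names. The point is that 
exposing a part of the window becomes a plain copy from the window's 
buffer, but everything drawn into that buffer is pure CPU work:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Every window owns a full back buffer in main memory.
struct WindowBuffer {
    int width, height;
    std::vector<uint32_t> pixels;    // 32 bit color, in main RAM

    WindowBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
};

// Repainting an exposed rect is just a row-by-row copy from the
// window's buffer into the frame buffer - no client round trip.
static void CopyExposed(const WindowBuffer& win, int winX, int winY,
    uint32_t* frameBuffer, int fbWidth, int fbX, int fbY, int w, int h)
{
    for (int row = 0; row < h; row++) {
        memcpy(frameBuffer + (fbY + row) * fbWidth + fbX,
            &win.pixels[(winY + row) * win.width + winX],
            w * sizeof(uint32_t));
    }
}

int main()
{
    WindowBuffer window(200, 100);    // one small window already ~78 KB
    std::vector<uint32_t> frameBuffer(640 * 480);
    // a 50x40 part of the window was exposed at screen position 10,20
    CopyExposed(window, 0, 0, frameBuffer.data(), 640, 10, 20, 50, 40);
    printf("exposed part restored without asking the client\n");
    return 0;
}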


3) Using one (offscreen) buffer in graphics RAM for every window
 Pros:
 - compositing is fast and uses no CPU (done by GPU)
 - almost no CPU usage when repainting exposed parts
 - all kinds of HW acceleration possible
 - flicker free drawing and transparent overlays
 Cons:
 - very slow on drawing operations done by the CPU
 - uses lots of the limited *graphics* RAM (needs special VM for
   graphics memory)
 - we would need a lot of hardware documentation (and it is not
   possible with the current driver API)
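
To illustrate what the GPU would be doing in 3), here is the 
compositing pass spelled out on the CPU, with a one pixel "frame 
buffer" just to show the blending math (made-up code, the real thing 
would be texture operations in the graphics chip):

#include <cstdint>
#include <cstdio>

// Blend a source pixel over a destination pixel with the given alpha
// (0 = fully transparent, 255 = opaque).
static uint32_t Blend(uint32_t src, uint32_t dst, uint8_t alpha)
{
    uint32_t result = 0;
    for (int shift = 0; shift < 24; shift += 8) {
        uint32_t s = (src >> shift) & 0xff;
        uint32_t d = (dst >> shift) & 0xff;
        result |= (((s * alpha + d * (255 - alpha)) / 255) & 0xff) << shift;
    }
    return result;
}

int main()
{
    // three "windows" of one pixel each, in stacking order, the
    // topmost one half transparent
    struct { uint32_t pixel; uint8_t alpha; } windows[] = {
        { 0x0000ff, 255 },    // bottom: opaque blue
        { 0x00ff00, 255 },    // middle: opaque green
        { 0xff0000, 128 },    // top: half transparent red
    };
    // the compositor walks the windows back to front and blends each
    // one's offscreen buffer into the frame buffer
    uint32_t frameBuffer = 0;
    for (const auto& w : windows)
        frameBuffer = Blend(w.pixel, frameBuffer, w.alpha);
    printf("composited pixel: 0x%06x\n", frameBuffer);
    return 0;
}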


4) Using an offscreen *framebuffer* in main memory
   (Haiku in double buffered mode)
 Pros:
 - flicker free drawing and transparent overlays
 Cons:
 - uses the CPU to draw everything, no HW acceleration at all
 - high CPU usage on repainting exposed parts
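
And a toy sketch of 4): one full screen back buffer in main memory, 
with finished pixels copied out in a single pass, which is why it 
cannot flicker (made-up names again):

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// All drawing, from all windows, is rendered into the back buffer by
// the CPU first; the screen only ever sees finished pixels.
struct DoubleBufferedScreen {
    int width, height;
    std::vector<uint32_t> backBuffer;    // in main memory
    uint32_t* frameBuffer;               // on the graphics card

    DoubleBufferedScreen(int w, int h, uint32_t* fb)
        : width(w), height(h), backBuffer(w * h), frameBuffer(fb) {}

    // copy a finished (dirty) rect out to the card in one pass
    void FlushRect(int x, int y, int w, int h)
    {
        for (int row = 0; row < h; row++) {
            memcpy(frameBuffer + (y + row) * width + x,
                &backBuffer[(y + row) * width + x],
                w * sizeof(uint32_t));
        }
    }
};

int main()
{
    std::vector<uint32_t> card(640 * 480);    // stand-in for video RAM
    DoubleBufferedScreen screen(640, 480, card.data());
    // ... CPU drawing into screen.backBuffer would happen here ...
    screen.FlushRect(0, 0, 100, 100);
    printf("dirty rect flushed to the graphics card\n");
    return 0;
}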


As you can see, all of these concepts have difficulties associated with 
them. 3) looks tempting, but look at how long it has taken Apple to 
redesign MacOS X to fully exploit it. It even involves a sophisticated 
graphics memory manager, because you want to have, for example, bitmaps 
in graphics memory too. And how do you tell frequently used bitmaps from 
one-time bitmaps... all kinds of stuff goes with it. Text rendering is 
done by the GPU as well in MacOS X (if Quartz 2D Extreme is enabled).


What I have had rolling around in my head for the past few days is this:

Stick with the clipping based approach, but how about an offscreen buffer 
in main memory for just the parts of the frame buffer that contain 
transparency? This way, we could clip incoming drawing instructions in 
Painter twice. The part of a drawing operation that does not intersect 
the transparency containing region goes right to the screen, with HW 
acceleration and all. But if the transparency region is touched, for 
example because the user drags something that is displayed as a 
transparent bitmap, then the remaining bit of drawing goes into a 
"temporary" surface in main memory, the affected part of the transparent 
buffer is composed on top of it, and off it goes to the frame buffer on 
the graphics card.
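
Here is a rough sketch of the double clipping, with regions reduced to 
single rects to keep it short (real code would of course use full 
region arithmetic, and all names are made up):

#include <algorithm>
#include <cstdio>
#include <vector>

struct Rect {
    int left, top, right, bottom;
    bool IsValid() const { return left <= right && top <= bottom; }
};

static Rect Intersect(const Rect& a, const Rect& b)
{
    return { std::max(a.left, b.left), std::max(a.top, b.top),
        std::min(a.right, b.right), std::min(a.bottom, b.bottom) };
}

// Subtract "hole" from "r": at most four rects (above, below, left
// and right of the hole) remain.
static std::vector<Rect> Subtract(const Rect& r, const Rect& hole)
{
    Rect h = Intersect(r, hole);
    if (!h.IsValid())
        return { r };
    std::vector<Rect> out;
    if (r.top < h.top)
        out.push_back({ r.left, r.top, r.right, h.top - 1 });
    if (h.bottom < r.bottom)
        out.push_back({ r.left, h.bottom + 1, r.right, r.bottom });
    if (r.left < h.left)
        out.push_back({ r.left, h.top, h.left - 1, h.bottom });
    if (h.right < r.right)
        out.push_back({ h.right + 1, h.top, r.right, h.bottom });
    return out;
}

// Clip the incoming drawing operation twice: once against the
// transparency region, once against its complement.
static void DrawRect(const Rect& target, const Rect& transparency)
{
    // part 1: outside the transparency region -> straight to the
    // frame buffer, HW acceleration and all
    for (const Rect& r : Subtract(target, transparency))
        printf("direct (HW): %d,%d - %d,%d\n",
            r.left, r.top, r.right, r.bottom);

    // part 2: the overlapping part -> temporary surface in main
    // memory, compose the transparent buffer on top, blit to screen
    Rect overlap = Intersect(target, transparency);
    if (overlap.IsValid())
        printf("temp surface + compose + blit: %d,%d - %d,%d\n",
            overlap.left, overlap.top, overlap.right, overlap.bottom);
}

int main()
{
    // a dragged transparent bitmap currently covers this screen region
    Rect transparency = { 50, 50, 120, 120 };
    // a window paints this rect while the drag is in progress
    DrawRect({ 0, 0, 100, 100 }, transparency);
    return 0;
}
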
Two problems with this idea:
- It would only allow transparent drawing on the server side, since the 
order of graphics calls has to match the order of windows on screen. This 
can be guaranteed only for drawing that happens entirely inside 
app_server, like the drawing of window borders, the mouse cursor and 
stuff being dragged.
- The concept does not know how to handle video overlays.

Feel free to extend any of the lists above with pros and cons, and 
comment on the stuff I said. I don't want to make this mail even longer. :-)

Best regards,
-Stephan

