[haiku-appserver] Re: user needs to know about the clipping region?

  • From: Adi Oanca <adioanca@xxxxxxxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Fri, 25 Nov 2005 10:36:24 +0200

Stephan Aßmus wrote:

Hi all,

it is very funny that this discussion takes off right when I have also been thinking about it a lot lately. I will try to keep this short...

I've been thinking about it for about 2 years. I even talked to DW a little and forwarded him some documents to read. :-)
It was clear to me from that moment that we need to hold clipping regions in local coordinates, but I felt that for the moment, to avoid lots of conversions, it would be good to keep them in screen coords. Then we talked about it, and we all know what happened. :-)
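Just to make the coordinate-space question concrete, here is a minimal sketch of the idea (illustrative names only, not the real Layer code): the clipping region is stored in local coordinates and is only offset into screen coordinates when drawing actually needs it.

#include <Point.h>
#include <Region.h>

struct LayerSketch {
    BPoint  fScreenOrigin;   // where the layer's (0,0) lands on screen
    BRegion fLocalClipping;  // clipping kept in local coordinates

    // Convert the local clipping to screen coordinates only when it is
    // actually needed for drawing.
    BRegion ScreenClipping() const
    {
        BRegion onScreen(fLocalClipping);
        onScreen.OffsetBy((int32)fScreenOrigin.x, (int32)fScreenOrigin.y);
        return onScreen;
    }
};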


2) Using one buffer in main memory for every window
 Pros:
 - little CPU usage when repainting exposed parts
 Cons:
 - uses lots of RAM
 - uses the CPU to draw everything, no HW acceleration at all
 - if not done smart, lots of unnecessary CPU usage when drawing
   stuff in windows which are covered anyways
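For what it's worth, here is a minimal sketch of what option 2 boils down to (made-up names, a 32-bit frame buffer assumed): repainting an exposed part is just a memory copy from the window's buffer, with no round-trip to the app.

#include <cstdint>
#include <cstring>

struct Buffer32 {
    uint32_t* bits;          // B_RGB32-style pixels
    int       width, height; // dimensions in pixels
};

// Copy the exposed rectangle (given in window coordinates) from the
// window's main-memory buffer to its position in the frame buffer.
void RepaintExposed(const Buffer32& window, Buffer32& frame,
                    int exposedX, int exposedY, int exposedW, int exposedH,
                    int windowLeftOnScreen, int windowTopOnScreen)
{
    for (int y = 0; y < exposedH; y++) {
        const uint32_t* src = window.bits
            + (exposedY + y) * window.width + exposedX;
        uint32_t* dst = frame.bits
            + (windowTopOnScreen + exposedY + y) * frame.width
            + (windowLeftOnScreen + exposedX);
        memcpy(dst, src, exposedW * sizeof(uint32_t));
    }
}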


3) Using one (offscreen) buffer in graphics RAM for every window
 Pros:
 - compositing is fast and uses no CPU (done by GPU)
 - almost no CPU usage when repainting exposed parts
 - all kinds of HW acceleration possible
 - flicker free drawing and transparent overlays
 Cons:
 - very slow on drawing operations done by the CPU
 - uses lots of the limited *graphics* RAM (needs special VM for graphics memory)
 - we would need a lot of hardware documentation (and not possible with current driver API)
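A rough sketch of the compositing step in option 3 might look like this (hypothetical types and a placeholder blit; in reality this would go through the accelerant's engine hooks): composing the screen is just a back-to-front series of GPU blits from the per-window surfaces, with no CPU pixel work.

#include <vector>

struct GfxRect { int left, top, width, height; };

struct OffscreenWindow {
    int                  surfaceId;     // handle to the surface in graphics RAM
    GfxRect              frameOnScreen; // where the window sits on screen
    std::vector<GfxRect> visible;       // visible parts, already clipped
};

// Placeholder for a hardware blit from an offscreen surface to the front
// buffer; a real version would program the graphics engine via the driver.
void EngineBlit(int srcSurface, const GfxRect& src, int destX, int destY)
{
    (void)srcSurface; (void)src; (void)destX; (void)destY;
}

// Compose the screen back to front: every visible part of every window
// is copied from its offscreen surface by the graphics engine.
void ComposeScreen(const std::vector<OffscreenWindow>& backToFront)
{
    for (size_t i = 0; i < backToFront.size(); i++) {
        const OffscreenWindow& window = backToFront[i];
        for (size_t j = 0; j < window.visible.size(); j++) {
            const GfxRect& part = window.visible[j];
            // The source rect is relative to the window's own surface.
            GfxRect src = { part.left - window.frameOnScreen.left,
                            part.top - window.frameOnScreen.top,
                            part.width, part.height };
            EngineBlit(window.surfaceId, src, part.left, part.top);
        }
    }
}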

As you can see, all these concepts have some difficulties associated with them. 3) looks tempting, but look at how long it has taken Apple to redesign MacOS X to fully explore it. It even involves a sophisticated graphics memory manager, because you want to have, for example, bitmaps in graphics memory too. And how do you tell frequently used bitmaps from one-time bitmaps....

Who said it would be easy? :-)

it has all kinds of stuff that goes with it. Text rendering is done by the GPU as well in MacOS X (if Quartz 2D Extreme is enabled).

I'm not sure about that; I can't see how the GPU can do AA text.

        Still, I have to read the available docs.

5) Using graphics RAM and main memory for double-buffered windows
Pros:
- window composition is very fast - done by GPU; NO repainting when moving windows; various window effects
- drawing is faster: done in main mem first, then transferred over AGP into graphics RAM, then a super fast blit (see the sketch after this list).
- transparency at window level; shadows too
- accelerated scrolling, rectangle fill
- overlays - just another surface; not sure it's faster than the drawing method.
- with fully capable 3D drivers, some primitives can be drawn directly on screen with gradients and AA.
Cons:
- uses a (not so big) surface in main mem to draw invalidated parts of windows.
- the main mem buffer has to be cleared with the transparency color after a window has finished drawing and its contents have been transferred to graphics RAM.
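To illustrate the drawing path of option 5, here is a small sketch with made-up names (the actual AGP transfer would of course be done by the driver): the window draws into a shared scratch surface in main memory, the dirty part is pushed into the window's graphics-RAM surface, and the scratch area is cleared back to the transparent color for the next window.

#include <cstdint>

struct Surface32 {
    uint32_t* bits;
    int       width, height;
};

const uint32_t kTransparentColor = 0x00000000; // assumed "clear" value

// Placeholder for the AGP upload of a rect from main memory into the
// window's surface in graphics RAM; a real version would set up DMA.
void UploadToGraphicsRam(const Surface32& src, int x, int y, int w, int h,
                         int targetSurfaceId)
{
    (void)src; (void)x; (void)y; (void)w; (void)h; (void)targetSurfaceId;
}

void FlushWindowDrawing(Surface32& scratch, int dirtyX, int dirtyY,
                        int dirtyW, int dirtyH, int windowSurfaceId)
{
    // 1) The window has already drawn its invalidated part into the
    //    shared scratch surface in main memory (fast for the CPU).
    // 2) Push that part over AGP into the window's graphics-RAM surface;
    //    from there the GPU can blit it to the screen very quickly.
    UploadToGraphicsRam(scratch, dirtyX, dirtyY, dirtyW, dirtyH,
                        windowSurfaceId);

    // 3) Clear the used area back to the transparency color so the next
    //    window can draw into the same scratch surface.
    for (int y = 0; y < dirtyH; y++) {
        uint32_t* row = scratch.bits + (dirtyY + y) * scratch.width + dirtyX;
        for (int x = 0; x < dirtyW; x++)
            row[x] = kTransparentColor;
    }
}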


2) Using one buffer in main memory for every window
 Pros:
 - little CPU usage when repainting exposed parts
+ can be used as a start for the real 3D acc interface ;-)
 Cons:
 - uses lots of RAM
 - uses the CPU to draw everything, no HW acceleration at all
 - if not done smart, lots of unnecessary CPU usage when drawing
   stuff in windows which are covered anyways

What I have had rolling in my head for the past days is this:

Stick with the clipping based approach, but how about an offscreen buffer in main memory for just the parts of the frame buffer with transparency? This way, we could clip incoming drawing instructions in Painter two times. For the region that does not intersect the transparency containing region, it goes right to screen with HW acceleration and all. But if the transparency region is touched, for example if the user drags something that is displayed as a transparent bitmap, then the remaining bit of drawing goes into a "temporary" surface in main memory, the part of the transparent buffer is composed on top of it, and off it goes to the framebuffer on the graphics card.
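A sketch of that "clip twice" step could look like this (illustrative only, not actual Painter code); it simply splits the update region into the part that can go straight to the frame buffer and the part that has to pass through the temporary surface:

#include <Region.h>

void SplitByTransparency(const BRegion& update,
                         const BRegion& transparency,
                         BRegion& directToScreen,
                         BRegion& throughTempSurface)
{
    // Part of the drawing that never touches anything transparent:
    // it can be rendered right into the frame buffer, HW accel and all.
    directToScreen = update;
    directToScreen.Exclude(&transparency);

    // Part that overlaps the transparency region: render it into the
    // temporary surface, compose the transparent contents on top, then
    // copy the result to the frame buffer.
    throughTempSurface = update;
    throughTempSurface.IntersectWith(&transparency);
}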

:-) Funny how engineers come to the same solution, sometimes. :-)
This is OK with me for transparent windows ATM. However, if you want to do this for layers as well, you need to modify the clipping code a bit, because a surface may be visible for more than 1 layer.


Two problems with this idea:
- It would only allow transparent drawing on the server side, since the order of graphics calls has to be in the order of windows on screen. This can be guaranteed only for drawing that happens entirely on app_server, like the drawing of window borders, the mouse cursor and stuff being dragged.

You can use more memory. :-) Have a buffer the size of the screen. Make it transparent and draw the transparent windows into it. Then copy the regions that contain transparency onto the screen.
More or less like that, but you get the idea. ;-)
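Something along these lines (hypothetical names, 32-bit ARGB assumed): the transparent windows are drawn into a screen-sized buffer in stacking order, then only the regions that actually contain transparency are composed onto the frame buffer.

#include <cstdint>

struct Argb32Surface {
    uint32_t* bits;
    int       width, height;
};

// Blend one pixel of the transparency buffer over the frame buffer.
static inline uint32_t BlendOver(uint32_t src, uint32_t dst)
{
    uint32_t a = src >> 24;
    uint32_t r = (((src >> 16) & 0xff) * a + ((dst >> 16) & 0xff) * (255 - a)) / 255;
    uint32_t g = (((src >> 8)  & 0xff) * a + ((dst >> 8)  & 0xff) * (255 - a)) / 255;
    uint32_t b = ((src         & 0xff) * a + (dst         & 0xff) * (255 - a)) / 255;
    return (r << 16) | (g << 8) | b;
}

// Compose one rect of the transparency buffer onto the frame buffer.
void ComposeTransparentRect(const Argb32Surface& transBuffer,
                            Argb32Surface& frameBuffer,
                            int left, int top, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i = (top + y) * transBuffer.width + (left + x);
            int j = (top + y) * frameBuffer.width + (left + x);
            frameBuffer.bits[j] = BlendOver(transBuffer.bits[i],
                                            frameBuffer.bits[j]);
        }
    }
}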


- The concept does not know how to handle video overlays.

Should it? IMO, it shouldn't care.


bye, Adi.
