[openbeos] Re: display server differences BeOS/MacOSX/X

  • From: DarkWyrm <bpmagic@xxxxxxxxxxxxxxx>
  • To: openbeos@xxxxxxxxxxxxx
  • Date: Wed, 19 Dec 2001 17:35:23 -0500

All right - an RFC from the people working on the app_server. Be warned - I'm 
not an expert, but I play one in real life, so please be kind - I'll comment 
as best I can. :^)

>A lot of applications will just redraw their whole view when an update 
>is requested for a specific region.
IMHO, not a very good idea, but this is true.

>Since Quartz supposedly keeps a full 
>bitmap for every window in memory (in order to show some transparency 
>effects, animations, etc.), reallocating that bitmap and redrawing it in 
>RAM, then taking that bitmap and blitting it with transparency over n *other*
>bitmaps, all takes an enormous amount of bandwidth on the memory bus.
This is likely the reason why Be never put transparency for windows into the 
API - because of the performance hit.

>It would be better if the videocard were able to store all the window
>contents independently, and do all the transparency stuff and blitting 
>from that into the framebuffer. Then it's at least not your own memory bus, 
>and the PCI/AGP bus only gets the drawing commands and application-generated 
>bitmaps. You would need a lot of videoram in that case, and the 
>videoram requirement increases as you open up more windows... And, that 
>videoram is not backed by a swapfile.
It's simply a matter of being very resource-expensive. I've been 
contemplating alpha effects for windows and how they could be done 
effectively without being such a resource hog. I'll go out on a limb and 
explain the basics of how I think the server works and let you draw your own 
conclusions.

The server constructs a server-side looper for both windows and applications. 
The window loopers handle requests from views which eventually result in 
graphics function calls which render to the frame buffer. The part I'm 
working on (currently) is figuring out the exact rendering. I'd assume that 
the updated region for each window is clipped to the screen, other windows, 
etc. and then rendering is done.

What could probably be done for transparency is to have some way of setting 
aside bitmaps only for the transparent regions. That would make life much 
easier, because rendering would then consist of bitmap blitting for the alpha 
regions and just the regular drawing commands for everything else. Just a 
thought.

>Also, I'm not sure if today's videocards can actually do that amount of 2D 
>acceleration.
Me neither. Considering the push for more and more 3D speed (thanks to QIII, 
UT, and Half-Life), it wouldn't surprise me if using 3D, as you said, turned 
out to be the better way to go.

>They can do screen-to-screen blits, but can they do that with the full 128 
>or 256 MB of videoram too? Can they combine data from n bitmaps 
>simultaneously and blit it to the framebuffer?
I know the 2D accelerator API, and it seems pretty simplistic to me. The only 
blitting supported is screen-to-screen, strictly within the frame buffer and 
nothing else. However, since there appears to be plenty of room for more hook 
functions, I don't see why we couldn't extend the existing driver API in R2 
to include such things.
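To make the hook-function idea concrete, here's what such an extension might look like: drivers export a small table of accelerated 2D primitives, and a future API revision could add an entry for blitting from offscreen video memory with alpha. Everything below is hypothetical - it is not the actual Be driver interface:

```cpp
// Hypothetical sketch of extending a driver hook table with an
// offscreen-to-screen alpha blit. Names and layouts are illustrative only.
#include <cstdint>

typedef struct {
    uint16_t src_left, src_top;      // source origin
    uint16_t dest_left, dest_top;    // destination origin in the framebuffer
    uint16_t width, height;          // extent of the blit
} blit_params;

// Existing-style hook: copy within the visible frame buffer.
typedef void (*screen_to_screen_blit)(blit_params* list, uint32_t count);

// Hypothetical R2 extension: blit from an offscreen bitmap kept in video
// memory into the frame buffer, applying a constant alpha value.
typedef void (*offscreen_alpha_blit)(uint32_t bitmap_handle,
                                     blit_params* list, uint32_t count,
                                     uint8_t alpha);
```

The point of keeping the parameter block identical between the two hooks is that a driver could fall back to the plain blit (or to software blending) when the hardware can't do the alpha variant.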

--DarkWyrm
