Stephan Aßmus wrote:
Hi all,
it is funny that this discussion takes off right when I have also been thinking about it a lot lately. I will try to keep this short...
2) Using one buffer in main memory for every window
Pros:
- little CPU usage when repainting exposed parts
Cons:
- uses lots of RAM
- uses the CPU to draw everything, no HW acceleration at all
- if not done smartly, lots of unnecessary CPU usage when drawing stuff in windows which are covered anyway
3) Using one (offscreen) buffer in graphics RAM for every window
Pros:
- compositing is fast and uses no CPU (done by the GPU)
- almost no CPU usage when repainting exposed parts
- all kinds of HW acceleration possible
- flicker-free drawing and transparent overlays
Cons:
- very slow on drawing operations done by the CPU
- uses lots of the limited *graphics* RAM (needs a special VM for graphics memory)
- we would need a lot of hardware documentation (and it is not possible with the current driver API)
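To make the common part of 2) and 3) concrete, here is a minimal sketch (all names hypothetical, not real app_server code) of the compositing step both designs share: each window draws into its own backing buffer, and the compositor blits the buffers back to front into the frame buffer, so an expose event never forces the application to redraw.

```python
# Toy model of per-window buffers composited into a frame buffer.
# A "buffer" here is just a dict mapping (x, y) -> pixel value;
# real code would blit pixel rectangles instead.

class Window:
    def __init__(self, x, y, w, h, color):
        self.x, self.y, self.w, self.h = x, y, w, h
        # The window draws only into its own buffer, never the screen.
        self.buffer = {(i, j): color for i in range(w) for j in range(h)}

def composite(windows, screen_w, screen_h):
    """Blit window buffers back-to-front; later windows overdraw earlier ones."""
    frame = {}
    for win in windows:  # list is in back-to-front stacking order
        for (i, j), pixel in win.buffer.items():
            x, y = win.x + i, win.y + j
            if 0 <= x < screen_w and 0 <= y < screen_h:
                frame[(x, y)] = pixel
    return frame

# Two overlapping windows; the front one wins where they intersect.
back = Window(0, 0, 4, 4, "blue")
front = Window(2, 2, 4, 4, "red")
frame = composite([back, front], 8, 8)
print(frame[(1, 1)])  # only the back window covers this -> "blue"
print(frame[(3, 3)])  # overlap, the front window wins -> "red"
```

In 2) this loop runs on the CPU over main memory; in 3) the same loop is what the GPU would do over buffers in graphics RAM.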
As you can see, all these concepts have some difficulties associated with them. 3) looks tempting, but look at how long it has taken Apple to redesign MacOS X to fully exploit it. It even involves a sophisticated graphics memory manager, because you want to have, for example, bitmaps in graphics memory too. And how do you tell frequently used bitmaps from one-time bitmaps...?
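One common answer to that last question is plain LRU eviction. A sketch of the idea (hypothetical names, not any real MacOS X or app_server API):

```python
from collections import OrderedDict

class BitmapCache:
    """Toy LRU manager for a fixed amount of graphics RAM.

    Frequently used bitmaps stay resident because each use moves them
    to the 'recently used' end; one-time bitmaps drift to the other
    end and are evicted when space runs out.
    """
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.resident = OrderedDict()  # bitmap id -> size in bytes

    def touch(self, bitmap_id, size):
        """Mark a bitmap as used, uploading (and evicting) as needed."""
        if bitmap_id in self.resident:
            self.resident.move_to_end(bitmap_id)  # now most recently used
            return
        while self.used + size > self.capacity and self.resident:
            _, freed = self.resident.popitem(last=False)  # evict the LRU one
            self.used -= freed
        self.resident[bitmap_id] = size
        self.used += size

cache = BitmapCache(capacity_bytes=100)
cache.touch("window_icon", 40)    # used every frame
cache.touch("splash_screen", 40)  # used once
cache.touch("window_icon", 40)    # icon becomes most-recent again
cache.touch("drag_preview", 40)   # forces one eviction
print("splash_screen" in cache.resident)  # -> False (the one-time bitmap went)
print("window_icon" in cache.resident)    # -> True
```

This is of course only the policy; the hard part Stephan alludes to is doing the bookkeeping and the uploads without stalling drawing.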
Who said it would be easy? :-)
it has all kinds of stuff that goes with it. Text rendering is done by the GPU as well in MacOS X (if Quartz 2D extreme is enabled).
I'm not sure about that, I can't see how the GPU can do AA text.
Still, I have to read the available docs.
2) Using one buffer in main memory for every window
Pros:
- little CPU usage when repainting exposed parts
+ can be used as a start for the real 3D acc interface ;-)
Cons:
- uses lots of RAM
- uses the CPU to draw everything, no HW acceleration at all
- if not done smartly, lots of unnecessary CPU usage when drawing stuff in windows which are covered anyway
What I have had rolling in my head for the past days is this:
Stick with the clipping-based approach, but how about an offscreen buffer in main memory for just those parts of the frame buffer that contain transparency? This way, we could clip incoming drawing instructions in Painter twice. The part of the region that does not intersect the transparency-containing region goes right to screen, with HW acceleration and all. But if the transparency region is touched, for example when the user drags something that is displayed as a transparent bitmap, then the remaining bit of drawing goes into a "temporary" surface in main memory, the relevant part of the transparent buffer is composed on top of it, and off it goes to the framebuffer on the graphics card.
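The two-pass clipping can be sketched like this, modeling regions as sets of pixel coordinates (hypothetical names; real code in Painter would use rectangle lists and HW blits):

```python
def dispatch_drawing(draw_region, transparency_region):
    """Split an incoming drawing region in two, as described above.

    Returns (direct, indirect):
    - direct: pixels that do not touch the transparency region and can
      go straight to the frame buffer with HW acceleration,
    - indirect: pixels that must first be drawn into the temporary
      surface in main memory, have the transparent buffer composed on
      top, and only then be copied to the frame buffer.
    """
    indirect = draw_region & transparency_region
    direct = draw_region - transparency_region
    return direct, indirect

# A 4x4 drawing instruction overlapping a 2x2 transparent drag bitmap.
draw = {(x, y) for x in range(4) for y in range(4)}
transparent = {(x, y) for x in range(2, 4) for y in range(2, 4)}
direct, indirect = dispatch_drawing(draw, transparent)
print(len(direct), len(indirect))  # -> 12 4
```

The nice property is that the slow path is paid only for the (usually small) transparency region, while everything else keeps the accelerated fast path.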
Two problems with this idea:
- It would only allow transparent drawing on the server side, since the order of graphics calls has to match the stacking order of the windows on screen. This can be guaranteed only for drawing that happens entirely inside app_server, like the drawing of window borders, the mouse cursor, and stuff being dragged.
- The concept does not know how to handle video overlays.
Should it? IMO, it shouldn't care.
bye, Adi.