[haiku-appserver] Re: user needs to know about the clipping region?

  • From: Adi Oanca <adioanca@xxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Sun, 27 Nov 2005 19:55:22 +0200

Hi,

Stephan Aßmus wrote:
I know. You are talking about BView overlays, as far as I understood. But you were also talking about dragging something with the mouse, and only in that case can HW overlays be used.

You're confusing me. Video overlays have nothing to do with alpha blending. They cannot be used this way. They usually even have a different colorspace.

OK, to end this: they cannot be used for what you want. They are a special case that can only be used when dragging something with the mouse!


Also, there is no support in the driver API to allocate something in the
unused graphics memory. At least Rudolf said this, and I have not seen
anything usable either while working briefly on AccelerantHWInterface.

There is (optional) support to allocate a limited number of bitmaps in video memory - three, to be exact. We could use this for dragging things. Yes, this is very limited, and there are no hooks to allocate additional graphics memory.
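
For reference, a rough sketch of how such a buffer could be obtained through the overlay hooks in Accelerant.h - the hook and struct names are from the accelerant API, everything around them is assumed and untested, and a driver may well accept only YCbCr spaces here, which is exactly the colorspace issue you mention:

    #include <Accelerant.h>
    #include <image.h>

    // Sketch: get a buffer in video memory through the overlay hooks.
    // 'accelerantImage' is the already loaded accelerant add-on.
    static const overlay_buffer*
    AllocateVideoMemBuffer(image_id accelerantImage, uint16 width, uint16 height)
    {
        GetAccelerantHook gah;
        if (get_image_symbol(accelerantImage, B_ACCELERANT_ENTRY_POINT,
                B_SYMBOL_TYPE_ANY, (void**)&gah) != B_OK)
            return NULL;

        allocate_overlay_buffer allocateBuffer =
            (allocate_overlay_buffer)gah(B_ALLOCATE_OVERLAY_BUFFER, NULL);
        if (allocateBuffer == NULL)
            return NULL;    // driver exports no overlay support at all

        // On success, buffer->buffer points into graphics memory and
        // buffer->bytes_per_row gives the stride to use when writing to it.
        return allocateBuffer(B_RGB32, width, height);
    }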

I don't know, we might be able to "trick" the driver into allocating stuff for us in the unused video memory by allocating overlays, which we don't intend to use as such. But then there is no API to alpha blend an area from offscreen video memory to onscreen video memory. (Or is there?)

There is a solution. Have you read what Rudolf said about the Allegro game library? As far as I remember, it requires a frame buffer double the size of the screen. The driver provides it, but only the first half is displayed on screen. Then there are two choices: 1) you draw into the 2nd half and instruct the card to start scanning out from it (swap buffers), then draw into the 1st half and swap again, and so on; 2) you keep displaying the 1st half, prepare things in the 2nd, then blit them over using the 2D hardware, with or without transparency (roughly as sketched below).
I'm not sure about this, but I think you can even request a frame buffer three times the screen size.
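
Roughly, with the accelerant hooks - the hook names and structs are from Accelerant.h, the rest is assumed and untested:

    // 'gah' is the accelerant's get_accelerant_hook dispatcher and 'mode'
    // the current display_mode (both obtained as in the previous sketch).

    // Ask for a virtual frame buffer twice the visible height.
    set_display_mode setMode = (set_display_mode)gah(B_SET_DISPLAY_MODE, NULL);
    mode.virtual_height = mode.timing.v_display * 2;
    setMode(&mode);

    // Choice 1: page flipping - draw into the hidden half, then tell the
    // card to scan out from it, and alternate.
    move_display_area moveDisplay = (move_display_area)gah(B_MOVE_DISPLAY, NULL);
    moveDisplay(0, mode.timing.v_display);    // show the 2nd half
    // ... draw the next frame into the 1st half ...
    moveDisplay(0, 0);                        // show the 1st half again

    // Choice 2: keep showing the 1st half, prepare the 2nd half and blit
    // it over with the 2D engine.
    acquire_engine acquireEngine = (acquire_engine)gah(B_ACQUIRE_ENGINE, NULL);
    release_engine releaseEngine = (release_engine)gah(B_RELEASE_ENGINE, NULL);
    screen_to_screen_blit blit =
        (screen_to_screen_blit)gah(B_SCREEN_TO_SCREEN_BLIT, NULL);

    engine_token* engine;
    acquireEngine(B_2D_ACCELERATION, 0, NULL, &engine);

    blit_params params;
    params.src_left = 0;
    params.src_top = mode.timing.v_display;    // source: the hidden half
    params.dest_left = 0;
    params.dest_top = 0;
    params.width = mode.timing.h_display - 1;  // assuming inclusive counts
    params.height = mode.timing.v_display - 1;
    blit(engine, &params, 1);

    releaseEngine(engine, NULL);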


And what about the case when you actually want to use a _video_ overlay?!? Can't have both "fake" and "real" video overlays at the same time, I'd guess.

There are three overlays. If you use one for dragging something, there are still two remaining. Also, whether to use a video overlay is determined automatically, and if it's not possible there's always the normal way of playing videos - using normal bitmaps.


- graphics memory buffer with final scene
- main memory buffer for drawing the solid background behind the overlays
- main memory buffer with the transparent overlays

:-) Ah, but you didn't mention the 3rd buffer. ;-)
Just to be sure: the 2nd main memory buffer holds the final scene - a region copied from the first buffer, composed with the transparent bitmaps above it.

Look at what I have written above. No, only the buffer in the graphics memory
holds the final scene. The first main memory buffer and the second main
memory buffer are composed, and the result is written to the graphics buffer
on the fly. The background and the transparent overlay stay apart in two
different buffers. The client can draw to the background, while the
transparency buffer is maintained in the app_server only (where order of
drawing can be enforced).

:-) When I read your previous mail I thought of it in two ways. The first is the one I wrote about in my previous mail, the 2nd is the one you describe here. I did not write about it because you said: "Should not flicker the least". In fact it will flicker: you first have to copy the image from the 1st main memory buffer (the one with opaque data) and then copy from the transparency buffer. It will flicker.

No, you don't understand. It will not flicker. Flickering happens when you first draw something onscreen and then draw something else on top of it (like you say, but that's not how it would work). I mean to draw something only once. A + B = C. A and B are in main memory, C is in graphics memory. You don't have to do C = A; C += B. You can do A + B = C straight away.

Oh, you want to write the pixels one by one! OK.
Sorry for bothering you so much; my mind was set on the GPU fetching the data from main memory, which would require a bitmap in our case. :-)
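
Something like this, I guess - a minimal sketch, assuming 32-bit buffers, B premultiplied, and no clipping or locking:

    #include <SupportDefs.h>

    // A = opaque background (main memory), B = transparent overlay
    // (main memory, premultiplied alpha), C = frame buffer (graphics
    // memory). Each screen pixel is written exactly once, so nothing
    // half-drawn is ever visible - hence no flicker.
    void
    ComposeToScreen(const uint8* a, const uint8* b, uint8* c,
        uint32 width, uint32 height,
        uint32 aRowBytes, uint32 bRowBytes, uint32 cRowBytes)
    {
        for (uint32 y = 0; y < height; y++) {
            const uint8* srcA = a + y * aRowBytes;
            const uint8* srcB = b + y * bRowBytes;
            uint8* dst = c + y * cRowBytes;
            for (uint32 x = 0; x < width; x++) {
                uint8 alpha = srcB[3];
                // C = B + A * (1 - alpha), computed from main memory
                // and written straight to graphics memory
                dst[0] = srcB[0] + (uint8)(srcA[0] * (255 - alpha) / 255);
                dst[1] = srcB[1] + (uint8)(srcA[1] * (255 - alpha) / 255);
                dst[2] = srcB[2] + (uint8)(srcA[2] * (255 - alpha) / 255);
                srcA += 4;
                srcB += 4;
                dst += 4;
            }
        }
    }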




Thanks,
Adi.
