[haiku-commits] Re: BRANCH orangejua-github.app_server [551438b9be6b] in src: servers/app tests/servers/app/unit_tests servers/app/drawing/Painter

  • From: looncraz <looncraz@xxxxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sat, 25 Jul 2015 16:46:00 -0500

On 7/25/2015 16:07, Julian Harnath wrote:


That would pose a number of new issues though. What about the DrawState? Currently, the layers do not influence the draw state (except for setting the draw/blend mode). You can simply insert BeginLayer/EndLayer calls into your flow of drawing operations, and everything should keep working as expected, just with added transparency on parts of the drawing.

True, my original idea was to optionally create named layers, each of which would have its own stored draw state affecting only its own drawing.
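For concreteness, a minimal sketch of the flow described above, assuming the BView::BeginLayer(uint8 opacity)/EndLayer() pair from this branch: everything between the two calls is composited as one group with the given opacity, and the surrounding draw state is left alone.

    #include <View.h>

    class DemoView : public BView {
    public:
        DemoView(BRect frame)
            :
            BView(frame, "demo", B_FOLLOW_ALL, B_WILL_DRAW)
        {
        }

        virtual void Draw(BRect updateRect)
        {
            // Normal, fully opaque drawing.
            SetHighColor(0, 0, 200);
            FillRect(BRect(10, 10, 110, 110));

            // This group of operations is rendered as one layer at
            // roughly 50% opacity; the draw state around it is untouched.
            BeginLayer(128);
            SetHighColor(200, 0, 0);
            FillEllipse(BRect(60, 60, 160, 160));
            EndLayer();

            // Drawing continues as before.
            StrokeRect(BRect(5, 5, 165, 165));
        }
    };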


Another issue would be resource management: it's no longer clear when the layer data could be removed.

True, my idea would require keeping all the lower and higher layer data separated. I was under the impression you were doing this, so am I to understand that the only real reason you expect to gain any performance with Web+ is that you can create a new opacity level for new draws? And that this is the entire idea behind layers?

If that's the case, maybe it shouldn't be called BeginLayer(), but Set[Draw/View]Opacity(uint8 opacity)?
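Purely as a hypothetical sketch (neither SetDrawOpacity() nor SetViewOpacity() exists; the name is only illustrative), here is how such a state-setting call would read next to the scoped BeginLayer()/EndLayer() pair:

    #include <View.h>

    // Hypothetical API only: SetDrawOpacity() is not a real call, it
    // just illustrates opacity as draw state instead of as a scope.
    void
    DrawWithOpacity(BView* view)
    {
        // Suggested shape -- opacity set like any other draw state:
        //     view->SetDrawOpacity(128);
        //     view->FillRect(BRect(10, 10, 110, 110));
        //     view->SetDrawOpacity(255);

        // Current branch -- opacity scoped to an explicit layer:
        view->BeginLayer(128);
        view->FillRect(BRect(10, 10, 110, 110));
        view->EndLayer();
    }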


In the end, if you do it like that, keeping the layers around and letting them have separate DrawStates, you could just as well call the layers "BView"s and allow semi-transparent views to be stacked (which might just be what you're working on?).


Yes, a composited window's child views will all default to alpha-mode drawing and a fully transparent background color, and the window as a whole will be able to set an opacity, so using BViews as layers would work decently, though input events will still be captured by the foremost view. In my current model the client has to redraw the entire view stack below/above a given update region, which is a serious performance issue that could only be resolved by rendering each view independently into its own buffer (or by capturing all of its drawing commands and replaying them on the server side, which your changes may have made easier to accomplish).
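To illustrate the per-view setup that would become the default in such a composited window (these calls all exist today; only making them the default behaviour is the new part):

    #include <InterfaceDefs.h>
    #include <View.h>

    class OverlayView : public BView {
    public:
        OverlayView(BRect frame)
            :
            BView(frame, "overlay", B_FOLLOW_ALL, B_WILL_DRAW)
        {
        }

        virtual void AttachedToWindow()
        {
            BView::AttachedToWindow();

            // Fully transparent background: whatever is stacked below
            // shows through wherever this view does not draw.
            SetViewColor(B_TRANSPARENT_COLOR);

            // Alpha-compositing mode for everything this view renders.
            SetDrawingMode(B_OP_ALPHA);
            SetBlendingMode(B_PIXEL_ALPHA, B_ALPHA_COMPOSITE);
        }

        virtual void Draw(BRect updateRect)
        {
            // Semi-transparent fill; the view below stays visible.
            SetHighColor(255, 0, 0, 100);
            FillRect(Bounds());
        }
    };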


A modernized (far-)future app_server should IMHO use a GPU-accelerated backend, and with that, alpha blending becomes a breeze. The things I implemented for the layers -- writing the commands into BPictures, handing those to a special picture player to figure out the bounding box, then drawing with special offsetting and finally compositing -- would most probably be completely unneeded then, because they're barely worth it when you have the performance of a GPU available. But all that is far in the future; for now we have just the CPU, and we need to see how things can be done more efficiently with what we have :)
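The branch keeps all of that on the server side, but the record-and-replay idea itself can be sketched with the public BPicture API (an analogy only, not the actual server code):

    #include <Picture.h>
    #include <View.h>

    // Record a group of drawing commands into a BPicture instead of
    // rendering them (assumes the view is attached and its window locked).
    BPicture*
    RecordDrawing(BView* view)
    {
        view->BeginPicture(new BPicture());
        view->SetHighColor(200, 0, 0);
        view->FillEllipse(BRect(0, 0, 100, 100));
        return view->EndPicture();
    }

    // Replay the recorded commands at different offsets; the commands
    // are only interpreted again, never re-generated by the application.
    void
    ReplayDrawing(BView* view, BPicture* picture)
    {
        view->DrawPicture(picture, BPoint(10, 10));
        view->DrawPicture(picture, BPoint(150, 10));
    }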


The current app_server should be using some hardware acceleration already, though I've been working exclusively without it.

On that note, have you tried to get the test_app_server working? I can get it to compile, but I get a worthless, tiny executable and no warnings, errors, or any clue as to what the problem may be (though I only worked on it for a few minutes :-/).

--The loon


