>> 1. Locking is done for every drawing instruction.
>> 2. Blitting is done for every instruction instead of when an update
>> is over. If you have a 200x200 rect in which random lines are drawn,
>> imagine what drawing with the CPU in the backbuffer (vidMem) and then
>> blitting on the frontbuffer would mean.
>
> I've been wondering about this myself -- how to optimize it, even
> though I don't plan on doing it yet. The things I've been considering
> are things like changing the locking mechanism to avoid so many context
> switches.

Check the next message (if any): if it's a drawing message, don't
unlock; if it isn't (or it's the last one in the buffer), unlock.
[This requires that the drawing message codes occupy an interval,
DM_min <= dm <= DM_max.] Just a quick thought... :-)

>> 3. From the BeBook: "DisableUpdates(), EnableUpdates()
>> These functions disable automatic updating within the window, and
>> re-enable it again. Any drawing that's done while updates are
>> disabled is suppressed until updates are re-enabled. If you're doing
>> a lot of drawing within the window, and you want the results of the
>> drawing to appear all at once, you should disable updates, draw, and
>> then re-enable updates."
>> This looks to me like an off-screen window is created (maybe in video
>> memory, but I seriously doubt it, as the frame buffer is the screen's
>> width and height and only the overlay hooks are available for
>> allocating memory in video memory) with a copy of what's visible on
>> screen; all drawing is then done there, and when EnableUpdates() is
>> called this surface is copied onto the screen. Correct me if I am
>> wrong.
>> This must be implemented.
>
> This is also something I wonder about the more I think about it. The
> purpose is also to reduce the number of invalidation messages being
> passed around.

Update messages. :-P

> I don't remember where, but I remember seeing some
> source code which used it for this very reason.

I think I remember seeing it somewhere too...
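The unlock-batching idea above could be sketched roughly like this. Everything here (the message codes, IsDrawingMessage(), ProcessQueue()) is a made-up illustration of the technique, not actual app_server code; it just counts how many lock acquisitions a message queue would cost when consecutive drawing messages share one lock:

```cpp
#include <deque>

// Hypothetical message codes; assume drawing codes occupy a
// contiguous interval [DM_MIN, DM_MAX] as suggested above.
enum {
	DM_MIN			= 100,
	DM_STROKE_LINE	= 100,
	DM_FILL_RECT	= 101,
	DM_MAX			= 199,
	M_QUIT			= 500
};

static bool IsDrawingMessage(int code)
{
	return code >= DM_MIN && code <= DM_MAX;
}

// Returns the number of lock/unlock cycles needed to drain the queue
// when a run of consecutive drawing messages shares one acquisition.
int ProcessQueue(std::deque<int> msgs)
{
	int lockCycles = 0;
	while (!msgs.empty()) {
		int code = msgs.front();
		msgs.pop_front();
		if (IsDrawingMessage(code)) {
			++lockCycles;	// acquire the frame-buffer lock once...
			// ...and keep holding it while the next message is
			// also a drawing message
			while (!msgs.empty() && IsDrawingMessage(msgs.front()))
				msgs.pop_front();	// handled under the same lock
		}
		// non-drawing messages are handled without the lock
	}
	return lockCycles;
}
```

So a run of three drawing messages costs one lock cycle instead of three; a drawing message separated from the next one by a non-drawing message still costs its own cycle.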
Sample code, maybe?

> It's a very good trick
> when used properly. I suspect that it reduces context switching by
> eliminating redundant messages. As a result, I suspect that somehow the
> messages are queued client side, but I'm not sure how we should do it.

Wait, you may be confusing things... (me too :-) ) IMHO:

DisableUpdates()/EnableUpdates() disable/enable _UPDATE_ messages being
sent to the client side, and thus BView::Draw() being called.

Begin/EndViewTransaction() caches the whole BView::Draw() in a
BPicture, just like you said, for every BView in the window. When
EndViewTransaction() is called, app_server will update each affected
view (the ones whose BPicture is not empty (or NULL)), but instead of
sending the _UPDATE_ message it renders the associated BPicture's data.
What do you think?

>> If you don't mind, I'd like to hear them when you have some time.
>> Thanks.
>
> I'm typing while sleep-deprived so bear with me.

:-))) oooook.

> DisplayDriver would
> not exist as we know it -- the rendering buffer and the graphics
> library portions of the class would be separated from each other. I
> suspect that the graphics portions of DisplayDriver would probably end
> up in the Layer class, a la Syllable.

I'm not sure that's a good idea. Layer is a key part of the high-level
app_server; how drawing is done is not its business. Remember what I
explained about the different backends for drawing on screen: CPU,
OpenGL/AGP, pixel shaders, PDF, BView, etc. My conclusion is that we
have to keep this in a module which is interchangeable. DisplayDriver
as it is now is a very good idea, if you ask me. Maybe we should extend
its functionality by adding surfaces to its management, so that the
double buffering process is transferred into its hands, but this is
something that I have to think more about.
[This way, surfaces can be used directly by DisplayDriver (through
hardware acceleration, if available) to perform amazing effects.]

> In order to conserve RAM usage, it would be necessary to construct a
> memory management method which would free bitmaps when they were not
> needed without imposing much of a performance hit.

This is a must.

> The bitmaps
> themselves would just be allocated off the heap, but I would imagine
> that providing a means to set a fixed limit on the amount of RAM
> consumed would also be a Good Thing (TM) in this situation.

Yup, I think so too: 1/3rd?

Adi.
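P.S. A minimal sketch of what that fixed RAM limit could look like, with the least-recently-used bitmaps freed first when the budget is exceeded. BitmapCache and everything in it is invented for illustration (it only tracks ids and byte counts, not real BBitmaps), not a proposal for the actual server classes:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>

// Hypothetical fixed-budget bitmap cache: when the total size exceeds
// the limit, least-recently-used entries are evicted first.
class BitmapCache {
public:
	explicit BitmapCache(size_t limitBytes)
		: fLimit(limitBytes), fUsed(0) {}

	// Register (or re-use) a bitmap of the given size, then evict
	// LRU entries until the total fits the budget again. The newest
	// entry is never evicted.
	void Touch(int id, size_t bytes)
	{
		auto it = fIndex.find(id);
		if (it != fIndex.end()) {
			fLru.erase(it->second);	// already cached: move to front
		} else {
			fUsed += bytes;
			fSize[id] = bytes;
		}
		fLru.push_front(id);
		fIndex[id] = fLru.begin();

		while (fUsed > fLimit && fLru.size() > 1) {
			int victim = fLru.back();	// least recently used
			fLru.pop_back();
			fIndex.erase(victim);
			fUsed -= fSize[victim];
			fSize.erase(victim);
		}
	}

	size_t BytesUsed() const { return fUsed; }
	bool Contains(int id) const { return fIndex.count(id) != 0; }

private:
	size_t fLimit, fUsed;
	std::list<int> fLru;	// front = most recently used
	std::unordered_map<int, std::list<int>::iterator> fIndex;
	std::unordered_map<int, size_t> fSize;
};
```

With a limit of, say, 1/3rd of available RAM, touching a bitmap that pushes the total over budget silently frees the oldest ones, which is exactly the "free when not needed, without much of a performance hit" behavior described above (every operation here is O(1) amortized).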