Hi once more,

Just a thought (probably I am way behind you all :) Would it be possible to do something like this, so we always have optimal speed from the hardware?

-----
-> ask the display driver if it can do acceleration
-> ask the driver if it can do acceleration from main memory (later on: not yet)
   (this of course implies acceleration is possible, so the previous question was answered with "yes")

Now decide how to set up double buffering:

1./ no acceleration:
-> set up the source for double buffering in main memory. As the CPU must copy this into the onscreen buffer, only writes will cross the AGP/PCI bus, which speeds it all up considerably compared to having the source in graphics memory as well. If you keep the copy serialized, fast writes/burst writes will help 'optimally'.

2./ only local acceleration (no main memory access):
-> set up the source for double buffering in graphics RAM: the engine will do the copying (the CPU will instruct the acceleration engine to do blits).

3./ acceleration possible from main memory:
-> set up the source for double buffering in main memory (the CPU will instruct the acceleration engine to do blits while the engine fetches the source from main memory).

--------
OK, number 2./ requires something more (future):
-> ask the driver if it can do offscreen blits

1./ Nope: this means the source bitmaps that are offscreen will need to be in the same colorspace and granularity as the desktop: we in fact have a virtual screen (height) that only 'seems' to be blitting offscreen.
2./ Yep: this means the source bitmaps may be in other colorspaces and other granularities: this engine can actually do real offscreen blits.

Please note that utahGLX (3D hardware acceleration, the predecessor of DRI AFAIK) suggests that people writing a 3D add-on driver use the setup as if the engine cannot do offscreen blits (1.). This requires less knowledge of the hardware, and may be the difference between 3D acceleration and no acceleration at all. (The 'offscreen bitmaps' here are things like the Z buffer etc.) So (2.) is an option for utahGLX drivers. I still have to check out DRI to see if this option still exists there.
It might still exist, as the drivers for utahGLX still seem to work on DRI (AFAIK, but I might be mistaken).

====
If you could make the app_server, and other stuff that might need to know about these kinds of things, flexible enough that these decisions can be made 'on the fly' (when initing a graphics card/driver, or maybe even when setting a mode), you would always be able to select the best setup speed-wise. Of course, you could just skip all that and go for failsafe (least capability) or max (best capability), but you understand both of those decisions would have considerable downsides :)

Just something to maybe think about.

Rudolf.