Rudolf,

> I'm not sure I follow. I didn't think about app_server internals. I was
> merely thinking of some app that for some reason directly writes in the
> buffer in cardRAM. You cannot foretell what an app is going to do.

Ah, OK.

>> I'm talking about GeForce 3/4/5/6 pixel shaders. You know... the
>> biggest revolution in 3D graphics in years! Programs that can be
>> executed on the GPU - drawing/filling ovals, shapes, etc.
>
> OK, sounds nice.
> One problem exists however: these programs are a kind of machine code
> that is card architecture specific. The instructions are one of the
> best kept secrets in the graphics industry, I am assuming (so AFAIK).
> This means that _this_ is likely going to be a part of the engine we
> won't be able to use. At all.

Maybe if Haiku grows enough, we'll receive support from nVIDIA and ATi
with drivers supporting the standard shader specifications: 1.0, 1.1,
1.2, 2.0, 3.0.

> If I set up 3D acc, it will be a mixture
> of real 3D acc and Mesa. I think I said this before at some point...

Yup, and we'll be very happy with this. Thank you. Just as you were
saying, I was dreaming a bit. :-D

> Then one thing more: you say the shaders can execute programs. True, no
> doubt. But they have to stay somewhere: in the graphics memory that
> (maybe) doesn't get swapped out on low mem like other parts containing
> bitmaps and other 3D scene stuff. At least, that would be logical to me
> ATM. (So _not_ inside some internal extra buffer, though I might be
> mistaken.)

AFAIR, this code is held in graphics memory. Dunno; the driver manages
videoMem so that nothing goes wrong.

>> I'll give you one too:
>> We'll need to copy regions from main memory, which means lots of
>> rectangles, which can be interpreted as: many bitmaps
>> 'simultaneously'. :-) ;-)
>
> Hehe. IF that is true, then those GART/aperture AGP transfers speed up
> the process even more.
> However, I don't see it right now.
> You can't foretell what part of the
> screen you have to update next, as you can't foretell where a window
> hiding some other stuff will be dragged next (the direction might
> change, for instance). The way I see it, you will in fact be fetching
> different pieces of different buffers at different times.

I'm not getting this... When PollerThread needs to copy a bitmap onto
the screen, it gets everything it needs and then sends this data to the
engine. The engine then fetches the bitmap from main mem and blits it
on screen. Am I wrong?

>>> If you want to use 2D acc later on, you should consider real AGP
>>> transfers, and so acc itself (by letting the engine fetch from main
>>> mem), as being an option.
>>
>> Of course. That is 99% of what we'll have in R1, except that,
>> instead of instructing the CPU to copy into vidMem, we'd instruct
>> the graphics engine to fetch from main mem.
>> Am I wrong?
>
> I'm afraid I can't answer because I am not following you. I think you
> misread my sentence?
> Let's rewrite it a bit:
> Question: how are we going to transfer a bitmap to the graphics card
> RAM?
> 1. Does the driver report it can do acceleration on main mem? Yes: use
>    it (in the way you state: engine fetching from main mem). No:
> 2. Let's write the bitmap, preferably in a serial fashion, to the
>    graphics card. But where?
>    1. Does the driver report it can do local offscreen acceleration?
>       Yes: use triple buffering if you want. No:
>    2. Write directly onscreen.

I understood perfectly what you said. Yes, that is the way I'm
referring to as well. This misunderstanding arose because I was looking
at it from app_server's perspective. Let me explain again: how do we do
double buffering?
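(Before I do - just to check that I follow your numbered procedure, here is how I would sketch it in code. The capability flags and names below are made up for illustration; they are not the real accelerant hooks:)

```cpp
#include <cassert>

// Hypothetical capability query result - illustrative only,
// not the real Haiku accelerant API.
enum TransferPath {
	kEngineFetchFromMainMem,  // 1.  engine can accelerate on main mem (AGP/GART)
	kTripleBufferOffscreen,   // 2a. local offscreen acc: CPU writes serially to
	                          //     cardRAM, engine blits onscreen
	kWriteDirectOnscreen      // 2b. no acc available: CPU writes straight into
	                          //     the visible buffer
};

TransferPath
choose_transfer_path(bool accOnMainMem, bool localOffscreenAcc)
{
	if (accOnMainMem)
		return kEngineFetchFromMainMem;   // engine fetches from main mem
	if (localOffscreenAcc)
		return kTripleBufferOffscreen;    // triple buffering if you want
	return kWriteDirectOnscreen;          // last resort
}
```

If that matches what you meant, then yes, that is exactly the "instruct the graphics engine to fetch from main mem" case I was talking about.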
We draw in mainMem; when drawing is done, we (the CPU) copy from there
to the graphics card's memory - the part that is mapped for screen
retrace (visible on screen).

If we want to use AGP, I think (after what you explained) that once
drawing is done in mainMem, we should instruct the graphics engine to
fetch that bitmap from mainMem and blit it on screen. (Maybe a blit is
not necessary and it can directly copy mainMemBitmap to screen-mapped
video memory.)

>> With pixel shaders at hand we won't be needing main memory anymore.
>
> How do you figure?

Drawing instructions are in videoMem. We allocate memory in there.
Drawing instructions are executed on the GPU.

>> I'm SURE this is the way to go.
>
> So I'm afraid you are a bit too much in dreamland... Although I
> certainly would love to see it :-)

Not dreaming, I just need the latest drivers from nVIDIA and ATi ported
to Haiku. ;-) :-) Yes, you can say I'm still in dreamland. :-))))

>>> This of course requires both buffers to be updated
>>> with everything, and you can talk and think a lot about strategies
>>> to get this done with minimal overhead.
>>
>> I'm listening... :-)
>
> This is a subject I haven't thought about much. Two versions could be:
> 1. Just draw everything to both buffers, but delay writing to the one
>    onscreen until it's offscreen.
> 2. Draw only to the offscreen buffer, and draw just the differences
>    applied there to the onscreen buffer once it goes offscreen.

3. Draw only in the offscreen buffer, and blit the differences after
the page flip? This way we can use the 2D acc engine.

>> A while back you were talking about the Allegro game library. Isn't
>> page flipping the technique it uses for double buffering? Isn't the
>> page flipping process ordered by the gaming library, and done on the
>> next retrace?
>
> Yep.
>
>> Then why are you talking about _every_ retrace?
>
> Because now we have a system 'feature'. OTOH: if nothing changes
> onscreen, no flip is needed of course.
> You could think multiple ways
> here as well (as usual :)
> 1. Just flip every retrace.
> 2. Only flip if something changed.
>
> Whatever... Maybe if you start thinking about this stuff, a reason for
> one of them will come to mind ;-)

There is no reason to flip on every retrace. Page flipping should be
commanded by app_server when drawing into the back buffer is done.

Bye,
Adi.
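P.S. A little sketch of option 3 above (draw only offscreen, flip, then blit just the differences into the new back buffer), combined with "only flip if something changed". Everything here is simplified for illustration - the tiny buffers, the Rect type, and the memcpy standing in for a 2D acc engine blit are stand-ins, not real app_server code:

```cpp
#include <cstring>
#include <utility>
#include <vector>

struct Rect { int x, y, w, h; };

const int kWidth = 8, kHeight = 8;
unsigned char bufferA[kWidth * kHeight] = {};
unsigned char bufferB[kWidth * kHeight] = {};
unsigned char* back = bufferA;   // app_server draws here
unsigned char* front = bufferB;  // currently scanned out to the screen
std::vector<Rect> dirty;         // what changed in 'back' since the last flip

void
draw_pixel(int x, int y, unsigned char color)
{
	back[y * kWidth + x] = color;
	dirty.push_back(Rect{x, y, 1, 1});
}

// Commanded by app_server once drawing into the back buffer is done;
// the actual flip would take effect at the next retrace.
void
flip_and_sync()
{
	if (dirty.empty())
		return;                  // nothing changed onscreen: no flip needed

	std::swap(front, back);      // page flip: the freshly drawn buffer goes onscreen

	// The 2D acc engine blits only the dirty rects from the new front
	// buffer into the new back buffer, so both buffers stay consistent.
	for (const Rect& r : dirty) {
		for (int y = r.y; y < r.y + r.h; y++) {
			std::memcpy(&back[y * kWidth + r.x],
				&front[y * kWidth + r.x], r.w);
		}
	}
	dirty.clear();
}
```

After a flip, both buffers hold the same contents for the dirty rects, so the next frame can be drawn into the new back buffer without redrawing everything.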