[haiku-appserver] Re: nVidia hardware known options for new features (R2?)

  • From: "Rudolf" <drivers.be-hold@xxxxxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Sat, 10 Dec 2005 11:22:11 +0100 CET

Morning Stephan,

> thanks for the detailed email! Since drivers usually need to be
> recompiled for Haiku anyways, I see no technical reason to delay some
> of this until R2 if it can be done now (given you/we have enough
> time). The accelerant API would not care about new exported calls, no?

Indeed: I was thinking about that too. Another thing I'd like to add is
saturation/intensity controls for video overlay, for example ;-)
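Purely hypothetical, just to sketch the kind of exported call I have in
mind (nothing like this exists in the accelerant API yet, names are
made up):

    /* hypothetical new accelerant hook: colour adjustment for an
       overlay unit. 1.0 = neutral; the driver would clamp the values
       to whatever the hardware can actually do. */
    typedef status_t (*overlay_set_adjustment)(
        overlay_token token,   /* which overlay to adjust */
        float saturation,
        float intensity);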


> Anyways, I fully agree that the memory manager should not be
> contained in each driver again. But maybe you know of some
> constraints, that the driver would have to inform the manager about?
> Like that stuff about padding and alignment you mentioned. Maybe it
> is different for different chips. So we need to come up with a
> negotiation interface between driver and memory manager.
Those two constraints indeed. Also, not all colorspaces might be
supported.
The current overlay bitmap system would no longer suffice I think, as
the driver currently determines by itself where the bitmaps get placed;
it does the alignment setup internally. And the padding/slopspace
solution as it is now would no longer suffice either. The current way
it works is:
- app_server (or another 'client') asks for a specific size. It's
guaranteed to get that if the driver function for creation succeeds (it
fails if the bitmap space or size is outside the min/max constraints,
or if not enough memory is left on the card).
- the driver creates the bitmap so that the requested size is met: if
the driver can't do that exactly because it can only do certain widths,
it creates the smallest larger bitmap that can hold the requested one.
- the client finds out about slopspace by asking the buffer for its
constraints (bytes_per_row); min/max scaling constraints are given back
as well.
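In code that comes down to roughly this, seen from the client's side
(simplified, not the literal accelerant hook prototypes):

    /* rough sketch of the current overlay scheme, client side */
    const overlay_buffer *ob
        = allocate_overlay_buffer(B_YCbCr422, 320, 240);
    if (ob == NULL) {
        /* size outside min/max constraints, or no card memory left */
    } else {
        /* the driver may have rounded the width up; the real row
           pitch, and so the slopspace, is only known afterwards: */
        uint32 slop_bytes
            = ob->bytes_per_row - 320 * 2; /* 2 bytes/pixel in 422 */
    }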

So, the same applies for the blit version of such buffers in theory.
But, as a memory manager would now need to determine where a bitmap
gets located, it needs to know about slopspaces beforehand, and about
the granularity for widths. In theory those need to be given only once,
but:
- they depend on colorspace;
- maybe they even depend on the currently set desktop mode. Best keep
that in mind, and account for it even if it turns out not to be true
(the stuff needs to be set up according to worst-case scenarios, as
it's next to impossible to foretell what all those HW manufacturers
come up with...).

The min/max scaling requests for a given bitmap could remain the same
as they are for overlay, I think.
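To make the negotiation idea a bit more concrete: the driver could hand
the memory manager a small table like this, one entry per colorspace
(hypothetical struct, names made up):

    /* hypothetical: what a driver could report to a shared memory
       manager, per colorspace (worst case over all display modes) */
    typedef struct {
        color_space space;           /* colorspace this entry covers */
        bool        supported;       /* driver can blit in this space */
        uint32      width_granularity; /* widths rounded up to a
                                          multiple of this */
        uint32      start_alignment;   /* required byte alignment of
                                          the buffer start */
    } buffer_constraints;

The manager could then place and size buffers without calling back into
the driver for every single allocation.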

Anyway: I'll be experimenting with the scaled blit soon now, so I'll 
learn more details :-)

> I would also export new calls for the blitting functionality. It
> seems too cumbersome to figure out the "virtual" (off)screen location
> of a bitmap. Simply call the accelerant function and tell it the
> starting address, rect size for source and possibly colorspace, and
> rect for onscreen destination.
Yeah, of course. A virtual offscreen location == a hack to keep the
current interface :-)
I could do this:
- set up the scaled blit for the current interface;
- also set up a new version of it for offscreen bitmaps.

I'll be able to test both using the video consumer node I am fiddling 
with.
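That second version could be shaped roughly like this (names made up,
the exact prototype still has to be worked out):

    /* hypothetical new accelerant hook: scaled blit from an
       offscreen bitmap straight to an onscreen rect */
    typedef status_t (*scaled_blit_from_bitmap)(
        const void *src_buffer,        /* starting address of bitmap */
        uint32      src_bytes_per_row,
        color_space src_space,
        uint16      src_width,  uint16 src_height,  /* source rect */
        uint16      dest_x,     uint16 dest_y,      /* onscreen dest */
        uint16      dest_width, uint16 dest_height);

That matches what you describe: the client just hands over the starting
address and the rects, and never has to care about a 'virtual' screen
location.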

BTW: does anyone know whether getting a connection for the B_YCbCr422
colorspace should be working? Currently I can only get RGB spaces
going... There's of course a (bigger) chance that this is because of my
still very limited knowledge about the node subject...
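For reference, what I mean is requesting something along these lines
when setting up the connection (just a sketch of the usual wildcard
way; maybe someone spots what else is needed):

    // sketch: asking for a YCbCr422 raw video connection
    media_format format;
    format.type = B_MEDIA_RAW_VIDEO;
    format.u.raw_video = media_raw_video_format::wildcard;
    format.u.raw_video.display.format = B_YCbCr422;
    // ...then pass &format to BMediaRoster::Connect() and see what
    // the producer negotiates it down to.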

(Is, for instance, an MPEG2 decoder just a node that has an input and
an output? Would a media player connect to that node? Would the
consumer node in turn get connected to the MPEG2 decoder? I have no
idea yet how / if this exists... Any hints?)

> Personally, I am on ATI, so I would not benefit from any improvements
> you do to the nVidia accelerant for now... :-(

Well, just get a card for a desktop system. OTOH: I am getting more and 
more requests to 'take over' ATI dev. ;-)

Bye!

Rudolf.

