[haiku-appserver] Fwd: found one more thing

  • From: Stephan Assmus <superstippi@xxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Tue, 15 Nov 2005 12:27:35 +0100

On 2005-11-15 at 01:11:38 [+0100], Adi Oanca <adioanca@xxxxxxxxx> wrote:

- - - - - - - - - - - - - - - - - - - -
Inline Attachment <New Text Document _2_.txt> follows:
- - - - - - - - - - - - - - - - - - - -

Hi Adi,

Sending you a snippet I got from Thomas about this.
This makes me very happy!!

I am going to see if I can get VLC to segfault, and if I can prevent that by 
using these calls. Now I need some time to actually do that (got some other 
things up my sleeve for VLC as well).


Rudolf.

-----------


>>     There are 2 (undocumented) public methods in BBitmap's class
>> declaration, which I can't test right now:
>>          status_t LockBits(uint32 *state = NULL);
>>          void UnlockBits();
>>     I don't have a disassembly of libbe.so either, but I bet it sends a
>> message to the app_server to acquire an internal semaphore, returning
>> B_OK if successful.
>>     I don't know for sure!!! Try it! In fact I'm asking you to; you made
>> me curious! :-)))
>> </snippet>


These are the locks I meant. Actually, it works the following way: for 
normal bitmaps, they are no-ops. You can access your bitmap whenever you 
want using BBitmap::Bits(). But for offscreen bitmaps (which are currently 
only provided for overlays), you have to call LockBits() before accessing 
the bitmap directly (via BBitmap::Bits()) and call UnlockBits() when you 
have finished the access. If the app_server wants to discard the bitmap 
during a mode switch (well: in _theory_ this can happen at any time), it 
locks the bitmap too, and releases the lock once the overlay is reallocated. 
It even tries to buffer the content of the overlay during the mode switch, 
but this code seems to be broken somehow; I reckon the buffer is only there 
so that broken programs have something to play with during the mode switch.
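
To make the pattern concrete, here is a minimal sketch (C++, against the R5 
declarations quoted above) of a software write into an overlay bitmap; the 
B_OK check follows the speculation above about the return value, and the 
error handling is my own assumption:

    #include <Bitmap.h>
    #include <string.h>

    // Copy one video frame into an overlay bitmap, holding the bits lock
    // only for the duration of the access.
    void FillOverlayFrame(BBitmap* overlay, const void* frame, size_t size)
    {
        uint32 state = 0;
        // For plain bitmaps this is a no-op; for overlays it may block
        // while the app_server holds the bitmap (e.g. during a mode switch).
        if (overlay->LockBits(&state) != B_OK)
            return;    // assumed failure semantics - see the text above

        if ((size_t)overlay->BitsLength() >= size)
            memcpy(overlay->Bits(), frame, size);

        // Release quickly; holding the lock for long is a no-no.
        overlay->UnlockBits();
    }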

As long as you access the overlay by software only, this is safe - you lock 
the bitmap only during the access, so if the app_server wants to release the 
overlay, it gives you some seconds before it gets angry and kills your 
application. Locking the bitmap for a long time is a no-no (similar to 
BDirectWindow and the like). But there is a case where this doesn't work 
that easily: if you have hardware that wants to access the bitmap. The only 
example I'm aware of is TV cards that write the received image directly into 
the bitmap (though it could be a hardware MPEG decoder or something like 
that, too). They run continuously and don't ask the app_server before they 
start the access; in fact, they stream into the bitmap, so they _always_ 
access the bitmap except during the vertical blank. So, in this case the 
application has to lock the bitmap forever and thus risks getting thrown out 
of the game during a mode switch. To solve that, the app_server sends you a 
B_RELEASE_OVERLAY_LOCK message to indicate that it expects you to release 
the bitmap soon (the official documentation is quite irritating here). When 
you receive this, you should instantly stop the TV card from writing into 
the bitmap and call UnlockBits(). As written above, the app_server will try 
to lock the bitmap itself, and if you don't let the bitmap go, it will kill 
you after a timeout. As soon as it no longer wants to lock the bitmap, it 
sends you a B_ACQUIRE_OVERLAY_LOCK message, which means that you are allowed 
to lock the bitmap again. So, you can now call LockBits() and let the TV 
card continue streaming into the bitmap.
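
Sketched as a window's message handler, the protocol might look like this; 
StopCapture()/StartCapture() stand in for hypothetical TV-card driver hooks, 
and the exact delivery target of the two messages is my assumption:

    #include <Bitmap.h>
    #include <Window.h>

    class TVWindow : public BWindow {
    public:
        // ... construction elided ...
        virtual void MessageReceived(BMessage* message)
        {
            switch (message->what) {
                case B_RELEASE_OVERLAY_LOCK:
                    // The app_server wants the bitmap (e.g. a mode
                    // switch): stop the hardware first, then give up
                    // the lock quickly.
                    StopCapture();
                    fOverlay->UnlockBits();
                    break;

                case B_ACQUIRE_OVERLAY_LOCK:
                    // Safe to stream again: relock, restart the hardware.
                    if (fOverlay->LockBits() == B_OK)
                        StartCapture();
                    break;

                default:
                    BWindow::MessageReceived(message);
            }
        }

    private:
        void StopCapture();    // hypothetical TV-card hooks
        void StartCapture();
        BBitmap* fOverlay;
    };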

This system currently seems to be implemented for overlays only, but the 
interface is capable of handling _any_ offscreen bitmap. This means that we 
can stick with the current interface! The only guarantee we will have to 
abandon is that an off-screen bitmap never gets lost forever: currently, the 
app_server guarantees that overlays survive mode switches - they may get 
lost during the switch, but they are always recreated afterwards; i.e. the 
only thing you notice is that LockBits() gets delayed during the mode 
switch, but it will never fail. This is dangerous if you think about 
switching from a low resolution where plenty of graphics memory was 
available to a high resolution where there is not enough memory left for 
overlays/off-screen bitmaps. So, the application must be aware that 
LockBits() may fail, which indicates that the bitmap is lost. For this, we 
may add a new method to restore the bitmap. Of course, you could delete the 
lost bitmap and create a new one instead, but having the ability to restore 
should simplify coding.
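
A sketch of the failure path this would give us; note that Restore() is the 
*proposed* new method from the paragraph above - it does not exist in the 
R5 API:

    #include <Bitmap.h>

    // Lock an off-screen bitmap, recovering it if it was lost.
    status_t AcquireOffscreen(BBitmap* bitmap)
    {
        status_t result = bitmap->LockBits();
        if (result != B_OK) {
            // Bitmap is lost (e.g. too little graphics memory after a
            // mode switch). Restore() is the hypothetical method
            // proposed above, not existing R5 API.
            result = bitmap->Restore();
            if (result == B_OK)
                result = bitmap->LockBits();
        }
        return result;
    }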

The drawback is that Be never documented this, so none of the applications I 
know of bother calling (Un)LockBits() or responding to the OVERLAY_LOCK 
messages (which wouldn't make sense anyway if you never lock the bitmap). 
Because of that, I have this mode-switching problem where applications keep 
on accessing the old overlay until they find out that it has moved 
somehow...


>> ->Thomas, can you please explain to me what BWindow::Lock() and Unlock()
>> do exactly?


To be honest: I never bothered with GUI programming under BeOS apart from 
rewriting the screen preferences (where you didn't need that). So, I can 
only tell you about the LockBits() thing. The only thing I'm aware of is 
that during creation of the bitmap you can choose between a "raw" bitmap, 
which you can only write to directly, and a bitmap that accepts views to 
draw into it, i.e. you can use the normal Interface Kit functions to draw 
e.g. text into it. Only if you have a view-capable bitmap can you (need 
to?) use the (Un)Lock() methods; for plain bitmaps they are no-ops. I 
reckon this app_server guy can tell you more about that.
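
For comparison, a minimal sketch of drawing into a view-capable bitmap with 
the documented Lock()/Unlock() pair (standard R5 Interface Kit usage, not 
taken from Thomas' mail):

    #include <Bitmap.h>
    #include <View.h>

    // Render a string into a bitmap that accepts views.
    BBitmap* RenderTextBitmap(const char* text)
    {
        BRect bounds(0, 0, 199, 49);
        // 'true' makes this a view-capable bitmap.
        BBitmap* bitmap = new BBitmap(bounds, B_RGB32, true);
        BView* view = new BView(bounds, "canvas", B_FOLLOW_NONE, 0);
        bitmap->AddChild(view);

        bitmap->Lock();
        view->DrawString(text, BPoint(10.0f, 25.0f));
        view->Sync();      // wait until the app_server finished drawing
        bitmap->Unlock();

        return bitmap;
    }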


>> I have a need to understand that! Why is that not what you wanted, and
>> what the hell is it supposed to do (in plain English please)?

I hope you could follow the explanation above.

>> ->What do you think about those undocumented methods LockBits() and
>> UnlockBits()??? Is _this_ what we'd need 'desperately'??


Yes, it is, though it's only implemented for overlays, as they are the only 
off-screen bitmaps R5 supports.


>> ->Can you test that?? (does it work?)


This info is retrieved the "usual" way, if you know what I mean. There 
isn't a test program though, but I wouldn't be sad if you wrote one.




Hi Adi,

>>    How does MS Windows recognize monitor manufacturer and model?

Via the I2C bus (called the DDC channel) in the VGA and DVI connectors. 
Every modern monitor out there has an I2C slave device in it that responds 
to a certain address. It returns an information block called EDID. In this 
block you can find the min/max available modes, refresh rates, the 
manufacturer string, and the type string. The native resolution can also be 
found here (applicable to flat panels).
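
As an illustration of what is in that block, here is a small sketch 
decoding the vendor fields at the start of a 128-byte EDID 1.x block (the 
layout is publicly documented; this is my example, not BeOS API):

    #include <stdint.h>
    #include <stdio.h>

    void PrintEdidVendor(const uint8_t edid[128])
    {
        // Bytes 8-9: manufacturer ID, three letters packed as 5-bit
        // codes (1 = 'A'), stored big-endian.
        uint16_t id = (uint16_t)((edid[8] << 8) | edid[9]);
        char vendor[4];
        vendor[0] = (char)('A' - 1 + ((id >> 10) & 0x1f));
        vendor[1] = (char)('A' - 1 + ((id >> 5) & 0x1f));
        vendor[2] = (char)('A' - 1 + (id & 0x1f));
        vendor[3] = '\0';

        // Bytes 10-11: product code, stored little-endian.
        uint16_t product = (uint16_t)(edid[10] | (edid[11] << 8));
        printf("monitor: %s, product 0x%04x\n", vendor, product);
    }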

I don't have the details, but Thomas has already worked with it. It would 
probably be best if OBOS had a library for graphics drivers that supports 
DDC/EDID communication and interpretation. Linux has this, BTW.

Note that the DDC/EDID stuff is also a VESA standard, which officially 
costs money. The material is on the net, though.

-----

This EDID info should be used to place extra restrictions on the modelist 
and the pixel clock limits, so the user can't select modes not supported by 
the connected monitor.
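
In code, the restriction could look like this sketch (display_mode is the 
R5 Accelerant structure; the limit value and the in-place filtering policy 
are my assumptions):

    #include <Accelerant.h>

    // Keep only the modes whose pixel clock the monitor can handle.
    // Returns the number of surviving modes (compacted to the front).
    uint32 FilterModeList(display_mode* modes, uint32 count,
        uint32 maxPixelClockKHz)
    {
        uint32 kept = 0;
        for (uint32 i = 0; i < count; i++) {
            if (modes[i].timing.pixel_clock <= maxPixelClockKHz)
                modes[kept++] = modes[i];
        }
        return kept;
    }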

Rudolf.



Adi,

Got to correct/clarify something regarding my previous mail, I think...


>> Currently the app_server asks for the number of modes in the driver's
>> modelist, and for the actual modelist, on startup.

The screenprefs panel itself does not ask the driver for the modelist. It 
gets it from the app_server somehow. Although, in theory, it could of 
course ask the driver directly instead.


>> Here is something 'funny' that happens in BeOS (prefs app I presume, I
>> don't know). If the user requests a mode with a refresh rate that's
>> _not_ mentioned in the modelist, but it can do a mode both below and
>> above that, a new mode will be 'invented' by this app. It will use both
>> modes in the neighbourhood to create a new mode with timing (sync pulse
>> lengths and positions) that lies between those two modes.

I am not 100% sure here. It can also be that the closest matching mode's 
timing is used.
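
For reference, the refresh rate follows directly from a mode's timing, so 
either behaviour boils down to picking a timing and scaling its pixel 
clock. A sketch (display_timing from the R5 Accelerant API; the synthesis 
policy is speculation, matching the uncertainty above):

    #include <Accelerant.h>

    // Refresh rate implied by a timing: clock / (h_total * v_total).
    // pixel_clock is in kHz.
    float RefreshRate(const display_timing& t)
    {
        return t.pixel_clock * 1000.0f / (t.h_total * t.v_total);
    }

    // 'Invent' a mode for a requested rate: keep the sync lengths and
    // positions of the closest match, adjust only the pixel clock.
    display_timing TimingForRate(display_timing t, float wantedHz)
    {
        t.pixel_clock = (uint32)(wantedHz * t.h_total * t.v_total
            / 1000.0f);
        return t;
    }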


>> Strangely enough, the screenprefs app limits internally at a 90Hz upper
>> 'border', instead of using the max refresh reported by the driver. Maybe
>> this is done to prevent a user from destroying his/her monitor, though
>> that's not the responsibility of the screenprefs app in this manner.

In fact, the screenprefs app does use the reported max refresh rate (via 
the pixel clock limits) as soon as it drops below the fixed 90Hz value set 
in this app. So the max refresh rate is not taken from the modelist.
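
Put as a formula, the effective ceiling is the lower of the app's fixed 
90Hz and the rate the driver's pixel clock limit allows for this timing 
(my formulation of the behaviour described above):

    #include <Accelerant.h>

    // Effective maximum refresh rate for one mode in the screenprefs app.
    float EffectiveMaxRefresh(const display_timing& t,
        uint32 maxPixelClockKHz)
    {
        const float kAppCeiling = 90.0f;    // fixed limit in the app
        float driverLimit = maxPixelClockKHz * 1000.0f
            / (t.h_total * t.v_total);
        return driverLimit < kAppCeiling ? driverLimit : kAppCeiling;
    }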

Rudolf.



