[haiku-appserver] Re: Overlay support

  • From: "Rudolf" <drivers.be-hold@xxxxxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Tue, 25 Apr 2006 12:35:35 +0200 CEST

Hi there Jonathan,

As Axel pointed out later, I am just trying to describe how things are.
However, I have some remarks.

> ... and quite possibly allowing the user to lose data, or at least 
> sanity, for doing something they thought they could do without harm, 
> such as changing to a different workspace temporarily, or changing 
> the resolution of the current workspace, or whatever it is that 
> caused the snafu that's limited by reality. 

There's no chance that data gets lost: we are talking about the app 
playing back video only. The system remains totally stable. True, 
sometimes a user sees behaviour that he or she didn't anticipate, and 
certainly doesn't understand. However, it's not always possible to 
prevent such a thing from happening, although we (BeOS/Haiku) should 
and certainly will do our best to minimize this kind of thing. 
Personally I hate that too. Users should be able to use a system 
without having in-depth technical knowledge :)

> 2.  The user decides to change to a different workspace or resolution 
> in the workspace(s) that use the overlay, but for whatever reason, the 
> hardware can't accommodate that switch, or requires renegotiating 
> everything.  From what I've read, it seems that if there isn't a 
> response within a given amount of time, that the application using 
> the overlay is killed, either by choice or by default due to things 
> being changed out from under it.  Well, frankly, this shouldn't be 
> remotely time-based, at least, not without user intervention.

It isn't time based. There's a 'system' that handles this without the 
user intervening or experiencing problems, and it doesn't even crash 
the app if it's written correctly (and it will be ;-)
The time-based thing is just the backup system that will kill the app 
if it would otherwise bring trouble to the system and the user. So, 
it's the app causing your concern here, not the system.

> Quite possibly the system is very busy, and the application can't 
> process a message ASAP, which may be directly related to the fact that 
it needed the overlay in the first place.  What should have the highest 

Priority. I used that word here:
-highest priority is that the system should be able to set the mode 
the user requested (implicitly or explicitly);
-below that priority lie all kinds of options theoretically available 
in a given mode, like 3D acceleration and video overlay. Later on, 
offscreen desktop composition will also be such an option.

I perfectly understand that this stuff might not be that obvious to a 
lot of users, and that's why the system should automatically come up 
with replacements for these options should they not be available:
-In case of video overlay, the app should fall back to bitmap mode;
-In case of 3D, the 3D function library should automatically start to 
flip textures on the fly, or even fall back to software rendering mode;
-In case of offscreen composition, the system should fall back to the 
(now default and only) onscreen composition.
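The replacement policy above can be sketched as a simple table. The 
feature strings and the helper function are purely illustrative (not 
Haiku API), but they mirror the three fallbacks just listed:

```cpp
#include <string>

// Illustrative mapping from each optional feature to the automatic
// replacement described above; names are hypothetical.
std::string fallback_for(const std::string& feature)
{
    if (feature == "video overlay")         return "bitmap mode";
    if (feature == "3D acceleration")       return "software rendering";
    if (feature == "offscreen composition") return "onscreen composition";
    // Setting the mode itself is top priority and has no substitute.
    return feature;
}
```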

In all cases, the only thing the user will witness is a slowdown of 
some components. No crash, no lacking functionality, no data loss. It 
just works, and it keeps working. It even looks 'exactly' the same. 
From the technical viewpoint, however, these things require re-
negotiation of the actual method used. This is a totally 'under the 
hood' thing, and that's where it should stay.

So, as far as I'm concerned, one item remains:
-why do I call setting a mode top priority, and everything else just 
options? Well, they are options because without the mode working, they 
can't be done. As simple as that. You wouldn't have a picture at 
all... (OK, I might be oversimplifying a bit, I admit ;-)

-why is it required to set up a thing like overlay again, even though 
in most modes it would work? The reason is rather simple as well: a 
new mode requires more (or less) memory. If the overlay data lies in a 
part of memory now required by a mode, then it must be relocated. A 
mode requires one block of memory; no chunks allowed. This is all 
dictated by hardware, and mostly can't be overcome.
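As a sketch of why relocation can be forced (the types here are 
illustrative, not driver code): the new mode claims one contiguous 
region of graphics memory, and any overlay buffer overlapping that 
region has to move, which is what makes the re-setup unavoidable:

```cpp
#include <cstdint>

// A mode needs one contiguous block of graphics memory (dictated by
// hardware, no chunks allowed). If the overlay buffer overlaps the
// block the new mode claims, the overlay must be relocated and thus
// set up again. Hypothetical types for illustration only.
struct Region { uint32_t offset; uint32_t size; };

bool must_relocate_overlay(Region frame_buffer, Region overlay)
{
    uint32_t fb_end = frame_buffer.offset + frame_buffer.size;
    uint32_t ov_end = overlay.offset + overlay.size;
    // True when the two regions share at least one byte.
    return overlay.offset < fb_end && frame_buffer.offset < ov_end;
}
```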

> 4.  Depending on whether it is an old application or not, it responds 
> by either calling the new API function release_overlay() in a timely 
> manner, or the app_server asks the user (or consults preferences and 
> finds the one for overlays) and acts accordingly, either ripping the 
> rug out from under the application, or failing the attempted mode 
> switch that would cause the application from losing the overlay.

Well, in a sense a segfault is a message to the user telling him or 
her that the app doesn't work correctly. If it were a video-mixing 
app, then you'd lose your data for this app if it didn't do 
intermediate saves. I can see how it would be handy in such a case if 
the user were given the option to not issue the mode switch, instead 
of crashing the app. In a perfect world, that is; personally I've 
never seen this kind of behaviour on _any_ OS yet. I'd be very 
satisfied already if we have in place what I described, and segfaults, 
if they happen, are at least consistent (same relative time and place, 
so they can be easily tracked down). In the R5 days, this was already 
reasonably well in place, certainly compared to Windows 9x... ;-)

Hope this info settles things a bit for you.

Best regards,

Rudolf

