[haiku-appserver] Re: [openbeos] Re: Launching the input_server from the app_server...

  • From: "Ryan Leavengood" <leavengood@xxxxxxxxx>
  • To: "Haiku app_server" <haiku-appserver@xxxxxxxxxxxxx>
  • Date: Sat, 23 Dec 2006 04:04:51 -0500

On 12/22/06, Axel Dörfler <axeld@xxxxxxxxxxxxxxxx> wrote:

> It's a common design pattern that an object knows its owner, and sends
> it a message on some special occurrence.
> If you don't want to keep a pointer to the desktop, you can also just
> store its message port and send the message to it directly; but it's
> guaranteed that the event dispatcher is quit when the desktop is
> deleted, anyway.

I see this is used a lot in our code, so I guess there is no problem
using it here. I think my concern comes from years of Java coding,
where circular references (a listener holding its owner, say) can keep
objects reachable and leak memory despite the garbage collector. This
obviously isn't an issue in C++ with manual memory management.

> However, I wouldn't call this message AS_LAUNCH_INPUT_SERVER - what
> happened is that its event stream got lost, the event dispatcher
> doesn't know anything about this stream; it doesn't have to be an
> input_server, it could well be some network stream or whatever. Maybe
> AS_EVENT_STREAM_LOST/GONE/whatever would be more appropriate.
> The Desktop itself will then try to restore the previous event stream -
> which, for now, is always the input server, of course.

I decided on AS_EVENT_STREAM_CLOSED. I also learned the hard way that
putting this in an arbitrary place in ServerProtocol.h can cause
problems (everything needs to be recompiled, and even then there were
some issues). When I put it right before AS_LAST_CODE things worked
better.

> Notice, however, that current servers (including the input_server) run
> as an application in that desktop anyway, so they will be quit when the
> desktop object is discarded. But since that's a future extension to
> multi-user anyway, it doesn't matter that much for now.

Here lies the problem I'm having with this new feature: the
AS_GET_DESKTOP message sent to the app_server by the Application class
in the app kit is what causes the Desktop object to be created, which
would launch the input_server. But an application must be created for
this to happen. In our current Bootscript I think the first app is the
syslog_daemon, with the input_server next. I'm not sure I like this
dependency, though from what I can see in the R5 Bootscript it may
work the same way.

> Since our app_server links against libbe.so, and our registrar is
> already working, BRoster::Launch() should work fine, I'd guess. If
> not, load_image() could still be used.

This seems to work, except for the issue described above.

> Currently, the kernel does not check if you're waiting for yourself
> (which will always just block forever until someone kills you, or
> interrupts you with a signal). It's the same on BeOS, so I'm not sure
> we can change this; I think it would make more sense to return
> B_BAD_VALUE in this case - if you just wanted to wait forever until a
> signal you could do this via sleep(B_INFINITE_TIMEOUT).

Right now I just set fThread to -1 before calling Unset. This seems to
work in the libbe_test environment.

OK, to make matters worse, what you described for detecting when the
input_server has died does not seem to work. After a lot of trouble
and frustration I finally got my changes running in QEMU, and when I
sent a kill -HUP to the input_server it quit and never returned. None
of the debugging output I added to EventDispatcher showed up, so it
seems the loop doesn't quit when the input_server is gone. Unless I'm
missing something.

Ryan
