[haiku-appserver] Re: [openbeos] Re: Launching the input_server from the app_server...

  • From: "Axel Dörfler" <axeld@xxxxxxxxxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Sat, 23 Dec 2006 11:20:33 +0100 (MET)

"Ryan Leavengood" <leavengood@xxxxxxxxx> wrote:
> On 12/22/06, Axel Dörfler <axeld@xxxxxxxxxxxxxxxx> wrote:
> > It's a common design pattern that an object knows its owner, and
> > sends it a message on some special occurrence.
> > If you don't want to keep a pointer to the desktop, you can also
> > just store its message port and send the message to it directly;
> > but it's guaranteed that the event dispatcher is quit when the
> > desktop is deleted, anyway.
> I see this is used a lot in our code, so I guess there is no problem
> in using it here. I think my concern for this comes from years of
> Java coding, where circular dependencies can cause memory leaks
> because of the garbage collection. This obviously isn't an issue in
> C++ with manual memory management.

Even in Java an object needs to know a target to send messages to,
and it's often the case that this target is also the owner of the
object. It might not always be that clear, though :)
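
The "object notifies its owner" pattern under discussion can be sketched
roughly like this. The names (EventStream, MessageQueue, the message
code) are illustrative stand-ins, not the actual app_server classes; the
point is only that the stream stores its owner's message port and posts
to it on the special occurrence:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for a BeOS-style message port: the owner drains this queue.
struct MessageQueue {
	std::vector<uint32_t> messages;
	void Post(uint32_t what) { messages.push_back(what); }
};

const uint32_t kEventStreamClosed = 'escl';

class EventStream {
public:
	// The stream only stores its owner's port, not the owner itself.
	explicit EventStream(MessageQueue* ownerPort) : fOwnerPort(ownerPort) {}

	// On the special occurrence (the stream closing), tell the owner.
	void Close() { fOwnerPort->Post(kEventStreamClosed); }

private:
	MessageQueue* fOwnerPort;
};
```

Since only the port is stored, there is no circular pointer dependency
to worry about, in C++ or otherwise.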

> I decided on AS_EVENT_STREAM_CLOSED. I also learned the hard way that
> putting this in an arbitrary place in ServerProtocol.h can cause
> problems (well, everything needs to be recompiled, and even then
> there were some problems). When I put it right before AS_LAST_CODE
> things worked better.

I like AS_EVENT_STREAM_CLOSED, but it shouldn't really matter where you
put it in the ServerProtocol.h header - just make sure you rebuild the
image afterwards.
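
For illustration, the placement in question looks roughly like this.
Only AS_EVENT_STREAM_CLOSED and AS_LAST_CODE come from this thread; the
surrounding codes and values are placeholders, not the real contents of
ServerProtocol.h:

```cpp
#include <cassert>

// Illustrative only - the real ServerProtocol.h enum is much larger.
enum {
	AS_GET_DESKTOP = 100,	// placeholder value
	AS_SOME_OTHER_CODE,		// placeholder

	// Appending new codes right before AS_LAST_CODE keeps all existing
	// codes at their previous numeric values, so binaries built against
	// the old header still agree with the server after a partial rebuild.
	AS_EVENT_STREAM_CLOSED,
	AS_LAST_CODE
};
```

Inserting a code in the middle instead shifts every following value by
one, which is why a stale binary can misbehave until everything is
rebuilt.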

> > Notice, however, that current servers (including the input_server)
> > run as an application in that desktop anyway, so they will be quit
> > when the desktop object is discarded. But since that's a future
> > extension to multi-user anyway, it doesn't matter that much for now.
> Here lies the problem I'm having with this new feature: the
> AS_GET_DESKTOP message sent to the app_server by the Application
> class in the app kit is what causes the Desktop object to be created,
> which would launch the input_server. But an application must be
> created for this to happen. In our current Bootscript I think the
> first app is the syslog_daemon, with the input_server next. I'm not
> sure I like this dependency. Though from what I can see in the R5
> Bootscript it may work the same way.

There is no dependency - it's just that the first application that is
started that needs a desktop will create it. And only then do you
actually need an input_server - if there is no desktop, you cannot
input anything either.
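
The lazy-creation behaviour described here can be sketched as below.
The class and method names are assumptions for illustration, not the
actual app_server code; the point is that nothing happens until the
first AS_GET_DESKTOP request arrives:

```cpp
#include <cassert>
#include <map>
#include <memory>

struct Desktop {
	bool inputServerLaunched = false;
	// Pretend creating the desktop also launches the input_server.
	Desktop() { inputServerLaunched = true; }
};

class Server {
public:
	// Called when an application sends AS_GET_DESKTOP for user `userID`.
	Desktop* GetDesktop(int userID)
	{
		std::unique_ptr<Desktop>& slot = fDesktops[userID];
		if (slot == nullptr)
			slot = std::make_unique<Desktop>();	// created on first request
		return slot.get();
	}

	size_t CountDesktops() const { return fDesktops.size(); }

private:
	std::map<int, std::unique_ptr<Desktop>> fDesktops;
};
```

Before the first request there is no desktop at all, and subsequent
requests for the same user just get the existing object back.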

> > Since our app_server links against libbe.so, and our registrar is
> > already working, BRoster::Launch() should work fine, I'd guess.
> > If not, load_image() could still be used.
> This seems to work, except for the issue described above.
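
The launch-with-fallback idea reads roughly like this. The BeOS calls
are replaced by stand-ins so the sketch stays self-contained; the
input_server signature is the standard BeOS one, but the fallback path
and helper names are assumptions:

```cpp
#include <cassert>
#include <cstdint>

typedef int32_t status_t;
const status_t B_OK = 0;
const status_t B_ERROR = -1;

// Stand-in state: whether the registrar is up and Launch() can succeed.
static bool sRegistrarRunning = false;

// Stand-in for BRoster::Launch(signature).
status_t roster_launch(const char* /*signature*/)
{
	return sRegistrarRunning ? B_OK : B_ERROR;
}

// Stand-in for spawning the binary directly via load_image().
status_t load_image_fallback(const char* /*path*/)
{
	return B_OK;
}

// Prefer the roster-based launch; fall back to loading the image.
status_t LaunchInputServer()
{
	status_t status = roster_launch("application/x-vnd.Be-input_server");
	if (status != B_OK)
		status = load_image_fallback("/system/servers/input_server");
	return status;
}
```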


> > Currently, the kernel does not check if you're waiting for yourself
> > (which will always just block forever until someone kills you, or
> > interrupts you with a signal). It's the same on BeOS, so I'm not
> > sure we can change this; I think it would make more sense to return
> > B_BAD_VALUE in this case - if you just wanted to wait forever until
> > a signal you could do this via sleep(B_INFINITE_TIMEOUT).
> Right now I just set fThread to -1 before calling Unset. This seems
> to work in the libbe_test environment.

It'll work under Haiku as well. If we changed wait_for_thread()
semantics (and did not reset fThread), however, it would only work
under Haiku, and not in the test environment - so I guess your solution
is the way to go for now, anyway :-)
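
The workaround can be sketched as follows. wait_for_thread() is a
stand-in that just records which threads were waited on, and the class
and method names are illustrative, not the real EventStream code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

typedef int32_t thread_id;

// Stand-in for the kernel call: record who we would have waited for.
static std::vector<thread_id> sWaitedOn;
void wait_for_thread(thread_id thread) { sWaitedOn.push_back(thread); }

class Stream {
public:
	thread_id fThread = 42;	// pretend this is the stream's reader thread

	void Unset()
	{
		// Waiting on an invalid id is skipped entirely.
		if (fThread >= 0)
			wait_for_thread(fThread);
		fThread = -1;
	}

	// Called from the reader thread itself: clear fThread first, so
	// Unset() never calls wait_for_thread() on the thread we are
	// currently running on (which would block forever).
	void QuitFromOwnThread()
	{
		fThread = -1;
		Unset();
	}
};
```

The guard keeps the teardown path safe regardless of which thread
triggers it, without relying on any change to wait_for_thread().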

> OK, to make matters worse, what you described for detecting when the
> input_server has died does not seem to work. After a lot of trouble
> and frustration I finally got my changes running in QEMU, and when I
> sent a kill -HUP to the input_server it quit and never returned. None
> of the debugging output I added to EventDispatcher showed up, so it
> seems the loop won't quit when the input_server is gone. Unless I'm
> missing something.

That's a bit strange: the input port and the cursor semaphore are both
owned by the input_server - if that server is gone, both objects will
be deleted automatically, and both threads in the app_server will
notice that.
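
The detection mechanism this relies on can be sketched like so: when
the owning team dies, the kernel deletes its ports, and a blocked read
returns an error, letting the loop exit. read_port_stub() here is a
stand-in that simulates the port disappearing after a few events; the
loop shape, not the names, is the point:

```cpp
#include <cassert>
#include <cstdint>

typedef int32_t status_t;
const status_t B_OK = 0;
const status_t B_BAD_PORT_ID = -2;

// Pretend three events arrive, then the owning team (and its port) dies.
static int sEventsLeft = 3;

status_t read_port_stub(int32_t* code)
{
	if (sEventsLeft == 0)
		return B_BAD_PORT_ID;	// port deleted with its owning team
	sEventsLeft--;
	*code = 'evnt';
	return B_OK;
}

// The dispatcher loop: quits as soon as the port goes away.
int RunEventLoop()
{
	int handled = 0;
	int32_t code;
	while (read_port_stub(&code) == B_OK)
		handled++;
	return handled;
}
```

If the loop in the real code never exits, the thing to check is whether
it actually treats the read error as "quit" rather than retrying.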

