[haiku-appserver] Re: Shared Memory Allocation

  • From: Gabe Yoder <gyoder@xxxxxxxxxxx>
  • To: haiku-appserver@xxxxxxxxxxxxx
  • Date: Sat, 4 Sep 2004 09:57:39 -0400

> I just changed our pool allocator from a bunch of C code (and ugly at
> that) to a class so we can have multiple pool managers in the same
> application without conflicts. We are going the same route as R5 and
> setting up a general application area. We will eventually be moving
> BBitmap storage to this shared area to prevent a bad app from
> overwriting BBitmap data in another app, but one question still
> remains: how should we send over a large object? Should we create a new
> area, dump it there, and tell the server to look there, or should the
> client ask the app_server for a memory chunk to write to, write to it,
> and then send the message? I'm kinda concerned about overhead and I
> wondered what the rest of you gentlemen thought.

Okay, first comes my disclaimer - I have only been awake for about an hour, so 
my brain isn't on full power.  Since we want to deal with potentially large 
chunks of memory, we should try to limit memory allocation and deallocation as 
much as possible; otherwise we are going to start eating up time mucking with 
memory.  Our usual strategy at work is to allocate a chunk of memory, hang 
onto it after using it, and then reuse it later (when necessary, we increase 
the size of the memory chunk).  My first thought is that we should have the 
client request a chunk of a given size; the server would respond with a chunk 
of at least the requested size, and the client would then fill in the data and 
send back a message.
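
To make that handshake concrete, here is a rough sketch of the client side of 
the exchange using areas.  The message code and field names are made up for 
illustration (the real client/server link isn't BMessage based), but the 
request/clone/fill sequence is the point:

    #include <OS.h>
    #include <Message.h>
    #include <Messenger.h>

    // Client side: ask the server for at least "size" bytes of shared
    // storage, then map the area it hands back into our address space.
    void* AcquireSharedChunk(BMessenger& server, size_t size, area_id* outClone)
    {
        BMessage request('sREQ');                 // hypothetical request code
        request.AddInt32("size", (int32)size);

        BMessage reply;
        if (server.SendMessage(&request, &reply) != B_OK)
            return NULL;

        // The server creates an area of at least "size" bytes (rounded up
        // to whole pages on its end) and replies with the area's id.
        int32 serverArea;
        if (reply.FindInt32("area", &serverArea) != B_OK)
            return NULL;

        void* address = NULL;
        *outClone = clone_area("shared chunk", &address, B_ANY_ADDRESS,
            B_READ_AREA | B_WRITE_AREA, (area_id)serverArea);
        if (*outClone < 0)
            return NULL;

        return address;   // fill this in, then tell the server we are done
    }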

Now that my brain is thinking a bit more, I am remembering that the purpose of 
the pool allocator was to handle some of that memory reuse that I mentioned 
before.  If we handle the allocation on the client side, then the allocator 
and all of its associated memory can (and probably should) be freed on 
application exit.  Of course, whenever a new application starts, it will need 
to have the allocator grab new memory.  If we keep the allocation on the 
server side, we could assign a pool allocator to a single application and, 
after that application terminates, hang onto the allocator and save it for use 
with a new application.  That could reduce the amount of memory allocation and 
deallocation, but we would have more messaging overhead, and we would probably 
need some strategy for deciding when to get rid of spare pool allocators (so 
we don't hang onto huge amounts of unneeded memory for extended periods of 
time).
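
Something along these lines is what I have in mind for keeping spare 
allocators around on the server side.  PoolAllocator here is just a stand-in 
for the real pool allocator class, and the cap on spares is an arbitrary 
policy for the sketch:

    #include <List.h>

    // Stand-in for the real pool allocator class discussed in this thread.
    class PoolAllocator {
    public:
        PoolAllocator() {}
        ~PoolAllocator() {}
    };

    class PoolCache {
    public:
        static const int32 kMaxSparePools = 4;  // arbitrary cap, illustration

        // Hand a spare pool to a newly started application, or make a new one.
        PoolAllocator* AcquireForApp()
        {
            if (!fSpare.IsEmpty())
                return (PoolAllocator*)fSpare.RemoveItem(fSpare.CountItems() - 1);
            return new PoolAllocator();
        }

        // When an application quits, keep its pool for reuse unless we
        // already have plenty; otherwise give the memory back to the system.
        void ReleaseFromApp(PoolAllocator* pool)
        {
            if (fSpare.CountItems() < kMaxSparePools)
                fSpare.AddItem(pool);
            else
                delete pool;
        }

    private:
        BList fSpare;
    };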

The more I write, the less sure I am about what answer I should give.  My 
current thought is that handling the allocation on the client side would be a 
bit simpler to implement and faster for passing memory around, while the 
server-side implementation could save a bit of overhead on application startup 
(although in some cases the overhead would be the same).  So I guess we are 
probably best off going with client-side allocation.  Now I am sure that Adi 
will probably have some big explanation for why I am wrong ;-)
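
For completeness, the client-side version could be as simple as this: the 
client makes the area itself, fills it, and only passes the area_id across in 
the message.  The function name and protection flags are just what I would 
pick for a sketch:

    #include <OS.h>
    #include <string.h>

    // Client: copy "size" bytes of object data into a fresh area and return
    // its id, which then gets sent to the server in an ordinary message.
    area_id PublishObject(const void* data, size_t size)
    {
        // Areas come in whole pages, so round the requested size up.
        size_t areaSize = (size + B_PAGE_SIZE - 1) & ~((size_t)B_PAGE_SIZE - 1);

        void* address = NULL;
        area_id area = create_area("client object", &address, B_ANY_ADDRESS,
            areaSize, B_NO_LOCK, B_READ_AREA | B_WRITE_AREA);
        if (area < 0)
            return area;

        memcpy(address, data, size);
        return area;

        // The server side would then do roughly:
        //   void* where = NULL;
        //   clone_area("object clone", &where, B_ANY_ADDRESS, B_READ_AREA, area);
    }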


Gabe
