[haiku-development] Re: Latest changes and general status.

  • From: Bruno Albuquerque <bga@xxxxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Wed, 23 Apr 2008 12:13:40 -0300

Axel Dörfler wrote:

Actually, that doesn't have to mean anything. For example, all slabs are counted towards normal memory as well - and that's what the block cache is using. Since the slabs are deallocated lazily, the memory is not freed when it could be freed, but only when someone needs to free it (ie. when memory is low).

So I did a simple test: I ran "unzip haiku_src.zip; rm -rf haiku_src" in an infinite loop. Eventually I ran out of memory. Even taking into account what you said above, this should not happen at all.

So while we need a better mechanism to track committed vs. cached memory (the slabs are tricky, though, as one might not be able to flush all of them even under memory pressure when some blocks are still in use), it doesn't necessarily mean there is a memory leak somewhere.

But do we know how much memory is being used in a slab? Because if that memory can be reused (and I guess it can), we just need to track it as well and report whatever is available as free memory. Or am I missing something?

Anyway, I think there is actually a leak in the network stack (or rather, TCP), but unzipping anything surely shouldn't trigger it :-)

I sure hope that is the case. If unzipping a local file causes TCP to leak, we are probably in really bad shape. ;)

To see where the memory is spent, you can dig around in KDL a bit; the area/heap debugger commands are surely helpful for that.

Will try that later today when I get home.

-Bruno

