[haiku-development] Re: Kernel panic while wget -ing large files - BFS bug?

  • From: "Axel Dörfler" <axeld@xxxxxxxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Sun, 01 Jul 2007 12:37:04 +0200 CEST

Travis Geiselbrecht <geist@xxxxxxxxxx> wrote:
> Looks like it ran out of virtual memory while writing to the file. I
> think the file cache code will currently try to cache the entire file
> while it's active, so it'll eventually end up requiring 700MB of RAM
> during the writing process. I think this is the source of most of the
> "I just tried to do a bunch of stuff to a huge pile of files" bugs
> that people see.

Exactly.
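
To illustrate the effect, here is a simplified, hypothetical sketch (not
the actual file cache code, all names made up): a write path that keeps a
cache page for every block it touches and never trims anything while the
file is active will grow its footprint linearly with the file size.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    static const size_t kPageSize = 4096;

    struct file_cache {
        // one cached page per file block, never trimmed while the file is active
        std::vector<uint8_t*> pages;
    };

    // Hypothetical write path: caches every block it writes.
    void cache_write(file_cache& cache, const uint8_t* /*data*/, size_t length,
        uint64_t offset)
    {
        size_t firstPage = size_t(offset / kPageSize);
        size_t lastPage = size_t((offset + length + kPageSize - 1) / kPageSize);

        if (cache.pages.size() < lastPage)
            cache.pages.resize(lastPage, NULL);

        for (size_t i = firstPage; i < lastPage; i++) {
            if (cache.pages[i] == NULL)
                cache.pages[i] = new uint8_t[kPageSize];
                    // stays allocated as long as the file is in use
            // ... copy the relevant slice of the caller's buffer into the page ...
        }
        // Writing a 700MB file this way pins roughly 700MB of pages until the
        // cache is torn down -- the "out of virtual memory" situation above.
    }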

> The back end of the VM is incomplete in that it doesn't properly  
> reclaim pages when it runs out, so when you stress the system by  
> doing stuff that hits a lot of memory, it'll tend to get unhappy.

I had already started working on that, but postponed it until we've 
solved some other VM problems, because stealing pages will likely 
expose lots of race conditions and other issues again.
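
For what "stealing pages" means here, a conceptual sketch (an assumption
on my part, not the real Haiku VM code; the queues and commented-out
helpers are invented for illustration): when the free page count drops
below a low-water mark, walk the inactive queue and recycle pages that
are neither busy nor dirty.

    #include <cstddef>
    #include <deque>

    struct vm_page {
        bool dirty;
        bool busy;      // currently used by I/O or a fault handler
    };

    struct page_queues {
        std::deque<vm_page*> inactive;  // pages not recently referenced
        std::deque<vm_page*> free;
    };

    static const size_t kFreePagesLowWater = 1024;

    // Hypothetical reclaimer, run when free memory gets scarce.
    void steal_pages(page_queues& queues)
    {
        size_t toScan = queues.inactive.size();
        while (toScan-- > 0 && queues.free.size() < kFreePagesLowWater) {
            vm_page* page = queues.inactive.front();
            queues.inactive.pop_front();

            if (page->busy) {
                queues.inactive.push_back(page);   // in use right now, revisit later
                continue;
            }
            if (page->dirty) {
                // write_back_page(page);          // hypothetical: flush before reuse
                page->dirty = false;
            }
            // unmap_page(page);                   // hypothetical: the racy part --
            //                                     // other threads may still reference it
            queues.free.push_back(page);
        }
    }

The racy part is exactly the unmapping/reference handling, which is why I
want the other VM problems out of the way first.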

Bye,
   Axel.
