[haiku-development] Re: Kernel panic while wget -ing large files - BFS bug?

  • From: Travis Geiselbrecht <geist@xxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Sat, 30 Jun 2007 22:07:46 -0700

On Jun 29, 2007, at 10:11 AM, Pieter Panman wrote:
Hey,

Been away from BeOS for a long time, but I still follow the Haiku progress.
I'm impressed at the work you've done so far.

I tried out the networking a bit from within VMware Player 2.0.0 build-45731. First I created a large BFS volume in BeOSMax (with DriveSetup), which I then use in VMware Player (see
http://haiku-os.org/community/forum/larger_bfs_harddrive_images_for_vmware_player).

When I try to download a 700 MB file using wget, it crashes after a few
seconds. Files of around 10 MB are okay...
The files are served with Apache on the host computer.

Error: "PANIC: vm_allocate_page: out of memory! page state = 4"
I also get "Disabling DMA because of too many errors"; maybe that is related.

So it is either a networking issue or an HDD issue. I'm using the e1000
network device in VMware.

I'm not sure whether this bug has already been reported; if it hasn't, I can submit it. If you need more information, let me know. I don't have a build environment (yet)...

Looks like it ran out of virtual memory while writing to the file. I think the file cache code currently tries to cache the entire file while it's active, so it'll eventually end up requiring 700 MB of RAM during the writing process. I think this is the source of most of the 'I just tried to do a bunch of stuff to a huge pile of files' bugs that people see.
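
To make that concrete, here is a rough standalone sketch in plain C (illustrative only, not the actual Haiku file cache code): if every page written to an active file stays resident in the cache, the memory needed grows linearly with the file size, so a 700 MB download eventually wants roughly 700 MB of pages.

    /* Illustrative sketch only -- not Haiku code. Models a write path whose
     * cache keeps every written page resident while the file stays active. */
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE (4096ULL)
    #define FILE_SIZE (700ULL * 1024 * 1024)    /* the 700 MB download */

    int main(void)
    {
        unsigned long long cached = 0;

        for (unsigned long long off = 0; off < FILE_SIZE; off += PAGE_SIZE) {
            /* stands in for allocating a page for the file cache */
            void *page = malloc(PAGE_SIZE);
            if (page == NULL) {
                fprintf(stderr, "out of memory after caching %llu MB\n",
                        cached / (1024 * 1024));
                return 1;
            }
            cached += PAGE_SIZE;
            /* deliberately never freed: the cache holds on to the page
             * for as long as the file is active */
        }

        printf("whole file cached: %llu MB resident\n", cached / (1024 * 1024));
        return 0;
    }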

The back end of the VM is incomplete in that it doesn't properly reclaim pages when it runs out, so when you stress the system by doing things that touch a lot of memory, it'll tend to get unhappy.
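
Here is a similarly rough sketch of the allocator side (again just an illustration, not the real VM back end): with no page reclamation, the only option once the free list is empty is to give up, whereas a complete back end would first evict cached pages and keep going.

    /* Illustrative sketch only -- not the Haiku VM. Contrasts an allocator
     * that panics when the free list is empty with one that reclaims cache
     * pages under memory pressure. */
    #include <stdio.h>
    #include <stdlib.h>

    #define TOTAL_PAGES 1000            /* pretend physical memory */

    static int free_pages = TOTAL_PAGES;
    static int cache_pages = 0;         /* pages held by the file cache */

    /* Incomplete back end: nothing is ever given back, so just give up. */
    static int alloc_page_no_reclaim(void)
    {
        if (free_pages == 0) {
            fprintf(stderr, "PANIC: out of memory! nothing to reclaim\n");
            exit(1);
        }
        free_pages--;
        return 0;
    }

    /* What a finished back end would do: evict a cached page under pressure. */
    static int alloc_page_with_reclaim(void)
    {
        if (free_pages == 0 && cache_pages > 0) {
            cache_pages--;              /* write back / drop one cached page */
            free_pages++;
        }
        if (free_pages == 0)
            return -1;                  /* genuinely out of memory */
        free_pages--;
        return 0;
    }

    int main(void)
    {
        /* Write a 5000-page file on a 1000-page machine; every written page
         * lands in the cache and stays there. */
        for (int i = 0; i < 5000; i++) {
            /* swap in alloc_page_with_reclaim() to see the write complete */
            if (alloc_page_no_reclaim() < 0) {
                fprintf(stderr, "write failed at page %d\n", i);
                return 1;
            }
            cache_pages++;
        }
        printf("file written; %d pages still cached\n", cache_pages);
        return 0;
    }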

Travis
