[haiku-bugs] Re: [Haiku] #3768: create_image -i 943718400 (or other large sizes) results in freeze

  • From: "bonefish" <trac@xxxxxxxxxxxx>
  • Date: Tue, 26 Jan 2010 17:59:23 -0000

#3768: create_image -i 943718400 (or other large sizes) results in freeze
---------------------------+------------------------------------------------
 Reporter:  anevilyak      |       Owner:  bonefish      
     Type:  bug            |      Status:  new           
 Priority:  normal         |   Milestone:  R1            
Component:  System/Kernel  |     Version:  R1/Development
 Keywords:                 |   Blockedby:                
 Platform:  All            |    Blocking:                
---------------------------+------------------------------------------------

Comment(by bonefish):

 Replying to [comment:21 anevilyak]:
 > I'm curious, would some of the recent VM changes make the improvements
 > listed in this ticket a bit easier to implement?

 Nope, not really. It's not that complicated anyway, just quite a bit of
 work.

 > The system still behaves quite horrendously under memory pressure; I
 > just tried loading a 500MB data file into DebugAnalyzer and pretty much
 > had the entire system go completely unresponsive.

 DebugAnalyzer is really a memory hog. Not only does it read the complete
 file into memory, it also uses considerably more memory for the various
 analysis data it computes. I haven't checked, but the total memory usage
 is probably 2 to 3 times the size of the file, maybe more. Even worse,
 DebugAnalyzer iterates through the data more than once, so if everything
 doesn't fit into RAM at once, at least part of it will always be paged
 out, and there will probably be a lot of disk thrashing going on.
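 To illustrate (a hypothetical sketch, not DebugAnalyzer's actual code;
 the buffer size is made up), repeated sequential passes over a data set
 larger than free RAM are about the worst case for an LRU-like pager,
 because each pass faults back in exactly the pages the previous pass
 already pushed out:

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  int main()
  {
      // Pretend this buffer exceeds the machine's free RAM.
      const size_t kSize = 512UL * 1024 * 1024;
      char* data = static_cast<char*>(malloc(kSize));
      if (data == NULL)
          return 1;

      // Pass 0: the first touch faults every page in.
      memset(data, 1, kSize);

      long sum = 0;
      for (int pass = 1; pass <= 3; pass++) {
          // Under memory pressure, each sequential pass revisits pages
          // that were evicted while the rest of the buffer was being
          // scanned, so the whole buffer is paged in again every time.
          for (size_t i = 0; i < kSize; i += 4096)
              sum += data[i];
          printf("pass %d done\n", pass);
      }

      free(data);
      return sum == 0 ? 1 : 0;
  }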

 > Dropping the responsible threads into the userland debugger via KDL had
 > no measurable effect even after giving the system ~20 minutes to try
 > and recover, which makes me wonder if it wasn't hitting another
 > deadlock situation.

 As soon as the memory of periodically running threads has been paged
 out, things really don't look good anymore. While a thread is waiting
 for one page to be paged in again, the page it touched before will
 already have aged enough to be eligible for page-out. So until the
 memory pressure is relieved, there will be serious disk thrashing going
 on. It might be that the system was actually still making progress, just
 slowed down to a crawl. Possibly after a few more days it might even
 have recovered. :-)

 Anyway, as written above, we need to significantly improve the page
 aging algorithm to do better in those situations.
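 For reference, "page aging" here means an LRU approximation along the
 lines of the following second-chance sketch (a generic illustration,
 not Haiku's actual VM code; the names and the age range are made up):

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  struct Page {
      bool referenced;  // set by (simulated) hardware on access
      uint8_t age;      // 0 means eligible for page-out
  };

  // One sweep over all pages: referenced pages get a fresh lease,
  // idle pages move one step closer to eviction.
  void age_pages(std::vector<Page>& pages, uint8_t maxAge)
  {
      for (Page& page : pages) {
          if (page.referenced) {
              page.referenced = false;
              page.age = maxAge;
          } else if (page.age > 0)
              page.age--;
      }
  }

  int main()
  {
      std::vector<Page> pages(4, Page{false, 3});
      pages[1].referenced = true;  // simulate an access to page 1

      for (int sweep = 0; sweep < 4; sweep++) {
          age_pages(pages, 3);
          for (size_t i = 0; i < pages.size(); i++)
              printf("sweep %d, page %zu: age %d\n", sweep, i,
                  (int)pages[i].age);
      }
      return 0;
  }

 The failure mode described above falls out of such a scheme directly:
 while a thread is blocked on a page-in, none of its other pages get
 referenced, so the sweeps keep draining their ages and reclaiming
 exactly the memory the thread needs next.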

 Regarding the original issue of the ticket, I believe it was fixed in
 r35299. I'll leave the ticket open for the time being, as it seems I had
 some good ideas in [comment:19] for improving the overall situation.

-- 
Ticket URL: <http://dev.haiku-os.org/ticket/3768#comment:22>
Haiku <http://dev.haiku-os.org>
Haiku - the operating system.
