[haiku-bugs] Re: [Haiku] #8007: [app_server] fully unresponsive when resizing window of big textfile

  • From: "bonefish" <trac@xxxxxxxxxxxx>
  • Date: Tue, 08 Nov 2011 12:21:54 -0000

#8007: [app_server] fully unresponsive when resizing window of big textfile
   Reporter:  ttcoder             |      Owner:  axeld
       Type:  bug                 |     Status:  new
   Priority:  normal              |  Milestone:  R1
  Component:  Servers/app_server  |    Version:  R1/Development
 Resolution:                      |   Keywords:
 Blocked By:                      |   Blocking:
Has a Patch:  0                   |   Platform:  All

Comment (by bonefish):

 Replying to [comment:18 axeld]:
 > While this doesn't solve the general priority inversion issue, we could
 > at least improve the ports problem by having 'real time' ports that go
 > into another port list.

 I don't think this would help much in this case. It might get the mouse
 cursor going again, but pretty much everything else would still be toast.

 > I could imagine not using strict FIFO order in mutexes to get quite

 The implementation would be rather simple, I think. Instead of queuing new
 threads at the end of the waiting threads list, they would be inserted
 according to their priority. That might improve the situation, but still,
 once a low priority thread gets the lock and is prevented from running by
 a busy higher priority thread, any other thread that tries to acquire the
 lock will be blocked, resulting in very high latencies for high priority
 threads (like the 0.22 s observed in the example).

 A fairer scheduler -- that allots lower priority threads more run time --
 could improve the situation as well. That would also increase latencies
 for higher priority threads, though.

 Implementing priority inheritance is the only strategy that seems to solve
 this kind of issue in general. It's unfortunately also not completely
 trivial to do.

 The problem at hand is actually caused by a relatively high priority (15)
 thread doing a lot of work, which conflicts with the general policy "the
 higher the priority the less work should be done". So, regardless of
 whether or not we address the priority inversion issue at kernel level,
 BTextView or StyledEdit (whoever is to blame) should also be fixed to
 offload the work to a normal priority thread.

 Replying to [comment:16 SeanCollins]:
 > I got a system time of 15min and a kernel time of 50min to build Haiku.
 > The disparity is much smaller on a Linux system. I just wouldn't expect
 > such a drastic difference in compile time. Maybe this is all pointing to
 > the same symptom.

 I don't think this is related at all. I analyzed the Haiku build test case
 in particular a while back and didn't notice any problem of that kind. I
 believe the main difference is that the Linux and FreeBSD kernels are a
 lot better optimized than Haiku's. E.g. we don't do any pre-faulting. IIRC
 FreeBSD maps up to 7 or 8 additional pages on a page fault, which
 decreases the number of page faults significantly, and from what I recall
 from profiling, page faults make up a significant chunk of the kernel work
 that happens while compiling. Furthermore, Linux and FreeBSD have much
 better optimized common functions (like those in <string.h>) than Haiku,
 which reduces both kernel and user times (the latter are also much higher
 on Haiku than on the other systems). So I believe the performance gap is
 mostly caused by missing optimization in Haiku. Also note that if you
 tested with the default kernel debug level (2), that adds significantly to
 the kernel time.

 Replying to [comment:17 ttcoder]:
 > @bonefish if you suspect a particular rev. to be the trigger of the
 > regression (or if a range of revs are candidates for bisecting) I can
 > test that here if you need. But in your latest comment it seems the
 > suspicion is not on a recent regression, but on mutex acquisition and
 > thread scheduling being "out of phase" with each other? So a thread can
 > make a system call to acquire_sem(), and the kernel may grant that
 > semaphore but ''not'' schedule the thread to run (and eventually release
 > the sem) immediately, hmmm. The kernel neophyte in me wonders, why not
 > do it the other way around: when the kernel schedules a given thread for
 > running (according to priority ..etc), at ''that'' time it also looks at
 > what system calls are pending for that thread, and grants it the
 > resource.. (if available, otherwise it goes on to the next schedulable
 > thread).

 ATM the scheduler only considers threads that are ready to run. It doesn't
 even know about waiting threads. When a thread releases a mutex (similar
 for other locking primitives) it picks a thread currently waiting on that
 mutex (ATM strictly FIFO) as the next lock owner and puts it in the ready-
 to-run scheduling queue. This makes the thread, which now holds the lock,
 eligible to run. When it will actually run is a decision of the scheduler.
 As written above, changing the FIFO strategy to a priority based one
 (which goes in the direction you suggest) wouldn't be that hard, but it
 wouldn't fully solve the issue. Moreover, since higher priority threads
 would be able to overtake lower priority ones, the latter can now starve
 while waiting on a lock, which is an undesirable property as well.

 Postponing the decision of which waiting thread becomes the new lock owner
 even further, into the scheduler (which is what you suggest), would
 make the scheduler quite a bit more complicated. It would no longer only
 need to consider threads that are ready to run, but also sets of threads
 from which to maybe choose one. Possible, but not even a full solution to
 the problem either, since once the scheduler has chosen a thread, that
 thread might be preempted while still holding the lock, which would again
 set the stage for priority inversion.

Ticket URL: <http://dev.haiku-os.org/ticket/8007#comment:19>
Haiku <http://dev.haiku-os.org>
Haiku - the operating system.
