#8007: [app_server] fully unresponsive when resizing window of big textfile
-----------------------------+----------------------------
   Reporter:  ttcoder        |       Owner:  axeld
       Type:  bug            |      Status:  new
   Priority:  normal         |   Milestone:  R1
  Component:  System/Kernel  |     Version:  R1/Development
 Resolution:                 |    Keywords:
 Blocked By:                 |    Blocking:  7882, 8136
Has a Patch:  1              |    Platform:  All
-----------------------------+----------------------------

Comment (by bonefish):

 Replying to [comment:87 jua]:
 > * Was surprised to find that just moving the mouse cursor causes
 > torrents of calls to `set_port_owner()`, constantly changing the
 > ownership of a port back and forth between the registrar and whatever
 > application is responsible for the area below the mouse pointer. So far
 > I know only a little about app_server/registrar internals, so I just
 > assume that this ownership ping-pong is intentional and not a bug?

 That is due to how synchronous messaging is implemented (cf. the
 `BMessage` implementation). A temporary reply port is transferred to the
 target team before sending the message. This way, if the target team dies
 or sending the reply fails for some reason, the reply port is deleted and
 the thread waiting for the reply on that port wakes up. After retrieving
 the reply, the port is reclaimed. So for a successful synchronous message
 exchange one sees two `set_port_owner()` calls, besides the less
 problematic two `write_port()`s, two `read_port()`s, and two
 `get_port_info()`s.

 TBH, I don't find this strategy particularly elegant, and while it works
 ATM, it will fail once we go multi-user and restrict port operations as
 part of the security concept. So at some point we'll have to extend the
 API (or use a completely new mechanism). It's probably a good idea to
 have a look at what kind of messages the registrar is sending and whether
 it would be possible to avoid synchronous messaging in this case.
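 To illustrate why a single successful round trip produces exactly two
 ownership transfers, here is a minimal, self-contained sketch. It uses a
 toy `Port` struct and a plain field assignment in place of Haiku's real
 kernel port objects and `set_port_owner()` syscall; the names and the
 counter are purely illustrative, not the actual implementation:

```cpp
#include <cstdint>

// Toy stand-ins for Haiku's IDs; real code uses team_id/port_id from OS.h.
using team_id = int32_t;

struct Port {
    team_id owner;
    int     setOwnerCalls = 0;  // counts ownership transfers for the demo
};

// Models set_port_owner(): moves the port into another team's port list.
void set_port_owner(Port& port, team_id newOwner)
{
    port.owner = newOwner;
    port.setOwnerCalls++;
}

// One synchronous message exchange, as described above: the sender hands
// its temporary reply port to the target team *before* sending, so that a
// dying target tears the port down and unblocks the waiting sender; after
// the reply arrives, the sender reclaims the port.
int synchronous_exchange(team_id sender, team_id target)
{
    Port replyPort{sender};

    set_port_owner(replyPort, target);  // transfer #1: guard against target death
    // ... write_port() the request, target read_port()s it,
    //     target write_port()s the reply, sender read_port()s it ...
    set_port_owner(replyPort, sender);  // transfer #2: reclaim the reply port

    return replyPort.setOwnerCalls;
}
```

 With the registrar on one side of many such exchanges, every exchange
 bounces the reply port's ownership back and forth, which is exactly the
 ping-pong jua observed.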
 > Anyway, `set_port_owner()` is not as lightweight as the function name
 > makes it seem: it has to look up a team ID, look up a port, and then
 > atomically move the port from one list to another... and all that
 > protected by a single mutex (and in the RW-lock-branch version it
 > always acquires the lock for writing).
 >
 > * So another way to handle the team-port-list was needed. Two solutions
 > came to mind: a) a separate lock per team as part of the team data
 > structure or b) lock striping. I've gone with b) for now and it works
 > well enough. Instead of a single mutex guarding all team-port-lists
 > there is now an array of 8 mutexes. When code wants to access the list
 > of team `x` it has to lock mutex number `(x % 8)`. The choice of '8' is
 > somewhat arbitrary, maybe 16 would be better, or maybe not... guess I
 > need to benchmark there.

 More is better. :-) Mutexes are really cheap, no need to be skimpy.
 Alternatively it might be possible to use the `Team` lock (needs
 checking) or even introduce a new per-`Team` mutex for ports (and maybe
 other resources).

 > * I've also implemented the ports-by-name hash and use it in the
 > RW-lock version for `find_port()` lookups. It doesn't influence the
 > freezing though, because `find_port()` is rarely called (at least in my
 > current test cases). The remaining suggestion about the locking order
 > is still to be tried out.

 I listed the locking order change first (before switching to a R/W
 lock), since I think that is a big deal. After that change there should
 be very little work done while the global lock is held -- most
 importantly, no other locking! -- so it should be a lot less likely that
 a thread is preempted while holding the lock. So with that change alone
 I'd expect the lock contention to drop dramatically.

--
Ticket URL: <http://dev.haiku-os.org/ticket/8007#comment:88>
Haiku <http://dev.haiku-os.org>
Haiku - the operating system.
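 For reference, the lock-striping idea described above can be sketched in
 portable C++ (using `std::mutex` instead of kernel mutexes; the names
 `kStripeCount`, `TeamPortList`, and `lock_for_team` are made up for this
 illustration and are not the identifiers in the actual patch):

```cpp
#include <array>
#include <cstdint>
#include <mutex>
#include <vector>

using team_id = int32_t;
using port_id = int32_t;

constexpr size_t kStripeCount = 8;  // arbitrary, as in the patch; tune by benchmark

// One per-team port list; contents simplified to bare port IDs.
struct TeamPortList {
    std::vector<port_id> ports;
};

// Instead of one global mutex, an array of stripes. Team x's list is
// guarded by stripe (x % kStripeCount), so operations on different teams
// usually contend on different mutexes.
std::array<std::mutex, kStripeCount> sTeamListLocks;

std::mutex& lock_for_team(team_id team)
{
    return sTeamListLocks[static_cast<size_t>(team) % kStripeCount];
}

void add_port_to_team(TeamPortList& list, team_id team, port_id port)
{
    std::lock_guard<std::mutex> guard(lock_for_team(team));
    list.ports.push_back(port);
}
```

 One subtlety of striping worth noting: an operation that touches two
 teams' lists at once (e.g. transferring a port between teams) may need
 two stripes, and then the stripes must be acquired in a fixed order
 (say, ascending array index) to avoid deadlock.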