[haiku-commits] r41380 - in haiku/branches/developer/bonefish/signals: headers/private/kernel src/system/kernel src/system/kernel/debug

  • From: ingo_weinhold@xxxxxx
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Sun, 8 May 2011 12:26:24 +0200 (CEST)

Author: bonefish
Date: 2011-05-08 12:26:23 +0200 (Sun, 08 May 2011)
New Revision: 41380
Changeset: https://dev.haiku-os.org/changeset/41380

Modified:
   haiku/branches/developer/bonefish/signals/headers/private/kernel/ksignal.h
   haiku/branches/developer/bonefish/signals/headers/private/kernel/team.h
   haiku/branches/developer/bonefish/signals/headers/private/kernel/thread_types.h
   haiku/branches/developer/bonefish/signals/src/system/kernel/debug/system_profiler.cpp
   haiku/branches/developer/bonefish/signals/src/system/kernel/debug/user_debugger.cpp
   haiku/branches/developer/bonefish/signals/src/system/kernel/signal.cpp
   haiku/branches/developer/bonefish/signals/src/system/kernel/team.cpp
   haiku/branches/developer/bonefish/signals/src/system/kernel/thread.cpp
   haiku/branches/developer/bonefish/signals/src/system/kernel/usergroup.cpp
Log:
* Eliminated gTeamSpinlock. Its tasks have been taken over by other locks:
  - Protection of the team hash table (sTeamHash): sTeamHashLock.
  - Protection of the process group hash table (sGroupHash): sGroupHashLock.
  - Protection of ProcessGroup and Session members: The respective objects'
    locks.
  - Protection of Team members: The respective objects' locks (in several cases
    in combination with other locks) or other locks as documented.
  - Ensuring a team remains valid: Reference or lock of the team, lock of its
    parent team (if any) or of its process group.
  As is to be expected, this makes the locking in some places more involved.
* Added a bunch of Team and ProcessGroup locking helper methods.
* Added/extended doxygen function comments, particularly regarding their locking
  requirements/behavior.
* Introduced a new team state TEAM_STATE_SHUTDOWN, which is set by
  team_shutdown_team(). A team in that state is considered already dead for most
  purposes.
* Added return parameter to team_remove_team(): If the team was a leader of a
  session with a controlling TTY, the ID of the foreground process group is
  returned this way. The caller (thread_exit()) sends SIGHUP to it (formerly
  done directly in team_remove_team()).
* Removed ProcessGroup::orphaned and all code maintaining it. The flag was only
  needed to decide whether a formerly non-orphaned process group has become
  orphaned through the termination of a session leader. We can determine that
  without the flag, though: whether the session leader contributed to keeping a
  process group non-orphaned is a simple check, so it suffices to check
  afterwards whether the respective child groups are orphaned. Due to the
  complex locking situation in team_remove_team() we can't do that there,
  though. Instead we add the process groups to a global list
  (sOrphanedCheckProcessGroups) and defer the check until the locking situation
  allows it (orphaned_process_group_check(), called from team_delete_team()).
* thread_get_io_priority(), thread_set_priority(): Locking is also needed if
  the thread is the current thread.
* thread_set_io_priority(): Added missing locking.
* Got rid of SIGNAL_FLAG_TEAMS_LOCKED.
* Added new functions send_signal_to_thread(), send_signal_to_team(), and
  send_signal_to_process_group(). Rewrote send_signal_etc() to use them.
* inherit_parent_user_and_group(): The caller is now responsible for locking.
* has_children_in_group(): Iterate through the children instead of the group
  members (fits our locking situation better).
* getpgid(), getsid(): On error don't return the error code, but return -1 and
  set errno. Fixed _user_process_info(), which relied on the incorrect behavior.
* team_iterate_through_teams(): Changed the return value from Team* to team_id.
  Since the caller doesn't hold a lock, we would otherwise have to return a
  reference or lock the returned object.



Modified: haiku/branches/developer/bonefish/signals/headers/private/kernel/ksignal.h
===================================================================
--- haiku/branches/developer/bonefish/signals/headers/private/kernel/ksignal.h	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/headers/private/kernel/ksignal.h	2011-05-08 10:26:23 UTC (rev 41380)
@@ -11,9 +11,11 @@
 
 
 namespace BKernel {
+       struct Team;
        struct Thread;
 }
 
+using BKernel::Team;
 using BKernel::Thread;
 
 
@@ -22,35 +24,37 @@
 #define SIGNAL_TO_MASK(signal) (1LL << (signal - 1))
 
 // additional send_signal_etc() flag
-#define SIGNAL_FLAG_TEAMS_LOCKED                       (0x10000)
-       // interrupts are disabled and team lock is held
-#define SIGNAL_FLAG_DONT_RESTART_SYSCALL       (0x20000)
+#define SIGNAL_FLAG_DONT_RESTART_SYSCALL       (0x10000)
 
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
-extern bool handle_signals(Thread *thread);
-extern bool is_kill_signal_pending(void);
-extern int has_signals_pending(void *_thread);
-extern bool is_signal_blocked(int signal);
+bool handle_signals(Thread *thread);
+bool is_kill_signal_pending(void);
+int has_signals_pending(void *_thread);
+bool is_signal_blocked(int signal);
 
-extern void update_current_thread_signals_flag();
+status_t send_signal_to_thread(thread_id threadID, uint32 signal, uint32 flags);
+status_t send_signal_to_team(Team* team, uint32 signal, uint32 flags);
+status_t send_signal_to_process_group(pid_t groupID, uint32 signal,
+       uint32 flags);
 
-extern int sigaction_etc(thread_id threadID, int signal,
+void update_current_thread_signals_flag();
+
+int sigaction_etc(thread_id threadID, int signal,
        const struct sigaction *newAction, struct sigaction *oldAction);
 
-extern status_t _user_send_signal(pid_t tid, uint sig);
-extern status_t _user_sigprocmask(int how, const sigset_t *set,
-       sigset_t *oldSet);
-extern status_t _user_sigaction(int sig, const struct sigaction *newAction,
+status_t _user_send_signal(pid_t tid, uint sig);
+status_t _user_sigprocmask(int how, const sigset_t *set, sigset_t *oldSet);
+status_t _user_sigaction(int sig, const struct sigaction *newAction,
        struct sigaction *oldAction);
-extern bigtime_t _user_set_alarm(bigtime_t time, uint32 mode);
-extern status_t _user_sigwait(const sigset_t *set, int *_signal);
-extern status_t _user_sigsuspend(const sigset_t *mask);
-extern status_t _user_sigpending(sigset_t *set);
-extern status_t _user_set_signal_stack(const stack_t *newUserStack,
+bigtime_t _user_set_alarm(bigtime_t time, uint32 mode);
+status_t _user_sigwait(const sigset_t *set, int *_signal);
+status_t _user_sigsuspend(const sigset_t *mask);
+status_t _user_sigpending(sigset_t *set);
+status_t _user_set_signal_stack(const stack_t *newUserStack,
        stack_t *oldUserStack);
 
 #ifdef __cplusplus

Modified: haiku/branches/developer/bonefish/signals/headers/private/kernel/team.h
===================================================================
--- haiku/branches/developer/bonefish/signals/headers/private/kernel/team.h	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/headers/private/kernel/team.h	2011-05-08 10:26:23 UTC (rev 41380)
@@ -22,11 +22,11 @@
 
 status_t team_init(struct kernel_args *args);
 status_t wait_for_team(team_id id, status_t *returnCode);
-void team_remove_team(Team *team);
-port_id team_shutdown_team(Team *team, cpu_status& state);
+
+void team_remove_team(Team *team, pid_t& _signalGroup);
+port_id team_shutdown_team(Team *team);
 void team_delete_team(Team *team, port_id debuggerPort);
-struct ProcessGroup *team_get_process_group_locked(ProcessSession* session,
-                       pid_t id);
+
 Team *team_get_kernel_team(void);
 team_id team_get_kernel_team_id(void);
 team_id team_get_current_team_id(void);
@@ -42,7 +42,7 @@
 int32 team_used_teams(void);
 
 typedef bool (*team_iterator_callback)(Team* team, void* cookie);
-Team* team_iterate_through_teams(team_iterator_callback callback,
+team_id team_iterate_through_teams(team_iterator_callback callback,
        void* cookie);
 
 thread_id load_image_etc(int32 argCount, const char* const* args,

Modified: haiku/branches/developer/bonefish/signals/headers/private/kernel/thread_types.h
===================================================================
--- haiku/branches/developer/bonefish/signals/headers/private/kernel/thread_types.h	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/headers/private/kernel/thread_types.h	2011-05-08 10:26:23 UTC (rev 41380)
@@ -29,11 +29,6 @@
 #define GRAB_THREAD_LOCK()    acquire_spinlock(&gThreadSpinlock)
 #define RELEASE_THREAD_LOCK() release_spinlock(&gThreadSpinlock)
 
-extern spinlock gTeamSpinlock;
-       // NOTE: TEAM lock can be held over a THREAD lock acquisition,
-       // but not the other way (to avoid deadlock)
-#define GRAB_TEAM_LOCK()    acquire_spinlock(&gTeamSpinlock)
-#define RELEASE_TEAM_LOCK() release_spinlock(&gTeamSpinlock)
 
 enum additional_thread_state {
        THREAD_STATE_FREE_ON_RESCHED = 7, // free the thread structure upon reschedule
@@ -44,9 +39,11 @@
 #define THREAD_MAX_SET_PRIORITY                                B_REAL_TIME_PRIORITY
 
 enum team_state {
-       TEAM_STATE_NORMAL,      // normal state
-       TEAM_STATE_BIRTH,       // being contructed
-       TEAM_STATE_DEATH        // being killed
+       TEAM_STATE_NORMAL,              // normal state
+       TEAM_STATE_BIRTH,               // being constructed
+       TEAM_STATE_SHUTDOWN,    // still lives, but is going down
+       TEAM_STATE_DEATH                // only the Team object still exists, threads are
+                                                       // gone
 };
 
 #define        TEAM_FLAG_EXEC_DONE     0x01
@@ -72,6 +69,7 @@
 namespace BKernel {
        struct Team;
        struct Thread;
+       struct ProcessGroup;
 }
 
 
@@ -221,31 +219,43 @@
 struct Team : TeamListEntry, BReferenceable, AssociatedDataOwner {
        DoublyLinkedListLink<Team>      global_list_link;
        Team                    *next;                  // next in hash
-       Team                    *siblings_next;
-       Team                    *parent;
-       Team                    *children;
-       Team                    *group_next;
+       Team                    *siblings_next; // next in parent's list; protected by
+                                                                       // parent's fLock
+       Team                    *parent;                // write-protected by both parent (if any)
+                                                                       // and this team's fLock
+       Team                    *children;              // protected by this team's fLock;
+                                                                       // adding/removing a child also requires the
+                                                                       // child's fLock
+       Team                    *group_next;    // protected by the group's lock
+
+       // process group info -- write-protected by both the group's lock, the
+       // team's lock, and the team's parent's lock
        pid_t                   group_id;
        pid_t                   session_id;
-       struct ProcessGroup *group;
+       ProcessGroup    *group;
+
        int                             num_threads;    // number of threads in this team
        int                             state;                  // current team state, see above
        int32                   flags;
        struct io_context *io_context;
        struct realtime_sem_context     *realtime_sem_context;
        struct xsi_sem_context *xsi_sem_context;
-       struct team_death_entry *death_entry;
+       struct team_death_entry *death_entry;   // protected by fLock
        struct list             dead_threads;
        int                             dead_threads_count;
 
+       // protected by the team's fLock
        team_dead_children dead_children;
        team_job_control_children stopped_children;
        team_job_control_children continued_children;
+
+       // protected by the parent team's fLock
        struct job_control_entry* job_control_entry;
 
        VMAddressSpace  *address_space;
        Thread                  *main_thread;
-       Thread                  *thread_list;
+       Thread                  *thread_list;   // protected by fLock and the global threads
+                                                                       // spinlock
        struct team_loading_info *loading_info; // protected by fLock
        struct list             image_list;             // protected by sImageMutex
        struct list             watcher_list;
@@ -279,12 +289,27 @@
 
        static  Team*                           Create(const char* name, bool kernel);
        static  Team*                           Get(team_id id);
+       static  Team*                           GetAndLock(team_id id);
 
                        bool                            Lock()
                                                                        { mutex_lock(&fLock); return true; }
+                       bool                            TryLock()
+                                                                       { return mutex_trylock(&fLock) == B_OK; }
                        void                            Unlock()
                                                                        { mutex_unlock(&fLock); }
 
+                       void                            UnlockAndReleaseReference()
+                                                                       { Unlock(); ReleaseReference(); }
+
+                       void                            LockTeamAndParent(bool dontLockParentIfKernel);
+                       void                            UnlockTeamAndParent();
+                       void                            LockTeamAndProcessGroup();
+                       void                            UnlockTeamAndProcessGroup();
+                       void                            LockTeamParentAndProcessGroup();
+                       void                            UnlockTeamParentAndProcessGroup();
+                       void                            LockProcessGroup()
+                                                                       { LockTeamAndProcessGroup(); Unlock(); }
+
                        const char*                     Name() const    { return fName; }
                        void                            SetName(const char* name);
 
@@ -336,7 +361,9 @@
        bool                    was_yielded;
        struct scheduler_thread_data* scheduler_data;
 
-       struct user_thread*     user_thread;
+       struct user_thread*     user_thread;    // write-protected by global threads
+                                                                               // spinlock, freely readable from the
+                                                                               // thread itself
 
        struct {
                uint8           parameters[32];
@@ -383,6 +410,7 @@
                struct list     waiters;
        } exit;
 
+       // protected by the global threads spinlock
        struct select_info *select_infos;
 
        struct thread_debug_info debug_info;
@@ -400,6 +428,9 @@
        int                             kernel_errno;
                // kernel "errno" differs from its userspace alter ego
 
+       // TODO: Fix locking for user_time and kernel_time. In several places they
+       // are updated without holding any lock, which makes reading them from
+       // other threads unsafe.
        bigtime_t               user_time;
        bigtime_t               kernel_time;
        bigtime_t               last_time;
@@ -424,6 +455,8 @@
 
                        bool                            Lock()
                                                                        { mutex_lock(&fLock); return true; }
+                       bool                            TryLock()
+                                                                       { return mutex_trylock(&fLock) == B_OK; }
                        void                            Unlock()
                                                                        { mutex_unlock(&fLock); }
 
@@ -436,29 +469,45 @@
        struct ProcessGroup *next;              // next in hash
        pid_t                           id;
        BKernel::Team           *teams;
-       bool                            orphaned;
 
 public:
                                                                ProcessGroup(pid_t id);
                                                                ~ProcessGroup();
 
+       static  ProcessGroup*           Get(pid_t id);
+
                        bool                            Lock()
                                                                        { mutex_lock(&fLock); return true; }
+                       bool                            TryLock()
+                                                                       { return mutex_trylock(&fLock) == B_OK; }
                        void                            Unlock()
                                                                        { mutex_unlock(&fLock); }
 
                        ProcessSession*         Session() const
                                                                        { return fSession; }
                        void                            Publish(ProcessSession* session);
+                       void                            PublishLocked(ProcessSession* session);
 
+                       void                            ScheduleOrphanedCheck();
+                       void                            UnsetOrphanedCheck();
+
 protected:
        virtual void                            LastReferenceReleased();
 
+public:
+                       SinglyLinkedListLink<ProcessGroup> fOrphanedCheckListLink;
+
 private:
                        mutex                           fLock;
                        ProcessSession*         fSession;
+                       bool                            fInOrphanedCheckList;   // protected by
+                                                                                                               // sOrphanedCheckLock
 };
 
+typedef SinglyLinkedList<ProcessGroup,
+       SinglyLinkedListMemberGetLink<ProcessGroup,
+               &ProcessGroup::fOrphanedCheckListLink> > ProcessGroupList;
+
 }      // namespace BKernel
 
 using BKernel::Team;
@@ -467,6 +516,7 @@
 using BKernel::Thread;
 using BKernel::ProcessSession;
 using BKernel::ProcessGroup;
+using BKernel::ProcessGroupList;
 
 
 struct thread_queue {

Modified: haiku/branches/developer/bonefish/signals/src/system/kernel/debug/system_profiler.cpp
===================================================================
--- haiku/branches/developer/bonefish/signals/src/system/kernel/debug/system_profiler.cpp	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/src/system/kernel/debug/system_profiler.cpp	2011-05-08 10:26:23 UTC (rev 41380)
@@ -446,7 +446,7 @@
 
        // teams
        if ((fFlags & B_SYSTEM_PROFILER_TEAM_EVENTS) != 0) {
-               if (team_iterate_through_teams(&_InitialTeamIterator, this) != NULL)
+               if (team_iterate_through_teams(&_InitialTeamIterator, this) >= 0)
                        return B_BUFFER_OVERFLOW;
        }
 

Modified: haiku/branches/developer/bonefish/signals/src/system/kernel/debug/user_debugger.cpp
===================================================================
--- haiku/branches/developer/bonefish/signals/src/system/kernel/debug/user_debugger.cpp	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/src/system/kernel/debug/user_debugger.cpp	2011-05-08 10:26:23 UTC (rev 41380)
@@ -141,7 +141,7 @@
 
 /*!    Updates the thread::flags THREAD_FLAGS_BREAKPOINTS_DEFINED bit of the
        given thread.
-       Interrupts must be disabled and the team lock must be held.
+       Interrupts must be disabled and the thread lock must be held.
 */
 static void
 update_thread_breakpoints_flag(Thread* thread)
@@ -161,9 +161,10 @@
 static void
 update_threads_breakpoints_flag()
 {
-       InterruptsSpinLocker _(gTeamSpinlock);
+       Team* team = thread_get_current_thread()->team;
 
-       Team* team = thread_get_current_thread()->team;
+       TeamLocker teamLocker(team);
+
        Thread* thread = team->thread_list;
 
        if (arch_has_breakpoints(&team->debug_info.arch_info)) {
@@ -177,8 +178,8 @@
 
 
 /*!    Updates the thread::flags B_TEAM_DEBUG_DEBUGGER_INSTALLED bit of the
-       given thread.
-       Interrupts must be disabled and the team lock must be held.
+       given thread, which must be the current thread.
+       Interrupts must be disabled and the global threads spinlock must be held.
 */
 static void
 update_thread_debugger_installed_flag(Thread* thread)
@@ -194,7 +195,7 @@
 
 /*!    Updates the thread::flags THREAD_FLAGS_DEBUGGER_INSTALLED bit of all
        threads of the given team.
-       Interrupts must be disabled and the team lock must be held.
+       The team's lock or the global threads spinlock must be held.
 */
 static void
 update_threads_debugger_installed_flag(Team* team)
@@ -373,18 +374,18 @@
 
        while (true) {
                // get the team
-               InterruptsSpinLocker teamLocker(gTeamSpinlock);
-
-               team = team_get_team_struct_locked(teamID);
-               if (team == NULL || team->death_entry != NULL)
+               team = Team::GetAndLock(teamID);
+               if (team == NULL)
                        return B_BAD_TEAM_ID;
+               BReference<Team> teamReference(team, true);
+               TeamLocker teamLocker(team, true);
 
                // don't allow messing with the kernel team
                if (team == team_get_kernel_team())
                        return B_NOT_ALLOWED;
 
                // check whether the condition is already set
-               SpinLocker threadLocker(gThreadSpinlock);
+               InterruptsSpinLocker threadsLocker(gThreadSpinlock);
                SpinLocker debugInfoLocker(team->debug_info.lock);
 
                if (team->debug_info.debugger_changed_condition == NULL) {
@@ -398,7 +399,7 @@
                team->debug_info.debugger_changed_condition->Add(&entry);
 
                debugInfoLocker.Unlock();
-               threadLocker.Unlock();
+               threadsLocker.Unlock();
                teamLocker.Unlock();
 
                entry.Wait();
@@ -790,7 +791,7 @@
                        requireDebugger, restart);
        } while (result >= 0 && restart);
 
-       // Prepare to continue -- we install a debugger change condition, so no-one
+       // Prepare to continue -- we install a debugger change condition, so no one
        // will change the debugger while we're playing with the breakpoint manager.
        // TODO: Maybe better use ref-counting and a flag in the breakpoint manager.
        Team* team = thread_get_current_thread()->team;
@@ -1040,13 +1041,10 @@
 {
        // Update thread::flags of the thread.
 
-       InterruptsLocker interruptsLocker;
+       InterruptsSpinLocker threadsLocker(gThreadSpinlock);
 
-       SpinLocker teamLocker(gTeamSpinlock);
-       SpinLocker threadLocker(gThreadSpinlock);
-
        Thread *thread = thread_get_thread_struct_locked(threadID);
-       if (!thread)
+       if (thread == NULL)
                return;
 
        update_thread_user_debug_flag(thread);
@@ -1535,6 +1533,9 @@
        team_debug_info teamDebugInfo;
        bool destroyDebugInfo = false;
 
+       TeamLocker teamLocker(nubThread->team);
+               // required by update_threads_debugger_installed_flag()
+
        cpu_status state = disable_interrupts();
        GRAB_TEAM_DEBUG_INFO_LOCK(nubThread->team->debug_info);
 
@@ -1552,6 +1553,8 @@
        RELEASE_TEAM_DEBUG_INFO_LOCK(nubThread->team->debug_info);
        restore_interrupts(state);
 
+       teamLocker.Unlock();
+
        if (destroyDebugInfo)
                teamDebugInfo.breakpoint_manager->RemoveAllBreakpoints();
 
@@ -2462,10 +2465,10 @@
                }
        }
 
-       RELEASE_THREAD_LOCK();
-
        // update the thread::flags fields
        update_threads_debugger_installed_flag(team);
+
+       RELEASE_THREAD_LOCK();
 }
 
 

Modified: haiku/branches/developer/bonefish/signals/src/system/kernel/signal.cpp
===================================================================
--- haiku/branches/developer/bonefish/signals/src/system/kernel/signal.cpp	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/src/system/kernel/signal.cpp	2011-05-08 10:26:23 UTC (rev 41380)
@@ -1,4 +1,5 @@
 /*
+ * Copyright 2011, Ingo Weinhold, ingo_weinhold@xxxxxxx
  * Copyright 2002-2009, Axel Dörfler, axeld@xxxxxxxxxxxxxxxxx
  * Copyright 2002, Angelo Mottola, a.mottola@xxxxxxxxxx
  *
@@ -302,8 +303,10 @@
 }
 
 
-/*! Actually handles the signal - ie. the thread will exit, a custom signal
+/*! Actually handles the signal -- i.e. the thread will exit, a custom signal
        handler is prepared, or whatever the signal demands.
+       Interrupts must be enabled.
+       \param thread The current thread.
 */
 bool
 handle_signals(Thread *thread)
@@ -393,10 +396,13 @@
 
                                        // notify threads waiting for team state changes
                                        if (thread == thread->team->main_thread) {
-                                               InterruptsSpinLocker locker(gTeamSpinlock);
+                                               thread->team->LockTeamAndParent(false);
+
                                                team_set_job_control_state(thread->team,
                                                        JOB_CONTROL_STATE_CONTINUED, signal, false);

+                                               thread->team->UnlockTeamAndParent();
+
                                                // The standard states that the system *may* send a
                                                // SIGCHLD when a child is continued. I haven't found
                                                // a good reason why we would want to, though.
@@ -416,19 +422,25 @@
 
                                        // notify threads waiting for team state changes
                                        if (thread == thread->team->main_thread) {
-                                               InterruptsSpinLocker locker(gTeamSpinlock);
+                                               thread->team->LockTeamAndParent(false);
+
                                                team_set_job_control_state(thread->team,
                                                        JOB_CONTROL_STATE_STOPPED, signal, false);
 
                                                // send a SIGCHLD to the parent (if it does have
                                                // SA_NOCLDSTOP defined)
-                                               SpinLocker _(gThreadSpinlock);
+                                               InterruptsSpinLocker threadsLocker(gThreadSpinlock);
+                                               // TODO: We should send the signal to the team, not the
+                                               // thread.
                                                Thread* parentThread
                                                        = thread->team->parent->main_thread;
                                                struct sigaction& parentHandler
                                                        = parentThread->sig_action[SIGCHLD - 1];
                                                if ((parentHandler.sa_flags & SA_NOCLDSTOP) == 0)
                                                        deliver_signal(parentThread, SIGCHLD, 0);
+
+                                               threadsLocker.Unlock();
+                                               thread->team->UnlockTeamAndParent();
                                        }
 
                                        return true;
@@ -606,87 +618,111 @@
 }
 
 
-int
-send_signal_etc(pid_t id, uint signal, uint32 flags)
+status_t
+send_signal_to_thread(thread_id threadID, uint32 signal, uint32 flags)
 {
-       status_t status = B_BAD_THREAD_ID;
-       Thread *thread;
-       cpu_status state = 0;
+       T(SendSignal(threadID, signal, flags));
 
-       if (signal < 0 || signal > MAX_SIGNO)
-               return B_BAD_VALUE;
+       InterruptsSpinLocker threadsLocker(gThreadSpinlock);
 
-       T(SendSignal(id, signal, flags));
+       Thread* thread = thread_get_thread_struct_locked(threadID);
+       if (thread == NULL)
+               return B_BAD_THREAD_ID;
 
-       if ((flags & SIGNAL_FLAG_TEAMS_LOCKED) == 0)
-               state = disable_interrupts();
+       status_t error = deliver_signal(thread, signal, flags);
+       if (error != B_OK)
+               return error;
 
-       if (id > 0) {
-               // send a signal to the specified thread
+       if ((flags & B_DO_NOT_RESCHEDULE) == 0)
+               scheduler_reschedule_if_necessary_locked();
 
-               GRAB_THREAD_LOCK();
+       return B_OK;
+}
 
-               thread = thread_get_thread_struct_locked(id);
-               if (thread != NULL)
-                       status = deliver_signal(thread, signal, flags);
-       } else {
-               // send a signal to the specified process group
-               // (the absolute value of the id)
 
-               ProcessGroup *group;
+status_t
+send_signal_to_team(Team* team, uint32 signal, uint32 flags)
+{
+       T(SendSignal(team->id, signal, flags));
 
-               // TODO: handle -1 correctly
-               if (id == 0 || id == -1) {
-                       // send a signal to the current team
-                       id = thread_get_current_thread()->team->id;
-               } else
-                       id = -id;
+       InterruptsSpinLocker threadsLocker(gThreadSpinlock);
 
-               if ((flags & SIGNAL_FLAG_TEAMS_LOCKED) == 0)
-                       GRAB_TEAM_LOCK();
+       // TODO: Actually send the signal to the team!
+       status_t error = deliver_signal(team->main_thread, signal, flags);
+       if (error != B_OK)
+               return error;
 
-               group = team_get_process_group_locked(NULL, id);
-               if (group != NULL) {
-                       Team *team, *next;
+       if ((flags & B_DO_NOT_RESCHEDULE) == 0)
+               scheduler_reschedule_if_necessary_locked();
 
-                       // Send a signal to all teams in this process group
+       return B_OK;
+}
 
-                       for (team = group->teams; team != NULL; team = next) {
-                               next = team->group_next;
-                               id = team->id;
 
-                               GRAB_THREAD_LOCK();
+#if 0
+static status_t
+send_signal_to_team(team_id teamID, uint32 signal, uint32 flags)
+{
+       // get the team
+       Team* team = Team::Get(teamID);
+       if (team == NULL)
+               return B_BAD_TEAM_ID;
+       BReference<Team> teamReference(team, true);
 
-                               thread = thread_get_thread_struct_locked(id);
-                               if (thread != NULL) {
-                                       // we don't stop because of an error sending the signal; we
-                                       // rather want to send as much signals as possible
-                                       status = deliver_signal(thread, signal, flags);
-                               }
+       return send_signal_to_team(team, signal, flags);
+}
+#endif
 
-                               RELEASE_THREAD_LOCK();
-                       }
-               }
 
-               if ((flags & SIGNAL_FLAG_TEAMS_LOCKED) == 0)
-                       RELEASE_TEAM_LOCK();
+status_t
+send_signal_to_process_group(pid_t groupID, uint32 signal, uint32 flags)
+{
+       ProcessGroup* group = ProcessGroup::Get(groupID);
+       if (group == NULL)
+               return B_BAD_TEAM_ID;
+       BReference<ProcessGroup> groupReference(group, true);
 
-               GRAB_THREAD_LOCK();
-       }
+       T(SendSignal(-group->id, signal, flags));
 
-       if ((flags & (B_DO_NOT_RESCHEDULE | SIGNAL_FLAG_TEAMS_LOCKED)) == 0)
-               scheduler_reschedule_if_necessary_locked();
+       AutoLocker<ProcessGroup> groupLocker(group);
 
-       RELEASE_THREAD_LOCK();
+       for (Team* team = group->teams; team != NULL; team = team->group_next)
+               send_signal_to_team(team, signal, flags | B_DO_NOT_RESCHEDULE);
 
-       if ((flags & SIGNAL_FLAG_TEAMS_LOCKED) == 0)
-               restore_interrupts(state);
+       groupLocker.Unlock();
 
-       return status;
+       if ((flags & B_DO_NOT_RESCHEDULE) == 0)
+               scheduler_reschedule_if_necessary();
+
+       return B_OK;
 }
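[Editor's note] The loop in send_signal_to_process_group() above defers rescheduling: each per-team delivery passes B_DO_NOT_RESCHEDULE, and a single reschedule happens after the group lock is dropped. A toy model of that shape (all names here are illustrative, not Haiku's):

```cpp
#include <cstddef>

// Toy model of the group-delivery loop: every per-team delivery suppresses
// rescheduling, and the scheduler is poked exactly once for the whole group.
struct Team {
	int signalsDelivered = 0;
	Team* group_next = nullptr;
};

static int sRescheduleCalls = 0;

static void
deliver_to_team_no_reschedule(Team* team)
{
	// stands in for send_signal_to_team(team, signal, flags | B_DO_NOT_RESCHEDULE)
	team->signalsDelivered++;
}

static void
signal_group(Team* firstTeam)
{
	for (Team* team = firstTeam; team != nullptr; team = team->group_next)
		deliver_to_team_no_reschedule(team);

	// stands in for scheduler_reschedule_if_necessary(), called once, after
	// the group lock would have been released
	sRescheduleCalls++;
}
```

The point of the design is that signalling an N-member group costs one reschedule, not N.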
 
 
 int
+send_signal_etc(pid_t id, uint signal, uint32 flags)
+{
+       if (signal < 0 || signal > MAX_SIGNO)
+               return B_BAD_VALUE;
+
+       // If id is > 0, send the signal to the respective thread.
+       if (id > 0)
+               return send_signal_to_thread(id, signal, flags);
+
+       // send a signal to the specified process group (the absolute value of the
+       // id)
+
+       // TODO: handle -1 correctly
+       if (id == 0 || id == -1) {
+               // send a signal to the current team
+               id = thread_get_current_thread()->team->id;
+       } else
+               id = -id;
+
+       return send_signal_to_process_group(id, signal, flags);
+}
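[Editor's note] send_signal_etc() above uses kill()-style id dispatch: a positive id names a thread, 0 and -1 mean the caller's own process group, and any other negative id names the process group -id (the POSIX id == -1 "all processes" case is still a TODO in this commit). A sketch with invented names:

```cpp
// Illustrative sketch of the id dispatch in send_signal_etc(); the names
// here are mine, not Haiku's.
enum TargetKind { TARGET_THREAD, TARGET_CURRENT_GROUP, TARGET_GROUP };

struct Target {
	TargetKind kind;
	int id;
};

static Target
classify_signal_target(int id, int currentTeamID)
{
	if (id > 0)
		return { TARGET_THREAD, id };
	if (id == 0 || id == -1)
		return { TARGET_CURRENT_GROUP, currentTeamID };
	return { TARGET_GROUP, -id };
}
```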
+
+
+int
 send_signal(pid_t threadID, uint signal)
 {
        // The BeBook states that this function wouldn't be exported

Modified: haiku/branches/developer/bonefish/signals/src/system/kernel/team.cpp
===================================================================
--- haiku/branches/developer/bonefish/signals/src/system/kernel/team.cpp	2011-05-08 10:13:39 UTC (rev 41379)
+++ haiku/branches/developer/bonefish/signals/src/system/kernel/team.cpp	2011-05-08 10:26:23 UTC (rev 41380)
@@ -13,6 +13,7 @@
 
 #include <team.h>
 
+#include <errno.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
@@ -294,10 +295,22 @@
 // #pragma mark -
 
 
+// the team_id -> Team hash table and the lock protecting it
 static TeamTable sTeamHash;
+static spinlock sTeamHashLock = B_SPINLOCK_INITIALIZER;
+
+// the pid_t -> ProcessGroup hash table and the lock protecting it
 static ProcessGroupHashTable sGroupHash;
+static spinlock sGroupHashLock = B_SPINLOCK_INITIALIZER;
+
 static Team* sKernelTeam = NULL;
 
+// A list of process groups of children of dying session leaders that need to
+// be signalled if they have become orphaned and contain stopped processes.
+static ProcessGroupList sOrphanedCheckProcessGroups;
+static mutex sOrphanedCheckLock
+       = MUTEX_INITIALIZER("orphaned process group check");
+
 // some arbitrarily chosen limits -- should probably depend on the available
 // memory (the limit is not yet enforced)
 static int32 sMaxTeams = 2048;
@@ -305,16 +318,14 @@
 
 static TeamNotificationService sNotificationService;
 
-spinlock gTeamSpinlock = B_SPINLOCK_INITIALIZER;
 
-
 // #pragma mark - TeamListIterator
 
 
 TeamListIterator::TeamListIterator()
 {
        // queue the entry
-       InterruptsSpinLocker locker(gTeamSpinlock);
+       InterruptsSpinLocker locker(sTeamHashLock);
        sTeamHash.InsertIteratorEntry(&fEntry);
 }
 
@@ -322,7 +333,7 @@
 TeamListIterator::~TeamListIterator()
 {
        // remove the entry
-       InterruptsSpinLocker locker(gTeamSpinlock);
+       InterruptsSpinLocker locker(sTeamHashLock);
        sTeamHash.RemoveIteratorEntry(&fEntry);
 }
 
@@ -331,7 +342,7 @@
 TeamListIterator::Next()
 {
        // get the next team -- if there is one, get reference for it
-       InterruptsSpinLocker locker(gTeamSpinlock);
+       InterruptsSpinLocker locker(sTeamHashLock);
        Team* team = sTeamHash.NextTeam(&fEntry);
        if (team != NULL)
                team->AcquireReference();
@@ -697,7 +708,7 @@
                return team;
        }
 
-       InterruptsSpinLocker locker(gTeamSpinlock);
+       InterruptsSpinLocker locker(sTeamHashLock);
        Team* team = sTeamHash.Lookup(id);
        if (team != NULL)
                team->AcquireReference();
@@ -705,7 +716,166 @@
 }
 
 
+/*!    \brief Returns the team with the given ID in a locked state.
+       Returns a reference to the team.
+       Team and thread spinlock must not be held.
+*/
+/*static*/ Team*
+Team::GetAndLock(team_id id)
+{
+       // get the team
+       Team* team = Get(id);
+       if (team == NULL)
+               return NULL;
+
+       // lock it
+       team->Lock();
+
+       // only return the team when it isn't already dying
+       if (team->state >= TEAM_STATE_SHUTDOWN) {
+               team->Unlock();
+               team->ReleaseReference();
+               return NULL;
+       }
+
+       return team;
+}
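[Editor's note] Team::GetAndLock() above shows the commit's standard lookup pattern: take a reference first so the object cannot be destroyed, then lock it, then re-check its state under the lock, since the team may have entered shutdown between the hash lookup and Lock(). A sketch with invented names:

```cpp
#include <cstddef>

// Toy model of the get-lock-recheck pattern in Team::GetAndLock().
struct TeamModel {
	int refCount = 1;
	bool locked = false;
	bool dying = false;

	void Lock() { locked = true; }
	void Unlock() { locked = false; }
	void AcquireReference() { refCount++; }
	void ReleaseReference() { refCount--; }
};

static TeamModel*
get_and_lock(TeamModel* team)
{
	team->AcquireReference();	// models Team::Get()
	team->Lock();

	// only return the team if it isn't already shutting down
	if (team->dying) {
		team->Unlock();
		team->ReleaseReference();
		return nullptr;
	}
	return team;
}
```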
+
+
+/*!    Locks the team and its parent team (if any).
+       The caller must hold a reference to the team or otherwise make sure that
+       it won't be deleted.
+       If the team doesn't have a parent, only the team itself is locked. If the
+       team's parent is the kernel team and \a dontLockParentIfKernel is \c true,
+       only the team itself is locked.
+
+       \param dontLockParentIfKernel If \c true, the team's parent team is
+               locked only if it is not the kernel team.
+*/
 void
+Team::LockTeamAndParent(bool dontLockParentIfKernel)
+{
+       // The locking order is parent -> child. Since the parent can change as
+       // long as we don't lock the team, we need to do a trial and error loop.
+       Lock();
+
+       while (true) {
+               // If the team doesn't have a parent, we're done. Otherwise try to lock
+               // the parent. This will succeed in most cases, simplifying things.
+               Team* parent = this->parent;
+               if (parent == NULL || (dontLockParentIfKernel && parent == sKernelTeam)
+                       || parent->TryLock()) {
+                       return;
+               }
+
+               // get a temporary reference to the parent, unlock this team, lock the
+               // parent, and re-lock this team
+               BReference<Team> parentReference(parent);
+
+               Unlock();
+               parent->Lock();
+               Lock();
+
+               // If the parent hasn't changed in the meantime, we're done.
+               if (this->parent == parent)
+                       return;
+
+               // The parent has changed -- unlock and retry.
+               parent->Unlock();
+       }
+}
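[Editor's note] The trial-and-error loop in Team::LockTeamAndParent() above is a general technique for acquiring two locks in a fixed order when the second object is only reachable through the first. A generic sketch with std::mutex (names are mine; in the kernel a BReference keeps the parent alive across the unlocked window, while this sketch simply assumes both nodes outlive the call):

```cpp
#include <mutex>

// The required order is parent -> child, but the parent is reached through
// the child: lock the child, try-lock the parent, and on failure back off,
// take the locks in the correct order, and revalidate that the parent
// pointer didn't change while nothing was locked.
struct Node {
	std::mutex lock;
	Node* parent = nullptr;
};

static void
lock_node_and_parent(Node* node)
{
	node->lock.lock();

	while (true) {
		Node* parent = node->parent;
		if (parent == nullptr || parent->lock.try_lock())
			return;

		// back off: release the child, take the parent first, retake the child
		node->lock.unlock();
		parent->lock.lock();
		node->lock.lock();

		if (node->parent == parent)
			return;

		// the parent changed in the meantime -- drop the stale lock and retry
		parent->lock.unlock();
	}
}

static void
unlock_node_and_parent(Node* node)
{
	if (node->parent != nullptr)
		node->parent->lock.unlock();
	node->lock.unlock();
}
```

The TryLock() fast path avoids the unlock/relock dance entirely in the common uncontended case.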
+
+
+/*!    Unlocks the team and its parent team (if any).
+*/
+void
+Team::UnlockTeamAndParent()
+{
+       if (parent != NULL)
+               parent->Unlock();
+
+       Unlock();
+}
+
+
+/*!    Locks the team, its parent team (if any), and the team's process group.
+       The caller must hold a reference to the team or otherwise make sure that
+       it won't be deleted.
+       If the team doesn't have a parent, only the team itself is locked.
+*/
+void
+Team::LockTeamParentAndProcessGroup()
+{
+       LockTeamAndProcessGroup();
+
+       // We hold the group's and the team's lock, but not the parent team's lock.
+       // If we have a parent, try to lock it.
+       if (this->parent == NULL || this->parent->TryLock())
+               return;
+
+       // No success -- unlock the team and let LockTeamAndParent() do the rest of
+       // the job.
+       Unlock();
+       LockTeamAndParent(false);
+}
+
+
+/*!    Unlocks the team, its parent team (if any), and the team's process group.
+*/
+void
+Team::UnlockTeamParentAndProcessGroup()
+{
+       group->Unlock();
+
+       if (parent != NULL)
+               parent->Unlock();
+
+       Unlock();
+}
+
+
+void
+Team::LockTeamAndProcessGroup()
+{
+       // The locking order is process group -> child. Since the process group can
+       // change as long as we don't lock the team, we need to do a trial and error
+       // loop.
+       Lock();
+
+       while (true) {
+               // Try to lock the group. This will succeed in most cases, simplifying
+               // things.
+               ProcessGroup* group = this->group;
+               if (group->TryLock())
+                       return;
+
+               // get a temporary reference to the group, unlock this team, lock the
+               // group, and re-lock this team
+               BReference<ProcessGroup> groupReference(group);
+
+               Unlock();
+               group->Lock();
+               Lock();
+
+               // If the group hasn't changed in the meantime, we're done.
+               if (this->group == group)
+                       return;
+
+               // The group has changed -- unlock and retry.
+               group->Unlock();
+       }
+}
+
+
+void
+Team::UnlockTeamAndProcessGroup()
+{
+       group->Unlock();
+       Unlock();
+}
+
+
+void
 Team::SetName(const char* name)
 {
        if (const char* lastSlash = strrchr(name, '/'))
@@ -741,8 +911,8 @@
        :
        id(id),
        teams(NULL),
-       orphaned(true),
-       fSession(NULL)
+       fSession(NULL),
+       fInOrphanedCheckList(false)
 {
        char lockName[32];
        snprintf(lockName, sizeof(lockName), "Group:%" B_PRId32, id);
@@ -754,8 +924,20 @@
 {
        TRACE(("ProcessGroup::~ProcessGroup(): id = %ld\n", group->id));
 
+       // If the group is in the orphaned check list, remove it.
+       MutexLocker orphanedCheckLocker(sOrphanedCheckLock);
+
+       if (fInOrphanedCheckList)
+               sOrphanedCheckProcessGroups.Remove(this);
+
+       orphanedCheckLocker.Unlock();
+
+       // remove group from the hash table and from the session
        if (fSession != NULL) {
+               InterruptsSpinLocker groupHashLocker(sGroupHashLock);
                sGroupHash.RemoveUnchecked(this);
+               groupHashLocker.Unlock();
+
                fSession->ReleaseReference();
        }
 
@@ -763,19 +945,63 @@
 }
 
 
+/*static*/ ProcessGroup*
+ProcessGroup::Get(pid_t id)
+{
+       InterruptsSpinLocker groupHashLocker(sGroupHashLock);
+       ProcessGroup* group = sGroupHash.Lookup(id);
+       if (group != NULL)
+               group->AcquireReference();
+       return group;
+}
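[Editor's note] ProcessGroup::Get() above acquires a reference under the hash spinlock before returning. Throughout this commit the caller then wraps the result in a BReference constructed with alreadyHasReference = true, so the Get()-acquired reference is adopted rather than acquired a second time, and released exactly once on scope exit. A minimal model (all names hypothetical):

```cpp
// Toy model of the Get()/BReference handshake used in this commit.
struct Object {
	int refCount = 1;	// creation reference
};

static void acquire(Object* object) { object->refCount++; }
static void release(Object* object) { object->refCount--; }

struct Reference {
	Object* object;

	// alreadyHasReference = true adopts the caller's reference instead of
	// acquiring another one
	Reference(Object* object, bool alreadyHasReference)
		:
		object(object)
	{
		if (!alreadyHasReference)
			acquire(object);
	}

	~Reference() { release(object); }
};

// models ProcessGroup::Get(): look up and acquire under the hash lock
static Object*
get_object(Object* object)
{
	acquire(object);
	return object;
}
```

Forgetting the adopting flag after a Get()-style lookup leaks one reference per call, which is why the flag matters at every call site.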
+
+
 /*!    Adds the group the given session and makes it publicly accessible.
-       The teams lock must be held.
+       The caller must not hold the process group hash lock.
 */
 void
 ProcessGroup::Publish(ProcessSession* session)
 {

[... truncated: 2769 lines follow ...]
