[haiku-commits] haiku: hrev46824 - src/system/kernel/scheduler

  • From: pdziepak@xxxxxxxxxxx
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Thu, 6 Feb 2014 04:00:16 +0100 (CET)

hrev46824 adds 5 changesets to branch 'master'
old head: 48fcadd44cec578e9dcd4af44e6596714e3086c7
new head: a96e17ba9d3cf1b7e576fb62a7f06ffbe80cfc97
overview: http://cgit.haiku-os.org/haiku/log/?qt=range&q=a96e17b+%5E48fcadd

----------------------------------------------------------------------------

771ae06: scheduler: Update priority penalty computation in debug dump

230d1fc: scheduler: Update load of idle cores
  
  In order to keep the scheduler tickless, core load is computed and updated
  only during various scheduler events (e.g. thread enqueue, reschedule).
  The problem this creates is that if a core becomes idle its load may remain
  outdated for an extended period of time, resulting in suboptimal thread
  migration decisions.
  
  The solution to this problem is to add a timer each time an idle thread is
  scheduled which, after kLoadMeasureInterval, fires and forces a load
  update.

667b23d: scheduler: Always update core heaps after thread migration
  
  The main purpose of this patch is to eliminate the delay between thread
  migration and the result of that migration becoming visible in load
  statistics. Such delay, in certain circumstances, may cause some cores to
  become overloaded because the scheduler migrates too many threads to them
  before the effect of the migration becomes apparent.

1da76fd: scheduler: Inherit estimated load, do not inherit assigned core
  
  The initial core assignment has to be done without any knowledge of the
  thread's behaviour. Moreover, short-lived tasks may spend most of their
  time executing on that initial core. This patch attempts to improve the
  quality of that initial decision.

a96e17b: kernel: Adjust load tracking interval

                                    [ Pawel Dziepak <pdziepak@xxxxxxxxxxx> ]

----------------------------------------------------------------------------

5 files changed, 85 insertions(+), 57 deletions(-)
headers/private/kernel/load_tracking.h           |  2 +-
src/system/kernel/scheduler/scheduler.cpp        | 34 ++--------
src/system/kernel/scheduler/scheduler_cpu.cpp    | 68 ++++++++++++++++----
src/system/kernel/scheduler/scheduler_cpu.h      | 23 +++++--
src/system/kernel/scheduler/scheduler_thread.cpp | 15 ++---

############################################################################

Commit:      771ae065dbd8326c5c464c0dc94c742b07e84a18
URL:         http://cgit.haiku-os.org/haiku/commit/?id=771ae06
Author:      Pawel Dziepak <pdziepak@xxxxxxxxxxx>
Date:        Mon Feb  3 18:50:34 2014 UTC

scheduler: Update priority penalty computation in debug dump

----------------------------------------------------------------------------

diff --git a/src/system/kernel/scheduler/scheduler_thread.cpp b/src/system/kernel/scheduler/scheduler_thread.cpp
index 7f57786..c5eed05 100644
--- a/src/system/kernel/scheduler/scheduler_thread.cpp
+++ b/src/system/kernel/scheduler/scheduler_thread.cpp
@@ -126,12 +126,10 @@ ThreadData::Dump() const
 {
        kprintf("\tpriority_penalty:\t%" B_PRId32 "\n", fPriorityPenalty);
 
-       int32 additionalPenalty = 0;
-       const int kMinimalPriority = _GetMinimalPriority();
-       if (kMinimalPriority > 0)
-               additionalPenalty = fAdditionalPenalty % kMinimalPriority;
+       int32 priority = GetPriority() - _GetPenalty();
+       priority = std::max(priority, int32(1));
        kprintf("\tadditional_penalty:\t%" B_PRId32 " (%" B_PRId32 ")\n",
-               additionalPenalty, fAdditionalPenalty);
+               fAdditionalPenalty % priority, fAdditionalPenalty);
        kprintf("\teffective_priority:\t%" B_PRId32 "\n", 
GetEffectivePriority());
 
        kprintf("\ttime_used:\t\t%" B_PRId64 " us (quantum: %" B_PRId64 " 
us)\n",

############################################################################

Commit:      230d1fcfeaedb4d034e3f03e1697957ca633e6ea
URL:         http://cgit.haiku-os.org/haiku/commit/?id=230d1fc
Author:      Pawel Dziepak <pdziepak@xxxxxxxxxxx>
Date:        Mon Feb  3 22:14:37 2014 UTC

scheduler: Update load of idle cores

In order to keep the scheduler tickless, core load is computed and updated
only during various scheduler events (e.g. thread enqueue, reschedule).
The problem this creates is that if a core becomes idle its load may remain
outdated for an extended period of time, resulting in suboptimal thread
migration decisions.

The solution to this problem is to add a timer each time an idle thread is
scheduled which, after kLoadMeasureInterval, fires and forces a load
update.

----------------------------------------------------------------------------

diff --git a/src/system/kernel/scheduler/scheduler.cpp b/src/system/kernel/scheduler/scheduler.cpp
index d07ddf8..67f99e2 100644
--- a/src/system/kernel/scheduler/scheduler.cpp
+++ b/src/system/kernel/scheduler/scheduler.cpp
@@ -218,28 +218,12 @@ scheduler_set_thread_priority(Thread *thread, int32 priority)
 }
 
 
-static inline void
-reschedule_needed()
-{
-       // This function is called as a result of either the timer event set by the
-       // scheduler or an incoming ICI. Make sure the reschedule() is invoked.
-       get_cpu_struct()->invoke_scheduler = true;
-}
-
-
 void
 scheduler_reschedule_ici()
 {
-       reschedule_needed();
-}
-
-
-static int32
-reschedule_event(timer* /* unused */)
-{
-       reschedule_needed();
-       get_cpu_struct()->preempted = true;
-       return B_HANDLED_INTERRUPT;
+       // This function is called as a result of an incoming ICI.
+       // Make sure the reschedule() is invoked.
+       get_cpu_struct()->invoke_scheduler = true;
 }
 
 
@@ -444,18 +428,12 @@ reschedule(int32 nextState)
        cpu->TrackActivity(oldThreadData, nextThreadData);
 
        if (nextThread != oldThread || oldThread->cpu->preempted) {
-               timer* quantumTimer = &oldThread->cpu->quantum_timer;
-               if (!oldThread->cpu->preempted)
-                       cancel_timer(quantumTimer);
+               cpu->StartQuantumTimer(nextThreadData, oldThread->cpu->preempted);
 
                oldThread->cpu->preempted = false;
-               if (!nextThreadData->IsIdle()) {
-                       bigtime_t quantum = nextThreadData->GetQuantumLeft();
-                       add_timer(quantumTimer, &reschedule_event, quantum,
-                               B_ONE_SHOT_RELATIVE_TIMER);
-
+               if (!nextThreadData->IsIdle())
                        nextThreadData->Continues();
-               } else
+               else
                        gCurrentMode->rebalance_irqs(true);
                nextThreadData->StartQuantum();
 
diff --git a/src/system/kernel/scheduler/scheduler_cpu.cpp b/src/system/kernel/scheduler/scheduler_cpu.cpp
index cdce2ae..e072268 100644
--- a/src/system/kernel/scheduler/scheduler_cpu.cpp
+++ b/src/system/kernel/scheduler/scheduler_cpu.cpp
@@ -81,7 +81,8 @@ CPUEntry::CPUEntry()
        :
        fLoad(0),
        fMeasureActiveTime(0),
-       fMeasureTime(0)
+       fMeasureTime(0),
+       fUpdateLoadEvent(false)
 {
        B_INITIALIZE_RW_SPINLOCK(&fSchedulerModeLock);
        B_INITIALIZE_SPINLOCK(&fQueueLock);
@@ -294,6 +295,27 @@ CPUEntry::TrackActivity(ThreadData* oldThreadData, ThreadData* nextThreadData)
 
 
 void
+CPUEntry::StartQuantumTimer(ThreadData* thread, bool wasPreempted)
+{
+       cpu_ent* cpu = &gCPU[ID()];
+
+       if (!wasPreempted || fUpdateLoadEvent)
+               cancel_timer(&cpu->quantum_timer);
+       fUpdateLoadEvent = false;
+
+       if (!thread->IsIdle()) {
+               bigtime_t quantum = thread->GetQuantumLeft();
+               add_timer(&cpu->quantum_timer, &CPUEntry::_RescheduleEvent, quantum,
+                       B_ONE_SHOT_RELATIVE_TIMER);
+       } else if (gTrackCoreLoad) {
+               add_timer(&cpu->quantum_timer, &CPUEntry::_UpdateLoadEvent,
+                       kLoadMeasureInterval, B_ONE_SHOT_RELATIVE_TIMER);
+               fUpdateLoadEvent = true;
+       }
+}
+
+
+void
 CPUEntry::_RequestPerformanceLevel(ThreadData* threadData)
 {
        SCHEDULER_ENTER_FUNCTION();
@@ -323,6 +345,24 @@ CPUEntry::_RequestPerformanceLevel(ThreadData* threadData)
 }
 
 
+/* static */ int32
+CPUEntry::_RescheduleEvent(timer* /* unused */)
+{
+       get_cpu_struct()->invoke_scheduler = true;
+       get_cpu_struct()->preempted = true;
+       return B_HANDLED_INTERRUPT;
+}
+
+
+/* static */ int32
+CPUEntry::_UpdateLoadEvent(timer* /* unused */)
+{
+       CoreEntry::GetCore(smp_get_current_cpu())->ChangeLoad(0);
+       CPUEntry::GetCPU(smp_get_current_cpu())->fUpdateLoadEvent = false;
+       return B_HANDLED_INTERRUPT;
+}
+
+
 CPUPriorityHeap::CPUPriorityHeap(int32 cpuCount)
        :
        Heap<CPUEntry, int32>(cpuCount)
@@ -497,10 +537,8 @@ CoreEntry::_UpdateLoad()
        bigtime_t now = system_time();
        if (now < kLoadMeasureInterval + fLastLoadUpdate)
                return;
-       if (!try_acquire_write_spinlock(&gCoreHeapsLock))
-               return;
-       WriteSpinLocker coreLocker(gCoreHeapsLock, true);
        WriteSpinLocker locker(fLoadLock);
+       WriteSpinLocker coreLocker(gCoreHeapsLock);
 
        int32 newKey = GetLoad();
        int32 oldKey = CoreLoadHeap::GetKey(this);
diff --git a/src/system/kernel/scheduler/scheduler_cpu.h b/src/system/kernel/scheduler/scheduler_cpu.h
index 3b2f0c1..65d7755 100644
--- a/src/system/kernel/scheduler/scheduler_cpu.h
+++ b/src/system/kernel/scheduler/scheduler_cpu.h
@@ -82,12 +82,18 @@ public:
                                                void                    TrackActivity(ThreadData* oldThreadData,
                                                                                        ThreadData* nextThreadData);
 
+                                               void                    StartQuantumTimer(ThreadData* thread,
+                                                                                       bool wasPreempted);
+
        static inline           CPUEntry*               GetCPU(int32 cpu);
 
 private:
                                                void                    _RequestPerformanceLevel(
                                                                                        ThreadData* threadData);
 
+       static                          int32                   _RescheduleEvent(timer* /* unused */);
+       static                          int32                   _UpdateLoadEvent(timer* /* unused */);
+
                                                int32                   fCPUNumber;
                                                CoreEntry*              fCore;
 
@@ -101,6 +107,8 @@ private:
                                                bigtime_t               fMeasureActiveTime;
                                                bigtime_t               fMeasureTime;
 
+                                               bool                    fUpdateLoadEvent;
+
                                                friend class DebugDumper;
 } CACHE_LINE_ALIGN;
 
@@ -440,10 +448,11 @@ CoreEntry::ChangeLoad(int32 delta)
        ASSERT(gTrackCoreLoad);
        ASSERT(delta >= -kMaxLoad && delta <= kMaxLoad);
 
-       ReadSpinLocker locker(fLoadLock);
-       atomic_add(&fCurrentLoad, delta);
-       atomic_add(&fLoad, delta);
-       locker.Unlock();
+       if (delta != 0) {
+               ReadSpinLocker locker(fLoadLock);
+               atomic_add(&fCurrentLoad, delta);
+               atomic_add(&fLoad, delta);
+       }
 
        _UpdateLoad();
 }
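
To see why the forced update matters, here is a toy user-space model (not
Haiku's actual load formula; all names and numbers are illustrative) of
event-driven load accounting going stale on an idle core:

// Toy illustration: load accounting that is refreshed only by scheduler
// events keeps returning the pre-idle value once the core goes idle.
#include <cstdio>

struct ToyCore {
	long long activeTime = 0;		// µs spent non-idle this interval
	long long intervalStart = 0;
	int load = 0;					// 0..kMaxLoad

	static const int kMaxLoad = 1000;
	static const long long kInterval = 1000;	// µs, as of hrev46824

	// Called from scheduler events only; nothing calls it while idle.
	void UpdateLoad(long long now)
	{
		if (now - intervalStart < kInterval)
			return;
		load = int(activeTime * kMaxLoad / (now - intervalStart));
		activeTime = 0;
		intervalStart = now;
	}
};

int
main()
{
	ToyCore core;
	core.activeTime = 900;
	core.UpdateLoad(1000);			// interval ends: load == 900
	printf("busy: %d\n", core.load);

	// The core now idles. Without the one-shot timer added by this commit
	// no event arrives, so the stale value persists indefinitely:
	printf("idle, no event: %d\n", core.load);

	core.UpdateLoad(100000);		// what the timer-forced update achieves
	printf("after forced update: %d\n", core.load);
	return 0;
}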

############################################################################

Commit:      667b23ddc2944789ab4c62402bb361529997f4f4
URL:         http://cgit.haiku-os.org/haiku/commit/?id=667b23d
Author:      Pawel Dziepak <pdziepak@xxxxxxxxxxx>
Date:        Wed Feb  5 02:34:27 2014 UTC

scheduler: Always update core heaps after thread migration

The main purpose of this patch is to eliminate the delay between thread
migration and the result of that migration becoming visible in load
statistics. Such delay, in certain circumstances, may cause some cores to
become overloaded because the scheduler migrates too many threads to them
before the effect of the migration becomes apparent.

----------------------------------------------------------------------------

diff --git a/src/system/kernel/scheduler/scheduler_cpu.cpp b/src/system/kernel/scheduler/scheduler_cpu.cpp
index e072268..0f88fce 100644
--- a/src/system/kernel/scheduler/scheduler_cpu.cpp
+++ b/src/system/kernel/scheduler/scheduler_cpu.cpp
@@ -527,7 +527,7 @@ CoreEntry::RemoveCPU(CPUEntry* cpu, ThreadProcessing& threadPostProcessing)
 
 
 void
-CoreEntry::_UpdateLoad()
+CoreEntry::_UpdateLoad(bool forceUpdate)
 {
        SCHEDULER_ENTER_FUNCTION();
 
@@ -535,9 +535,11 @@ CoreEntry::_UpdateLoad()
                return;
 
        bigtime_t now = system_time();
-       if (now < kLoadMeasureInterval + fLastLoadUpdate)
+       bool intervalEnded = now >= kLoadMeasureInterval + fLastLoadUpdate;
+
+       if (!intervalEnded && !forceUpdate)
                return;
-       WriteSpinLocker locker(fLoadLock);
+
        WriteSpinLocker coreLocker(gCoreHeapsLock);
 
        int32 newKey = GetLoad();
@@ -546,12 +548,16 @@ CoreEntry::_UpdateLoad()
        ASSERT(oldKey >= 0);
        ASSERT(newKey >= 0);
 
-       ASSERT(fCurrentLoad >= 0);
-       ASSERT(fLoad >= fCurrentLoad);
+       if (intervalEnded) {
+               WriteSpinLocker locker(fLoadLock);
 
-       fLoad = fCurrentLoad;
-       fLoadMeasurementEpoch++;
-       fLastLoadUpdate = now;
+               ASSERT(fCurrentLoad >= 0);
+               ASSERT(fLoad >= fCurrentLoad);
+
+               fLoad = fCurrentLoad;
+               fLoadMeasurementEpoch++;
+               fLastLoadUpdate = now;
+       }
 
        if (oldKey == newKey)
                return;
diff --git a/src/system/kernel/scheduler/scheduler_cpu.h b/src/system/kernel/scheduler/scheduler_cpu.h
index 65d7755..11a681b 100644
--- a/src/system/kernel/scheduler/scheduler_cpu.h
+++ b/src/system/kernel/scheduler/scheduler_cpu.h
@@ -173,7 +173,7 @@ public:
        static inline           CoreEntry*              GetCore(int32 cpu);
 
 private:
-                                               void                    _UpdateLoad();
+                                               void                    _UpdateLoad(bool forceUpdate = false);
 
        static                          void                    _UnassignThread(Thread* thread,
                                                                                        void* core);
@@ -416,7 +416,7 @@ CoreEntry::AddLoad(int32 load, uint32 epoch, bool updateLoad)
        locker.Unlock();
 
        if (updateLoad)
-               _UpdateLoad();
+               _UpdateLoad(true);
 }
 
 
@@ -434,7 +434,7 @@ CoreEntry::RemoveLoad(int32 load, bool force)
                atomic_add(&fLoad, -load);
                locker.Unlock();
 
-               _UpdateLoad();
+               _UpdateLoad(true);
        }
        return fLoadMeasurementEpoch;
 }
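
A condensed user-space model of the reworked flow (names are illustrative;
locking and the real heap are omitted): a forced update now repositions the
core in the load heap immediately, while the per-interval bookkeeping is
unchanged:

#include <cstdio>

static long long sNow = 0;					// fake clock, µs
const long long kLoadMeasureInterval = 1000;

struct ToyCoreLoad {
	int currentLoad = 0;		// running load estimate
	int load = 0;				// last per-interval snapshot
	int heapKey = 0;			// key in the core load heap
	int epoch = 0;
	long long lastUpdate = 0;

	void Update(bool forceUpdate)
	{
		bool intervalEnded = sNow >= lastUpdate + kLoadMeasureInterval;
		if (!intervalEnded && !forceUpdate)
			return;

		int newKey = currentLoad;	// stands in for GetLoad()

		if (intervalEnded) {		// bookkeeping, as before the patch
			load = currentLoad;
			epoch++;
			lastUpdate = sNow;
		}

		if (newKey != heapKey)
			heapKey = newKey;		// heap reposition in the real code
	}
};

int
main()
{
	ToyCoreLoad core;
	sNow = 100;					// mid-interval...
	core.currentLoad = 400;		// ...a thread was just migrated here
	core.Update(true);			// forced: the key moves right away
	printf("key=%d epoch=%d\n", core.heapKey, core.epoch);	// key=400 epoch=0
	return 0;
}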

############################################################################

Commit:      1da76fd4cc60e12877a69ec685b00ff49bd7b34e
URL:         http://cgit.haiku-os.org/haiku/commit/?id=1da76fd
Author:      Pawel Dziepak <pdziepak@xxxxxxxxxxx>
Date:        Thu Feb  6 01:50:07 2014 UTC

scheduler: Inherit estimated load, do not inherit assigned core

The initial core assignment has to be done without any knowledge of the
thread's behaviour. Moreover, short-lived tasks may spend most of their
time executing on that initial core. This patch attempts to improve the
quality of that initial decision.

----------------------------------------------------------------------------

diff --git a/src/system/kernel/scheduler/scheduler_thread.cpp b/src/system/kernel/scheduler/scheduler_thread.cpp
index c5eed05..25a332b 100644
--- a/src/system/kernel/scheduler/scheduler_thread.cpp
+++ b/src/system/kernel/scheduler/scheduler_thread.cpp
@@ -30,8 +30,6 @@ ThreadData::_InitBase()
        fLastMeasureAvailableTime = 0;
        fMeasureAvailableTime = 0;
 
-       fNeededLoad = 0;
-
        fWentSleep = 0;
        fWentSleepActive = 0;
 
@@ -95,11 +93,11 @@ void
 ThreadData::Init()
 {
        _InitBase();
+       fCore = NULL;
 
        Thread* currentThread = thread_get_current_thread();
        ThreadData* currentThreadData = currentThread->scheduler_data;
-       fCore = currentThreadData->fCore;
-       fLoadMeasurementEpoch = fCore->LoadMeasurementEpoch() - 1;
+       fNeededLoad = currentThreadData->fNeededLoad;
 
        if (!IsRealTime()) {
                fPriorityPenalty = std::min(currentThreadData->fPriorityPenalty,
@@ -118,6 +116,7 @@ ThreadData::Init(CoreEntry* core)
 
        fCore = core;
        fReady = true;
+       fNeededLoad = 0;
 }
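
A simplified model of the two initialization paths after this patch (field
names follow the diff; the current-thread lookup and everything else are
omitted):

struct CoreEntry;

struct ToyThreadData {
	CoreEntry*	fCore = nullptr;
	int			fNeededLoad = 0;

	// New thread: inherit the creating thread's load estimate as the best
	// available guess, but let the scheduler choose a core from scratch.
	void Init(const ToyThreadData& creator)
	{
		fCore = nullptr;
		fNeededLoad = creator.fNeededLoad;
	}

	// Thread bound to a given core: the core is known, but no load
	// history exists yet.
	void Init(CoreEntry* core)
	{
		fCore = core;
		fNeededLoad = 0;
	}
};

int
main()
{
	ToyThreadData creator;
	creator.fNeededLoad = 300;

	ToyThreadData thread;
	thread.Init(creator);	// fCore == nullptr, fNeededLoad == 300
	return 0;
}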
 
 

############################################################################

Revision:    hrev46824
Commit:      a96e17ba9d3cf1b7e576fb62a7f06ffbe80cfc97
URL:         http://cgit.haiku-os.org/haiku/commit/?id=a96e17b
Author:      Pawel Dziepak <pdziepak@xxxxxxxxxxx>
Date:        Thu Feb  6 02:21:13 2014 UTC

kernel: Adjust load tracking interval

----------------------------------------------------------------------------

diff --git a/headers/private/kernel/load_tracking.h b/headers/private/kernel/load_tracking.h
index b4ff87a..1c68a46 100644
--- a/headers/private/kernel/load_tracking.h
+++ b/headers/private/kernel/load_tracking.h
@@ -10,7 +10,7 @@
 
 
 const int32 kMaxLoad = 1000;
-const bigtime_t kLoadMeasureInterval = 50000;
+const bigtime_t kLoadMeasureInterval = 1000;
 const bigtime_t kIntervalInaccuracy = kLoadMeasureInterval / 4;
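
Note: with this change the load measurement window shrinks from 50 ms to
1 ms, and the derived kIntervalInaccuracy (kLoadMeasureInterval / 4) drops
from 12500 µs to 250 µs.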
 
 

