[haiku-development] Re: The new locking primitives and boosting -- I need some guidance here :)

  • From: "André Braga" <meianoite@xxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Wed, 25 Jun 2008 18:13:42 -0300

On Wed, Jun 25, 2008 at 07:49, Axel Dörfler <axeld@xxxxxxxxxxxxxxxx> wrote:
> While that is true, if it has any undesirable effect it is most likely
> a design problem of your software :-)

And I'm trying to shield the system from a common lack of sense ;)

> In general, if more than one thread may have their share of CPU time
> now, the one with the higher priority should win and be scheduled
> first.

You know, this goes against fair scheduling models. Fair scheduling
has more to do with quality of service than urgency.

> Unless you can show that this isn't enough to provide a good experience
> (with an otherwise fair scheduler), I don't see why you should spend
> much time on it.

There are two approaches (that I know of) to fair scheduling:
a) fairness of using a given resource
b) fairness of access to a given resource

(a) is a pessimistic approach in that it assumes that every user of
said resource will use as much of it as possible.
(b) is an optimistic approach, since it assumes nothing about the user
and relies on being told the priority of each user so it can make
informed decisions.

In the context of schedulers,
(a) fits the UNIX model pretty well, where every process is created
equal and it's pretty rare for a process to lower its priority via
nice(3).
(b), on the other hand, fits the BeOS model, where priority levels are
freely definable by userland processes (and thankfully most of the
time they're used correctly) and actually express the roles of the
threads.

(a) ends up working fairly well for interactive processes, since they
tend to consume only a short fraction of their quanta and thus receive
a huge boost as the scheduler attempts to make (or force, if you will)
them consume a full quantum. In other words, it's a rehash of the
boosting heuristics based on CPU usage, and as such the scheduler has
to do bookkeeping and recalculate the CPU usage for a given thread at
every context switch; as implemented on Linux, this triggers a
removal/reinsertion of the thread in the timeline red-black tree,
which could potentially grow quite big.
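To illustrate, here's a minimal sketch of the approach-(a) bookkeeping.
The names and the virtual-runtime key are illustrative (modelled on
Linux's CFS), not actual kernel code:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch of the approach-(a) pattern: every runnable task is keyed by
// its accumulated (virtual) runtime in a balanced tree (std::multimap
// is a red-black tree underneath). After each slice the task must be
// removed and reinserted with an updated key -- the per-context-switch
// cost mentioned above.
struct Task { std::string name; };

class TimelineTree {
    std::multimap<uint64_t, Task> fTimeline;  // vruntime -> task
public:
    void Insert(uint64_t vruntime, Task task)
    {
        fTimeline.emplace(vruntime, task);
    }

    // Run the leftmost (least-serviced) task for 'ran' time units,
    // then remove and reinsert it under its new key.
    Task RunLeftmost(uint64_t ran)
    {
        auto it = fTimeline.begin();
        uint64_t newKey = it->first + ran;
        Task task = it->second;
        fTimeline.erase(it);
        fTimeline.emplace(newKey, task);
        return task;
    }

    size_t Size() const { return fTimeline.size(); }
};
```

A task that sleeps often accumulates little runtime, so it keeps
sorting leftmost and gets picked first -- that's where the implicit
interactivity boost comes from.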
(b), on the other hand, won't by itself lend any boost to interactive
threads. But it *is* desirable that those threads get boosted, so I
came up with a way to make this possible *and* free the scheduler from
any heuristics. And since it aims for fairness of access, every thread
that has the same priority level can be grouped together, and it
suffices to schedule *groups* of threads and just pick threads from
those groups round-robin. This leads to a much smaller balanced tree,
and makes it trivial to implement actual scheduling groups if, instead
of using a simple queue, each group is made into a tree itself; no
interfaces need to change as long as both trees and queues understand
what to do when they receive a getNext message.
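As a rough sketch of what I mean (hypothetical names; std::map stands
in for the balanced tree of groups):

```cpp
#include <deque>
#include <map>
#include <string>

// One queue per priority level, kept in a balanced tree (std::map is
// a red-black tree underneath). The scheduler walks *groups*, not
// individual threads; threads within a group take turns round-robin.
struct Thread { std::string name; };

class GroupedRunQueue {
    std::map<int, std::deque<Thread>> fGroups;  // priority -> queue
public:
    void Enqueue(int priority, Thread thread)
    {
        fGroups[priority].push_back(thread);
    }

    // Pick the next thread from the highest-priority non-empty group,
    // rotating it to the back of its queue (round-robin within the
    // group). Precondition: at least one thread is enqueued.
    Thread PickNext()
    {
        std::deque<Thread>& queue = fGroups.rbegin()->second;
        Thread thread = queue.front();
        queue.pop_front();
        queue.push_back(thread);
        return thread;
    }

    size_t GroupCount() const { return fGroups.size(); }
};
```

Swapping the std::deque for another GroupedRunQueue is what would turn
a group into a tree itself, giving nested scheduling groups behind the
same getNext-style interface.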


And it's not "spend much time": it's really circa 5 lines of code to
implement the branding and boosting, and then gradually converting the
semaphores to get the branding.
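For the record, something along these lines is what I have in mind for
the branding -- purely illustrative, none of these names exist in the
tree:

```cpp
#include <cstdint>

// Hypothetical sketch: a semaphore carries a flag marking it as
// interactive (say, one tied to input or a window message port).
// When such a semaphore wakes a thread, the thread gets a temporary
// priority boost; the scheduler itself stays heuristic-free.
const uint32_t SEM_BRAND_INTERACTIVE = 0x1;
const int kInteractiveBoost = 5;

struct WaitingThread {
    int basePriority;
    int effectivePriority;
};

struct Semaphore {
    uint32_t brand;
};

// Called when 'sem' is released and 'waiter' is about to be woken:
// reset to the base priority, then boost if the semaphore is branded.
void WakeWaiter(const Semaphore& sem, WaitingThread& waiter)
{
    waiter.effectivePriority = waiter.basePriority;
    if (sem.brand & SEM_BRAND_INTERACTIVE)
        waiter.effectivePriority += kInteractiveBoost;
}
```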

> IOW just get the scheduler integrated first, before
> thinking about how to eventually make it better -- these things are
> much better done with a working example at hand.

OK, agreed. But I'm still sure that we'll need to get there
eventually, since a lean and fast fair scheduler sort of needs it.
Linux only gets fast context switches with CFS on current hardware;
try doing with kernel 2.6.24+ what BeOS managed to do on Pentium 1
level hardware...


Cheers,
A.
