[haiku-commits] Re: BRANCH pdziepak-github.scheduler [24dbeed] src/system/kernel/scheduler src/system/libroot/posix/malloc headers/private/kernel

  • From: Pawel Dziepak <pdziepak@xxxxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Wed, 9 Oct 2013 19:22:14 +0200

2013/10/9 Axel Dörfler <axeld@xxxxxxxxxxxxxxxx>:
> On 09/10/2013 04:15, pdziepak-github.scheduler wrote:
>
>> 3e91b08: libroot: Do not rely on thread_yield()
>
>
> Out of curiosity, why was this necessary?

thread_yield() is quite unpredictable and shouldn't be used to make
things work. In this particular case of the Hoard locks: if we follow
POSIX sched_yield() semantics, the call yields only to threads with
the same priority, and only on single processor systems. If the
thread holding the lock has a lower priority than the thread spinning
and yielding, the lock could never be released. If we instead use a
full yield here (also allowing lower priority threads to run), then
on a busy system it may take a very long time, much longer than is
actually needed. The previous implementation of thread_yield(), which
actually used snooze(), isn't perfect either; it merely guesses when
the lock will become free.
What I think would be the best option here is to use an adaptive
mutex (as I mentioned in the TODO comment): try spinning for a while
(only on MP systems) and, if the spinning fails to acquire the lock,
fall back to the standard mutex behavior and go to sleep. That isn't
any more heavyweight than yielding, since the kernel will reschedule
anyway, and with a mutex the thread continues at exactly the moment
the lock is freed.
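
To make that concrete, here is roughly the shape I have in mind. This
is only a sketch, not the actual patch: mutex_trylock() and the spin
limit kSpinLimit are assumptions on my side, alongside the existing
mutex_init()/mutex_lock() API.

    #include <OS.h>

    static const int32 kSpinLimit = 100;  // made-up tuning constant

    void
    adaptive_lock(mutex* lock)
    {
        system_info info;
        get_system_info(&info);

        if (info.cpu_count > 1) {
            // Spinning only makes sense on MP systems, where the
            // holder can run on another CPU and release the lock
            // while we spin.
            for (int32 i = 0; i < kSpinLimit; i++) {
                if (mutex_trylock(lock) == B_OK)
                    return;
            }
        }

        // Spinning failed (or UP system): block. The kernel wakes
        // this thread exactly when the lock is released, instead of
        // guessing with yield or snooze().
        mutex_lock(lock);
    }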

>> -       lock = UNLOCKED;
>> +       mutex_init(&lock, name);
>
>
> Is it possible that you broke the lock on fork?

I don't think I introduced any /new/ problem. True, forking while
another thread holds the lock would leave that lock locked and never
released in the child process, but that problem was also present with
the previous implementation, as noted in
src/system/libroot/posix/malloc/arch-specific.cpp:124-127:

    // TODO: We should actually also install a hook that is called before
    // fork() is being executed. In a multithreaded app it would need to
    // acquire *all* allocator locks, so that we don't fork() an
    // inconsistent state.
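
For reference, such a hook could be built on plain pthread_atfork().
The hoard_lock_all()/hoard_unlock_all() helpers below are
hypothetical, just to show the shape:

    #include <pthread.h>

    // Hypothetical helpers that acquire/release every allocator lock.
    extern void hoard_lock_all();
    extern void hoard_unlock_all();

    static void
    prepare()
    {
        // Runs in the parent before fork(): with all locks held the
        // heap is in a consistent state when the address space is
        // duplicated.
        hoard_lock_all();
    }

    static void
    release()
    {
        // Runs after fork() in both parent and child; in the child
        // only the forking thread survives, so releasing (or
        // reinitializing) the locks is safe.
        hoard_unlock_all();
    }

    void
    install_fork_hooks()
    {
        pthread_atfork(prepare, release, release);
    }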

Paweł
