[haiku-commits] Re: haiku: hrev48664 - in src/add-ons/kernel: file_cache network/stack src

  • From: Stephan Aßmus <superstippi@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Tue, 13 Jan 2015 11:43:15 +0100



On 12.01.2015 23:32, Ingo Weinhold wrote:
On 01/12/2015 10:42 PM, Rene Gollent wrote:
On Jan 12, 2015 4:40 PM, "Axel Dörfler" <axeld@xxxxxxxxxxxxxxxx> wrote:
 > geist just made a good point on IRC: khash would not resize itself
automatically. I haven't had the time to read over Adrien's changes, but
in the kernel you cannot always simply allocate memory.

OpenHashTable will return a failure from Insert if it can't allocate, so
that just needs to be checked (haven't looked yet if that's the case in
the recent refactoring).
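
For illustration, the check Rene describes would look roughly like this (a
minimal sketch with hypothetical names; the exact Insert() signature is an
assumption, not taken from the actual commit):

	// Sketch only: assumes an OpenHashTable-style Insert() that returns
	// status_t and fails when it cannot allocate room for the new element.
	status_t
	ConnectionManager::AddConnection(Connection* connection)
	{
		status_t status = fConnections.Insert(connection);
		if (status != B_OK)
			return status;	// table could not grow; report instead of dropping
		return B_OK;
	}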

The problem isn't allocation failure. As long as Init() was called
successfully, allocation failures are handled internally and only affect
performance. The issue is the allocations themselves. There are
contexts where memory allocations aren't permitted or must be performed
with special functions/flags. E.g. when interrupts are disabled, or in code
that is used (directly or indirectly) by certain VM code paths (FS hooks
related to paging come to mind).
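
To illustrate the kind of restriction meant here: any growth of the table
would have to be prepared outside the critical section. A rough sketch with
hypothetical helper names (whether OpenHashTable actually offers something
like ResizeNeeded()/Resize() with caller-provided memory is an assumption):

	// Sketch only: prepare the hash table growth while allocating is still
	// allowed, then insert with interrupts disabled without ever touching
	// the allocator.
	size_t resizeSize = sTable.ResizeNeeded();
		// assumed helper: bytes needed for the next resize, 0 if none
	void* resizeBuffer = resizeSize > 0 ? malloc(resizeSize) : NULL;

	InterruptsSpinLocker locker(sTableLock);
		// no allocations allowed from here on
	if (resizeBuffer != NULL)
		sTable.Resize(resizeBuffer, resizeSize);
			// assumed to take over the caller-provided memory
	sTable.InsertUnchecked(item);
		// assumed variant that never allocates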

I take that to mean that Haiku will now be less stable than before, since code has been replaced with other code that is not equivalent. The new hash table code may need to allocate at unpredictable times, possibly in a context where that is not allowed, leading to panics. This will be very hard to track down. I am in favor of reverting all the related kernel changes.

Also, I have not yet heard a reason why this was done in the first place. It's not about "I should be able to touch code"; it should be about improving Haiku, not about making changes for their own sake under the misguided assumption that maintainability is improved. Where is the improvement here?

Best regards,
-Stephan
