[haiku-development] Re: Developing in kernel code - advice needed

  • From: André Braga <meianoite@xxxxxxxxx>
  • To: "haiku-development@xxxxxxxxxxxxx" <haiku-development@xxxxxxxxxxxxx>
  • Date: Sat, 5 Dec 2009 08:58:54 -0200

On 05/12/2009, at 08:29, Mikhail Panasyuk <otoko@xxxxxxxxx> wrote:
> While experimenting with the scheduler I create additional structures which require additional memory. Are there any limits on the amount of memory I can use?

Welcome to the kernel (debugging) land! ;)

You can't allocate memory inside the scheduler routines: they run with interrupts disabled. The panic message pretty much tells you that:

> The kernel often panics with an "unhandled page fault when interrupts disabled" message when a lot of memory is used, or even with less memory when it is used for C++ objects whose allocations are done in their constructors.

Preallocate all the memory you'll need beforehand (at scheduler initialisation time) and override operator new (and delete, I guess) to use that slice of memory. Yes, you'll have to do it completely by hand.

You'll have to override new at some point anyway: the memory allocation routine inside the kernel (even with interrupts enabled) isn't malloc.
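A minimal sketch of the idea, in plain C++: carve out a fixed buffer once, hand out pieces of it with a bump allocator, and give the scheduler's own structures a class-specific operator new that draws from it. All the names here (SchedulerPool, RunQueueNode, sPoolStorage) are illustrative, not Haiku kernel API.

```cpp
#include <cstddef>

// Hypothetical fixed pool, reserved once at scheduler initialisation time.
class SchedulerPool {
public:
    SchedulerPool(void* base, size_t size)
        : fBase(static_cast<char*>(base)), fSize(size), fUsed(0) {}

    void* Allocate(size_t size)
    {
        // Align to pointer size; bump allocator, allocations live forever.
        size = (size + sizeof(void*) - 1) & ~(sizeof(void*) - 1);
        if (fUsed + size > fSize)
            return NULL;    // pool exhausted -- caller must cope, no panic
        void* block = fBase + fUsed;
        fUsed += size;
        return block;
    }

private:
    char*  fBase;
    size_t fSize;
    size_t fUsed;
};

static char sPoolStorage[4096];    // preallocated slice
static SchedulerPool sPool(sPoolStorage, sizeof(sPoolStorage));

// A scheduler-private object that never touches the kernel heap.
struct RunQueueNode {
    RunQueueNode* next;
    int priority;

    void* operator new(size_t size) { return sPool.Allocate(size); }
    void operator delete(void*) { /* nothing to reclaim in a bump pool */ }
};
```

A real version would need a free list (or per-thread slots) if nodes come and go, but the point is that no allocation path runs while interrupts are off.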

> But if I combine all the structures into one big structure it panics on system boot. Or it sometimes panics, sometimes not.

It would be rather more helpful if you posted the code that doesn't work instead of the part that does ;)

> 2) I'd like to use some <math.h> functions in the scheduler, at least pow(). What changes should I make in the Jamfiles?

Forget about math.h: been there, done that, gave up. You'll have interrupts off, and while I believe that Haiku saves the FPU context on a switch (Axel?), I tried (and documented having tried) that approach of using an exponential to calculate skipping, and it was not only the wrong approach, but complete overkill.

If you want to improve the skipping situation (or should I say the starvation situation), you don't need a more complex algorithm implementing some really smart skipping criteria; you need better data structures that ensure fairness and thus avoid starvation.
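To illustrate the "data structure instead of clever criteria" point, here is one classic shape such a structure can take: an active and an expired queue, where every thread in the active queue runs once before the two are swapped, so no thread can starve by construction. This is a generic sketch (using std::deque for brevity; a kernel version would use intrusive lists), not André's actual design or Haiku's scheduler.

```cpp
#include <deque>

struct Thread { int id; };

// Two-queue run queue: a thread that has run moves to the expired queue
// and cannot run again until every other runnable thread has had a turn.
class FairRunQueue {
public:
    void Enqueue(Thread t) { fActive.push_back(t); }

    Thread Next()
    {
        if (fActive.empty())
            fActive.swap(fExpired);   // everyone has run: start a new round
        Thread t = fActive.front();
        fActive.pop_front();
        fExpired.push_back(t);        // parked until the next round
        return t;
    }

private:
    std::deque<Thread> fActive;
    std::deque<Thread> fExpired;
};
```

Fairness falls out of the structure itself: there is no skipping heuristic to tune, and the worst-case wait for any thread is one full round.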

I'm only saying this because I tried exactly the same approach before (check my [rather old at this point] blog posts): your model is probably wrong. Try another approach.

I really should resume posting articles (and, well, code) about my scheduler design. So much has changed since those blog posts.


Cheers,
A.
