[haiku-development] Re: On timeslices and cycles

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Mon, 16 Mar 2009 15:18:20 +0100

On 2009-03-16 at 12:02:49 [+0100], Christian Packmann 
<Christian.Packmann@xxxxxx> wrote:
> Ingo Weinhold - 2009-03-14 01:59 :
[...]
> > Anyway, introducing hard affinity -- which is guaranteed to be abused by
> > application developers who think they know what they are doing --
> 
> Yeah, horrible. All the kind of evil things one can do with hard affinity.
> When one considers the havoc and terror which such programs wreak on
> systems with hard affinity like Windows, Linux, AIX... eh, what problems
> exactly, come to think of it? I've never heard of a single case where hard
> affinity caused problems for a user. Care to provide a link?
> 
> Besides, hard affinity cannot introduce any problems I can't create right
> now by creating a high number of high-priority threads. The scheduler
> needs to be able to handle such cases to maintain a usable system on
> single-core/CPU systems anyway. The fact that possibly misbehaving threads
> are fixed to some CPUs doesn't change anything. That's certainly not a
> technical reason against hard affinity.
> 
> Besides, bringing such a point up in context of an OS where any program
> can switch off the CPUs at will seems a bit... absurd.

My point isn't that hard affinity can harm the system -- it can't -- but that 
it will be used in cases where it doesn't make sense, thus slowing down the 
application using it. My general assumption is that in most cases the OS has a 
better idea of the hardware it is running on than an application (or rather 
its developer). Hence the OS should make the scheduling decisions.

I have no doubt that there are situations where hard affinity can be put to 
good use, but it's a pretty crude API. I'd rather see an API that allows the 
application to give hints about the resource usage profile of its threads and 
lets the OS do the actual scheduling.

[...]
> > Besides, from an environmental
> > point of view, I'm not even sure I still like those projects.
> 
> Yeah, I'm beginning to see where you come from. Wasting kWh upon kWh for
> rebuilding Haiku over and over and over again is Good(TM), but
> participating in DC efforts for finding cures to AIDS and cancer is bad.
> Because Haiku is more important than human lives?

I think you're getting carried away a bit. This was merely a side remark and 
I was mostly thinking of cryptographic projects like cracking RC5 keys.

[...]
> Oh yeah, and of course running DC projects is much more
> environment-friendly than running these calculations in HPC installations.
> The latter require extensive cooling due to the high density of computing
> equipment, where the cooling often uses up as much energy as the computing
> hardware. Which doubles the energy consumption compared to running the
> same code on distributed systems, where no additional cooling is required.

Interesting. Last time I looked, all of my computers sported active cooling. 
Anyway, this is quite off topic.

CU, Ingo
