[contestms] Re: CMS in the cloud

  • From: Stefano Maggiolo <s.maggiolo@xxxxxxxxx>
  • To: contestms@xxxxxxxxxxxxx
  • Date: Thu, 03 Apr 2014 14:59:34 +0000

Damien, I agree that the minimum initially makes more sense, but my intuition
is that the variance of the median of n samples should be lower than the
variance of the minimum (I wrote a small script to confirm it, but it is
probably provable); hence taking the median could be fairer, as the same
program should give closer results on different runs.

On Thu Apr 03 2014 at 3:17:45 PM, Damien Leroy <damien.leroy@xxxxxxxx>
wrote:

> As the goal is to use the same processing unit anyway, abnormal timings
> should only be higher values than expected. I do not see how a processor
> might be faster in a specific case or when there is mismanagement of a VM.
> In that respect, I would keep the minimum time.
>
> Artem:
> Actually, for our final, I managed to set the maximum time limit much
> higher than what was expected for a given algorithm (and complexity). That
> way, a 10% difference in timing should not have given a different score
> to my contestants. But I agree, this is not completely fair.
>
> Damien
>
> On 02 Apr 2014, at 20:12, Stefano Maggiolo <s.maggiolo@xxxxxxxxx> wrote:
>
> As a side note, I thought of implementing multiple evaluations for
> IOI2012, but luckily our workers were of the touchable type and timing
> variance was not an issue. I wouldn't object to adding such a feature to
> CMS, but it would require a fair amount of work to implement properly.
>
> Also the rationale for how many times to re-evaluate and which time to
> consider needs some thought, e.g.:
> - evaluate at least twice, continuing until the variance is low enough, or
> a fixed number of times?
> - take the minimum time, or the median?
>
> -- Ste
>
> On Tue Apr 01 2014 at 7:06:06 PM, Artem Iglikov <artem.iglikov@xxxxxxxxx>
> wrote:
>
>> I did some more testing and so far the results are not very good. If
>> anybody is interested, I've attached a file with the execution times of
>> the same program on the same data, run several times on 10 workers (it's
>> the isolate output), and a file showing diffs of /proc/cpuinfo on those
>> workers. Though most of the workers are fairly stable (but still differ
>> from one another in execution time), some workers are not stable enough
>> (the fluctuation is almost 10%). And I don't see any obvious correlation
>> between cpuinfo and execution time. Worker7 is the most interesting
>> case.
>>
>>
>> On Sat, Mar 29, 2014 at 8:37 PM, Artem Iglikov
>> <artem.iglikov@xxxxxxxxx> wrote:
>>
>>> > You can check (cat /proc/cpuinfo) if you observe such a difference
>>> again.
>>>
>>> Thanks. Why didn't I think about this at that very moment :-(
>>>
>>> > A nice feature in CMS would be to allow requiring the tests to be run
>>> at least twice, on two different workers :-)
>>>
>>> I'm thinking about this too. For example, the second run could be done
>>> whenever some worker is free.
>>>
>>> > BTW, why do you run your worker on a fast instance type (C3)? For our
>>> olympiad, I chose a small one, so that it is slow and thus easier to
>>> distinguish between algorithms (e.g., O(n) vs O(ln))
>>>
>>> I chose C3 because Amazon explicitly says that the vCPUs on these
>>> instances are hardware hyper-threads. I think this limits the number of
>>> vCPUs per CPU, and in my opinion performance should be more stable. But
>>> I may be wrong.
>>>
>>
>>
>>
>> --
>> Artem Iglikov
>>
>
>
