Since many people are already using master, I guess it's worth sending this
mail also on the main list. If you don't use master, this will end up in
1.3, so better read anyway :)
We just pushed a big change to EvaluationService that keeps the benefit of
parallel evaluation of testcases (reduced evaluation latency) while
eliminating the downside (much lower throughput due to ES becoming a CPU
bottleneck).
The visible effect is that workers will receive from 1 to 25 operations (=
testcases) at a time, depending on how long the queue is (if you're
curious, len(queue) / len(workers), capped between 1 and 25).
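As a rough sketch, the batching rule above could look like this (the names
and structure here are illustrative, not the actual ES code):

```python
def batch_size(queue_length, num_workers, cap=25):
    """Hypothetical sketch: number of operations handed to a worker
    at once, per the rule len(queue) / len(workers) capped to [1, cap]."""
    if num_workers == 0:
        return 1
    # Integer division, then clamp between 1 and the cap.
    return max(1, min(cap, queue_length // num_workers))
```

For example, 100 queued operations and 8 workers would give batches of 12,
while a very long queue saturates at the cap of 25.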
Important things to notice:
- As always, the best way to ensure a smooth contest is to keep
evaluation times small. Make sure that 25 * avg_eval_time_of_a_testcase is
not too high, and that 25 * max_eval_time is well below 10 minutes. (Sadly
the constant 25 is not adaptive, so if you need to change it you need to
modify the code in ES.)
- The probability of ending up in a deadlock will be temporarily higher,
as this code has not been tested in many contests. I don't think the risk
is high, but if it happens, restarting ES should fix it.
- Not news, just generic advice: if you are tweaking a dataset that is
being actively judged (adding/removing testcases, ...), you should stop ES
first.
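To make the timing rule in the first point above concrete, here is a quick
back-of-the-envelope check (the eval times are made-up example values):

```python
CAP = 25              # operations per batch (hardcoded in ES, not adaptive)
TIMEOUT = 10 * 60     # ten minutes, in seconds

# Hypothetical per-testcase evaluation times for your contest, in seconds.
avg_eval_time = 2.0
max_eval_time = 15.0

# A full batch of worst-case testcases must finish well within the timeout.
worst_batch = CAP * max_eval_time   # 375 s, well below 600 s
typical_batch = CAP * avg_eval_time # 50 s
```

If worst_batch gets anywhere near TIMEOUT, you should reduce the constant
in ES or split your slowest testcases.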
Please let us know if you see any problems!
-- Stefano