[argyllcms] Re: More Argyll experiments...

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 14 Jul 2008 12:08:16 +1000

Gerhard Fuernkranz wrote:
[My experience is that there is a correlation, i.e. if a test set with
perceptually distributed data points is used for verification, then in
the majority of cases the winner is the training set with a perceptual
distribution too, while a test set with an even or random distribution
in device space tends to favor a training set with an even
distribution in device space as the winner]

My experience agrees with Gerhard's (thanks for the nice summary!).
It is remarkably hard to prove anything with regard to the benefits of
particular test chart distributions. The noise in the readings
due to repeatability and sampling, combined with the above effects,
makes it very difficult to draw firm conclusions.

The best technique I arrived at for verification was to use a large
number of verification points, distributed both randomly in device
space and randomly in perceptual space. Often the results measured
with the two different distributions would conflict, and/or the
average vs. RMS vs. peak errors would lead to conflicting conclusions.
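
By way of illustration, here is a minimal sketch (purely hypothetical,
not Argyll code; the gamma response and point count are invented) of
what the two distributions look like for a single channel device:

    import random

    def device_to_Lstar(d):
        # Hypothetical single channel device response: device value
        # 0..1 to approximate lightness 0..100 via a gamma curve.
        return 100.0 * (d ** (1.0 / 2.2))

    def Lstar_to_device(L):
        # Inverse of the above, used to place perceptually spaced
        # points back in device space.
        return (L / 100.0) ** 2.2

    N = 100000

    # Verification points random in device space:
    dev_random = [random.random() for _ in range(N)]

    # Verification points random in (rough) perceptual space,
    # mapped into device space through the inverse model:
    perc_random = [Lstar_to_device(100.0 * random.random())
                   for _ in range(N)]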

To be practical, I had to test against synthetic device models
(how else do you "measure" 100000 test points in a repeatable
fashion?), but I suspect that the synthetic device models
introduce their own biases as well. Varying any parameter
(e.g. profile smoothness, grid resolution, synthetic device model)
would shift the results.
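
To give a feel for the kind of setup involved, a sketch along these
lines (again hypothetical; the matrix, gamma, grid resolution and
nearest-node lookup are stand-ins, not the actual models I used)
might be:

    import math, random

    def synth_device(r, g, b):
        # Hypothetical synthetic device model: gamma, then a fixed
        # matrix to rough CIE XYZ, then XYZ to L*a*b* (D50 white).
        R, G, B = (c ** 2.2 for c in (r, g, b))
        X = 0.45 * R + 0.33 * G + 0.18 * B
        Y = 0.22 * R + 0.72 * G + 0.06 * B
        Z = 0.01 * R + 0.12 * G + 0.83 * B
        def f(t):
            return t ** (1/3) if t > 0.008856 else 7.787 * t + 16.0/116.0
        fx, fy, fz = f(X / 0.9642), f(Y), f(Z / 0.8249)
        return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

    def profile_approx(r, g, b, res=17):
        # Stand-in for the profile under test: the same model sampled
        # on a res^3 grid with nearest-node lookup (no interpolation),
        # so its errors depend on the grid resolution.
        q = lambda v: round(v * (res - 1)) / (res - 1)
        return synth_device(q(r), q(g), q(b))

    def dE76(a, b):
        # Euclidean distance in L*a*b*.
        return math.dist(a, b)

    errs = []
    for _ in range(100000):
        rgb = (random.random(), random.random(), random.random())
        errs.append(dE76(synth_device(*rgb), profile_approx(*rgb)))

    avg = sum(errs) / len(errs)
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    peak = max(errs)
    print(f"avg {avg:.3f}  RMS {rms:.3f}  peak {peak:.3f}")

Changing any of the invented parameters above (the grid resolution,
the lookup scheme, the device gamma) shifts all three statistics,
which is the effect described.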

Graeme Gill.
