[argyllcms] Re: Question regarding Profile Validation

  • From: <robert@xxxxxxxxxxxxxxxxxx>
  • To: <argyllcms@xxxxxxxxxxxxx>
  • Date: Wed, 5 Nov 2014 14:47:38 -0000

Brad Funkhouser [brad.funkhouser@xxxxxxxxxxx] says:

"I think those results are actually very good.  The average error is
imperceptible to the human eye.

Read the last line of colprof's output from when you built the main printer
profile.  It tells you what kind of errors to expect based on how it had to
structure the mapping given the patch data it used to build the profile.
There are compromises being made in the algorithm between exactness and
smoothness.  That paper has a large gamut, probably a million different
perceptible colors, and you're building a mapping into it from a couple
thousand data points, each of which has its own measurement error.  And
since the original target spread you used was not perceptually uniform, some
of those data points are going to be pretty far apart, like dE 20 or 30 or
maybe 40.  Then there's interpolation going on between those far apart data
points.  It's a bunch of compromises, and when you take it all into
consideration, an avg dE of 0.717430 seems pretty darn good to me!

For the 100 patch test target, you could have it be perceptually uniform
within the printer space by adding " -I
-ciPF6400-Canson-Baryta-310-Argyll-2584.icc " to targen.  That would be a
better test spread.

For the main target, you could use a preconditioning profile to get a
perceptually uniform spread which would even out your data points."
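
For concreteness, the targen invocation Brad suggests might look something like the following.  This is only a sketch: the device type (-d2, RGB print), patch count (-f100) and output basename are my assumptions; the -I and -c options and the profile name are exactly as quoted above.

```shell
# Sketch of a perceptually-uniform 100-patch verification target
# (assumed options; check "targen -?" for your setup):
#   -d2   : RGB print device (assumption -- adjust for your driver)
#   -f100 : 100 test patches
#   -I -c : preconditioning options as quoted in Brad's message
targen -v -d2 -f100 -I -ciPF6400-Canson-Baryta-310-Argyll-2584.icc verify100
```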


Yes, I agree that an average dE of 0.7 is good; about 85 of the 100 values
are at 1.0 or below, and about 40 are under 0.5.  I'm less sure that 15
values over 1.0, with 4 over 2.0, is so great, though, considering that all
of the colors are guaranteed to be in gamut.
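
As a point of reference for these numbers: the per-patch dE figures are distances between reference and measured Lab values, and in the plain CIE76 form that is just a Euclidean distance (Argyll can also report CIE94/CMC variants depending on options).  A quick sketch of how such an average and worst-case are computed, with invented toy values rather than my actual data:

```python
import math

def de76(lab1, lab2):
    """CIE76 delta E: Euclidean distance between two L*a*b* triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Toy reference/measured pairs (NOT real patch data), just to show the stats.
pairs = [
    ((50.0, 10.0, -10.0), (50.3, 10.2, -10.1)),
    ((70.0, -5.0,  20.0), (69.5, -4.6,  20.4)),
    ((30.0,  0.0,   0.0), (31.2,  0.5,  -0.3)),
]

errors = [de76(ref, meas) for ref, meas in pairs]
print("avg dE = %.4f, max dE = %.4f" % (sum(errors) / len(errors), max(errors)))
```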

Still, I'm OK with the figures, as long as they are correct.  My concern is
that I may not be testing correctly.  You've pointed out two things I can do
which may improve things.  There was the question of relative versus
absolute intent, which in this test doesn't seem to make much difference
(if anything, the Relative readings were better).  But above all I would
like to know why I have to use the -N flag in colverify, given that the
test is Absolute all the way through.  Removing -N results in BAD dEs!
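
On the -N question: if I read the colverify documentation correctly, -N normalises each file's readings to its white point, which effectively turns an absolute comparison into a relative one.  A toy illustration of why that can collapse large absolute dEs (my own sketch of the idea, not Argyll's actual code; all values invented):

```python
import math

D50 = (96.42, 100.0, 82.49)  # D50 reference white, Y scaled to 100

def xyz_to_lab(xyz, white=D50):
    """Standard CIE XYZ -> L*a*b* conversion."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def de76(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy case: the measured readings come out uniformly 10% darker than the
# reference data (e.g. an overall absolute luminance offset).
ref_white  = (80.0, 83.0, 70.0)
ref_patch  = (30.0, 31.0, 25.0)
meas_white = tuple(0.9 * c for c in ref_white)
meas_patch = tuple(0.9 * c for c in ref_patch)

raw = de76(xyz_to_lab(ref_patch), xyz_to_lab(meas_patch))

# What -N plausibly does: scale one file's readings so the white Y matches.
scale = ref_white[1] / meas_white[1]
norm_patch = tuple(scale * c for c in meas_patch)
norm = de76(xyz_to_lab(ref_patch), xyz_to_lab(norm_patch))

print("raw dE %.3f -> white-normalised dE %.3f" % (raw, norm))
```

In this toy case the raw absolute dE is a couple of units while the normalised dE is essentially zero, which is the kind of gap that could explain "BAD dEs" without -N if there is any overall luminance offset between the reference data and the measurements.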

Robert
