[argyllcms] Re: DeviceLink Profile refining

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Wed, 14 Feb 2007 10:54:07 +1100

marcel nita wrote:
The results are always the same: the errors for the white point and yellow are
increasing after the first step (the other primaries and other ink combinations
are really improving).

Could you please tell me if there are any known problems with refining, or if you have an explanation for my results? I do not expect
extraordinary results from the process, I do not expect it to do magic
things, but I think at least they shouldn't get worse.

Refine isn't sophisticated enough to keep track of the whole of the
refining history, so currently it doesn't cope well when the overall
behaviour does something non-linear. Hitting the gamut boundary is
a non-linear thing, and CMYK profiles often have complicated
gamut boundary topology near the dark end.

Looking at the PremiumGlossyPhoto results you posted, these
don't look unreasonable to me. Average and peak errors have halved,
so overall that's a pretty good result. I can understand that it's
not so good that the white has got worse, but I guess this is
the influence of trying to correct colors near white. You might
try adding several white test patches as a way of increasing
the weighting of the white error (the latest version of refine
will give the lightest patch an increased weight of 5 automatically.)
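
For a rough idea of what such a weighting does to the overall error figure,
here is a small sketch (the names and data layout are purely illustrative,
not the actual refine code; only the weight of 5 for the lightest patch
matches what refine now does):

#include <math.h>

typedef struct {
    double targ[3];    /* target Lab value for the patch */
    double meas[3];    /* measured Lab value for the patch */
} patch;

/* Plain Lab delta E (CIE76) between two values */
static double dE76(double *a, double *b) {
    double dl = a[0] - b[0], da = a[1] - b[1], db = a[2] - b[2];
    return sqrt(dl * dl + da * da + db * db);
}

/* Weighted average patch error, giving the lightest patch a weight of 5 */
double weighted_avg_err(patch *p, int n) {
    int i, lightest = 0;
    double sum = 0.0, wsum = 0.0;

    for (i = 1; i < n; i++)          /* locate the lightest patch by L* */
        if (p[i].targ[0] > p[lightest].targ[0])
            lightest = i;

    for (i = 0; i < n; i++) {
        double w = (i == lightest) ? 5.0 : 1.0;
        sum  += w * dE76(p[i].targ, p[i].meas);
        wsum += w;
    }
    return sum / wsum;
}

Duplicating the white patch in the test chart has a similar effect, since each
extra copy adds another unit of weight to that color.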

Another thing you could try is to leave out test patches of colors
that get worse, or edit the test results for those patches, making
them the same as the target, thereby selectively "turning off"
further corrections for those points. Clumsy, but it might help.
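
In code terms, the idea amounts to something like the following (again just an
illustrative sketch, reusing the hypothetical patch type and dE76() from the
sketch above, not anything that exists in refine.c):

/* "Turn off" further correction for patches whose error has grown since the
   previous pass, by copying the target value over the measurement, so the
   correction computed for those points is zero. prev_err[] holds each
   patch's error from the previous pass. */
void mask_regressed_patches(patch *p, double *prev_err, int n) {
    int i;
    for (i = 0; i < n; i++) {
        double e = dE76(p[i].targ, p[i].meas);
        if (e > prev_err[i]) {
            p[i].meas[0] = p[i].targ[0];
            p[i].meas[1] = p[i].targ[1];
            p[i].meas[2] = p[i].targ[2];
        }
        prev_err[i] = e;
    }
}

In practice you would do the equivalent by hand, editing the measurement file
before feeding it back to refine.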

marcel nita wrote:
Also, in tweak/refine.c, in function PCSp_PCSp(void *cntx, double *out,
double *in), would it be correct to test (something like dE) the new
correction and the compound correction against the corresponding target
reference data, just to see which is getting too far from the reference?

It's not easy to change things to avoid regression without keeping
a longer history, and constraining how it works a great deal more
(i.e. forcing the test chart to be constant, and adding some sort of
"finish refine" step that reverts to the best corrections found for
each point, rather than continuing to try to find better ones).
Even then it could be complicated, because each test point
correction influences nearby corrections due to the interpolation
used. Even sophisticated error minimization algorithms don't cope well with
disjoint behaviour, often needing many sample trials to get close to such
a boundary, nor do they cope with noise/non-repeatability very well.
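
As a rough sketch of the sort of per-point bookkeeping that would be needed
(the names and data layout are invented for illustration, reusing the
hypothetical patch type and dE76() from the sketches above):

#include <float.h>

typedef struct {
    double best_err;       /* smallest error seen so far for this point */
    double best_corr[3];   /* PCS correction that achieved it */
} history;

void init_history(history *h, int n) {
    int i;
    for (i = 0; i < n; i++)
        h[i].best_err = DBL_MAX;
}

/* After each refine pass, remember the correction that gave the smallest
   error for each test point. corr[i] is the PCS correction currently
   applied at point i. */
void record_pass(patch *p, double (*corr)[3], history *h, int n) {
    int i;
    for (i = 0; i < n; i++) {
        double e = dE76(p[i].targ, p[i].meas);
        if (e < h[i].best_err) {
            h[i].best_err = e;
            h[i].best_corr[0] = corr[i][0];
            h[i].best_corr[1] = corr[i][1];
            h[i].best_corr[2] = corr[i][2];
        }
    }
}

/* The "finish refine" step: revert each point to its best correction. */
void finish_refine(double (*corr)[3], history *h, int n) {
    int i;
    for (i = 0; i < n; i++) {
        corr[i][0] = h[i].best_corr[0];
        corr[i][1] = h[i].best_corr[1];
        corr[i][2] = h[i].best_corr[2];
    }
}

Even this ignores the interpolation coupling mentioned above: reverting one
point's correction changes the behaviour at its neighbours too.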

At the moment refine can be quite independently applied. You can apply it
at any stage, with any set of test points. You could use it every now
and again to track a device.

marcel nita wrote:
Then the correction with the smaller delta will be considered (think of it as
a way to ignore the previous abstract profile for certain patches that are
getting worse). I am still not sure if this approach would be correct, so it
would be very helpful if someone could correct me if I am wrong (preferably Graeme

Having said all that, I've had a bit of a play with refine, and think I've
struck upon a slightly better scheme to deal with out of gamut points, that
allows efforts to correct them without the correction "running away"
and causing things to get worse. I still notice some regressions for
dark out of gamut points though. The overall improvement is slight in
my tests, but may be worthwhile in improving behaviour for the critical
near white colors. If you're running MSWindows, you can try out this
version of refine here <http://www.argyllcms.com/refine.zip>.

Graeme Gill.


