[argyllcms] Re: 0.60 CMYK profile misshaped

  • From: "Gerhard Fürnkranz" <nospam456@xxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Thu, 07 Dec 2006 10:09:17 +0100

-------- Original Message --------
Date: Thu, 07 Dec 2006 18:06:14 +1100
From: Graeme Gill <graeme@xxxxxxxxxxxxx>
To: argyllcms@xxxxxxxxxxxxx
Subject: [argyllcms] Re: 0.60 CMYK profile misshaped

> In theory the per channel curve modelling is meant to pick
> channel non-linearity stuff up, since it can model behaviour in more
> detail than the 4D grid, but in practice it has limitations. One
> limitation is that the per channel curves have to be monotonic,
> and it therefore tends to "average out" non-monotonic sections,

Is non-monotonicity really a problem? Have you e.g. actually seen (CMYK) 
printers with non-monotonic colorant vs. L* responses? I guess this would be 
rather unusual, given the way halftoning works (if one adds even more dots, 
the result IMO cannot become lighter). Sure, colorant vs. C* often 
reverses beyond the saturation point, particularly for C or M (but the section 
beyond saturation should not be used anyway). Colorant vs. a* or b* may indeed 
be non-monotonic, except obviously for the yellow channel, but a* or b* is 
rarely a calibration target for C, M or K anyway. And I'm not sure whether 
the Euclidean distance to paper white also typically reverses at the saturation 
point of any colorant, does it? (But even if it does, I guess it's usually 
monotonic up to that point.)
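The question can be checked directly from step-wedge measurements. A minimal sketch (the Lab values for the cyan ramp below are hypothetical, assumed ordered from 0% to 100% ink):

```python
import math

def de_to_white(lab, white):
    """Euclidean (CIE76) distance from a Lab value to paper white."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab, white)))

def is_monotonic(seq, increasing=True):
    """True if the sequence never reverses direction."""
    pairs = zip(seq, seq[1:])
    return all(b >= a for a, b in pairs) if increasing else \
           all(b <= a for a, b in pairs)

# Hypothetical cyan step-wedge measurements: (L*, a*, b*) from 0% to 100% ink.
ramp = [(95.0, 0.5, -1.0), (80.2, -12.0, -20.5), (62.1, -25.3, -38.0),
        (50.4, -30.1, -45.2), (46.8, -28.9, -44.0)]
paper_white = ramp[0]

L_values  = [lab[0] for lab in ramp]
dE_values = [de_to_white(lab, paper_white) for lab in ramp]

print(is_monotonic(L_values, increasing=False))  # L* should only darken
print(is_monotonic(dE_values, increasing=True))  # distance to white should only grow
```

The same check run per channel on a real target would show whether any colorant's L* or distance-to-white response actually reverses before saturation.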

> which is probably not the very best behaviour. It's difficult
> to figure out how it could be made to work much differently though.
> 
> One of the earlier versions of the profiler would probably have done
> better on this data, since it looks for step wedge type values,
> and then computes the per channel curves from this (an "on the fly"
> calibration), but it did less well on other data sets, and doesn't
> work at all if there are no step wedge values in the test set.
> 
> The probable explanation for the difference between the V53 and V60
> profiles, is that I accidentally turned off the use of CIE94 Delta
> E when optimizing the per channel curves, and normal delta E
> is being used. It seems to have a larger effect than one would
> imagine for this particular device on the maximum error
> (xicc/xlut.c, about line 28).
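For reference, the two metrics mentioned above differ mainly for saturated colors: plain (CIE76) delta E is the Euclidean Lab distance, while CIE94 down-weights chroma and hue differences as chroma grows. A minimal sketch (hypothetical Lab pairs, graphic-arts CIE94 weights kL=kC=kH=1, K1=0.045, K2=0.015):

```python
import math

def delta_e_76(lab1, lab2):
    """Plain Euclidean CIE76 delta E."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def delta_e_94(lab1, lab2):
    """CIE94 delta E with graphic-arts weights (kL=kC=kH=1)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)          # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    # Hue difference squared, clamped against rounding below zero.
    dH_sq = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    sC = 1 + 0.045 * C1
    sH = 1 + 0.015 * C1
    return math.sqrt(dL ** 2 + (dC / sC) ** 2 + dH_sq / sH ** 2)

# For a chroma-dominated difference at high chroma, CIE94 reports a much
# smaller error than CIE76, so an optimizer minimizing plain dE will trade
# lightness accuracy against chroma accuracy differently.
sat1 = (50.0, 60.0, -40.0)
sat2 = (50.0, 66.0, -44.0)
print(delta_e_76(sat1, sat2))
print(delta_e_94(sat1, sat2))
```

For neutral colors (C* = 0) the two metrics coincide, which is why the switch mostly shows up in the saturated regions of the gamut.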

Actually I'm still a bit sceptical regarding the shaper/matrix/shaper optimized 
curves. In many cases they seem to work very well, but I have also encountered 
data sets where "-ni" or "-ni -no" resulted in better accuracy.

Or another example: I created a profile from a gamma=1 raw scan of an IT8 
target. Then I applied gamma=2.2 to the raw scan in an image editor and created 
a profile from the brightened scan image again. Ideally, the difference between 
these two profiles should be handled by the prelinearization curves only, i.e. 
the prelinearization curves of the two profiles should differ by a gamma 
correction of 2.2, and the PCS curves and the CLUT should be approximately 
equal. But the prelinearization shapes of the two profiles were significantly 
different (not just by gamma 2.2). So I guess the optimization possibly gets 
stuck in a local minimum, either for the 1st profile, or for the 2nd one, or 
even for both. Btw, I'm wondering: do you actually optimize all orders together 
from the beginning, or do you start with 1st-order curves, optimize, use the 
result as the starting point for the next iteration, drop in the next order 
(additionally), optimize again, and so on?
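The staged strategy asked about here can be sketched as follows. This is a toy illustration only, using a sinusoidal perturbation basis and plain gradient descent, not Argyll's actual curve parameterization or optimizer:

```python
import math

def curve(x, coefs):
    """Toy shaper: identity plus perturbation terms that keep the
    endpoints fixed at 0 and 1 (a stand-in for a real per-channel
    curve parameterization, which would differ)."""
    return x + sum(c * math.sin((k + 1) * math.pi * x)
                   for k, c in enumerate(coefs))

def fit(xs, ys, coefs, iters=2000, lr=0.05):
    """Crude gradient descent on mean squared error."""
    coefs = list(coefs)
    for _ in range(iters):
        grad = [0.0] * len(coefs)
        for x, y in zip(xs, ys):
            err = curve(x, coefs) - y
            for k in range(len(coefs)):
                grad[k] += 2 * err * math.sin((k + 1) * math.pi * x)
        for k in range(len(coefs)):
            coefs[k] -= lr * grad[k] / len(xs)
    return coefs

# Hypothetical measured response to approximate (a gamma-like curve).
xs = [i / 20 for i in range(21)]
ys = [x ** 1.8 for x in xs]

# Stage-wise: fit order 1, then extend the solution with a zeroed
# higher-order term and refine, rather than fitting all orders at once.
coefs = fit(xs, ys, [0.0])            # 1st order only
coefs = fit(xs, ys, coefs + [0.0])    # warm-start, add 2nd order
coefs = fit(xs, ys, coefs + [0.0])    # warm-start, add 3rd order

rms = math.sqrt(sum((curve(x, coefs) - y) ** 2
                    for x, y in zip(xs, ys)) / len(xs))
print(rms)
```

The warm-started staging tends to keep the low-order coefficients near their coarse solution, whereas optimizing all orders jointly from scratch gives the optimizer more freedom and, plausibly, more local minima of the kind suspected above.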

Regards,
Gerhard

