[argyllcms] Re: Different approach to profiling?

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 03 Sep 2007 10:48:06 +1000

Ben Goren wrote:
> The standard approach to generating test patches makes perfect
> sense for the general case: you have no idea what colors you'll
> wind up printing, so you want some sort of even distribution of
> colors to make sure everything gets as close as possible.

The ideal approach, as I see it, is to generate a set of test patches
that leaves the errors in the final profile distributed perfectly
evenly in a visual sense. Since the human eye is more sensitive to
errors near the neutral axis than it is to (say) highly saturated
colors, the density of patches would be higher near the neutral axis.

This is what the adaptive pre-conditioning algorithm in targen
attempts to do. By providing a previous profile (using the -c option)
and setting a degree of adaptation (e.g. -A 0.5), the largest expected
(visual) error anywhere due to sampling error is minimized. Unfortunately,
the algorithm only achieves this aim within a tolerance of 2:1, since
it works in a single pass, whereas the default algorithm is able to
iteratively improve on the initial placement, and so achieves an even
device-space distribution to a tighter tolerance (it aims to minimize
the largest distance from any point in the device space to its nearest
sampling point). Attempting to iteratively improve the initial adaptive
distribution fails to converge. So the choice between these two options
is currently not an easy one.
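
To make that concrete (the profile name, patch count and -d2/RGB
choice below are placeholders, not a recommendation), the two modes
would be invoked along these lines:

    targen -v -d2 -f1000 chart
    targen -v -d2 -f1000 -c previous.icm -A 0.5 chart

The first produces the default even device-space spread; the second
uses previous.icm to pre-condition the placement, with -A setting how
strongly it adapts.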

> However, as a photographer, none of my photographs actually has an
> even distribution of colors. Usually, just a few parts of the
> spectrum predominate. And black and white is just a special case
> of that -- the colors are tightly bunched along the middle of the
> Lab volume, rather than clumped in a couple areas.

> What I'm thinking of to better address profiling for photography
> would require two things (which, for all I know, might already be
> possible to do with Argyll): the ability to generate test patches
> from an image, and the ability to merge chart readings with
> earlier ones after the fact.

I've thought about doing this type of thing in the past (and even
started coding a utility to do exactly this), but it wasn't
clear whether it was really worthwhile. For instance, it might simply
be easier to use more patches with the current chart distribution.
Some people (e.g. Bill Atkinson) have been known to use
quite large numbers of test points in making charts (up to 10,000),
so this doesn't seem an unreasonable (if brute-force) approach.
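
The brute-force version is nothing more exotic than a very dense
conventional chart, e.g. (patch count and colorspace again
illustrative):

    targen -v -d2 -f10000 bigchart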

The other aspect is that while it's easy enough to create a chosen
number of test points from a set of sample images, they aren't necessarily
very evenly spread, and the other test points (the ones you need
to add to make sure that the whole gamut is sampled to some
minimal degree) should ideally work around/with the image-generated
test points, rather than simply ignoring them and therefore duplicating
them. So a technically more sophisticated approach is to try to modify
the normal point distribution algorithm to be weighted in proportion
to the likelihood of colors in the images. I haven't figured out how
to do that yet :-)
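
Just to sketch one naive way such a weighting might work (this is not
what targen does, and the likelihood() function below is a made-up
stand-in for a real image-derived histogram): a greedy farthest-point
placement in which each candidate's gap to the already-chosen set is
scaled by its likelihood weight, with a floor on the weight so the
whole gamut still receives minimal coverage:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NCAND 20000   /* random candidate device values */
#define NPAT    500   /* patches to place */
#define WMIN    0.2   /* weight floor: whole gamut keeps minimal coverage */

/* Stand-in for an image-derived likelihood of a device color.
   A real version would come from a histogram of the image(s);
   this one just favors near-neutral colors as an example. */
static double likelihood(const double d[3])
{
    double g = (d[0] + d[1] + d[2]) / 3.0, s = 0.0;
    int j;
    for (j = 0; j < 3; j++)
        s += (d[j] - g) * (d[j] - g);
    return exp(-10.0 * s);
}

int main(void)
{
    static double cand[NCAND][3], w[NCAND], dmin[NCAND];
    double pat[3];
    int i, j, n;

    for (i = 0; i < NCAND; i++) {
        for (j = 0; j < 3; j++)
            cand[i][j] = rand() / (double)RAND_MAX;
        w[i] = WMIN + (1.0 - WMIN) * likelihood(cand[i]);
        dmin[i] = 1e30;    /* distance to nearest chosen patch so far */
    }

    for (n = 0; n < NPAT; n++) {
        int best = 0;
        double bestd = -1.0, d2, t;

        /* choose the candidate whose weighted gap to the set is largest */
        for (i = 0; i < NCAND; i++) {
            if (dmin[i] * w[i] > bestd) {
                bestd = dmin[i] * w[i];
                best = i;
            }
        }
        for (j = 0; j < 3; j++)
            pat[j] = cand[best][j];
        printf("%f %f %f\n", pat[0], pat[1], pat[2]);

        /* update every candidate's distance to its nearest chosen patch */
        for (i = 0; i < NCAND; i++) {
            for (d2 = 0.0, j = 0; j < 3; j++) {
                t = cand[i][j] - pat[j];
                d2 += t * t;
            }
            if (sqrt(d2) < dmin[i])
                dmin[i] = sqrt(d2);
        }
    }
    return 0;
}

Whether a one-shot scheme like this (or an iterated refinement of it)
actually converges to a usefully even weighted distribution is
exactly the open question above.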

> You would start with a profile built the same way as is normal
> today. Then, when you wanted to make a high-quality print of a
> particular image, you'd create a chart tuned to the colors used in
> the image, print it, read it, and generate a new profile that
> includes the original data as well as the new data.

Anything is possible. Of course if I was that keen on optimizing for
a particular print, I'd first make sure I was using a device link
workflow (much smoother and more accurate), and customize the gamut
mapping to each image (see the src.gam option to icclink -g) :-)
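
For completeness, and with all file names illustrative, that kind of
image-tuned device link might look something like:

    tiffgamut sRGB.icm image.tif
    icclink -g image.gam sRGB.icm printer.icm image2printer.icm

where tiffgamut derives the image's gamut (image.gam) using the
profile the image is in, and icclink creates a device link from the
source to the destination profile, using that gamut to drive the
gamut mapping.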

Graeme Gill.
