[argyllcms] Re: Targen query

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Sun, 26 Mar 2006 23:31:54 +1000

Milton Taylor wrote:

> Why is it that the targets produced by targen perceptually look - at
> least on my printouts - as though they're made up mostly of quite
> saturated color patches, even though the algorithm is distributing the
> patches evenly in device space? i.e. there don't seem to be all that
> many 'greyish' subtle colors, either light or dark.

I don't have any researched explanation, but a hand-waving one is that
for a color to be neutral or near neutral, the three device values have
to be fairly close to each other, and since the values are distributed
evenly, the chances of that happening are relatively small. It's far
more likely that there will be large differences in device value, hence
subjectively a large number of saturated colors.
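That hand-waving argument is easy to check numerically. A minimal Monte
Carlo sketch in Python (the 0.15 channel-spread threshold is an
arbitrary illustrative choice for "near neutral", not anything targen
actually uses):

```python
import random

# Estimate how often a uniformly distributed RGB patch lands near
# neutral, i.e. all three channels within a small spread of each other.
random.seed(0)
N = 100_000
spread = 0.15  # arbitrary "near neutral" threshold in device units

near_neutral = 0
for _ in range(N):
    rgb = (random.random(), random.random(), random.random())
    if max(rgb) - min(rgb) <= spread:
        near_neutral += 1

print(f"fraction near neutral: {near_neutral / N:.3f}")
```

For three independent uniform values the probability that the range is
at most s is 3s^2 - 2s^3, which for s = 0.15 is about 0.06, so roughly
94% of patches end up visibly non-neutral.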

> I double checked the distribution of rgb values in the ti1 file and that
> was all in order.

An exercise I've tried a couple of times is to pull a test chart into
Photoshop, and then apply a gaussian blur with a really large radius.
Reassuringly, the result is grey.
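The same sanity check can be done without Photoshop: a heavy blur is
essentially an average, and the average of uniformly distributed device
values converges on mid grey. A small sketch using uniform random
patches as a stand-in for an actual chart:

```python
import random

# Average a large number of uniformly distributed RGB device values;
# a heavy blur of a well-distributed chart approximates this average.
random.seed(0)
N = 100_000
sums = [0.0, 0.0, 0.0]
for _ in range(N):
    for ch in range(3):
        sums[ch] += random.random()

means = [s / N for s in sums]
print("mean RGB:", [round(m, 3) for m in means])  # each channel near 0.5
```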

> It raises the question in my mind about what 'gamma curves' are used
> by printer manufacturers in their devices, i.e. if the printer's raw
> behaviour is perceptually non-linear, and targen is spreading patches
> with a linear distribution in device space, then I guess the saturated
> colors might well be over-represented. I suppose this is the reason
> for the -c switch in targen. (Is it therefore also necessary to select
> a -f suboption that spreads the samples in perceptual space rather
> than device space? Presumably it would not know how to do this unless
> it had been given a starting-point profile to work from with -c?)

It's there for use with test patch distributions that need some estimate
of the subjective (perceptual) distribution (i.e. -R, -I, -A > 0.0).
Most of the choices in targen are there so I could conveniently research
the impact they had on the end quality of profiles. Usually the
influence is subtle enough that small differences in the testing
procedure have an equally large effect on the outcome.
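To make the perceptual non-linearity point concrete: pushing evenly
spaced device values through an assumed gamma-2.2 response (a stand-in
for a printer's raw behaviour, not any real device's measured curve)
and converting to CIE L* shows how uneven the perceptual steps become:

```python
# Evenly spaced device values through an assumed gamma-2.2 response,
# converted to CIE 1976 lightness. The uneven L* step sizes illustrate
# the kind of skew a preconditioning profile lets targen compensate for.

def cie_lstar(Y):
    """CIE 1976 lightness from relative luminance Y in 0..1."""
    if Y > 0.008856:
        return 116.0 * Y ** (1.0 / 3.0) - 16.0
    return 903.3 * Y

device = [i / 10 for i in range(11)]  # even steps in device space
lightness = [cie_lstar(d ** 2.2) for d in device]
gaps = [lightness[i + 1] - lightness[i] for i in range(len(device) - 1)]
print("L* step sizes:", [round(g, 1) for g in gaps])
```

The steps vary by better than a factor of two across the range, so an
even device-space spread is clearly not an even perceptual spread.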

The adaptive distribution was the one I was hoping to end up using by
default, but I haven't yet found an algorithm for it that results in an
optimal enough distribution. The default setting is optimized, and
should come close to a perfectly even distribution in device space. With
any level of adaptiveness, it currently uses a one-pass algorithm that
only comes within a factor of 2:1 of the optimal distribution.

Graeme Gill.
