[argyllcms] Re: targen and profile

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Thu, 22 Feb 2007 14:31:41 +1100

Lorenzo Ridolfi wrote:
My concern in using a large number of patches was to introduce noise in the measurement and reduce the profile quality in smoothness and tone transitions. Based on your advice, Argyll CMS does have some smoothing algorithm to reduce noise caused by excessive patches. Am I correct ?

Yes. This can be controlled using the -r parameter.

A last question: with a large number of patches, I understand the adaptive patch placement algorithm becomes less effective.

There's no connection.

I'm starting to believe that adaptive placement may cause more harm than good in profile generation (in the case of a large number of patches), and that by not using it I'm eliminating a variable to be tested in the profiling process. Am I correct?

The default test patch placement places patches so that the worst-case
(furthest) distance from any point in the (device) colorspace
to a test patch is minimized. This turns out (at least from my
experiments) to be a pretty good way of doing things, especially
when nothing is assumed about the device behaviour (it's something like
60% more efficient than typical cubic-grid test chart patterns). Many other
arrangements that intuitively seemed like a better idea (such as
evenly spaced patches in Lab/perceptual space) fared worse
in terms of the resulting profile accuracy. (When you
consider that the forward table maps device values into
PCS values, this does make some degree of sense.)
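The minimize-the-worst-case idea above can be sketched as greedy farthest-point sampling over a candidate grid. This is only an illustrative sketch of the principle, not Argyll's actual implementation (which uses an iterative refinement, as noted below); the function name and grid size are invented for the example.

```python
# Sketch of worst-case-distance patch placement via greedy
# farthest-point sampling (illustrative, not Argyll's code).
import itertools
import math

def farthest_point_patches(n_patches, grid_steps=11):
    """Pick patches from a candidate grid in a 3-channel device space,
    each time choosing the candidate farthest from every patch placed
    so far, which shrinks the worst-case distance step by step."""
    step = 1.0 / (grid_steps - 1)
    candidates = [tuple(i * step for i in ijk)
                  for ijk in itertools.product(range(grid_steps), repeat=3)]
    chosen = [candidates[0]]          # seed at the (0,0,0) corner
    for _ in range(n_patches - 1):
        # the candidate whose nearest chosen patch is farthest away
        best = max(candidates,
                   key=lambda c: min(math.dist(c, p) for p in chosen))
        chosen.append(best)
    return chosen

patches = farthest_point_patches(8)
```

Note that the second patch lands at the opposite corner of the device cube, and later ones spread out to cover the remaining gaps, which is the behaviour the worst-case criterion asks for.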

The current adaptive algorithm adds successive points at the
location in device space that has the largest discrepancy
between the anticipated perceptual value and the linearly
interpolated value from the existing sample points. This results
in a sampling pattern that is (within a 2:1 ratio) perfectly
optimal. That 2:1 result isn't as well distributed, though, as the
points the default algorithm produces (since the default uses an
iterative algorithm rather than a single pass), and the sampling is
only optimal when measured against the anticipated device
characterization, which is in fact not perfect (or we wouldn't be
trying to characterize the device any further!).
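The adaptive idea above can be sketched in one device dimension: repeatedly add the point where the anticipated response deviates most from linear interpolation between the existing samples. This is a hypothetical illustration only; the gamma-like stand-in model and the function names are invented, and Argyll's real algorithm works in the full device space against its preconditioning model.

```python
# 1-D sketch of adaptive point placement (illustrative, not Argyll's code):
# keep inserting the midpoint where the anticipated response differs
# most from linear interpolation between neighbouring samples.

def anticipated_response(d):
    """Stand-in for the anticipated device characterization
    (a gamma-like curve; the real model is itself imperfect)."""
    return d ** 2.2

def adaptive_samples(n_samples):
    xs = [0.0, 1.0]                      # start with the endpoints
    while len(xs) < n_samples:
        xs.sort()
        best_x, best_err = None, -1.0
        for a, b in zip(xs, xs[1:]):
            mid = 0.5 * (a + b)
            # linear interpolation between the two existing samples
            linear = 0.5 * (anticipated_response(a) + anticipated_response(b))
            err = abs(anticipated_response(mid) - linear)
            if err > best_err:
                best_x, best_err = mid, err
        xs.append(best_x)                # largest model-vs-linear discrepancy
    return sorted(xs)

samples = adaptive_samples(9)
```

Because the error being minimized is measured against the anticipated model, the resulting samples are only as good as that model, which is the caveat described above.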

If you haven't determined that adaptive is going to improve things, and
you don't want to mess around, then I'd stick to the default algorithm.

Graeme Gill.



