[argyllcms] Re: Determining proper error value for -r

  • From: Ben Goren <ben@xxxxxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Tue, 23 May 2006 12:11:31 -0700

On 2006 May 23, at 8:11 AM, Graeme Gill wrote:

> The most sensitive  "real world" test I stumbled  across, was to
> simply make  up the profile,  and eyeball the gamut  surface.  I
> found quite  noticeable changes in  the smoothness of  the gamut
> surface, as  I varied the  -r factor in profile. Too  small, and
> the surface  was noticeably bumpy. As the  number increased, the
> surface got  visibly smoother,  and looked  more like  one would
> expect for  a well behaved  device. The self fit errors  rise as
> the -r  factor goes up  too, so I  stopped at a  suitable "knee"
> point.

I generated 20 profiles of a 960-patch chart on plain paper, all
identical except for the -r value, which I varied in 0.1-step
increments from 0.0 to 1.9.
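
(In case anyone wants to repeat the experiment, the sweep is easy
to script. The sketch below is only illustrative: it assumes the
measurements live in paper.ti3, that colprof is on the PATH, and
that -qm and -r behave as described in the colprof usage text --
check it against your own Argyll version.)

    import shutil
    import subprocess

    BASE = "paper"   # hypothetical base name of the .ti3 file

    # One profile per -r value, 0.0 through 1.9 in 0.1 steps.
    for i in range(20):
        r = i / 10.0
        name = "%s_r%02d" % (BASE, i)   # e.g. paper_r06 for -r 0.6
        # Copy the measurements so each run writes its own profile
        # (colprof names its output after the input basename).
        shutil.copyfile(BASE + ".ti3", name + ".ti3")
        subprocess.check_call(
            ["colprof", "-v", "-qm", "-r", "%.1f" % r, name])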

Looking at the plots in Apple's ColorSync Utility, there's very
little difference from -r 0.0 to -r 0.5. At -r 0.6, the gamut
surface gets visibly smoother while retaining the same shape. From
there up through -r 1.3, the overall shape gradually softens and
loses definition. There's not much difference between -r 1.3 and
-r 1.9.

I just made test prints of -r 0.0, -r 0.6, -r 1.3, and -r 1.9. Of
the four, -r 0.6 is the best. -r 0.0 shows some artifacts in a
Granger rainbow, and its midtone neutral steps are slightly warm.
In -r 1.3 and -r 1.9, the highlight neutral steps are slightly
purplish. Neither -r 0.6 nor -r 1.3 shows artifacts in a Granger
rainbow, but -r 0.6 is a bit smoother and more regularly shaped.
In -r 1.9, some (different) artifacts start to appear. The entire
gray strip of -r 0.6 is the most neutral of the lot. Fine detail
in a black-and-white photo is best in -r 0.6, but some shadow
details are /slightly/ better in -r 1.3.
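
(For anyone who hasn't seen one, a Granger rainbow is just a test
image with hue ramping left to right and lightness ramping top to
bottom at full saturation. Something like the sketch below will
generate one; the 1024x512 size and the HLS conversion via Python's
colorsys module are my own arbitrary choices.)

    import colorsys
    from PIL import Image

    W, H = 1024, 512
    img = Image.new("RGB", (W, H))
    pix = img.load()

    for x in range(W):
        hue = x / float(W)                  # hue sweeps left to right
        for y in range(H):
            light = 1.0 - y / float(H - 1)  # white at top, black at bottom
            r, g, b = colorsys.hls_to_rgb(hue, light, 1.0)  # full saturation
            pix[x, y] = (int(r * 255 + 0.5),
                         int(g * 255 + 0.5),
                         int(b * 255 + 0.5))

    img.save("granger_rainbow.png")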

(I should note that, before I got the i1 and started using Argyll,
I'd have been  absolutely thrilled to get even close  to the worst
of the lot.)

The spreadsheet I created from the readings of the 39-patch chart,
duplicated eight times, came up with the following:

       +----------+-------------+-------+-----------------+
       |   Data   | Avg Std Dev | Scale | Std Dev / Scale |
       +----------+-------------+-------+-----------------+
       |     L    |     0.35    |  101  |       0.35%     |
       |     a    |     0.56    |  256  |       0.22%     |
       |     b    |     0.43    |  256  |       0.17%     |
       | spectral |     0.43    |  128  |       0.33%     |
       +----------+-------------+-------+-----------------+

(A bit of explanation: I calculated the standard deviation across
the eight copies of each patch individually--the standard deviation
of the L values for the eight white patches, the standard deviation
of the spectral values for the eight purplish-blue patches, and so
on--and then took the average of all those standard deviations. My
statistics background isn't good enough to tell whether I've made a
major boo-boo there, though....)
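
In code, the spreadsheet calculation amounts to something like the
sketch below. The 8x39 arrays of random numbers are only stand-ins
for the real readings, and the sample standard deviation (ddof=1)
and the scale values are my own assumptions, matching the table
above rather than anything Argyll produces directly.

    import numpy as np

    # Stand-in data: 8 repeats x 39 patches for each quantity.
    L = np.random.uniform(0, 100, size=(8, 39))
    a = np.random.uniform(-128, 128, size=(8, 39))
    b = np.random.uniform(-128, 128, size=(8, 39))

    def avg_std(readings, scale):
        # Standard deviation across the 8 copies of each patch,
        # then the average of those 39 per-patch values.
        per_patch = readings.std(axis=0, ddof=1)
        avg = per_patch.mean()
        return avg, avg / scale * 100.0

    for name, data, scale in (("L", L, 101), ("a", a, 256), ("b", b, 256)):
        avg, pct = avg_std(data, scale)
        print("%-8s avg std dev %.2f  (%.2f%% of scale)" % (name, avg, pct))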

The only thing I don't quite understand yet is how the value given
to -r translates into an actual error percentage. Does -r 0.6 mean
that the actual percentage error is 0.3% or 1.2%? I'm confused....

Anyway, if it's the former, I think this technique shows a good
deal of promise. If it's the latter, I'm clearly all wet....

Cheers,

b&
