[argyllcms] Re: Determining proper error value for -r

  • From: Ben Goren <ben@xxxxxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Wed, 24 May 2006 22:30:43 -0700

On 2006 May 24, at 9:31 PM, Graeme Gill wrote:

> Ben Goren wrote:
>
>> Before I go  any further, the only  really important part. What
>> ``average deviation as a percentage'' does the following sample
>> of  values represent? These  partial readings  are of  the same
>> color (paper white,  in this case) in different  patches on the
>> same  sheet  of  paper,  and  the last  line  is  the  standard
>> deviation.
>>
>> LAB_L   LAB_A   LAB_B   400     410     420     430     440
>> 94.69   2.28    -9.05   27.72   49.83   84.45   104.77  109.47
>> 95.23   2.32    -9.36   27.92   50.59   86.03   106.81  111.64
>> 95.37   2.32    -9.29   28.09   50.88   86.36   107.08  111.88
>> 95.3    2.29    -9.35   27.62   50.39   86.09   107     111.85
>> 95.22   2.31    -9.21   28.2    50.71   85.86   106.5   111.29
>> 95.2    2.34    -9.25   27.45   50.21   85.82   106.61  111.38
>> 95.37   2.28    -9.35   27.92   50.73   86.4    107.23  112.02
>> 95.23   2.31    -9.3    27.88   50.41   85.84   106.69  111.54
>> 0.22    0.02    0.11    0.25    0.34    0.61    0.77    0.81
>>
>> (That's the  readings of but  one ``color'' patch from  a chart
>> with a few dozen such. If you need the full set of the spectrum
>> readings, just give a holler.)
>
> Well,  the whole  thing is  a little  "loose", because  it's not
> actually that critical in the scheme of things.  I calculate the
> average deviation (absolute) as 0.08, 0.02,  0.05 for L, a and b
> respectively. Since the range of L  is 0 to 100, this translates
> roughly into a percentage.
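
Just to make sure I'm reading ``average deviation (absolute)'' the
same way you are, here's how I'd compute it (a rough Python sketch;
the exact averaging is my guess, so it needn't match your 0.08):

  # Rough sketch of "average deviation (absolute)" for the L*
  # column of the readings above; the averaging method is my
  # assumption, so the result needn't land on Graeme's 0.08.
  L = [94.69, 95.23, 95.37, 95.30, 95.22, 95.20, 95.37, 95.23]
  mean = sum(L) / len(L)
  avg_dev = sum(abs(x - mean) for x in L) / len(L)
  # L* runs 0..100, so this figure is already roughly a percentage.
  print(round(avg_dev, 2))   # about 0.13 for the column above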

Um...I guess what I'm /really/ asking is: what does that mean for
-r? Would it be -r 1, or -r 0.5, or -r 2...?

Put another way, how does -r relate to standard deviation? Given
the one, how would you convert to the other? Or is that the wrong
way to go about it entirely?
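
My naive guess, for what it's worth, is something like the sketch
below, assuming errors that are roughly normally distributed (so
the average absolute deviation is about 0.8 of the standard
deviation) and assuming -r wants that figure as a percentage of
the 0-100 L* range. That's exactly the sort of guess I'd like
confirmed or shot down:

  import math

  # Naive conversion from a measured standard deviation to an -r
  # value.  Both assumptions are mine: that -r is the average
  # absolute deviation as a percentage of the 0..100 L* range,
  # and that the errors are roughly normal, in which case the
  # average absolute deviation is sqrt(2/pi) (about 0.8) of the
  # standard deviation.
  def r_from_stddev(sigma_L):
      return math.sqrt(2.0 / math.pi) * sigma_L

  print(round(r_from_stddev(0.22), 2))   # the 0.22 sigma above -> about 0.18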

> In the 0.53  release, if you set  -r to 1.3 or so,  it should be
> fine for 99% of all situations too.

I'll try  that as a  starting point  for my parents'  LaserJet the
next time I'm there. Thanks!
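
(Assuming I've got the mechanics right, that means a colprof
invocation along these lines; corrections welcome if -r belongs to
a different tool or the syntax is off:

  colprof -v -r 1.3 laserjet

where ``laserjet'' is just a made-up basename for the .ti3 file
holding the chart measurements.)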

>> What I'm  envisioning for  the future  is a  two-pass profiling
>> process, not noticeably different for  the end user from what's
>> currently used for high-end profiling. I  think it has a chance
>> at giving better results for high-end profiling, and I'm almost
>> certain it'll make profiling poorly-behaved devices pretty much
>> straightforward.
>
> Sorry,  I  don't agree. While  there  may  be  some merit  in  a
> more  complicated  approach for  high  end  profiling (still  to
> be  proven),  I  can't  see  how  it  is  relevant  to  low  end
> devices. There's no  point trying  to get that  extra 1  delta E
> accuracy somewhere in the colorspace,  if the device is banding,
> has inks that change by 3 delta  E in the first 24 hours, or has
> a repeatability of +/- 2-6 delta E.

With that perspective, I completely agree with you.

What I'm looking at, however, is the case where you've got a
device that needs more leeway than the default (assuming that -r
1.3 won't cut it for, say, my parents' LaserJet) but you don't
want to spend lots of time experimenting to find out what the
optimal value is.

But  if a  higher  default  value will  work  for  them and  won't
(significantly) hurt everybody else, it's obviously a moot point.

> I'm  sure  these  sorts  of  issues can  be  improved  on,  with
> sufficient knowledge,  ingenuity and  persistence, but  it's the
> sort  of  project  that  would   normally  take  some  time  and
> resources.

Would that be time and resources that we non-C-hackers could help
with? If it means, for example, some grunt work of creating and
measuring charts or hacking together Perl programs or spreadsheets
to analyze them, I'd be more  than happy to volunteer. And I won't
complain if  it turns  out that  it's a  dead end. Well,  not very
loudly, at least.

Cheers,

b&
