[argyllcms] Re: A few questions + an idea for spyd2.c (this time even more useless than before!)

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Thu, 12 Mar 2009 10:15:38 +1100

howdy555@xxxxxxxxx wrote:

> 1) Is there any option that would increase the profile quality even
> more than -qu? With -qm I get peak err = 2.808569, avg err = 0.511902,
> with -qu: peak err = 1.891550, avg err = 0.446999, so maybe with -qX
> (as in eXtreme) I would get 1.50? :). The calculation time with a LUT
> profile is very reasonable, even for the 'ultra' quality.

It's not clear exactly what your workflow is, or where these
numbers come from. Are they from colprof? How many points
are in your test set?
As has been noted, the self-fit errors do not give much
real indication of how accurate a profile is; you need independent
measurements for this. You may well find that the self-fit numbers
get worse as you increase the number of test points used (since the
profile will be unable to track the noise), even though the accuracy
of the profile is increasing.

Generally average errors < 1.0 indicate that you are either fitting
a very small number of points, or are close to the repeatability
limits of the device/instrument.

> The original implementation makes a weighted average of 2 XYZ values read:
> the "fast" one (with a very small integration time) and the "slow" one

Actually, the current 1.0.3 release code does something different:
it takes an initial very short reading, and then uses that to scale the
real reading. I prefer that approach because it minimizes quantization
error, but unfortunately the Spyder 3 doesn't cope with the short
measurement and plays up, hence the change to an incremental scheme.
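
In outline, that scaling scheme is something like the sketch below.
It's a rough illustration only; read_sensor(), TARGET_COUNT and the
other names are assumed stand-ins, not the actual spyd2.c interface:

#define TARGET_COUNT 50000.0  /* assumed: desired raw count level */
#define PROBE_TIME   0.02     /* assumed: very short probe integration (s) */
#define MAX_TIME     2.0      /* assumed: longest integration allowed (s) */

/* Hypothetical driver call: integrate for 'secs', return a raw count. */
extern double read_sensor(double secs);

static double adaptive_read(void)
{
    double probe, itime;

    /* Take a very short probe reading to gauge the light level. */
    probe = read_sensor(PROBE_TIME);

    /* Scale the real integration time so the final reading lands
       near TARGET_COUNT, minimizing quantization error. */
    itime = MAX_TIME;
    if (probe > 0.0) {
        itime = PROBE_TIME * TARGET_COUNT / probe;
        if (itime > MAX_TIME)
            itime = MAX_TIME;
    }

    /* Take the real reading, normalized back to counts per second. */
    return read_sensor(itime) / itime;
}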

> - possibly increasing the integration time, with the increase being the
> weight). This puzzled me as the average is very far from the source of
> all the errors - the meter's sensors.

Since the sensor readings and XYZ are linearly related (the
"level 2" correction typically doing nothing for the Spyder 3),
I'm not sure this makes any difference in itself.
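
To make the point concrete: the conversion is just a matrix multiply,
so averaging sensor values and then converting gives the same result
as converting each reading and then averaging the XYZ values. A sketch
(the channel count and matrix are assumptions; the real calibration
data comes from the instrument):

#define NSENS 7    /* assumed number of raw sensor channels */

/* Linear sensor-to-XYZ conversion: XYZ = cal * sens. Because this is
   linear, it commutes with averaging. */
static void sens_to_XYZ(double XYZ[3], double sens[NSENS],
                        double cal[3][NSENS])
{
    int i, k;

    for (i = 0; i < 3; i++) {
        XYZ[i] = 0.0;
        for (k = 0; k < NSENS; k++)
            XYZ[i] += cal[i][k] * sens[k];
    }
}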

> Therefore I tried a different approach:
> I averaged the sensor readings and THEN calculated XYZ from the
> averaged data. The real novelty, however, is that I allowed ONLY
> non-zero sensor values into the average. So each sensor "k" has its own
> "n[k]" and its own "accumulatedValue[k]" (with the result equal to
> sensor[k] = accumulatedValue[k]/n[k], of course). This makes the
> problem of "zero XYZ result for non-zero sensor values" much less
> likely to occur. To get better results, I averaged 4 readings: a fast
> one, a moderately slow one and 2 very slow ones.

It's hard to know what's going on in the instrument. The sensors have
a typical dark frequency of 0.3 Hz, so they should produce at least two
transitions in 5 seconds when not illuminated, but this doesn't seem to
be the case. It does seem reasonable to work at the sensor reading level
if zero sensor readings are to be detected.
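
For reference, the per-sensor averaging you describe would look
something like this sketch, assuming each reading has already been
normalized to a common integration time (names are illustrative):

#define NSENS  7    /* assumed channel count */
#define NREADS 4    /* fast, moderately slow, and two very slow reads */

/* Average each channel over its non-zero readings only, so a single
   dropped (zero) reading can't zero out the whole channel. */
static void average_nonzero(double avg[NSENS], double reads[NREADS][NSENS])
{
    int k, r;

    for (k = 0; k < NSENS; k++) {
        double acc = 0.0;
        int n = 0;

        for (r = 0; r < NREADS; r++) {
            if (reads[r][k] > 0.0) {    /* skip zero readings */
                acc += reads[r][k];
                n++;
            }
        }
        avg[k] = (n > 0) ? acc / n : 0.0;  /* all-zero channel stays zero */
    }
}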

> speed: it is much slower than the original method although I would say
> it is usable even for monitor calibration. Any further increase in
> integration time or number of averaged measurements seems to be
> unacceptable for this purpose though.

I may have more of a play with it, and determine how long the integration
time needs to be on my instrument to guarantee non-zero sensor readings.
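
Something along these lines would do for that experiment (a sketch
only; read_sensors() is a hypothetical driver call):

#define NSENS 7    /* assumed channel count */

extern void read_sensors(double out[NSENS], double secs);  /* hypothetical */

/* Double the integration time until no channel reads zero, and return
   the shortest time that succeeded (or -1.0 if none within 10 s). */
static double min_nonzero_itime(void)
{
    double sens[NSENS], itime;

    for (itime = 0.1; itime <= 10.0; itime *= 2.0) {
        int k, allnz = 1;

        read_sensors(sens, itime);
        for (k = 0; k < NSENS; k++)
            if (sens[k] <= 0.0)
                allnz = 0;
        if (allnz)
            return itime;
    }
    return -1.0;
}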

> 3) The camera calibration - this one puzzles me so I would really be
> grateful for some insights about my wacky idea:
>
> Why can't I use my trashy monitor as a calibration target? Why do I
> have to use expensive calibration targets? If I understand correctly

Pretty much, yes. The reason is that the spectral output shape of
a monitor is narrow band, since this is the best way of achieving
a wide color gamut at minimal cost (i.e. 3 colorants). The real-world
colors of reflective objects, typically the subject of most photography,
have the opposite characteristic of a smoother, broadband spectrum.
A reflective measurement chart is therefore much more characteristic
of what the camera is going to be used on, while the spectral
interaction between a monitor and a camera sensor is likely to
be relatively unpredictable in its correlation to the real world.

You could do something like what you are suggesting, and create
a profile, but much like using a scanner in place of an instrument,
you might be disappointed with the result.

Graeme Gill.
