[argyllcms] A few questions + an idea for spyd2.c (this time even more useless than before!)

  • From: howdy555@xxxxxxxxx
  • To: Graeme Gill <argyllcms@xxxxxxxxxxxxx>
  • Date: Wed, 11 Mar 2009 14:33:53 +0100

Hello!

Today was a glorious day for me. At last I managed to get reasonably
good colors out of my trashy monitor (the el-Cheapo Hyundai N220W)!
Just for the record, I started with max DE = 17; at one point it was max
DE = 3700-something (yup, this is not a typo: 3700! That profile
redefined the meaning of a "miscalibrated" monitor :] ).

Today, finally, max DE = 1.89! [then I started playing with
my monitor's menu and accidentally pressed "reset to defaults" :(. Oh, well....]
(note to self: write the monitor settings into the profile description).

Anyway, I would like to ask you about 3 things:
1) Is there any option that would increase the profile quality even
more than -qu? With -qm I get peak err = 2.808569, avg err = 0.511902,
with -qu: peak err = 1.891550, avg err = 0.446999, so maybe with -qX
(as in eXtreme) I would get 1.50? :). The calculation time with a LUT
profile is very reasonable, even for the 'ultra' quality.

========

2) I have been playing with the spyd2.c again :). I would like to know
everyone's opinion about my approach:
The original implementation computes a weighted average of two XYZ readings:
a "fast" one (with a very small integration time) and a "slow" one
(possibly with an increased integration time, the increase being the
weight). This puzzled me, as the averaging happens very far from the source of
all the errors - the meter's sensors. So I tried a different approach:
I averaged the sensor readings and THEN calculated XYZ from the
averaged data. The real novelty, however, is that I allowed ONLY
non-zero sensor values into the average. So each sensor "k" has its own
"n[k]" and its own "accumulatedValue[k]" (with the result equal to
sensor[k] = accumulatedValue[k]/n[k], of course). This makes the
problem of "zero XYZ result for non-zero sensor values" much less
likely to occur. To get better results, I averaged four readings: a fast one,
a moderately slow one, and two very slow ones.

fast = the initial, very low integration time (before DO_ADAPTIVE)

moderately slow = original, with DO_ADAPTIVE (using the multiplier of
2.0 for bxyz<5 and 3.0 for bxyz<0.5; the rest as in the original code)

very slow =
  if sensor[4]==0 : moderately slow with yet another DO_ADAPTIVE
                    else the same as moderately slow

I also included a safeguard that keeps the accumulated product of ALPHAs
in the nframes *= ALPHA step from exceeding 20, to avoid really
excessive measurement times for truly zero luminance.
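The safeguard can be sketched roughly like this (a minimal illustration, not the actual spyd2.c code; the names ALPHA_CAP and step_alpha_product are mine):

    /* Cap the accumulated product of ALPHA factors at 20, so a truly
     * black patch cannot grow nframes (and thus the measurement time)
     * without bound.  ALPHA_CAP and step_alpha_product are illustrative
     * names, not taken from spyd2.c. */

    #define ALPHA_CAP 20.0

    /* Apply one ALPHA step to the running product, clamped at ALPHA_CAP. */
    double step_alpha_product(double alpha_product, double alpha) {
        alpha_product *= alpha;        /* nframes would be scaled by this */
        if (alpha_product > ALPHA_CAP)
            alpha_product = ALPHA_CAP; /* refuse to integrate any longer */
        return alpha_product;
    }
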

After collecting all 4 measurements I calculated the average of non-zero
sensor values.

After obtaining the averaged sensor values, XYZ is calculated using the
original Argyll code.
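To make the scheme concrete, here is a minimal sketch of the per-sensor non-zero averaging (not the actual spyd2.c code; NSENS, NREADS, avg_nonzero and the sample values are all illustrative):

    /* Each sensor k keeps its own count n[k] and accumulatedValue[k];
     * only non-zero samples enter the average, so one dropped sample
     * no longer zeroes the whole result. */
    #include <stdio.h>

    #define NSENS  8   /* number of sensor channels (assumed here) */
    #define NREADS 4   /* fast + moderately slow + 2 very slow readings */

    /* Average only the non-zero entries of samples[]; 0.0 if all are zero. */
    double avg_nonzero(const double samples[], int count) {
        double acc = 0.0;   /* accumulatedValue[k] */
        int n = 0;          /* n[k] */
        for (int i = 0; i < count; i++) {
            if (samples[i] != 0.0) {
                acc += samples[i];
                n++;
            }
        }
        return (n > 0) ? acc / n : 0.0;
    }

    int main(void) {
        /* readings[i][k]: sample i of sensor k; zeros mark dropped samples */
        double readings[NREADS][NSENS] = {
            { 0.0, 1.2, 0.9, 0.0, 2.1, 1.0, 0.5, 0.3 },
            { 1.1, 1.3, 0.0, 0.8, 2.0, 1.1, 0.6, 0.0 },
            { 1.0, 1.2, 1.0, 0.9, 0.0, 1.0, 0.5, 0.4 },
            { 1.2, 0.0, 0.9, 0.7, 2.2, 1.2, 0.0, 0.3 },
        };
        double sensor[NSENS];

        for (int k = 0; k < NSENS; k++) {
            double samples[NREADS];
            for (int i = 0; i < NREADS; i++)
                samples[i] = readings[i][k];
            sensor[k] = avg_nonzero(samples, NREADS);
            printf("sensor[%d] = %.4f\n", k, sensor[k]);
        }
        /* sensor[] would then go into the original Argyll sensor->XYZ code */
        return 0;
    }

The point is that the filtering happens per sensor channel, before any XYZ conversion, so a single zero sample only shrinks that one channel's divisor instead of poisoning the final XYZ.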
---
speed: it is much slower than the original method, although I would say
it is still usable for monitor calibration. Any further increase in
integration time or in the number of averaged measurements seems
unacceptable for this purpose, though.
accuracy: yet to be determined. I would guess it is much higher, with
almost all of the "XYZ=0 for non-zero sensor values" cases removed. A
possible drawback: the returned reading may be slightly higher than it
should be. All of this needs to be verified by a good soul with
higher-grade equipment (if anyone could test it, I am more than willing
to share the modified spyd2.c).

========

3) The camera calibration - this one puzzles me so I would really be
grateful for some insights about my wacky idea:

Why can't I use my trashy monitor as a calibration target? Why do I
have to use expensive calibration targets? If I understand correctly,
the only difference is that with the targets I know the XYZ values of
each patch. But if I take a picture of the area very close to the
colorimeter, using a rubber anti-flare shield mounted on the lens, I
would be able to measure one color whose XYZ coordinates I know (at
least approximately), because I can read this data from the Spyder.
The only drawback would be the number of photographs taken - at least
100. This could be reduced by first measuring a small number of colors
(e.g. 4), then drawing a grid of those 4 patches on the monitor and
photographing that. This would reduce the number of pictures to 25 - a
very reasonable amount.
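The photograph-count arithmetic above is just a ceiling division; as a tiny helper (purely illustrative, the name shots_needed is mine):

    /* With P colors to capture and G patches per grid shot,
     * ceil(P / G) photographs are needed. */
    int shots_needed(int ncolors, int patches_per_shot) {
        /* integer ceiling division */
        return (ncolors + patches_per_shot - 1) / patches_per_shot;
    }

For example, 100 colors at 4 patches per shot gives the 25 pictures mentioned above; a larger grid would shrink the count further, at the cost of smaller patches in the frame.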

Do you think this is a sensible approach?


