> Why don't you try the adaptive mode and see what you think:
> spotread -e vs. display mode: spotread -d

spotread -e appears to fluctuate less than half as much as spotread -d over multiple readings, which is excellent. I'm definitely looking forward to trying out dispcal -V when RC3 is released.

The measurements between the two modes differ a bit, though. Should I assume that the spotread -e measurements are more correct than the spotread -d measurements? Below are multiple measurements of the same gray patch on my CRT (set to 2048x1536 @ 85Hz / ~85 cd/m2 white / ~0.01 cd/m2 black) with an uncalibrated video LUT: http://pastebin.com/f7ed5671f

Graeme, do you have any insight into this? Does the difference I'm seeing between the spotread -e and spotread -d modes appear normal to you? For reference, I did use -N when switching between -e and -d in order to eliminate the internal sensor calibration as an additional variable.
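In other words, the comparison was between invocations along these lines (only the flags already mentioned above), with -N keeping the internal calibration state fixed across both runs:

    spotread -N -e    (adaptive emissive mode)
    spotread -N -d    (display mode)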
> Avg 11.448286 11.986959 12.567771 41.195356 -0.781389 -8.206391
> Avg 11.129467 11.682361 12.249876 40.706755 -0.977966 -8.140483

That's a DE of 0.44 or so, which is not very large. The fact that the non-linearity correction is reversed between low and high gain may be a factor in the discrepancy, and should improve with the next release. But simply from an engineering point of view, I wouldn't expect exact agreement between a mode that changes integration time and gain and one with a fixed integration time.
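As a quick sanity check, the difference can be recomputed from the last three columns, assuming they are L*, a*, b*. This minimal Python sketch uses the plain CIE76 formula and gives about 0.53; the 0.44 quoted is consistent with a weighted formula such as CIEDE2000, which down-weights the lightness difference at this L*:

    # Plain CIE76 delta E between the two averaged readings, assuming
    # the last three columns above are L*, a*, b*.
    import math

    lab_e = (41.195356, -0.781389, -8.206391)   # spotread -e average
    lab_d = (40.706755, -0.977966, -8.140483)   # spotread -d average

    de76 = math.sqrt(sum((p - q) ** 2 for p, q in zip(lab_e, lab_d)))
    print(f"CIE76 delta E = {de76:.2f}")        # prints ~0.53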
The resulting color temperatures between the two modes seem close enough, but the color coordinates and resulting Delta E values differ somewhat significantly.
Generally < 1 DE is regarded as insignificant.
They can't both be correct, so which should I trust as more accurate? Another thing I found a bit curious was how much my measurements of a completely black screen varied every time I did an internal calibration of the sensor. Below are only -e results, but the -d measurements showed similar changes between internal sensor calibrations when measuring my near-0.00 cd/m2 black point.
That's the nature of a silicon sensor for you. The background noise is proportional to temperature, so it can change noticeably for very dark readings if the instrument changes temperature. I also noticed what might be a degree of sensitivity to how well the instrument is sitting on the white tile during calibration. You need to make sure that it is sitting tightly, and isn't in bright light at the time.
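To get a feel for the scale, here is an illustrative Python sketch. The dark-offset and doubling figures are round numbers assumed for illustration, not Eye-One Pro specifications; the rule of thumb is that silicon dark current roughly doubles for every 6-8 degrees C of temperature rise:

    # Illustrative only: why dark-current drift matters against a
    # ~0.01 cd/m2 black. Offset and doubling constant are assumed.
    def dark_offset(cal_offset, delta_temp_c, doubling_c=7.0):
        """Dark signal after warming by delta_temp_c, if it doubles
        every doubling_c degrees C (silicon rule of thumb)."""
        return cal_offset * 2.0 ** (delta_temp_c / doubling_c)

    black = 0.01    # true black luminance, cd/m2
    cal = 0.005     # residual dark offset at calibration time, cd/m2
    for dt in (0.0, 2.0, 5.0, 10.0):
        err = dark_offset(cal, dt) - cal
        print(f"warmed {dt:4.1f} C -> reads ~{black + err:.4f} cd/m2")

With those assumed numbers, a 10 degree C warm-up after calibration nearly doubles the apparent black level, which is the same order as the variation being described.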
The first internal sensor calibration gave me color temperature results like the following:

  CCT = 2500K
  Closest Planckian temperature = 2500K
  Closest Daylight temperature = 2500K

The second internal sensor calibration gave:

  CCT = 2500K
  Closest Planckian temperature = 999999K
  Closest Daylight temperature = 34999K

The third internal sensor calibration gave:

  CCT = 9000K
  Closest Planckian temperature = 7000K
  Closest Daylight temperature = 7000K

See full results here: http://pastebin.com/f3cec1649
You're way down in the noise, so it's not that much of a surprise that calculations which depend on the ratios of the sensor values bounce around wildly.
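A toy simulation makes the point. This is illustrative Python, not Argyll's code: it adds a small assumed per-channel noise to a near-black XYZ reading and reports the CCT via McCamy's published approximation, which promptly scatters over thousands of kelvin:

    # Illustrative only: small additive noise on a near-black XYZ
    # reading scatters the chromaticity, and hence the CCT, wildly.
    import random

    def mccamy_cct(x, y):
        """McCamy's cubic approximation to correlated color temperature."""
        n = (x - 0.3320) / (0.1858 - y)
        return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

    true_xyz = (0.010, 0.010, 0.011)    # a ~0.01 cd/m2 near-black patch
    noise = 0.002                       # assumed per-channel noise level

    random.seed(1)
    for _ in range(5):
        X, Y, Z = (max(1e-6, v + random.gauss(0.0, noise)) for v in true_xyz)
        x, y = X / (X + Y + Z), Y / (X + Y + Z)
        print(f"x={x:.3f} y={y:.3f}  CCT ~ {mccamy_cct(x, y):8.0f} K")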
I don't expect the Eye-One Pro to be all that accurate so close to 0.00 cd/m2, but the above seems a bit odd. The first result seems sensible, with CCT, Planckian, and Daylight all reporting similar temperatures. On the second, the CCT stayed similar to the first, but the Planckian and Daylight temperatures were reported as out-of-range values. The third, taken about an hour after the first, also seems sensible, but it is significantly different from the others. What do you think?
I suspect that is just the nature of it. It so happens that with the CCT's skewed sense of "closest" it lands somewhere near a normal value, but the other readings' truer sense of "closest" reveals that the point measured is almost equally distant from a couple of points on the locus. In other words, color temperature isn't a very meaningful measure for colors that aren't close to the Daylight or Planckian locus.

Graeme Gill.