robert@xxxxxxxxxxxxxxxxxx wrote:
> Yes, I agree that an average dE of 0.7 is good; around 85 out of 100 values are at 1.0 or below, with around 40 values under 0.5. I'm not so sure that 15 values over 1.0, with 4 over 2.0, is so great though, considering that all of the colors are guaranteed to be in-gamut.
Note that V1.7 will have a new -h option for colverify that plots an error histogram. This can be quite useful in judging the significance of errors. The above error ranges look good for a print device, although it all depends on the device capability, paper uniformity, ink uniformity, and instrument repeatability.
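As an illustration of the histogram idea, here is a minimal sketch in Python that bins dE values into 0.5-wide buckets and prints a text histogram plus summary statistics. The dE list is hypothetical example data, and this is not colverify's actual implementation:

    # Bin hypothetical CIE dE values into 0.5-wide buckets and print a
    # text histogram, to judge how the errors are distributed.
    des = [0.3, 0.45, 0.7, 0.9, 1.1, 0.2, 2.3, 0.6, 1.6, 0.8]

    bins = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
    counts = [0] * (len(bins) - 1)
    for de in des:
        for i in range(len(counts)):
            if bins[i] <= de < bins[i + 1]:
                counts[i] += 1
                break

    print("avg dE = %.2f, max dE = %.2f" % (sum(des) / len(des), max(des)))
    for i, c in enumerate(counts):
        print("%.1f-%.1f: %s" % (bins[i], bins[i + 1], "#" * c))

A distribution with a long tail (a few patches far above the average) reads very differently from a uniform one, which is why a histogram helps in judging the significance of the errors.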
> My concern is that I may not be testing correctly.
Generally, errors will be much worse if you have messed up the workflow.
> But most of all, I would like to know why I have to use the -N flag in colverify, given that the test is Absolute all the way through. Taking -N out results in BAD dEs!
You don't, although it may better represent the visual error if you assume complete viewer adaptation to the white point.

Graeme Gill
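As a sketch of what normalising to the white point means numerically, the following Python assumes a simple XYZ scaling to the measured white (a "wrong von Kries" adaptation); colverify's actual -N normalisation may differ, and all patch and white values below are hypothetical:

    import math

    D50 = (96.42, 100.0, 82.49)  # reference white (CIE XYZ, Y = 100)

    def xyz_to_lab(xyz, white=D50):
        # Standard CIE XYZ -> L*a*b* conversion.
        def f(t):
            return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
        fx, fy, fz = (f(v / w) for v, w in zip(xyz, white))
        return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

    def de76(lab1, lab2):
        return math.dist(lab1, lab2)  # Euclidean distance = CIE76 dE

    meas_white = (95.8, 99.5, 80.1)  # hypothetical measured media white

    def normalise(xyz):
        # Scale XYZ so the measured white maps exactly onto D50, i.e.
        # assume complete viewer adaptation to the print's white point.
        return tuple(v * w / mw for v, w, mw in zip(xyz, D50, meas_white))

    ref_xyz = (30.0, 40.0, 20.0)   # hypothetical target patch
    meas_xyz = (29.8, 39.9, 19.6)  # hypothetical measured patch

    print("absolute dE:   %.3f" % de76(xyz_to_lab(ref_xyz), xyz_to_lab(meas_xyz)))
    print("normalised dE: %.3f" % de76(xyz_to_lab(ref_xyz), xyz_to_lab(normalise(meas_xyz))))

With normalisation, the constant offset between the measured and reference white points is factored out of every patch, which is presumably the kind of difference seen when the flag is removed.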