Lars Tore Gustavsen wrote:
Trigger instrument switch or any key to start, Esc, ^C or Q to give up: There is at least one patch with an very unexpected response! (DeltaE 82.860992)
As Gerhard indicated, if no previous profile is provided to targen, it uses a default model of the device response to generate the "expected" guide values. These are used for a few things: detecting a wrong strip read, detecting the scan direction (i.e. for the i1), and warning of other possible errors. Since the default model may be very far from the actual device's response, it can erroneously trigger the type of message you see above. Perhaps I will change things to suppress these messages when the default model is used.
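For illustration, the "unexpected response" check amounts to comparing the measured patch value against the model's guide value with a delta E metric. A minimal Python sketch using the plain CIE76 (Euclidean L*a*b*) formula — not necessarily the exact formula Argyll uses internally:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 delta E: Euclidean distance between two CIE L*a*b* values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# A default device model that is far from the real response can make this
# distance large even when the strip was read correctly.
expected = (50.0, 70.0, 65.0)   # guide value from the (default) model
measured = (40.0, 10.0, 20.0)   # hypothetical actual reading
print(delta_e_76(expected, measured))
```

If the model is accurate, a large delta E here really does indicate a misread strip; with the default model it may just reflect model error.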
profile check complete, peak err = 7.405302, avg err = 1.056087
A not unreasonable result.
I then played with a text utility and deleted all the measurements for row F from the .ti3 file. I got this result when I made the profile without the F row:

profile check complete, peak err = 7.345448, avg err = 1.044123
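For anyone wanting to script this rather than hand-edit, a minimal Python sketch that drops one row's data lines from a .ti3 file. It assumes sample IDs like "F01" in the first column of the BEGIN_DATA/END_DATA section; note that a real edit should also update the NUMBER_OF_SETS count, which this sketch leaves alone:

```python
def drop_row(ti3_text, row_letter):
    """Drop data lines whose sample ID starts with row_letter.

    Only lines between BEGIN_DATA and END_DATA are considered; header
    lines (including NUMBER_OF_SETS, which is NOT adjusted here) pass
    through unchanged.
    """
    out = []
    in_data = False
    for line in ti3_text.splitlines():
        tok = line.strip().split()
        if in_data and tok and tok[0] != "END_DATA" \
                and tok[0].startswith(row_letter):
            continue  # skip this measurement line
        out.append(line)
        if tok and tok[0] == "BEGIN_DATA":
            in_data = True
        elif tok and tok[0] == "END_DATA":
            in_data = False
    return "\n".join(out)
```

Usage would be along the lines of reading the .ti3, calling `drop_row(text, "F")`, and writing the result back out.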
Is there a reason you deleted this row ? Was it the row with the highest (self fit) delta E ?
I guess this means that this patch is skipped in the profile creation? I tested the profile on a PDI target, and I am actually very pleased with what I saw. I just wonder what can be wrong with my target. Can there be a patch outside the printer's gamut?
There's nothing wrong with your target. This is quite usual. Since Argyll uses a fitting algorithm that balances smoothness against fit to the incoming point set, there will be points that are some distance from the smoothed surface. The balance between smoothness and fit can be changed if you want to, but the default parameters are reasonably well tuned to give a result that is a good approximation to the underlying device response. For instance, if you reduce the -r parameter below its default of 0.5, you will tilt it more in the direction of a tighter fit, at the cost of worse smoothness, and at the (possible) cost of a poorer fit to the underlying device response, since less sampling noise will be averaged out.

One of the contradictions of trying to create profiles with a small number of sample points (and 500 is quite small; even 3000 isn't very big compared to the space being sampled) is that when there is noise in the readings (and there always is in the real world, due to printing and instrument inconsistency), a fit that is closer to the actual underlying device response may well be a poorer fit to the measured points, and vice versa: a fit that is as tight as possible to the measured points is often a poorer match to the underlying device response.
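The noise-versus-smoothness point can be illustrated numerically. In this toy 1-D sketch (nothing to do with Argyll's actual fitting code), a neighbourhood-averaging "fit" to noisy samples is farther from the measured points than the raw data is, yet typically closer to the underlying response; the window size plays a role loosely analogous to the -r parameter:

```python
import math
import random

random.seed(0)

def true_response(x):
    """Stand-in for the underlying (unknown) device response."""
    return math.sin(x)

xs = [i * 0.2 for i in range(50)]
# Measurements = true response + printing/instrument noise.
measured = [true_response(x) + random.gauss(0.0, 0.1) for x in xs]

def smooth(vals, r):
    """Average each point with r neighbours each side (bigger r = smoother)."""
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - r), min(len(vals), i + r + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

truth = [true_response(x) for x in xs]
fitted = smooth(measured, 3)

print("smoothed fit vs measured points:  ", rms(fitted, measured))
print("raw measurements vs true response:", rms(measured, truth))
print("smoothed fit vs true response:    ", rms(fitted, truth))
```

The smoothed curve has a nonzero residual against the measured points, but a smaller error against the true response than the raw measurements do, because the averaging cancels some of the noise.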
I then tried to identify this patch. If I look at the profcheck output for row F, the one with the highest delta E is patch 6:

Profile check complete, errors: max. = 7.405302, avg. = 1.056087, RMS = 1.331468
[0.571890] 1.000000 0.000000 0.000000 -> 50.490826 70.709652 65.121628 should be 50.464170 70.820274 64.561172
Yes, it's often the b* that has the highest error, since it is the vector that is most sensitive to errors. Because of this Argyll actually uses slightly different smoothing parameters for the a* and b* planes, compared to the L* plane.
However, icc_examin gives me this result for the profile:

averaging deviations (dE CIE*Lab): 36.5161, maximum: 67.1216, minimum: 0.739298
(dE CIE 2000) averaging: 14.9649, maximum: 21.93, minimum: 0.670952

and for patch 6: dE Lab 46.638, dE2000 12.9499. Take a look at http://www.mulebakken.net/div/iccexamin.html for a more readable, cut-down icc_examin report for row F only.
Sorry, I'm not familiar with icc_examin. I'd take a look at Gerhard's suggestion about RGB scaling. Argyll adopts the consistent convention of device values being percentages from 0-100, while many other packages assume binary numbers for RGB devices as a special case. Graeme Gill.
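To make the scaling point concrete: Argyll's device values are percentages (0-100), while many other packages treat RGB as 8-bit binary numbers (0-255). Feeding one convention into a tool expecting the other inflates or deflates every value by a factor of 2.55. A trivial conversion sketch (hypothetical helper names):

```python
def to_percent(v8):
    """Convert an 8-bit channel value (0-255) to Argyll's 0-100 convention."""
    return v8 / 255.0 * 100.0

def to_8bit(pct):
    """Convert a 0-100 percentage back to the nearest 8-bit value."""
    return round(pct / 100.0 * 255.0)

print(to_percent(255))  # 100.0
print(to_percent(128))  # roughly 50.2
```

If icc_examin assumed 0-255 input where the data was actually 0-100 percentages, a scaling mismatch like this could account for very large reported delta E values.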