Roberto Michelena wrote:
Building a PCS->device table from target readings (device values) essentially discards the accuracy of the original data and replaces it with interpolated (thus less accurate) data situated at regular intervals in PCS space (which the readings were not). As you don't know in advance which points in PCS space you'll need to render, you just select a "grid" and render it. Afterwards, when you want to render some PCS colors to device space, you interpolate between those grid points, which were themselves interpolated from device-space readings: a double loss of precision. If instead you keep the original readings (a device->PCS table) and, when you want to find a device value for an original PCS point, you do a reverse interpolation (I know what I want to say, but doubt this is the proper wording!) in the device space, you have much more precision. That's what the Imation CFM did (although I never saw evidence of better quality!), and that's one of the options of the Argyll CMM.
Well, technically no: Argyll always has "one level of lost precision". There are ways of interpolating scattered data (typical of chart readings) directly (e.g. Delaunay tessellation, inverse distance techniques, etc.), and in some situations (typically with RGB or CMY devices) it might be possible to measure every value of a regular test grid, allowing the clut table to be filled directly.
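As an aside, one of the scattered-data techniques mentioned above, inverse distance weighting (Shepard's method), can be sketched in a few lines. The sample points and values below are purely hypothetical, and real profiling machinery is far more elaborate; this just shows the basic idea of evaluating scattered readings at an arbitrary query point:

```python
import math

def idw(samples, query, power=2.0):
    """Shepard inverse-distance-weighted interpolation over scattered
    (point, value) samples, where 'point' is a tuple of device coordinates."""
    num = 0.0
    den = 0.0
    for point, value in samples:
        d2 = sum((p - q) ** 2 for p, q in zip(point, query))
        if d2 == 0.0:
            return value  # query coincides with a sample point: reproduce it exactly
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

# Hypothetical scattered readings of one PCS channel over RGB device space
samples = [((0.0, 0.0, 0.0), 0.0),
           ((1.0, 0.0, 0.0), 0.3),
           ((0.0, 1.0, 0.0), 0.6),
           ((1.0, 1.0, 1.0), 1.0)]

print(idw(samples, (0.0, 0.0, 0.0)))  # reproduces the reading exactly: 0.0
print(idw(samples, (0.5, 0.5, 0.5)))  # a weighted blend of all four readings
```

Note that the interpolant passes through every sample exactly, which is precisely the property discussed below: any measurement error in the samples is reproduced exactly too.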
It may seem that such approaches would give the best, most accurate results, but in practice this may not be the case. The reasons are inaccuracy in the measurements, sampling "noise", and not using "typical device behaviour" information.
Direct interpolation techniques typically behave somewhat simplistically between the sample points, and represent the sample points exactly.

Measurement inaccuracy: if the sample points have some inaccuracy (which will almost certainly be the case), then this noise is represented exactly. An interpolation technique that smoothes the sample data slightly may be able to reduce the inaccuracy "noise", as well as giving a more pleasing result due to better smoothness.

Sampling "noise": the location of the sampling points, and the (often) linear or arbitrary interpolation between them, causes a form of "stepping" or unrealistic transition behaviour between the sample points, resulting in a sort of sampling "noise".

Typical device behaviour: the relatively fixed or arbitrary interpolation between the sampling points ignores any knowledge about the typical, reasonable or desirable behaviour of the device between sampling points.

In addition, there are issues with trying to get reasonable extrapolated results, something that often crops up in scanner profiling.
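The benefit of slight smoothing can be illustrated with a toy one-dimensional case: a piecewise-linear interpolant reproduces the measurement error exactly, while a least-squares fit averages it out. The readings below are invented, with deliberate errors at two points, and a straight-line fit stands in for the gentler smoothing a real profiler would use:

```python
# Noisy readings of a device channel that is actually linear (y = x);
# the 0.27 and 0.52 values carry deliberate measurement error.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.27, 0.52, 0.73, 1.0]  # true values: 0.0, 0.25, 0.5, 0.75, 1.0

def exact_lerp(x):
    """Piecewise-linear interpolation: reproduces every sample, noise included."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x out of range")

def smooth_fit(x):
    """Least-squares straight line through all samples: averages the noise out."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
    return my + slope * (x - mx)

# The exact interpolant keeps the full 0.02 error at x = 0.5;
# the smoothed fit lands much closer to the true value of 0.5.
print(abs(exact_lerp(0.5) - 0.5), abs(smooth_fit(0.5) - 0.5))
```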
In practice, therefore, I have found it better to suffer one loss of precision in resampling and interpolating from the measured sample points (which can then be unconstrained in their location, adding flexibility), while gaining a better result overall. It's also a necessary step in representing the device behaviour in ICC profile format. In practice, the per-channel device and PCS curves can be "extracted" from the scattered sample data before generating the regular grid interpolation.
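As a toy illustration of extracting a per-channel curve before building the grid: suppose one channel's neutral readings happen to follow a pure power law. Fitting the exponent first leaves only a near-linear residual for the clut grid to model. The data and the simple gamma model here are my own invention, not what any real profiler actually fits:

```python
import math

# Hypothetical neutral-axis readings of one channel; this invented device
# happens to behave exactly like a gamma-2.0 curve.
pairs = [(0.25, 0.0625), (0.5, 0.25), (0.75, 0.5625), (1.0, 1.0)]

def fit_gamma(samples):
    """Least-squares gamma exponent in log-log space (fit through the
    origin): PCS ~= device ** gamma."""
    pts = [(math.log(d), math.log(p)) for d, p in samples if 0.0 < d < 1.0]
    return sum(ld * lp for ld, lp in pts) / sum(ld * ld for ld, _ in pts)

gamma = fit_gamma(pairs)
# With the per-channel curve pulled out, the residual device->PCS mapping
# is trivial, so a coarse regular grid can represent it with little error.
residuals = [p - d ** gamma for d, p in pairs]
print(gamma, max(abs(r) for r in residuals))
```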
Yes, the creation of the B2A table represents a further loss of accuracy, which direct inversion of the A2B table avoids (icclink -G etc.).
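The direct-inversion idea can be illustrated on a single channel: instead of fitting a separate reverse table, search the (monotonic) forward lookup for the device value that lands on the wanted PCS value. The power-law forward function here is just a stand-in for an A2B lookup, and bisection stands in for whatever search a real CMM uses:

```python
def forward(d):
    """Stand-in for an A2B lookup on one channel: monotonic device -> PCS."""
    return d ** 2.2

def invert(target, lo=0.0, hi=1.0, iters=40):
    """Find the device value whose forward lookup hits 'target' by bisection,
    inverting the forward table rather than consulting a separately fitted
    reverse table (and so avoiding that table's extra loss of accuracy)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if forward(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

d = invert(0.5)
print(d, forward(d))  # forward(d) recovers 0.5 to high precision
```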
Graeme Gill.