> It may seem that such approaches would give the best,
> most accurate results, but in practice this may not
> be the case. The reasons are inaccuracy in the measurements,
> sampling "noise", and not using "typical device behaviour"
> information.

I used to judge the quality of a profile's A2B table by its correspondence with the original sample data; furthermore, since the sample data sometimes falls exactly on the gridpoints (an evenly spaced target), why shouldn't the A2B table be just a dump of that data?

I understand "inaccuracy in the measurements"; presumably, making four or more measurements of the same target and then doing an intelligent average (discarding stray samples; see the sketch in the P.S. below) should get rid of it. But what do you mean by "sampling noise"?

For the "typical device behaviour" part I'm a little more skeptical. When building an A2B for an offset press or a laminate proof, some smoothness is to be expected (they're analog processes), so the samples should exhibit reasonable behaviour, and you should correct them when they don't (like PrintOpen's "automatically correct measurements"; the second sketch below shows the sort of check I mean). But nowadays, what can you call typical in inkjets? With ink and paper technology ever changing (UltraChrome inks, gloss differential, swellable papers, CMYKRGB printers, variable drop sizes, etc.), there is very little that can be assumed about them, and smoothness can't be taken for granted.

So doesn't it make more sense to just leave the data as-is, maybe after averaging several prints and/or measurements?

--
Roberto Michelena
Infinitek
Lima, Peru
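
P.S.: to make concrete what I mean by an "intelligent average", here is a minimal sketch in Python/numpy. The median-based rejection and the 3.0 threshold are my own guesses at a sensible scheme, not any product's actual algorithm:

    import numpy as np

    def robust_average(samples, max_de=3.0):
        # Average repeated Lab readings of one patch, discarding
        # strays first: anything farther than max_de (Euclidean Lab
        # distance, roughly delta-E*ab 1976) from the per-channel
        # median is dropped before taking the mean.
        m = np.asarray(samples, dtype=float)
        center = np.median(m, axis=0)              # robust centre
        de = np.linalg.norm(m - center, axis=1)    # per-reading distance
        keep = de <= max_de
        if not keep.any():                         # all strays? keep all
            keep[:] = True
        return m[keep].mean(axis=0)

    # four readings of one patch; the last one is a bad read
    readings = [[50.1, 20.3, -10.2],
                [50.0, 20.5, -10.0],
                [49.9, 20.2, -10.3],
                [55.0, 25.0,  -5.0]]
    print(robust_average(readings))   # mean of the first three only

The point is just that stray readings get rejected before averaging, so one bad strip read doesn't contaminate the table.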
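
P.P.S.: and for the smoothness correction, this is the sort of check I imagine a PrintOpen-style "automatically correct measurements" performing; it is purely my guess at the technique, shown on a single-ink ramp:

    import numpy as np

    def smooth_ramp(row, tol=3.0):
        # Flag interior samples of a one-ink ramp that sit more than
        # `tol` (Lab distance) from the midpoint of their two
        # neighbours, then pull them back to that midpoint.  The
        # tolerance is an assumption, not a published value.
        row = np.asarray(row, dtype=float).copy()
        mid = (row[:-2] + row[2:]) / 2.0           # local linear trend
        dev = np.linalg.norm(row[1:-1] - mid, axis=1)
        bad = np.nonzero(dev > tol)[0] + 1         # shift to row indices
        row[bad] = mid[bad - 1]                    # correct the strays
        return row, bad

    # an L* ramp with one sample clearly off the local trend
    ramp = [[95, 0, 0], [85, 0, 0], [80, 0, 0], [65, 0, 0], [55, 0, 0]]
    fixed, flagged = smooth_ramp(ramp)
    print(flagged)   # [2]: L*=80 is 5 away from (85+65)/2 = 75

My whole question is whether this kind of correction is still safe to apply when the device gives you no reason to expect smoothness in the first place.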