I'm giving a lot of thought lately to the subject of optimum targets. For example, I made a profile from almost 6000 patches of typical device-value iterations (target generated from Gretag's MeasureTool, which I believe is equivalent to using targen's -m parameter alone). Despite the large number of patches, when I plot the profile I see what I believe to be an artifact in the saturated greens. I've plotted it in various applications (ColorThink, ColorSync Utility, Monaco, etc.) and tried several profilers (Argyll, ProfileMaker, PerfX).
Then I passed the "P2P19" target (the one used by the Gracol7 folks) through the profile in order to get Lab values; it contains gray, primary, and secondary ramps. When plotting these points it becomes obvious why the green area plotted so badly: the "hooks" formed by the green ramp are terrible, so even with a 6000-patch target the sampling in that part of the space is insufficient to account for the curvature. Furthermore, it's quite likely that the resolution of the profile's grid is insufficient in that zone!
Oh, and this printer had already been "linearized" by BestColor (v4.x), by densities.
So, how does one work around this? If we want to represent such curvature, it's not enough to add more and more patches to the target; we also need to address the grid-resolution problem, and since we can't have a variable-resolution grid in the pcs->device table, the only way I can see is proper linearization. But would it be enough to linearize the primaries? Ideally we would want to linearize the secondaries too! So maybe a combination of two profiles would work: an abstract one (or, more properly, a device link) to perform such multicolor linearization, plus another one created from a target printed through that linearization? Or maybe it all boils down to how the linearization is performed?
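To show what I mean about grid resolution, here's a quick sketch in pure Python. The "hooked" response curve is invented (just a smooth ramp with a sharp bump standing in for the green hook, not measured data), but it shows how piecewise-linear interpolation error behaves at typical CLUT grid sizes when the curvature is concentrated in a small region:

```python
# Sketch: why a coarse CLUT-style grid under-samples a sharply curved
# response.  hooked_response() is hypothetical, standing in for the
# green ramp's hook near saturation; it is not measured data.
import math

def hooked_response(t):
    """Smooth over most of the range, then curves hard near t = 0.9."""
    return t + 0.3 * math.exp(-((t - 0.9) ** 2) / 0.002)

def max_interp_error(n_grid, n_test=2001):
    """Max error of piecewise-linear interpolation on an n_grid-point grid."""
    grid_vals = [hooked_response(i / (n_grid - 1)) for i in range(n_grid)]
    worst = 0.0
    for j in range(n_test):
        t = j / (n_test - 1)
        i = min(int(t * (n_grid - 1)), n_grid - 2)   # interval index
        frac = t * (n_grid - 1) - i
        approx = grid_vals[i] * (1 - frac) + grid_vals[i + 1] * frac
        worst = max(worst, abs(approx - hooked_response(t)))
    return worst

for n in (9, 17, 33, 65):        # typical CLUT grid sizes
    print(n, round(max_interp_error(n), 4))
```

The error only drops once the grid step gets smaller than the width of the hook, which is exactly the problem: the whole grid has to be made fine just to serve one small region, unless the device space is first straightened out by linearization.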
I've seen RIPs linearize by density; I find that crude, given that density assumes a constant-hue colorant, which isn't true of current inkjets, and even less so when they combine Lc and C (or Lm and M) into one channel value. Other RIPs linearize by L* value; and what on earth do they do with yellow, whose lightness changes are so small they could be swamped by instrument error? I would find it more appropriate to linearize by first derivative: a constant dE between each step, regardless of whether it's caused by a change of lightness, hue, chroma, or any combination of them. But that would fail to properly follow the curves and hooks, where it wouldn't place points closer together as it should. So what, second derivative? I've seen one or two RIPs claiming "spectral linearization"; what they mean by that, I don't know. I tried to find ink coverage by adding and subtracting spectra (something like sample_spectrum = paper_spectrum*paper_area + solid_spectrum*ink_area, where paper_area + ink_area = 1); I even tried adding a third area, a "fringe area". No way; nothing fits. I guess the measurement errors in individual spectral bands are potentially unmanageable.
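For the record, that mixing equation is essentially the Murray-Davies model, and the usual explanation for it "not fitting" is optical dot gain, which the Yule-Nielsen n-exponent is meant to absorb. Here's a small pure-Python sketch of the fit; the spectra are invented for illustration, and the n parameter is my guess at a fix, not something from the original experiment:

```python
# Sketch of the one-ink spectral mixing model from the post:
#   sample = paper * (1 - a) + solid * a
# solved for area coverage a by least squares across all bands.
# The spectra below are invented.  n > 1 applies the Yule-Nielsen
# correction (fit in reflectance^(1/n)), the usual remedy when a
# plain Murray-Davies fit doesn't match real prints.

def fit_area(sample, paper, solid, n=1.0):
    """Least-squares area coverage in [0, 1]; n > 1 enables Yule-Nielsen."""
    s = [r ** (1.0 / n) for r in sample]
    p = [r ** (1.0 / n) for r in paper]
    k = [r ** (1.0 / n) for r in solid]
    num = sum((si - pi) * (ki - pi) for si, pi, ki in zip(s, p, k))
    den = sum((ki - pi) ** 2 for pi, ki in zip(p, k))
    return min(1.0, max(0.0, num / den))     # clamp to physical range

# Invented 10-band reflectance spectra (roughly 380-730 nm):
paper = [0.85, 0.87, 0.88, 0.88, 0.89, 0.89, 0.88, 0.88, 0.87, 0.86]
cyan  = [0.60, 0.65, 0.55, 0.40, 0.25, 0.12, 0.08, 0.07, 0.10, 0.15]

# Synthesize a 40% tint with the same model, then recover the area:
tint = [p * 0.6 + k * 0.4 for p, k in zip(paper, cyan)]
print(round(fit_area(tint, paper, cyan), 3))   # recovers 0.4
```

On synthetic data it recovers the coverage exactly, of course; on real measurements the per-band residuals after the fit would at least tell you whether the misfit is noise-like (instrument error) or systematic (dot gain, fringe effects).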
I guess one solution is to forgo the typical profile and just use icclink -G; that would at least take care of the grid-resolution problem. But the issue of how to properly sample highly curved areas of the perceptual space remains at large. If I still had the math skills I had in college (I'm only 40, but unexercised in math) I would dream up a space transformation (a Jacobian?) to make device space map more regularly into perceptual space. But I guess a good linearization method should do. What's the winner?
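In case it helps the discussion, the constant-dE idea is easy to prototype: accumulate dE76 along a measured ramp, then invert the cumulative curve so equal output steps cover equal dE. A pure-Python sketch with an invented yellow-ish ramp (where L* barely moves, which is exactly where L*-based linearization falls apart):

```python
# Sketch of constant-delta-E linearization: given Lab measurements of a
# device ramp, find device values whose measured colors are equally
# spaced in dE76 along the ramp.  The ramp data below is invented.

def de76(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def equal_de_curve(ramp_lab, n_out=11):
    """Device values in 0..1 giving equal-dE steps along the measured ramp."""
    # Cumulative arc length in Lab along the ramp.
    cum = [0.0]
    for prev, cur in zip(ramp_lab, ramp_lab[1:]):
        cum.append(cum[-1] + de76(prev, cur))
    total = cum[-1]
    n_in = len(ramp_lab)
    out = []
    for j in range(n_out):
        target = total * j / (n_out - 1)
        # Find the segment containing this arc-length target and
        # linearly interpolate the device value within it.
        i = 0
        while i < n_in - 2 and cum[i + 1] < target:
            i += 1
        seg = cum[i + 1] - cum[i]
        frac = 0.0 if seg == 0 else (target - cum[i]) / seg
        out.append((i + frac) / (n_in - 1))
    return out

# Invented yellow ramp: L* nearly flat, b* front-loaded, so the equal-dE
# device values should cluster toward the top of the ramp.
ramp = [(95 - t, -2 * t / 10, 8 + 80 * (t / 10) ** 0.5) for t in range(11)]
print([round(v, 3) for v in equal_de_curve(ramp, n_out=5)])
```

This is only the first-derivative criterion, of course; it says nothing about placing extra points where the ramp hooks, which would need the curvature (second-derivative) term as well.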
If anyone is interested in taking a look, the related files (measurements, references, profile) are at: http://idisk.mac.com/rmichelena-Public (all files that are not "Gracol" are related to this).
--
Roberto Michelena
Infinitek
Lima, Peru