Roberto Michelena wrote:
This is weird... how can you do correct perceptual mapping if you don't have a clear idea of your gamut boundary? And how can you define your gamut boundary without sampling it?
Well, it's not that the gamut boundary is unknown, it's just that it is "estimated", rather than "measured". Now, I quoted those two words because the difference is actually one of degree, rather than kind. There is inaccuracy even at the sample points, but one can reasonably expect that the inaccuracy will be slightly higher away from the sample points, where the correspondence between device value and color value is being interpolated. The distribution algorithm has tried to minimize the interpolation errors.
An interesting aspect to keep in mind is that the device's gamut is not always at the edge of the device value range (although it almost always is). Sometimes a device response will "saturate" near the edges, and things can get very complicated near black for a CMYK device. Given the sampling involved in creating the profile, and then the sampling involved in creating the gamut boundary, there are sources of inaccuracy everywhere.
Perhaps the strongest criticism of the "farthest point" algorithm as it currently stands is that it systematically arranges things so that the edges of the gamut all share the poorest accuracy of any point in the device model. Given that the gamut boundary is an important feature, this might be a good reason to change the algorithm so that it doesn't work the way it currently does.
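To make this concrete, here is a toy sketch (not targen's actual OFPS code), using a regular cell-centred grid as a stand-in for an interior-spread sample distribution in a 2-D "device space". It measures the distance from every location to its nearest sample, and shows that the worst-case distance on the gamut edge equals the worst case anywhere:

```python
import numpy as np

n = 8  # 8x8 cell-centred samples in a toy 2-D "device space" [0,1]^2
ax = (np.arange(n) + 0.5) / n
samples = np.array([(x, y) for x in ax for y in ax])

# Dense evaluation grid over the unit square.
g = np.linspace(0.0, 1.0, 201)
gx, gy = np.meshgrid(g, g)
pts = np.column_stack([gx.ravel(), gy.ravel()])

# Distance from every evaluation point to its nearest sample.
d = np.min(np.linalg.norm(pts[:, None, :] - samples[None, :, :], axis=2), axis=1)

on_edge = (pts == 0.0).any(axis=1) | (pts == 1.0).any(axis=1)
print(d.max())           # global worst-case distance (~0.0884)
print(d[on_edge].max())  # worst case on the edge: the same value
```

The edge attains the global worst-case error because no sample lies on it, which is the criticism above.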
Note that even if the sample points were spread to the edges, there would still be points on the edges as far away from the closest sample point as they are currently (actually, slightly further away, because the point density will have been reduced slightly in order to make the sample points touch the edges).
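A small 1-D worked example of this point, with ten samples along one device axis: a cell-centred spread and an edge-touching spread have the same number of points, but stretching the points to the endpoints widens the gaps between them, so the worst-case distance to the nearest sample actually grows slightly:

```python
import numpy as np

n = 10  # samples along one device axis (illustrative)

centered = (np.arange(n) + 0.5) / n  # interior spread, edges untouched
inclusive = np.arange(n) / (n - 1)   # spread so samples touch 0 and 1

def worst_gap(samples):
    """Worst-case distance from any point in [0,1] to its nearest sample."""
    s = np.sort(samples)
    interior = np.max(np.diff(s)) / 2.0      # midpoints between neighbours
    return max(s[0], 1.0 - s[-1], interior)  # also check the two edges

print(worst_gap(centered))   # 0.05   : edges are exactly half a cell away
print(worst_gap(inclusive))  # ~0.0556: gaps widen once samples touch the edges
```

With n points, the worst case goes from 1/(2n) to 1/(2(n-1)): equally far in the gaps, and slightly further than before.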
Usually the problem with ICC standard linking lies at the gamut borders; that is something -G solves, but only if it has accurate info on that part of the space, doesn't it?
Yes, you're right, but it comes down to what accuracy is sufficient. There are errors everywhere; it's a matter of whether you can see them, and whether you can do anything about them. In playing around with the gamut mapping for the next release, my biggest problems were these: the smoothing of the instrument readings wasn't sufficient, leading to a rather bumpy gamut surface (perhaps the fact that the surface wasn't being sampled precisely contributed to this? Hmm, I'll have a think about that); and the triangulation of the surface is tricky to get right, both to make sure that ridges run in the right direction, and to ensure that the triangulation rules that eliminate holes (due to the sampled nature of the candidate gamut points) don't then cause the concave aspects of the surface shape to be lost. Add to that the fact that the gamut mapping is created from guide points, and implemented by an interpolating mapping that has to balance smoothness against precision, and there is a lot that can end up not quite as one would like. Given all these aspects, the visual result seems remarkably good!
What I'd like is something like this: fill the space very densely with -m (say, 64,000 points); delete all points over the ink limit; convert to Lab via the pre-existing profile; put a 3D grid (of cubes or tetrahedrons, whatever) in Lab space; and inside each grid compartment, delete all the points except the one closest to the center of the compartment. The exceptions would be any point that has 100% of at least one colorant, or that lies on the surface of the volume defined by all the points (the gamut volume).
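This thinning idea can be sketched roughly as follows. Everything here is a hypothetical stand-in, not Argyll code: the dense fill is random rather than targen's output, `fake_to_lab` fakes the profile's device-to-Lab lookup, and the ink limit is an arbitrary number. The surface-of-gamut exception is omitted, since it would need a real gamut-surface test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a dense targen -m style fill: random 3-channel device values.
pts = rng.random((64000, 3))

def fake_to_lab(dev):
    # Hypothetical device->Lab stand-in, NOT a real profile lookup.
    L = 100.0 * dev.mean(axis=1)
    a = 120.0 * (dev[:, 0] - dev[:, 1])
    b = 120.0 * (dev[:, 1] - dev[:, 2])
    return np.column_stack([L, a, b])

# 1. Drop points over a (hypothetical) ink limit.
ink_limit = 2.8
pts = pts[pts.sum(axis=1) <= ink_limit]

lab = fake_to_lab(pts)

# 2. Bin Lab space into cubes and keep, per cube, the point nearest
#    the cube centre.
cell = 10.0  # Lab cube size
keys = np.floor(lab / cell).astype(int)
centres = (keys + 0.5) * cell
d2 = np.sum((lab - centres) ** 2, axis=1)

keep = {}
for i, k in enumerate(map(tuple, keys)):
    if k not in keep or d2[i] < d2[keep[k]]:
        keep[k] = i
idx = np.array(sorted(keep.values()))

# 3. Exception: always keep points with ~100% of at least one colorant.
full_colorant = (pts >= 0.999).any(axis=1)
idx = np.union1d(idx, np.nonzero(full_colorant)[0])
thinned = pts[idx]
print(len(pts), "->", len(thinned))
```

The result is a set of device points roughly evenly spread in Lab, plus the full-colorant points, which is the gist of the proposal.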
You can get a pretty good space-filling/farthest-point spread using -i (body-centred cubic grid in device space) or -I (BCC grid in perceptual space). It's a little tricky to get the number of sample points to match the target, and you may want to play with the -a angle, but the result will only be very slightly worse than the default algorithm's, and the sample points will end up on the edges of the gamut. [Note that my research indicated that an even spread in device space is better than an even spread in perceptual space (counter-intuitive though this is). In verifying these things, the sampling distribution of the verification points is pretty important, which is why I added the -r option.]
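For reference, a body-centred cubic arrangement is just a simple cubic grid interleaved with a second grid offset by half a cell. A minimal sketch of such a grid over the unit device cube (illustrative only, not what targen -i actually computes; note the samples land on the cube's edges and corners):

```python
import numpy as np

def bcc_grid(n, dim=3):
    """Body-centred cubic style sampling of the unit cube: a simple
    cubic grid plus a second grid offset by half a cell, with offset
    points falling outside the cube clipped away."""
    step = 1.0 / (n - 1)
    axis = np.linspace(0.0, 1.0, n)
    corner = np.stack(np.meshgrid(*[axis] * dim, indexing="ij"), -1).reshape(-1, dim)
    centre = corner + step / 2.0
    centre = centre[(centre <= 1.0).all(axis=1)]  # clip offsets outside the cube
    return np.vstack([corner, centre])

pts = bcc_grid(5)
print(len(pts))  # 5**3 corner points + 4**3 body centres = 189
```

Getting the point count to match an arbitrary target is awkward precisely because the count only comes in these quantised steps, which is the "little tricky" part mentioned above.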
This is more or less what I was expecting from targen -c; obviously I'm still lagging behind in comprehension :) Bear with me, I've just started playing with Argyll.
Most of the targen options were for me to play with different distributions, to see what effect they had. They do seem to have an effect, but it's somewhat subtle, and hard to measure.
Graeme Gill.