robert@xxxxxxxxxxxxxxxxxx wrote:

Hi,

> Well ... exactly the same thing could be said of i1Profiler, Argyll
> (although I suppose I could look at the code, so there is a definite plus
> there), ColorThink Pro, Photoshop .... At some stage you have to trust what
> expert companies produce, and Imatest (which produces GamutVision) is a very
> reputable company in the measurement and testing of image systems.

That's not what I'm saying - I have no reason to doubt that whatever they've
implemented is working as intended, but if you don't know what is intended,
then it's hard to draw any conclusions.

The whole point is that if you say "I did X & Y with Argyll and it produced
Z", then I can figure out exactly what it's doing, and what (if anything)
that means. When you say that GamutVision measured a perceptual gamut volume
of X, I don't know what that means (there's no standard meaning), because
the actual device's gamut doesn't change with the intent. I'm sure it's
doing something (and Argyll's iccgamut can also produce various gamut plots
from the A2B and B2A tables), but to understand what that means, you need to
know exactly what it's doing.

>> If you define the source gamut as the colorspace the images are encoded in
>> (which is the assumption that happens 99% of the time), then this is very
>> easy, since there is nothing that forces an image to occupy the full gamut
>> of the space it is encoded in.
>
> OK, I understand ... when you said "occupy the source gamut" you meant
> fully occupy the source gamut.

No, I mean to fully occupy the encoding space gamut. All images must have a
gamut within the gamut of their encoding space, but not all images fully
occupy their encoding space gamut.

When the encoding space is small and is that of a display or output device,
then it is very common to have rendered the image so that it fully occupies
the encoding space gamut. This is sometimes called an "output referred"
image.
In this situation, the encoding space gamut is a good proxy for the image
gamut.

When the encoding space has a very large gamut (i.e. L*a*b*, ProPhoto,
BruceRGB to some degree, etc.), then you almost certainly don't want your
image to be rendered to fill that space, since doing so will make it look
terrible and throw away the actual look of the image. Images encoded in
large gamut spaces are often called "input referred", i.e. they are often
used to faithfully capture real world light, as it is recorded by an
instrument that sees color the same way we do. For such an image, you don't
want to use the encoding space gamut as a proxy for the image gamut, since
that would result in vastly excessive compression of typical images.

In a lot of photography circles there are recommendations to use a large
gamut space as a "working space". In theory this might be a good thing, but
beware - you probably don't want to use the large gamut working space's
gamut as a proxy for your image gamut when creating a gamut mapping to an
actual output colorspace.

> The thing is that a particular image could have 99% of colours well within
> sRGB, but with the one, critical, colour being outside of even Beta RGB,
> but within the printer gamut (a lot of 'experts' like Andrew Rodney,
> Digital Dog, recommend using ProPhoto for that reason). If sRGB is chosen
> then the critical colour gets dumbed down, if ProPhoto is chosen then all
> the other colours are dumbed down (assuming a perceptual mapping). Of
> course, it could be that your image-specific mapping would deal with that
> situation very well.

That's the only practical approach to working with images encoded in a
large gamut space that have not yet been rendered to an output space. You
can do the rendering in an automated fashion with ArgyllCMS, or you could
do it manually, or some combination of both.

> Stay within it, yes, but also get compressed in order to try to maintain
> the relationship between the colours.
Only if the encoding space is used as the definition of the image gamut. If
you create an image gamut, then colors in the image don't get compressed any
more than is needed for that particular image.

>> i.e. you expect the black to be raised, so as to not lose shadow detail
>> the way you are losing it to clipping using relative colorimetric.
>
> Well, the black point with perceptual is the same in the perceptual and
> relative mappings, so I don't see why you say that the relative mapping is
> clipping the colours more than the perceptual mapping.

Because the two tables handle out of gamut colors differently - that's why
they exist. The colorimetric table clips to the closest color. So if you
feed L* 0 into it, you get the printer black of L* 3 out:

 L* 0 in, L* 3 out (clip)
 L* 1 in, L* 3 out (clip)
 L* 2 in, L* 3 out (clip)
 L* 3 in, L* 3 out
 L* 4 in, L* 4 out
 L* 5 in, L* 5 out
 etc.

while the perceptual table doesn't clip, it compresses:

 L* 0 in, L* 3 out
 L* 1 in, L* 3.8 out
 L* 2 in, L* 4.7 out
 L* 3 in, L* 5.6 out
 L* 4 in, L* 6.5 out
 etc.

> What is happening is that greys are being raised in the perceptual mapping
> while the black point is maintained.

Which, in general, is exactly expected and what you want.

> BTW, I did have BPC on.

On what - the preview? It doesn't look like it. BPC on the preview would
re-expand the printer L* 3 back to sRGB L* 0.

> Your figures are consistent with mine, using my profile. However what I
> would like to understand is why the perceptual darks are opened up so much
> more than the relative darks. Here are some figures:
>
> Input Lab:
> 25 0 0
> LabOutPerc:
> 0.297312 0.346474 0.361547 [RGB] -> Lut -> 32.296640 -0.061477 1.196553 [Lab]

Hmm. That is a bit odd. It should be getting closer to matching 1:1 as you
approach L* 50. I'll see if I can reproduce this and figure out why it is
happening.

Graeme Gill.
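
P.S. The clip-vs-compress behaviour described above can be sketched
numerically. This is only an illustration with hypothetical helper names,
not Argyll's actual code: the real perceptual mapping uses a nonlinear
compression curve, so the simple linear remapping below won't reproduce the
exact figures quoted (it gives L* 1 -> 3.97 rather than 3.8), but it shows
the difference in kind between the two tables, and how preview BPC undoes
the black raise:

```python
# Illustrative sketch (hypothetical names): how a colorimetric table vs. a
# perceptual table treat lightness near the printer's black point, assuming
# a printer black of L* 3. The real perceptual curve is nonlinear; a linear
# remap is used here purely for clarity.

PRINTER_BLACK = 3.0  # darkest L* the printer can reproduce

def colorimetric_clip(l_in: float) -> float:
    """Relative colorimetric: out-of-gamut L* clips to the closest
    reproducible value; in-gamut values pass through unchanged."""
    return max(l_in, PRINTER_BLACK)

def perceptual_compress(l_in: float) -> float:
    """Perceptual (simplified): remap the whole 0..100 input range onto the
    printable PRINTER_BLACK..100 range, so shadow detail is compressed
    rather than discarded."""
    return PRINTER_BLACK + l_in * (100.0 - PRINTER_BLACK) / 100.0

def bpc_expand(l_out: float) -> float:
    """Black point compensation on a preview: re-expand the printer's
    PRINTER_BLACK..100 range back to 0..100, so printer L* 3 previews
    as L* 0."""
    return (l_out - PRINTER_BLACK) * 100.0 / (100.0 - PRINTER_BLACK)

for l in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"L* {l:4.1f} in -> clip {colorimetric_clip(l):4.1f}, "
          f"compress {perceptual_compress(l):5.2f}")
```

Note that the clip mapping destroys the distinction between L* 0, 1 and 2
(all become 3), while the compress mapping keeps them distinct - which is
exactly the shadow detail argument above.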