[argyllcms] Re: Profile input white not mapping to output white

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 26 Nov 2012 13:02:07 +1100

Ben Goren wrote:

Hi Ben,
        that's quite a difficult data set to make a good profile
out of.

> I can't even imagine any photographic situation where you'd have all three
> channels saturated / clipped / maxed out in the original where you'd want
> those areas to get rendered other than as pure white. Indeed, one of the big
> problems with RAW processing

I understand, but this is basically a desire for reasonable gamut
mapping/clipping behaviour. If this were a situation where a single
processing stage had all the required information, and had the job of
implementing the gamut mapping/clipping behaviour, then this desire
would be easy to achieve.

But the actual situation is messier and more real-world. The clipping
happens inside the camera, and represents a loss of the information
necessary to clip correctly, so there may be no way of behaving
reasonably in all circumstances.

For instance, there is no way to know what the input light level
is when the device (camera) returns a value of 1. Does it happen
to be exactly 1, or is it actually clipping? Even if we knew this,
what assumptions should we make about the ratio between channel
values where one or more channels are clipping?
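To make the ambiguity concrete, here's a minimal sketch (a hypothetical
linear-sensor model, not Argyll code) of why an encoded value of 1 tells
you nothing about the true scene level:

import numpy as np

def sensor_channel(scene_level):
    """Hypothetical camera channel: linear response, clipped at full scale."""
    return min(scene_level, 1.0)

# Scene intensities of 1.0, 1.5 and 3.0 all encode as 1.0, so the
# encoded value alone cannot recover the true channel ratios.
for level in (0.5, 1.0, 1.5, 3.0):
    print(level, "->", sensor_channel(level))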

Consider a situation where I have three bright light sources
facing the camera. For the purposes of this experiment assume
that they have a smooth drop off in intensity with visual radius
away from the centre of the lights. One light is D50,
one is D65 and the third is a (not completely saturated) green.

Assume the camera RGB channels are color balanced for D50 and
that the profile returns D50 for 1,1,1, and that all three light
sources saturate all camera channels at their centre (brightest)
points. What does the image look like (completely ignoring glare etc.)?

The D50 light will be the same hue throughout its disk, but its
brightness will clip at some radius. The D65 light will look slightly
blue until it reaches a radius where the blue channel clips. It will then
progress to a different hue until the next channel clips, and then
progress with a hue transition to D50 as all 3 channels get clipped.
The green light will get progressively brighter until the green channel
starts to clip, and then (assuming R & B have non-equal values) its
hue will change as it gets closer to the next channel clipping,
and then its hue will head in a different direction until the last
channel clips, when it will be D50.
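For anyone who wants to see those hue transitions numerically, here's a
small Python sketch (the camera-referred green value is made up, and
per-channel clipping at 1.0 stands in for the sensor):

import colorsys

# Hypothetical camera-referred RGB of a not-fully-saturated green,
# balanced so that equal channel values mean D50. As the light's
# intensity scales up, the channels clip one at a time and the encoded
# hue drifts toward neutral (1,1,1 == D50 in this model).
green = (0.3, 1.0, 0.5)

for gain in (0.5, 1.0, 1.5, 2.5, 5.0):
    r, g, b = (min(gain * c, 1.0) for c in green)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"gain {gain:4.1f}: RGB=({r:.2f},{g:.2f},{b:.2f})  hue={h:.3f} sat={s:.3f}")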

While the visual result will look fine for the D50 light, the results
for the other two light sources will look distinctly artificial, with
halos of different hues and an ultimately incorrect clipped
hue. Short of an "intelligent" image analyser that can make
real world assumptions about the likely image composition and infer what
the values of the clipped pixels are, there's not much you can do
about this. Processing on a pixel-by-pixel basis, you have no idea
what 1,1,1 should really map to (or 1,X,X, or X,1,X or X,X,1, or 1,X,1, etc.)

The real world solution to this, if you want to capture accurate
highlights, is to stop the camera down to avoid clipping!
i.e. to get good gamut mapping/clipping, you really need to do
it after the camera with unclipped values, so that you can
clip with constant hue.
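In other words, with unclipped values available you can scale all three
channels by the same factor instead of letting each clip independently.
A sketch of the difference (hypothetical values):

def clip_per_channel(rgb):
    """What the camera does: each channel clips independently,
    distorting the channel ratios (hue) of bright colours."""
    return tuple(min(c, 1.0) for c in rgb)

def clip_constant_hue(rgb):
    """What you can do *after* the camera with unclipped values:
    scale all channels by the same factor, preserving their ratios."""
    peak = max(rgb)
    if peak <= 1.0:
        return rgb
    return tuple(c / peak for c in rgb)

bright_green = (0.75, 2.5, 1.25)        # unclipped scene-referred value
print(clip_per_channel(bright_green))   # (0.75, 1.0, 1.0) -- hue shifted
print(clip_constant_hue(bright_green))  # (0.3, 1.0, 0.5)  -- hue preserved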

So while it's reasonable to suggest that 1,1,1 shouldn't map to some
wildly colored value, why should it map to exactly D50 rather than
the best estimate of the white balance based on the measured patches?

[ The other thing is that creating a profile that doesn't at least attempt to
  fill the gamut is really asking for extrapolation issues. If you assume
  additivity in the camera sensor, then probably the most robust profile
  for use with RAW input is to carefully linearise each channel using
  a separate calibration step (if needed), then create a matrix with no
  curves from the test chart. Extrapolation is then (by definition) going to be
  perfectly consistent. If you don't need the general 3D mapping provided
  by cLUT profiles because your device is additive, then you are much
  better off not using them. ]
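As an illustration of why the matrix approach extrapolates consistently,
here's a sketch that fits a single 3x3 matrix to (already linearised)
chart data by least squares; the patch values are made up:

import numpy as np

# Hypothetical chart measurements: linearised device RGB for each patch
# (per-channel linearisation assumed already done as a separate
# calibration step) and the corresponding measured CIE XYZ.
rgb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.5, 0.5, 0.5],
                [0.9, 0.8, 0.7]])
xyz = np.array([[0.41, 0.21, 0.02],
                [0.36, 0.72, 0.12],
                [0.18, 0.07, 0.95],
                [0.48, 0.50, 0.55],
                [0.78, 0.81, 0.78]])

# Least-squares fit of a 3x3 matrix M such that xyz ~= rgb @ M.T
fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = fit.T

# Extrapolation is consistent by construction: device values beyond the
# chart's range go through exactly the same linear transform.
print(M @ np.array([1.2, 1.2, 1.2]))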

> For a scanner or for a camera operated on a carefully-controlled copy table,
> I can understand how you might want to be wary of extrapolating beyond the
> gamut of the target. But for a camera off the copy table, I really don't
> think there's any valid assumption other than that the maximum possible
> input value should map to the profile's white point. Whatever the absolute
> color of that object, the camera saw it as pure white, and I think the
> profile should see it that way, too.

But there's no guarantee that camera 1,1,1 is D50. For one thing, the white
test patch on the test chart may not (probably does not) map to R=G=B device
values. And even if it did, you've chosen a cLUT profile, indicating that
you do not trust that the device is additive, implying that
you do not trust the same ratio of RGB values to
always have the same chromaticity. If the test patch data indicated that
the hue was not consistent down the R=G=B axis, then it would be correct
to extrapolate in the same direction as indicated by the test points for
device values beyond the RGB levels exercised by the test chart.
This seems to be partly what's happening with your test set.

I don't think it's correct to assume that brightness should clip
at the test chart white patch either, if the device values have more
headroom. That's really what the extrapolation is all about.

Your particular test case seems rather extreme though. So much of the device
response is compressed into tiny device values that (for instance)
it's almost impossible to get any sort of accuracy out of the B2A table.
This in itself probably calls for calibration before profiling, or
an approach that uses a matrix profile.

On some further investigation into the input profile extrapolation issues,
I notice three things. One is that I have introduced a serious bug
into the operation of -u: it simply isn't doing what it should in
V1.4.0. I also notice that the extrapolation code is probably
useful even in the non -u case. And lastly, I notice that
I can probably implement -u in a slightly more useful way, by
distinguishing -u absolute and relative colorimetric: having
relative colorimetric correct the hue of the white test patch, while
scaling the Y to prevent clipping. In this way -u becomes an automated
way of setting -U scale.
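As a loose sketch of how that last idea might work (the numbers, and
the rule for choosing the scale, are my assumptions here, not Argyll
internals): relative colorimetric would pin the white to the hue-correct
(D50) chromaticity, while scaling Y so extrapolated values don't clip:

# Hypothetical numbers: the profile's white (D50) and the peak Y that
# extrapolation beyond the white test patch would reach.
D50_XYZ = (0.9642, 1.0, 0.8249)
extrapolated_peak_y = 1.2   # device has ~20% headroom past the white patch

# "-u relative" idea: keep the hue-correct (D50) white chromaticity,
# but scale Y so the extrapolated peak lands at 1.0 instead of
# clipping -- effectively an automatically chosen "-U scale" factor.
scale = 1.0 / extrapolated_peak_y
white = tuple(scale * v for v in D50_XYZ)
print(f"scale={scale:.3f}, white={white}")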

I've put a snapshot of what I've been working on here:
<http://www.argyllcms.com/colprof_osx_exe.tgz>
<http://www.argyllcms.com/colprof_win32_exe.zip>

Graeme Gill.
