[argyllcms] Re: Creating a camera profile with ColorChecker Passport Please Help

  • From: Ben Goren <ben@xxxxxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Fri, 12 Jun 2015 22:28:20 -0700

On Jun 12, 2015, at 9:10 PM, Graeme Gill <graeme@xxxxxxxxxxxxx> wrote:

So the flip side is that if you want to use a matrix to 2D clut, you really
should do non-linearity correction.

I definitely plan on mapping the sensor's linearity and including that somehow
in the spreadsheet that models the entire behavior of the sensor.

I still don't know for sure what's going on in the range between 10001 and
14581. It doesn't look obviously non-linear at first glance from the
histograms, but that's the extent of the analysis I've done so far.

Typically the way to figure this out with CCDs etc. is to use a stable light
source and vary the electronic integration time and plot the curve, since
integration time can be controlled accurately. If it's set by a mechanical
shutter, then this may not be so easy on a camera.

I don't trust the shutter or the aperture to be as accurate as I want. It may
well be, but I don't yet know if it is or not.

So, the plan...is to shine a light through a couple diffusers and put a series
of apertures in the path to control the final amount of light reaching the last
diffuser. I'll cut out 40 such apertures on a computer-controlled paper cutter,
covering 13 stops in 1/3 stop intervals, which I'm hoping will be a good number
and spread of sampling points. Depending on what I discover with that test, I
might or might not decide to make more apertures to sample some part of the
range in more detail.
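
(For what it's worth, the sizing arithmetic is easy to sanity-check in a few
lines. A quick sketch of how I'm working out the aperture diameters -- the
10 mm base diameter is just a placeholder:)

BASE_DIAMETER_MM = 10.0   # placeholder size for the largest (0-stop) aperture
STEPS = 40                # 13 stops in 1/3-stop intervals, both endpoints included

for n in range(STEPS):
    stops_down = n / 3.0                # 0, 1/3, 2/3, ... up to 13 stops down
    rel_area = 2.0 ** -stops_down       # transmitted light scales with open area
    diameter_mm = BASE_DIAMETER_MM * (rel_area ** 0.5)   # area goes as diameter squared
    print("%5.2f stops  area x%.5f  diameter %.3f mm" % (stops_down, rel_area, diameter_mm))

Plotting each channel's raw values against rel_area should then make any
non-linearity (and the exact clipping point) stand out.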

...or, in other words, the blue channel is half a stop less sensitive than
the green and the red channel a stop and a third less sensitive.

Relative to what though? 'E' illuminant?

Hmpf. That's one of those questions to which I initially thought the answer
was obvious...but which just caused me to spend several minutes pacing and
working things out. Curse you!

...but I *think* the answer is, instead, D50 (or whatever your target white
point is), but only in certain circumstances...in other circumstances, perhaps,
it's much less relevant than I thought an hour ago. But I think I need to
sleep on this one. This stuff makes my brain hurt....

At the end of the day it doesn't matter - it's what the limit and
non-linearity of each channel is that matters. The color balance comes out in
the wash once it's calibrated.

Yes, but a big part of the question is how you're going to go about mapping
white points.

The traditional method of profiling is to shoot in whatever light and use chart
reference values computed from D50. Questions of metameric failure and noise
and the like aside, the resulting image appears as if it were photographed
under perfect D50 light -- even if the light instead was a crazy mixture of dim
incandescent and spiky fluorescent. But that profile is only good for that
exact illuminant (and workflow).

If instead the reference values are generated using the actual illuminant for
the scene, the profile (with usual caveats) renders the actual XYZ values of
the scene, regardless of illuminant. Build the profile in that crazy mixture
using reference values from a spectrometer measurement of the light hitting the
target, and the white patch on the target doesn't come out as D50 white but
instead as whatever crazy color the light actually is. And, if you then take
the camera outdoors, the same white patch now appears as, say, the D100 white
point if it's in open shade. Even better, a styrofoam ball in direct light gets
the D60 (or whatever) white point on the part facing the Sun and the D150
(etc.) white point on the opposite part, and the general "atmosphere" is of the
actual color of the light.
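
(To be concrete about what "reference values from a spectrometer measurement
of the light" means: each patch's reference XYZ gets computed from the
measured illuminant spectrum instead of from D50. A minimal sketch, with my
own function and variable names, assuming all the spectra are sampled on the
same wavelength grid:)

import numpy as np

def patch_xyz(illuminant_spd, patch_reflectance, xbar, ybar, zbar):
    # Reference XYZ of one chart patch under the *measured* scene illuminant.
    # All arguments are spectra on a common wavelength grid (say 380-730 nm at 10 nm).
    k = 100.0 / np.sum(illuminant_spd * ybar)      # perfect reflector comes out at Y = 100
    stimulus = illuminant_spd * patch_reflectance  # the light the patch actually reflects
    return k * np.array([np.sum(stimulus * xbar),
                         np.sum(stimulus * ybar),
                         np.sum(stimulus * zbar)])

Swap illuminant_spd for D50's spectrum and you're back to the traditional
chart reference values.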

If you want a faithful rendering of an uncommon scene -- say, a stage lit by a
red spotlight -- that's likely exactly the profile you want to use. The
traditional profiling method would involve photographing a chart in the
spotlight, which would map the red light to D50 and thus effectively remove the
red tint. A working photographer is going to either randomly fiddle with white
balance to get something "pleasing" or just go with daylight and hope for the
best. But a profile made with reference values for the illuminant the target
was shot under will accurately reproduce the actual tristimulus values of the
red spotlight scene itself, or any other scene.

You should also be able to use a profiling mechanism to map the scene
illuminant to D50, if you know what the scene illuminant actually is -- either
through actual measurement or intelligent guessing. (The scene is outdoors; a
piece of styrofoam measures the same chromaticity as D73; assume D73.) The result
would be the same as the traditional profiling approach, rendering the scene as
if it were shot under D50. Probably exactly what the typical product or fashion
photographer dreams of, even if not in so many words.
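
(One plausible way to do that mapping, rather than re-profiling, is an
ordinary chromatic adaptation transform. A sketch using the Bradford matrix,
assuming the scene white point is known as XYZ with Y normalized to 1 -- the
function name is mine:)

import numpy as np

# Bradford cone-response matrix and the D50 white point (Y normalized to 1)
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])
D50_WHITE = np.array([0.96422, 1.00000, 0.82521])

def adapt_to_d50(xyz, scene_white_xyz):
    # Bradford chromatic adaptation from the measured (or guessed) scene white to D50.
    rho_src = M_BFD @ scene_white_xyz
    rho_dst = M_BFD @ D50_WHITE
    M = np.linalg.inv(M_BFD) @ np.diag(rho_dst / rho_src) @ M_BFD
    return M @ xyz

Run every XYZ value of the scene-illuminant rendering through that and the
result should land close to what the traditional D50-referenced profile
produces directly.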

(When I want to do that sort of thing, I'll use the spreadsheet I'm building to
build a new .ti3 file; XYZ will come from D50 as usual but RGB will come from
the actual scene illuminant, whether measured or guesstimated.)
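
(Concretely, that amounts to pairing the camera RGB from the
actual-illuminant shot with the usual D50 reference XYZ and writing the pairs
out as a .ti3-style CGATS table. A sketch -- the header below is a minimal
subset of what scanin normally writes, so compare it with a real .ti3 before
feeding it to colprof:)

def write_ti3(path, rgb_rows, xyz_rows):
    # rgb_rows: per-patch camera RGB (0-100 scale); xyz_rows: D50 reference XYZ.
    assert len(rgb_rows) == len(xyz_rows)
    with open(path, "w") as f:
        f.write("CTI3\n")
        f.write('DEVICE_CLASS "INPUT"\n')
        f.write('COLOR_REP "RGB_XYZ"\n')
        f.write("NUMBER_OF_FIELDS 7\n")
        f.write("BEGIN_DATA_FORMAT\n")
        f.write("SAMPLE_ID RGB_R RGB_G RGB_B XYZ_X XYZ_Y XYZ_Z\n")
        f.write("END_DATA_FORMAT\n")
        f.write("NUMBER_OF_SETS %d\n" % len(rgb_rows))
        f.write("BEGIN_DATA\n")
        for i, (rgb, xyz) in enumerate(zip(rgb_rows, xyz_rows), start=1):
            f.write("%d %.5f %.5f %.5f %.5f %.5f %.5f\n" % ((i,) + tuple(rgb) + tuple(xyz)))
        f.write("END_DATA\n")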

And, lastly...you could simply scale the data so the white point goes from
whatever it is to R=G=B. This is how white balancing is done today. It
obviously gets the neutral axis where it would be in D50, and shifts everything
else accordingly. If the profile was built traditionally, those shifts will
have the illuminant the chart was originally photographed with baked into the
results; that's the unknown source of most of the heartache surrounding camera
profiling. But if the profile is built with the actual illuminant's reference
values, the result is a good first approximation of chromatic adaptation...and
I rather suspect that there are even better ways to map the white point that
actually take chromatic adaptation into account -- something that would
faithfully preserve the "look" of ambient incandescent (or whatever) light but
with a D50 white point.
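
(For reference, the "simply scale the data" option is a one-liner on linear
raw values. A sketch; normalizing the gains to the strongest channel is my
own choice:)

import numpy as np

def simple_white_balance(raw_rgb, white_patch_rgb):
    # Scale each channel so the measured white patch comes out with R = G = B.
    # Plain white balancing, not a real chromatic adaptation transform, so it's
    # only a first approximation to how the eye adapts.
    gains = white_patch_rgb.max() / white_patch_rgb   # per-channel multipliers
    return raw_rgb * gains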

...but I most definitely need to re-think that whole relative channel
sensitivity thing...and just when I thought I had it all figured out, too....

b&

