[argyllcms] Re: White Point

  • From: Thomas Mansencal <thomas.mansencal@xxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 09 Nov 2015 22:59:41 +0000

Hi,

Thanks to Troy for the heads-up regarding the thread!

> And I'll bet another cup of coffee that their method could be adapted to
> determining an optimal set of primaries...and another cup (not on the same
> day!) that the optimal primaries are going to be within shouting distance
> of the 450, 550, and 600 nm peaks of the standard observer, with red and
> its funky double hump being the farthest from the peak and blue being next
> farthest (to compensate for the red).

This is indeed planned; we need to find time for it, and it implies some
spectral API refactoring currently under way on our end, but the document
will certainly be revisited. In the meantime, the peak figures are easy to
sanity-check, as in the sketch below.
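
For what it's worth, the peaks can be read straight off the CIE 1931
colour matching functions. A minimal sketch with the colour package
(assuming a recent API -- the object names have moved around between
versions):

    import numpy as np
    import colour

    # CIE 1931 2-degree standard observer colour matching functions.
    cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]

    # Wavelength at the maximum of each of x_bar, y_bar, z_bar.
    peaks = cmfs.wavelengths[np.argmax(cmfs.values, axis=0)]

    # Roughly [600, 555, 445] nm -- close to the quoted figures, with
    # x_bar's secondary "funky double hump" sitting further down near 442 nm.
    print(peaks)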

> Now that we've got previously undreamt-of computational power at our
> ready disposal, I think it's time to start considering moving beyond the
> simple RGB (etc.) and other simplifications of approximations of human
> vision, and going straight for a full-out modeling.

I would like to see more movement toward spectral rendering and related
techniques, but there are still some challenges. I talk a bit about that in
the conclusion of this post:
http://colour-science.org/blog_about_rendering_engines_colourspaces_agnosticism.php

Cheers,

Thomas

On Tue, Nov 10, 2015 at 11:52 AM Ben Goren <ben@xxxxxxxxxxxxxxxx> wrote:

On Nov 9, 2015, at 6:07 AM, Hening Bettermann <hein@xxxxxxxxxxxxx> wrote:

> In landscape, however, my intention at least is to reproduce the color
> of the light that was, not to neutralise it.

Yes, that's absolutely the goal of landscape photography. What's the point
of shooting in the golden hour if you turn the golden light noonday white?

So, first: within the relatively insignificant limits of the gear, it's
entirely possible to create a photograph that, when viewed, reproduces the
same tristimulus values an observer at the original scene would have
experienced.
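
To make "tristimulus values" concrete: they're the light reaching the eye
integrated against the standard observer's colour matching functions. A
sketch with the colour-science Python package, using a spectrally flat 90%
"white card" and a 3500 K blackbody as a stand-in for golden-hour light --
both illustrative assumptions, with API names from a recent release:

    import colour

    cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]

    # Spectrally flat 90% reflector -- an idealised "white card".
    white_card = colour.sd_constant(0.9)

    # 3500 K blackbody as a rough stand-in for golden-hour sunlight.
    golden_hour = colour.sd_blackbody(3500)

    # Tristimulus values of the card as seen in the scene.
    XYZ_scene = colour.sd_to_XYZ(white_card, cmfs, golden_hour)

    # The same card under the D65 daylight illuminant, for comparison.
    XYZ_d65 = colour.sd_to_XYZ(white_card, cmfs, colour.SDS_ILLUMINANTS["D65"])

    print(colour.XYZ_to_xy(XYZ_scene))  # noticeably yellow-shifted
    print(colour.XYZ_to_xy(XYZ_d65))    # ~(0.3127, 0.3290), the D65 white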

The problem...is that human vision is much more complex than that. The
tristimulus values are just the beginning of the story. When you're in the
field with the camera, your surrounding environment is lit by the Sun of
the golden hour. It's a very rich perceptual experience...but, if you pull
out something white and look at it, you're still going to see
it as white, even though the tristimulus values for that object in that
light are much more yellow than typical. Or, if it's in shade, it's much
more blue than normal.
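
That "still looks white" effect is chromatic adaptation, and you can model
it. A minimal sketch of a von Kries-style adaptation through the Bradford
matrix, in plain numpy -- the ~3500 K white point numbers are approximate
stand-ins for golden-hour light:

    import numpy as np

    # Bradford matrix: XYZ -> approximate cone (LMS-like) responses.
    M = np.array([
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ])

    def adapt(XYZ, src_white, dst_white):
        """Von Kries adaptation of XYZ between two white points."""
        gain = (M @ dst_white) / (M @ src_white)
        return np.linalg.inv(M) @ (gain * (M @ XYZ))

    # Approximate white points (Y = 1): ~3500 K Planckian vs. D65.
    golden = np.array([1.037, 1.000, 0.522])
    d65 = np.array([0.9505, 1.0000, 1.0888])

    # A perfect white card under golden-hour light takes on the
    # illuminant's own (yellowish) tristimulus values...
    card = golden

    # ...but the adapted observer maps it right back to neutral.
    print(adapt(card, golden, d65))  # ~= d65, i.e. still seen as "white"

The neutral card maps back exactly; non-neutral colors only map
approximately, which is where the trouble starts.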

So, you take your picture, and it has recorded all those tristimulus values
(which, granted, take a fair bit of work to extract from the raw data). Now, you
look at the picture on your computer...but the computer's white point is
much different from that of the golden hour. What looked white in the
golden hour, in the context of your computer studio, now looks much
yellower than you remember.
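
The size of that mismatch is easy to put a number on with McCamy's (1992)
approximation for correlated colour temperature -- again with assumed
chromaticities, ~3500 K golden-hour light versus a D65 monitor:

    # McCamy's CCT approximation from CIE xy chromaticity coordinates.
    def mccamy_cct(x, y):
        n = (x - 0.3320) / (0.1858 - y)
        return 449 * n**3 + 3525 * n**2 + 6823.3 * n + 5520.33

    print(mccamy_cct(0.4053, 0.3907))  # ~3500 K: the scene's adapting white
    print(mccamy_cct(0.3127, 0.3290))  # ~6500 K: a typical monitor's white

That's roughly a 3000 K gap between the white your eye adapted to in the
field and the one it adapts to at your desk.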

But you plough ahead, confident that you've got the perfect workflow, and
make a print, and put it on the wall...across from the beautiful
north-facing bay window with its great light...and the colors are _really_
off now for all sorts of reasons.

There are answers, but they all involve compromises. The ideal would be to
display the image in an otherwise-darkened room without any other lighting
cues...but that's impractical. Next best would be to use a light source that's a
spectral match for the original scene...but, though theoretically possible,
that sort of thing is rather beyond practical (and affordable!) modern
technology. From there, it just goes downhill -- with the most common
presentation method these days, the Web, being about the worst imaginable.

So...don't give up, but also don't beat yourself up about it, either. Push
yourself, of course, but accept that you're not likely to achieve the goal.

b&
