[argyllcms] Re: colprof problem

  • From: Ben Goren <ben@xxxxxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Sun, 21 Jun 2015 07:20:33 -0700

On Jun 20, 2015, at 12:45 PM, Hening Bettermann <hein@xxxxxxxxxxxxx> wrote:

peak error 5.25..., average error 2.14..., fine.

That's the right ballpark, but the numbers are somewhat high. I'd suspect a
less-than-optimal workflow, especially lighting problems. You're looking for
numbers half those, ideally less than half.

The -am option assumes a linear input. I read that the Sony a7r raws are
compressed and hence not linear

I think there may be two conflicting definitions of the term "compression"
in play here.

If I remember right, Sony uses a lossy JPEG-style data compression algorithm in
their RAW files. Most manufacturers use a lossless data compression algorithm
such that, after the data is decoded, the sequence of bits that comes out is
exactly the same as what went in. Most people are most familiar with TIFF files
functioning like this. With lossy compression, the decoded data is _not_
identical to the original, but is (hopefully) not visually distinguishable
from it; this is how JPEG encoding (typically) works.
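
To make the distinction concrete, here's a trivial Python sketch of the
lossless case (zlib is just a stand-in for whatever codec a manufacturer
actually uses):

    import zlib

    # Lossless round trip: the decoded bytes are exactly the bytes that
    # went in.
    data = bytes(range(256)) * 4
    assert zlib.decompress(zlib.compress(data)) == data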

But that sort of data compression is entirely tangential to profiling; the
kind of "compression" that matters here has to do with the tone curve. With
a linear tone "curve," the pixel values are directly proportional to the
number of photons collected at the photosite. Suppose a photosite has a
"full well capacity" of, say, a million photons, after which any additional
photons just cause the accumulated electrical charge to bleed off into
neighboring photosites or the overflow circuitry. In an ideal 14-bit
camera, a full well would correspond with the maximum pixel value in the
RAW file, 2^14 - 1 = 16383. Half a million photons at that photosite, one
stop less light, would correspond with a pixel value of 8192; a quarter
million photons (two stops less) with 4096; and so on.
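
In toy Python form -- the full well capacity and bit depth here are
illustrative numbers, not actual a7R specifications:

    FULL_WELL = 1_000_000   # photons that saturate a photosite
    BITS = 14

    def linear_pixel_value(photons):
        """Map a photon count to a raw value under a linear tone 'curve'."""
        photons = min(photons, FULL_WELL)   # anything beyond full well clips
        return round(photons / FULL_WELL * (2 ** BITS - 1))

    for stops_down in range(4):
        print(stops_down, linear_pixel_value(FULL_WELL // 2 ** stops_down))
    # each one-stop halving of the light halves the pixel value:
    # 16383, 8192, 4096, 2048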

Film does not function like that. There's a characteristic S-shape to its tone
response, such that sensitivity tapers off in the highlights and shadows. In
practice, this works well for the sorts of things people tend to do with
cameras -- so well, in fact, that a common practice is to "crank the contrast,"
which exaggerates the S-shaped curve even more. The problem is that the
farther a particular image's encoding is from linear, the more out-of-whack
things get as the exposure strays from perfect. With a linear
encoding, you can just multiply or divide the values by a constant and you wind
up with the actual values you should have gotten at the scene. With anything
else, you first have to undo the non-linearity before applying the
constant...something often easier said than done, and even more often simply
not done.
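
A quick Python illustration of the difference, using a 2.2 gamma as a
stand-in for whatever non-linear curve was actually applied:

    import numpy as np

    GAMMA = 2.2
    linear = np.array([0.05, 0.10, 0.20, 0.40])   # scene values, 1 stop under
    encoded = linear ** (1 / GAMMA)               # what a gamma file stores

    exact  = np.clip(linear * 2.0, 0, 1)          # one multiply fixes linear data
    naive  = np.clip(encoded * 2.0, 0, 1)         # multiplying encoded values...
    proper = np.clip(encoded ** GAMMA * 2.0, 0, 1) ** (1 / GAMMA)

    print(exact)            # [0.1 0.2 0.4 0.8]
    print(naive ** GAMMA)   # ...pushes tones much too far once decoded
    print(proper ** GAMMA)  # decode, multiply, re-encode: matches exact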

If you want to know how linear (or not) your camera sensor's response is,
the way to test is to photograph a subject of known brightness /
reflectivity and compare the recorded values with those of the scene. The
typical approach is a standardized chart, with the Kodak Q-13 and Sekonic's
meter calibration targets being favorites for the task. I had hoped to
finish yesterday, and will likely finish today, building a device with a
light shining into one end, interchangeable apertures in the middle, and
precisely-defined brightnesses coming out the other end.
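
If you'd rather have a number than eyeball it, something like this Python
sketch works: fit the measured raw values against the known luminances in
log-log space, and a slope of 1.0 means perfectly linear. The measurements
below are made-up placeholders for your own readings:

    import numpy as np

    luminance = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])  # known patches
    raw_value = np.array([15872, 7901, 3955, 1987, 996])   # read from raw

    slope = np.polyfit(np.log2(luminance), np.log2(raw_value), 1)[0]
    print(f"log-log slope = {slope:.3f}")   # ~1.0 for a linear response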

New question: since the -am profile was linear all the way, I used it both as
an input and output profile. What can I do now to get a camera-own, linear
output profile?

The first question I'd have is, "Why would you want to do that?" Standard
practice is to convert from whatever your camera (and RAW converter) produces
to your preferred RGB working space, and linear spaces make poor working spaces.
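
To see why, count how many code values each stop gets in an 8-bit encoding.
A rough Python sketch, with a 2.2 gamma standing in for a typical
working-space curve:

    def codes_per_stop(encode, stops=8, levels=256):
        counts = []
        for s in range(stops):
            hi, lo = 0.5 ** s, 0.5 ** (s + 1)   # bounds of this stop
            counts.append(int(encode(hi) * (levels - 1)) -
                          int(encode(lo) * (levels - 1)))
        return counts

    print("linear   :", codes_per_stop(lambda x: x))
    print("gamma 2.2:", codes_per_stop(lambda x: x ** (1 / 2.2)))
    # linear leaves the bottom stops only a code value or two each, so
    # editing in a linear 8-bit space posterizes the shadows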

WRT -Zr: Why do you think I should not want it? My thought is: Better have
most colors accurate, and some very few OOG colors pushed rather than all
colors pushed with (almost) no need.

Relative colorimetric is designed to map media white points -- most
commonly because papers aren't exactly white, and especially when making
proofs on something other than the final printer and paper. Your final
print might be on something with a lot of optical brighteners and thus a
bluish paper, while your proofing printer is using a fine art paper without
optical brighteners. You can either do an absolute mapping, which lays down
a very little bit of blue ink over the entire page so the "paper white" hue
matches, or you can map the colors such that anything very light gets
twisted to the paper color rather than rendered absolutely correctly.
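
A crude Python sketch of the relative mapping -- real CMMs use a
Bradford-style chromatic adaptation rather than this naive per-channel XYZ
scaling, and the white points below are invented for illustration:

    import numpy as np

    proof_white = np.array([0.964, 1.000, 0.825])   # warm fine-art paper
    final_white = np.array([0.950, 1.000, 1.089])   # bluish, brightened paper

    def relative_map(xyz):
        """Scale so the final paper's white lands on the proof's white."""
        return xyz * (proof_white / final_white)

    near_white = np.array([0.94, 0.99, 1.07])   # light color on final paper
    print(relative_map(near_white))             # twisted toward the proof hue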

Input devices don't have media, and therefore they don't have media white
points. Plus, white balancing is typically handled by the RAW converter. In a
typical workflow, you're going to balance the RAW file to the white point of
the output working profile (typically D50). Different light sources have
different colors and the different color filters in the camera sensor have
different efficiencies; white balancing involves applying a constant to each of
the three channels (or, rather, two of the three...) such that spectrally-flat
objects in the scene have equal RGB values. In theory, there should be no
difference between absolute and relative colorimetric in such conditions...but
I could imagine inaccuracies and rounding errors creeping in.
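
In Python terms, the balancing step amounts to something like this (raw
values invented for the example; by convention the green channel is left
alone and red and blue are scaled to match it):

    import numpy as np

    gray_patch = np.array([812.0, 1450.0, 1101.0])   # mean raw R, G, B

    multipliers = gray_patch[1] / gray_patch   # [G/R, 1.0, G/B]
    print(multipliers)                         # ~[1.786, 1.0, 1.317]
    print(gray_patch * multipliers)            # equal R, G and B: neutral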

But, in practice...input devices have unbounded gamuts. Photograph something
prismatic, like a crystal ball or a diamond ring, and you'll get a sampling of,
literally, all the colors of the rainbow. Even Lab isn't big enough for all
those colors, let alone more pragmatic working spaces like BetaRGB. So, in
practice, any sort of absolute rendering is going to clip those colors and
you'll lose any hint of detail in them. But with a perceptual rendering,
especially using Argyll's image-specific gamut mapping, you'll lose saturation
but keep detail. Considering that no display and _especially_ no printer can
even remotely come close to reproducing colors that saturated, this is a good
thing.
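
Here's a toy Python sketch of that trade-off -- the gamut boundary and
roll-off curve are invented for illustration, not Argyll's actual
gamut-mapping algorithm:

    import numpy as np

    GAMUT_MAX = 100.0   # pretend chroma limit of the output gamut
    KNEE = 80.0         # start rolling off above this chroma

    def clip(c):
        return np.minimum(c, GAMUT_MAX)

    def compress(c):
        over = np.maximum(c - KNEE, 0.0)
        room = GAMUT_MAX - KNEE
        return np.minimum(c, KNEE) + room * np.tanh(over / room)

    chroma = np.array([120.0, 160.0])   # two distinct out-of-gamut chromas
    print(clip(chroma))       # [100. 100.] -- identical, detail gone
    print(compress(chroma))   # ~[99.28 99.99] -- desaturated but distinct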

Cheers,

b&
