[argyllcms] Re: Xorg video card lookup tables, tone response curves, etc

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Tue, 15 Apr 2008 14:24:25 +1000

Samer Abdallah wrote:

[ At the moment I'm not entirely convinced that using an input
  offset for the power curve is necessarily the right thing to do.
  It seems like it should be in theory, but it's hard to be sure. ]

This is interesting. It seems to me that there isn't a single right thing
to do. For an application that cares about accurate intensity display,
the important thing is to have a known, easily invertible relationship
between pixel value and intensity. In that case, it wouldn't matter
what calibration curve was chosen because the app would just use the
matching inverse to map from the desired intensity to the pixel value.
The whole Gamma curve thing was/is an approximate model of CRT
behaviour that doesn't fit once you take the black point into account
 - the problem is that people are used to the resulting colour space
(RGB values from 0 to 1) and have certain expectations of what "50%" is
supposed to look like.

It depends what the usage is. Many aspects of current systems use
non-color managed output, so the calibration curve's visual behavior
is relevant. The behavior near black will have an influence
on the 8 bit quantization issues too.

The reason I went for an input offset rather than output
offset is as follows:

Our response to light is roughly ratiometric. We see a given
brightness ratio between two levels as being subjectively
the same, irrespective of the absolute brightness involved.
This is why the perceptual curves are shaped as they are,
small changes at low levels, with larger changes at high levels,
to maintain a similar ratio between each step, spreading quantization
errors perceptually evenly.  This is what leads to a power law type curve.

So if you have a certain minimum black level imposed by the
device, there is no point in the first step being the
ratio you would use against perfect black (output offset), since such
a ratio will be too small to be seen against the actual minimum black level.
So the first step above the actual black should be the one for
that absolute level in the curve, which equates to using an
input offset.
[ I'm sure some pretty diagrams would aid my explanation!]
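
In lieu of diagrams, here is a rough numerical sketch in Python (the black
level and gamma are made up for illustration) of the difference between the
two ways of adding the offset:

gamma = 2.4
black = 0.005          # assumed minimum black level, as a fraction of white
white = 1.0

def output_offset(x):
    # Power curve computed against perfect black, actual black added on output.
    return black + (white - black) * x ** gamma

def input_offset(x):
    # Offset the input so the plain power curve itself passes through
    # the actual black at x = 0.
    bi = black ** (1.0 / gamma)
    return ((1.0 - bi) * x + bi) ** gamma

step = 1.0 / 255.0     # first 8 bit step above zero
for curve in (output_offset, input_offset):
    print(curve.__name__, "first step ratio above black:", curve(step) / curve(0.0))

# output_offset gives a first step that is a tiny fraction of the black
# level (ratio ~1.0003, i.e. invisible); input_offset gives the step ratio
# the power curve would have at that absolute level (~1.08).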

I think it would make much more sense to throw out the Gamma
calibration model and bizarre nonlinear RGB spaces and calibrate
to log-intensity instead, that is if x is the pixel value between 0 and
1, then the intensity would be

    I = exp(k * (x - 1)),  where k = -log(black_level) = log(contrast_ratio)
so to invert,
    x = 1 + log(I) / k

The k is the "Dmax" of the display is the only required
parameter. Applications that manipulate RGB pixel values should
expect these to represent log intensities, eg to double the intensity,
add log(2)/k to the pixel values. This should give a reasonably
even spread of perceptible differences across the range too.
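
For illustration, a minimal sketch of that mapping and its inverse
(assuming a made-up 1000:1 contrast ratio):

import math

black_level = 0.001                 # illustrative 1000:1 contrast ratio
k = -math.log(black_level)          # = log(contrast_ratio)

def decode(x):
    """Pixel value (0..1) -> intensity (black_level..1)."""
    return math.exp(k * (x - 1.0))

def encode(I):
    """Intensity -> pixel value."""
    return 1.0 + math.log(I) / k

# Doubling the intensity is just an additive shift of the pixel value:
x = encode(0.18)
assert abs(decode(x + math.log(2.0) / k) - 2 * 0.18) < 1e-9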

Maybe. But there is evidence that a subjectively linear response
is not a log curve. The L* curve is such an attempt, and it is a
1/3 power curve with a straight segment. An explanation I've
heard for the difference is that there is a level of "glare" within
the human eye that changes the shape of the sensor response at low
light levels (even assuming that our sensors have a pure log
response, which they do not quite have). The L* curve is a broad average though,
as there is ample evidence that we are most sensitive to level differences
at about the level of the surround, so the shape of the curve in practice
varies significantly with different surrounds.
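
For comparison, a rough sketch of the L* curve (cube root plus straight
segment) next to a pure log response scaled to the same 0..100 range; the
100:1 contrast ratio used for the log curve is just an illustrative choice:

import math

def Lstar(Y):
    """CIE L* for relative luminance Y in 0..1 (white = 1)."""
    if Y > 216.0 / 24389.0:                 # (6/29)**3, about 0.008856
        return 116.0 * Y ** (1.0 / 3.0) - 16.0
    return (24389.0 / 27.0) * Y             # straight segment, about 903.3 * Y

def log_response(Y, contrast=100.0):
    """Pure log response, clipped at the assumed black level, scaled to 0..100."""
    Y = max(Y, 1.0 / contrast)
    return 100.0 * (1.0 + math.log(Y) / math.log(contrast))

for Y in (0.01, 0.05, 0.18, 0.5, 1.0):
    print("Y=%.2f  L*=%5.1f  log=%5.1f" % (Y, Lstar(Y), log_response(Y)))

# The two differ markedly: e.g. 18% grey is about L* 50 but about 63 on
# the log scale, so a subjectively even spacing is not simply logarithmic.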

In my work with black and white negatives, I'm considering using
16bit log scale images for intermediate storage too - it simplifies
the maths and preserves information.

Sure. If you are used to working with densities, and want to be able
to scale and preserve ratios, then this makes sense.

Maybe I was using the wrong term - I just meant intensity, ie
proportional changes in XYZ values.

Roughly. Since they aren't computed this way, they won't exactly
match it though.

So does this mean that the TRCs in the profile are based only on this
sparse sampling of the colour space?

Yes.

And targen cannot sample
densely, at least along the 3 colour axes?

It can if you choose to generate such samples. But all you
will end up doing is weighting the accuracy of the individual
channel responses at the cost of the model fit in other areas,
given a particular sample budget.
Unless you have a usage that values the individual
channel responses above other areas, this is probably not a good move.

Having written code to compute
pixel values from target intensities using only the information in the
TRCs (inversion by cubic interpolation), perhaps it would be more
accurate to go via the calibration curve after all. In particular, the
TRC inversion suffered at the dark end because of the 16 bit
quantisation of the TRC entries - repeated values mean that the
TRCs are not actually invertible.

If you have such exacting requirements for a monochrome output, then using
just the calibration is probably a better approach. The .cal file is
easily parsed, and is all floating point.
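
For example, here is a rough sketch (not Argyll code) of pulling the data
block out of a .cal file and inverting one channel by linear interpolation.
It assumes the usual CGATS-style layout with a BEGIN_DATA/END_DATA block of
four columns (RGB_I RGB_R RGB_G RGB_B); the file name is hypothetical:

def read_cal(path):
    """Return the rows between BEGIN_DATA and END_DATA as lists of floats."""
    rows, in_data = [], False
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == "BEGIN_DATA":
                in_data = True
            elif tok[0] == "END_DATA":
                in_data = False
            elif in_data:
                rows.append([float(v) for v in tok])
    return rows                            # each row: [RGB_I, RGB_R, RGB_G, RGB_B]

def invert(rows, target, chan=1):
    """Input value (0..1) whose calibration output equals target on channel chan (1=R, 2=G, 3=B)."""
    for r0, r1 in zip(rows, rows[1:]):
        x0, y0 = r0[0], r0[chan]
        x1, y1 = r1[0], r1[chan]
        if y0 <= target <= y1 and y1 > y0:  # skip flat (repeated) spans
            return x0 + (x1 - x0) * (target - y0) / (y1 - y0)
    return rows[0][0] if target <= rows[0][chan] else rows[-1][0]

rows = read_cal("display.cal")             # hypothetical file name
print(invert(rows, 0.25, chan=1))          # input giving 25% output on the red curve

Working directly on the floating point values in the .cal also avoids the
repeated-value problem that the 16 bit TRC entries cause at the dark end.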

Well, for anyone who's interested, I've tried dispwin under OS X with Xorg running,
and though Xorg does not have the XFree86-VidModeExtension, it does share
the LUT with the rest of the system and is affected when dispwin is used.

Yes, I would expect at least this, but it means that you are
operating the LUTs from OS X, not via X11. (It would take
some changes to make an Argyll OS X executable that expects to operate
the test window/LUTs via X11.)

Graeme Gill.
