[argyllcms] Re: Gamma wrong for calibration curves?

  • From: william.wood@xxxxxxx
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Tue, 5 Dec 2006 20:46:16 -0500

Graeme,

Thanks for the detailed response.  I understand how it works now.  I hope
you don't mind my continued musings on this subject - it's very interesting!

> You can't ignore the black point to compute a truncated gamma curve,
> that's the whole point. The idea is that an ideal gamma curve
> response has a reading of 0 output for 0 input. Real devices
> don't behave this way. There are several ways this could
> be dealt with, one being to subtract the black level reading
> from the calculations, the other being to assume that the
> ideal curve is being truncated at the black end (ie.
> that the device is adding an unknown input black offset to the input
> value we're feeding it). The current software is working
> on the second basis.

I had naively assumed that by "truncated" you meant the target gamma curve
still passes through 0,0, even though the device cannot go that low.

Let's call these three methods the "zero offset method", the "output offset
method", and the "input offset method".

The zero offset method allows multiple monitors to be calibrated to exactly
the same brightness curve (the ideal gamma) if you match the white intensity
of the monitors, without having to match the black intensities as well
(which would reduce the dynamic range of the monitors with lower black
intensities).  The downside is that low near-black values are lost to the
extent that the black intensity is high on a given monitor.
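
Concretely, here is what I mean by the zero offset target (a rough Python
sketch; the gamma of 2.2 and the white level Yw are just placeholders of
my own):

    def zero_offset_target(x, gamma=2.2, Yw=1.0):
        # The target luminance ignores the device black entirely: the
        # ideal curve passes through (0, 0), so any input whose target
        # falls below the monitor's actual black level is crushed.
        return Yw * x ** gamma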

The output offset method shifts the ideal gamma curve up in brightness by
the amount of the black intensity and has the benefit of not wasting low
near-black values; however, unless you match black intensities, each monitor
will have its own overall brightness, dependent on its black intensity.
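
In the same terms, the output offset target would be something like the
following (again just a sketch; Yk is a placeholder black luminance of my
own, to be replaced by the measured value):

    def output_offset_target(x, gamma=2.2, Yw=1.0, Yk=0.005):
        # The ideal curve is scaled into the range [Yk, Yw], so no
        # near-black inputs are wasted, but the whole curve sits higher
        # by the amount of the monitor's black intensity.
        return Yk + (Yw - Yk) * x ** gamma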

The input offset method assumes an input offset that decreases linearly to
0 as the actual input approaches 1.  I don't understand the rationale or
benefit of this approach; it would seem to create a misshapen gamma curve?
It gives very different results from the other two methods, which are very
close to each other (for my monitor at least).
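
And here is my reading of the input offset model, which is why the 0,0
endpoint confused me (bi is the input black offset; 0.075605 is your
estimate for my monitor, and the rest of the values are placeholders):

    def input_offset_target(x, gamma=2.2, Yw=1.0, bi=0.075605):
        # The input offset shrinks linearly from bi at input 0 to 0 at
        # input 1, so the effective input is x + bi*(1 - x), and the
        # curve hits Yw * bi**gamma (the device black) at x = 0 instead
        # of passing through (0, 0).
        return Yw * (x + bi * (1.0 - x)) ** gamma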

Please see the updated spreadsheet below, where I have used your estimated
input black offset value of 0.075605 to calculate the gamma error for the
-k1 calibration curve using the "input offset method" (very good now at
0.0001), and also the error for the -k1 calibration curve with the
corresponding profile also applied - that error is now a factor of 53 worse
(0.2066 vs 0.0039 using the "zero offset method").  Doesn't this suggest an
inconsistency in approach between the calibration curve software and the
profiling software, and that the profiling software is correcting for the
calibration software, in effect setting the gamma back to what it would be
using the zero or output offset methods?  Visually this seems to be what is
happening, as the image darkens when the profile is applied.

http://spreadsheets.google.com/pub?key=pL40582sXA94lQ4iLfbf5Ww
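
For what it's worth, the "gamma error" figures in the spreadsheet boil down
to roughly this kind of check (a simplified sketch, not the exact
spreadsheet formulas; the function name and the patch-by-patch comparison
are my own):

    import math

    def effective_gamma(x, Y, bi=0.075605):
        # Invert the input offset model for an interior patch
        # (0 < x < 1): given a normalized input x and a normalized
        # measured luminance Y, solve Y = xe**g for g, where
        # xe = x + bi*(1 - x) is the offset-adjusted input.
        xe = x + bi * (1.0 - x)
        return math.log(Y) / math.log(xe)

    # The per-patch gamma error is then just the difference between
    # this effective gamma and the target gamma.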

- Bill
