[argyllcms] Re: Xorg video card lookup tables, tone response curves, etc

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 14 Apr 2008 14:03:45 +1000

Samer Abdallah wrote:
- which, if any, hardware and software combinations can result in
  a gamma table output resolution of greater than 8 bits? Can Xorg
  do this? I had a look at the xorg mga driver code but was left none
  the wiser. A related question is: does X11 manage the gamma ramps
  entirely in software, or can it ask the video driver to do it in
  hardware?

In principle this is supported on all platforms. In practice, few
combinations actually support more than 8 bits. The only ones reported
so far are the Nvidia Quadro based cards, which apparently have
10 bit/component frame buffers, exposed on Linux at least.
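
For example, on an Xorg system the ramp size can be queried through the
XF86VidMode extension. A minimal sketch (note that the ramp entries are
always 16 bit values, so the entry count only hints at the underlying
DAC precision):

    /* Query the video card gamma ramp size via XF86VidMode.
     * Build with: cc ramp.c -lX11 -lXxf86vm */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int size;

        if (dpy == NULL) {
            fprintf(stderr, "can't open display\n");
            return 1;
        }
        if (!XF86VidModeGetGammaRampSize(dpy, DefaultScreen(dpy), &size)) {
            fprintf(stderr, "XF86VidMode gamma ramp not available\n");
            return 1;
        }
        printf("gamma ramp has %d entries per channel\n", size);
        XCloseDisplay(dpy);
        return 0;
    }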

Some of the high end LCD displays have built-in tables of greater
than 8 bit depth, but the protocols are proprietary, so access to
such displays would be needed to be able to figure out how to make
this work.

1st order continuity. Going from the textual description in the dispcal
docs, dispcal adds in the black point so that, by my interpretation, the
target curve should be
    I_b(x) = b + (1-b)*I(x)
Is this correct?

Yes.
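
For example, with b = 0.01 and an ideal curve I(x) = x^2.2 (illustrative
numbers only), that gives I_b(0.5) = 0.01 + 0.99 * 0.5^2.2, or about 0.225.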

The other option is described as a gamma curve with an input offset, so
I suppose that would be something like this:

    I_b(x) = ((x+a)/(1+a))^g

where  b=(a/(1+a))^g is the target black point, so

    a = 1/(b^(-1/g) - 1)

That would be fine and quite manageable if we knew g, but the
documentation states that g is chosen such that I_b(x) matches the
relative intensity of the *ideal target gamma curve*, i.e. the
constraint is

    I_b(0.5) =  0.5^gt

where gt is the gamma specified in the -g flag. Solving this for g appears to
be quite tricky - is this really what dispcal does?

Yes. People complained that the resulting curve didn't match
the expectations set by other packages, hence the distinction
between the technical gamma (the actual power used), and the
"popular" equivalent gamma value.

Wouldn't it be much more straightforward, from the point of view of an
application that wants to achieve a certain luminance, to set g=gt?

No. Doing that created more confusion, since people want to be able
to carry their expectation of what a particular gamma looks like
portably between systems.

[ At the moment I'm not entirely convinced that using an input
  offset for the power curve is necessarily the right thing to do.
  It seems like it should be in theory, but it's hard to be sure. ]

Moving on to the profiling, no matter what the target curve during
calibration was, presumably I can get the measured response after
calibration from the profile?

Of course profiles don't change a device, they characterize it. Linking
profiles creates a means of changing things, and naturally the apparent
response will be largely dictated by the input profile chosen. Calibrating
a display helps for non-color managed applications (like window managers),
as well as creating a device response that is easier to accurately characterize.
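
To make "linking" concrete, here is a minimal sketch using LittleCMS 2
(not Argyll code; the file names are hypothetical) that builds the
input-to-display transform which actually determines the apparent
response:

    #include <lcms2.h>

    int main(void)
    {
        cmsHPROFILE in  = cmsOpenProfileFromFile("input.icc", "r");
        cmsHPROFILE out = cmsOpenProfileFromFile("display.icc", "r");
        cmsHTRANSFORM xf = cmsCreateTransform(in, TYPE_RGB_8,
                                              out, TYPE_RGB_8,
                                              INTENT_PERCEPTUAL, 0);
        unsigned char px[3] = { 128, 128, 128 }, res[3];

        cmsDoTransform(xf, px, res, 1);    /* map one RGB pixel */

        cmsDeleteTransform(xf);
        cmsCloseProfile(in);
        cmsCloseProfile(out);
        return 0;
    }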

Am I right in assuming that the red, green, and blue tone response
curves (which are tables in the profiles I've produced with Argyll, but
parametric gamma curves in the profile I produced using GretagMacbeth's
software) will tell me the luminance of the display for each of the 256
input values, assuming the corresponding video card gamma table has
been installed?

More or less. In the case of the Argyll profile they are the
per channel curve shape that minimizes the discrepancies between
the characterization model and the measured test values. The term
"luminance" has a particular meaning in color science, and no,
these curves aren't directly related to luminance, although
there may well be a strong correlation.
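
For what it's worth, the shaper curves can be read back out of a
profile and evaluated with LittleCMS 2 via the standard TRC tags, along
these lines (the profile name is hypothetical):

    #include <stdio.h>
    #include <lcms2.h>

    int main(void)
    {
        cmsHPROFILE p = cmsOpenProfileFromFile("display.icc", "r");
        cmsToneCurve *red = (cmsToneCurve *)cmsReadTag(p, cmsSigRedTRCTag);

        if (red != NULL) {
            /* modelled red channel response at each 8 bit input value */
            for (int i = 0; i < 256; i++)
                printf("%d %f\n", i, cmsEvalToneCurveFloat(red, i / 255.0f));
        }
        cmsCloseProfile(p);   /* the tag data is owned by the profile */
        return 0;
    }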

In fact, for devices with only 8 bit DACs, is it not the case that the
best thing is to use the native uncalibrated response, thereby not
wasting any gray levels due to quantisation, and use the profile to fit
the image to what the display can do?

Perhaps. It's complicated though. The overall characterization is relatively
coarse, since a few thousand points is sparse compared to the 16 million
possible device value combinations. If you sit down and work it out,
a few thousand points will on average be something of the order of
8 delta E distant from each other, meaning that there is plenty
of room to miss display response detail. The calibration on the other
hand is more detailed, using up to 128 measurements per channel,
and therefore capable of compensating for the detailed behavior
of a particular channel, and even the overall display if the
channels are largely additive (which they will be for a CRT and some LCD).
So under some conditions, a calibrated display may be more accurately
controlled than an uncalibrated one, even given the 8 bit quantization
loss, if the improvement in profile fit accuracy makes up for it.
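
(To put rough numbers on it: if the display response spans very roughly
100 delta E along each dimension, that is about 100^3 = 1,000,000 cubic
delta E of volume, so 2000 test points each cover around 500 cubic
delta E, for an average spacing of 500^(1/3), or about 8 delta E. The
figures are illustrative only.)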

And by the same argument, the next best thing is to calibrate to a gamma as
close as possible to the native gamma, again to avoid wasting colours?

Yes.

The other option I thought of for high quality gray level images on
devices with >8 bit DAC resolution is to do image-specific optimal
quantisation to an adaptive gray map with high resolution entries, then
install the map into the video card. That would give you the best
possible 256 gray levels chosen from, e.g., 1024 for 10 bit DACs.

That's the idea of the calibration system, if it was running on systems
that have more than 8 bit entries in the RAMDACs.

If the target image contained mostly subtle gradations in a narrow
dynamic range, the video card would spend its 256 colours in a narrow
range. Has anyone tried this?

Sounds feasible. I'm not sure many people have tried image specific
calibration curves though, except for very special situations.
(it used to be done a lot in presenting color images on 8 bit
frame buffers of course, where the aim was to quantize 24 bit color
to 256 representative colors and then dither).
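
For the record, here is a rough sketch of how such an image specific
gray map might be built, using equal population histogram quantiles as
a crude stand-in for a proper optimal (Lloyd-Max) quantizer. The 10 bit
source values and the shift up to 16 bit ramp entries are assumptions:

    #include <stddef.h>
    #include <stdint.h>

    /* Pick 256 gray levels (from 1024 candidates on a 10 bit DAC) so
     * that each covers an equal share of the image's pixel population,
     * write them into a 256 entry ramp, and rewrite the image as 8 bit
     * indices into that ramp. */
    void adaptive_gray_ramp(const uint16_t *img, size_t npix,
                            uint16_t ramp[256], uint8_t *out)
    {
        size_t hist[1024] = { 0 }, cum = 0;
        uint8_t level_of[1024];       /* 10 bit value -> ramp index */
        int lev = 0;

        for (size_t i = 0; i < npix; i++)
            hist[img[i] & 0x3ff]++;

        /* Close the current output level each time another 1/256 of
         * the pixel population has been consumed.  Each level's
         * representative is the last 10 bit value it absorbed (a real
         * implementation would use the bin mean instead). */
        for (int v = 0; v < 1024; v++) {
            level_of[v] = (uint8_t)lev;
            cum += hist[v];
            if (lev < 255 && cum * 256 >= npix * (size_t)(lev + 1)) {
                ramp[lev] = (uint16_t)(v << 6);  /* 10 -> 16 bit */
                lev++;
            }
        }
        while (lev < 256)
            ramp[lev++] = (uint16_t)(1023 << 6); /* top out at white */

        for (size_t i = 0; i < npix; i++)
            out[i] = level_of[img[i] & 0x3ff];
    }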

If you wanted to take the path of not calibrating the display, then it
would be advantageous for presenting greyscale output to optimize your
test chart to have a great number of test values near the neutral axis.
targen doesn't really have tools for this currently, although it does
already concentrate test values near the neutral axis. What would be
good would be to distribute test values evenly within a "tube" running
up the R=G=B axis (see the sketch below).
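
A minimal sketch of what such a tube distribution might look like
(plain rand() is used for brevity; targen's actual point placement is
considerably more sophisticated):

    #include <math.h>
    #include <stdlib.h>

    typedef struct { double r, g, b; } rgb_t;

    static double urand(void) { return rand() / (double)RAND_MAX; }

    /* Scatter n device test values inside a tube of the given radius
     * around the neutral R=G=B axis, clamped to the unit cube. */
    void neutral_tube(rgb_t *pts, int n, double radius)
    {
        for (int i = 0; i < n; i++) {
            double t = urand();       /* position along the gray axis */
            double dr, dg, db;
            do {                      /* rejection sample an offset */
                dr = (2.0 * urand() - 1.0) * radius;
                dg = (2.0 * urand() - 1.0) * radius;
                db = (2.0 * urand() - 1.0) * radius;
            } while (dr * dr + dg * dg + db * db > radius * radius);
            pts[i].r = fmin(fmax(t + dr, 0.0), 1.0);
            pts[i].g = fmin(fmax(t + dg, 0.0), 1.0);
            pts[i].b = fmin(fmax(t + db, 0.0), 1.0);
        }
    }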

One last question: does anyone know what happens in Xorg under Mac OS
X? Does it manage its own VCGT, or does it share the one managed by the
underlying Quartz display server?

Sorry, no, I've never tried running X11 on OS X. It is quite possible
that there is no access to the video LUTs with this arrangement, since
under X11 the XF86VidMode extension is used, and that may not be
available there.

Graeme Gill.
