[argyllcms] Re: Black point, gamma level

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: Simon Kirby <sim@xxxxxxxxxxxx>
  • Date: Tue, 23 Oct 2007 12:23:37 +1000

Simon Kirby wrote:

> Brightness loss: Only if I stray from the native white point, right?

Yes.

> Resolution: Yes, I see this, especially with 8 bit LUTs. :)  Do video
> cards support 16 bit value LUTs at all these days?  (eg: is this
> possibly an X limitation within Linux?)

Once upon a time, video systems used to support more than 8 bit entries
in their Video LUTs, but apparently with the takeover of the graphics
hardware industry by the "games" cards, this was lost. The Operating
System APIs permit up to 16 bit entries (which Argyll will make use of),
but little or no hardware actually implements it. Anything using DVI is
almost certainly limited to 8 bits by the DVI signalling conventions.
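To illustrate the resolution loss, here is a small sketch (not Argyll code; it assumes simple nearest-value rounding) of how distinct 16-bit LUT entries collapse to the same 8-bit code on a DVI-limited path:

```python
# Sketch (assumption: plain nearest rounding, not Argyll's actual code):
# a 16-bit LUT entry (0..65535) collapses to an 8-bit code (0..255)
# when the link, e.g. single-link DVI, carries only 8 bits per channel.

def quantize_to_8bit(value16: int) -> int:
    """Round a 16-bit LUT entry to the nearest 8-bit output code."""
    return round(value16 * 255 / 65535)

# Two distinct 16-bit corrections that become indistinguishable at 8 bits:
a = quantize_to_8bit(256)   # 256/65535, about 0.39% of full scale
b = quantize_to_8bit(384)   # 384/65535, about 0.59% of full scale
print(a, b)  # both quantize to the same 8-bit code
```

So any calibration detail finer than 1/255 of full scale is simply thrown away by the time it reaches the panel.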

> The main issue right now is that Linux has poor support for profiles.
> Most applications (web browsers, GTK, etc.) do not make use of them.
> Only a few applications watch the attribute which "xicc" sets, and they
> typically become very slow when they do.

There's no reason they should become slow, if they use a decent approach
to color management.

> I should note that I just tried beta7 (as opposed to beta6), and the
> output is now such that 0,0,0 black output from video card LUT is still
> 0,0,0.  Did something change wrt this in beta7?  The output is now:

There were a number of changes, but I wouldn't have thought this would
have changed.

> 0.0000 0.018335 0.011353 0.010989
> 3.9216e-03 0.023509 0.016135 0.016037
> 7.8431e-03 0.028616 0.020859 0.021008
>
> It is my understanding that this translates to 8-bit output integers of:
>
> 0: 0 0 0
> 1: 1 0 0
> 2: 1 1 1

Hmm. No, more like:

0: 5 3 3
1: 6 4 4

but you can't judge by the numbers alone, since many systems will have
a "flat spot" from zero, where nothing changes until the levels reach
some threshold. Naturally the calibration should start at the threshold,
to avoid the flat spot. As I hinted, there are some issues with
Beta7 in that if the white point changes during the run
(due to drift, or inaccuracy in determining whether the white
point fits within the gamut), the black point gets shifted as well,
which it probably shouldn't.
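As a quick check of that mapping, a sketch (assuming the usual conversion of fraction × 255, rounded to nearest) reproduces the integers above from the quoted .cal rows:

```python
# Sketch: convert the fractional .cal entries quoted earlier to the
# nearest 8-bit integers the video card LUT actually produces.
# Columns: input level, then R, G, B output fractions.

rows = [
    (0.0000,     0.018335, 0.011353, 0.010989),
    (3.9216e-03, 0.023509, 0.016135, 0.016037),
]

out = [(round(r * 255), round(g * 255), round(b * 255))
       for _, r, g, b in rows]
print(out)  # [(5, 3, 3), (6, 4, 4)] -- matching "5 3 3" and "6 4 4" above
```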

If you'd like to try this fix out, I've put an x86 Linux version
of dispwin at <http://www.argyllcms.com/dispwin_x86_Linux.zip>
that attempts to fix the issue you noticed. (You may
have to "chmod +x dispwin_b8" after unzipping it).

> I attached the whole .cal file, as generated with:
>
>         dispcal -K -m -g2.2 -k0.0 -yl -qu -v benq_native_b7
>
> Note that with "-qu", I cannot actually see a visible patch until
> patch number 9.  I don't think this is because it is too dark, but
> because rounding or otherwise is resulting in the output still
> being RGB 0,0,0.
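That rounding effect can be sketched with made-up numbers (a hypothetical near-black ramp, not the actual benq_native_b7 data): every patch whose output fraction stays below 0.5/255 still lands on RGB 0,0,0.

```python
# Sketch with hypothetical data: near black, many small output fractions
# all round to the same 8-bit zero, so the first patches look identical.

patch_fractions = [i * 0.0002 for i in range(20)]  # made-up ramp

first_visible = next(i for i, v in enumerate(patch_fractions)
                     if round(v * 255) > 0)
print(first_visible)  # first patch whose 8-bit output rises above zero
```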

Note that "u" in -qu stands for:

        "unbelievably ultra slow"
        "untested and unverified"
        "uselessly excessive"
:-)

Unless there is a specific reason to do something else,
I'd recommend that everyone stick to -ql, -qm, or -qh. The reason
-qm is the default is that it's generally the right choice.
-qu exists only to be able to prove it is not needed.

> Hmm.  Yes, I was assuming that LCDs (especially compared with CRTs) would
> be perfectly additive.

Given that some of them have circuitry designed to make them look like
an sRGB CRT (including 3D lookup tables), this can be far from the truth.
The cheaper displays are probably close to additive though, I guess.

Graeme Gill.
