[argyllcms] Re: Using Argyll to calibrate a Rec.709 HDTV CRT monitor

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 08 Oct 2012 15:53:51 +1100

Technical Operations wrote:
> 1. The goal was to measure the conformity of the monitor to Rec.709, and if 
> needed
> improve accuracy via the built-in hardware controls.  Profiling or any 
> outboard
> calibration was out of the question, since if anything in the world is 
> designed to
> conform to Rec.709, it's this kind of device. (And Rec.709 was created for 
> this kind of
> device.) If it can't be calibrated in the box we need to throw it out and get 
> something
> elseâ

	One of the problems with this idea is that Rec.709 is an encoding
standard, not a display standard. It assumes some ideal things, such as a
perfect zero black, which real-world displays can't achieve. So there is a
great deal of room for interpretation as to how a Rec.709-encoded source
is best displayed, taking into account the limitations of a real device
and the impact that viewing conditions have on the result.
In contrast, something like BT.1886 is a display standard.
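To illustrate what a display standard pins down that an encoding standard doesn't, here is a minimal sketch of the BT.1886 reference EOTF, which maps the encoded signal to luminance given the display's actual measured black and white levels (the 120 cd/m^2 and 0.05 cd/m^2 figures are just the example targets mentioned in this thread):

```python
# Sketch of the BT.1886 reference EOTF:
#   L = a * max(V + b, 0) ** gamma
# with a and b chosen so that V = 0 maps to the display's measured black
# level Lb and V = 1 to its white level Lw.

def bt1886_eotf(V, Lw=120.0, Lb=0.05, gamma=2.4):
    """Map a normalised signal V in [0, 1] to luminance in cd/m^2."""
    lw = Lw ** (1.0 / gamma)
    lb = Lb ** (1.0 / gamma)
    a = (lw - lb) ** gamma        # overall gain
    b = lb / (lw - lb)            # black-level lift
    return a * max(V + b, 0.0) ** gamma

# The curve passes exactly through the measured end points:
# bt1886_eotf(0.0) gives Lb, bt1886_eotf(1.0) gives Lw (to rounding).
```

The point is that the curve is parameterised by the real device's black level, rather than assuming a perfect zero black as the Rec.709 encoding does.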

> 2. First I wanted to evaluate how accurate that "extended desktop" mode was.  
> I fed a
> test pattern to an external scope, and compared it to a proper native feed 
> (from
> software that's designed to output Rec.709 YUV via the card).  The white and 
> black
> points were correct, but the gamma was different.
> 3. I used dispcalGUI's hardware calibration interface, doing iterative 
> adjustments of
> the white point and black point. dispcalGUI was set to D65 white point, 120 
> cd/m^2
> white level, Rec.709 tone curve. Measured ambient light was 4 lux. White 
> point and
> brightness were a joy to adjust easily to within 0.4dE. Black point was much 
> less
> stable, with the readout cycling between a dE of 1 and 3.

> 4. Verification command: dispcal  -v2 -d3 -c1 -yc
> "-P0.48762541806,0.512977099237,1.49647887324"   -X CRT.ccss  -t6500.0 
> -b120.0 -g709
> -f1.0 -a4.0 -k1.0 -E

Hmm. I wouldn't 100% trust dispcal -E; a review of the code I did some
time ago hinted that the complexity of the calibration target adjustments
isn't being fully accounted for. It should really use the information
in the .cal file as the calibration reference, and because it doesn't,
the references may in fact be subtly wrong. So if it looks like it matches,
then fine, but if it doesn't, you can't completely rule out the possibility
that dispcal -E is mistaken.

> 1. Applying the CRT CCSS correction changed the white point target by 3dE. Is 
> this
> expected?

Hard to say, since it depends on the characteristics of the particular
display and instrument. Seems to be within the bounds of possibility though.

> 2. Black level calibration read at 0.58cd/m^2 while the calibration interface 
> called
> for 1.07. Why 1.07 when I specified "native"? (BTW, the target black level 
> for Rec.709
> is under 0.05cd/m^2).

It's really difficult to adjust a zero value, so instead the adjustment is
made at a 1% level. Try running dispcal -r to do a check on the actual
calibrated black level.
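For what it's worth, the numbers above are consistent with that. Assuming the adjustment level is roughly 1% of the white-level target (my reading, not something stated explicitly here), a quick sanity check:

```python
# Sanity check: the 1.07 cd/m^2 target the interface asked for is roughly
# 1% of the 120 cd/m^2 white target, consistent with the black adjustment
# being made at about a 1% level rather than at true zero black.
white_target = 120.0          # cd/m^2, from the calibration settings above
black_adjust_target = 1.07    # cd/m^2, the level the interface called for
ratio = black_adjust_target / white_target
print(round(ratio, 4))        # -> 0.0089, i.e. about 1%
```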

> 3. Without additional hardware, should I use this calibration at all - can I 
> trust the
> result to any degree - or am I better off doing it the way it's always been 
> done - by
> eye (with a SMPTE chart and blue-only mode)?

You can always use the instrument readings rather than "by eye", if you can
compute the targets. The idea of dispcal is that it computes targets
for you :-)
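As an example of the kind of target computation involved, here is a sketch of turning the requested white point (D65 at 120 cd/m^2) into an absolute XYZ aim point via the standard xyY-to-XYZ relations. The D65 chromaticity coordinates are the usual 2-degree-observer values, not taken from this thread:

```python
# Compute the absolute XYZ target for a D65 white point at 120 cd/m^2,
# using the standard xyY -> XYZ relations:
#   X = (x / y) * Y,   Z = ((1 - x - y) / y) * Y
x, y = 0.3127, 0.3290    # CIE D65 chromaticity (2-degree observer)
Y = 120.0                # target white luminance in cd/m^2
X = (x / y) * Y
Z = ((1.0 - x - y) / y) * Y
print(round(X, 2), round(Y, 2), round(Z, 2))   # -> 114.05 120.0 130.69
```

An instrument reading in absolute XYZ can then be compared directly against that aim point, which is essentially what dispcal's interactive adjustment display is doing for you.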

> 4. EBU Tech paper 3320 specifies tolerances for reference video monitors, 
> based on
> dE*uv in CIE1976. (The main formula is ΔE*uv = √(ΔL*² + Δu*² + Δv*²), if 
> it passes
> through the mailing list correctly…)  Comparing it to Argyll's measurements 
> would be
> handy, the math is a bit beyond me. 

L*u*v* is getting kind of archaic, although it has always
been more popular in video than in graphic arts, which
tends to use L*a*b*. There has been more development over the
years in improved delta E formulas based on L*a*b* than on L*u*v*
(i.e. CIE94 and CIEDE2000), hence the document you
quote's use of fudge factors for neutral dE's.
So my problem is that there are too many possible dE formulas,
which can lead to bamboozlement. I'm not sure how to provide
this flexibility without a lot of confusion as to what it all
means, and a lot of questions with no clear answers as to why
you would choose one over the other. It seems that
ideally you'd like an "EBU TECH 3320" report generator
that implements their specific formulas?
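That said, the plain CIE 1976 ΔE*uv from the quoted formula is straightforward to compute from a pair of absolute XYZ measurements. Here is a sketch, using a D65 reference white as an assumed example; the EBU document's extra fudge factors for near-neutral colours are not included:

```python
import math

def xyz_to_luv(XYZ, white):
    """CIE 1976 L*u*v* from XYZ, relative to the given reference white."""
    X, Y, Z = XYZ
    Xn, Yn, Zn = white

    def uv_prime(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    yr = Y / Yn
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr          # linear segment near black
    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(Xn, Yn, Zn)
    return L, 13.0 * L * (up - upn), 13.0 * L * (vp - vpn)

def delta_e_uv(luv1, luv2):
    """CIE 1976 colour difference: sqrt(dL*^2 + du*^2 + dv*^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(luv1, luv2)))

# Example: a slightly-off measured white vs. the target white itself,
# both normalised to a D65 white at Y = 100.
white = (95.047, 100.0, 108.883)      # D65, 2-degree observer
target = xyz_to_luv(white, white)
measured = xyz_to_luv((94.8, 99.6, 108.2), white)
print(round(delta_e_uv(target, measured), 2))
```

Feeding Argyll's absolute XYZ readings and targets through something like this would let you check measurements against the EBU tolerances directly.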

Graeme Gill.
