[argyllcms] Re: A question and some suggestions regarding calibration (dispcal)

  • From: Ver Greeneyes <vergreeneyes1@xxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Sat, 16 Jan 2016 15:16:16 +0100

Hmm, well, my results seemed to get worse when I disabled it,
so I ended up leaving it active. I don't understand it though!

I take that back: I just disabled that line again and now white is able to
achieve its target! Perhaps one of my other changes affected things (I need
to do longer measurements on this monitor since the brightness fluctuates a
lot).

On Fri, Jan 15, 2016 at 10:57 PM, Ver Greeneyes <vergreeneyes1@xxxxxxxxx>
wrote:

Thanks for the detailed reply!

First a question: what is the reasoning behind the call to reset_targ_w()
on line 4536 (just before the verify & refine loop)?

I'm afraid I don't really recall, if the comments don't explain it.

Hmm, well, my results seemed to get worse when I disabled it, so I ended
up leaving it active. I don't understand it though!

Anecdotally, both seem to be rare, since I've copped a bunch of implied
criticism for allowing for this case, and not simply wiring things up
to assumed device 0 = black (i.e. black point hack by default.)

Even with all the changes I've made, all my displays (none of which are
likely very good) have some amount of black crush that dispcal gets rid of.
The LUT does not map black to (0,0,0), but black isn't visibly raised :)
I'm starting to wonder if at least *some* of the white crush is due to
gamut clipping though (but my quantization pass was broken for a while, and
I still have to redo calibration on my finicky desktop monitor to make
sure).

Is simply setting a white target a little below the native white a
workaround for this problem?

Possibly, but what I'm seeing on my desktop monitor seems to indicate that
one of the color channels (blue) maxes out somewhat below 1.0 (last
calibration set it to 0.969315). This is also the channel on which I still
see clipping. So reducing the maximum output value for that channel does
not affect the white point, but should reduce clipping.
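The observation above — that capping a channel which already saturates cannot move the white point — can be illustrated with a toy model. This is only a sketch; `channel_response` is a hypothetical stand-in for a real panel, and the clip level 0.969315 is simply the value quoted from the calibration above:

```python
def channel_response(drive, clip=0.969315):
    """Toy model of a display channel that saturates: any drive level
    above `clip` produces the same light output as `clip` itself."""
    return min(drive, clip)

# Driving the channel at 1.0 or at the clip level yields identical light
# output, so capping the calibration curve there leaves the white point
# alone, while the fitted curve no longer wanders into the dead region.
assert channel_response(1.0) == channel_response(0.969315)
assert channel_response(0.5) == 0.5
```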

Hmm. The problem is that quantization and measurement noise can result in
non-monotonic and bumpy curves. On some operating systems (MSWindows),
the OS will simply reject a non-monotonic calibration curve, so dispcal
has to ensure it doesn't create one.
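One common way to guarantee a curve the OS will accept is a running-maximum pass over the measured values — noisy dips get flattened rather than rejected. This is only an illustration of the idea, not dispcal's actual fitting code:

```python
def enforce_monotonic(curve):
    """Return a non-decreasing copy of `curve` by carrying the running
    maximum forward; small noise-induced dips are flattened."""
    out, running_max = [], float("-inf")
    for v in curve:
        running_max = max(running_max, v)
        out.append(running_max)
    return out

# A bumpy measured grey ramp with two small dips:
noisy = [0.00, 0.10, 0.09, 0.20, 0.19, 0.30]
fixed = enforce_monotonic(noisy)
assert all(b >= a for a, b in zip(fixed, fixed[1:]))
```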

Ideally the answer is to add "end clip points" to the monotonic spline
model, and then make sure that these get used to better fit such cases.
I'm certain that there would be issues in getting that fitting right,
both for devices that don't clip, and ones that do, so it would be
non-trivial to implement successfully.
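Detecting such an end clip point might start from a heuristic like the one below — walk down from the top of a measured channel curve until the output falls away from the peak. `find_white_clip` is a hypothetical helper, not part of ArgyllCMS, and a real version would have to cope with measurement noise far more carefully:

```python
def find_white_clip(inputs, outputs, tol=1e-3):
    """Return the lowest input level whose output is still within `tol`
    of the peak; drive levels above it are treated as clipped."""
    peak = max(outputs)
    clip = inputs[-1]
    for x, y in zip(reversed(inputs), reversed(outputs)):
        if peak - y <= tol:
            clip = x
        else:
            break
    return clip

# Toy channel that stops responding above a drive level of 0.9:
levels   = [0.00, 0.25, 0.50, 0.75, 0.90, 1.00]
measured = [0.00, 0.25, 0.50, 0.75, 0.90, 0.90]
assert find_white_clip(levels, measured) == 0.90
```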

Yeah, my simple solution works well for me (low variance between
measurements, no evidence of non-monotonicity), but I realize it might be
difficult to extend to work in general.

That will affect either the speed or accuracy of the result for other cases
though. Ideally dispcal should be more adaptive, and sample where it is
actually needed.

True, this is more of a workaround to coax the fitted curves into
identifying where black ends and where white begins. On the other hand, I
haven't seen any evidence of reduced accuracy near the center of the curve.
I suspect the ideal curves are simply much smoother and more predictable
there. My quantization pass displays the errors of the chosen values, and
they don't look any worse near the middle (errors of the quantized values
are generally between 0 and 0.5 DE).

[ Dumb question - this black and white clipping isn't simply a result of
attempting a full range calibration on a video range display, is it? ]

Nah, the corrections are way too small for that :) I've only tested this
on full-range monitors so far.

In theory dispcal is meant to have explored the possible quantization space
and settled for the minimum DE at each point (although the spline fitting
could alter that). I'm not sure if this type of allowance for quantization
is the best thing to do overall though, since it makes the process very
finicky and difficult, since there is no fool-proof way of knowing the end
to end quantization. Attempting to measure this proves to be hit or miss,
depending a lot on the instrument and display repeatability.

Yeah, my reasoning is that the measurements were based on madVR's patches,
which use dithering - but we don't know whether the GPU will truncate or
round the values. Since anything more precise than 8-bit is useless on my
monitors (which are all either 8-bit or 6-bit + FRC), this extra pass finds
which rounding actually produces the best result. But I don't know if it
helps in practice - it's more a case of cutting off every possible source
of errors.
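The truncate-versus-round ambiguity can be made concrete with a toy quantizer. This is only a sketch of the idea — in the pass described above the comparison would be driven by measured DE, for which plain absolute error stands in here:

```python
def quantize(v, bits=8, mode="round"):
    """Map a [0,1] value to an N-bit code by rounding or by truncation --
    two behaviours a GPU pipeline might plausibly exhibit."""
    levels = (1 << bits) - 1
    code = round(v * levels) if mode == "round" else int(v * levels)
    return code / levels

target = 0.5019  # a value that falls between two 8-bit steps
best = min(("round", "trunc"),
           key=lambda m: abs(quantize(target, mode=m) - target))
assert best == "round"
```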

Thanks for reporting your experiments. It's something I'll try and allow
for in future changes to dispcal. I'm afraid that I'm not very tempted to
apply them as-is for the reasons above, and in the longer term I'd much rather
try re-writing dispcal to use a quite different approach to the current
code (reverting to one of the very first ideas I tried), to see if I
can speed the whole thing up while not compromising the accuracy.

I completely understand; I can always maintain my changes locally
until/unless you do rewrite it. I'm quite happy with the performance now
though! One thought I had for rewriting how dispcal works would be to
switch to a single pass: measure the extremes first (white, black, 50%
grey) and then fill in intermediate values, using the previously measured
values to limit the range of each new one (and ensure monotonicity). That
way, barring measurement error, no fitting would be required. It seemed
like a pretty large time investment to try, though.
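That single-pass idea can be sketched in miniature. Everything here is an assumption for illustration: the `measure` callback stands in for an actual instrument reading, and clamping each midpoint into the bracket formed by its already-measured neighbours is what makes the result monotonic by construction:

```python
import random

def single_pass(measure, depth=3):
    """Measure the endpoints first, then recursively fill in midpoints,
    clamping each noisy reading between its already-measured neighbours
    so the final curve is monotonic without any fitting."""
    pts = {0.0: measure(0.0), 1.0: measure(1.0)}

    def fill(lo, hi, d):
        if d == 0:
            return
        mid = (lo + hi) / 2.0
        pts[mid] = min(max(measure(mid), pts[lo]), pts[hi])
        fill(lo, mid, d - 1)
        fill(mid, hi, d - 1)

    fill(0.0, 1.0, depth)
    return sorted(pts.items())

random.seed(0)
# A noisy "instrument" reading a linear response +/- 2%:
curve = single_pass(lambda x: x + random.uniform(-0.02, 0.02))
vals = [v for _, v in curve]
assert all(b >= a for a, b in zip(vals, vals[1:]))
```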

Regards,

Emanuel Hoogeveen
