[argyllcms] Re: A question and some suggestions regarding calibration (dispcal)

  • From: Ver Greeneyes <vergreeneyes1@xxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Tue, 19 Jan 2016 14:30:22 +0100

I actually looked into the white drift compensation code a bit more, and
found that the white drift target is set at the start of each batch (when
the batch is large enough). That's not quite what I want: I'd rather set
the target once, then keep using it throughout calibration (including for
the initial model). I changed this locally to *only* set the white drift
target when targ_w_v is 0 (i.e. the current target isn't valid), and added
a small function to *set* the target, using the white point measured in
dispcal's initial 9 measurements. This is good enough for my purposes,
since dispread only uses a single batch anyway, but it probably doesn't
cover all the possibilities :)
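In case it's useful, the shape of the change is roughly this (targ_w_v is
the only name taken from the actual source; the struct and helper here are
invented for illustration):

typedef struct {
    double targ_w[3];   /* white drift target, XYZ */
    double targ_w_v;    /* nonzero once the target is valid */
} drift_state;

/* Record a target only if none is set yet, so the same reference
 * white is reused for the initial model and every later batch. */
static void set_white_drift_target(drift_state *ds, const double xyz[3])
{
    if (ds->targ_w_v != 0.0)
        return;                 /* keep the existing target */
    ds->targ_w[0] = xyz[0];
    ds->targ_w[1] = xyz[1];
    ds->targ_w[2] = xyz[2];
    ds->targ_w_v = 1.0;         /* mark it valid */
}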

Regards,

Emanuel Hoogeveen

On Tue, Jan 19, 2016 at 3:39 AM, Graeme Gill <graeme@xxxxxxxxxxxxx> wrote:

Ver Greeneyes wrote:

Hmm, well, my results seemed to get worse when I disabled it, so I ended
up leaving it active. I don't understand it though!

I was hoping it had the opposite effect, since I've made changes to
the code along the lines I outlined.

Even with all the changes I've made, all my displays (none of which are
likely very good) have some amount of black crush that dispcal gets rid
of. The LUT does not map black to (0,0,0), but black isn't visibly raised :)

It was always my assumption when creating dispcal that it should
cope with clipping at both ends of the range. It's just a lot harder
to do accurately than I imagined, given displays with very dark blacks
and the inconsistency of instrument measurements at the dark end.

Is simply setting a white target a little below the native white a
workaround for this problem?

Possibly, but what I'm seeing on my desktop monitor seems to indicate that
one of the color channels (blue) maxes out somewhat below 1.0 (the last
calibration set it to 0.969315).

That would hint that a warmer white point (lower C.T.) would be another
way of compensating.

Yeah... My simple solution works well for me (low variance between
measurements, no evidence of non-monotonicity), but I realize it might be
difficult to generalize.

Some instruments are pretty rough near black (the i1d2/Spyder 2
generation), and 8-bit quantization and display warmup can make things
pretty bumpy too.

True, this is more of a workaround to coax the fitted curves into
identifying where black ends and where white begins. On the other hand, I
haven't seen any evidence of reduced accuracy near the center of the curve.
I suspect the ideal curves are simply much smoother and more predictable
there. My quantization pass displays the errors of the chosen values, and
they don't look any worse near the middle (errors of the quantized values
are generally between 0 and 0.5 DE).

Good to know.

... very finicky and difficult, since there is no foolproof way of knowing
the end-to-end quantization. Attempting to measure this proves to be hit
or miss, depending a lot on the instrument and display repeatability.

Yeah, my reasoning is that the measurements were based on madVR's patches,
which use dithering - but we don't know whether the GPU will truncate or
round the values. Since anything more precise than 8-bit is useless on my
monitors (which are all either 8-bit or 6-bit + FRC), this extra pass
finds which rounding actually produces the best result. But I don't know
if it helps in practice - it's more a case of cutting off every possible
source of errors.
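The pass itself boils down to something like this (just a sketch;
measure_de() is a hypothetical stand-in for displaying a patch and
measuring its error, not a real Argyll call):

#include <math.h>

/* For a target value in [0,1], try the truncated and rounded 8-bit
 * codes and keep whichever one measures closer to the target. */
static int best_quantized_code(double target, double (*measure_de)(int code))
{
    int lo = (int)floor(target * 255.0);        /* GPU truncates */
    int hi = (int)floor(target * 255.0 + 0.5);  /* GPU rounds */

    if (lo < 0) lo = 0;
    if (hi > 255) hi = 255;
    if (lo == hi)
        return lo;                              /* no ambiguity */
    return measure_de(lo) <= measure_de(hi) ? lo : hi;
}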

One change I'd like to make to dispcal/dispread is to support dithering,
since this would speed up dispcal in many situations.
There's a fair amount of fiddly work in changing the main platform
code (MSWin, OS X, X11) plus Web to upload a generated bitmap and
wrangle the matching LUT values, rather than filling a rectangle with
a color, though.
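To illustrate the idea (a sketch only, not anything in Argyll): an
ordered-dither fill of one channel of a patch, where a 2x2 threshold
pattern spreads the fractional part of the ideal 8-bit value across
pixels so the patch averages to the requested level:

#include <math.h>

/* Fill an 8-bit single-channel patch so it averages to level in [0,1].
 * The 2x2 pattern decides, per pixel, whether to bump the truncated
 * code up by one. */
static void fill_patch_dithered(unsigned char *buf, int w, int h, double level)
{
    static const double thr[2][2] = { { 0.125, 0.625 },
                                      { 0.875, 0.375 } };
    double v = level * 255.0;
    int base = (int)floor(v);
    double frac = v - base;
    int x, y;

    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++)
            buf[y * w + x] = (unsigned char)
                ((frac > thr[y & 1][x & 1] && base < 255) ? base + 1 : base);
}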

I completely understand; I can always maintain my changes locally
until/unless you do rewrite it. I'm quite happy with the performance now,
though! One thought I had for rewriting how dispcal works would be to
switch to a single pass, measuring the extremes first (white, black, 50%
grey) and then filling in the values in between, using the previously
measured points to bound each new search (and ensure monotonicity). That
way, barring measurement error, no fitting would be required. It seemed
like a pretty large time investment to try, though.
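Sketched out, it might look something like this (find_device_value() is a
hypothetical search routine, not anything that exists in dispcal):

typedef struct { double dev, meas; } cal_pt;

/* Hypothetical: search for the device value whose measurement hits
 * the target, confined to the range [dev_lo, dev_hi]. */
extern double find_device_value(double target, double dev_lo, double dev_hi);

/* Recursively fill in the points between two already-measured ones.
 * Because each search is bracketed by its measured neighbours, the
 * result can't break monotonicity (barring measurement error). */
static void fill_between(cal_pt *pts, int lo, int hi)
{
    int mid;
    double target;

    if (hi - lo < 2)
        return;
    mid = (lo + hi) / 2;
    target = 0.5 * (pts[lo].meas + pts[hi].meas);

    pts[mid].dev  = find_device_value(target, pts[lo].dev, pts[hi].dev);
    pts[mid].meas = target;

    fill_between(pts, lo, mid);
    fill_between(pts, mid, hi);
}

Seeding the array with measured white, black and 50% grey, then calling
fill_between() on each half, would give the single pass described above.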

The aim, though, is to locate the device values that create colors that
land exactly on the neutral target curve. To do that efficiently,
some interpolation model that can be updated point by point is needed,
to determine what the next test point should be.
Current dispcal is relatively "memoryless", maintaining a matrix
approximation of the device response for each target grey level, and
iterating to search for an improved solution. This is robust in many ways
against measurement and device inconsistency, but is "dogged" rather than
efficient.
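Very roughly, each per-grey-level step amounts to something like the
following sketch (measure_xyz() and solve3x3() are placeholders, not
Argyll APIs):

/* Placeholders for a patch measurement and a 3x3 linear solve. */
extern void measure_xyz(const double rgb[3], double xyz[3]);
extern void solve3x3(const double m[3][3], const double b[3], double x[3]);

/* Iterate one grey point toward its target XYZ, using a fixed 3x3
 * matrix model of the local device response to turn the XYZ error
 * into an RGB correction. */
static void refine_grey_point(double rgb[3], const double targ[3],
                              const double model[3][3], int iters)
{
    int i, j;
    double xyz[3], err[3], step[3];

    for (i = 0; i < iters; i++) {
        measure_xyz(rgb, xyz);
        for (j = 0; j < 3; j++)
            err[j] = targ[j] - xyz[j];
        solve3x3(model, err, step);
        for (j = 0; j < 3; j++) {
            rgb[j] += step[j];
            if (rgb[j] < 0.0) rgb[j] = 0.0;    /* clamp to device range */
            if (rgb[j] > 1.0) rgb[j] = 1.0;
        }
    }
}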

Ver Greeneyes wrote:

I take that back: I just disabled that line again and now white is able
to achieve its target!

That's more in line with what I would expect.

Graeme Gill.



