[argyllcms] Re: scanin and perspective distortion

  • From: Gerhard Fuernkranz <nospam456@xxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Wed, 06 Feb 2008 20:08:51 +0100

Gerhard Fürnkranz wrote:
> Thus I'm rather inclined to use only patches with the same color (e.g. white 
> ones, or identical gray tones (the border of an IT8 target?)) for this 
> purpose, since they can be expected to all have approximately the same 
> spectral reflectance too, so that the Y ratios and RGB ratios will be the 
> same.

On the train I had some time to think more closely about this issue.
We can likely find a subset of the patches on the target such that the
spectra of these patches have a rank of only 3 (within a tolerance of,
say, 1-2 delta E). For this subset of patches there will then exist a
linear relationship between the linear raw RGB from the camera sensors
and XYZ. It is furthermore necessary to sort out patches which are too
dark, due to their low SNR. The resulting set of patches may then also
be suitable for estimating the spatial luminance variation, even if
these patches do not all have the same color. A prerequisite for this
approach will of course be the availability of spectral measurements of
the target, and a camera which makes linear raw RGB images available.
The question is now, of course, how many patches will eventually end up
in this subset, and how these patches are spatially distributed across
the target (if all these patches happen to be concentrated in one
particular region of the target, then we have lost). It might be
interesting to try this with an IT8 target...
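
Just to make the idea concrete, here is a rough sketch in Python/NumPy
of how one could search for such a rank-3 subset (a greedy heuristic of
my own; spectrum_to_lab stands for some CIE spectrum-to-Lab conversion
under the target illuminant, and Y for the reference luminances -- all
placeholders, not ArgyllCMS functions):

import numpy as np

def rank3_subset(spectra, Y, spectrum_to_lab, de_tol=2.0, y_min=0.05):
    # Sort out patches which are too dark (low SNR) first
    idx = np.where(np.asarray(Y) >= y_min)[0]
    while len(idx) > 3:
        S = np.asarray(spectra)[idx]
        # Best rank-3 approximation of the subset's spectra via SVD
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        S3 = (U[:, :3] * s[:3]) @ Vt[:3]
        # Colorimetric error of the rank-3 reconstruction, per patch
        de = np.array([np.linalg.norm(spectrum_to_lab(a) - spectrum_to_lab(b))
                       for a, b in zip(S, S3)])
        if de.max() <= de_tol:
            break               # remaining subset is rank 3 within tolerance
        idx = np.delete(idx, de.argmax())   # drop the worst patch and retry
    return idx

The delta E of the rank-3 reconstruction directly measures how well a
3-dimensional spectral subspace represents each patch, which is exactly
the condition for an exact linear RGB-to-XYZ relationship.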

> For a first rough guess, comparing luminance ratios (from the reference file) 
> with, say, linear raw green channel ratios may possibly suffice (in 
> conjunction with fitting the data to a (smooth) vignetting model, and sorting 
> out outliers).
>
> Whether a subsequent iterative refinement would converge needs IMO further 
> investigation (the idea is to convert RGB to XYZ using the profile from the 
> previous iteration, then fit and compensate the Y ratios against the 
> vignetting model, create the next (hopefully better) profile, and so on). 
> Such an iterative approach might also work for non-linear RGB data which are 
> not power functions.

After thinking more closely about the latter issue, I now rather doubt
that it will be possible to separate the vignetting impact from the
device characteristics, if the camera has an _arbitrary_ response, and
if we don't have spatially spread patches with the _same_ color (or a
flatfield photo) for estimating the luminance variation. Given the
rather low number of patches on an IT8 target and the systematic
(non-randomized) arrangement of the patches, the non-parametric
regression will likely fit both the device characteristics and the
spatial variation with a rather low error, instead of fitting only the
device characteristics (in which case the impact of the spatial
variation would show up in the residual errors).
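
For reference, the quoted iterative scheme would look roughly like the
outline below (make_profile, apply_profile and vignetting_at are
caller-supplied placeholders, not ArgyllCMS functions); whether the loop
converges, or whether the profile silently absorbs the spatial variation
as argued above, is exactly the open question:

import numpy as np

def iterative_refinement(rgb_raw, xyz_ref, xy, make_profile, apply_profile,
                         vignetting_at, n_iter=5):
    # make_profile(rgb, xyz) -> profile, apply_profile(profile, rgb) -> XYZ,
    # vignetting_at(xy, y_est, y_ref) -> per-patch correction factors.
    rgb = np.asarray(rgb_raw, float).copy()
    profile = make_profile(rgb, xyz_ref)                # initial profile
    for _ in range(n_iter):
        xyz_est = apply_profile(profile, rgb)           # RGB -> XYZ
        v = vignetting_at(xy, xyz_est[:, 1], xyz_ref[:, 1])  # fit Y ratios
        rgb /= v[:, None]                               # compensate variation
        profile = make_profile(rgb, xyz_ref)            # next (better?) profile
    return profile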

Regards,
Gerhard

