Fri, 25 Feb 2011 11:21:06 +1100 Graeme Gill wrote:
> Hmm. I'm not sure that it would be possible to support this without adding a
> quite elaborate set of new code to do the fitting. Is the flare you are
> subtracting assumed to be the inter reflection of the test chart squares,
> rather than flare due to air particles or the camera optics ?

I think the camera optics play the major role here, because the flare level
depends on the aperture value. I've been dealing with low- and mid-range
lenses. Another variant of the "flare" may simply be sensor bias.

> It would be easy enough to add an option to scanin that lets you specify
> a scale factor (I was thinking as a numerator and denominator, so that
> it's easy to pick one of the shots and scale all the rest to it) that
> is then applied to the XYZ reference values, but this wouldn't take into
> account flare.

As a first stage, it would be enough if scanin were able to recognize very
dark, very light, and saturated images of the target. The problem is that
scanin can't find the patches in auto mode if they are very dim or
saturated. For example, it recognizes only 2 images out of 6-7; all the
rest I have to define with scanin's "-F" parameter.

I'm not sure that flare compensation is needed - I didn't compare the
results between flare-compensated and uncompensated data. But I am sure
that the profile error with flare compensation is lower.

> > trying to compute the input profile by such high-DR data: Argyll colprof
> > can't define the correct white point if the patch with the highest "Y"
> > value is non-neutral. I have to delete some measurements or compute a
> > virtual white patch with the highest "Y".
> Is that a problem when you use "colprof -u" ?

I didn't check yet. Or I've checked, but have forgotten...

Thank you, Graeme, for your attention.
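For what it's worth, the arithmetic behind the proposed option is simple. This is only a hedged sketch (not Argyll code): pick one shot as the reference exposure, scale the XYZ reference values of the other shots by a numerator/denominator ratio, and subtract an assumed constant flare offset first, which is exactly the part that plain scaling alone cannot account for. The function name and the flare model (a uniform XYZ offset) are my own assumptions for illustration.

```python
# Hypothetical illustration of the numerator/denominator scaling discussed
# above. A constant "flare" XYZ offset (camera optics or sensor bias) is
# subtracted before the exposure scale is applied; scaling without the
# subtraction would scale the flare along with the signal.

def scale_xyz(xyz, numerator, denominator, flare=(0.0, 0.0, 0.0)):
    """Subtract an assumed flare offset, then apply exposure scale n/d."""
    return tuple((c - f) * numerator / denominator
                 for c, f in zip(xyz, flare))

# Example: a patch shot at one stop less exposure (scale 2/1),
# with a small uniform flare offset subtracted first.
patch = (20.5, 21.0, 18.3)
print(scale_xyz(patch, 2, 1, flare=(0.5, 0.5, 0.5)))
```

Whether a single uniform offset is a good flare model is, of course, the open question in this thread.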
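The "virtual white patch" workaround mentioned above can also be sketched in a few lines. This is a hedged illustration, not what Argyll does internally: synthesize a neutral patch whose Y equals the highest measured Y, using D50 chromaticity (D50 being the ICC PCS white), so that the white-point search has a neutral candidate at the top of the luminance range. The function name and patch data are hypothetical.

```python
# Hedged sketch of synthesizing a "virtual white patch": a neutral XYZ
# value at the highest measured Y, using D50 (the ICC PCS illuminant).

D50 = (0.9642, 1.0000, 0.8249)  # D50 white point, Y normalised to 1.0

def virtual_white(patches):
    """Return a neutral XYZ patch whose Y equals the highest measured Y."""
    y_max = max(y for _, y, _ in patches)
    return tuple(c * y_max for c in D50)

# The second patch is the brightest but non-neutral; the synthesized
# patch sits at its Y with neutral (D50) chromaticity.
patches = [(30.1, 28.0, 22.5), (25.0, 31.5, 14.0)]
print(virtual_white(patches))
```

Deleting the offending bright non-neutral measurements, as described above, achieves the same end without fabricating data, at the cost of losing those patches from the fit.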