Nikolay Pokhilchenko wrote:
Yes. I'm taking several shots of the same target under the same illumination. For example, aperture values from f/2.8 to f/11 (2.8, 4.0, 5.6, 8.0, 11) at a fixed shutter speed.
I choose the chart with an exposure coefficient of 1.0 as the reference and compute the exposure coefficients for all the other charts. I compute the sensor gamma (for raw files it's about 1..1.18), one value for all charts, and fit the flare and the per-chart exposure coefficients simultaneously. Then I compute the XYZ "stimuli" for each patch of every chart by subtracting the flare and multiplying the XYZ by the appropriate exposure coefficient. As a result, I have many more patches than the original target.
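The per-patch correction described above could be sketched like this (a minimal illustration with hypothetical data and function names; this is not ArgyllCMS code, and the real fit solves for gamma, flare, and the coefficients jointly):

```python
# Sketch of the per-patch correction: subtract an estimated flare XYZ,
# then scale by the shot's exposure coefficient relative to the
# reference shot. All values here are made up for illustration.

def correct_patch(xyz, flare, exposure_coeff):
    """Subtract flare, then scale back to the reference exposure."""
    return tuple((v - f) * exposure_coeff for v, f in zip(xyz, flare))

# A patch from a darker shot, assumed exposure coefficient 4.0 relative
# to the reference, with a small uniform flare estimate.
xyz = (12.0, 10.5, 8.0)
flare = (0.2, 0.2, 0.2)
print(correct_patch(xyz, flare, 4.0))  # approximately (47.2, 41.2, 31.2)
```

Pooling the corrected patches from all shots is what yields many more usable patches than the physical target provides.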
Hmm. I'm not sure it would be possible to support this without adding quite an elaborate set of new code to do the fitting. Is the flare you are subtracting assumed to be the inter-reflection of the test chart squares, rather than flare due to air particles or the camera optics? It would be easy enough to add an option to scanin that lets you specify a scale factor (I was thinking of a numerator and denominator, so that it's easy to pick one of the shots and scale all the rest to it) that is then applied to the XYZ reference values, but this wouldn't take flare into account.
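The numerator/denominator idea above could look something like this (a hedged sketch only; no such scanin option exists as of this discussion, and the function name is invented):

```python
from fractions import Fraction

# Sketch of the proposed scale factor: pick one shot as the reference
# (denominator) and scale every other shot's XYZ reference values by an
# exact numerator/denominator ratio. Hypothetical helper, not scanin code.

def scale_reference(xyz, numerator, denominator):
    """Scale XYZ reference values by an exact rational factor."""
    s = Fraction(numerator, denominator)
    return tuple(float(v * s) for v in xyz)

# Scale a shot taken one stop down by 2/1 to match the reference shot.
print(scale_reference((50.0, 52.0, 48.0), 2, 1))  # (100.0, 104.0, 96.0)
```

Using an exact rational ratio avoids rounding drift when one shot is chosen as the reference and all the rest are scaled to it, which is the convenience the numerator/denominator form is meant to give.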
Trying to compute the input profile from such high-DR data: Argyll colprof can't determine the correct white point if the highest "Y" value belongs to a non-neutral patch. I have to delete some measurements or compute a virtual white patch with the highest "Y".
Is that a problem when you use "colprof -u"?

Graeme Gill.