[argyllcms] Re: Camera profiling with bracketing

  • From: Nikolay Pokhilchenko <nikolay_po@xxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Fri, 25 Feb 2011 09:38:05 +0300

Thu, 24 Feb 2011 19:50:51 -0500 Iliah Borg wrote:

> Such shots are done using a tripod and remote, so if the user (or scanin) samples
> one chart, all the other locations are the same. I prefer manual sampling (like it
> is done in RPP) because that allows avoiding any imperfections right from the
> start. It seems that profiles with manual sampling easily get to 6 dE max, while
> with scanin they are usually about 12 to 20 dE.

On the contrary, I often have to shoot the targets hand-held. I've edited the .cht 
files to restrict the sampling area toward the 
patch center, so that the sampling area is half of the patch area. With that, the 
error is about 6.8 dE max in scanin auto mode.
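
For what it's worth, the geometry of that restriction is simple. Below is a minimal
Python sketch (not Argyll code, and not the actual .cht syntax, which depends on how
the chart's sample boxes are defined) of a centred box covering half of a patch's area:

    # Sketch only: compute a centred sampling rectangle covering a given
    # fraction of a patch's area. How this maps onto a .cht file's sample
    # box definitions is left to the reader.
    import math

    def shrink_patch(x, y, w, h, area_fraction=0.5):
        """Return (x, y, w, h) of a centred box covering `area_fraction` of the patch."""
        scale = math.sqrt(area_fraction)       # linear scale for the requested area
        new_w, new_h = w * scale, h * scale
        new_x = x + (w - new_w) / 2.0          # keep the box centred on the patch
        new_y = y + (h - new_h) / 2.0
        return new_x, new_y, new_w, new_h

    # Example: an 11.0 x 11.0 patch at (20.0, 30.0)
    print(shrink_patch(20.0, 30.0, 11.0, 11.0))   # ~ (21.6, 31.6, 7.78, 7.78)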

> I would start with a merge utility that re-calculates spectral reference for
> each individual .ti3 file based on the exposure of grey patches (around
> neutral grey)

If it's necessary, I correct the XYZs of the whole .ti3. I haven't dealt 
with spectral data.
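
As a rough illustration of that kind of correction, here is a hedged Python sketch
that rescales the XYZ columns of a CGATS-style .ti3 by a single exposure coefficient.
It assumes the usual plain-text layout with XYZ_X / XYZ_Y / XYZ_Z listed between
BEGIN_DATA_FORMAT and END_DATA_FORMAT; the file names and the coefficient in the
usage comment are hypothetical.

    # Sketch only: copy a .ti3, multiplying its XYZ_X / XYZ_Y / XYZ_Z columns
    # by a fixed exposure coefficient.
    def scale_ti3_xyz(in_path, out_path, coeff):
        fields = []
        in_format = in_data = False
        with open(in_path) as src, open(out_path, "w") as dst:
            for line in src:
                tokens = line.split()
                if tokens == ["BEGIN_DATA_FORMAT"]:
                    in_format = True
                elif tokens == ["END_DATA_FORMAT"]:
                    in_format = False
                elif tokens == ["BEGIN_DATA"]:
                    in_data = True
                elif tokens == ["END_DATA"]:
                    in_data = False
                elif in_format:
                    fields.extend(tokens)          # remember the column order
                elif in_data and tokens:
                    for name in ("XYZ_X", "XYZ_Y", "XYZ_Z"):
                        if name in fields:
                            i = fields.index(name)
                            tokens[i] = "%.6f" % (float(tokens[i]) * coeff)
                    dst.write(" ".join(tokens) + "\n")
                    continue
                dst.write(line)

    # Hypothetical usage: rescale one chart's XYZs by its exposure coefficient.
    # scale_ti3_xyz("chart_f8.ti3", "chart_f8_scaled.ti3", coeff=0.5)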

> Flare (veiling glare) is from the chart, the lens, the chamber in the camera,
> and the sensor itself. Flare is responsible for a lot of non-linearities in
> the sensor response and in real world shooting it is strongly scene-dependent.

Exactly. That's why I have to build the profile for each critical scene.

> For cheaper lenses and poor filters veiling glare can reach 2/3 EV easily when
> shooting a chart. Adding a simple black trap (hole) to a chart allows for a
> good estimation of veiling glare.

IMHO it isn't necessary if the charts were captured under the real scene 
conditions.
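
For completeness, the black-trap idea in the quoted paragraph amounts to treating
the trap reading as the flare floor and subtracting it from every patch, since an
ideal trap reflects essentially nothing. A tiny illustrative sketch (all values
made up):

    # Sketch only: use the black-trap reading as the veiling-glare estimate.
    def flare_corrected(patch_value, trap_value):
        """Subtract the black-trap reading (taken as the flare floor) from a patch."""
        return max(patch_value - trap_value, 0.0)

    # e.g. a trap reading of 0.02 (linear, 0..1) against a patch reading of 0.10
    print(flare_corrected(0.10, 0.02))   # 0.08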

> On Feb 24, 2011, at 7:21 PM, Graeme Gill wrote:
>
> > Nikolay Pokhilchenko wrote:
> >> Yes. I'm taking several shots of the same target under the same
> >> illumination. For example, aperture values from 2.8 to 11: 2.8, 4.0, 5.6,
> >> 8.0, 11 with fixed shutter speed.
> >
> >> I'm choosing the reference chart with 1.0 exposure and computing the
> >> exposure coefficients for all the rest of the charts. I'm computing the
> >> gamma of the sensor (for raw files it's about 1..1.18; one for all charts),
> >> computing the flare and the exposure coefficients (per chart)
> >> simultaneously. Then I compute XYZ "stimuli" for each patch of every chart
> >> by subtracting flare and multiplying the XYZ by the appropriate exposure
> >> coefficient. As a result, I have much more patches than the original target
> >
> > Hmm. I'm not sure that it would be possible to support this without adding a
> > quite elaborate set of new code to do the fitting. Is the flare you are
> > subtracting assumed to be the inter reflection of the test chart squares,
> > rather than flare due to air particles or the camera optics ?
> >
> > It would be easy enough to add an option to scanin that lets you specify
> > a scale factor (I was thinking as a numerator and denominator, so that
> > it's easy to pick one of the shots and scale all the rest to it) that
> > is then applied to the XYZ reference values, but this wouldn't take into
> > account flare.
> >
> >> trying to compute the input profile by such high DR data: Argyll colprof
> >> can't define the correct white point if the patch with the highest "Y"
> >> value is non-neutral. I have to delete some measurements or to compute a
> >> virtual white patch with the highest "Y".
> >
> > Is that a problem when you use "colprof -u" ?
> >
> > Graeme Gill.
> >
>
> --
> Iliah Borg
> ib@xxxxxxxxxxx
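
P.S. For readers following the procedure quoted above (reference chart at 1.0
exposure, per-chart exposure coefficients, flare subtraction), here is a loose
Python sketch of the per-chart correction step only. It is my own illustration,
not the actual code: the joint fit of gamma, flare and the exposure coefficients
mentioned in the quote is not reproduced, and all names and values are hypothetical.

    # Sketch only: estimate a per-chart exposure coefficient from grey patches,
    # then subtract a flare offset and rescale to the reference exposure.
    import numpy as np

    def exposure_coefficient(ref_grey_Y, chart_grey_Y):
        """Least-squares ratio mapping this chart's grey-patch luminances
        onto the reference chart's grey-patch luminances."""
        ref = np.asarray(ref_grey_Y, dtype=float)
        cur = np.asarray(chart_grey_Y, dtype=float)
        return float(np.dot(ref, cur) / np.dot(cur, cur))

    def correct_chart_xyz(xyz, flare_xyz, coeff):
        """Subtract a per-chart flare offset, then rescale to the reference exposure."""
        return (np.asarray(xyz, dtype=float) - np.asarray(flare_xyz, dtype=float)) * coeff

    # Hypothetical example: a chart shot one stop darker than the reference.
    ref_greys   = [0.18, 0.36, 0.72]       # reference-chart grey patch Y values
    chart_greys = [0.09, 0.18, 0.36]       # the same greys on the darker chart
    coeff = exposure_coefficient(ref_greys, chart_greys)            # ~2.0
    patches = correct_chart_xyz([[0.05, 0.05, 0.05]], [0.01, 0.01, 0.01], coeff)
    print(coeff, patches)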
