[argyllcms] Re: Capture One Profiles

  • From: Elle Stone <l.elle.stone@xxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Thu, 11 Jul 2013 10:51:48 -0400

On 7/10/13, Graeme Gill <graeme@xxxxxxxxxxxxx> wrote:
> Elle Stone wrote:
>
>> So you see, the white balancing is done before the interpolation.
>
> Any demosaicing algorithm that relies on a color balance, or
> sums values from the different channels in a spatially
> dependent fashion ("cross-contamination"), is almost
> certainly wrecking the colorimetry.
>
> A color accurate demosaicing algorithm is restricted to
> transferring level independent spatial information from
> one channel to another, and even then it may wreck
> the colorimetry in pathological situations.
> By definition it does not need a color balance.

Could you explain what this means in somewhat simpler terms, with an
emphasis on how to put into practice the criteria you are specifying?
Are there demosaicing algorithms that more or less meet the criteria
you just specified? For example, I don't use AHD because of some
gamma- or white-balance-dependent behavior, the nature of which I've
long since forgotten. I do use AMaZE and DCB, which to my eyes give
better results than AHD. Should we all be using simple bilinear instead?
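As a concrete sketch of what "transferring level independent spatial
information" might rule in, here is a minimal bilinear demosaic of an
RGGB Bayer mosaic (my own illustration, not anything from this thread):
each output channel is interpolated only from sensor sites of that same
channel, so no channel mixing happens and no white balance is needed
during interpolation.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic_rggb(cfa):
    """Minimal bilinear demosaic for an RGGB Bayer mosaic.

    Each output channel is built only from sensor sites of that same
    channel, so the interpolation introduces no cross-channel mixing
    and is independent of any white-balance scaling.
    """
    h, w = cfa.shape
    out = np.zeros((h, w, 3), dtype=float)
    # Per-channel sample masks for the RGGB pattern.
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R sites
    masks[0::2, 1::2, 1] = True   # G sites on R rows
    masks[1::2, 0::2, 1] = True   # G sites on B rows
    masks[1::2, 1::2, 2] = True   # B sites
    # Standard bilinear kernels: R/B are sampled on a quincunx-free
    # grid (every other row and column), G on a checkerboard.
    kernel_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    kernel_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    for c, k in ((0, kernel_rb), (1, kernel_g), (2, kernel_rb)):
        plane = np.where(masks[:, :, c], cfa, 0.0)
        out[:, :, c] = convolve2d(plane, k, mode="same")
    return out
```

Because each channel is a linear function of its own samples only,
multiplying a channel by a white-balance gain before interpolation gives
exactly the same result as multiplying afterwards; algorithms that mix
channels during interpolation don't have that property.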

>
> Personally I think the idea of up-sampling mosaiced image data
> to the sensor positioning precision rather than the sensor
> spacing precision is the triumph of marketing over good engineering.

Same question. I don't know what "up-sampling . . . to the sensor
positioning precision rather than the sensor spacing precision" means.

Is the implication that demosaicing should be done using white balance
channel multipliers of 1? Or perhaps the camera-specific temperature
at which Green is 1 and R=B?

Regarding white balancing, this article:
http://en.wikipedia.org/wiki/White_balance has links to some
interesting research, in particular this pdf: Comparison of the
accuracy of different white balancing options as quantified by their
color constancy (http://www.acolyte-color.com/papers/EI_2004.pdf).

The pdf concludes that, of the alternatives considered, white
balancing in the "native camera RGB" color space is superior to
everything except "illuminant dependent" white balancing, which is
vastly superior.

If someone could explain to me what these terms mean (in practical
terms as much as possible), that would be great! I took it to mean
that for most of us using the RGB channel multipliers before
interpolation is the best approach. But perhaps it means "create a
camera input profile using unitary channel multipliers, then white
balance in the color space defined by the profile"?
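A small numeric sketch of those two readings, with made-up multipliers
and a made-up camera matrix (purely illustrative, not real camera
data): option 1 scales the raw channels directly in native camera RGB;
option 2 converts unbalanced data through the camera matrix and then
applies a diagonal (von Kries style) scaling in the profile's space.

```python
import numpy as np

# Illustrative, made-up values -- NOT data for any real camera.
wb_mult = np.array([2.0, 1.0, 1.5])          # raw R, G, B channel multipliers
cam_to_xyz = np.array([[0.60, 0.30, 0.10],   # hypothetical camera-RGB -> XYZ
                       [0.25, 0.70, 0.05],
                       [0.00, 0.10, 0.90]])

def wb_in_camera_rgb(rgb):
    """Option 1: diagonal scaling of the raw channels themselves."""
    return rgb * wb_mult

def wb_after_profile(rgb):
    """Option 2: convert to XYZ first, then scale diagonally there so
    that the illuminant's camera response maps to the target white."""
    xyz = cam_to_xyz @ rgb
    neutral = cam_to_xyz @ (1.0 / wb_mult)   # camera's view of the illuminant
    white = cam_to_xyz @ np.ones(3)          # target neutral in XYZ
    return xyz * (white / neutral)

raw = np.array([0.4, 0.5, 0.3])
print(cam_to_xyz @ wb_in_camera_rgb(raw))   # option 1, expressed in XYZ
print(wb_after_profile(raw))                # option 2
```

Because the camera matrix is not diagonal, the two options agree on
neutral patches but generally disagree elsewhere, which seems to be
exactly the difference the paper is quantifying.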

Also, does anyone know of a raw processor that doesn't apply the white
balance channel multipliers before interpolation? If so, what color
space does it use to apply the white balance? Or does it use some
other method altogether?

Elle
