On Jul 7, 2013, at 6:50 PM, Graeme Gill <graeme@xxxxxxxxxxxxx> wrote:

> I suspect from the discussion that few photo processing systems
> actually work this way ?

Graeme,

I think you may have just pointed to a rather large elephant in the room.

White balancing in digital cameras serves two functions. One is the conversion of the scene illuminant to the output profile's illuminant, as you described. The other is to remedy an artifact of the cameras' physical design: a typical sensor has as many green pixels as red and blue combined, and each channel has a different native sensitivity. If you were to photograph an Illuminant E light source, you would *NOT* get the same values recorded in each channel, though the output of that light source *would* scale linearly in each channel with changing exposure.

In today's raw development engines, both operations are performed at once, with three linear multipliers applied independently, one per channel. And that, truly, is all there is to white balancing behind the scenes, regardless of what sort of user interface is covering it up.

While that works well enough in practice, I'm really starting to think it's not the right way to do it. What I think should happen is that each camera should first have its own constant normalization factor to align the three channels (and, perhaps, to linearize the data -- though sensors are remarkably linear in practice). This factor would presumably be reasonably consistent across cameras of the same model, but would optimally be tweaked for each individual camera. Then a separate, profile-based operation would convert the scene illuminant to the desired working illuminant, just as you suggested.

Iliah, does this make sense? And any chance of something like this in RPP?

Cheers,

b&
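To make the distinction concrete, here's a minimal NumPy sketch of the two approaches. All multiplier values are purely illustrative, not taken from any real camera; and the second stage is shown as a diagonal scale for simplicity, whereas a profile-based conversion could just as well be a full chromatic-adaptation matrix:

```python
import numpy as np

# Current practice: one set of per-channel multipliers does double duty,
# folding the sensor's native channel sensitivities and the scene-to-working
# illuminant conversion into a single scaling step.
combined_wb = np.array([2.1, 1.0, 1.6])   # R, G, B multipliers (illustrative)

# Proposed split:
# 1. A per-camera normalization that equalizes channel response under
#    Illuminant E -- constant for a given body, ideally measured per unit.
camera_norm = np.array([1.9, 1.0, 1.4])   # illustrative
# 2. A separate, profile-based adaptation from the scene illuminant to the
#    working illuminant (here derived so the two paths agree numerically).
illum_adapt = combined_wb / camera_norm

raw_rgb = np.array([0.30, 0.55, 0.40])    # linear raw triplet (illustrative)

# Because both stages are diagonal (per-channel) scalings, the two paths
# compose to the same result; what changes is where the numbers come from
# and how cleanly the responsibilities are separated.
one_step = raw_rgb * combined_wb
two_step = (raw_rgb * camera_norm) * illum_adapt
assert np.allclose(one_step, two_step)
```

The point of the split isn't different pixel values in this toy case; it's that the camera-specific part becomes a fixed, profileable constant, leaving the illuminant conversion to a proper profile-based transform.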