[argyllcms] Re: Beta RGB As a Color Workspace

  • From: "Andreas F.X. Siegert" <lists@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Sun, 20 Jul 2014 08:52:18 +0200

Hi Chris,

on 19.07.2014 11:34 Chris Lilley said the following:
>> On 2014-07-17 10:13, Chris Lilley wrote:
>>> Firstly, the rendering intent used for gamut mapping is not
>>> discussed. When a wide gamut image is reduced down to a very small
>>> one, relative colorimetric tends to keep the in-gamut colours the same
>>> and piles up all the OOG colours at the boundary to give flat areas
>>> where there used to be gradients. Perceptual, aka do something magic
>>> and undefined, tends to map the entire source gamut to the destination
>>> gamut in a content-unaware way which results in significant and
>>> needless desaturation.
> 
>> But that is irrelevant for the topic of whether to keep stuff wide till
>> the end or convert to small at the beginning.
> 
> I don't think so, because the gamut of an input-referred capture
> device does not seem to be well defined. So there are two mapping
> stages - input device to working gamut, and working to one or more
> eventual output gamuts.

Sure, there are two mapping stages. But in how many of the more widely used
programs do you have control over the rendering intent of the first stage at
all? So for me the question of the rendering intent of the input becomes
academic and I only worry about what I can influence.

> After the first mapping, then keeping stuff in a wide space is
> certainly a benefit.
That's basically all I am saying....

>> The conversion will have
>> to happen anyway. The question is, where do your losses happen or where
>> is the bigger chance that you loose something in your path from camera
>> to output.
>> And it also depends whether the tool chain you use even allows that 
>> selection.
>> Export from Lightroom or AfterShot for example is hardwired.
> 
> I agree that is problematic, but my point was made without regarding
> the additional limitations of specific products.

The problem is, people are constrained by whatever product they use and most
people will only us the widespread ones, in this case LR.


> ICC has added support for float profiles. Software like Krita can
> handle 32bit, float, and double working spaces. Software aimed at the
> film industry also tends to support higher bit depths.
Sure, but those are exotics when it comes to photography (been messing about
with FilmGimp and Cinepaint in the past).

> Certainly, software like Photoshop has never prioritized higher bit
> depths. Lok at the number of filters available with an 8bit RGB image;
> then convert it to 16bit RGB and look how many are greyed out; convert
> to 16bit Lab and look again. (And it used to be much worse).
This is the reason why I never bought PhotoShop in the first place. It was
not 16bit clean when I bought my first pixel editor.

> Lindbloom has shown that 16bit is insufficient for the physically
> realizable surface colour gamut in Lab. It therefore makes sense to
> ask whether an ultra-wide space like ProPhoto, with non-physical
> primaries, introduces a problem (banding) at the same time that it
> removes another (clipping).
Again, does that have a practical user changeable impact with current
products? No. Not for the majority.

cheers
afx
-- 
http://afximages.com/

Other related posts: