[argyllcms] Re: [argyllcms]

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Fri, 08 Sep 2006 12:01:56 +1000

Ben Goren wrote:

> Just to make sure I've been doing this right--
>
> I use Bruce Lindbloom's BetaRGB as my working space in Photoshop, etc. I convert images to that space early, and only convert them to something else for a specific reason (such as to sRGB as the last step in making something to post on the 'Net). Images (etc.) that I print (that I care about) are always in BetaRGB.
>
> I feed the BetaRGB profile to -S. This is correct, no?

That's a straightforward way of approaching it, and may work well enough in most cases.

You may find that it is not optimal though. Using a colorspace
to define a gamut is really a shorthand for convenience. Many
(perhaps most?) colorspaces we come across at the moment are
what is known as "output referred" colorspaces. An output referred
space represents an output medium (i.e. printer, display etc.), and
will have real world constraints on the size of its gamut.

Generally, when we get an image that is in such a colorspace,
it has been "rendered" to that colorspace's gamut. By rendered,
we mean that it has been optimized to best make use of the
output device's gamut. If this is the case, then the colorspace
indeed represents the gamut of the images encoded in that
space, and it's pretty convenient to use that space to setup
our gamut mapping to another output referred space. The
gamut mapping can be computed once, and used efficiently many
times on images of that colorspace, and the outputs will
be consistent among themselves.
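As a toy sketch of that convert-once, use-many-times pattern (pure Python; the single chroma scale factor standing in for a real gamut mapping, and both gamut figures, are illustrative assumptions, not how any real CMM works):

```python
# Shared-mapping workflow: assume every image fills its encoding
# colorspace's gamut, compute one mapping from the colorspace once,
# and reuse it unchanged for every image.  Because the mapping is
# fixed, all images are treated identically and the outputs stay
# consistent with each other.

SRC_SPACE_MAX_CHROMA = 100.0  # assumed gamut of the encoding space
DST_SPACE_MAX_CHROMA = 60.0   # assumed gamut of the output medium
SCALE = DST_SPACE_MAX_CHROMA / SRC_SPACE_MAX_CHROMA  # computed once

def render(lab_pixels):
    """Apply the one precomputed mapping to an image in Lab."""
    return [(l, a * SCALE, b * SCALE) for l, a, b in lab_pixels]

# Every image in this colorspace goes through the identical mapping:
print(render([(50, 40, 0)]))
print(render([(70, -30, 10)]))
```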

There is another sort of colorspace though, and that is
an "input referred" or "scene referred" colorspace. Such a
colorspace's gamut doesn't have to have any relationship to a
practical device, since its purpose is to represent colors as
they are, without rendering. In this situation, the colorspace
an image is encoded in tells you little or nothing about the
gamut the image actually occupies.
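This difference can be made concrete by measuring the gamut an image actually occupies. A minimal pure-Python sketch (the sRGB to Lab math is the standard conversion; the three-pixel "image" and the max-chroma summary are illustrative assumptions, whereas a real tool like tiffgamut computes a proper gamut surface from a full image):

```python
# Estimate the gamut an image actually occupies by converting each
# pixel to CIE Lab, instead of assuming the gamut of the colorspace
# the image happens to be encoded in.

def srgb_to_lab(r, g, b):
    """One sRGB pixel (0..1 floats) -> CIE Lab, D65 white point."""
    def lin(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ using the sRGB primaries
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # the CIE Lab cube-root nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def image_gamut_stats(pixels):
    """Max chroma and L* range the image actually uses."""
    labs = [srgb_to_lab(*p) for p in pixels]
    chroma = [(a * a + b * b) ** 0.5 for _, a, b in labs]
    ls = [l for l, _, _ in labs]
    return max(chroma), min(ls), max(ls)

# A muted, low-contrast "image" occupies only a small slice of the
# gamut its sRGB encoding would suggest:
muted = [(0.5, 0.45, 0.4), (0.4, 0.42, 0.45), (0.55, 0.5, 0.5)]
print(image_gamut_stats(muted))
```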

So the idea of converting images into a large gamut working
space before output works against the convenient shorthand we
tend to use, of assuming that the colorspace an image is
encoded in also represents its gamut.

To get good looking output, someone or something has to
optimize the image for the output medium. If an image
is delivered in a state in which it has been optimized for
one medium (say the colorspace it's encoded in), then it's
possible to do a fair job of re-rendering it to another
output medium automatically. That's the type of thing
that happens when converting through an ICC profile pair,
where the output gamut mapping has been setup correctly
for the two colorspaces involved, and a perceptual or
saturation intent is used.
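Conceptually, a perceptual intent compresses the whole source gamut smoothly into the destination gamut so that the relationships between colors survive. A deliberately crude sketch (the uniform chroma scaling in Lab is an illustrative assumption; real ICC gamut mapping is far more sophisticated):

```python
# What a perceptual-intent conversion does, conceptually: compress
# the whole source gamut smoothly into the destination gamut so the
# relationships between colors are preserved.  A uniform chroma
# scale in Lab is used purely to illustrate the idea.

def perceptual_compress(lab_pixels, src_max_chroma, dst_max_chroma):
    """Map the source gamut boundary onto the destination boundary;
    in-gamut colors move proportionally rather than being left alone."""
    scale = min(1.0, dst_max_chroma / src_max_chroma)
    return [(l, a * scale, b * scale) for l, a, b in lab_pixels]

# The mapping depends only on the two colorspaces, not on any image:
print(perceptual_compress([(50, 80, 0), (50, 10, 0)], 100.0, 60.0))
```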

If an image is delivered in a state in which it has
not been optimized for a particular medium, then
a different workflow is needed. Converting images into
a large gamut working space effectively converts images
into this state.

One possible workflow is to manually manipulate the images
so that they fit within a target gamut. The tool used would
have to encode the images during manipulation in a non-gamut
constrained way, and have a way of representing the target output
gamut constraints to the user. Something like Photoshop
probably allows this, as the images can be converted into
something like a large gamut RGB space, or even a non-gamut
constrained space like L*a*b*, and the "gamut alarm" feature
can be used as a (crude) guide as to how best to manipulate
the image to fit the targeted output gamut. After
manual adjustment, the image would normally be saved into
the target colorspace without further manipulation. If the
image was saved in a working space, then an external conversion
to the target output space would be done in a "colorimetric" way,
since no further manipulation of the gamut is desirable, it having
all been done by the user.
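By contrast with the perceptual case, a "colorimetric" conversion leaves in-gamut colors alone and only clips whatever still falls outside. A minimal sketch of that behaviour (again treating the destination gamut as a single max-chroma figure, which is an illustrative assumption):

```python
# A "colorimetric" style conversion: colors inside the destination
# gamut pass through unchanged, and only colors still outside it are
# clipped to the boundary; no overall re-rendering takes place.

def colorimetric_clip(lab_pixels, dst_max_chroma):
    out = []
    for l, a, b in lab_pixels:
        c = (a * a + b * b) ** 0.5
        if c > dst_max_chroma:
            s = dst_max_chroma / c  # pull straight in toward neutral
            a, b = a * s, b * s
        out.append((l, a, b))
    return out

print(colorimetric_clip([(50, 10, 0), (50, 80, 0)], 60.0))
```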

A more automated workflow would be one in which using the
colorspace the image is encoded in as shorthand for its gamut
is abandoned. In this sort of workflow, the gamut of each
image is measured, and then used to setup the gamut
mapping. A different gamut mapping (hence different
ICC profile or device link) would be needed for each
image in this workflow, and the output images would not
be consistent amongst themselves.
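A caricature of that per-image workflow in pure Python (max Lab chroma again stands in for a measured gamut surface, an illustrative assumption): the same Lab color gets different treatment depending on which image it arrives in, which is exactly why the outputs are no longer consistent among themselves.

```python
# Per-image workflow: measure each image's own gamut, then build a
# mapping for that image alone.

def max_chroma(lab_pixels):
    """The largest Lab chroma actually present in the image."""
    return max((a * a + b * b) ** 0.5 for _, a, b in lab_pixels)

def per_image_map(lab_pixels, dst_max_chroma):
    """Compress (never expand) this image's gamut into the target."""
    scale = min(1.0, dst_max_chroma / max_chroma(lab_pixels))
    return [(l, a * scale, b * scale) for l, a, b in lab_pixels]

# Two images share the color (L=50, a=40, b=0) but differ in gamut:
vivid_img = [(50, 40, 0), (60, 90, 0)]  # holds very saturated colors
muted_img = [(50, 40, 0), (60, 45, 0)]  # already fits the target
print(per_image_map(vivid_img, 60.0)[0])  # compressed
print(per_image_map(muted_img, 60.0)[0])  # left alone
```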

Argyll has the tools to try this type of workflow out :-
See <http://www.argyllcms.com/doc/tiffgamut.html>,
the -g flag of profile <http://www.argyllcms.com/doc/profile.html#g>,
the -G flag of icclink for the device link workflow.

Graeme Gill.
