[argyllcms] Re: Dark colors and very poor gray scale after printer profiling
- From: Ben Goren <ben@xxxxxxxxxxxxxxxx>
- To: argyllcms@xxxxxxxxxxxxx
- Date: Mon, 30 Jul 2018 08:10:30 -0700
On Jul 30, 2018, at 7:18 AM, edmund ronald <edmundronald@xxxxxxxxx> wrote:
> Let's be clear: as of now there can be no "man page" for color management
> because the behaviors just aren't logical.
And, alas, I don’t anticipate any solution short of AI. (Or, more
realistically, “smart” algorithms that “intelligently” “analyze” the contents
of your picture and “automatically” “enhance” the image for its “best” color.
We already see strong echoes of that going on with consumer-level equipment.
The automatic display white point adjustment with modern iPads is quite
impressive, for example....)
The current used-everywhere image formats all trace their heritage directly
back to the era of Atari-style computers, where there was a direct pathway
between the numerical value of the file and the voltage applied to the
phosphors of the CRT. And 0 meant no voltage and F or FF meant maximum voltage.
There was zero concern for anything critical for color management. What
chromaticity were the phosphors of this particular CRT? What was the actual
color of the display in its minimum- and maximum-voltage conditions? What sort
of gamma-like curve did the display follow in between? How did that change when the
user fiddled with the brightness and contrast knobs?
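As an aside, that gamma-like curve is commonly modeled as a simple power function. A minimal sketch in Python (the exponent 2.2 is an assumed, typical CRT-era value, not something from this thread):

```python
# Illustrative model of a CRT-style transfer curve: relative light output
# is roughly the normalized code value raised to a "gamma" exponent.
GAMMA = 2.2  # assumed typical value; real displays vary


def code_to_light(code, gamma=GAMMA):
    """Map a normalized code value (0.0-1.0) to relative light output."""
    return code ** gamma


def light_to_code(light, gamma=GAMMA):
    """Inverse mapping: relative light output back to a code value."""
    return light ** (1.0 / gamma)


# A mid-scale code value of 0.5 yields only about 22% of maximum light.
print(round(code_to_light(0.5), 3))
```

The point being that the file value never described light directly, only a knob position on one particular device.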
Those file formats remained logically the same when graphics-capable printers
came along, though the sense of the values might have gotten inverted in the
device driver somewhere — with 0 meaning maximum ink and 1 (or FF) meaning no ink.
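That inversion between an additive display and a subtractive printer is trivial; a minimal sketch (the 8-bit range is an assumption for illustration):

```python
# On an additive display, 0 = no light and 255 = maximum light.
# On a subtractive printer the same scale runs the other way:
# 0 = maximum ink and 255 = no ink.


def display_to_ink(value):
    """Naively invert an 8-bit display value into an ink-coverage value."""
    if not 0 <= value <= 255:
        raise ValueError("expected an 8-bit value")
    return 255 - value


print(display_to_ink(0))    # prints 255: full ink for display black
print(display_to_ink(255))  # prints 0: no ink for display white
```

Which is exactly the problem: the inversion is device bookkeeping, not colorimetry.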
None of that even vaguely resembles the century-old mathematical
characterization of color that Munsell and others developed that would have been
perfectly within the capabilities of, if not the first generations of graphics
computers, certainly the first generation of color-capable Macintoshes.
But, no. We’re _still_ stuck with image files that are barely one step removed
from raw device bit-fiddling, and you have to fight half the daemons of Hell
itself to work with color as actually perceived by humans.
And, sure. We already have the ability to store files as XYZ values or the
like. But who’s gonna get every single vendor to not only re-write code to
support it, but also do a not-miserable characterization of their devices with
reasonable built-in transforms for adaptation and the rest?
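And the math itself is no obstacle: going from sRGB to XYZ, for example, is a well-defined transform. A sketch using the published sRGB/D65 constants (the code structure is purely illustrative):

```python
# Convert one sRGB pixel (0.0-1.0 per channel) to CIE XYZ (D65 white).

SRGB_TO_XYZ = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)


def srgb_linearize(c):
    """Undo the sRGB transfer curve to get a linear-light value."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4


def srgb_to_xyz(r, g, b):
    """Matrix-multiply the linearized channels into XYZ."""
    lin = [srgb_linearize(c) for c in (r, g, b)]
    return tuple(sum(m * c for m, c in zip(row, lin)) for row in SRGB_TO_XYZ)


# White maps to the D65 white point: roughly (0.9505, 1.0000, 1.0890).
print(srgb_to_xyz(1.0, 1.0, 1.0))
```

The hard part was never the transform; it's getting every vendor to characterize their devices honestly on the other end of it.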
Even worse...which sales droid is going to convince consumers to pay for it all?
What that means is that only color geeks who care about this sort of thing will
be able to do it, and it’s up to us to convince our own clients that their
images look better after we’ve worked our magic on them.