[argyllcms] Re: ArgyllCMS V1.1.0 RC1 is now available

  • From: Graeme Gill <graeme@xxxxxxxxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Mon, 09 Nov 2009 15:18:50 +1100

Alastair M. Robinson wrote:

My own experiments with GPLin have been using Gutenprint with pretty much everything turned off - basically I feed in n-channel data and Gutenprint just dithers it and puts it on the page. I've found I need a quite extreme gamma adjustment (normally handled as part of Gutenprint's papertype definitions, but I'm bypassing that because I don't want any unknowns in the chain). I've just tried reading a linearization chart with no adjustment so I can show you the raw device response, but the adjustment is so extreme that without it my DTP41 can no longer distinguish certain patches from their spacers!

        I guess I don't understand how that could happen, unless the
ink is running off the page and obliterating the spacers. Have you
got an example (i.e. a .ti1 file + the command line that creates the
chart + which specific patches aren't distinguishable from their spacers)?

Anyhow, my point is this - it would be preferable for this gamma adjustment to be accounted for in the linearization curves created by Argyll rather than later in the chain. If the print path has significant non-linearity that Argyll doesn't know about, then as you explained to me once before, there are implications for ink limiting when profiling later on. (Ref: //www.freelists.org/post/argyllcms/Of-ink-limiting-and-maximizing-gamut,4)

I think it boils down to what bucket each adjustment falls into. I'm wondering
if what you're calling "gamma adjustment" is what I would expect should
be handled by "usable range". What I mean by this is that typically with
inkjet printers there are print modes in which the dot density is higher
than the dot coverage (due to multi-pass printing), leading to a maximum
coverage greater than 100% ink (ie. 400%, 800% or more). My expectation for
the printer calibration is that the usable range is being set and handled by
the printing system since it is closely tied to the printing mode.
So for instance, if 100% into the screening at maximum dot size puts
down 400% ink coverage, then the usable range will be somewhere around
0.25, so the calibration system's 0-100% gets scaled to 0-25% into
the screening. While the usable range adjustment could be incorporated into
the calibration system, it doesn't seem to sit very comfortably there to me
in the situation where calibration is a very common occurrence (used much more
often than profiling, for instance). Having to track the usable range each time,
with the consequence of getting it wrong being ink running off the page
is not very nice.
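As a rough sketch of the scaling described above (the function names and the 400% figure are purely illustrative, not ArgyllCMS code):

```python
# Illustrative sketch of the "usable range" scaling described above.
# If 100% into the screening at maximum dot size lays down 400% ink
# coverage, the usable range is about 100/400 = 0.25, and the calibration
# system's 0-100% gets scaled into 0-25% of the screening input.

def usable_range(max_coverage_pct: float) -> float:
    """Fraction of the screening input corresponding to 100% ink coverage."""
    return 100.0 / max_coverage_pct

def to_screening(cal_pct: float, max_coverage_pct: float) -> float:
    """Map a 0-100% calibration value into the screening's input range."""
    return cal_pct * usable_range(max_coverage_pct)

print(usable_range(400.0))         # 0.25
print(to_screening(100.0, 400.0))  # 25.0
```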

[Certainly on our Cyclone systems we didn't have to do a whole lot more
 than setting the usable range for our calibration system to then work
 well using a fixed 31 step chart, albeit with some finer test steps
 near white. The usable range sets the gross limits and puts the useful
 range of device values roughly in the middle, while the calibration
 target sets the fine per-channel limit. So if 50% device out of the calibration
 system resulted in (say) more than 80% measured coverage (30% dot gain),
 the usable range is probably set too high.]
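The rule of thumb in the bracketed note could be expressed as a quick check (a hypothetical helper for illustration, not part of any Argyll tool):

```python
# Hypothetical sanity check based on the note above: if 50% device out of
# the calibration system measures at more than ~80% coverage (i.e. more
# than 30% dot gain), the usable range is probably set too high.

def dot_gain(device_pct: float, measured_pct: float) -> float:
    """Measured coverage minus requested device value, in percent."""
    return measured_pct - device_pct

def usable_range_too_high(device_pct: float, measured_pct: float,
                          max_gain: float = 30.0) -> bool:
    return dot_gain(device_pct, measured_pct) > max_gain

print(dot_gain(50.0, 82.0))               # 32.0
print(usable_range_too_high(50.0, 82.0))  # True
print(usable_range_too_high(50.0, 70.0))  # False
```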

OK - maybe my inkjet's unusual there, then - but it seems as though the a/b loci will curve significantly, and even double back on themselves while L* continues merrily decreasing. See:


Right, but why is this a problem? Non-monotonicity in the a*b* plane
doesn't imply non-monotonicity in L*a*b*, and the linearization will
take care of making the locus uniform, while profiling
takes care of what each colorant and the combinations of colorant
are doing. [i.e. do you have an example of this causing a problem,
or some theory as to how it could cause a problem?]
The measure of DE from white vs. colorant level shows when you
get to diminishing returns.
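For reference, "DE from white" here is just the colour difference between each colorant step and the paper white; using the standard CIE76 formula (generic code, not Argyll's implementation, with illustrative L*a*b* values):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference between two L*a*b* values."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Tabulating delta_e76(white, step) against colorant level shows where the
# curve flattens out, i.e. where extra colorant gives diminishing returns.
white = (95.0, 0.0, 0.0)       # illustrative paper white
step  = (60.0, -30.0, -40.0)   # illustrative heavy cyan patch
print(round(delta_e76(white, step), 2))  # 61.03
```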

        Graeme Gill
