[argyllcms] Re: Profiling flexo presses

Roberto Michelena wrote:

> I used to judge the quality of the A2B table of a profile by its
> correspondence with the original sample data;

This is understandable, but it isn't the correct measure of A2B table quality. The correct measure is the error relative to the actual device behaviour. Of course, the original sample data may be the only available information about the actual device behaviour, hence the common and understandable use of it to judge A2B quality.

The approach I took to tuning the algorithms in Argyll's profiling
code was to create a high quality profile (lots of sample points,
say 10000) from a real device, and then treat that profile as a
perfectly characterized master test device. I take that profile,
sample it with a smaller, more realistic test set (ie. 500-3000
sample points), generate a sample profile from that, and then
compare it back to the master test device profile using 10000 or
more sample points. The settings that minimize this error are often
not those that minimize the error of the realistic test set to the
sample profile. Now maybe this indicates limitations of the
algorithms being used, or maybe it indicates that some degree of
smoothing actually benefits the actual profile accuracy.

> furthermore, being that
> sometimes the sample data was exactly the gridpoints (evenly spaced
> target), why wouldn't the A2B table be just a dump of such data?

In that situation I would generally agree, but even so, it could turn out that there is a better fit to the underlying device characteristic if the data is smoothed slightly in some way.

It's unlikely in real world situations that one would measure
the whole grid this way. The only situation I know of where this
happens is in characterizing movie film, where creating and
measuring 35937 patches automatically is actually feasible (ie.
that's about 25 minutes' worth of film).
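(35937 is 33 x 33 x 33, presumably a 33 step per channel grid; at
24 frames per second and one patch per frame, 35937 / 24 comes to
about 1497 seconds, ie. roughly 25 minutes.)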

> I understand "inaccuracy in the measurements"; and supposedly making 4
> or more measurements of the same target, then doing an intelligent
> average (discarding stray samples), should get rid of such inaccuracy.
> What do you mean by "sampling noise"?

You could do that, but would you be better off using those extra 3 measurements to sample other parts of the device space? (ie. stratify your sample in device space, as well as in repetition space). That's certainly what I'd tend to do, with the "averaging" becoming the "smoothing".
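As a toy illustration of that trade-off (my own sketch, nothing to do
with Argyll's internals, with a smoothing spline standing in for the
profile model), compare spending a fixed budget of 400 readings on
100 patches measured 4 times each and averaged, against 400 distinct
patches measured once and smoothed:

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
noise = 0.05
x_check = np.linspace(0, 1, 2000)

def device(x):                                    # stand-in 1D device response
    return np.sin(3 * x) + 0.3 * x

# Option A: 100 patches, 4 repeats each, averaged, interpolated exactly.
xa = np.linspace(0, 1, 100)
ya = device(xa) + rng.normal(0, noise, (4, 100))
fa = UnivariateSpline(xa, ya.mean(axis=0), s=0)   # s=0: pass through the averages

# Option B: 400 distinct patches, one reading each, smoothed fit.
xb = np.linspace(0, 1, 400)
yb = device(xb) + rng.normal(0, noise, 400)
fb = UnivariateSpline(xb, yb, s=400 * noise**2)   # smoothing does the averaging

for name, f in (("repeat and average", fa), ("spread and smooth", fb)):
    print(name, np.sqrt(np.mean((f(x_check) - device(x_check))**2)))

In this toy setup the spread-and-smooth option usually comes out
ahead, though of course the outcome depends on the noise level and
on how smooth the underlying response really is.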

By "sampling noise", I was trying to convey the idea of there being
noise due to the sampled nature of the information determining
the interpolated response, something analogous to quantisation noise
in other sampling situations. In a linear interpolation of a set of
sample points (for instance using Delaunay), this is evident in the
flat/inflection/flat/inflection nature of the model. Other models
(such as inverse distance weighted) have a different characteristic,
but (without going to a very great deal of trouble), the sparse nature
of the original samples will tend to show through (for instance, using
the simplest form of inverse distance weighting, the "slope" of the
interpolated "surface" will be zero at each sampling point.)
By fitting a higher resolution grid to the sample data, there is
the opportunity of having smoother transitions between the sample
points, disguising their presence in the overall characteristic.
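A minimal 1D illustration of that last point (my own sketch, not
Argyll's code): a simplest-form Shepard inverse distance weighted
interpolator, with the slope checked at each sample point.

import numpy as np

def idw(x, xs, ys, p=2.0, eps=1e-12):
    d = np.abs(x[:, None] - xs[None, :])    # distance from each x to each sample
    w = 1.0 / (d**p + eps)                  # simplest Shepard weights
    return (w * ys).sum(axis=1) / w.sum(axis=1)

xs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # sparse samples
ys = np.array([0.0, 0.40, 0.5, 0.70, 1.0])

x = np.linspace(0, 1, 1001)
slope = np.gradient(idw(x, xs, ys), x)
for xi in xs:
    i = np.argmin(np.abs(x - xi))
    print(f"slope near x={xi:.2f}: {slope[i]:+.5f}")

The printed slopes are tiny compared with the data's overall slope of
about 1.0, ie. the interpolated curve flattens out at every sample
point, so the sparse samples show through in the shape of the curve.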

> For the "typical device behaviour" part I'm a little more skeptical. If
> building an A2B of an offset press or a laminate proof, there's some
> smoothness to be expected - because they're analog processes - and
> therefore the samples should exhibit reasonable behaviour, and you
> should correct them if they don't (like PrintOpen's "automatically
> correct measurements").

There is a whole range of stuff one can do here, right down to very detailed models of a particular process, that will greatly improve the interpolation and reduce the number of samples needed, but I was mainly focused on the general case (which is what Argyll is set up for), where there is no assumption made about the particular process involved.

Even at this level, it is likely to be a more reasonable assumption that the device behaviour through and between the sample points is smooth, rather than discrete. It is hard to make models that are based on the original sample data behave smoothly and reasonably, whereas it is easier to make a higher resolution regular sample grid behave in a sensible way, adhering to a uniform smoothness metric, as well as allowing smoothness to be traded off against sample point interpolation accuracy.
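For what it's worth, here is a small 1D sketch of that last idea
(mine, not Argyll's actual code, which is multi-dimensional and
rather more involved): fit a higher resolution regular grid to
sparse samples, with a single smoothness weight trading curvature of
the grid off against how exactly the samples are reproduced.

import numpy as np

def fit_grid(xs, ys, n_grid=50, smooth=1e-2):
    # Fine regular grid spanning the sampled range.
    grid_x = np.linspace(xs.min(), xs.max(), n_grid)

    # A maps grid values to predictions at the sample locations by
    # linear interpolation between the two neighbouring grid points.
    idx = np.clip(np.searchsorted(grid_x, xs) - 1, 0, n_grid - 2)
    t = (xs - grid_x[idx]) / (grid_x[idx + 1] - grid_x[idx])
    A = np.zeros((len(xs), n_grid))
    A[np.arange(len(xs)), idx] = 1.0 - t
    A[np.arange(len(xs)), idx + 1] = t

    # D is a second difference operator: |D g|^2 measures grid curvature.
    D = np.diff(np.eye(n_grid), n=2, axis=0)

    # Minimize |A g - ys|^2 + smooth * |D g|^2 (regularized least squares).
    g = np.linalg.solve(A.T @ A + smooth * D.T @ D, A.T @ ys)
    return grid_x, g

With smooth very small the grid reproduces the samples almost
exactly (along with any noise in them); increasing it trades some of
that fidelity for a smoother overall characteristic.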

> But nowadays, what can you call typical in inkjets? With ink and paper
> technology ever changing, ultrachrome inks, gloss differential,
> swellable papers... CMYKRGB printers, variable drops, etc. etc...
> there's very little that can be assumed about them, and smoothness
> might not be taken for granted. So doesn't it make more sense to just
> leave the data as-is, maybe after averaging some prints and/or
> measurements?

But there is no such thing as "as is" in interpolation. You have to interpolate, and you have to assume some sort of model for the interpolation.

Graeme Gill.
