[argyllcms] Re: Best (smallest) profiling patch set

  • From: Gerhard Fuernkranz <nospam456@xxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Sat, 22 Apr 2006 22:01:53 +0200

Andrej Javoršek wrote:

I would like to get comments on my approach to finding the smallest possible (but still useful) number of patches for
quick profiling of a newsprint offset press. My motivation is to be able to profile the press with a small number of
patches that can be printed during a normal press run (without the need for a separate test-chart print run).
The patches would be printed in the border (cut-off) area of the paper, so I could probably only print some 50 to 100 patches.

The approach I'm considering is to first (and only once) print a really big test chart (4000 to 5000 patches) and measure it.
Fine, having a huge set of measured test patches for verification is a good idea.
After that I could use "splitcgats" to extract a fraction of those measurements, create a profile from that fraction, and run "profcheck" against the whole set to get the dE for that subset. I could repeat "splitcgats" 500 or 1000 times to get a statistically good representation. I could also repeat the same test for 2 or maybe 3 different papers, to see whether the resulting patch set is more or less the same for them.

IMO the important question is HOW splitcgats selects the small subset from the large set of patches, and how the selected patches are distributed in the color space.

Normally, in order to create a profile from 100 patches, you would run "targen -f 100 ..." to obtain 100 patches that are evenly distributed in CMYK space (though still with some randomness), and you would create the profile from these patches. I wonder whether a subset created by splitcgats also has this even-distribution property (as patches generated by targen do), but I rather suspect it does not.

Thus, in order to honor the distribution generated by targen (at least approximately) I'd probably take the following approach:

   * Create N patches with targen (where N is the number of desired
     patches, 50..100 as you said), using the same options you would
     use to generate the 50..100 test patches intended for being
     printed on the border
   * For each patch generated by targen, find the closest patch in your
     large set (minimum Euclidean distance in CMYK space), and add that
     patch from the large set to the .ti3 file which you use to
     create the N-patch profile.
   * Then create the profile and verify it against the huge test set.
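The matching step above can be sketched as follows. This is only an illustration: the CMYK tuples are hypothetical stand-ins for values that would in practice be parsed out of the targen .ti1 file and the measured .ti3 (CGATS) file.

```python
import math

def nearest_patch(target, measured):
    """Return the patch in `measured` closest to `target` (Euclidean distance in CMYK space)."""
    return min(measured, key=lambda p: math.dist(target, p))

# Tiny illustrative data (percent CMYK values), not real measurements:
targen_patches = [(50.0, 0.0, 0.0, 0.0), (0.0, 80.0, 10.0, 0.0)]
large_set = [(48.0, 1.0, 0.0, 0.0), (0.0, 79.0, 12.0, 0.0), (100.0, 0.0, 0.0, 0.0)]

# Each targen patch is replaced by its closest measured patch;
# the resulting subset is what would go into the N-patch .ti3 file.
subset = [nearest_patch(t, large_set) for t in targen_patches]
print(subset)  # → [(48.0, 1.0, 0.0, 0.0), (0.0, 79.0, 12.0, 0.0)]
```

Note that with this scheme the same measured patch can be picked twice if the large set is sparse in some region, which may be worth checking for.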


I have the ability to use an Xgrid consisting of 20+ iMacs, so speed is not the main concern.
The script I would feed to the grid is something like this:

# $number is supplied by the caller; $cgatsin is the full measurement set
splitcgats -n25 -r "$number" "$cgatsin" "$cgatsout$number.ti3" /dev/null
profile -qm -B "$cgatsout$number"
profcheck -k "$cgatsin" "$cgatsout$number.icc"

Then I could call that script an arbitrary number of times with a changing $number parameter (e.g. from 1 to 1000).
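The repeated split/profile/check trials could be driven by a small wrapper, sketched here in Python for clarity. The file names ("huge_set.ti3", "subset") and the single-machine loop are assumptions; on the Xgrid each `number` would instead be submitted as an independent job, and the tool options are taken verbatim from the snippet above.

```python
import subprocess

def trial_commands(number, cgats_in="huge_set.ti3", out_base="subset"):
    """Build the three Argyll command lines for one random-subset trial."""
    out = f"{out_base}{number}"
    return [
        # split off a subset, as in the quoted snippet (-r seeded with `number`)
        ["splitcgats", "-n25", "-r", str(number), cgats_in, f"{out}.ti3", "/dev/null"],
        # build a medium-quality profile from that subset
        ["profile", "-qm", "-B", out],
        # check the subset profile against the full measurement set
        ["profcheck", "-k", cgats_in, f"{out}.icc"],
    ]

def run_trial(number):
    for cmd in trial_commands(number):
        subprocess.run(cmd, check=True)

# On a single machine the 1000 trials would simply be:
#   for number in range(1, 1001):
#       run_trial(number)
```

Collecting the profcheck dE statistics per trial (rather than just letting them scroll by) would make the 500-1000 repetitions easy to summarize afterwards.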

- Does this sound like a reasonable solution?
- Is using an integer as the -r parameter to "splitcgats" a good thing to do, or is it better not to include the parameter at all?
- Is medium quality in "profile" good enough for this purpose, or could I use low?
- Is using dE2000 in my case really better than using dE76?
- Please reconsider and advise!

Best Regards
Andrej Javorsek
