Roberto Michelena wrote:
> This sounds quite obvious, or am I reading it wrong?
> Basically you're saying "if I profile or refine with an ECI2002, and
> then I use an ECI2002 to verify, I get better verification results
> than if I use another patch set to verify".

No, that's absolutely not what I mean. I do of course mean two different sets of patches, for instance a training set with just 1000 points and a test set with, say, 50000 points, but with both patch sets drawn either from the same spatial density distribution in the color space, or from different ones (where a spatial density distribution is, for instance, "equally spaced in device space" or "equally spaced in perceptual space", i.e. the distributions generated by the different targen options).

And in fact it seems not unreasonable (at least subjectively) that a test set drawn from the same density distribution as the training set will give a better verification result.

But here is the irony: how should we then judge the quality of a particular training set (or training-set distribution)? For each training set (or distribution), we may be able to find a test set which "proves" that this training set (or distribution) is better than any other. So how can we judge, for instance, whether targets that are "equally spaced in device space" or targets that are "equally spaced in perceptual space" give "better" results, if verification against one test set tells us the former are better, while verification against another test set tells us the opposite?

Regards,
Gerhard
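P.S. The effect is easy to reproduce in a toy setting. The sketch below is purely illustrative and is not ArgyllCMS code: it uses a hypothetical 1-D "device response" curve (`device_response`), a crude piecewise-linear "profile" fitted from a small equally spaced training set, and two equal-sized verification sets drawn from different densities. The same profile scores differently depending solely on which verification distribution is chosen.

```python
import numpy as np

# Hypothetical stand-in for a device's measurement curve: a smooth
# nonlinear 1-D "device value -> measured value" function (the real
# profiling problem is of course 3- or 4-dimensional).
def device_response(x):
    return np.sqrt(x)

# "Profile" built from a small training set, equally spaced in device space.
train_x = np.linspace(0.0, 1.0, 6)
train_y = device_response(train_x)

def profile(x):
    # Piecewise-linear interpolation through the training patches.
    return np.interp(x, train_x, train_y)

# Two verification sets of equal size, drawn from different densities:
#   test_a: equally spaced in device space (same density as the training set)
#   test_b: concentrated near x = 0, where the curve bends most strongly
u = np.linspace(0.0, 1.0, 5000)
test_a = u
test_b = u ** 3  # cubing squeezes the points toward 0

err_a = np.mean(np.abs(profile(test_a) - device_response(test_a)))
err_b = np.mean(np.abs(profile(test_b) - device_response(test_b)))

# The identical profile looks much worse when verified against the
# set whose density emphasizes the poorly sampled region.
print(err_a, err_b)
```

Here the verdict on the profile depends entirely on the verification set's density, which is exactly the judging problem described above.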