-------- Original Message --------
Date: Thu, 22 Feb 2007 14:31:41 +1100
From: Graeme Gill <graeme@xxxxxxxxxxxxx>

> Many other arrangements that intuitively seemed like a better idea
> (like evenly spaced patches in Lab/perceptual space), fared worse
> in terms of the resulting profile accuracy.

Graeme,

my experience is that the verification result depends significantly on the distribution of the test-set points used to assess the "resulting profile accuracy". My impression is that in the majority of cases I get better average errors whenever the same kind of distribution is used for both the training set and the test set.

For instance, "-I" training patches seem to give better results than the default if the test set is generated with "-I" too (or "-R"), while, vice versa, default-distributed training patches seem to give better verification results if the test set follows the default distribution too (or "-r").

But this makes it very hard to say which one is really "better". Which is the "right" test-set distribution to verify against?

Regards,
Gerhard
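
[The effect Gerhard describes follows from how a verification figure is usually computed: the reported accuracy is just an average of per-patch colour differences, so it is implicitly weighted by wherever the test patches happen to lie in colour space. The sketch below is a generic illustration of that averaging in Python, not Argyll code; the CIE76 formula and the toy patch values are assumptions chosen only to show that the same profile, sampled by two differently distributed test sets, yields different mean errors.]

```python
import math

def delta_e76(lab1, lab2):
    """Plain CIE76 colour difference between two Lab triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def summarize_errors(reference_labs, predicted_labs):
    """Mean and worst-case error over one verification test set.

    The mean is weighted by the test-set distribution: regions of
    colour space with more test patches dominate the average.
    """
    errs = [delta_e76(r, p) for r, p in zip(reference_labs, predicted_labs)]
    return sum(errs) / len(errs), max(errs)

# Toy example: the same (hypothetical) profile error field, sampled by
# two test sets concentrated in different regions, reports different
# average accuracy for the identical profile.
ref_neutral  = [(50, 0, 0), (70, 0, 0), (30, 0, 0)]
pred_neutral = [(50, 1, 0), (70, 1, 0), (30, 1, 0)]   # small errors near the neutrals

ref_saturated  = [(50, 60, 40), (40, -50, 30), (60, 20, -60)]
pred_saturated = [(53, 64, 40), (40, -55, 33), (64, 20, -63)]  # larger errors in the gamut corners

mean_n, max_n = summarize_errors(ref_neutral, pred_neutral)
mean_s, max_s = summarize_errors(ref_saturated, pred_saturated)
print(f"neutral-heavy test set:   mean {mean_n:.2f}, max {max_n:.2f}")
print(f"saturated-heavy test set: mean {mean_s:.2f}, max {max_s:.2f}")
```

[With these made-up numbers the neutral-heavy set reports a much smaller mean error than the saturated-heavy one, even though both "verify" the same profile, which is exactly why the choice of test-set distribution shapes the verdict.]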