[argyllcms] Re: colprof with 836 data points
- From: Gerhard Fuernkranz <nospam456@xxxxxx>
- To: argyllcms@xxxxxxxxxxxxx
- Date: Tue, 4 Feb 2020 00:03:03 +0100
Am 03.02.20 um 21:13 schrieb graxx@xxxxxxxxxxxx:
Decided to “Go for the gold!” and got 836 data points for my lowly NEC PA271W
monitor.
Here are the important statistics:
> peak err = 1.146645, avg err = 0.277301, RMS = 0.314956
For 300 data points, the statistics were:
> peak err = 0.888202, avg err = 0.244000, RMS = 0.285845
As Gerhard pointed out, the statistics may have gone up, but the “splines” have
more “real data” to “chew on” instead of leaving whole swaths of color space
uncharacterized.
Since you already have the readings now, the 836 points might be a good test
set for checking the 300-point profile (and also vice versa, if the points in
the two sets are independent, i.e. if the 300 points are not just a subset of
the 836 ones).
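As an aside, the peak/avg/RMS figures quoted above can be reproduced for such a held-out test set with a few lines of code. This is only a sketch with made-up Lab values, and it assumes plain Euclidean ΔE*76 as the error metric (colprof can also report other metrics):

```python
import math

# Hypothetical measured vs. profile-predicted CIELAB values for a test set.
measured  = [(50.0, 10.0, -5.0), (70.2, -3.1, 20.0), (30.5, 0.0,  0.0)]
predicted = [(50.3, 10.2, -5.1), (70.0, -3.0, 19.7), (30.5, 0.1, -0.2)]

def delta_e76(lab1, lab2):
    """Euclidean distance in CIELAB (ΔE*76)."""
    return math.dist(lab1, lab2)

errors = [delta_e76(m, p) for m, p in zip(measured, predicted)]
peak = max(errors)
avg  = sum(errors) / len(errors)
rms  = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"peak err = {peak:.6f}, avg err = {avg:.6f}, RMS = {rms:.6f}")
```

Running this over points the profile was not fitted to gives a more honest picture than the self-fit statistics, since the latter reward overfitting.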
What is the meaning of “Overdetermination” in the context of fitting splines?
I mean a system with more equations than unknowns (or in the context of data
fitting: more data points than the number of model parameters which need to be
estimated).
[ But agreed, I'm not sure whether this term should also be used in the context
of a non-parametric model, which is per se significantly _underdetermined_ (due
to its huge number of parameters), and which is only constrained via
regularization (smoothing) to an _effective_ number of parameters lower than
the number of supplied data points. ]
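The "effective number of parameters" idea above can be made concrete with a small numerical sketch. This is a generic ridge-regularized linear fit, not ArgyllCMS's actual spline machinery; the point is only that for a smoother S = X(XᵀX + λI)⁻¹Xᵀ, the effective parameter count trace(S) shrinks as the smoothing weight λ grows, even when the nominal parameter count exceeds the number of data points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy basis matrix: more nominal parameters than data points, i.e. the
# unregularized system is underdetermined.
n_points, n_params = 30, 100
X = rng.standard_normal((n_points, n_params))

for lam in (1e-6, 1.0, 100.0):
    # Hat matrix of the ridge fit; its trace is the effective number
    # of parameters actually "used" by the smoothed model.
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T)
    print(f"lam={lam:g}  effective params = {np.trace(S):.2f}")
```

With negligible smoothing the effective count approaches the number of data points (the rank of X); heavier smoothing pushes it well below that, which is the sense in which regularization restores an "overdetermined-like" situation.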
Regards,
Gerhard