Elle Stone wrote:
> Does anyone have a clue what really is loading this "almost linear
> calibration but with rounding errors" vcgt information upon computer
> startup?

Hmm. It's not a 16-bit scaling error introduced somewhere in X11 or the
video driver by using the wrong "convert to 16 bits by shifting by 8 bits"
in an attempt at setting a linear VideoLUT, is it?

Wrong:

    val16 = val8 << 8;

If you do this you get:

    0.000000, 0.000000
    0.003922, 0.003906
    0.007843, 0.007813
    0.011765, 0.011719
    0.015686, 0.015625
    0.019608, 0.019532
    0.023529, 0.023438
    0.027451, 0.027344
    0.031373, 0.031250
    0.035294, 0.035157
    .
    .
    0.976471, 0.972671
    0.980392, 0.976577
    0.984314, 0.980484
    0.988235, 0.984390
    0.992157, 0.988296
    0.996078, 0.992203
    1.000000, 0.996109

If so, this is a coding error, not a stray calibration. (Note that 16-bit
values are used internally in X11.)

The correct way to scale 8 bits to 16 bits is to multiply by 257, or shift
by 8 bits and OR in the original value:

    val16 = val8 * 257;

or

    val16 = (val8 << 8) | val8;

Graeme Gill.