Ivan Kolesov wrote:
> Well, the main question is how come Argyll's and Windows' default gamma
> curves are different? Also, what can be done about it, or should anything
> be done? How does this affect the overall calibration and profiling
> processes?

Hi,
	I think this may have come up some time ago - the summary is that
some people don't know how to scale integers to a higher precision.

The Windows VideoLUT structure table entries are 16 bit, irrespective
of what the hardware is. The correct values for a 256 step linear
curve are:

    0/255 * 65535 =     0
    1/255 * 65535 =   257
    2/255 * 65535 =   514
    .
  255/255 * 65535 = 65535

A way of computing this using integer code is:

  val16 = (val8 << 8) | val8;

The wrong way of doing it is to truncate:

  val16 = val8 << 8;

This will only give the same result as the above if the hardware
entries are 8 bit. If the entries are more than 8 bits, you aren't
getting the full range of video levels. My guess is that this is
what you are noticing. Running "dispwin -s filenameX" at system
start and again after "dispwin -c", then comparing the saved values,
would show exactly what's going on.

V 0.7 of xcalib fixed this problem, I think. From what you say,
MSWin has a similar problem.

Graeme Gill.
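
P.S. For anyone who wants to check the numbers themselves, here is a
minimal, self-contained C sketch of the two expansions. The function
and variable names are illustrative only, not taken from Argyll's or
xcalib's source:

  #include <stdio.h>
  #include <stdint.h>

  /* Correct: replicate the 8 bit value into the low byte, so
   * 0 -> 0 and 255 -> 65535 (equivalent to multiplying by 257). */
  static uint16_t expand8to16(uint8_t v)
  {
      return (uint16_t)((v << 8) | v);
  }

  /* Wrong: truncating expansion. 255 -> 65280, so the top of the
   * range is never reached if the hardware has more than 8 bits. */
  static uint16_t expand8to16_truncated(uint8_t v)
  {
      return (uint16_t)(v << 8);
  }

  int main(void)
  {
      /* Print a few sample entries of a 256 step linear curve. */
      for (int v = 0; v < 256; v += 51)
          printf("%3d -> %5u (correct)  %5u (truncated)\n",
                 v, (unsigned)expand8to16(v),
                 (unsigned)expand8to16_truncated(v));
      return 0;
  }

The truncated version maps 255 to 65280 instead of 65535, i.e. the
brightest video level falls about 0.4% short of full scale.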