Hi,

Thank you for the prompt answer. I don't think I have enough technical knowledge here, but from a layman's perspective: if the curves are linear, should the bit depth matter?

Anyway, I downloaded the latest xcalib from their site, ran it with the -c parameter (clear), and saved the resulting curves to a cal file with dispwin -s. Then I reset the curves with Argyll using dispwin -c and saved the resulting curves to another cal file. When I compare these two cal files by loading one and running dispwin -V with the other, I get the same 0.4% discrepancy as before. By now this should not be a bit-depth issue, should it? It seems a bit puzzling.

Ivan

2013/8/8 Graeme Gill <graeme@xxxxxxxxxxxxx>

> Ivan Kolesov wrote:
>
> > Well, the main question is: how come Argyll's and Windows' default gamma
> > curves are different? Also, what can be done about it, or should anything
> > be done? How does this affect the overall calibration and profiling
> > processes?
>
> Hi,
>
> I think this may have come up some time ago. The summary is that
> some people don't know how to scale integers to a higher precision.
>
> The Windows VideoLUT structure's table entries are 16 bit, irrespective
> of what the hardware is. The correct values for a 256-step linear curve
> are:
>
>     0/255 * 65535 = 0
>     1/255 * 65535 = 257
>     2/255 * 65535 = 514
>     ...
>   255/255 * 65535 = 65535
>
> A way of computing this using integer code is:
>
>     val16 = val8 << 8 | val8
>
> The wrong way of doing it is to truncate:
>
>     val16 = val8 << 8
>
> This will only be the same as the above if the entries are 8 bit.
> If the entries are more than 8 bit, you aren't getting the
> full range of video levels. My guess is that this may be what you
> are noticing.
>
> Doing a "dispwin -s filenameX" on system start and again after dispwin -c,
> and comparing the values, would indicate exactly what's going on.
>
> V 0.7 of xcalib fixed this problem, I think. From what you say,
> MSWin has a similar problem.
>
> Graeme Gill.