Ivan Kolesov wrote:
> I don't think I have enough tech knowledge here, but from the layman's
> perspective, if the curves are linear, should the bitdepth matter?

Using "dispwin -s" you can see that the last entries after "xcalib -clear" are:

  0.98039 0.97658 0.97658 0.97658
  0.98431 0.98048 0.98048 0.98048
  0.98824 0.98439 0.98439 0.98439
  0.99216 0.98830 0.98830 0.98830
  0.99608 0.99220 0.99220 0.99220
  1.00000 0.99611 0.99611 0.99611

while the "dispwin -c" values are:

  0.98039 0.98039 0.98039 0.98039
  0.98431 0.98431 0.98431 0.98431
  0.98824 0.98824 0.98824 0.98824
  0.99216 0.99216 0.99216 0.99216
  0.99608 0.99608 0.99608 0.99608
  1.00000 1.00000 1.00000 1.00000

So xcalib sets a value for 100% of 0.99611 = 65280/65535, which is 0xff00, whereas it should be 0xffff. It still has a bug.

Looking at the source code, the problem is in xcalib's MSWin emulation of the XF86VidModeSetGamma() function. xcalib should avoid this function and set the lookup values itself. Of course it's possible that XF86VidModeSetGamma() actually knows the hardware precision, while SetDeviceGammaRamp() just caches the 16 bit values.

Graeme Gill.
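
PS: For illustration, here is a minimal sketch (my guess at the failure mode, not xcalib's actual source) of the suspected scaling bug, and of writing a linear ramp directly with SetDeviceGammaRamp() instead of going through the XF86VidModeSetGamma() emulation:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    WORD ramp[3][256];          /* 256 entries each for R, G, B */
    HDC hdc = GetDC(NULL);      /* screen DC for the primary display */

    for (int i = 0; i < 256; i++) {
        /* Suspected bug: scaling the 8 bit index with a plain shift
         * tops out at 255 << 8 = 0xff00 = 0.99611, never 0xffff:
         *
         *   WORD v = (WORD)(i << 8);
         */

        /* Correct linear ramp: map 0..255 onto the full 0..65535
         * range, so entry 255 comes out as 0xffff.
         * (Equivalently: (i << 8) | i.) */
        WORD v = (WORD)((i * 65535 + 127) / 255);

        ramp[0][i] = ramp[1][i] = ramp[2][i] = v;
    }

    if (!SetDeviceGammaRamp(hdc, ramp))
        fprintf(stderr, "SetDeviceGammaRamp failed\n");

    ReleaseDC(NULL, hdc);
    return 0;
}

Writing the 256 x 16 bit table directly like this leaves no room for an intermediate float-to-gamma conversion to lose the top code value.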