On 5/7/12, James Cloos <cloos@xxxxxxxxxxx> wrote:
>>>>>> "GG" == Graeme Gill <graeme@xxxxxxxxxxxxx> writes:
>
> GG> The correct way to scale 8 bits to 16 bits is to multiply by 257,
>
> X11's dix (device independent) layer does do that (the change went in
> maybe 10 to 15 years ago; I remember the discussion on the xfree
> lists, but cannot concretely date it). Drivers, OTOH, still *might*
> get that wrong....
>
> -JimC
> --
> James Cloos <cloos@xxxxxxxxxxx> OpenPGP: 1024D/ED7DAEA6

I checked the fourth and last Linux computer in the house, a
twelve-year-old Dell laptop with an Nvidia graphics card, running a
very minimal Debian Squeeze install. I uninstalled all the graphics
drivers except vesa, nv, and nouveau; according to lsmod, nouveau is
the one in use. "dispwin -s check.cal" shows the same "coding error"
values that Graeme gave.

All four computers were using Icewm, so on the old Dell I uninstalled
Icewm, installed twm per Kai-Uwe Behrmann's suggestion, cleared the
calibration data using "dispwin -c", and restarted the computer. The
same "coding error" values are loaded on startup.

The Aspire laptop has integrated Intel graphics, so I uninstalled the
nouveau and vesa drivers (nv isn't listed in Sid), cleared the
calibration, and restarted. Same values upon restarting. I don't
think it's the graphics drivers.

The Dell has no graphics applications installed other than Argyll
(so I could run dispwin). I've never profiled or calibrated it, and
there are no graphics programs, ICC profiles, device profiles, etc.
on it (geeqie was installed, but I removed it). So I don't see where
some kind of weird stray configuration file could be coming from.

And again, "xgamma -gamma 1.00" DOES load the "coding error" values.
xgamma is part of the X server utilities.

Elle

--
Elle Stone
http://ninedegreesbelow.com
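
P.S. For anyone who wants to check the arithmetic, here is a minimal C
sketch of the scaling Graeme describes. Multiplying by 257 replicates
the byte into both halves of the 16-bit word, so 0 maps to 0 and 255
maps to 65535 exactly. The shift-only variant is my guess at one way a
driver could get it wrong; nothing in this thread confirms that is
what any particular driver does.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Exact 8 -> 16 bit scaling: multiplying by 257 replicates the
       byte into both halves of the 16-bit word, so 0 -> 0 and
       255 -> 65535 with no rounding error. */
    static uint16_t scale8to16(uint8_t v)
    {
        return (uint16_t)(v * 257);   /* same as (v << 8) | v */
    }

    /* A shift-only variant multiplies by 256, so 255 -> 65280 and
       full white is never reached.  (A guess at one plausible bug,
       not confirmed against any particular driver.) */
    static uint16_t scale8to16_shift(uint8_t v)
    {
        return (uint16_t)(v << 8);
    }

    int main(void)
    {
        const uint8_t samples[] = { 0, 1, 128, 254, 255 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
            printf("%3u -> %5u (x257)   %5u (<<8)\n",
                   (unsigned)samples[i],
                   (unsigned)scale8to16(samples[i]),
                   (unsigned)scale8to16_shift(samples[i]));
        return 0;
    }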
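
P.P.S. To see exactly what is sitting in the video card LUT after
running dispwin or xgamma, the ramp can be read back through the
XF86VidMode extension (xgamma uses that extension; dispwin may go
through XRandR instead). This is a quick sketch for a single-head
setup, assuming a 256-entry ramp where a correctly coded linear ramp
has entry i equal to i*257 and the shift-only pattern would be i*256;
adjust the check to whatever values dispwin actually reports.

    /* Read back the video card gamma ramp via the XF86VidMode
       extension.  Build with:
         cc check_ramp.c -o check_ramp -lXxf86vm -lX11 */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "can't open display\n");
            return 1;
        }
        int screen = DefaultScreen(dpy);

        int size = 0;
        if (!XF86VidModeGetGammaRampSize(dpy, screen, &size) || size <= 0) {
            fprintf(stderr, "can't get gamma ramp size\n");
            return 1;
        }

        unsigned short *r = malloc(sizeof *r * size);
        unsigned short *g = malloc(sizeof *g * size);
        unsigned short *b = malloc(sizeof *b * size);
        if (!r || !g || !b ||
            !XF86VidModeGetGammaRamp(dpy, screen, size, r, g, b)) {
            fprintf(stderr, "can't read gamma ramp\n");
            return 1;
        }

        /* For a 256-entry LUT, a correctly coded linear ramp ends at
           65535; a last entry of 65280 would be the shift-only
           pattern. */
        printf("ramp size %d; red[0]=%u red[1]=%u red[last]=%u\n",
               size, (unsigned)r[0], (unsigned)r[1],
               (unsigned)r[size - 1]);

        if (size == 256) {
            int exact = 0;
            for (int i = 0; i < size; i++)
                if (r[i] == i * 257)
                    exact++;
            printf("%d of %d red entries equal i*257\n", exact, size);
        }

        free(r); free(g); free(b);
        XCloseDisplay(dpy);
        return 0;
    }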