On Sun, 2008-01-13 at 00:42 +0100, Frédéric Crozat wrote:
> > I think I understand that. From my somewhat rudimentary understanding
> > of the code, the LUTs are loaded by standard functions in the X11 VidMode
> > extension. There is a structure, the programs just copy the
> > information into it, and X then takes care of the rest. I presume under
> > these circumstances that if you load one set of numbers and then, with
> > another application, load some other set of numbers, the second loading
> > just writes over what was there previously. So at least for the
> > monitor, you don't have to worry about composing curves when you only
> > want one to be operative. I hope that is correct.
>
> Not entirely (or your explanation is incomplete).
>
> LUT data is only loaded by dispwin (or xcalib) and is never modified
> by applications (except gnome-screensaver ;).
> GIMP (and other color-managed applications) modify their own
> rendering (i.e. the colors they ask the X server to render), based on
> display characterization data, in order to get the expected colors (as
> measured by a colorimeter). This color management is not done by the X
> server but by the applications, most of them using the littleCMS library
> to take care of it. This is why you get a different rendering if you
> disable color management in GIMP (or if you use a non-color-managed
> application, such as gthumb or Firefox 2), even if the display is
> correctly calibrated, thanks to the LUTs.

I'm still finding it hard to visualize what happens to a pixel in a file. I can see that it might have to be modified while editing to represent it in some device-independent manner as a "color" in a color space, but presumably that isn't done by the display profile. I also see why that triple of values has to be modified so that what appears on the screen actually represents that color, within the limits of what the hardware can do. But there appears to be some division of labor in doing that which I don't understand.
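To restate the overwrite behavior from the quoted text as a toy sketch (plain Python, not the real XF86VidModeSetGammaRamp call; the class and ramp values are made up for illustration):

```python
# Conceptual sketch of the LUT overwrite semantics described above:
# loading a second set of LUT values simply replaces the first; the
# curves are never composed.

class FakeVideoLut:
    """Stand-in for the per-channel gamma ramp held by the video card."""
    def __init__(self, size=256):
        self.ramp = list(range(size))            # identity ramp by default

    def load(self, ramp):
        """Mimics a VidMode gamma-ramp load: the new ramp replaces the old."""
        self.ramp = list(ramp)

lut = FakeVideoLut()
first = [min(255, i + 10) for i in range(256)]   # e.g. loaded by dispwin
second = [max(0, i - 10) for i in range(256)]    # e.g. loaded later by xcalib

lut.load(first)
lut.load(second)
assert lut.ramp == second                        # only the last load is in effect
```

So, as you say, only one curve is ever operative for the monitor at a time.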
Is the point that the data in the display profile is used by the application to deal with differing rendering intents, gamuts, etc. before sending the values to X, which then uses the loaded LUT? In particular, would pixel values already represented in a device-independent color space, which happen not to need altering for rendering intent or other such issues, just be sent unaltered to X, where the loaded LUT would then make the last corrections? Also, in connection with this, it seems that the application could do everything without loading any LUTs into X. So what would be the point of doing it with xcalib? Presumably it is that other applications which don't explicitly do color management rely implicitly on the display calibration, and if the display is not "calibrated", what you see on the screen for such applications might be wildly off target.
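Putting the two stages together, here is a toy sketch of the division of labor as I currently understand it (plain Python; the gamma numbers and the profile transform are invented for illustration, not real littleCMS or X calls):

```python
# Conceptual sketch: the video-card LUT, loaded once by dispwin/xcalib,
# is applied to *every* pixel any application sends, while a color-managed
# application additionally converts its pixel values using the display
# profile before handing them to X.

def build_calibration_lut(gamma=1.8, size=256):
    """Mimic the per-channel 1D calibration LUT loaded via the VidMode
    extension. The gamma value here is an arbitrary example."""
    return [round(255 * (i / 255) ** (1.0 / gamma)) for i in range(size)]

def server_applies_lut(pixel, lut):
    """The video hardware passes every output pixel through the loaded
    LUT, regardless of which application drew it."""
    r, g, b = pixel
    return (lut[r], lut[g], lut[b])

def color_managed_app(pixel, transform):
    """A color-managed application (e.g. GIMP via littleCMS) converts the
    pixel itself, using the display profile, before sending it to X."""
    return tuple(transform(c) for c in pixel)

# Hypothetical display-profile correction (stand-in for what littleCMS
# would compute from the characterization data).
profile_transform = lambda c: round(255 * (c / 255) ** (2.2 / 1.8))

lut = build_calibration_lut()            # loaded once by xcalib/dispwin
pixel = (128, 64, 200)

# Non-managed app (gthumb, Firefox 2): pixel goes straight to X,
# and only the LUT calibration is applied.
unmanaged = server_applies_lut(pixel, lut)

# Managed app: profile conversion first, then the same LUT.
managed = server_applies_lut(color_managed_app(pixel, profile_transform), lut)

print(unmanaged, managed)   # the two paths generally differ
```

If that is right, it would explain why a calibrated display alone is not enough for non-managed applications, and why managed applications still benefit from the loaded LUT.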