Nikolay Pokhilchenko wrote:
> Well, it's a good question. I didn't want to have to explain to end users
> why I'm asking them to print 16-bit targets when they generally print
> 8-bit images. I want to build good profiles at the user's habitual image
> bit depth. The end user could disable dithering by mistake, and if I sent
> them 16-bit target files, it would then be impossible to characterize ink
> mixtures between quantization levels. On the other hand, when printing at
> high printer resolutions with dithering, the intermediate ink quantities
> can be characterized easily, without 16-bit test charts and without
> dithering on the user's side.

Hi,
	I guess I'm a bit puzzled as to how this can work technically.
8 bit dithering or screening is effectively linear interpolation between
the 8 bit values by spatial averaging, and with a purely additive device
this leads to linear interpolation between its 8 bit device output values,
adding no effective extra precision to the device measurement (see the
sketch in the P.S. below).

Apart from some very specific cases (such as instrument quantization,
and/or an inherently 8 bit printing process such as dye sublimation etc.),
the only way I could imagine this helping is if the linear light to
perceptual characteristic is extremely steep at some points, and that
might be viewed as a flaw in the device model, since it would be
interpolating in a space that is not a good model of the device's
behaviour.

I also guess that this is only relevant for very specific test charts
(calibration charts?), since typical profiling charts don't have enough
patches to crowd them close enough to be 1/256 of the gamut apart or
less. A 1000 patch chart over a 3 channel device space, for instance,
gives only about 10 steps per axis.

Graeme Gill.
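P.S. As a toy illustration of the additive averaging point above, here's a
rough Python sketch. The device response curve, patch size and numbers are
made up purely for illustration; it just shows that the spatial average of
a dithered patch carries no information beyond the two 8 bit endpoint
values:

import numpy as np

# Hypothetical additive device: light output is some fixed nonlinear
# function of the 8 bit device value (made-up response, illustration only).
def device_light(dv8):
    return (dv8 / 255.0) ** 2.2

# Dither a fraction f of the pixels in a patch up from level v to v+1,
# then spatially average the light, as a measuring instrument would.
def dithered_patch_light(v, f, n=100_000):
    rng = np.random.default_rng(0)
    pix = np.where(rng.random(n) < f, v + 1, v)
    return device_light(pix).mean()

v, f = 100, 0.37
measured = dithered_patch_light(v, f)
interpolated = (1 - f) * device_light(v) + f * device_light(v + 1)
print(measured, interpolated)  # agree to within sampling noise, i.e. the
                               # dither adds no sub-quantization precision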