[argyllcms] Re: Argyll and 30-bit colors
- From: ternaryd <ternaryd@xxxxxxxxx>
- To: argyllcms@xxxxxxxxxxxxx
- Date: Thu, 11 Aug 2016 10:44:14 +0200
On Thu, 11 Aug 2016 17:49:41 +1000
Graeme Gill <graeme@xxxxxxxxxxxxx> wrote:
Does Argyll support 10-bit per color
The general profiling is floating point - so
there are no depth assumptions.
I'm new to all this, having realized that there
is much less information on the net than I had
hoped. At the risk of some misinformation, so
far I think I have understood that a 10-bit
monitor (and a supporting video card) still
works in 8 bits by default, in order to let
applications display their 'traditional'
images. But an application may open a window
in a special way such that all 10 bits can be
used. As far as I can see, this requires
OpenGL.
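To make the "all 10 bits" point concrete, here is a small illustrative sketch (not from the nVidia PDF, and not Argyll code) of the common X2R10G10B10 pixel layout, in which a 30-bit color packs three 10-bit channels into one 32-bit word:

```python
# Hypothetical illustration: packing/unpacking a 30-bit pixel in the
# X2R10G10B10 layout (10 bits per channel in a 32-bit word; the top
# two bits are unused).

def pack_x2r10g10b10(r, g, b):
    """Pack three 10-bit channel values (0..1023) into one 32-bit word."""
    assert all(0 <= c <= 1023 for c in (r, g, b))
    return (r << 20) | (g << 10) | b

def unpack_x2r10g10b10(word):
    """Recover the three 10-bit channels from a packed 32-bit word."""
    return (word >> 20) & 0x3FF, (word >> 10) & 0x3FF, word & 0x3FF

pixel = pack_x2r10g10b10(1023, 512, 0)
print(unpack_x2r10g10b10(pixel))  # -> (1023, 512, 0)
```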
Here is a PDF from nVidia which contains some
code snippets for Windows and Linux:
While I would expect that OpenGL also works
with cards from other manufacturers, I realise
that including OpenGL support opens a
Pandora's box for developers and users. But
others have done it, and it would be nice as
an option, as the industry seems to be moving
slowly but steadily in that direction, even if
it fakes the 10 bits by dithering.
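The dithering just mentioned can be sketched roughly like this — a toy temporal dither that approximates a 10-bit level on an 8-bit output by alternating between the two nearest 8-bit codes (purely illustrative; real drivers and panels use their own schemes):

```python
# Illustrative sketch of temporal dithering: show a 10-bit level on an
# 8-bit output by alternating between the two nearest 8-bit codes so
# that the time-averaged output matches the 10-bit target.

def dither_10_to_8(v10, n_frames=1024):
    """Return n_frames 8-bit codes whose average approximates v10/1023."""
    target = v10 / 1023 * 255      # ideal 8-bit level (fractional)
    lo = int(target)               # nearest code below the target
    frac = target - lo             # fraction of frames to show lo + 1
    acc = 0.0
    frames = []
    for _ in range(n_frames):
        acc += frac
        if acc >= 1.0:
            acc -= 1.0
            frames.append(min(lo + 1, 255))
        else:
            frames.append(lo)
    return frames

frames = dither_10_to_8(513)
avg = sum(frames) / len(frames)
print(avg, 513 / 1023 * 255)  # the average sits very close to the target
```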
Profiling my monitor, I get failed patches
around the black level, and I hoped that using
10 bits could help.
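A back-of-the-envelope illustration of why extra depth could matter near black: under a simple gamma-2.2 response (an assumption for this sketch, not a measurement of any particular monitor), the luminance step between the darkest codes is far coarser in 8 bits than in 10:

```python
# Illustrative: relative luminance of the darkest non-zero codes under
# an idealized gamma-2.2 response, comparing 8-bit and 10-bit depth.

def level_luminance(code, bits, gamma=2.2):
    """Relative luminance (0..1) of an integer code at a given bit depth."""
    full = (1 << bits) - 1
    return (code / full) ** gamma

step8 = level_luminance(2, 8) - level_luminance(1, 8)
step10 = level_luminance(2, 10) - level_luminance(1, 10)
print(step8 / step10)  # the 8-bit step near black is roughly 21x coarser
```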
Display calibration supports arbitrary depth
VideoLUT entry sizes, and arbitrary depth
X11 frame buffers are supported, but it seems
neither Apple nor Microsoft have anticipated
displays with more than 8 bits/component,
and I have no idea if either has added such
support to their recent APIs.
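To illustrate what "arbitrary depth VideoLUT entry sizes" means in practice, here is a hedged sketch (not Argyll's actual code) of sampling a floating-point calibration curve into integer LUT entries of any bit depth:

```python
# Hypothetical sketch: quantize a floating-point calibration curve
# (here a plain inverse-gamma ramp) into integer VideoLUT entries of
# an arbitrary bit depth.

def make_videolut(curve, entries=256, bits=10):
    """Sample `curve` (a 0..1 -> 0..1 function) into n-bit integer entries."""
    full = (1 << bits) - 1
    return [round(curve(i / (entries - 1)) * full) for i in range(entries)]

lut = make_videolut(lambda x: x ** (1 / 2.2), entries=256, bits=10)
print(lut[0], lut[-1], max(lut))  # spans 0..1023, the full 10-bit range
```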
I am a pure Linux user, but I have seen enough
evidence that 30-bit color is possible in
Windows. Sending 10-bit colors when displaying
the patches that a colorimeter or spectrometer
measures might make a difference.
If I understand things correctly, the operating
system is only part of the story: while it does
send the LUTs through the video card to the
monitor, it is an application like darktable
which uses the device characterization in
order to display a color-matched image, right?
If that application chooses not to do so and
sends linear RGB instead (like Gimp), there is
little the operating system can do about it
beyond the LUTs already in the monitor.
Similarly, whether an image has 100% intensity
of a color channel at 255 or at 1023 seems to
be a choice of the application (as long as the
hardware is able to support it).
Certainly no hardware that I currently own
has more than 8 bit frame buffer support, so
it's difficult for me to develop or test.
If my testing on a K2200 and Eizo CS270 can
help, please count on it.
Corrections to my understanding are very
welcome.