[argyllcms] Re: Obtaining a camera color response matrix

  • From: Nikolai Tasev <nikolai@xxxxxxx>
  • To: argyllcms@xxxxxxxxxxxxx
  • Date: Fri, 13 Mar 2015 16:07:53 +0200

Hi Graeme,

On 3/13/2015 2:49 PM, Graeme Gill wrote:
>> I presume that this means that I should find the closest matrix that
>> converts from the RGB raw values to RGB values in sRGB.
>> Do you think this is a correct assumption?
> That seems a feasible idea, although it would be interesting to
> know if that matrix is applied in linear light space ?
As far as the documentation says, the matrix is applied in linear space. Before it come the flat-field correction and the white balancing; there is no mention of gamma encoding.
>> Here is my approximate idea:
>>     - calibrate the camera in the intended lighting conditions
>>     - take a shot of a calibration target
>>     - measure the values with scanin
>>     - create an input profile with colprof (matrix only)
>>     - look inside the icm file and find the matrix RGBraw to XYZ
>>     - multiply it by the matrix XYZ to sRGB
>>     - zero the offsets
>> Do you think there is another way to do it, or am I missing some details?
> Yes, that might sort of work, but you are overlooking the effect of
> the gamma-encoded space to some degree. Ideally you would want the
> camera profile created with the sRGB gamma curve, and you are assuming
> that the camera matrix is applied in linear light space.
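The matrix composition in the steps above could be sketched like this. The XYZ-to-linear-sRGB matrix is the standard D65 one from the sRGB specification; the RGBraw-to-XYZ matrix here is a hypothetical placeholder standing in for the values read out of the icm profile:

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix, per the sRGB specification
# (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Hypothetical RGBraw -> XYZ matrix, standing in for the one colprof
# writes into the icm profile (columns are the R, G, B colorants).
raw_to_xyz = np.array([
    [0.60, 0.20, 0.15],
    [0.30, 0.65, 0.05],
    [0.02, 0.10, 0.75],
])

# Linear transforms compose by matrix multiplication,
# applied right-to-left: raw -> XYZ -> linear sRGB.
raw_to_srgb = XYZ_TO_SRGB @ raw_to_xyz

# Example: map a raw pixel through the combined matrix.
raw_pixel = np.array([0.4, 0.5, 0.3])
srgb_linear = raw_to_srgb @ raw_pixel
```

Note that the result is still linear-light sRGB; the sRGB transfer curve would be applied afterwards.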
The camera documentation doesn't mention any applied gamma encoding anywhere, so I assume the colors are in linear space. I would like the result after the color matrix is applied to still be in linear space, as I need to do some analysis before applying gamma encoding.

Does the sRGB color space imply that the RGB values are already encoded with some gamma? That is, to be truly in the sRGB color space
you have to apply the gamma encoding to the linear values?

I do not know the calculations for the gamma encoding, but I presume they cannot be applied with a 3x3 matrix. So what effect of the gamma
encoding should I include in the matrix calculations?
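For reference, the presumption is correct: the sRGB encoding is non-linear and cannot be expressed as a matrix. It is the piecewise curve defined in the sRGB specification (IEC 61966-2-1), which could be written as:

```python
def srgb_encode(linear):
    """Apply the sRGB transfer curve to a linear-light value in [0, 1].

    A short linear segment near black avoids an infinite slope at zero;
    above the breakpoint the curve is a scaled power function.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055
```

This is applied per channel, after the matrix, so any analysis that needs linear light must happen before this step.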
>> What use case would require offsets that are different from zero, and how
>> would the calculations change?
> If there is an offset out of the camera, say due to black level noise,
> you might want to subtract an offset (are negative offsets supported ?).
>
> If that were the cause of needing an offset, then you would simply measure
> the black camera RGB and multiply by your correction matrix to
> arrive at the offsets to subtract.
Yes, the offsets can be both positive and negative. I agree that noise is the most probable reason.
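The black-level subtraction described above could be sketched as follows. The matrix and the dark-frame values are hypothetical placeholders; the point is that mapping the measured black RGB through the correction matrix yields the offsets to subtract:

```python
import numpy as np

# Hypothetical combined RGBraw -> linear sRGB matrix from the previous step.
raw_to_srgb = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.1, -0.4,  1.3],
])

# Averaged dark-frame (lens cap on) RGB, i.e. the black level; hypothetical values.
black_raw = np.array([0.004, 0.005, 0.006])

# Mapping the black level through the matrix gives the offsets to subtract,
# so that a black input maps exactly to zero output.
offset = raw_to_srgb @ black_raw

def correct(raw_rgb):
    """Apply the color matrix, then subtract the black-level offset."""
    return raw_to_srgb @ raw_rgb - offset
```

By linearity this is equivalent to subtracting the black level in raw space first and then applying the matrix.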

Best Regards,
Nikolai Tasev

