Guy K. Kloss wrote:
> Hi,
>
> I've experimented a bit with the scanin tool for reading characterisation
> targets. As the targeted use case here involves digital cameras (still &
> video), capturing the target is not as trivial as for e.g. scanners.
>
> Of course, the image needs to be captured in a way that no specularities or
> glare of any kind is present on the target. This can be achieved by
> changing the positioning of the camera, the illumination or the target. In
> many cases, however, the characterisation has to be performed under the
> geometric conditions of a fixed setup in a fixed location (towards windows,
> lamps, etc.). So the only way to remove glare is to move or tilt the target
> relative to the camera.
>
> Doing this, I have discovered the following, which is not an issue for
> scanners as input devices. The scanin tool seems to be relatively robust
> towards pure rotation and (uneven) scaling, as long as it is along the
> target's axes. For testing I've created a few test samples. These are the
> problematic circumstances:
>
> * perspective distortion
> * pincushion-type distortion
> * rotation with uneven scaling
>   (or scaling along a non-main axis of the target)
>
> The issue is that any image distortion that leads to a non-rectangular
> shape of the target potentially endangers the characterisation, as pixels
> outside the patches (or belonging to other patches) will be evaluated. This
> means that the target MUST remain orthogonal to the visual axis of the
> camera, with only limited pincushion distortion in the optical system.
>
> Any clues on solving this problem, either in software or in handling the
> test setups? If worst comes to worst, I'll have to dig deeply into the
> OpenCV box of tricks and try to undo certain distortions of the image
> before the characterisation process. However, I'd rather avoid that, so as
> not to introduce more error than necessary.
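For the perspective case, the rectification you would do with OpenCV (e.g. cv2.getPerspectiveTransform followed by cv2.warpPerspective) boils down to a planar homography fitted to the target's four corner fiducials. A minimal NumPy-only sketch of that underlying math, with made-up corner coordinates purely for illustration:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for the 3x3 homography H mapping src[i] -> dst[i].

    With exactly four point pairs this is the classic DLT linear system:
    8 equations in the 8 unknowns h11..h32 (h33 is fixed to 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map one image point through H (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical detected fiducials of a tilted target photo (made up):
src = [(12, 9), (498, 31), (510, 370), (5, 352)]
# Where those corners should land in the rectified, axis-aligned image:
dst = [(0, 0), (500, 0), (500, 360), (0, 360)]

H = homography_from_corners(src, dst)
```

The resulting H is what warpPerspective would then apply to every pixel (with interpolation), giving scanin the rectangular target it expects.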
I think you should not overestimate the small errors introduced by correcting
distortions with an image editing program, particularly if you do it with
16 bits per channel, in a linear-light space (raw RGB with gamma 1.0).

When capturing a target with a camera, there are IMO other typical sources of
error with a significantly larger contribution to the total error: for
instance uneven illumination, natural and optical vignetting induced by the
lens, and some residual glare.

Regards,
Gerhard
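The linear-light point can be illustrated numerically: any distortion correction resamples, i.e. averages neighbouring pixels, and averaging gamma-encoded values gives a different (darker) result than averaging the underlying light levels. A toy sketch using a plain power-law gamma of 2.2 (an assumed curve for illustration; real sRGB is piecewise, and camera raw data is already roughly linear):

```python
# Compare averaging two pixel values in gamma-encoded space
# versus averaging them in linear light.

GAMMA = 2.2  # illustrative power-law curve, not a real camera profile

def decode(v):
    """Gamma-encoded value in [0, 1] -> linear light."""
    return v ** GAMMA

def encode(v):
    """Linear light -> gamma-encoded value in [0, 1]."""
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Naive: average the encoded values directly (what an 8-bit gamma-space
# resample effectively does).
naive = (black + white) / 2

# Correct: decode to linear light, average, re-encode.
correct = encode((decode(black) + decode(white)) / 2)

print(round(naive, 3), round(correct, 3))  # prints: 0.5 0.73
```

The two results differ by a large, easily visible amount, which is why the interpolation inside a warp should run on linear (gamma 1.0) data, and at 16 bits per channel so the decode/encode round trip does not posterize.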