[asvs] Re: the Concept

  • From: "Will Pearson" <will-pearson@xxxxxxxxxxxxx>
  • To: <asvs@xxxxxxxxxxxxx>
  • Date: Sun, 10 Oct 2004 22:16:24 +0100

Hi Grigori;

> this is a definitely wrong imagination of vision mechanisms and
> video processing in the brain.
> We see nothing without scanning! This statement was proved with
> different technique of image fixation regarding retina.
> Moreover, the image may be recognized due to a repetition of
> eye movements (scanpath) without the light sense.

Agreed.  After reviewing some material on the subject of eye movements, it
turns out that static images are quickly forgotten.  Whilst Yarbus
and others noted that saccadic movements do account for moving the eye to
get detailed information on different areas of an image, there is evidence
that this is not the only function of saccadic movements.

> I've proposed a system
> of magnification to gain fine detail,
>  >>>>>>>>
> I disagree
>  >>>>>>>>
> thus rendering only a portion of the
> image at a time.
> Therefore, we can have some scanning mechanism based
> around this, under the control of the user.
>  >>>>>>>
> Im strongly agree

So, what I think we need is a system that has a single point of detail,
whilst leaving the periphery visible, but less detailed.  This will
encourage the user to move the point of detail, thus mimicking saccadic
movements, whilst giving them less detailed information from which to
determine targets for those saccadic movements.
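As a rough sketch of what I mean (in Python, with illustrative names and
parameters, not a real implementation), the "single point of detail"
rendering could keep a small window around the user's focus at full
resolution and average everything else down to coarse blocks:

```python
def render_foveated(image, focus, radius=2, block=4):
    """Return a copy of `image` (a 2D list of values) where cells
    outside the focus window are replaced by their block average,
    mimicking a detailed fovea within a coarse periphery."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    fr, fc = focus
    for r in range(rows):
        for c in range(cols):
            if abs(r - fr) <= radius and abs(c - fc) <= radius:
                continue  # inside the "fovea": keep full detail
            # coarse periphery: average over the containing block
            r0, c0 = (r // block) * block, (c // block) * block
            cells = [image[i][j]
                     for i in range(r0, min(r0 + block, rows))
                     for j in range(c0, min(c0 + block, cols))]
            out[r][c] = sum(cells) / len(cells)
    return out
```

Moving `focus` and re-rendering would then stand in for a saccadic
movement: the periphery stays visible but blurred, inviting the user to
pick the next target.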

Thinking back to my days writing DOS applications, I wrote a screen
magnifier that may hold one potential solution.  It allowed a user to
select the area to be magnified from a normal-size image of the screen.
To someone with low vision, this looked low definition and coarse.
In contrast, the chosen area, once magnified, provided fine detail.  I'm
wondering whether a similar system would work auditorily.

To simulate the low-definition, coarse view, you would sonify at the normal
scale.  This would allow a user to select the point for which they wanted
finer detail, corresponding to selecting the target of a saccadic
movement.  They would then get finer detail on that point, before returning
to the coarser view, and then on to the next saccadic movement.
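The magnification step here could work like my old DOS magnifier: crop a
window around the chosen point whose sonified size matches the full view.
A minimal sketch (`magnify` and `zoom` are names I'm making up for
illustration):

```python
def magnify(image, point, zoom=2):
    """Extract the window of `image` (a 2D list) centred on `point`
    at 1/zoom of the full extent, clamped to the image edges, so
    that sonifying it at the normal scale yields finer detail."""
    rows, cols = len(image), len(image[0])
    h, w = rows // zoom, cols // zoom
    r0 = min(max(point[0] - h // 2, 0), rows - h)
    c0 = min(max(point[1] - w // 2, 0), cols - w)
    return [row[c0:c0 + w] for row in image[r0:r0 + h]]
```

The interaction loop would then alternate between sonifying the whole
image and sonifying `magnify(image, point)` for the selected point.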

Tying this in with kinesthetic feedback, the point could be selected by
mouse movement on a desktop, or by stylus interaction on a PDA.  A press
of the mouse button would show the point in detail; on a PDA, lifting the
stylus would take the place of the button press.  This fine-detailed view
would basically just be a magnified image revealing smaller details not
present in the original image, given the limits of auditory spatial
localisation at around 1°.
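The press-to-zoom interaction amounts to a small state machine, which
might be sketched like this (the event handler names are assumptions, not
any real GUI toolkit's API):

```python
class DetailPointer:
    """Tracks whether the user is hearing the coarse full view or
    the fine magnified view, driven by press/release events."""

    def __init__(self):
        self.mode = "coarse"  # start in the normal-scale view
        self.focus = None

    def on_press(self, x, y):
        # mouse button down (or stylus touch): zoom in at the pointer
        self.mode, self.focus = "fine", (x, y)

    def on_release(self):
        # button up (or stylus lift): return to the coarse view
        self.mode, self.focus = "coarse", None
```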


Other related posts: