[asvs] Re: the Concept

  • From: Grigori Evreinov <grse@xxxxxxxxx>
  • To: asvs@xxxxxxxxxxxxx
  • Date: Sun, 10 Oct 2004 19:44:18 +0300

Hi Will,

I know Steve, Alistair Edwards and others well;
Steve visited us last week (he also saw the ppt you have a link to),
and I hope he saw many interesting things, including
both the virtual-stick interaction and blind interaction with graphics
based on directional predictive sounds for a mobile PC...

Yes, I try to combine tactile and sound feedback, but most important is
the kinaesthetic feedback
during manipulation of graphics and the active exploratory movements.
Sometimes speech cues are faster, but positioning (navigation) and
some relationships between objects can be conveyed more adequately
through hand movement and sound mapping.
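To make the idea concrete, a minimal sketch of such a hand-movement-to-sound mapping might look like the following. This is a hypothetical illustration, not the actual implementation discussed above; the screen size, pan range and frequency range are arbitrary assumptions:

```python
def position_to_audio(x, y, width=640, height=480):
    """Map a 2-D pointer position to a stereo pan and a pitch.

    Hypothetical mapping: horizontal position controls left/right
    balance, vertical position controls frequency, so the user can
    localize the pointer by ear while moving the hand.
    """
    pan = (x / width) * 2.0 - 1.0               # -1.0 (left) .. +1.0 (right)
    freq = 220.0 + (1.0 - y / height) * 660.0   # 220 Hz (bottom) .. 880 Hz (top)
    return pan, freq
```

A real system would feed `pan` and `freq` into a synthesizer continuously as the hand moves, so position becomes audible without any speech output.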

Regarding the semantics of the vibrations: it works well.
I use it throughout the game scripts.
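Such a vibration vocabulary could be sketched as a small table of event types mapped to on/off durations. The event names and timings below are invented for illustration only, not taken from the game scripts mentioned above:

```python
# Hypothetical vocabulary of vibration patterns: each entry is a list
# of pulse durations in milliseconds (on, off, on, ...), so distinct
# interaction events get distinct, learnable tactile signatures.
VIBRATION_PATTERNS = {
    "target_reached":  [80],            # one short pulse
    "edge_of_object":  [40, 40, 40],    # quick triple tick
    "wrong_direction": [200],           # one long buzz
    "menu_boundary":   [60, 120, 60],   # short-long-short
}

def pattern_for(event):
    """Return the vibration pattern for an interaction event,
    or an empty list for unknown events (no vibration)."""
    return VIBRATION_PATTERNS.get(event, [])
```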

> Reverse engineering sight for a moment,
> would it not be a reasonable assumption that scanning occurs due to the
> difference in definition between the centre and periphery of the eye? We
> can see a picture of the world, as we have receptor cells around the surface
> of the retina, but we have more receptor cells in the centre of the eye,
> this giving more definition to what is in the centre of the visual arc. So,
> we visually scan the image to get a detailed image, but if we don't require
> high definition, there's no requirement for scanning.
This is a definitely wrong picture of the vision mechanisms and of
visual processing in the brain.
We see nothing without scanning! This has been demonstrated with
different techniques that stabilize the image on the retina.
Moreover, an image may be recognized through a repetition of the
eye movements (the scanpath) even without light stimulation.

Stark, L.W., Top-Down Vision in Humans and Robots. BISC Seminar, 
University of California at Berkeley, 1997.
Yarbus, A.L., Eye movements and vision. New York, Plenum Press, 1967.
Deubel, H., & Schneider, W.X., "Saccade target selection and object 
recognition: Evidence for a common attentional mechanism," Vision 
Research, 36, pp. 1827-1837, 1996.
Driver, J., & Baylis, G. C., "Movement and visual attention: The 
spotlight metaphor breaks down," J. of Experimental Psychology: Human 
Perception & Performance, 15, pp. 448-456, 1989.

Finally, the human eye cannot scan its own pupil, so the pupil is invisible;
but the pupil becomes visible as a circle if a light source is placed near
the eye (better, close to the focus), because the eye can then scan the
retinal projection of the pupil.

> I've proposed a system of magnification to gain fine detail,

I disagree.

> thus rendering only a portion of the image at a time. Therefore, we
> can have some scanning mechanism based around this, under the
> control of the user.

I strongly agree.
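Such a user-controlled scanning/magnification scheme could be sketched as follows. This is a hypothetical illustration only, assuming the image is a plain 2-D array of pixel values; none of it comes from the proposed system:

```python
def magnified_window(image, cx, cy, radius=2, zoom=2):
    """Crop a (2*radius+1)-pixel square centred on (cx, cy) from a
    2-D list-of-lists `image`, clamping at the borders, then enlarge
    it by pixel repetition. The user 'scans' by moving (cx, cy)."""
    h, w = len(image), len(image[0])
    crop = [[image[min(max(cy + dy, 0), h - 1)][min(max(cx + dx, 0), w - 1)]
             for dx in range(-radius, radius + 1)]
            for dy in range(-radius, radius + 1)]
    # Enlarge by repeating each pixel `zoom` times in both directions.
    return [[row[i // zoom] for i in range(len(row) * zoom)]
            for row in crop for _ in range(zoom)]
```

Only this small window would be rendered (as sound, vibration, or large print) at any one time, with the user's own movement supplying the scanpath.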

This is positive: the more definite the information that can be
connected to scanning (movement of the head, finger, stick, etc.,
that is, information known a priori and directly usable), the easier
it becomes to decode the optical information concerning the
interaction style or the motion strategy over the image piece.
The interaction style is the spatial-temporal synchronization used
for demodulating and/or interpreting the signals that serve as
visual cues.
In any case, the optical information is mostly information about a
particular position, and only some low-level detectors can recognize
patterns, lines, angles and their changes (movements). The patterns
can also be recognized by scanning key positions and comparing them
with patterns stored in memory, in the motor and visual associative
cortex plus Brodmann areas 44, 45 and 46, if we are talking about a
textual image.
