[asvs] Re: Concept and others thoughts

  • From: "Will Pearson" <will-pearson@xxxxxxxxxxxxx>
  • To: <asvs@xxxxxxxxxxxxx>
  • Date: Tue, 12 Oct 2004 16:52:38 +0100

Grigori;

I think it depends on what you want out of the system.  Visual aesthetics
are largely based on colour and spatial relationships, so you would need
some full-scale sonification to cater for those.

However, to get the functionality of sight, I don't believe you need any
sonification at all.  With the exception of aesthetics, we only use sight to
gather meaning from the environment.  For example, how wide is the gap,
which is determined by visual analysis of the spatial relationships between
two objects, or where's that mug of coffee in relation to my hand.  Possibly,
semantics could be used here, in order to transfer the meaning conveyed by
the visual image into another form.  Simon Polovina and I are currently
investigating a semantics-based approach to diagram accessibility.  You
would probably need some strong AI to work out the semantics, but I believe
it could play a part.
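To make the idea concrete, here is a minimal sketch of what a semantic scene representation might look like.  This is purely illustrative (all names and the units are my own assumptions, not anything from the Polovina work): the point is that a question like "how wide is the gap?" can be answered from object relations directly, with no visual or sonified rendering at all.

```python
# Hypothetical sketch: a scene described semantically as objects plus
# spatial relations, so spatial questions are answered by computation
# rather than by inspecting an image. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    x: float      # left edge position (arbitrary units)
    width: float

    @property
    def right(self) -> float:
        return self.x + self.width

def gap_between(a: Obj, b: Obj) -> float:
    """Width of the horizontal gap between two objects (0 if they overlap)."""
    left, right = sorted((a, b), key=lambda o: o.x)
    return max(0.0, right.x - left.right)

# A user could query the scene verbally instead of decoding a quasi-image:
post_a = Obj("gate post A", x=0.0, width=0.2)
post_b = Obj("gate post B", x=1.1, width=0.2)
print(f"The gap is {gap_between(post_a, post_b):.1f} units wide.")
```

The same structure extends naturally to other relations (above/below, containment), which is where a semantic approach could outdo pixel-level sonification.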

I also like the idea of increasing the SNR.  However, I would think that a
cautious approach needs to be taken.  We would need to allow the user to
select what they considered noise, and allow them to do this on a contextual
basis.  Different users will have different ideas on what constitutes noise,
and may change those ideas with different contexts.
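As a sketch of what that user-selectable, per-context noise filtering could look like in software (entirely hypothetical names; nothing here comes from an existing system): the user marks categories of information as noise separately for each context, and the filter honours those choices.

```python
# Hypothetical sketch of user-selectable, contextual noise filtering:
# what counts as "noise" is chosen by the user and may differ per context.

from typing import Dict, List, Set

class NoiseFilter:
    def __init__(self) -> None:
        # context name -> categories the user has marked as noise there
        self._noise: Dict[str, Set[str]] = {}

    def mark_as_noise(self, context: str, category: str) -> None:
        self._noise.setdefault(context, set()).add(category)

    def filter(self, context: str, events: List[dict]) -> List[dict]:
        """Drop events the user considers noise in the given context."""
        noisy = self._noise.get(context, set())
        return [e for e in events if e["category"] not in noisy]

# The same category can be signal in one context and noise in another:
f = NoiseFilter()
f.mark_as_noise("navigation", "texture")   # texture detail is noise here
events = [{"category": "obstacle"}, {"category": "texture"}]
print(f.filter("navigation", events))  # only the obstacle remains
print(f.filter("reading", events))     # both kept in this context
```

The design choice worth noting is that the noise sets are keyed by context rather than global, so the user never has to pick one definition of noise for all situations.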

Finally, I wholeheartedly agree on the issue of the learning curve.  Unless
the benefits really outweigh the costs of learning a new approach, users
won't want to use a system.  This pattern is visible throughout software:
once people get used to a particular product, they won't switch unless
change is forced on them or the new product offers significant benefits.

Will
----- Original Message ----- 
From: "Grigori Evreinov" <grse@xxxxxxxxx>
To: <asvs@xxxxxxxxxxxxx>
Sent: Tuesday, October 12, 2004 11:29 AM
Subject: [asvs] Concept and others thoughts


> Hi,
>
> Geoff Smith is not the only one with hearing problems.
> Hearing loss often accompanies blindness, or both perceptual
> impairments are caused by diabetic or other neuronal
> disorders.
> Thus, the challenge is to design an Artificial SVS that could be based
> on signals suitable for each particular case.
> Sonification based on 3D or surround sound will only ever be
> applicable to
> a narrow range of tasks (case studies!), and its audience of users
> will be narrow too: 1-10, most of them from or
> connected to
> the research community. Moreover, hearing is the only remote
> sense available to the blind.
> This means that if an ASVS cannot deliver adequate speed and quality of
> visualization for remote objects, nobody will use it as an aid.
> Most people are very reluctant to learn a new technique
> if the results are doubtful or the method requires long training.
>
> I have experience with text entry for blind people and for people
> with normal vision.
> If people can use a conventional keyboard and any text-to-speech
> software,
> they never wish to learn Braille or other codes.
> However, we all know the main shortcomings of Braille: low speed, and
> the loss of fingertip sensitivity as a disease progresses;
> and of the QWERTY keyboard: it requires long training, is intended for
> two hands,
> and is not suitable for blind typing on a touchscreen.
> I have proposed a number of alternative, competitive blind text entry
> techniques.
> Nevertheless, these interest only a narrow range of researchers
> and a few blind people.
>
> We (all of us) know a lot, but not enough to build a real ASVS.
> Our knowledge should keep us from repeating the errors
> that were made before. I can say that all the sound mappings
> I have tried, or can foresee in the near future, were case studies
> only.
> This does not mean that blind people should use only the method I have
> proposed.
> I will continue searching for a technique to transform
> visual information into another modality suitable for blind inspection
> and remote interaction with objects.
>
> Magnification, Zooming, or Scaling
> Magnification is not always useful. Any transformation of the visual image
> by the crystalline lens (accommodation of the eye and attentional tuning)
> or by special hardware and software
> (for instance, http://www.visionadvantage.net/maxport.htm )
> pursues the goal of increasing(!) the signal-to-noise ratio.
> That means most non-significant details can be removed
> and some important elements of the picture can be presented sequentially
> to aid image description and recognition.
> Meanwhile, the user of an image-transformation system will not want
> to waste time and intense concentration investigating auditorily
> or tactilely presented (embossed) quasi-images
> if speech feedback could describe the same parameters in a few words.
> Therefore, textual magnification cannot compete with speech synthesis,
> at least for people with some degree of hearing.
> Magnification of graphics is not always useful either,
> as a special strategy must be developed, and the user needs special
> skills
> for decoding (mentally integrating) information presented piece by piece.
> Interestingly, kinaesthetic afferent flow plays an important role
> in the spatial-temporal synchronization and integration of sequential
> patterns with the help of the motor cortex. Therefore, some researchers
> try to
> develop blind inspection of graphs by presenting a compressed image
> or visual stage. They try to minimize the parameters describing the objects
> and their relations by using alternative techniques and physical signals,
> and/or to present not the parameters of the objects but the parameters
> of the interaction with the objects.
>
> In any case, alternative visualization of visual images is intended
> to facilitate remote interaction with objects and blind navigation.
> It is not intended for watching TV.
>
> Finally, alternative visualization should be based on video processing
> (sometimes to a greater degree than on sonification or other signals)
> and on developing a special strategy to fit the software to the user's
> needs.
> Concerning sonification, I can say that the shorter the signals we use,
> and the more sparingly we use sound, the more viable and usable the
> system will be,
> as it will not conflict with hearing and other perceptual information,
> which is no less important for a blind user.
> Permanent or continuous sonification of the full stage, a part of
> the stage,
> navigation, or whatever else, is an extremely bad solution.
> That approach is applicable only to particular research.
>
> Grigori
>


