[asvs] Re: the Concept

  • From: "Will Pearson" <will-pearson@xxxxxxxxxxxxx>
  • To: <asvs@xxxxxxxxxxxxx>
  • Date: Sun, 10 Oct 2004 16:27:21 +0100

Hi Grigori,

Good to have you here; your experience will no doubt prove invaluable.

Your point about supplementary tactile information is interesting.  I know
Steve Brewster and Lorna Brown at Glasgow have been looking into Tactons, as
Steve seems to like looking at different encoding schemes, following on from
earcons :-).  It's an interesting idea, and one that should be followed up
on.  The problem that I can foresee is the sequential nature of tactile
information, unless you use an array of tactile transducers.  A single
tactile point would work really well for a scanning system where you have a
single point in focus, such as your Spotty Mapping, Peter's vOICe, or even
Kees van den Doel's SoundView, but depending on which you use, you have
either the length of time to consider, or exploration of the image causing
problems.

I've recently been doing some work on semantic analysis and graph
transformation of vector-based images, and I've got a couple of papers
coming out next year on the subject.  I'm wondering whether it may be
possible to convey that semantic meaning tactually, in a sequential manner.
This could even build on Steve and Lorna's work on Tactons.
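To make the idea concrete, here is a minimal sketch of what a sequential, semantics-driven tactile rendering might look like. Everything here is my own invention for illustration: the scene-graph format, the shape-to-tacton table, and the idea of encoding nesting depth as inter-tacton pause length are assumptions, not Steve and Lorna's published Tacton scheme.

```python
# Hypothetical sketch: walk a tiny scene graph of a vector image and emit a
# sequential stream of tacton parameters (pulse count, vibration frequency,
# pause).  The graph format and the shape->tacton mapping are invented for
# illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                                  # e.g. "circle", "rect", "group"
    children: List["Node"] = field(default_factory=list)

# Invented mapping: each shape kind gets a distinct pulse count and
# vibration frequency, so shapes remain distinguishable in sequence.
TACTON_TABLE = {
    "circle": {"pulses": 1, "freq_hz": 250},
    "rect":   {"pulses": 2, "freq_hz": 150},
    "line":   {"pulses": 3, "freq_hz": 80},
}

def to_tacton_sequence(node: Node, depth: int = 0) -> list:
    """Depth-first walk: one tacton per leaf shape; graph structure is
    conveyed by a longer pause after tactons at deeper nesting levels."""
    seq = []
    if node.kind in TACTON_TABLE:
        tacton = dict(TACTON_TABLE[node.kind])
        tacton["pause_ms"] = 100 * (depth + 1)   # deeper nesting -> longer pause
        seq.append(tacton)
    for child in node.children:
        seq.extend(to_tacton_sequence(child, depth + 1))
    return seq

# A toy image: a group containing a circle and a rectangle.
image = Node("group", [Node("circle"), Node("rect")])
print(to_tacton_sequence(image))
```

The point of the sketch is only that a single tactile point suffices: the semantic graph is serialised into a one-dimensional tacton stream, so no transducer array is needed, at the cost of presentation time.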

Will
----- Original Message ----- 
From: "Grigori Evreinov" <grse@xxxxxxxxx>
To: <asvs@xxxxxxxxxxxxx>
Sent: Sunday, October 10, 2004 12:10 PM
Subject: [asvs] Re: the Concept


> Dear All,
>
> My name is Grigori
> I sincerely wish to help people acquire more information
> and manipulate it in a similar way to sighted people.
> However, ASVS does not mean “poor” sonification “for sonification's own
> sake” of graphics or full images, i.e. scientific research
> or a case study done only to gather data and write a report, or to test a
> new sound mapping.
> I doubt that any of you has had a full education in physiology,
> particularly in the physiology of perception, as I have.
> That is my basic education. And I can say definitely that physiology
> is still not a complete subject: there is more speculation
> around artifacts than valid and/or validated data.
> But from my own long experience I can also see
> that some approaches in our area of sonification were based on a
> wrong concept.
> And my own mappings were not so good.
> The starting point is very important for research when its goal
> is a practical outcome for the user.
> Will was quite right when he stated that
> “the software fit to the user, not the user fit to the software”!!!
> Recently, I gave a presentation on a similar subject.
> Sorry, for some of you it could be problematic to interpret,
> as the PPT is mostly oriented on graphics and my spoken comments.
> But I'll give you the link, and will comment on some questions later if
> necessary.
> ~ 1.6 Mb
>
http://www.cs.uta.fi/~grse/ConferenceWork/UCIT_seminar/SensesAndAccessibility.ppt
>
> If you could use Artificial instead of Auditory in the abbreviation ASVS,
> it could be more suitable later, as the sound in this case is only a
> case study;
> in the final system, other signals could be used as well.
> For instance, any other skin stimulation, such as mechanical stimulation
> through vibrations
> (not sound), or electrical current, or whatever.
> In such a case, the problem of information occlusion (masking other
> speech and auditory info)
> could be resolved. This is also an important factor for the blind, as
> when using an Auditory SVS,
> visually impaired people will lose the use of their “hearing” (the
> channel, not the perceptual ability itself).
>
> Grigori
>


