Re: Sonified Debugger vs. Screenreader Question

  • From: "Andreas Stefik" <stefika@xxxxxxxxx>
  • To: programmingblind@xxxxxxxxxxxxx
  • Date: Thu, 22 Nov 2007 13:51:34 -0800

Hey again Will,

> Your analysis technique is interesting.  I'm wondering whether any of your
> sounds are dependent on the context provided by sounds that precede them?

Yup, they definitely are. Programming is often this way, because of
control flow and data flow concerns. As you're picking up on, that
makes testing for comprehension issues very tricky.

Will said:

> The reason why I'm asking about context is that in linguistics quite often
> words with multiple word senses are disambiguated based on context.  Part of
> this context comes from the linguistic context that is derived from the
> words and text that precede the ambiguous word.  People usually store this
> linguistic context in their memory.

Andreas said:

Indeed, this type of issue is very, very critical for the type of
study we're running. While you won't see it in the paper you now
have a copy of, nowadays we have a mound of ... I guess you could say,
checks and balances to make sure that what we are measuring is as
"close as possible" to raw comprehension, and to remove issues like
working memory from the equation. Context is taken care of by our
custom algorithms.

For example, and this is just a minor issue, in Baddeley's working
memory papers (see his 1992 Science article if you want a good
overview), Baddeley defines the phonological store, which sadly holds
only about 2 seconds of language (in a human). This means any piece of
speech-based sound we use needs to be shorter than this, or we are
going to have issues related more to working memory than to
comprehension of that audio. As such, our speech-based cues are as
short as possible, and never exceed 2 seconds of audio before a pause.
I pilot tested the work without pauses ... it was painful to watch.
There are lots and lots of issues besides this one, like cascading
errors, among many others, and we've gone to great lengths to remove
as many as we possibly can. An experimental psychologist and I have
been working on removing these types of issues for some time now;
it's been a fun and tricky project.
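
To make that constraint concrete, here is a rough sketch (in Python)
of the kind of duration check it implies. The cue names and durations
below are made up for illustration; this is not our actual tooling:

PHONOLOGICAL_STORE_LIMIT = 2.0  # seconds, per Baddeley (1992)

def overlong_cues(cues):
    # Return the names of cues whose spoken portion meets or exceeds
    # the ~2-second phonological store limit.
    return [name for name, seconds in cues.items()
            if seconds >= PHONOLOGICAL_STORE_LIMIT]

# Hypothetical speech cue durations, in seconds.
cues = {
    "loop_enter": 0.8,
    "variable_assignment": 1.4,
    "long_error_description": 2.6,  # too long: taxes working memory
}

for name in overlong_cues(cues):
    print(name, "exceeds the 2-second limit; shorten it or add a pause.")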

Will said:

> If you do have sounds that are context dependent, which could be thought of
> as being equivalent to linguistic context, then having people write down
> what they think the sound means may give them a memory aid to use for
> disambiguating future sounds.

Andreas said:

Yup, but I figured out a technique to measure this type of thing
directly. The custom algorithms we use to analyze participant data
are very fancy. In a nutshell, they break participants' answers apart
into a bunch of pieces related to comprehension, then re-form
aggregates of those scores under various contexts. I guess that's a
little vague, but the techniques are super complicated. Once I get
that 4th chapter text thoroughly edited, I can send you a copy. It
will probably be a few months though.
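
To give you a very rough flavor, the idea looks something like the
sketch below. Everything in it, from the component names to the
scoring checks, is invented for illustration; the real algorithms are
far more involved:

from collections import defaultdict

def score_answer(answer, rubric):
    # Break one free-form answer into scored comprehension components.
    return {component: check(answer) for component, check in rubric.items()}

# Hypothetical rubric: each comprehension component maps to a simple check.
rubric = {
    "mentions_loop":     lambda a: "loop" in a.lower(),
    "mentions_variable": lambda a: "variable" in a.lower(),
}

# (context, answer) pairs as they might come out of a session log.
responses = [
    ("control_flow", "The sound meant the loop started"),
    ("data_flow",    "A variable changed value"),
]

# Re-form aggregates of the component scores under each context.
totals = defaultdict(lambda: defaultdict(int))
for context, answer in responses:
    for component, hit in score_answer(answer, rubric).items():
        totals[context][component] += int(hit)

for context, scores in sorted(totals.items()):
    print(context, dict(scores))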

Will said:

> You've probably thought about this already, but if it is a potential problem
> then maybe having the person say what they think the meaning is might be a
> simple modification that would avoid the memory aid problem.

Andreas:

You hit it right on the nose. When we ran the pilot for the study you
now have a copy of, we ran into exactly these issues and needed some
fancy techniques to get past them.

Thanks for the post, always nice to discuss this kind of stuff.

Andreas