Re: Sonified Debugger vs. Screenreader Question

  • From: "Will Pearson" <will@xxxxxxxxxxxxx>
  • To: <programmingblind@xxxxxxxxxxxxx>
  • Date: Thu, 22 Nov 2007 21:21:56 -0000

Hi Andreas,

I've got your paper and will read through it tomorrow.

Your analysis technique is interesting. I'm wondering whether any of your sounds are dependent on the context provided by the sounds that precede them?

The reason I'm asking about context is that, in linguistics, words with multiple word senses are quite often disambiguated based on context. Part of this context is the linguistic context derived from the words and text that precede the ambiguous word, and people usually hold this linguistic context in memory.

If you do have sounds that are context dependent, which could be thought of as equivalent to linguistic context, then having people write down what they think a sound means may give them a memory aid to use for disambiguating future sounds.
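
Just to make sure we mean the same kind of context dependence, here's a rough sketch, purely illustrative (the sound names, their senses, and the three-item context window are all invented, not taken from your study), of how a sound's meaning could be resolved from the sounds heard just before it:

from collections import deque

# Hypothetical mapping: the same sound means different things
# depending on which sounds were heard shortly before it.
SENSES = {
    "chirp": {
        frozenset(): "variable assignment",              # no recent context
        frozenset({"low_hum"}): "assignment inside a loop",
        frozenset({"click"}): "assignment after a branch",
    },
}

def interpret(sound, recent, senses=SENSES):
    """Pick the sense of `sound` given the set of recently heard sounds."""
    candidates = senses.get(sound, {})
    # Prefer the sense whose required context overlaps most with what the
    # listener actually heard; fall back to the context-free sense.
    best = max(candidates, key=lambda ctx: len(ctx & recent), default=frozenset())
    return candidates.get(best, "unknown")

# A listener's short auditory memory: the last three sounds heard.
memory = deque(maxlen=3)
for heard in ["low_hum", "chirp"]:
    print(heard, "->", interpret(heard, frozenset(memory)))
    memory.append(heard)

If that is roughly the situation, then anything that lets a listener look back over earlier sounds, such as written notes, is doing the job that memory would otherwise do.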

You've probably thought about this already, but if it is a potential problem then having the person say what they think the meaning is might be a simple modification that avoids the memory aid problem. As speech is transient, it won't be present for the person to refer to later, and you could capture the person's response with a tape recorder or other recording device.

Will