Re: Sonified Debugger vs. Screenreader Question

  • From: "John Greer" <jpgreer17@xxxxxxxxxxx>
  • To: <programmingblind@xxxxxxxxxxxxx>
  • Date: Thu, 22 Nov 2007 14:31:21 -0600

Simply put: comprehension of a program is going to happen faster if a button says "start button" instead of just making a ding sound. The user may eventually be able to understand what the ding sound means, but it won't happen as fast as it would if it were just told to them in their spoken language. It is the same as going to a foreign country and being expected to instantly know the language. Comprehension doesn't happen that way. Comprehension is actually the brain comparing new input against what it already knows, and natively spoken language is always going to be the most efficient means of communication.

----- Original Message -----
From: "Andreas Stefik" <stefika@xxxxxxxxx>

To: <programmingblind@xxxxxxxxxxxxx>
Sent: Thursday, November 22, 2007 2:02 PM
Subject: Re: Sonified Debugger vs. Screenreader Question


Hey folks,

I'll respond to Inthane first.

Inthane said:

John hit it correctly on the "may as well do it with the latest version" of
VS 2008; also, there is a newer version of JAWS being implemented as well.

Andreas said:

I checked yesterday afternoon, but it looks as if my university hasn't
yet received Visual Studio 2008 through our academic alliance with
Microsoft. I'll check it out once it arrives.

Inthane said:

But if there is a set of scripts in the
folder for the application just opened, it references those first, and then
the default scripts. An example:

Andreas said:

Ahh, thanks, this is exactly what I was looking for. Ok, so the moral
of the story is that scripts basically make it easier to navigate the
environment, if I'm understanding correctly. Do the scripts have the
capability of changing the audio that is output in any way?

Inthane said:

Your program would be best if it made the interaction of the VS debugger
system easier for screen readers to interpret, along with your use of
sounds to assist in the process.

Andreas said:

Oh, one minor clarification here on my part. It is very, very likely
our program won't be compatible with screen readers in general. We've
built our own text-to-speech into the program, down to the nether
regions of the compiler itself, to be able to test the effectiveness
of the sounds, etc., in a very tightly controlled environment. Of
course, if we were releasing this tool to the public, that would be a
problem, but this way we can get results where the only sound stimuli
are the sounds from our program, nothing else.
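To make that concrete, here's a rough Python sketch of the idea (not our
actual implementation, which lives inside the compiler): the tool speaks
debugger events through its own text-to-speech channel, so no screen
reader ever touches the output. The pyttsx3 package and the event
descriptions here are just stand-ins for illustration.

    import pyttsx3  # third-party TTS package, a stand-in for our built-in speech

    engine = pyttsx3.init()

    def announce(event_description):
        # Speak one debugger event through the tool's own TTS channel,
        # bypassing any screen reader entirely.
        engine.say(event_description)
        engine.runAndWait()

    # Hypothetical event descriptions, purely for illustration:
    announce("breakpoint hit at line 42, in function main")
    announce("variable total changed from 10 to 15")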

So, you can think about it like this: Visual Studio using JAWS, or
whatever tool, outputs sounds that are "supposed" to indicate
information about the computer program. We're basically asking, "If we
integrate the text-to-speech into the compiler, can we give better
sounds that would make the blind programmer more efficient?" Or, put
another way, how much can someone comprehend about a computer program
using one set of auditory cues or another?
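As a sketch of what "one set of auditory cues or another" might mean in
practice, imagine the same stream of debugger events rendered under two
conditions: abstract tones versus spoken descriptions. The Python below
is illustrative only (winsound is the Windows standard-library beep
module; the event types and frequencies are made up):

    import winsound   # Windows-only standard-library module for tones
    import pyttsx3    # third-party TTS package, again just a stand-in

    engine = pyttsx3.init()

    # Cue set A: an abstract earcon (tone) per event type.
    EARCONS = {"breakpoint": 880, "step": 440, "exception": 220}  # Hz

    def render_with_tones(event_type):
        winsound.Beep(EARCONS[event_type], 200)  # 200 ms tone

    # Cue set B: the same events, spoken in plain language.
    def render_with_speech(event_type, detail):
        engine.say(event_type + ", " + detail)
        engine.runAndWait()

    # In a study, a participant would hear only one condition:
    events = [("breakpoint", "line 42"), ("step", "line 43")]
    for kind, detail in events:
        render_with_tones(kind)             # condition A
        # render_with_speech(kind, detail)  # condition B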


Inthane said:

list of the Microsoft built-in hotkeys available on my grab bag site at:
http://grabbag.alacorncomputer.com

Andreas said:

I've dug through this. Very helpful. I still haven't quite gotten the
scripts installed yet, but I'm going to take another crack at that
today or tomorrow.

Thanks for the clarification, Inthane; very helpful!

Andreas
__________
View the list's information and change your settings at
//www.freelists.org/list/programmingblind


