RE: SSIP for Windows -- beta almost ready for release

  • From: Jamal Mazrui <empower@xxxxxxxxx>
  • To: programmingblind@xxxxxxxxxxxxx
  • Date: Thu, 29 Nov 2007 22:30:31 -0500 (EST)

Hi Jay,
Are you saying that this would make a Java application self-voicing even
if it were built without accessibility in mind, or that a developer
interested in accessibility could make an application self-voicing by
making calls to this server?  If the latter, is the idea similar to the
extra speech messages I have implemented in my applications?  In other
words, is this an API whereby a developer can make an application
generate speech using the screen reader currently in use?  Is the idea
that the developer does not have to know the particular screen reader
API, but can write to a general one and SSIP will determine what screen
reader is in use and how to make it talk?

Jamal
On Wed, 28 Nov 2007, Macarty, Jay {PBSG} wrote:

> Date: Wed, 28 Nov 2007 16:43:39 -0600
> From: "Macarty, Jay  {PBSG}" <Jay.Macarty@xxxxxxxx>
> Reply-To: programmingblind@xxxxxxxxxxxxx
> To: programmingblind@xxxxxxxxxxxxx
> Subject: RE: SSIP for Windows -- beta almost ready for release
>
> Jamal,
> I'll let Sina respond to the Linux questions. But for the Java questions
> you asked, here are the answers:
>
> 1. Yes, the sample SSIPClient jar, which will come with the server
> installation, could be used to self voice a Java application.
>
> 2. Self voicing a Java application does not require knowledge or use
> of the accessibility framework. If one wished to voice a particular
> event, such as a button being pressed or the content
> of a JTextArea, one would only need to create an instance of the
> SSIPClient object in the desired class and then call the sayString
> method to vocalize the desired text.
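To make that concrete, here is a minimal sketch of voicing a button press the way Jay describes. The real SSIPClient class ships with the server installation and its constructor is not shown in this thread, so the sketch substitutes a tiny stand-in that models only the sayString method; treat the class layout as an assumption, not the actual client API.

```java
import javax.swing.JButton;
import java.util.ArrayList;
import java.util.List;

public class SelfVoicingDemo {
    // Stand-in for the real SSIPClient from the server's jar. Only the
    // sayString method mentioned in this thread is modeled; the real
    // client would forward the text to the running SSIP server.
    static class SSIPClient {
        final List<String> spoken = new ArrayList<>();
        void sayString(String text) {
            spoken.add(text); // placeholder: record instead of speaking
        }
    }

    public static void main(String[] args) {
        SSIPClient ssip = new SSIPClient();
        JButton save = new JButton("Save");
        // Voice the event directly, without the accessibility framework
        save.addActionListener(e ->
                ssip.sayString(save.getText() + " button pressed"));
        save.doClick(); // simulate a press
        System.out.println(ssip.spoken.get(0));
    }
}
```

The same pattern would apply to reading out the content of a JTextArea: fetch its text and hand it to sayString.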
>
> While the use of Swing and the accessibility framework is not required,
> taking advantage of the framework when it is present is a big plus in
> self voicing an application. Also, if one wishes to create accessibility
> tools of a more general purpose, such as a Java-based screen reader
> solution, then utilizing the Java Accessibility API would be the best
> approach.
>
> 3. If one simply wished to self voice a specific Java application, the
> only thing needed would be to include the SSIPClient jar in the
> classpath. If you were executing multiple applications from the same
> JRE, you could place the jar in the jre\lib\ext directory so that it
> would be picked up automatically. However, if you were self voicing a
> single application, it would likely be preferable to include the
> SSIPClient jar in the classpath definition for that application.
>
> NOTE: While you need only include the client jar to gain connectivity
> to the SSIP server, you must, of course, make sure that the server
> executable is running to receive the connection before the Java
> application tries to establish a session. This could be handled as
> simply as placing the launch of SSIPVoiceServer.exe in your startup
> folder.
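As a sketch of that startup ordering, a Windows batch file could start the server before the application. The install path, the jar file name SSIPClient.jar, and the application class MyApp below are hypothetical, chosen only to illustrate the sequence described above.

```bat
rem Start the speech server first so the client can connect
rem (the install path shown is an assumption for illustration)
start "" "C:\Program Files\SSIP\SSIPVoiceServer.exe"

rem Then launch the self-voicing Java application with the client jar
rem on its classpath; SSIPClient.jar and MyApp are hypothetical names
java -cp SSIPClient.jar;. MyApp
```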
>
> Again, if your intention is to develop a general purpose accessibility
> technology, such as a Java screen reader, the configuration of the SSIP
> client itself isn't any harder, but you would likely have to register
> the accessibility application with the JVM through the
> accessibility.properties file.
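For that general-purpose case, the registration happens in the JRE's accessibility.properties file (under the JRE's lib directory), whose assistive_technologies key names the classes the JVM should load at startup. A minimal sketch, with a hypothetical screen-reader class name:

```properties
# jre\lib\accessibility.properties
# Tell the JVM to load the assistive technology at startup;
# the class name below is a hypothetical example
assistive_technologies=org.example.SSIPScreenReader
```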
>
> In addition to the work Sina is doing, the SSIP server, by itself, will
> include the server executable and some client wrappers for various
> environments such as Java, AutoIt, and Ruby, plus a DotNet assembly DLL
> that lets one include an SSIPClient namespace in whatever DotNet
> environment they are using.
>
> Hope this helps.
>
> -----Original Message-----
> From: programmingblind-bounce@xxxxxxxxxxxxx
> [mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Jamal Mazrui
> Sent: Wednesday, November 28, 2007 11:47 AM
> To: programmingblind@xxxxxxxxxxxxx
> Subject: RE: SSIP for Windows -- beta almost ready for release
>
> Congratulations on your progress with this project, Jay!  Like others, I
> confess having trouble understanding the full ramifications.  Could you
> or Sina describe some vignettes from a user's perspective?
>
> I think I understand that this technology would allow someone on a
> Windows computer (e.g., running JAWS) to operate a remote Linux computer
> with Orca.  Is that right?  Would JAWS need to be running after the
> connection was made?  If so, would there be key conflicts to manage
> between JAWS and Orca?
>
> Does this technology also allow Java applications to be self voicing?
> Do they have to implement the Swing API according to accessibility
> guidelines?  If one has a Java app installed, how would the self-voicing
> part be added?
>
> I know from the quality of your skills and the time you have invested in
> this project that it is something with exciting potential.  I'm just
> trying to get a better grasp of what it would and would not do.  If
> there are any sample apps or audio demos that illustrate the
> possibilities, that would be great.
>
> Cheers,
> Jamal
>
>
> __________
> View the list's information and change your settings at
> //www.freelists.org/list/programmingblind
