RE: SSIP for Windows -- beta almost ready for release

  • From: Jamal Mazrui <empower@xxxxxxxxx>
  • To: programmingblind@xxxxxxxxxxxxx
  • Date: Thu, 29 Nov 2007 22:37:42 -0500 (EST)

If JAWS is not running when remotely connected, would the default SAPI
speech be used locally?  Also, you mentioned SuperNova support; have you
figured out how to generate speech through its API?  I tried mightily,
without success (speech messages were quirky), and unfortunately the
Dolphin developers never took me up on my offer to work with them until
the problem was resolved.  On the other hand, I have been able to generate
speech messages reliably through the APIs of JAWS, Window-Eyes, and System
Access, so I can foresee abstracting their APIs into a general one for
third-party developers.
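
For what it's worth, here is a rough sketch of what such a general API
might look like.  The interface and method names are purely hypothetical,
and each implementation would wrap one vendor's speech API:

    // Hypothetical common interface over screen reader speech APIs.
    public interface SpeechOutput {
        // Speak the given text, optionally interrupting current speech.
        void say(String text, boolean interrupt);

        // Silence any speech currently in progress.
        void stopSpeech();
    }

A small factory could then detect which screen reader is running and hand
back the matching implementation, so third-party code would never have to
touch the vendor-specific APIs directly.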

Jamal
On Wed, 28 Nov 2007, Sina Bahram wrote:

> Date: Wed, 28 Nov 2007 17:51:07 -0500
> From: Sina Bahram <sbahram@xxxxxxxxx>
> Reply-To: programmingblind@xxxxxxxxxxxxx
> To: programmingblind@xxxxxxxxxxxxx
> Subject: RE: SSIP for Windows -- beta almost ready for release
>
> JAWS does not need to be running for any of this to work, but it can be
> instructed to sleep while a VNC application, an X11 client, and so on is
> active.
>
> Take care,
> Sina
>
>
> -----Original Message-----
> From: programmingblind-bounce@xxxxxxxxxxxxx
> [mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Macarty, Jay
> {PBSG}
> Sent: Wednesday, November 28, 2007 5:44 PM
> To: programmingblind@xxxxxxxxxxxxx
> Subject: RE: SSIP for Windows -- beta almost ready for release
>
> Jamal,
> I'll let Sina respond to the Linux questions. But for the Java questions you
> asked, here are the answers:
>
> 1. Yes, the sample SSIPClient jar, which will come with the server
> installation, could be used to self-voice a Java application.
>
> 2. Simply self-voicing a Java application does not require knowledge or use
> of the accessibility framework. If one simply wished to self-voice a
> particular event, such as a button being pressed or the content of a
> JTextArea, one would only need to create an instance of the SSIPClient
> object in the desired class and then call the sayString method to vocalize
> the desired text (see the sketch further down).
>
> While the use of Swing and the accessibility framework is not required,
> choosing to take advantage of the fact that the framework is there is
> certainly a big plus in self-voicing an application. Also, if one wishes to
> create accessibility tools of a more general purpose, such as a Java-based
> screen reader solution, then utilizing the Java Accessibility API would be
> the best approach.
>
> 3. If you simply wish to self-voice a specific Java application, the only
> thing you need to do is include the SSIPClient jar in your classpath. If you
> are executing multiple applications from the same JRE, you could place the
> jar in the jre\lib\ext directory so that it would be picked up
> automatically. However, if you are self-voicing a single application, it
> would likely be preferable to include the SSIPClient jar in the classpath
> definition for that application.
>
> NOTE: While you need only include the client jar to gain connectivity to the
> SSIP server, you must, of course, make sure that the server executable is
> running to receive the connection before the Java application tries to
> establish a session. This could be handled as simply as placing the launch
> of SSIPVoiceServer.exe in your startup folder.
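>
> For illustration, here is a minimal sketch of what self-voicing a button
> press might look like. The SSIPClient constructor shown is only a guess at
> the actual API; the class name and the sayString method are the pieces
> described above, so check the client jar for the real signatures:
>
>     import java.awt.event.ActionEvent;
>     import java.awt.event.ActionListener;
>     import javax.swing.JButton;
>     import javax.swing.JFrame;
>
>     public class TalkingButtonDemo {
>         public static void main(String[] args) throws Exception {
>             // Assumes SSIPVoiceServer.exe is already running and the
>             // SSIPClient jar is on the classpath.
>             final SSIPClient speech = new SSIPClient();
>
>             JFrame frame = new JFrame("SSIP demo");
>             JButton ok = new JButton("OK");
>             // Self-voice one event: announce the button press.
>             ok.addActionListener(new ActionListener() {
>                 public void actionPerformed(ActionEvent e) {
>                     speech.sayString("OK button pressed");
>                 }
>             });
>             frame.getContentPane().add(ok);
>             frame.pack();
>             frame.setVisible(true);
>         }
>     }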
>
> Again, if your intention is to develop a general-purpose accessibility
> technology, such as a Java screen reader, the configuration of the SSIP
> client itself isn't any harder, but you would likely have to declare the
> accessibility application to the JVM through the accessibility.properties
> file.
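>
> As a rough example, accessibility.properties lives under the JRE's lib
> directory and names the assistive technology class to load into each JVM
> started from that runtime; the class name below is purely hypothetical:
>
>     # jre\lib\accessibility.properties
>     # Load this assistive technology class into every JVM.
>     assistive_technologies=com.example.SSIPScreenReader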
>
>
> In addition to the work Sina is doing, the SSIP server package will, by
> itself, include the server executable and some client wrappers for various
> environments such as Java, AutoIt, and Ruby, plus a DotNet assembly DLL
> that allows one to include an SSIPClient namespace in whatever DotNet
> environment they are using.
>
> Hope this helps.
>
> -----Original Message-----
> From: programmingblind-bounce@xxxxxxxxxxxxx
> [mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Jamal Mazrui
> Sent: Wednesday, November 28, 2007 11:47 AM
> To: programmingblind@xxxxxxxxxxxxx
> Subject: RE: SSIP for Windows -- beta almost ready for release
>
> Congratulations on your progress with this project, Jay!  Like others, I
> confess having trouble understanding the full ramifications.  Could you or
> Sina describe some vignettes from a user's perspective?
>
> I think I understand that this technology would allow someone on a Windows
> computer (e.g., running JAWS) to operate a remote Linux computer with Orca.
> Is that right?  Would JAWS need to be running after the connection was made?
> If so, would there be key conflicts to manage between JAWS and Orca?
>
> Does this technology also allow Java applications to be self-voicing?  Do
> they have to implement the Swing API according to accessibility guidelines?
> If one has a Java app installed, how would the self-voicing part be added?
>
> I know from the quality of your skills and the time you have invested in
> this project that it is something with exciting potential.  I'm just trying
> to get a better grasp of what it would and would not do.  If there are any
> sample apps or audio demos that illustrate the possibilities, that would be
> great.
>
> Cheers,
> Jamal
>
>
__________
View the list's information and change your settings at 
//www.freelists.org/list/programmingblind
