[liblouis-liblouisxml] Re: Proposal for capital and emphasis in UEB

  • From: "Michael Whapples" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "mwhapples@xxxxxxx" for DMARC)
  • To: liblouis-liblouisxml@xxxxxxxxxxxxx
  • Date: Mon, 02 Feb 2015 14:43:05 +0000

That possibly pushes me into part two of my response.

What I was trying to show is simply which Braille codes I use for what, and why. Yes, I do use different Braille codes for different things (e.g. BAUK for standard text and mathematics, US 8-dot Braille for programming on a Braille display, etc.). I do this because some tools are better for achieving the specific thing I need to do in a given situation. I know that those of you who use Nemeth believe it is the superior Braille code for mathematics, but as I noted, I learnt BAUK first and, for one reason or another, have never managed to make Nemeth my primary code.

I can understand the desire to learn and use only a single system regardless of the situation, but creating such a system is quite a challenge. There is always the risk of producing a system that is passable for everything but outstanding at nothing, where "passable" means it can technically be used but says nothing about actual usability.

I don't want to get into whether UEB has fallen into that trap or not; I don't have enough experience with it to feel qualified to comment. I hope, though, that my comments have been useful for those with more knowledge of the code in judging whether it does what I want from a Braille code in different situations.

Michael Whapples
On 02/02/2015 13:33, Keith Creasy wrote:
Wow, excellent points from both. Also, just in general, the mix of languages and the way 
variables, classes, and the like are written in computer code make trying to use braille, 
especially contracted braille, unwieldy. Computer code is not purely mathematical and it 
is certainly not literary. It is computer program code and the best braille code for it 
is eight-dot "computer" braille.


Keith

-----Original Message-----
From: liblouis-liblouisxml-bounce@xxxxxxxxxxxxx 
[mailto:liblouis-liblouisxml-bounce@xxxxxxxxxxxxx] On Behalf Of Michael Whapples
Sent: Monday, February 02, 2015 8:27 AM
To: liblouis-liblouisxml@xxxxxxxxxxxxx
Subject: [liblouis-liblouisxml] Re: Proposal for capital and emphasis in UEB

Hello,
I am not sure I want to get into the discussion of whether UEB is better or 
worse than any other Braille system. Susan does make some good points, and I 
would like to give some detail of how I work as a Braille user.

Firstly, a quick background. I am based in the UK, so I was initially taught 
BAUK, and this is probably still my primary Braille code. I have seen a little 
Nemeth; however, whether through habit or something else, I have never managed to 
make it my primary Braille code for mathematical content.

My work is mostly computer programming, and for that I do use a Braille 
display with the simple US 8-dot computer Braille mapping (what BRLTTY terms 
NABCC). My main reason for this is that the one-to-one mapping between what I 
read and what is on the screen lets me just get on with the actual coding. 
Also, if the source code, or other text files, have some tabular layout, then 
the one-to-one mapping of cells to characters means I can appreciate the 
formatting and meaning of the code in much the same way a sighted person 
probably does. This last point matters most when using a Unix console.
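
As a minimal sketch of that column-alignment point (this assumes the liblouis 
Python bindings and these table file names, which vary between installations; 
it is an illustration, not part of Michael's setup):

    import louis  # liblouis Python bindings (assumed to be installed)

    # Rows of a small aligned "table" as they might appear in a text file.
    rows = [
        "name      type     default",
        "verbose   bool     false",
        "retries   int      3",
    ]

    for row in rows:
        comp8 = louis.translateString(["en-us-comp8.ctb"], row)  # 8-dot computer braille
        ueb2 = louis.translateString(["en-ueb-g2.ctb"], row)     # contracted UEB
        # The computer table maps each character to one cell, so the columns
        # keep their alignment on the display; a contracted table changes
        # line lengths row by row, so the columns drift.
        print(len(row), len(comp8), len(ueb2))
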

Now to a point Susan made regarding input. I do type using a standard keyboard. 
I am not fully sure of my reasons why, but Braille input has always seemed 
awkward, particularly when dealing with a computer. Maybe it's that I learnt to 
touch type first and so that is my natural way of writing, or maybe it's 
something else. Maybe it's a feeling of unease about back translation, which 
perhaps leads me to another of Susan's points.

Namely, whether back translation can be reliable with human-produced Braille. 
A number of errors may occur in my own writing of Braille: I simply forget a 
particular contraction and so write something out in grade one; I make a typo; 
or I fall into what I term relaxed or slack Braille (I know that a human 
reader has common sense, so in most cases I may drop certain strict rules where 
the meaning is obvious to a human; ".profile" might be such an example, since in 
a Unix book "disprofile" would not make any sense to a Unix user).

Maybe some of these errors should not occur (particularly my relaxing of certain 
rules), but I doubt they can all be prevented (typos in particular). So yes, 
could back translation ever be resilient against these errors, or at least 
detect them?
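
One cheap way software could at least flag suspect spots is a round-trip 
check: back-translate a line, re-translate the result, and see whether the 
original cells come back. A rough sketch, again assuming the liblouis Python 
bindings and an assumed table name:

    import louis

    TABLES = ["en-ueb-g2.ctb"]  # assumed table; substitute the code actually in use

    def suspicious_lines(braille_lines):
        """Yield (line number, original braille, recovered print) for lines
        where braille -> print -> braille fails to reproduce the original
        cells. Not proof of an error, just a hint that a rule was bent."""
        for lineno, cells in enumerate(braille_lines, start=1):
            recovered = louis.backTranslateString(TABLES, cells)
            again = louis.translateString(TABLES, recovered)
            if again != cells:
                yield lineno, cells, recovered

This would not catch everything (an error whose back-translation happens to 
re-translate to the same cells slips through), but it needs no reference text.
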

On a slightly separate note, I personally feel back translation is the wrong 
approach: if someone can only work in Braille then they are setting themselves 
up for a life of reliance. The reliance may be on software or on a human 
transcriber, but either way it means relying on something else performing 
correctly. Being able to type my own material, I feel, gives me greater 
independence in producing something as and when I want it.

Michael Whapples
On 28/01/2015 19:14, Susan Jolly wrote:
As most of you know, I have long opposed UEB for use in the United
States and, not surprisingly, I still do so.  Just to be complete, I
am a sighted retired computational scientist who became interested in
braille software development back in 2000. Because of this interest I
was lucky enough to meet John Boyer back then.  I would estimate that
I have spent the equivalent of at least two working years on UEB
issues:  studying UEB in great detail, trying to help various
organizations who were opposing it, and carefully documenting my
objections on my website. I realize that the current situation is
unlikely to change but I would like to respond to a few of the
comments in this thread.

As a programmer of more than 40 years, I find it essential, when I read
technical material that includes software fragments, that those fragments
be identical to what would appear in working software that incorporates
them. That is, I should be able, whenever I wish, to copy and paste these
software fragments directly into actual computer code. I find it impossible
to comprehend why braille-using programmers wouldn't prefer to learn, read,
or write computer code using the same characters that sighted people do.
I've seen programs written by people whose native language is French or
another non-English language with an alphabet similar to English (ASCII),
and while the comments are often in their native language, they do not
translate the keywords of the programming language into it.
(It's possible there are cases I'm unaware of, where programming languages
written in other alphabets require backtranslation before compiling.)

On a similar note, it is my impression that many braille-using adults
prefer to enter print or computer code directly, using either a standard
keyboard or the default 8-dot computer braille table built into their
braille display, and have no trouble switching back and forth between
computer braille and six-dot contracted braille.

Since the deficiencies of UEB math have been well-documented elsewhere, I
have just this one statement: as a computational mathematician, I have
been continually impressed with how Nemeth math represents the true
nature of mathematics in a way that I've not seen any other braille
system come close to.

Next I'd like to make two technical comments about translating and
back-translating.  First, both of these processes are technically
speaking examples of parsing. Those of you with an advanced computer
science background are likely aware that the standard techniques long
used for lexical analysis and parsing are quite different from the way
these processes are handled in table-based braille software.  I
understand that the use of tables is intended to make it possible for
the same engine to translate according to numerous braille systems and
I've observed that the popularity of this feature is a primary reason
for the widespread adoption of liblouis. However, I wouldn't be
surprised if some of the advances in parsing lead to new approaches to
braille software in the future.
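
A minimal sketch of the table-driven design being described, assuming the
liblouis Python bindings (table file names are assumptions and differ
between releases): the engine call is identical, and the braille system is
selected purely by the table list.

    import louis

    text = "for (int i = 0; i < n; i++)"

    # Same engine entry point; different braille systems via the table list.
    for tables in (["en-ueb-g2.ctb"], ["en-us-comp8.ctb"]):
        cells = louis.translateString(tables, text)
        print(tables[0], cells)
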

The second comment is specific to backtranslating UEB. My understanding
is that there is a "mathematical" proof that the prefix-root nature of
UEB makes it possible to automate fully correct backtranslation of UEB.
(It may be that the use of shortform contractions violates this; I'm not
sure. If so, one would need to scan for shortforms first.)

However, my concern is not whether correct UEB can be automatically
backtranslated; it is whether human-produced UEB, which is likely to
contain various errors, can be automatically backtranslated. It is
important to realize that it is the presence of extra rules, more than
the elimination of a few problematic contractions, that makes accurate
UEB backtranslation potentially automatable. For example, as was pointed
out in an earlier post on this thread, UEB allows a leading period (full
stop) if the item is preceded by a Grade 1 indicator. In other words, the
real question is whether a UEB backtranslator can localize braille errors
just as a compiler can generally find all the mistakes in a piece of
code without crashing.
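
To make the compiler analogy concrete, here is a hedged sketch (same assumed
Python bindings and table name as the earlier round-trip idea) of localising
problems instead of rejecting a whole document: run the round-trip check word
by word, report the positions that fail, and keep going.

    import louis

    TABLES = ["en-ueb-g2.ctb"]  # assumed table name

    def localize_roundtrip_failures(braille_line):
        """Return (word index, braille word, recovered print) for each
        whitespace-delimited word whose cells do not survive a
        back-translate/re-translate round trip; continue past failures,
        the way a compiler reports many errors in one pass."""
        reports = []
        for i, word in enumerate(braille_line.split()):
            recovered = louis.backTranslateString(TABLES, word)
            if louis.translateString(TABLES, recovered) != word:
                reports.append((i, word, recovered))
        return reports
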

Finally, congratulations and best wishes to everyone who has been
and/or still is working so hard to make liblouis a success.  This is
truly an impressive project of worldwide importance!

Sincerely,
Susan Jolly
www.dotlessbraille.org

For a description of the software, to download it and links to project pages go to http://www.abilitiessoft.com
