Hi James,

I agree with you. This is an example of how some tables rely on character definition rules to "display" dot patterns. The actual translation phase and the encoding phase should be made completely independent, and we can achieve that by using display rules. The feature that automatically maps a dot pattern to a display character when it encounters the first character definition in a table that maps a character to that dot pattern is convenient (it allows you to skip the definition of display rules) but not really needed. Maybe this feature should be deprecated in order to avoid this kind of weirdness.

However, with math you need to be extra careful. If I understand correctly, because of the way editing is currently implemented, the edit table needs to do a "decoding" phase (the inverse of "displaying") prior to the actual "editing". This means that you can't use a different display table without using the counterpart of that display table in your edit table.

HTH,
Bert

James Teh writes:
> Hi all,
>
> I am extremely confused by the way some of UK Maths has been implemented
> in liblouis*. I'd greatly appreciate it if someone could help clarify.
> Ultimately, I want Unicode braille output. This seems to be extremely
> difficult for UK Maths because of the way it's implemented.
>
> Let's take a simple fraction which I'll present in LaTeX for brevity:
> \frac{a+b}{c}
> As I understand it, in UK Maths, the braille should be (using US
> computer braille encoding):
> <a ;6b>_/c
>
> In ukmaths_edit.ctb, \x0003 (which is defined as the start indicator for
> a fraction in ukmaths.sem) is defined as dots 5 6. Similarly, \x0004
> (the end indicator for a fraction) is defined as dots 4 5. As I
> understand it, they should be dots 1 2 6 and 3 4 5, respectively.
>
> From what I can see, this has been done because
> ukmaths_single_cell_defs.cti defines "<" as dots 5 6 and ">" as dots 4
> 5.
> The practical upshot is that if you use us-table.dis as the first
> table in mathExprTable for ukmaths.cfg, this does give you valid US
> computer braille output. However, as far as I can see, it's impossible
> to get valid Unicode braille output.
>
> To make things more confusing, ukmaths_single_cell_defs.cti defines
> digits as upper digits with dot 6, so dots 1 2 6 is actually the digit
> "2" unless defined earlier (e.g. by us-table.dis).
>
> Why is it implemented this way? Shouldn't how the output is encoded be
> determined last, rather than the tables hacking around to get a specific
> output encoding? That is, the dots in the tables should all be correct
> so that any output encoding can be used, whether it be US computer
> braille, Unicode braille or something else. So, \x0003 should be dots 1
> 2 6 and \x0004 should be dots 3 4 5.
>
> Does anyone actually use these tables currently? Would such a change
> break things and why?
>
> Thanks,
> Jamie

For a description of the software, to download it and links to project pages go to http://www.abilitiessoft.com
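Bert's suggestion of decoupling translation from encoding via explicit display rules can be illustrated with the liblouis `display` opcode, which maps an output character to a dot pattern in one dedicated table. An illustrative fragment (not taken from the actual UK Maths tables; the characters chosen here are the Unicode braille cells for these patterns):

```
# Illustrative display-table fragment: explicit display rules mapping
# output characters to dot patterns, independent of translation rules.
display \x2823 126    # dots 1-2-6 (fraction start indicator)
display \x281C 345    # dots 3-4-5 (fraction end indicator)
```

With such rules in place, the implicit fallback that derives display characters from the first character definition would never be consulted, avoiding the weirdness described above.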
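For reference, the Unicode braille output Jamie is asking for is mechanical once the dot numbers in the tables are correct: each of the eight dots corresponds to one bit of a code point in the U+2800 Braille Patterns block, so no per-character display hackery is needed. A minimal sketch (the function name is mine):

```python
# Unicode braille: dot n sets bit (n - 1), added to the block base U+2800.
def dots_to_unicode(dots):
    """Return the Unicode braille character for a list of dot numbers (1-8)."""
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

# The fraction indicators discussed in the thread:
print(dots_to_unicode([1, 2, 6]))  # dots 1-2-6 -> U+2823
print(dots_to_unicode([3, 4, 5]))  # dots 3-4-5 -> U+281C
print(dots_to_unicode([5, 6]))     # dots 5-6   -> U+2830
```

This is why, as Jamie argues, storing the correct dots (1 2 6 and 3 4 5) in the tables would let any output encoding be applied as a final step.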