Additionally, the list's configuration has something to do with why this happens. This list directs standard replies to the whole list rather than to the original sender, so in order to respond directly to a sender, either the sender's e-mail address must be clearly contained in the original message or the user must go digging through the message properties for it. Very doable, but if the list sent reply-to-all messages to the list and standard replies to the original sender, you'd see less straying off topic, because the off-topic conversation would naturally migrate to private messages between its participants. I don't know whether Jeff cares one way or the other about this, but I know I'm more apt to just hit reply when there's a gray area, simply because it's far easier. If all implementations of reply lead back to the list, you're going to see messages go to the list that might otherwise have been redirected elsewhere.
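The routing difference described above comes down to whether the list rewrites the Reply-To header. As a minimal sketch (the addresses are the masked ones from this thread, used purely for illustration), a mail client's plain "reply" typically targets Reply-To when present and falls back to From otherwise:

```python
from email.message import EmailMessage

def reply_target(msg):
    """Where a plain 'reply' goes: the Reply-To header if the
    list has set one, otherwise the original sender's From."""
    return msg["Reply-To"] or msg["From"]

# A list that munges Reply-To (like this one): replies go to the list.
munged = EmailMessage()
munged["From"] = "sender@example.org"
munged["Reply-To"] = "programmingblind@xxxxxxxxxxxxx"

# A list that leaves Reply-To alone: replies go back to the sender.
plain = EmailMessage()
plain["From"] = "sender@example.org"

print(reply_target(munged))  # the list address
print(reply_target(plain))   # the original sender
```

This is exactly the gray-area effect described above: with the munged headers, the lazy default ("just hit reply") lands on the list, so borderline off-topic traffic stays on-list instead of moving to private mail.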
JW

Matthew2007 wrote:
You're indulging in the exact same behavior you're criticizing, but I feel you can do so just like the rest of us.

Matthew

----- Original Message -----
From: "Veli-Pekka Tätilä" <vtatila@xxxxxxxxxxxxxxxxxxxx>
To: <programmingblind@xxxxxxxxxxxxx>
Sent: Saturday, November 24, 2007 4:57 AM
Subject: Re: Semi OT: Usefulness of Auditory Icons, Mercator (Was: Sonified Debugger vs. Screenreader Question)

Hi Jared et al,

To get to the actual topic, skip down past the fluff.

Fluff: This first bit is addressed not to Jared in particular but to almost everyone in this thread. I hate to be a killjoy, but once again I think we're getting pretty badly off topic on this list, though I'm not a moderator here. Research is fine, but discussions of the merit of research itself, the McDonald's analogies, whether or not someone likes music in the background with speech, and so on are neither blind programming nor the gathering of valuable research data or methodology on blind programming. Screen reader internals from a programming point of view certainly are on topic, but please start another technical (rather than end-user) thread on them, take most OT things off-list, tag the rest as OT, and change the subject lines accordingly, so that I can tell what the last reply in a message is about without reading through the whole thing. Warning: if the subject doesn't catch my interest, I won't read the message unless the author is a regular I like. These are things a lot of people never seem to do here, and I've ranted on this before, so I'll stop. If this were a Usenet programming newsgroup, a lot of people would have complained already; at least that's how the Perl culture is in comp.lang.perl.misc, which I've been reading for about a year now. Errm, seems my own message has a lot of OT-ish interesting stuff, so who am I to talk <grin>.
Now here's a guideline: if you're mostly going to reply to the fluff bits, marking something that's OT here to me, please do reply, but reply off-list, changing subjects and snipping accordingly. End fluff.

Regarding auditory icons, I'd like to share an experience which might shed some light on attitudes. Initially, when I read about auditory icons about a year ago, I was not really convinced, though I found the article mightily interesting. The article I'm referring to is:

[37] W. Gaver, "The SonicFinder: An Interface That Uses Auditory Icons", Human-Computer Interaction, Vol. 4, No. 1, 1989, pp. 67-94.

Another such incident happened a little later when I read the UI guy Alan Cooper stating quite plainly, in About Face 2.0, that computers should make reassuring noises when they work well; we have been conditioned by rude error beeps in current computer systems. Then I started experimenting with Windows sound schemes as a user. I tried fast AT&T sampled speech, the fastest Dolphin Orpheus formant synth setting of 700 (despite not being a native English speaker), and bits of audio borrowed from Mac OS X. The AT&T voice was easy to listen to but not very fast or efficient, while the very rapid formant synth took me quite a long time to understand, too. Now, I never like music in the background personally, despite loving music and doing computer music myself (when I listen to music, I generally do nothing else), but when I read a document I concentrate intensely on the speech and can understand it blazingly fast. However, when I get an unexpected, rapid spoken prompt in response to some off-screen event, e.g. the battery running out or a breakpoint being hit, it takes me quite a while to realize, aha, this is not from the screen reader, and I also grasp the meaning well after having already heard the prompt. In my informal experiments, I found that the Mac OS X sound scheme actually works much better than any of the speech synth prompts.
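The experiment described above — fixed sound effects for binary off-screen events, speech for everything else — can be sketched as a toy routing table. Everything here (event names, file names, the `notify` function) is made up for illustration and is not any real screen reader API:

```python
# Hypothetical sketch: route binary off-screen events to short earcons
# and leave detailed content to speech. Event and file names are invented.

EARCONS = {
    "battery_low":    "battery.wav",
    "breakpoint_hit": "breakpoint.wav",
    "build_done":     "done.wav",
}

def notify(event, detail=None):
    """Return the notification to present for an event.

    Binary events (did it happen or not?) get a fixed, instantly
    recognizable sound; anything needing detail falls back to speech.
    """
    if detail is None and event in EARCONS:
        return ("play", EARCONS[event])
    return ("speak", f"{event.replace('_', ' ')}: {detail or ''}".strip())

print(notify("breakpoint_hit"))            # ('play', 'breakpoint.wav')
print(notify("compile_error", "line 42"))  # ('speak', 'compile error: line 42')
```

The design choice mirrors the observation above: a unique sample is recognizable from its first fraction of a second, whereas an unexpected spoken prompt has to be parsed in full before its meaning lands.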
I do think it would be interesting to convey some attributes in the UI with a conventional subtractive analog synth, especially for people who have a mental model of such a synth's operation; however, this experiment of mine used static samples instead. In brief, what I found was that for conveying binary information, such as whether event e happened or not, I could recognize the uniqueness of a particular sound effect very quickly after the beginning of the sample, whereas a speech prompt used for the same purpose took much more effort to parse in my head. So all this goes to show that your initial attitudes might be wrong, too.

Fluff: As a historical note, before sample-playback synths were commonplace and while ROM was expensive, the guys at Roland figured out that the attack portion of a sound is mightily important for recognizing natural, acoustic instruments. Cut the attack portion off a piano sound, and it is hard to tell from the decay tail alone that it is, in fact, a piano. So they sampled the attacks and generated the rest of the sound using synthesis techniques, and it worked out surprisingly well at the time. Personally, I've never been a big fan of the Roland D series synths myself, but hey, that's just me. End fluff.

The last thing I wanted to ask about is the research screen reader Mercator, mentioned at least in:

@article{mercator,
  title   = {{Transforming Graphical Interfaces Into Auditory Interfaces for Blind Users}},
  author  = {Mynatt, E. D.},
  journal = {Human-Computer Interaction},
  volume  = {12},
  number  = {1 \& 2},
  pages   = {7--45},
  year    = {1997}
}

Pardon the BibTeX notation. Mercator used both auditory icons and a tree navigation approach I haven't seen since in any Windows screen reader, and thus I am interested in it. I even found the source code on-line but could not find an ancient enough SunOS machine, nor the hardware synth, for actually running the reader. Is there any way to run that reader on Linux or in some mainframe VM?
In particular, I did not find the sound effects used for the auditory icons for UI overviews in the source code package. Neither could I grasp exactly which on-screen elements of the UI generated the different levels of the tree. There are some examples, but ideally I'd like to try out the reader to figure out its operation myself and test whether I understood it correctly. Any ideas as to how that's possible? Are there on-line recorded demos of Mercator usage, for instance?

Fluff: All in all, I think it is remarkably sad how few of the research prototypes, Web sites, and other things referenced in accessibility articles are actually linked to and freely available on-line. After reading a good article, what I would like to do is try out the things its authors created, yet often this is a no-can-do. Isn't science supposed to be public and open to everyone? In general, I don't care about the source; binaries would be good enough. End fluff.

Guess that's all.

--
With kind regards,
Veli-Pekka Tätilä (vtatila@xxxxxxxxxxxxxxxxxxxx)
Accessibility, game music, synthesizers and programming:
http://www.student.oulu.fi/~vtatila

Jared Wright wrote:

"The research certainly is intriguing and worthwhile, and users possibly playing music in the background shouldn't slow it up at all." I'd hoped to make it clear that whatever research is being conducted is by no means a bad thing. However, the general idea this research seemed to be pointing to was some sort of assistive technology that utilized a sophisticated sound scheme to inform users, rather than the mostly text-to-speech mediums we get information through now. There had been prior discussion, in this and other threads, about the possible practicality of such an idea, and my remark was intended to provide an additional take on the overlying concept and to bring something to the general idea's discussion that had, to my knowledge, been overlooked.
Perhaps you are unconsciously allowing John's apparent misgivings about the research to modify the context of my own remark. Rest assured, I need no convincing of the potential merit of such research. My impression is that Andreas, compared to much of the list, is somewhat unfamiliar with the kinds of assistive technology we regularly use and just how we use it. He seems to be making a thorough effort to learn these things. Knowing that at least one blind user, and I would imagine plenty more, listens to music or consistently listens to some other sound source while computing may affect his research now or in the future. It wouldn't be the first time someone thought the tunes went off to account for all the visual access technology I use, and given where the general direction of this research seems to be heading, I think it's a relevant tidbit.

JW

__________
View the list's information and change your settings at //www.freelists.org/list/programmingblind