Here's an article you might be interested in. It's about software being developed at Cornell for reading color with sound. The starting point was a doctoral student's need to read weather maps.
Another link to an article about this:
Take Java code that translates images into sound, a rudimentary program capable of converting pixels of various colors into piano notes of various pitches, and what you have is a technology that enables blind people to read maps.
Victor K. Wong, a Cornell University graduate student from Hong Kong who lost his sight in a road accident at age seven, is helping to develop innovative software that translates color into sound. "Color is something that does not exist in the world of a blind person," explains Wong. "I could see before, so I know what it is. But there is no way that I can think of to give an exact idea of color to someone who has never seen before."
He helped develop the software in Cornell's Department of Electrical and Computer Engineering (ECE) with undergraduate engineering student Ankur Moitra and research associate James Ferwerda from the Program of Computer Graphics.
The inspiration for the image-to-sound software came in early 2004, when Wong ran into problems reading color-scaled weather maps of the Earth's upper atmosphere - a necessary part of his doctoral work in Professor Mike Kelley's ECE research group.
It is a field dubbed "space weather," which attempts to predict weather patterns high over the equator for use in the Global Positioning System and other satellite communications. A space weather map might show altitude in the vertical direction (along the "y" axis) and time in the horizontal direction (along the "x" axis), and represent density with different colors.
As a scientist, Wong needs to know more than just the general shape of an image. He needs to explore minute fluctuations and discern the numerical values of the pixels so that he can create mathematical models that match the image. "Color is an extra dimension," explains Wong.
At first, the team tried everything from having Kelley verbally describe the maps to Wong to attempting to print the maps in Braille. When none of those methods provided the detail and resolution Wong needed, he and Ferwerda began investigating software. Moitra later became their project programmer.

"We started with the basic research question of how to represent a detailed color-scaled image to someone who is blind," recalls Ferwerda. "The most natural approach was to try sound, since color and pitch can be directly related and sensitivity to changes in pitch is quite good."
Over the summer of 2004, Moitra wrote a Java routine that could translate images into sound, and in August he unveiled a rudimentary software program capable of converting pixels of various colors into piano notes of various tones.
Wong test-drove the software by exploring a color photograph of a parrot. He used a rectangular Wacom tablet and stylus - a computer input device used as an alternative to the mouse - which gives an absolute reference to the computer screen, with the bottom left-hand corner of the tablet always corresponding to the bottom left-hand corner of the screen.
As Wong guided the stylus about the tablet, piano notes began to sing out. The full range of keys on a piano was employed, allowing color resolution in 88 gradations, ranging from blue for the lowest notes to red for the highest.
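The article does not publish the team's actual code, but the mapping it describes - 88 piano-key gradations running from blue at the lowest notes to red at the highest - can be sketched in Java. In this illustration the class and method names (`ColorToPitch`, `hueToPianoNote`, `pixelToNote`) are hypothetical, and a simple linear hue-to-key mapping is assumed:

```java
import java.awt.Color;

public class ColorToPitch {

    /**
     * Map a hue in degrees (0 = red ... 240 = blue) onto one of the 88
     * piano keys, expressed as MIDI note numbers 21 (A0) through 108 (C8).
     * Blue gives the lowest note, red the highest, as in the article.
     */
    public static int hueToPianoNote(float hueDegrees) {
        float clamped = Math.max(0f, Math.min(240f, hueDegrees));
        return 21 + Math.round((240f - clamped) / 240f * 87f);
    }

    /** Convenience: map a packed RGB pixel to a note via its hue. */
    public static int pixelToNote(int rgb) {
        Color c = new Color(rgb);
        float[] hsb = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null);
        return hueToPianoNote(hsb[0] * 360f); // RGBtoHSB returns hue in [0, 1)
    }

    public static void main(String[] args) {
        System.out.println(hueToPianoNote(240f)); // pure blue -> 21 (A0)
        System.out.println(hueToPianoNote(0f));   // pure red  -> 108 (C8)
    }
}
```

A real implementation would presumably feed the resulting note number to a MIDI synthesizer (e.g. via `javax.sound.midi`) as the stylus moves; this sketch only shows the color-to-key arithmetic.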
The software also has an image-to-speech feature that reads aloud the numerical values of the x and y coordinates as well as the value associated with a color at any given point on the image. "In principle I could turn off the music and just have the software read out the value of each point. I would know what the gradient is in a more absolute sense, but it would get annoying after some time. It keeps reading out 200.1, 200.8, 200.5, and so on," says Wong.
One of the biggest challenges of the project is the so-called "land-and-sea" problem. "Sometimes I just want to know where is the land and where is the sea," says Wong - meaning that he would like to have an idea where the major boundaries in an image lie, such as the boundary between the parrot and the background. The problem hinges on shape recognition, which for Wong can be difficult.
In the simplest situation, the right half of an image would be completely blue and the left half completely red. To find the boundary, Wong has to move the stylus continuously back and forth from one color to the next along the length of the tablet, which is both time-consuming and error-prone.
To solve the land-and-sea problem, Wong, Moitra and Ferwerda tried printing the major boundary lines of an image in Braille and then laying the printed sheet over the Wacom tablet, combining both audio and tactile detection. However, they are still working to develop software that can effectively pick out the important boundaries in an image so that it can be printed.
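One simple way to pick out boundaries of this kind, sketched here as an assumption rather than as the team's actual method, is to flag every pixel whose color differs sharply from its right-hand neighbor; the resulting line map could then be embossed in Braille. The class name `BoundaryFinder` and the threshold value are hypothetical:

```java
import java.awt.image.BufferedImage;

public class BoundaryFinder {

    /**
     * Mark pixels whose color differs from the neighbor to the right by
     * more than the given threshold - a crude horizontal edge detector.
     */
    public static boolean[][] findBoundaries(BufferedImage img, int threshold) {
        int w = img.getWidth(), h = img.getHeight();
        boolean[][] edge = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w - 1; x++) {
                if (colorDistance(img.getRGB(x, y), img.getRGB(x + 1, y)) > threshold) {
                    edge[y][x] = true;
                }
            }
        }
        return edge;
    }

    /** Sum of absolute differences of the R, G, and B channels. */
    static int colorDistance(int rgb1, int rgb2) {
        int dr = Math.abs(((rgb1 >> 16) & 0xFF) - ((rgb2 >> 16) & 0xFF));
        int dg = Math.abs(((rgb1 >> 8) & 0xFF) - ((rgb2 >> 8) & 0xFF));
        int db = Math.abs((rgb1 & 0xFF) - (rgb2 & 0xFF));
        return dr + dg + db;
    }

    public static void main(String[] args) {
        // A tiny "land-and-sea" test image: left half red, right half blue.
        BufferedImage img = new BufferedImage(4, 2, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 2; y++)
            for (int x = 0; x < 4; x++)
                img.setRGB(x, y, x < 2 ? 0xFF0000 : 0x0000FF);
        boolean[][] edges = findBoundaries(img, 100);
        System.out.println("Boundary after column 1: " + edges[0][1]); // true
    }
}
```

Real images are noisier than this red/blue example, which is presumably why the team describes extracting "the important boundaries" as an open problem; a production version would need smoothing and a way to rank edges by salience.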
"It is also important that there is no time delay between notes," says Moitra. "That is something we need to improve. Otherwise the image will become shifted and distorted in Victor's mind."
One of the major issues facing the project is funding. "The initial work was done on a shoestring as a side project to grants Kelley and I have received," says Ferwerda, who is preparing a proposal to the National Science Foundation to extend this work and explore other ideas for making images and other technical content accessible to blind scientists and engineers.
Says Wong: "Tackling complex color images is only one problem out of many that blind scientists are facing. But I think this is a pretty important idea."
accessibleimage@xxxxxxxxxxxxx wrote on 27 June 2007 at 16:46 +0000:
My name is Nayab Begum. I'm a psychology student at Aston University, I am
hoping to pursue a career in neuroimaging, and I am currently trying to
find ways to make the technique more accessible. This will involve being
able to access complex brain images in colour, as well as graphs.
I was wondering whether anyone might be able to recommend the most
appropriate assistive technology for tactile diagrams? We've been looking
into electronically refreshable devices, but there doesn't seem to be
anything on the market, and we're not sure such a device would provide a
sufficient level of detail. From our research, the most advanced technology
seems to be the Tiger embosser, although this also seems to have its
limitations. If anybody knows about the Phantom device, and whether it is
sold in the UK, that would also be very helpful.
Another option we're looking into is converting images to sound. Does
anyone have any experience with this?
Also, for the data analysis we use MATLAB and Scientific Linux. We're in
the process of installing Ubuntu to use the Orca screen reader, but
we're not really sure how much it will be able to read. If anyone has
experience using Linux with a Braille notetaker, I'd also be really
interested to know how compatible it is.
We would be really grateful for any advice.