[accessibleimage] vOICe, ICAD 2006, Synesthesia, and sonification

Hi,
This is a bit short notice, but ICAD, the international conference held annually on the topic of auditory display, will be holding its conference 20-23 June at Queen Mary, University of London. Some of the interesting presentations will be:
/Cognitive-map forming of the blind in virtual sound environment/ by Makoto Ohuchi, Yukio Iwaya & Yôiti Suzuki; Tetsuya Munekata
/Trajectory capture in frontal plane geometry for visually impaired/ by Martin Talbot & Bill Cowan
There will be an auditory graph session and lots and lots of really interesting talks.


I'm sending a couple of articles about the vOICe, developed by Dr. Peter Meijer. An article from Scientific Computing World discusses both Peter Meijer's project and the project of Joshua Miele of the Smith-Kettlewell Eye Research Institute. The same article also tells the story of Eye-Borg and the sonification of colour. I'm also sending links to sites and articles.
Regards,
Lisa



Link to the vOICe article
http://www.damninteresting.com/?p=581
Meijer, Miele, Eye-Borg
http://www.scientific-computing.com/scwmarapr05sonification.html
The vOICe site
http://www.seeingwithsound.com/

ICAD 2006
http://www.dcs.qmul.ac.uk/icad2006/
Synesthesia
http://www.damninteresting.com/?p=450
Matlab braille support and sonification toolbox
www.ski.org/skdtools/


Can You Hear the View? <http://www.damninteresting.com/?p=581>
Posted by Cynthia Wood <http://www.damninteresting.com/?page_id=325> on June 15th, 2006 at 11:09 pm


[Image: cochlear implant electrode]

Cybernetic senses have been the subject of science fiction for decades. The idea of using sophisticated technology to repair damaged bodies, or even to enhance normal ones, has a tremendous appeal – but how far have we progressed towards that goal?

In some ways, we’ve gotten amazingly far. Cochlear implants are now a normal – if controversial – treatment for deafness. They substitute for damaged or missing portions of the inner ear, gathering and processing sound. The first generation of cochlear implants provided only a distant approximation of sound, making them of limited usefulness, particularly for understanding speech sounds. Even the more sophisticated models of today have yet to approach the functionality of a normal ear, though they are far more useful than their predecessors.

While those dealing with cybernetic hearing seem to have decided upon their basic approach, dealing with lost vision is a different ball game. Several different research groups, using various methods, are attempting to produce cybernetic vision. Some, like cochlear implants, seek to replace a malfunctioning part. Others are attempting to produce something entirely new that will nonetheless function as vision. One of the projects that is furthest along uses exactly that kind of substitution. Rather than attempting to somehow re-engineer the eye, the vOICe system ('Oh, I see') uses sound to bypass the eyes altogether, substituting the ears in their stead.

The vOICe system was developed by Dr. Peter Meijer - a senior researcher with Philips Research Laboratories (a Netherlands-based company). It uses a computer program, a pair of video-sunglasses (sunglasses with a small video camera on the bridge), and a pair of stereo earphones to provide an auditory image of the world. Scanning left-to-right, once a second, the program translates the images seen by the camera into coded sound for the user to interpret. The format is fairly simple – louder sounds mean brighter, higher pitch means that something is higher up in the visual field, and so forth.
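For readers who want to experiment, a minimal Matlab sketch of that kind of mapping could look like the following. This is not Dr Meijer's actual algorithm; the random test image, the 500-5000 Hz frequency range, and the amplitude scaling are assumptions made purely for illustration.

    % Sketch of a vOICe-style left-to-right image sweep (illustrative only).
    % Each image column becomes one time slice; row position sets pitch,
    % brightness sets loudness.
    Fs = 22050;                    % audio sample rate (assumed)
    img = rand(64, 64);            % stand-in for a greyscale camera frame, values 0..1
    [rows, cols] = size(img);
    sweepTime = 1;                 % one scan per second, as described above
    samplesPerCol = round(Fs * sweepTime / cols);
    freqs = logspace(log10(500), log10(5000), rows);  % assumed frequency range
    freqs = freqs(end:-1:1);       % row 1 is the top of the image, so it gets the highest pitch
    y = [];
    for c = 1:cols
        t = (0:samplesPerCol-1) / Fs;
        slice = zeros(1, samplesPerCol);
        for r = 1:rows
            slice = slice + img(r, c) * sin(2*pi*freqs(r)*t);  % brighter pixel -> louder partial
        end
        y = [y, slice / rows];     % keep the summed amplitude within range
    end
    sound(y, Fs);                  % play the one-second soundscape

Swapping the random matrix for a real greyscale frame (scaled to 0..1) gives a crude idea of how a single sweep of the camera image ends up as sound.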

Learning to translate the sounds into visual meaning is another task altogether. Users of the device liken it to learning a foreign language. As with a foreign language, the more the vOICe system is used, the more quickly the user gains facility. Most users appear to start the learning process with a set of images on a computer screen (the software is available as a free download off the Internet), and then progress to using the mobile system. The amount of information that can be distinguished even by a novice user is fairly impressive. Within a week of starting use, at least one congenitally blind woman reported being able to distinguish walls, stairs, and windows in her house, as well as whether the lights were on or not. The vOICe website reports that a trained user can distinguish approximately 1000 to 4000 pixels per one-second scan. Comparatively speaking, an average sighted person can identify visual objects in an image of 32×32 pixels – or about 1024 pixels.

In what may be the most interesting part of the vOICe system, its constant use seems to cause a sort of induced synesthesia in the user - a cross-wiring of the senses, where input from one sense is perceived in another. Brain plasticity – the ability of the brain to rearrange itself in response to demand – seems to come into play, as the brain sorts auditory input into visual data. Some previously sighted vOICe users report consistent visual responses comparable to blurry or foggy vision, while their awareness of the sounds themselves recedes into the background. Users who have been blind from birth obviously cannot compare their experiences to a previous experience of vision, but they too seem to rapidly stop processing the vOICe input as auditory data.

[Image: a 32 x 32 pixel image]

While the results so far are exciting, there are a few downsides to the vOICe system. Since distinguishing the auditory landscapes requires good hearing, the system is not going to work well, if at all, for someone with any sort of hearing impairment. Additionally there is some concern that the headphones and the sounds produced by the system could interfere with normal hearing function while the system is in use. However, the difference in how the brain processes the vOICe data seems to enable less interference between the competing sounds than one might imagine, with some users reporting being able to use the system while sewing, listening to TV, or even listening to music.

Slow scanning speed is likely to be the most difficult problem to improve. While retinal or brain implants (neither yet available for general use, but under investigation) allow scanning the visual field between four and eight times a second, the vOICe scan is only once a second - a fairly slow pace for interacting with the world while moving, but dictated by the need for the user to process the sounds. The brain seems to be able to compensate, though, and the vOICe users have few complaints about slow updating speed.

The last major limitation is that of current technology. While the system is portable, the need to carry a laptop and wear a video camera and earphones does use up some carrying capacity. The limitations of the laptop battery also make venturing forth for long periods of time problematic. These problems will ease with time, though. Even in the short time the vOICe has been available, new and smaller devices have become available for each of the system components. A major plus for the system is the ease of upgrading as newer technology comes on-line. Since the system is physically separate from the user, upgrading is as simple as buying the new equipment and integrating it. Upgrading a retinal implant, let alone a chip in the brain, is a much trickier proposition.

As promising as it is, the vOICe system is far from the last word in cybernetically enhanced vision. Brain implants, retinal implants, and devices that translate visual imagery to touch are all under active investigation. Thus far vOICe seems to have a clear lead by utility, by its lack of invasiveness, and even by cost, but who's to say which of the other contenders may surpass it tomorrow?

Article from Scientific Computing World:


Sound sense

*/Ray Girvan/ reports on sonification - the representation of data as sound. Well-established in applications for the visually impaired, it has far wider scientific possibilities.*

Ever since Al Jolson first spoke in /The Jazz Singer/, few have doubted that sound adds a valuable dimension to visual media. But, given the seamless integration of sound and vision in entertainment, it's perhaps odd that sound as a channel for practical data remains largely limited to end conditions: computerised bleeps, such as the 'battery low' warning on a mobile phone, that tell us only when some state has been reached. Yet there are familiar examples of sound used to monitor data: the sinister rattle of the Geiger counter (a convenient accident of the physics of the detector); the telephone speaking clock; or the audible output of a hospital ECG machine. This is data sonification.

Unsurprisingly, the possibilities for sonification have been mined most thoroughly in applications for the visually impaired. Talking digital gadgets - clocks, thermometers, barometers - are well-established hardware technology for the home, but it's less known that scientific equivalents exist. A classic example was the ULTRA system (Universal Laboratory Training and Research Aid) devised for blind science students by professors David Lunney and Robert Morrison. ULTRA, a data acquisition computer, could be interfaced to give speech readouts from laboratory instruments such as pH probes and resistance thermometers.

Location devices (ultrasonic 'sonar canes' of varying sophistication) also lead into some interesting research territory. It rather dates me that I remember Dr Leslie Kay's 'Sonic Torch' being featured on BBC TV's Tomorrow's World when I was at school. Still going strong as KASPA - Kay's Advanced Spatial Perception Aid Technology - this is the traditional sonar technology, using a bat-like frequency sweep to return detailed textural information. Another sonar device, the Sonic Pathfinder by Perceptual Alternatives, Melbourne, uses a headset with multiple transducers to give both forward and sideways detection, along with microprocessor analysis to prioritise audible warning to the most immediate hazard. Sonar, given its steep learning curve, hasn't yet achieved the popularity of low-tech approaches such as the long cane and the guide dog, but advances in computing and cognitive science may lead to equivalents that are more intuitive to users. According to Dr Robert Massof's 2003 summary paper, Auditory Assistive Devices for the Blind, many blind people perceive their environment in terms of 'auditory flow fields' - the way sound is modulated by the surroundings. Sonified spatial information based on this model could involve software-assigned virtual objects: for instance, bleeping beacons, with filtering to enhance the 'head-related transfer functions' (i.e. the effects of head and external ears) that help all hearers localise sounds.

Another approach to location-finding, image sonification, is the basis of The vOICe, developed by Dutch physicist Dr Peter Meijer (the typography is to emphasise 'Oh I See'). This works on input from a spectacle-mounted digital camera, or even that of a mobile phone. The software sweeps the image with a vertical scan line, and sonifies features in the scan by representing vertical position as pitch, horizontal position as time within the sweep, and brightness as volume. As with sonar, this takes serious learning for anything more than simple geometrical objects, although like all learning, it's partly unconscious. One wearer, after several months, reported a sudden experience of seeing - literally - depth in the kitchen sink and around her house; the possibility that neural plasticity can evoke this spontaneous synaesthesia (cross-talk between hearing and vision) is one intriguing aspect of Meijer's work.

Dr Meijer's support website, www.seeingwithsound.com, is worth visiting for its Java demonstration of The vOICe. Beyond the main application, The vOICe can be used to demonstrate various auditory effects such as Shepard tones, the illusion of an ever-ascending scale. Another feature to play with is the sonification of (x,y) function graphs: that is, sounding a tone where time is the x-axis and pitch the y-axis. You can equally do this with some mathematics packages. In Mathematica, this is done with Play[f, {t, 0, tmax}] - a direct analogue of its 2D graph-plotting function Plot[f, {x, xmin, xmax}]. Matlab has a similar construct, sound(y, Fs), where y is the vector of the function and Fs an optional parameter for sample frequency. To a blind user, however, such output isn't very informative in isolation. As you can hear with the demo of The vOICe, it's easy enough to get a qualitative impression of, say, a function being sinusoidal. But if you can't see the axes, there's no indication of the actual values.
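As a concrete illustration of the time-as-x, pitch-as-y idea, a few lines of Matlab suffice. This is only a sketch: the choice of sin(x) as the test function, the 220-660 Hz pitch range, and the two-second duration are assumptions, not anything prescribed by the packages mentioned above.

    % Sketch: sonify y = sin(x) as a pitch contour (x -> time, y -> pitch).
    Fs = 8000;                          % sample rate (assumed)
    dur = 2;                            % seconds of audio (assumed)
    n = Fs * dur;                       % number of audio samples
    x = linspace(0, 4*pi, n);           % the graph's x-axis, remapped onto time
    y = sin(x);                         % the function being 'plotted' in sound
    f = 440 + 220*y;                    % map y in [-1,1] to roughly 220..660 Hz (assumed)
    phase = 2*pi*cumsum(f)/Fs;          % integrate frequency to obtain phase
    sound(sin(phase), Fs);              % the pitch rises and falls with the graph

As the article notes, the result gives a qualitative impression of the function's shape but says nothing about the actual axis values.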

However, many development tools enable blind users to read quantitative information from sonified graphs. For instance, Joshua Miele of the Smith-Kettlewell Eye Research Institute, San Francisco, has written a Matlab braille support and sonification toolbox, SKDTools, for blind engineers and scientists (see www.ski.org/skdtools/ for more detail). The y value of a function is represented as pitch, but there's the option of a 'discrete mode', using fixed-frequency steps that can be counted by ear. Additionally, there are tones for axis ticks, noise bursts for x-axis crossings, and high-pass and low-pass noise to signify high and low out-of-range data. The Java-based Sonification Sandbox, a project of the Psychology Department, Georgia Institute of Technology, offers similar functions, but as part of a more general toolkit to map imported Excel .csv data to multiple audio parameters for export in MIDI format. For example, an auditory graph can be made more comprehensible by overlaying its pitch profile f(t) with a drumbeat with interval representing the slope f'(t) and drum pitch representing the curvature f''(t).
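The 'discrete mode' and axis-tick ideas are easy to mimic. The sketch below is not SKDTools code; the semitone step size, the 1 kHz tick tone, and the test function are assumptions chosen only to show the principle.

    % Sketch: discrete-step auditory graph with axis-tick tones (not SKDTools itself).
    Fs = 8000;
    x = linspace(0, 10, 21);                      % sample points of the graph (assumed)
    y = x.^2 / 10;                                % test function (assumed)
    semitone = 2^(1/12);
    out = [];
    for k = 1:numel(x)
        level = round(y(k));                      % quantise y to steps that can be counted by ear
        f = 220 * semitone^level;                 % each step is one semitone up (assumed)
        t = 0:1/Fs:0.12;
        out = [out, 0.8*sin(2*pi*f*t)];           % the data tone
        if mod(x(k), 2) == 0                      % an 'axis tick' every 2 units of x (assumed)
            out = [out, 0.3*sin(2*pi*1000*(0:1/Fs:0.03))];
        end
    end
    sound(out, Fs);

Listening to the stepped version, it is possible to count how many semitones the curve has climbed between ticks, which is exactly the kind of quantitative reading a smooth pitch sweep cannot give.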

*The musical approach*
Music, as in the Sonification Sandbox, is a central approach to sonic representation of data. Among computer-savvy musicians, this has a long history. In some cases, it's a form of steganography: a few years back, there was the discovery of an apparent demonic face in the spectrogram of /Windowlicker/, a track by the techno musician Aphex Twin (Richard David James). When viewed with the proper logarithmic scale, it turned out to be an intentional portrait of Aphex Twin himself. Other artists use scientific data, rather than art, as their source. Some notable examples are Life Music, a sonification of protein data by John Dunn and Mary Anne Clark; Bob L Sturm's Music from the Ocean, using records from deep water buoys in the Pacific; and Marty Quinn's various sonifications, drawing on phenomena such as the 1994 Northridge, California earthquake retold musically through his Seismic Sonata. (Dunn is an artist expert with MIDI, Clark a biologist, and Sturm and Quinn scientist-musicians with an interest in bridging the gap between arts and sciences.)


But musical sonification, particularly of derived data, is a powerful concept outside art. We're used to hearing subtle distinctions in multi-instrument arrangements, and this is potentially a means to access equally subtle hidden characteristics in data sets. A good example is the Penn State University heart rate sonification project <scwjanfeb05heart_sounds.html> reported by Felix Grant in the previous issue of /Scientific Computing World/. Using suitable assignments to MIDI channels of derived data such as running means of inter-beat interval, this project attempts to replace the laborious reading of ECG traces with quickly accessible diagnostic 'music': a siren-like oscillation for sleep apnoea, a 'tinkling' timbre for the abnormal intervals of ectopic beats, and so on. An arts-science crossover runs right up to the most powerful computational efforts in this field. For instance, the Cray-powered 'Rolls Royce bulldozer' of sound synthesis, DIASS (Digital Instrument for Additive Sound Synthesis) finds twin roles as a precision tool for the sonification of scientific data and 'the most flexible instrument currently available to composers of experimental music'.
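As a toy illustration of sonifying derived cardiac data, the sketch below maps a running mean of inter-beat intervals to pitch. It is not the Penn State project's code (which assigns derived data to MIDI channels); the synthetic beat times, the five-beat window, and the pitch mapping are all assumptions.

    % Toy sketch: running mean of inter-beat intervals driving a pitch contour.
    Fs = 8000;
    beatTimes = cumsum(0.8 + 0.05*randn(1, 60));   % synthetic beat times, roughly 75 bpm (assumed)
    ibi = diff(beatTimes);                         % inter-beat intervals in seconds
    runMean = conv(ibi, ones(1, 5)/5, 'same');     % 5-beat running mean (assumed window)
    out = [];
    for k = 1:numel(runMean)
        % Shorter mean interval (faster heart rate) -> higher pitch (assumed mapping).
        f = 200 + 600*(1 - runMean(k));
        t = 0:1/Fs:0.1;
        out = [out, 0.7*sin(2*pi*f*t)];
    end
    sound(out, Fs);

With real ECG-derived beat times substituted for the synthetic ones, departures from a steady rhythm become audible as a wavering pitch line, which is the general idea behind the diagnostic 'music' described above.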

It shouldn't be imagined that sonification involves merely passive conversion of data to sound. The Neuroinformatics Group, Bielefeld University, is one of a number of teams exploring active multidimensional data mining by sound. The more complex techniques include Data Sonograms and Particle Trajectory Sonification. The former is analogous to a seismic charge that propagates and excites data points to make sounds, the latter to shooting a whooshing 'comet' into the data set and listening to how its path is affected by what it encounters.
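The 'comet' idea can be caricatured in a few lines. The following is emphatically not the Bielefeld group's method; the random data set, the softened inverse-square attraction, and the speed-to-pitch mapping are assumptions that merely echo the description above.

    % Toy sketch of particle-trajectory sonification: a particle is shot into a
    % 2-D data set, attracted by the data points, and its speed drives the pitch.
    Fs = 22050;
    data = randn(50, 2);                  % the data set (assumed)
    pos = [-4, 0];  vel = [2, 0];         % launch position and velocity (assumed)
    dt = 0.01;  nSteps = 400;
    speed = zeros(1, nSteps);
    for k = 1:nSteps
        d = data - repmat(pos, size(data, 1), 1);   % vectors from the particle to each point
        r3 = (sqrt(sum(d.^2, 2)) + 0.5).^3;         % softened distances, to avoid blow-ups
        acc = sum(d ./ [r3, r3], 1);                % crude inverse-square attraction
        vel = vel + dt*acc;
        pos = pos + dt*vel;
        speed(k) = norm(vel);
    end
    f = min(200 + 100*speed, 4000);                             % faster -> higher pitch (assumed)
    fAudio = interp1(1:nSteps, f, linspace(1, nSteps, 2*Fs));   % stretch to two seconds of audio
    sound(sin(2*pi*cumsum(fAudio)/Fs), Fs);

Clusters in the data bend and accelerate the 'comet', so differently structured data sets produce audibly different whooshes.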

At its science-fictional extreme, sonification leads to the idea of human augmentation. In 1998, the Associated Press news agency reported on 'the borg', a group of MIT graduate students, headed by Leonard (Lenny) Foner, pioneering wearable computers. Foner's particular invention is the Visor, a system that sonifies radiation detected by a head-mounted Zeiss MMS-1 spectrometer with range 350-1150nm (human vision is 400-700nm). The Visor has utilitarian possibilities, such as melanoma screening and seeing through camouflage. Nevertheless, its main use is sensory extension: a novel and intuitive means to extract more information from the ordinary ('Hey, my lawn looks okay, but it sounds funny today - maybe it's sick'). In both intent and implementation, the Visor has a great deal in common with Eye-Borg.

Sonification, it seems, remains the eccentric artistic relation in the family of data display methods. Given sound's importance in human culture, it's hard to say why. Perhaps vision is, ultimately, our dominant sense, and the dominance of visual displays merely reflects that. Or perhaps there's a subtle selection bias that leads to interfaces being designed mostly by 'techies' whose strongest card is visuo-spatial ability. I don't know; but the possibilities of sound are there to be explored. As Al Jolson promised, you ain't heard nothin' yet.

*References*

The International Community for Auditory Display www.icad.org/ <http://www.icad.org/>
Its /The Sonification Report: Status of the Field and Research Agenda/, although written in 1997, gives a good overview of the field, and the online proceedings of the annual International Conference on Auditory Display contain a wealth of papers on data sonification.


Design Rhythmics Sonification Research Lab www.quinnarts.com/srl/index.html <http://www.quinnarts.com/srl/index.html>

Bob L Sturm www.composerscientist.com/ <http://www.composerscientist.com/>

------------------------------------------------------------------------


Eye-Borg: Sonification of colour

Late in 2004, the regional media in Devon, UK, covered the story of Neil Harbisson, a student at Dartington College of Arts who has no colour vision due to the disorder achromatopsia. He asked Adam Montandon, a lecturer with an interest in cyborg technology, if a device could be created to enable him to perceive colour. The result of their collaboration was Eye-Borg, a sonification system that uses a laptop PC and a camera-earpiece headset to convert hue to musical pitch. Developed in association with the Plymouth company HMC Entertainment Systems, Eye-Borg has since won the Europrix Top Talent media award. Neil found its use came so naturally to him that he wears it all day, and has begun painting in colour. Having obtained unprecedented dispensation to wear it for his passport photo, he is, in a sense, officially a cyborg.

A cynic might write this off as a quirky regional invention story, as I did at first. Hand-held colour identifiers for the blind or visually impaired have been around for a decade or so. For instance, CareTec's ColorTest won the Winston Gordon Award from the Canadian National Institute for the Blind in 1993, and there are at least half-a-dozen others such as the Brytech Color Teller and the Cobolt Talking Colour Detector. However, the unusual technical concept interested me.

Unlike the more common contact devices, which use an LED reflection system, the camera-based Eye-Borg can sense the colour of remote objects. Commercial colour identifiers use a speech chip interface, but instead Eye-Borg maps the visible spectrum to the twelve semitones of a musical octave. In part, this caters to Neil's own preference as a musician. But it also has roots in historical schemes for mapping colour to pitch, such as those of Arcimboldo (better-known for his composite portraits of people made of natural objects such as fish and fruit), Kepler, Newton and Helmholtz. Furthermore, Eye-Borg's interface has encouraged its integration with Neil's vision as an intuitive 'extra sense', a recurring concept in the field of sonification even outside the niche market for visually impaired users.
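As an illustration of the mapping, a hue angle can be folded onto twelve semitones in a couple of lines of Matlab. This is not the Eye-Borg software; the base note of middle C, the octave placement, and the use of a 0-360 degree hue angle rather than a measured wavelength are assumptions for illustration.

    % Sketch: map a hue angle (0-360 degrees) onto the twelve semitones of one octave.
    hue = 150;                                  % e.g. a greenish hue, in degrees (assumed input)
    semitoneIndex = mod(floor(hue/30), 12);     % 360 degrees / 12 semitones = 30 degrees each
    f = 261.63 * 2^(semitoneIndex/12);          % semitones above middle C (assumed base note)
    Fs = 8000;
    t = 0:1/Fs:0.5;
    sound(sin(2*pi*f*t), Fs);                   % play the note for that colour

Fed with the dominant hue from a camera frame, the same mapping turns a remote object's colour into a note.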




Further Reading:
Seeing with Sound <http://www.seeingwithsound.com/>
Sound Sense <http://www.scientific-computing.com/scwmarapr05sonification.html>


Related Articles:
Bionic Eyes <http://www.damninteresting.com/?p=193>
Do You See What I Hear? <http://www.damninteresting.com/?p=450>
