While reading “Essays Dedicated to the 50th Anniversary of Artificial Intelligence” I stumbled upon an article about sensory substitution, a very interesting idea. The basic claim: it doesn't matter which sensors (eyes, ears, etc.) the information comes from; only the structure of the input data matters. For example, if the brain receives 2-dimensional information, it will construct 2-dimensional visual perceptions.
To try it out I downloaded a simple program called vOICe, intended for blind people, which converts visual data from a webcam into sound waves ("soundscapes"). Each image is scanned from left to right once per second; brightness corresponds to loudness and the Y-axis corresponds to pitch (e.g. a bright object in the lower part of the "vision" field will correspond to a loud, low sound).
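Here is a rough Python sketch of how such an encoding might work, reconstructed purely from the description above (the frequency range and other parameters are my own guesses; the real vOICe implementation is certainly more sophisticated):

# A minimal sketch of a vOICe-style image-to-sound encoding (my own
# reconstruction, not the actual vOICe code). Each column of a grayscale
# image becomes a short audio frame: pixel row maps to sine-wave pitch
# (lower rows = lower frequencies), pixel brightness maps to that sine
# wave's loudness. Columns play back left to right over ~1 second.
import numpy as np

def image_to_soundscape(img, sample_rate=44100, scan_time=1.0,
                        f_low=500.0, f_high=5000.0):
    """img: 2D array, shape (rows, cols), values in [0, 1], row 0 = top."""
    rows, cols = img.shape
    # Exponential frequency spacing: bottom row -> f_low, top row -> f_high.
    freqs = f_low * (f_high / f_low) ** (np.arange(rows)[::-1] / (rows - 1))
    samples_per_col = int(sample_rate * scan_time / cols)
    t = np.arange(samples_per_col) / sample_rate
    out = []
    for c in range(cols):
        # Superpose one sinusoid per row, weighted by pixel brightness.
        frame = np.zeros(samples_per_col)
        for r in range(rows):
            if img[r, c] > 0:
                frame += img[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        frame /= rows  # crude normalization to avoid clipping
        out.append(frame)
    return np.concatenate(out)

# Example: a single bright dot near the lower-left corner should produce
# a loud low tone at the start of the sweep, then near-silence.
img = np.zeros((16, 16))
img[13, 1] = 1.0
audio = image_to_soundscape(img)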
I spent around an hour today, blindfolded, with the webcam on my bicycle helmet, trying to figure out what I "see". It was already dark outside, so I switched the light on and off to get more contrast. Distinguishing light from darkness was quite easy: darkness was generally a quiet sound with some small clicks, while brightness was much louder, like white noise on a TV. Seeing details was much more difficult (if not impossible yet); for example, I spent quite a long time trying to "see" my doorway, turning my head in all directions and trying to hear meaningful changes in the soundscape.
More about this later...