Sunday, February 7, 2010

vOICe, day 3

Third training session with vOICe, today using artificial lighting. I can distinguish vertical bright lines and whether these lines occupy the upper or lower part of the field of vision; e.g., I can tell that there is a doorway (a black rectangle with bright left, top, and right borders). I can also tell when I'm looking at a cluttered scene, since it produces melodic tones; e.g., my bookshelf has a very distinctive la-la-la sound. Of course, it helps that I already know what I'm looking at; without this knowledge of my surroundings it would be much harder to tell what I "see". By the end of a session (~1 hour) I usually feel quite tired of all these sounds; it seems the brain needs rest to absorb new information.
Conclusion so far: it is harder than I thought. Nothing like "mental images" has appeared yet; I guess they come only as a result of intensive training.

Saturday, February 6, 2010

vOICe, day 2

Another hour with vOICe, today in daylight. That meant the background was generally bright (and loud), making it harder to distinguish what I "see". Shadows may have added further difficulty. For example, I was not able to distinguish doorways, because almost everything around them was bright. Nevertheless, I tried turning around a few times to disorient myself and was able to establish my direction more or less precisely. It was easier in controlled settings; for example, I sat in front of a bright wall and looked at my laptop (which is black), slowly raising or lowering it and listening to the changes in sound. Generally the X-axis is quite easy; it is much harder to get a grasp of the Y-axis, which is translated into pitch. If the scene is simple, e.g. a horizontal black laptop against a bright background, the pitch is constant along the X-axis (time), but if the laptop is turned 45 degrees, the sound sweeps from lower to higher pitch and becomes much more difficult to understand. Real cluttered scenes still seem completely meaningless. Another thing I tried was sitting in a dark room and slowly opening the door, which gave a bright line on a dark background with a very distinguishable sound. It was also possible to hear diagonal lines quite precisely.
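The horizontal-versus-diagonal effect follows directly from the mapping. Here is a toy sketch (not the program's actual code; it assumes, per the program's description, that height maps to pitch and scan position to time, and the frequency values are made up) showing the pitch of the brightest pixel in each scanned column:

```python
import numpy as np

rows, cols = 8, 8
freqs = np.geomspace(2000.0, 200.0, rows)  # top row = high pitch, bottom = low

# A horizontal bright line vs. a line rising diagonally left-to-right
horizontal = np.zeros((rows, cols))
horizontal[4, :] = 1.0
diagonal = np.eye(rows)[::-1]  # bright pixel climbs one row per column

def pitch_track(img):
    """Frequency of the brightest pixel in each scanned column."""
    return [freqs[img[:, c].argmax()] for c in range(img.shape[1])]

pitch_track(horizontal)  # same frequency in every column: a steady tone
pitch_track(diagonal)    # frequency grows column by column: a rising sweep
```

The horizontal line yields one constant frequency across the whole scan, while the diagonal yields a strictly rising sequence, which is exactly the rising sweep I heard from the tilted laptop.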

I also changed the setup a little: instead of fixing the webcam on a bicycle helmet, I fixed it directly on my forehead, which gave a much better sense of where I am looking.

Conclusion: I'm starting to get used to it, but still nothing like "mental vision", although at some moments I had the feeling that a high sound in my left ear corresponded to "something" in the upper left. And after taking off the blindfold glasses the world seems so full of colors, so bright :)

Friday, February 5, 2010

vOICe, first impressions

While reading “Essays Dedicated to the 50th Anniversary of Artificial Intelligence” I stumbled upon an article about sensory substitution, a very interesting idea. The basic claim: it doesn't matter which sensors (eyes, ears, etc.) the information comes from; only the structure of the input data matters. E.g., if the brain gets two-dimensional information, it will construct two-dimensional visual perceptions.

To try it out I downloaded a simple program called vOICe, intended for blind people, which converts visual data from a webcam into sound waves (soundscapes). Each image is scanned from left to right once every second; brightness corresponds to loudness, and the Y-axis corresponds to pitch (e.g. a bright object in the lower part of the "vision" field corresponds to a loud, low sound).
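The mapping can be sketched in a few lines of Python. This is a toy version of the idea, not vOICe's actual implementation: the frequency range, log spacing, and 8 kHz sample rate are my own assumptions.

```python
import numpy as np

def soundscape(image, scan_time=1.0, sr=8000, f_lo=200.0, f_hi=2000.0):
    """Toy left-to-right scan: each column becomes a short audio frame
    where every bright pixel adds a sine whose pitch rises with height."""
    rows, cols = image.shape
    samples_per_col = int(scan_time * sr / cols)
    t = np.arange(samples_per_col) / sr
    freqs = np.geomspace(f_hi, f_lo, rows)  # top row -> highest frequency
    frames = []
    for c in range(cols):
        column = image[:, c].astype(float)  # brightness -> loudness
        frame = (column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        frames.append(frame)
    wave = np.concatenate(frames)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave

# A single bright dot in the lower-left corner of a dark 8x8 "image":
img = np.zeros((8, 8))
img[7, 0] = 1.0
audio = soundscape(img)  # a loud low tone at the start of the scan, then silence
```

Feeding the resulting array to any audio output would give the one-second sweep the program produces: the bright low-left dot sounds only at the beginning of the scan, at the lowest pitch.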

I spent around an hour today, blindfolded, with the webcam on my bicycle helmet, trying to figure out what I "see". It was already dark outside, so I switched the light on and off to get more contrast. Distinguishing light from darkness was quite easy: darkness was a generally silent sound with some small clicks, while brightness was much louder, like white noise on a TV. Seeing details was much more difficult (if not impossible so far); for example, I spent quite a long time trying to "see" my doorway, turning my head in all directions and listening for meaningful changes in the soundscape.

More about this later...