Introducing SoundSight Training

SoundSight Training will help blind people hone their listening and become more aware of the space around them.

Developing technology for blind people looked genuinely challenging.

At first we tried to develop something that could sense and reconstruct reality for them, a cane on steroids of sorts.

But as we grew closer and became friends with them, our vision started to change: we were increasingly exposed to their small habits, their stories, their successes and frustrations.

And we realized that they didn’t need a technology that would mediate their perception of the environment; what they really needed were ways to interact with their surroundings as naturally as possible.

But have I introduced us yet? We are @IreneLanza, @markomanka and Henrik Kjeldesen, and it’s our pleasure and pride to present SoundSight Training: an educational tool, as the name suggests, to develop and hone humankind’s innate ability to explore one’s surroundings beyond the use of sight.

In boring technical detail, it is an acoustic virtual reality that simulates in real time the diffusion, reflection and distortion of user-emitted sounds in different environments. It offers small variations so the user learns to pick up the resulting modulation of sound features, in a trial-and-error learning process supported by appropriate feedback about the results.
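To make the core idea concrete, here is a minimal, illustrative sketch of the physics behind such training: a single reflection off a wall produces an echo whose delay encodes the wall’s distance (round trip at the speed of sound, ~343 m/s in air). This is my own toy example, not the SoundSight implementation; the function name and parameters are invented for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def simulate_echo(signal, sample_rate, wall_distance, attenuation=0.5):
    """Mix the input signal with one delayed, attenuated echo.

    Models a single reflection off a wall `wall_distance` metres away:
    the sound travels to the wall and back, so the round-trip delay is
    2 * wall_distance / SPEED_OF_SOUND seconds.
    """
    delay_samples = int(round(2 * wall_distance / SPEED_OF_SOUND * sample_rate))
    out = np.zeros(len(signal) + delay_samples)
    out[:len(signal)] += signal                                         # direct sound
    out[delay_samples:delay_samples + len(signal)] += attenuation * signal  # echo
    return out

# A 5 ms click at 44.1 kHz, reflecting off a wall 2 m away:
rate = 44100
click = np.zeros(int(0.005 * rate))
click[0] = 1.0
echoed = simulate_echo(click, rate, wall_distance=2.0)
```

A trainee’s task, in essence, is to learn to hear that ~11.7 ms round-trip delay (and its variations) and translate it into “the wall is about 2 metres away”.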

Our mission is to make this tool as accessible as possible, and open source, so it can be owned and tinkered with by its community. For this reason we are slowly and organically onboarding hackers and blind people alike, like our friends Cecilia, Luca and Matteo.

True to this vision, we have turned to crowdfunding as our fundraising strategy of choice, and we are trying to leverage the Kickstarter campaign and press attention to mobilise people and let them experience holding a stake in the success of this adventure.

We aim to enable millions of blind people to train their hearing, and to learn how to echolocate and navigate living environments.

So, down to the important question: would you like to help us? :wink:


Fingers crossed

Well done Marco, @Irene_Lanza & Henrik. And welcome on board to Irene, new Edgeryder.

I’ve only heard of teaching echolocation “manually” via examples like Daniel Kish's (“Batman”) school for children. And through handheld devices and more recently a mobile app (?), but those are mediating the environment, as you rightly point out. Kudos for the open approach, and thinking about community members who might be interested in this.

Maybe Alison Smith from Pesky People…? hm.

Was too early at the time, but maybe shoot for a fellowship?

Hi Laura,

I had managed to miss this as we were in deep preparation mode ahead of launching OpenCare. Are you up for trying something else? I have two suggestions:

  1. Repost the material from the kickstarter campaign in your post above (if you like I can do it for you). This will make it more appealing for an Op3n fellowship

  2. I am just about to start pursuing a number of fundraising avenues, including fellowships, to support my own experimental work. We could collaborate on this, if you like?

Can you make it to one of the OPENandChange workshops?

We’re building the collective bid for the MacArthur Foundation’s 100 million dollar grant with peers in several countries, and I think you should be in it.