AudioFocus designs and manufactures hearing aid technology that lets people hear what matters most, even in noisy places.
We’ve talked to countless patients and audiologists from all over the country about the challenge of noisy places, and three things are clear:
Noise comes from speech too [1]
Interfering voices are often just as loud as the one you want to hear [2]
Desired speech can come from many angles [3]
How we determine which sounds are important and which ones are not
Spectral cues tell us whether a sound is a voice. They can’t distinguish important voices from irrelevant ones.
Intensity cues tell us which voice is loudest. They can’t work when the noise is as loud as the desired speech.
Directional cues tell us which sound is in front. They can’t work if there are multiple speakers or if you turn your head.
One cue that’s been ignored in research is the distance cue, which relies on echo statistics [4]. The basic idea is to measure the volume difference between the direct sound and its echoes: a nearby voice arrives mostly as direct sound, while a distant one arrives mostly as reflections. Our algorithm uses this cue to determine which voices are important and which ones are not.
So where do we find training data full of real-world echoes? Trick question. It doesn’t exist anywhere.
To solve this, we created a ray-tracing auralization framework called Floyd. Floyd takes a 3D model of a room and recordings made at patients’ heads, and simulates millions of echoes.
For each room we generate two versions: one loud and noisy, and one quiet. Then we train our model to map the noisy version to the quiet one.
We simulate tens of thousands of sound reflections per echo to make sure the results are indistinguishable from real-world echoes. By leveraging dozens of GPUs across the Amazon cloud, we can do this at scale.
We looked for existing hardware that could support our breakthrough software, but every option forced limiting trade-offs in wearability, battery life, and more. So we decided to build it ourselves.
By using a novel microphone array design, we can capture echo signatures from multiple angles to maximize the benefit for our patients.
We’re partnering with designers of custom ASIC processors to deliver this technology to you in a wearable form factor. These chips can run deep learning models on single-digit milliwatts of power.