Traditional hearing aids rely on a single sound direction for noise reduction, but both conversation and noise come from many directions.
Machine learning in today's hearing aids works by enhancing speech and suppressing non-speech, but this breaks down in common situations like a noisy cafe, where the background noise is itself speech.
AudioFocus reimagines the hearing aid — so you can follow conversations wherever you go. Our patented technology uses a proprietary machine learning algorithm that amplifies only nearby voices by analyzing echo statistics, just as our brains do.
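The intuition behind echo statistics is that a nearby talker reaches the ear mostly via the direct sound path, while a distant talker's sound is dominated by room reverberation. The sketch below is a simplified illustration of that cue — not AudioFocus's actual algorithm — using toy synthetic impulse responses and a direct-to-reverberant energy ratio; the function name, sample rate, and decay constants are all assumptions for the example.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, sr=16000, early_ms=50):
    # Energy arriving within the first ~50 ms is treated as direct
    # sound; everything after is treated as reverberation.
    split = int(sr * early_ms / 1000)
    early = np.sum(ir[:split] ** 2)
    late = np.sum(ir[split:] ** 2) + 1e-12
    return 10 * np.log10(early / late)

sr = 16000
t = np.arange(sr) / sr
tail = np.exp(-t / 0.3)          # decaying reverberant tail (toy model)

near_ir = np.zeros(sr)
near_ir[0] = 1.0                 # strong direct path...
near_ir += 0.02 * tail           # ...weak reverberant tail

far_ir = np.zeros(sr)
far_ir[0] = 0.2                  # weak direct path...
far_ir += 0.2 * tail             # ...strong reverberant tail

print(direct_to_reverberant_ratio(near_ir))  # nearby talker: higher ratio
print(direct_to_reverberant_ratio(far_ir))   # distant talker: lower ratio
```

A system that keys amplification to a statistic like this boosts close voices while leaving distant, reverberant chatter alone.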
Use ML to automatically detect which voices matter when many are present.
Design a microphone array that maximizes sound clarity and noise reduction.
Utilize low-power AI processors that allow for a discreet and ergonomic design.
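One way a multi-microphone array buys clarity is beamforming: delaying and summing the channels so sound from the target direction adds coherently while independent noise averages out. Below is a minimal delay-and-sum sketch; the array geometry, sample rate, and function names are illustrative assumptions, not our production design.

```python
import numpy as np

def delay_and_sum(mics, sr, spacing_m, angle_deg, c=343.0):
    # Delay each channel of a uniform linear array so a plane wave
    # arriving from angle_deg lines up across microphones, then average.
    n_mics, n_samples = mics.shape
    delays = spacing_m * np.arange(n_mics) * np.sin(np.radians(angle_deg)) / c
    out = np.zeros(n_samples)
    for ch, d in zip(mics, delays):
        out += np.roll(ch, -int(round(d * sr)))
    return out / n_mics

# Toy scene: a talker straight ahead (identical on all mics) buried in
# independent sensor noise on each of 4 microphones.
rng = np.random.default_rng(0)
sr, n = 16000, 16000
signal = np.sin(2 * np.pi * 440 * np.arange(n) / sr)
mics = signal + 0.5 * rng.standard_normal((4, n))

out = delay_and_sum(mics, sr, spacing_m=0.01, angle_deg=0.0)
# Averaging 4 channels cuts independent noise power roughly 4x.
print(np.var(mics[0] - signal), np.var(out - signal))
```

With four microphones, the residual noise power in the beamformed output is roughly a quarter of any single channel's — the clarity gain that motivates the array design above.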
Every aspect of the AudioFocus will serve unmet needs around hearing loss.
Clinically, a person might have ‘normal’ hearing yet still be unable to hear in background noise. That person deserves care too.
Accessibility tech should offer easily accessible controls that let users tune their experience, whether that is volume, the degree of noise reduction, or something else.
Our data augmentation technology lets us help people hear in a wide variety of situations without harvesting data from them.
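As one simple illustration of the idea — not our actual pipeline — a small clean-speech corpus can be multiplied into many noisy training examples by mixing in noise at controlled signal-to-noise ratios, so no user recordings are needed. The function name and signal choices below are assumptions for the sketch.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    # Scale the noise so the mixture hits the requested SNR, then add it
    # to the clean signal: one noisy training example per (noise, SNR).
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise

rng = np.random.default_rng(0)
sr, n = 16000, 16000
clean = np.sin(2 * np.pi * 220 * np.arange(n) / sr)   # stand-in for speech
noise = rng.standard_normal(n)                        # stand-in for cafe noise

# One clean clip becomes four training examples at different SNRs.
examples = [mix_at_snr(clean, noise, snr) for snr in (-5, 0, 5, 10)]
```

Varying the noise type, SNR, and (in a fuller pipeline) simulated room reverberation covers the range of listening situations a model must handle.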