- Medical and health sciences
- Speech, language and hearing sciences not elsewhere classified
As the hearing loss epidemic grows due to an aging population and increasing exposure to urban and recreational noise, the demand for accessible, personalized audio solutions has become more urgent. At the same time, the limitations of speech- and audio-driven systems, such as speech recognition and robotics, are becoming clear. These systems struggle in noisy environments, and consumer electronics for speech and music lack adaptation to individual hearing impairments. To address these challenges, the InSilicoEars project aims to transform hearing loss diagnostics, treatments, and machine-learning-based audio applications by leveraging the unique properties of human auditory processing.
InSilicoEars integrates auditory neuroscience with advanced auditory processing models and machine-learning techniques to create a biophysically realistic in-silico auditory framework. This system incorporates neural stochasticity and auditory feedback mechanisms to simulate the complexities of human hearing and its impairments. A key innovation converts these biophysical models into differentiable, neural-network-based, low-latency alternatives capable of seamless operation in closed-loop systems. The resulting framework enables the development of noise-robust, human-like, real-time audio-processing methods tailored for music and speech, while also compensating for early signs of hearing damage. Diagnostic markers for early-onset hearing loss and personalized audio-processing solutions will be experimentally validated with human test subjects.
By bridging auditory neuroscience and machine learning, InSilicoEars advances the next generation of personalized hearables and audio technologies. These breakthroughs will enhance the accessibility and performance of audio applications, creating solutions customized to individual needs while expanding the capabilities of consumer electronics in complex, noisy environments.