- Engineering and technology
- Signal processing
- Biomedical signal processing
- Audio and speech computing
With the rise of speech- and audio-driven automatic systems (e.g., speech recognition, robotics), the limitations of these technologies have also become apparent. Automatic speech recognition fails when acoustic conditions are sub-optimal due to background noise, and applications are not tailored to the pathologies of the individuals interfacing with them (e.g., hearing impairment). To broaden the application range and accessibility of automatic technologies, this project draws on expert knowledge of sound processing in the human auditory system. First, humans can understand speech at negative signal-to-noise ratios; deriving auditory features that mimic processing along the human auditory pathway can therefore make the back-end processing of automatic systems more robust in challenging acoustic scenarios. Second, auditory models can simulate individual degrees of sensorineural hearing loss and can be combined with machine-hearing back-ends to individualize predictions (e.g., automatic prediction of speech intelligibility performance). Taken together, we follow a bio-inspired machine-learning approach that improves state-of-the-art systems not by relying on large amounts of training data, but by exploiting knowledge of the processing performed by the human auditory system. The project outcomes will yield more robust audio-driven machine-hearing systems: machine hearing 2.0.
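
The abstract does not specify a particular front-end, but the two ideas it names (auditory features mimicking the auditory pathway, and simulated sensorineural hearing loss) can be illustrated together. The sketch below is a minimal, hypothetical example, not the project's actual code: it filters audio through a gammatone filterbank (a common approximation of cochlear frequency analysis, with ERB bandwidths after Glasberg & Moore, 1990) and optionally attenuates channels according to an assumed audiogram as a crude stand-in for a full hearing-loss model. All function names and parameter values are illustrative assumptions.

```python
# Hypothetical bio-inspired front-end sketch; all parameters are assumptions.
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth in Hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """Impulse response of a 4th-order gammatone filter centred at fc."""
    t = np.arange(int(duration * fs)) / fs
    return (t ** (order - 1)
            * np.exp(-2 * np.pi * b * erb(fc) * t)
            * np.cos(2 * np.pi * fc * t))

def auditory_features(signal, fs, n_channels=32, f_lo=80.0, f_hi=8000.0,
                      audiogram_db=None):
    """Log envelope energy per gammatone channel.

    audiogram_db: optional per-channel hearing loss in dB, applied as a
    simple linear attenuation (a crude stand-in for a full loss model).
    """
    # Centre frequencies equally spaced on the ERB-rate scale.
    erb_lo = 21.4 * np.log10(4.37e-3 * f_lo + 1.0)
    erb_hi = 21.4 * np.log10(4.37e-3 * f_hi + 1.0)
    fcs = (10 ** (np.linspace(erb_lo, erb_hi, n_channels) / 21.4) - 1.0) / 4.37e-3

    feats = []
    for i, fc in enumerate(fcs):
        out = np.convolve(signal, gammatone_ir(fc, fs), mode="same")
        if audiogram_db is not None:
            out *= 10 ** (-audiogram_db[i] / 20.0)  # attenuate impaired channel
        # Half-wave rectification + power-law compression: crude hair-cell model.
        env = np.maximum(out, 0.0) ** 0.3
        feats.append(np.log(np.mean(env ** 2) + 1e-12))
    return np.array(feats)

fs = 16000
noisy = np.random.randn(fs)  # stand-in for one second of a noisy utterance
normal_hearing = auditory_features(noisy, fs)
impaired = auditory_features(noisy, fs, audiogram_db=np.linspace(0, 40, 32))
```

Such a feature vector could feed a machine-hearing back-end in place of a conventional spectrogram; comparing the `normal_hearing` and `impaired` outputs hints at how individualized intelligibility predictions might be derived, though the project's actual models are likely far more detailed.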