With the rise of speech- and audio-driven automatic systems (e.g., speech recognition, speech-to-text), the limitations of these technologies have also become apparent. Automatic speech recognition fails when acoustic conditions are sub-optimal due to background noise, and applications are not tailored to the pathologies of the individuals interfacing with them (e.g., speech or hearing impairments). To broaden the application range and accessibility of automatic technologies, this project draws on expert knowledge of sound processing in the human auditory system. First, humans can understand speech at negative signal-to-noise ratios, so deriving auditory features from signal processing models of the human auditory pathway can make the back-end processing of automatic systems more robust against high levels of background noise. Second, auditory models can simulate individual degrees of sensorineural hearing loss in great detail and can be used to predict the speech intelligibility performance of individuals interacting with audio technologies. Taken together, we follow a bio-inspired approach to machine learning and automatic speech recognition that improves state-of-the-art systems using methods that do not require large amounts of training data, but instead draw on knowledge of the signal processing performed by the human auditory system. The economic value and application range of our technologies are substantial, given that many older people suffer from hearing impairment.