- Natural sciences
  - Knowledge representation and reasoning
  - Artificial intelligence not elsewhere classified
- Engineering and technology
  - Audio and speech computing
Inspired by neuroscience and biology, the SoundStreams project proposes to combine the most promising elements of models of auditory perception and learning to address the challenge of creating artificial intelligence that learns from exposure to a continuous sound stream. In contrast to most artificial intelligence systems, the proposed model will use internal representations that accurately account for the passage of time; it will learn only what is relevant, steered by attention; learn just enough, ignoring what it could not predict at all; and combine and consolidate episodic and semantic memory depending on a general activation state. Compared to popular deep artificial neural network architectures, the model is expected to be more robust against catastrophic forgetting, to transfer better between contexts and tasks, and to be largely explainable. The project foresees validation on extensive datasets, demonstrating the model's performance on classical metrics, but it will also assess the model's biological plausibility by comparing its behavior to human experimental data available within the research group. The project's outcomes will find applications in smart-city sensor networks, context awareness in robots, and human-machine interaction.
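The idea of learning "just enough", gated by attention and prediction error, could be sketched as a toy update rule. This is purely illustrative and not the project's actual algorithm: a running predictor that updates only when its surprise (absolute prediction error) is informative, ignoring both fully predictable input and input it could not predict at all; the function name and thresholds are invented for the sketch.

```python
def gated_update(pred, x, lr=0.2, low=0.05, high=2.0):
    """Update a running prediction of the next stream value only when
    the surprise is informative: above `low` (not trivially predictable)
    and below `high` (not entirely unpredictable).
    All names and thresholds here are illustrative assumptions."""
    err = abs(x - pred)
    if low < err < high:           # attention gate: learn just enough
        pred += lr * (x - pred)    # standard delta-rule step
    return pred

# Feed a short stream: a stable value with one unpredictable outlier.
pred = 0.0
stream = [1.0, 1.0, 1.0, 50.0, 1.0]   # 50.0 is ignored as pure surprise
for x in stream:
    pred = gated_update(pred, x)
```

In this sketch the outlier 50.0 leaves the prediction untouched, while repeated exposure to 1.0 moves it gradually toward that value, loosely mirroring the robustness-to-noise behavior the abstract describes.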