Run-time adaptive hierarchical neural architectures for representation learning in modular sensor networks

01 October 2017 → 30 September 2021
Research Foundation - Flanders (FWO)
Keywords: neural architecture
Project description

The human brain excels at efficiently processing multiple correlated information streams. It continuously receives massive streams of sequential data from our hearing, vision, touch and other senses. Nevertheless, it can quickly process and learn from these huge amounts of data, because it abstracts them into a more compact representation and because it optimally exploits the correlations that exist between information from its different senses. It is also very adaptive: the brain continuously rewires itself and can learn to cope with sudden and lasting changes.

Unfortunately, artificial neural networks still cannot match the brain's performance when dealing with multi-sensory information. Current approaches typically focus on specific combinations of sensors and do not offer a generally applicable solution. They process the different sensor streams in isolation and only combine them at the highest levels of abstraction, whereas the human brain appears to search for correlations already at early processing stages. Likewise, other recent observations from neuroscience involving dynamic adaptation remain unexplored for applications in neural networks.
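To make the contrast between the two fusion strategies concrete, here is a minimal, purely illustrative sketch in Python with NumPy. The stream names, dimensions, and the random-projection "encoder" are all assumptions for illustration, not the project's actual architecture: late fusion abstracts each stream separately and only concatenates the high-level representations, while early fusion concatenates the raw streams first, so every processing stage can see cross-stream correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical sensor streams (e.g. audio- and vision-like features);
# shapes are illustrative only: 8 time steps each.
audio = rng.normal(size=(8, 4))   # 4 features per step
vision = rng.normal(size=(8, 6))  # 6 features per step

def encode(x, out_dim, seed):
    """Stand-in for a learned per-stream encoder: a fixed random
    projection followed by a tanh nonlinearity (illustrative only)."""
    w = np.random.default_rng(seed).normal(size=(x.shape[1], out_dim))
    return np.tanh(x @ w)

# Late fusion: each stream is abstracted in isolation; the high-level
# representations are only combined at the end.
late = np.concatenate([encode(audio, 3, seed=1),
                       encode(vision, 3, seed=2)], axis=1)

# Early fusion: the raw streams are concatenated first, so correlations
# between streams are visible from the earliest processing stage.
early = encode(np.concatenate([audio, vision], axis=1), 6, seed=3)

print(late.shape, early.shape)  # both give a 6-dim representation per step
```

Both variants end up with a representation of the same size here; the difference is purely at which stage the streams meet, which is exactly the design axis the paragraph above describes.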

In my research, I will therefore draw inspiration from these insights to create a generic deep architecture that can efficiently integrate information streams from multiple noisy sensors, while adapting to changes in the sensors or the environment.