Bayesian inference applied to reservoir computing for the modelling and control of dynamical systems

01 October 2012 → 30 September 2014
Regional and community funding: Special Research Fund, Research Foundation - Flanders (FWO)
Research disciplines
  • Natural sciences
    • Applied mathematics in specific fields
    • Computer architecture and networks
    • Distributed computing
    • Information sciences
    • Information systems
    • Programming languages
    • Scientific computing
    • Theoretical computer science
    • Visual computing
    • Other information and computing sciences
  • Engineering and technology
    • Modelling
    • Multimedia processing
reservoir computing, modelling
Project description

Machine Learning (ML) is a scientific discipline that develops algorithms computers can use to perform advanced and difficult tasks. The central idea is that these algorithms allow computer systems to “learn from experience”. Given examples of, say, handwritten digits, a computer can learn to recognize digits it has never seen before and classify them as 1, 2, and so on. In this respect, ML is inspired by the functioning of the human brain. From the moment we are born, our senses are shaped by the environment: we hear things and learn a language, we see objects and learn their properties, we watch other people move and learn to walk ourselves.
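The idea of learning from labelled examples can be made concrete with a deliberately tiny sketch. The 3×3 bitmaps, labels, and the nearest-neighbour rule below are all illustrative assumptions, not part of the project; real digit recognition would use far larger images and models.

```python
import numpy as np

# Tiny hand-made 3x3 bitmaps standing in for "handwritten digits"
# (purely illustrative; real digit data would be e.g. 28x28 images).
ONE = np.array([[0, 1, 0],
                [0, 1, 0],
                [0, 1, 0]], dtype=float)
ZERO = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]], dtype=float)

train_X = np.stack([ONE.ravel(), ZERO.ravel()])  # training examples
train_y = ["1", "0"]                             # their labels

# A new, slightly distorted "1" the system has never seen before
test = np.array([[0, 1, 0],
                 [1, 1, 0],
                 [0, 1, 0]], dtype=float).ravel()

# 1-nearest-neighbour: classify by the closest training example
dists = np.linalg.norm(train_X - test, axis=1)
pred = train_y[int(np.argmin(dists))]
print(pred)  # prints "1"
```

Even this minimal classifier shows the pattern: generalization comes from comparing a new input against previously seen labelled examples.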
Many techniques have been developed that can learn such relations and distinguish
different patterns. Few techniques, however, share all the properties of the human
brain. Our brain is robust: if we see a breed of dog we never saw before, we know it’s a
dog. Our brain can infer and generalize from observations: we don’t have to taste every
strawberry to know that it’s sweet. Our brain recognizes changes very fast: if a person starts running, we don’t need to observe him for a few seconds to realize that he is moving faster.
My research proposes combining two techniques, namely Recurrent Neural Networks
(based on early ideas about the functioning of neurons) and Bayesian Inference (a method that calculates probabilities from observations). By combining these techniques, the aim is to produce systems that are robust and fast, and that can generalize when used in highly dynamic environments. The main idea is to learn a model of this environment and then invert the model to make predictions on the basis of real observations. If this is successful, these models can be used to control complex processes. This is a novel approach that could outperform state-of-the-art applications. Moreover, it promises to provide insight into the exact functioning of the brain.