Project

Towards a better understanding of uncertainty sources in increasingly complex Deep Learning models

Code
11A3G26N
Duration
01 November 2025 → 31 October 2029
Funding
Research Foundation - Flanders (FWO)
Research disciplines
  • Natural sciences
    • Computer science
    • Statistical data science
    • Machine learning and decision making
Keywords
Deep Learning, Uncertainty, Bayesian Deep Learning, Ensembles
 
Project description
Deep Learning systems are transforming industries, governments, and academia, and they are increasingly relied upon in safety-critical settings to improve decision-making efficiency. In these contexts, incorrect recommendations or predictions can have irreversible consequences. In real-world applications, it is impossible to guarantee that models are trained on ideal datasets or with optimal design choices, so models inevitably encounter unfamiliar data or lack the information needed for accurate predictions. In such cases, a model should recognize its own uncertainty rather than offer potentially wrong guesses. This requires models to be fully aware of prediction uncertainty, which falls into two types: aleatoric (inherent to the data) and epistemic (stemming from the model's ignorance). Current approaches in the literature are often imprecise about which uncertainty they measure, which makes them hard to interpret and difficult to compare, owing to the lack of a well-defined technical framework and their task-specific nature. This proposal addresses that gap by investigating how aleatoric and epistemic uncertainties can be effectively quantified and interpreted in standard and increasingly complex Deep Learning models, enhancing their reliability and trustworthiness in critical applications. The proposed methods introduce a novel mechanism for disentangling uncertainties, testing their faithfulness, and exploring different layers of data complexity.
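
As an illustration of the kind of decomposition at stake (not the project's own method), the sketch below shows the common entropy-based split of predictive uncertainty for a deep ensemble classifier: total uncertainty is the entropy of the averaged predictive distribution, the aleatoric part is the average entropy of the individual members, and the epistemic part is their difference (the mutual information between prediction and model). Function names, array shapes, and the random inputs are assumptions made for this example only.

```python
# Minimal sketch (baseline technique, not the proposed method): entropy-based
# decomposition of predictive uncertainty for an ensemble of classifiers.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: array of shape (n_members, n_samples, n_classes)
    holding the softmax outputs of each ensemble member."""
    mean_probs = member_probs.mean(axis=0)           # (n_samples, n_classes)
    total = entropy(mean_probs)                      # predictive entropy
    aleatoric = entropy(member_probs).mean(axis=0)   # expected member entropy
    epistemic = total - aleatoric                    # mutual information
    return total, aleatoric, epistemic

# Hypothetical usage with random softmax outputs from 5 members on 3 inputs:
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
total, aleatoric, epistemic = decompose_uncertainty(probs)
print(total, aleatoric, epistemic)
```

High epistemic values in this sketch indicate disagreement among ensemble members (model ignorance), while high aleatoric values indicate that each member individually predicts a diffuse distribution (data-inherent noise); the project description targets precisely the interpretability and faithfulness limits of such decompositions.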