Project

Trustworthy AI for healthcare

Code
bof/baf/4y/2024/01/777
Duration
01 January 2024 → 31 December 2025
Funding
Regional and community funding: Special Research Fund
Research disciplines
  • Natural sciences
    • Machine learning and decision making
    • Health informatics
    • Knowledge management
  • Medical and health sciences
    • Medical intensive care
    • Diagnostic radiology
    • Neurological and neuromuscular diseases
Keywords
hybrid AI, decision support, uncertainty quantification, explainable AI
 
Project description

In today's healthcare, early detection of disease in an individual patient, and ideally even prediction of its time to onset, has important diagnostic value: it allows suitable treatment to begin before the disease escalates. However, AI faces challenges in navigating the uncertainties inherent in medical data and clinical decision-making. This research therefore aims to design novel explainable AI/ML models that integrate medical domain expertise and clinical data with uncertainty quantification in a hybrid AI framework, ensuring reliable and interpretable early detection of clinical events. The resulting hybrid AI models will provide trustworthy solutions that empower clinicians and support a shift towards personalized therapy. Example use cases are infection management in the ICU, early detection of hip dysplasia in assistance dogs, early manifestation of chronic kidney disease in cats, and personalized therapy for psoriasis.
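The description does not specify any particular model, library, or dataset. As a minimal sketch of what uncertainty-aware decision support of this kind can look like, the hypothetical Python example below uses the disagreement among trees in a random forest as a simple uncertainty estimate and defers uncertain cases to the clinician rather than forcing an automated call. The synthetic data, the model choice, and the threshold values are all illustrative assumptions, not the project's actual method.

"""Minimal sketch: uncertainty-aware early-detection classifier.

Illustrative only -- dataset, features, model, and thresholds are
hypothetical stand-ins, not the project's actual pipeline.
"""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for clinical tabular data (e.g., ICU vitals and labs),
# with the positive (event) class deliberately rare.
X, y = make_classification(
    n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# An ensemble gives a simple, well-known uncertainty proxy: the spread of
# per-tree predictions approximates model (epistemic) uncertainty.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Per-tree positive-class probabilities for each test patient.
per_tree = np.stack(
    [tree.predict_proba(X_test)[:, 1] for tree in model.estimators_]
)
mean_risk = per_tree.mean(axis=0)    # predicted probability of the event
uncertainty = per_tree.std(axis=0)   # disagreement across the ensemble

# A hybrid-AI-style decision rule: defer to the clinician when the model
# is uncertain, instead of issuing an automated flag.
UNCERTAINTY_THRESHOLD = 0.15         # hypothetical cut-off
RISK_THRESHOLD = 0.5                 # hypothetical cut-off
for risk, unc in zip(mean_risk[:5], uncertainty[:5]):
    if unc > UNCERTAINTY_THRESHOLD:
        print(f"risk={risk:.2f} +/- {unc:.2f} -> refer to clinician")
    else:
        print(f"risk={risk:.2f} +/- {unc:.2f} -> flag: {risk > RISK_THRESHOLD}")

Deferring low-confidence cases to a human is one common way such models are made trustworthy in practice; the project's explainability and domain-knowledge components would sit on top of a mechanism like this.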