Project

Realizing robust hybrid AI through the design of explainable, dynamic and multi-modal graph embeddings

Code
12A7726N
Duration
01 October 2025 → 30 September 2028
Funding
Research Foundation - Flanders (FWO)
Research disciplines
  • Natural sciences
    • Knowledge representation and reasoning
    • Machine learning and decision making
  • Social sciences
    • Knowledge representation and machine learning
    • Knowledge management
Keywords
Hybrid Artificial Intelligence, Graph-based Machine Learning, Explainable Graph Embedding
Project description
Hybrid Artificial Intelligence (AI) combines symbolic reasoning with subsymbolic machine learning (ML) to create systems that are robust, adaptive, and capable of solving complex real-world problems without requiring large amounts of data. Graphs play a pivotal role in this paradigm by linking the available data and explicit knowledge together, making information transparent and traceable. Despite these advantages, current graph-based hybrid AI methods face challenges in explainability and in handling dynamic and multimodal data. This also prevents recent advances in Large Language Models (LLMs) from being exploited in this domain to provide end-users with human-like explanations. In this research, I address these issues through three complementary steps. First, I will develop a new graph embedding methodology that transforms graph neighbourhoods into explainable features, enabling global insights while maintaining predictive performance. Second, I will examine how to adapt this graph embedding to handle evolving graph structures, diverse graph types, and multimodal data seamlessly. Finally, I will integrate these embeddings with LLMs, enhancing hybrid AI by enabling graph neighbourhood retrieval for more transparent decision-making. By leveraging the unique properties of a new graph embedding paradigm, this research aims to significantly advance adaptability, performance, and explainability in hybrid AI.
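To give an intuition for what "transforming graph neighbourhoods into explainable features" could mean in practice, the following is a minimal sketch. All names, the toy graph, and the path-string encoding are illustrative assumptions for this page, not the project's actual method:

```python
from collections import deque

# Toy knowledge graph (illustrative only): node -> list of (relation, neighbour).
GRAPH = {
    "paper1": [("cites", "paper2"), ("hasTopic", "ML")],
    "paper2": [("hasTopic", "AI")],
    "ML": [("subTopicOf", "AI")],
    "AI": [],
}

def neighbourhood_features(graph, node, max_hops=2):
    """Map a node's k-hop neighbourhood to human-readable binary features.

    Each feature is a relation path ending in a concrete entity, e.g.
    'cites.hasTopic=AI', so a downstream model's weights can be traced
    back to explicit graph context, unlike an opaque dense embedding.
    """
    features = set()
    frontier = deque([(node, ())])  # (current node, relation path so far)
    while frontier:
        current, path = frontier.popleft()
        for relation, neighbour in graph.get(current, []):
            # Encode the path to this neighbour as one readable feature.
            feature = ".".join(path + (relation,)) + "=" + neighbour
            features.add(feature)
            if len(path) + 1 < max_hops:
                frontier.append((neighbour, path + (relation,)))
    return sorted(features)

print(neighbourhood_features(GRAPH, "paper1"))
# → ['cites.hasTopic=AI', 'cites=paper2', 'hasTopic.subTopicOf=AI', 'hasTopic=ML']
```

Such path-based features trade some of the compactness of learned vector embeddings for global interpretability: each dimension has a fixed, nameable meaning across the whole graph.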