Project

Explainable, human-in-the-loop hybrid AI, resolving conflicting user feedback

Code
1SH2C24N
Duration
01 November 2023 → 31 October 2027
Funding
Research Foundation - Flanders (FWO)
Research disciplines
  • Natural sciences
    • Data mining
    • Knowledge representation and reasoning
    • Machine learning and decision making
    • Decision support and group support systems
Keywords
human-in-the-loop, explainability, hybrid machine learning
 
Project description

One of the biggest obstacles to the adoption of AI in industry (e.g. in production processes) is that AI is seen as a fully data-driven approach requiring large amounts of data, which are often not available. Hybrid AI softens the need for big data by incorporating domain knowledge into the AI. Whereas today's hybrid AI solutions embed objective knowledge into the AI, this project researches an approach that captures the tacit knowledge of experienced engineers in a usable format from which a Knowledge Graph (KG) is created, while preserving the discrepancies between experts.

Knowledge that remains uncaptured can be taken into account by enabling user feedback on the model's outputs. However, users can give conflicting feedback. These conflicts need to be captured and resolved, by leveraging the existing knowledge in the KGs, before the feedback is sent to the model for training.

The resulting model predictions should also be interpretable for users, so that they can give valuable feedback. This is not trivial in hybrid AI, as the models use vectors extracted from the KG that are not interpretable for humans. Therefore, a generic explainability layer is investigated that can be combined with any hybrid AI outcome prediction model, by designing recurrent attention models for KGs.
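
To make the idea of a recurrent attention model over KG embeddings more concrete, the minimal sketch below (in PyTorch) shows one possible shape such an explainability layer could take: a GRU runs over a sequence of KG node embeddings, an attention head scores each node, and the attention weights are returned alongside the outcome prediction as a node-level explanation a user could give feedback on. This is purely an illustration, not the project's actual design; all class names, dimensions, and the toy data are assumptions.

import torch
import torch.nn as nn

class KGAttentionExplainer(nn.Module):
    """Illustrative recurrent attention layer over KG node embeddings."""

    def __init__(self, embed_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)            # scores each node's hidden state
        self.head = nn.Linear(hidden_dim, num_classes)  # outcome prediction head

    def forward(self, node_embeddings: torch.Tensor):
        # node_embeddings: (batch, num_nodes, embed_dim), e.g. produced by a KG embedding model
        hidden, _ = self.gru(node_embeddings)               # (batch, num_nodes, hidden_dim)
        weights = torch.softmax(self.attn(hidden), dim=1)   # (batch, num_nodes, 1)
        context = (weights * hidden).sum(dim=1)             # attention-weighted summary
        logits = self.head(context)
        # The per-node attention weights indicate which KG nodes drove the prediction
        # and can be surfaced to the user as an explanation.
        return logits, weights.squeeze(-1)

# Toy usage with random vectors standing in for KG embeddings (hypothetical sizes).
model = KGAttentionExplainer(embed_dim=64, hidden_dim=32, num_classes=2)
fake_kg_nodes = torch.randn(1, 10, 64)      # 1 sample, 10 KG nodes, 64-dim embeddings
logits, node_importance = model(fake_kg_nodes)
print(node_importance)                      # per-node weights usable as an explanation

Because the layer only consumes node embeddings and emits a prediction plus weights, a component of this kind could in principle be wrapped around different hybrid AI outcome prediction models, which is the genericity the project aims for.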