Explainable, human-in-the-loop hybrid AI: resolving conflicting user feedback in a federated learning setting

01 October 2022 → 31 October 2023
Regional and community funding: Special Research Fund
Research disciplines
  • Natural sciences
    • Data mining
    • Machine learning and decision making
    • Decision support and group support systems
  • Social sciences
    • Knowledge representation and machine learning
Hybrid machine learning, explainability, federated learning
Project description

Hybrid AI reduces the need for big data by incorporating domain knowledge into the AI model. Current solutions
embed only objective knowledge. This research extends them by combining that objective knowledge with the
tacit knowledge of experienced engineers, while preserving the discrepancies between the two. Knowledge that
remains uncaptured is taken into account in the form of user feedback on the model outputs, including
conflicting feedback from different users.
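To make the idea of handling conflicting user feedback concrete, the sketch below shows one minimal way it could be approached: feedback from several users on the same model output is aggregated into a majority label while a disagreement score is kept rather than discarded. All names and the disagreement measure are illustrative assumptions, not the project's actual method.

```python
from collections import Counter

def aggregate_feedback(feedback):
    """feedback: dict mapping a user id to that user's label for one model output.

    Returns the majority label plus a disagreement score in [0, 1]
    (0 = all users agree; larger values = labels more evenly split).
    This is a hypothetical illustration, not the project's method.
    """
    counts = Counter(feedback.values())
    majority_label, majority_count = counts.most_common(1)[0]
    # Keep the discrepancy between users instead of forcing consensus.
    disagreement = 1.0 - majority_count / len(feedback)
    return majority_label, disagreement

# Three engineers give conflicting feedback on the same model output:
label, disagreement = aggregate_feedback({"u1": "faulty", "u2": "faulty", "u3": "ok"})
print(label, round(disagreement, 2))  # faulty 0.33
```

In a federated setting, such per-user feedback would stay on each user's device, with only aggregate statistics shared; the sketch above only illustrates the conflict-preserving aggregation step.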