Code
BOF/PDO/2025/056
Duration
01 October 2025 → 30 September 2028
Funding
Regional and community funding: Special Research Fund
Promotor
Research disciplines
- Natural sciences
  - Computer vision
- Engineering and technology
  - Video communications
  - Image and language processing
  - Pattern recognition and neural networks
  - Data visualisation and imaging
Keywords
Fake Media Detection
Explainable AI
Multimedia Forensics
Project description
The rise of AI-generated media has made creating fake content easier and more realistic, posing societal risks such as disinformation, fabricated evidence, and fraud. While detection methods for fake media have advanced, they lack interpretability, making them inaccessible and untrustworthy for non-expert users and limiting their adoption in critical domains. Moreover, most research focuses on detecting fake images, leaving video and emerging 3D rendering formats unexplored. These formats, capable of creating dynamic, multi-perspective fake content, present new challenges that current methods do not address.

This project aims to develop a conversational explainability framework for fake media detection across images, videos, and 3D formats. By leveraging large foundation models, the framework lets users interact with the system and receive explanations tailored to their expertise. This bridges the gap between technical sophistication and usability, making detection tools more accessible.

Additionally, the project adopts a modular approach to interpretability, addressing pixel-level forensics, confidence estimation, and attribution of generative methods. By offering fine-grained transparency at every step, it enhances trust and usability in fake media detection. These advancements have the potential for groundbreaking impact in fake media detection and other societally relevant fields of multimedia forensics and computer vision, contributing to a safer digital environment.
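To make the modular design concrete, the sketch below shows one plausible shape for such a pipeline: three hypothetical interpretability modules (a pixel-level tamper map, a calibrated confidence estimate, and generator attribution) feeding an explanation layer that adapts its wording to the user's expertise. All names, thresholds, and values here are illustrative assumptions, not the project's actual implementation; a real conversational front end would sit on a large foundation model rather than the simple template used here.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ForensicReport:
    tamper_map: List[List[float]]  # per-region manipulation scores (pixel-level forensics)
    confidence: float              # calibrated probability that the media is fake
    generator: str                 # most likely generative method (attribution)


def analyze(media_id: str) -> ForensicReport:
    """Run the three modules on one media item.
    Dummy values stand in for real detector outputs."""
    return ForensicReport(
        tamper_map=[[0.1, 0.8], [0.2, 0.9]],  # high scores flag suspicious regions
        confidence=0.93,
        generator="diffusion-based image synthesis",
    )


def explain(report: ForensicReport, expertise: str) -> str:
    """Tailor the explanation to the user's expertise, as a conversational
    front end built on a foundation model might."""
    if expertise == "expert":
        flagged = sum(score > 0.5 for row in report.tamper_map for score in row)
        return (
            f"P(fake) = {report.confidence:.2f}; attribution: {report.generator}; "
            f"{flagged} region(s) exceed the 0.5 tamper threshold."
        )
    return f"This file is probably fake (we are about {report.confidence:.0%} sure)."


if __name__ == "__main__":
    report = analyze("example.jpg")
    print(explain(report, "novice"))
    print(explain(report, "expert"))
```

The key design point mirrored from the project description is that each module exposes its own interpretable output, so the explanation layer can offer fine-grained transparency at every step instead of a single opaque fake/real label.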