Code
1SA9O26N
Duration
01 November 2025 → 31 October 2029
Funding
Research Foundation - Flanders (FWO)
Promotor
Research disciplines
- Natural sciences
  - Computer vision
- Engineering and technology
  - Video communications
  - Image and language processing
  - Pattern recognition and neural networks
  - Data visualisation and imaging
Keywords
AI-generated media
fake media detection
misinformation detection
Project description
With the rise of generative AI, manipulating and creating synthetic media has become very easy. This powerful technology can also be used to spread misinformation, facilitate fraud, impersonate people, etc. To combat this, multimedia forensics methods aim to detect manipulated and AI-generated media. However, the current state of the art has two major shortcomings. First, models can be disrupted by local forensic distractions (e.g., captions or logos added to images) and global AI-based ones (e.g., AI-based compression methods such as JPEG AI), which break their intended operation and significantly degrade their performance. Second, research into video forensics is limited and often lacks explainability, and research into sparse image sets is nonexistent.

This project aims to (1) increase resilience against distractions, and (2) investigate sparse-image-set and video forensics based on object consistency. I will create a plug-in architecture that makes image forensic models aware of local distractions, and develop novel techniques that provide resilience against global AI-based distractions. Additionally, I will research 3D modeling approaches to forensically investigate image sets and videos in an explainable way. In conclusion, this project tackles the problem of misinformation by increasing the robustness and explainability of multimedia forensics.
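To make the idea of a local-distraction-aware plug-in concrete, here is a minimal sketch of one possible preprocessing step: per-patch forensic scores are aggregated while patches dominated by a known overlay (e.g., a caption or logo) are excluded, so the distraction cannot corrupt the overall decision. Everything here is illustrative, not the project's actual method: `masked_patch_score`, `toy_detector`, and all parameters are hypothetical names, and the toy detector merely stands in for a learned model.

```python
import numpy as np

def masked_patch_score(image, overlay_mask, patch_detector, patch=32, max_overlay=0.5):
    """Aggregate per-patch forgery scores, skipping patches dominated by a
    local distraction (caption/logo) marked in overlay_mask.

    image: (H, W) grayscale array. overlay_mask: (H, W) bool, True where an
    overlay covers the pixel. patch_detector: callable mapping a
    (patch, patch) array to a score in [0, 1]. All names are illustrative.
    """
    H, W = image.shape
    scores = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            m = overlay_mask[y:y + patch, x:x + patch]
            if m.mean() > max_overlay:  # patch mostly covered by a distraction
                continue                # -> excluded from the forensic decision
            scores.append(patch_detector(image[y:y + patch, x:x + patch]))
    # With no usable patches left, abstain (None) rather than guess.
    return float(np.mean(scores)) if scores else None

# Toy stand-in for a learned detector: flags high-variance patches.
toy_detector = lambda p: float(p.std() > 10)

img = np.zeros((64, 64))
img[0:32, 0:32] = np.random.default_rng(0).normal(0, 50, (32, 32))  # "suspicious" corner
mask = np.zeros((64, 64), dtype=bool)
mask[0:32, 0:32] = True  # pretend a logo covers exactly that corner

print(masked_patch_score(img, mask, toy_detector))  # corner excluded -> 0.0
```

The design choice sketched here, abstaining when every patch is covered, reflects the abstract's point that distractions should not silently flip a model's output; a real plug-in would of course estimate the overlay mask itself rather than receive it.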