Code
1SA0B26N
Duration
01 November 2025 → 31 October 2029
Funding
Research Foundation - Flanders (FWO)
Promotor
Research disciplines
- Natural sciences
- Machine learning and decision making
- Computer graphics
- Virtual reality and related simulation
Keywords
Sparse and Unconstrained Volumetric Video
Uncertainty Modeling in Dynamic Scenes
Photorealistic 3D Reconstruction
Project description
Volumetric Video (VV) allows users to explore camera-captured dynamic scenes with full control over a virtual camera. However, existing VV techniques are constrained by the need for specialized hardware, controlled environments, and large camera arrays, limiting their practical adoption. This research seeks to eliminate these constraints by developing a method that imposes no restrictions on the capturing process or input type, allowing any video, such as one captured with a handheld smartphone, to be converted into a real-time photorealistic volumetric video. To achieve this, I first identify the technical challenges and examine why state-of-the-art methods have not yet overcome them. Next, I argue that resolving these challenges is sufficient to enable the widespread adoption of VV. Finally, I propose novel approaches to address the identified challenges. A significant focus of this work is the uncertainty inherent in the sparse and incomplete data that typical casual videos provide. I introduce methods to reduce this uncertainty, including a strong inductive bias and efficient use of a generative prior, and I present the first method to explicitly model it. Additionally, I propose a novel technique to handle the inconsistencies, such as those introduced by generative AI, motion blur, and estimation errors, which normally make photorealism impossible.