Project

Sensor fusion for multimodal mapping

Code
bof/baf/4y/2025/01/055
Duration
01 January 2025 → 31 December 2026
Funding
Regional and community funding: Special Research Fund
Promotor
Research disciplines
  • Engineering and technology
    • Data visualisation and imaging
Keywords
sensor fusion, computer vision, 3D reconstruction
 
Project description

This research aims to enhance 3D reconstruction through sensor fusion for improved scene analysis and understanding. We first refine 3D reconstruction in terms of accuracy, efficiency, and robustness by integrating sensor fusion with deep learning-based pose estimation (e.g., by leveraging advancements in monocular depth estimation). Robustness in SLAM is improved by incorporating UWB, IMU, and GNSS, while accuracy benefits from high-resolution multi-view cameras. Efficiency is optimized by combining LiDAR and image data. Next, we enhance scene interpretability using multimodal data (e.g., hyperspectral, thermal, polarization, magnetometers) and advanced representations like BRDF. Finally, we accelerate scene reconstruction with methods such as neural rendering and 3D Gaussian splatting. Applications include digital twins, 3D display visualization, enhanced target detection, etc.