- Engineering and technology
- Data visualisation and imaging
This research aims to enhance 3D reconstruction through sensor fusion for improved scene analysis and understanding. We first refine 3D reconstruction in accuracy, efficiency, and robustness by integrating sensor fusion with deep learning-based pose estimation (e.g., by leveraging advances in monocular depth estimation): robustness in SLAM is improved by incorporating UWB, IMU, and GNSS measurements; accuracy benefits from high-resolution multi-view cameras; and efficiency is optimized by combining LiDAR and image data. Next, we enhance scene interpretability using multimodal data (e.g., hyperspectral, thermal, polarization, and magnetometer measurements) and advanced representations such as the BRDF. Finally, we accelerate scene reconstruction with methods such as neural rendering and 3D Gaussian splatting. Applications include digital twins, 3D display visualization, and enhanced target detection.
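To illustrate the kind of fusion underlying the SLAM robustness goal, the sketch below shows a minimal one-dimensional Kalman update that combines an IMU-propagated position estimate with a GNSS fix. This is a generic textbook formulation, not the project's actual method, and all numerical values are hypothetical.

```python
def kalman_update(x, p, z, r):
    """Fuse a predicted state (mean x, variance p) with a
    measurement (value z, variance r) via a 1D Kalman update."""
    k = p / (p + r)          # Kalman gain: how much to trust the measurement
    x_new = x + k * (z - x)  # fused estimate pulled toward the measurement
    p_new = (1.0 - k) * p    # fused variance is always smaller than p
    return x_new, p_new

# Hypothetical example: uncertain IMU-propagated position (variance 4.0)
# corrected by a more precise GNSS measurement (variance 1.0).
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
```

The same principle extends to the full multi-sensor setting (UWB, IMU, GNSS, cameras): each sensor contributes a measurement with its own uncertainty, and the filter weights them by inverse variance.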