- Natural sciences
  - Computer graphics
  - Virtual reality and related simulation
- Engineering and technology
  - Analogue and digital signal processing
Virtual reality (VR) and holographic displays allow viewers to move freely (with 6 degrees of freedom) through three-dimensional (3D) space, fully immersing them in the displayed world. The VR industry is in an arms race to deliver the best VR experience, and current rendering techniques cannot keep up with the growing expectations. Content currently suitable for VR is limited to low-quality renderings of simple 3D models. A new type of content is on the rise: by using camera captures of the real world, photorealism can be achieved at low rendering effort, e.g. 360° video in VR. To achieve total freedom of movement, many cameras are placed in the scene and the views in between cameras are interpolated through “view synthesis”. However, current implementations struggle to achieve both realistic quality and real-time performance. I will deliver a novel view synthesis approach that uses optimal camera positioning to achieve high quality. Realism will be achieved by an innovative approach that models the probability of each pixel being “correct” based on camera positioning and the likelihood that the pixel belongs to a reflective surface. The framework will include a real-time visualizer to assess the achieved quality and performance on many existing and new datasets. With this framework, the door to hyper-immersive, photorealistic VR in real-world settings will be opened, allowing the wide public to travel back in time to any camera-captured place on Earth.
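The per-pixel weighting idea can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the proposed method: it blends pre-warped camera views into a target viewpoint, trusting nearby cameras more and down-weighting pixels with a high likelihood of lying on a reflective surface (all names, shapes, and the inverse-distance confidence model are hypothetical).

```python
import numpy as np

def blend_views(views, cam_positions, target_pos, reflect_prob):
    """Hypothetical confidence-weighted view blending.

    views:         (K, H, W, 3) camera images already warped to the target view
    cam_positions: (K, 3) camera centers
    target_pos:    (3,) desired viewpoint
    reflect_prob:  (K, H, W) per-pixel likelihood of a reflective surface
    """
    # Cameras closer to the target viewpoint are assumed more trustworthy.
    dists = np.linalg.norm(cam_positions - target_pos, axis=1)
    conf = 1.0 / (dists + 1e-6)
    conf = conf / conf.sum()  # normalize to blending weights

    # Down-weight pixels likely on reflective (view-dependent) surfaces.
    w = conf[:, None, None, None] * (1.0 - reflect_prob[..., None])

    # Weighted average over the K source views, guarding against zero weight.
    return (w * views).sum(axis=0) / np.clip(w.sum(axis=0), 1e-6, None)
```

With two identical, reflection-free input views the blend simply reproduces the shared image; the interesting behavior appears when views disagree on specular pixels, where the reflectance term shifts weight toward the more diffuse observation.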