Thesis: Real-time hologram synthesis for immersive videoconferencing and telepresence systems
Keywords: Holography, 3D video, Algorithms, Signal processing, Optics, Photonics
- Scientific context
With recent advances in capture and display systems, immersive technologies have drawn considerable attention from the scientific and industrial communities over the last decade. Consumers seek a better sense of immersion during their remote interactions and audio-visual entertainment. There is a strong desire to experience the ultimate Star Wars-like three-dimensional (3D) display, able to project the user into an immersive and realistic virtual environment and give them the illusion that interlocutors located miles away are present in the same conference room.
Unfortunately, most current 3D visualization systems – such as Head-Mounted Displays (HMD) or 3D televisions – are based on stereoscopy, which fails to create a natural and realistic depth illusion. This is because stereoscopy does not reproduce all the Human Visual System (HVS) depth cues perceived in natural vision. In particular, it cannot provide the accommodation stimulus: the viewer has to focus on a fixed plane whose depth does not match the actual location of the perceived objects. This mismatch, known as the Vergence-Accommodation Conflict (VAC), leads to eye-strain and headaches and severely degrades interaction and immersion.
To overcome this limitation, several alternative technologies have been proposed in recent decades. Among them, Holography is often considered the most promising, since it provides all HVS depth cues without causing eye-strain. To create the depth illusion, a hologram diffracts an illuminating light beam so as to give it the shape of the light wave that would be emitted, transmitted or reflected by a given scene. As a consequence, viewers perceive the scene as if it were physically present in front of them.
Thanks to these attractive visualization properties, Holography is a perfect candidate for the ultimate 3D display, creating virtual images indistinguishable from real ones. However, it also raises several open issues that need to be tackled. One of the most important obstacles is the very high resolution required to display life-size scenes. Since holography creates the depth illusion through the diffraction of light, the pixel pitch of a hologram must be close to the wavelength of visible light. Because of this microscopic pixel size, a hologram with a large area and a wide viewing angle contains several billion pixels. For instance, a 20 cm × 15 cm hologram with a viewing angle of 120° requires a resolution of 720K × 540K pixels.
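As a sanity check on these figures, the required pixel pitch follows from the grating equation p = λ / (2 sin(θ/2)). The sketch below assumes a wavelength of 500 nm (not specified above; function and variable names are illustrative) and reproduces the order of magnitude of the quoted resolution:

```python
import math

def required_resolution(width_m, height_m, wavelength_m, viewing_angle_deg):
    # Grating equation: to diffract light over a full viewing angle theta,
    # the pixel pitch p must satisfy p = lambda / (2 * sin(theta / 2)).
    pitch = wavelength_m / (2.0 * math.sin(math.radians(viewing_angle_deg) / 2.0))
    return round(width_m / pitch), round(height_m / pitch), pitch

nx, ny, pitch = required_resolution(0.20, 0.15, 500e-9, 120.0)
# Roughly 693K x 520K pixels at a ~289 nm pitch, consistent with the
# ~720K x 540K quoted above for a slightly shorter wavelength.
print(f"pitch = {pitch * 1e9:.0f} nm, resolution = {nx:,} x {ny:,}")
```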
Despite the rapid evolution of electronic technologies, computing a hologram of such resolution in real time remains an open problem. To address this challenge, the objective of this thesis is to design and develop new holographic video synthesis algorithms with reduced computational complexity.
- Thesis Objectives
To generate a hologram, state-of-the-art methods usually represent scene objects as 3D point clouds or polygon meshes. The light waves scattered by each scene point or polygon towards the hologram plane are then summed to obtain the hologram. The main limitation of these methods is their high computational complexity, which is directly proportional both to the number of points or polygons and to the resolution of the hologram. Indeed, the light wave emitted by each point of the scene potentially illuminates every hologram pixel during recording. Generating a hologram comprising several billion pixels can thus take several hours, or even several days, of computation on a desktop computer. To overcome this limitation, several research tracks will be explored during this thesis.
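To make the complexity concrete, a naive point-source method can be sketched as follows: each scene point contributes a spherical wave summed over every hologram pixel, giving a cost proportional to (number of points) × (number of pixels). Parameters and names below are toy illustrations, far below display scale:

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, nx, ny, pitch, wavelength):
    # Sum the spherical wave a * exp(i*k*r) / r emitted by each scene point
    # over the whole hologram plane: O(N_points * N_pixels) complexity.
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)                      # hologram-plane coordinates
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r       # spherical wave contribution
    return field

points = [(0.0, 0.0, 0.05), (1e-4, 0.0, 0.06)]    # two points, 5-6 cm away
H = point_cloud_hologram(points, [1.0, 1.0], 256, 256, 8e-6, 532e-9)
print(H.shape)
```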
A first track concerns the computation of the holographic video stream in the space-frequency domain, also called phase space. While there is, in general, no correlation between two adjacent pixels of a hologram, the space-frequency distribution of the holographic signal exhibits strong redundancies from one coefficient to the next. By generating the hologram directly in phase space, we aim to exploit these space-frequency redundancies to reduce the computation time. Similarly, even though there is no temporal redundancy between consecutive frames of a holographic video, preliminary studies carried out at b<>com show that the phase-space representation of the holographic signal presents strong correlations from one frame to the next. By characterizing how this representation evolves with the motion of scene objects, it may be possible to predict each holographic frame from the preceding ones and thus considerably reduce the computation time of holographic video.
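The simplest space-frequency representation is a short-time Fourier transform. The toy sketch below (window size and hop are arbitrary choices) computes one for a 1-D chirp, i.e. the cross-section of a Fresnel zone pattern such as the hologram of a single point:

```python
import numpy as np

def stft_1d(signal, win_size=64, hop=32):
    # Short-time Fourier transform: slide a window along the signal and take
    # the FFT of each windowed segment. Rows index position, columns index
    # local spatial frequency, giving a space-frequency (phase-space) map.
    window = np.hanning(win_size)
    n_frames = 1 + (len(signal) - win_size) // hop
    frames = np.stack([signal[i * hop : i * hop + win_size] * window
                       for i in range(n_frames)])
    return np.fft.fft(frames, axis=1)

# Toy "hologram row": a chirp, whose local frequency varies smoothly with
# position, so adjacent phase-space coefficients are highly correlated.
x = np.linspace(-1.0, 1.0, 1024)
row = np.cos(400.0 * x ** 2)
P = stft_1d(row)
print(P.shape)
```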
A second track is to take into account the limitations of the human visual system. Human visual acuity reaches its maximum in a cone of around ten degrees at the center of the visual field (the foveal zone) and decreases rapidly in peripheral vision. To reduce the computation time, it is possible to accurately compute the light waves in the user's foveal zone only and to reduce the computation precision in peripheral vision. While this technique, called foveated rendering, is commonly used in virtual reality applications, it has almost never been studied in the context of holography. This is because it requires a non-uniform sampling of light waves, which drastically reduces the efficiency of the light propagation operators based on discrete Fourier transforms (Rayleigh-Sommerfeld propagation, Fresnel propagation, etc.) commonly used to compute holograms. Applying these operators in phase space could overcome this limitation.
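For reference, a minimal sketch of one such FFT-based operator, the angular spectrum method (a close relative of Fresnel propagation), is given below with illustrative parameters. Note that it assumes a uniformly sampled grid, which is precisely the assumption that foveated (non-uniform) sampling breaks:

```python
import numpy as np

def angular_spectrum_propagate(field, pitch, wavelength, distance):
    # Propagate a complex field over free space: multiply its 2-D spectrum by
    # the transfer function exp(i * kz * d) and transform back.
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)
    H[arg < 0] = 0.0                          # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

src = np.zeros((128, 128), dtype=np.complex128)
src[64, 64] = 1.0                             # point source in the input plane
dst = angular_spectrum_propagate(src, 8e-6, 532e-9, 0.01)
print(dst.shape)
```

Because the transfer function has unit modulus for propagating waves, the operation conserves the energy of the field while spreading it across the output plane.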
Finally, an important aspect of the thesis will be to evaluate and compare the visual quality of the generated holograms. Currently available holographic displays have limited resolution, so the subjective quality evaluation of extremely high-resolution holograms has never been tackled in the literature. To address this challenge, through the co-supervision of the thesis with IMT Atlantique and thanks to their ARAGO technological platform, we will design and fabricate physical holograms by direct-write photoplotting. This will enable validation of the designed algorithms and their adaptation to the limitations and constraints of physical realization. Subjective quality assessment procedures for holography will therefore be developed.
- Who we’re looking for
• Master’s or Engineer’s degree in Computer Science, Signal/Image Processing, Applied Mathematics, or equivalent
• Competence in optics
• Comfortable with C++, Matlab, and/or Python
• Proficient in English, spoken and written
• Fixed-term contract
• Start date: October 2022
• Duration: 3 years
• Location: Rennes