Advanced Media Content

Most of tomorrow's media formats aim to deepen the sense of immersion in the content. The challenge is to transform the end user's experience by placing them at the center of the action or information, especially where narrative is key.

b<>com lab, new media content (photo © Fred Pieau)
This shift can be seen across all forms of media: television, movies, games, advertising, training, collaborative work, monitoring, industrial signage, the Web, and more.

The Advanced Media Content lab builds tools that enable this content to be created, stored, transported, and shared, both for and by as many people as possible. It delivers expertise and technologies related to innovative dimensions of images (resolution, depth, color, light, etc.), sound (spatialized), and new rendering devices (screens, projection, AR/VR head-mounted displays with 3 or 6 degrees of freedom, etc.). It develops tools and solutions aimed at enabling the adoption of these future formats by industry professionals and, ultimately, by end consumers. Holography and compression systems are key areas of expertise for the laboratory.


Jean-Yves Aubié

Advanced Media Content lab manager

The challenges raised by new video and audio formats? Bringing even more realism and immersion to the experience, while remaining easy to use and handle.
products & services
b<>com Sublima (Adaptive HDR Converter)

Perform real-time HDR conversions without compromising the artistic intent.
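To give a rough sense of what an SDR-to-HDR conversion involves, here is a deliberately naive sketch in Python: it linearizes a gamma-encoded SDR frame, stretches it to a target peak luminance, and re-encodes it with the SMPTE ST 2084 (PQ) transfer function. This is not b<>com Sublima's algorithm (a production converter adapts the expansion to the content precisely to preserve artistic intent), and the function names sdr_to_hdr_pq and pq_encode are purely illustrative.

```python
# Naive SDR-to-HDR up-conversion sketch, NOT the b<>com Sublima algorithm.
# Linearize a gamma-encoded SDR frame, stretch it to a target peak luminance,
# and re-encode with the SMPTE ST 2084 (PQ) transfer function.
import numpy as np

# PQ constants defined in SMPTE ST 2084
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Encode absolute luminance (cd/m2) into a PQ signal in [0, 1]."""
    y = np.clip(luminance_nits / 10000.0, 0.0, 1.0)   # PQ is referenced to 10,000 nits
    y_m1 = np.power(y, M1)
    return np.power((C1 + C2 * y_m1) / (1.0 + C3 * y_m1), M2)

def sdr_to_hdr_pq(sdr_rgb, hdr_peak_nits=1000.0):
    """Crude inverse tone mapping: gamma-2.4 linearization + linear stretch."""
    linear = np.power(np.clip(sdr_rgb, 0.0, 1.0), 2.4)   # approximate SDR EOTF
    return pq_encode(linear * hdr_peak_nits)             # map SDR white to the HDR peak

# Example: a mid-gray SDR pixel expressed in the PQ domain
print(sdr_to_hdr_pq(np.full((1, 1, 3), 0.5)))
```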

b<>com Spatial Audio Family

A family of audio plug-ins for greater immersion.
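As a minimal illustration of what spatial audio rendering does, the sketch below pans a mono signal binaurally using an interaural time difference (Woodworth approximation) and a crude interaural level difference. It is a toy, not one of the b<>com Spatial Audio plug-ins, which rely on far more sophisticated rendering; binaural_pan and its parameters are hypothetical names.

```python
# Toy binaural panner, not one of the b<>com Spatial Audio plug-ins.
# Places a mono source at a given azimuth using an interaural time difference
# (Woodworth approximation) and a crude interaural level difference.
import numpy as np

def binaural_pan(mono, azimuth_deg, fs=48000):
    theta = np.deg2rad(azimuth_deg)            # 0 = front, +90 = hard right
    head_radius, c = 0.0875, 343.0             # head radius (m), speed of sound (m/s)
    itd = head_radius / c * (abs(theta) + np.sin(abs(theta)))   # Woodworth ITD (s)
    delay = int(round(itd * fs))                                # delay in samples

    far_gain = 1.0 - 0.5 * abs(np.sin(theta))                   # crude ILD: attenuate far ear
    near = np.concatenate([mono, np.zeros(delay)])              # near ear: direct signal
    far = far_gain * np.concatenate([np.zeros(delay), mono])    # far ear: delayed, quieter
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right])             # shape (2, num_samples)

# Example: pan a 1 kHz tone 60 degrees to the right
fs = 48000
t = np.arange(fs) / fs
stereo = binaural_pan(0.2 * np.sin(2 * np.pi * 1000 * t), azimuth_deg=60, fs=fs)
```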

expertise
holography

Holography is the ultimate 3D display technology, providing the most natural, comfortable and immersive visualization experience without causing eye strain.
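To make the idea concrete, the following sketch computes the simplest possible computer-generated hologram: the interference pattern recorded when the spherical wave from a single point source meets an on-axis plane reference wave. The wavelength, pixel pitch, and distance values are arbitrary illustrative choices, not lab specifications.

```python
# Illustrative computer-generated hologram of a single point source: the
# interference of an on-axis plane reference wave with the point's spherical
# wave produces the recorded fringe (Fresnel zone) pattern.
import numpy as np

wavelength = 532e-9            # green laser, meters
pixel_pitch = 4e-6             # hologram plane sampling, meters
n = 1024                       # hologram resolution (n x n)
z = 0.05                       # point source 5 cm behind the hologram plane

coords = (np.arange(n) - n / 2) * pixel_pitch
x, y = np.meshgrid(coords, coords)

k = 2 * np.pi / wavelength
r = np.sqrt(x**2 + y**2 + z**2)
object_wave = np.exp(1j * k * r)      # unit-amplitude spherical wavefront (1/r falloff ignored)
reference_wave = 1.0                  # unit-amplitude on-axis plane wave

hologram = np.abs(reference_wave + object_wave) ** 2   # recorded intensity pattern
hologram /= hologram.max()                              # normalize for display
```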

scientific publications

01.15.2024

PS-NET: an end-to-end phase space depth estimation approach for computer-generated holograms

In the present work, an end-to-end approach is proposed for recovering an RGB-D scene representation directly from a hologram using its phase space representation. The proposed method involves four steps. First, a set of silhouette images is extracted from the hologram phase space representation. Second, a minimal 3D volume that describes these silhouettes is extracted. Third, the extracted 3D…

read the publication
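For readers unfamiliar with the term, a phase space representation of a hologram is a joint space / spatial-frequency map, typically obtained with a windowed Fourier transform. The short sketch below computes such a map for a 1-D hologram slice; it only illustrates the representation the abstract above starts from, not the learned PS-NET pipeline, and phase_space is a hypothetical helper name.

```python
# Toy phase-space (position / spatial-frequency) analysis of a 1-D hologram
# slice via a short-time Fourier transform. This only illustrates the kind of
# representation the PS-NET paper starts from; the published end-to-end
# network is not reproduced here.
import numpy as np

def phase_space(hologram_row, window=64, hop=16):
    """Return a (position, spatial-frequency) magnitude map of a 1-D signal."""
    win = np.hanning(window)
    frames = [hologram_row[i:i + window] * win
              for i in range(0, len(hologram_row) - window + 1, hop)]
    # One FFT per window position -> local spatial-frequency content
    return np.abs(np.fft.fftshift(np.fft.fft(np.stack(frames), axis=1), axes=1))

# Example: a chirp-like fringe pattern, as produced by an out-of-plane point source
x = np.linspace(-1e-3, 1e-3, 2048)           # 2 mm aperture
fringes = np.cos(2 * np.pi * 5e6 * x**2)     # quadratic-phase fringes
ps = phase_space(fringes)
print(ps.shape)                               # (window positions, spatial frequencies)
```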

12.18.2023

Holographic Near-eye Display with Real-time Embedded Rendering

We present a wearable full-color holographic augmented reality headset with binocular vision support and real-time embedded hologram calculation. Unlike most previously proposed prototypes, our headset employs high-speed amplitude-only microdisplays and embeds a compact and lightweight electronic board to drive and synchronize the microdisplays and light source engines. In addition, to…

read the publication
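Amplitude-only microdisplays can only modulate intensity, so a complex hologram field has to be encoded as a non-negative (here binary) pattern before display. The sketch below uses the textbook off-axis encoding, recording the interference with a tilted reference wave and thresholding the result; it illustrates the constraint, not the headset's actual rendering pipeline, and to_binary_amplitude is a hypothetical name.

```python
# Sketch: encode a complex hologram field for an amplitude-only, binary
# microdisplay using a textbook off-axis scheme (interfere with a tilted
# reference wave, then threshold). Not the pipeline of the headset above.
import numpy as np

def to_binary_amplitude(field, carrier_freq, pixel_pitch):
    """field: complex array (H, W) with amplitude ~1; returns a 0/1 pattern."""
    n = field.shape[1]
    x = (np.arange(n) - n / 2) * pixel_pitch
    reference = np.exp(2j * np.pi * carrier_freq * x)[None, :]   # off-axis plane wave
    intensity = np.abs(reference + field) ** 2                   # recorded fringes
    return (intensity > np.median(intensity)).astype(np.uint8)   # binarize for the display

# Example with a random, normalized complex field
rng = np.random.default_rng(0)
field = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
field /= np.abs(field).max()
pattern = to_binary_amplitude(field, carrier_freq=30e3, pixel_pitch=8e-6)
```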

10.03.2023

Self-Supervised Focus Measure Fusing for Depth Estimation from Computer-Generated Holograms

Depth from focus is a simple and effective methodology for retrieving the scene geometry from a hologram when used with the appropriate focus measure and patch size. However, fixing those parameters for every sample may not be the right choice, as different scenes can be composed of various types of textures. In this work, we propose a self-supervised learning methodology for fusing the depth…

read the publication
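The plain depth-from-focus baseline that the abstract above builds on can be written in a few lines: reconstruct the hologram at several focus distances, score each patch with a focus measure, and keep the distance at which the patch is sharpest. The sketch below does exactly that with a variance-of-Laplacian measure over an already-computed focal stack; the self-supervised fusion of measures and patch sizes that the paper contributes is not reproduced, and the function names are illustrative.

```python
# Toy depth-from-focus over an already-reconstructed focal stack: score each
# patch with a focus measure (variance of the Laplacian) at every focus
# distance and keep the distance where the patch is sharpest.
import numpy as np

def laplacian_variance(patch):
    """Focus measure: variance of a 4-neighbor Laplacian response."""
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def depth_from_focus(focal_stack, depths, patch=32):
    """focal_stack: (num_depths, H, W) reconstructions; returns a coarse depth map."""
    _, h, w = focal_stack.shape
    depth_map = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            scores = [laplacian_variance(img[i*patch:(i+1)*patch, j*patch:(j+1)*patch])
                      for img in focal_stack]
            depth_map[i, j] = depths[int(np.argmax(scores))]   # sharpest focus distance
    return depth_map

# Example: random focal stack of 8 reconstructions, 256x256 pixels
stack = np.random.rand(8, 256, 256)
depths = np.linspace(0.1, 0.8, 8)          # candidate focus distances (meters, arbitrary)
coarse_depth = depth_from_focus(stack, depths)
```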

07.11.2023

TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM

Most classical SLAM systems rely on the static scene assumption, which limits their applicability in real-world scenarios. Recent SLAM frameworks have been proposed to simultaneously track the camera and moving objects. However, they are often unable to estimate the canonical pose of the objects and exhibit low object tracking accuracy. To solve this problem, we propose TwistSLAM++, a semantic,…

read the publication
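The word "fusing" in the title can be illustrated with the simplest possible fusion rule: combining two noisy estimates of an object's 6-DoF twist (linear and angular velocity) by inverse-variance weighting. TwistSLAM++ fuses its modalities inside a full SLAM optimization, so the sketch below, with its hypothetical fuse_twists helper and made-up numbers, only conveys the underlying intuition.

```python
# Minimal illustration of fusing two noisy 6-DoF twist estimates of the same
# moving object (e.g., one from visual tracking, one from another sensor)
# with inverse-variance weighting. TwistSLAM++ performs its fusion inside a
# full SLAM back-end; this sketch only conveys the idea.
import numpy as np

def fuse_twists(twist_a, var_a, twist_b, var_b):
    """Each twist is [vx, vy, vz, wx, wy, wz]; var_* are per-component variances."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * twist_a + w_b * twist_b) / (w_a + w_b)   # weighted average
    fused_var = 1.0 / (w_a + w_b)                           # variance of the fused estimate
    return fused, fused_var

# Example: a camera-based estimate (noisier) combined with a second modality
cam = np.array([0.9, 0.0, 0.1, 0.00, 0.02, 0.30])
other = np.array([1.0, 0.0, 0.0, 0.00, 0.00, 0.25])
fused, var = fuse_twists(cam, np.full(6, 0.04), other, np.full(6, 0.01))
```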

  • 2019

    Product of the Year at the NAB Show in Las Vegas

  • 2017

    Technology Innovation Award at the NAB Show in Las Vegas