Image Vision & Immersion

Most of tomorrow's audiovisual formats aim to deepen the feeling of immersion in the content. The challenge is to transform the end user's experience by placing them at the heart of the action or information. At the same time, computer vision technologies (augmented reality, virtual reality, volumetric video, etc.), driven by advances in artificial intelligence, are revolutionizing many industrial sectors by offering a new perception of the environment.

Many fields are involved: media, training, collaborative work, surveillance, health, industry, and the web.

The Image Vision & Immersion laboratory builds tools that enable the adoption of these new technologies by professionals and the general public. It delivers expertise and technology building blocks related to the characteristics and dimensions of the image (resolution, colorimetry, exposure, segmentation, compression, etc.), of sound (spatialization), and of new rendering devices (screens, projection, AR/VR headsets and glasses with 3 or 6 degrees of freedom, holography, etc.). These technologies allow b<>com to address a wide range of application fields, from industry to medicine (notably industrial holography and augmented reality for health) and from entertainment (television, cinema, etc.) to serious gaming (training, institutional applications, etc.), always placing the user at the heart of its thinking and its work, from the definition of needs to the use of the proposed solutions.


Jean-Yves Aubié

Head of the Image Vision & Immersion Laboratory

The progress brought by new technologies, thanks to artificial intelligence, opens up unexpected prospects for innovation in every field that uses image or sound, environmental modeling, or rendering, from 2D to extended reality.
products and services
b<>com Sublima

Perform real-time HDR conversions without compromising the artistic intent

b<>com Spatial Audio family

A family of audio plug-ins for greater immersion

b<>com DICOM family

Interoperability, anonymization, and standardization

b<>com Annotate

A surgical workflow editor and analytics tool

expertise
holography

Holography is the ultimate 3D display technology, providing the most natural, comfortable, and immersive visualization experience without causing eye strain.

scientific publications

01.15.2024

PS-NET: an end-to-end phase space depth estimation approach for computer-generated holograms

In the present work, an end-to-end approach is proposed for recovering an RGB-D scene representation directly from a hologram using its phase space representation. The proposed method involves four steps. First, a set of silhouette images is extracted from the hologram phase space representation. Second, a minimal 3D volume that describes these silhouettes is extracted. Third, the extracted 3D…

read the publication
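Below is a minimal, illustrative sketch of the kind of phase-space (windowed Fourier) analysis the abstract refers to, applied to a synthetic 1-D hologram slice in Python/NumPy. It is not the paper's PS-NET pipeline; the function name, parameters, and toy data are hypothetical placeholders.

```python
import numpy as np

def phase_space_1d(hologram_row, window_size=64, hop=8):
    """Windowed Fourier (phase-space) representation of a 1-D hologram slice.

    Returns an array of shape (num_windows, window_size) whose rows hold the
    local spatial-frequency magnitudes around successive window positions.
    """
    window = np.hanning(window_size)
    spectra = []
    for start in range(0, len(hologram_row) - window_size + 1, hop):
        segment = hologram_row[start:start + window_size] * window
        spectra.append(np.abs(np.fft.fftshift(np.fft.fft(segment))))
    return np.stack(spectra)

# Toy "hologram" row: a chirp-like interference pattern standing in for real data.
x = np.linspace(-1.0, 1.0, 1024)
toy_hologram_row = np.cos(800.0 * x**2)

ps = phase_space_1d(toy_hologram_row)
print(ps.shape)  # (num_windows, 64): position vs. local frequency content
```

Each row of the result shows how the local spatial-frequency content varies with position; this position-frequency map is the kind of representation from which silhouette-like structures can then be extracted.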

12.18.2023

Holographic Near-eye Display with Real-time Embedded Rendering

We present a wearable full-color holographic augmented reality headset with binocular vision support and real-time embedded hologram calculation. Contrary to most previously proposed prototypes, our headset employs high-speed amplitude-only microdisplays and embeds a compact and lightweight electronic board to drive and synchronize the microdisplays and light source engines. In addition, to…

read the publication
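For background, here is a minimal NumPy sketch of naive point-source (Fresnel) hologram computation with a non-negative, amplitude-only encoding, i.e. the general class of calculation such a headset must perform in real time. It is purely illustrative: the function, parameters, and values are hypothetical, and this is not the embedded algorithm described in the publication.

```python
import numpy as np

def point_source_hologram(points, amplitudes, wavelength, grid, pitch):
    """Naive computer-generated hologram of a 3-D point cloud (Fresnel model).

    points     : (N, 3) point positions in metres; z is the distance to the hologram plane
    amplitudes : (N,) per-point amplitudes
    wavelength : light wavelength in metres
    grid       : hologram resolution, e.g. (512, 512)
    pitch      : pixel pitch in metres
    """
    h, w = grid
    y = (np.arange(h) - h / 2) * pitch
    x = (np.arange(w) - w / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros(grid, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r2 = (X - px) ** 2 + (Y - py) ** 2
        field += a * np.exp(1j * np.pi * r2 / (wavelength * pz))  # Fresnel phase term
    # Amplitude-only encoding: keep a non-negative real pattern for the microdisplay.
    amplitude_hologram = np.real(field)
    return amplitude_hologram - amplitude_hologram.min()

holo = point_source_hologram(points=np.array([[0.0, 0.0, 0.2]]),
                             amplitudes=np.array([1.0]),
                             wavelength=532e-9,
                             grid=(256, 256),
                             pitch=8e-6)
print(holo.shape)
```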

10.03.2023

Self-Supervised Focus Measure Fusing for Depth Estimation from Computer-Generated Holograms

Depth from focus is a simple and effective methodology for retrieving the scene geometry from a hologram when used with the appropriate focus measure and patch size. However, fixing those parameters for every sample may not be the right choice, as different scenes can be composed with various types of textures. In this work, we propose a self-supervised learning methodology for fusing the depth…

read the publication
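As background, the sketch below shows plain depth-from-focus with a fixed focus measure (patch-averaged squared Laplacian) and a fixed patch size, i.e. the baseline whose hand-chosen parameters the publication proposes to replace with self-supervised fusion. The focal stack here is synthetic and the function name is a hypothetical placeholder.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(focal_stack, depths, patch_size=9):
    """Classical depth-from-focus: for each pixel, pick the depth whose refocused
    image is sharpest, with sharpness measured as the patch-averaged squared Laplacian.

    focal_stack : array of shape (num_depths, H, W), refocused intensity images
    depths      : array of shape (num_depths,), the corresponding focus distances
    """
    focus_measures = []
    for image in focal_stack:
        laplacian = ndimage.laplace(image.astype(np.float64))
        # Patch aggregation makes the measure robust to noise; patch_size is
        # exactly the kind of fixed parameter the paper proposes to learn instead.
        focus_measures.append(ndimage.uniform_filter(laplacian**2, size=patch_size))
    focus_measures = np.stack(focus_measures)   # (num_depths, H, W)
    best = np.argmax(focus_measures, axis=0)    # index of the sharpest depth per pixel
    return depths[best]                         # (H, W) depth map

# Toy focal stack: random images standing in for hologram reconstructions.
rng = np.random.default_rng(0)
stack = rng.random((5, 64, 64))
depth_map = depth_from_focus(stack, np.linspace(0.1, 0.5, 5))
print(depth_map.shape)  # (64, 64)
```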

07.11.2023

TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM

Most classical SLAM systems rely on the static scene assumption, which limits their applicability in real-world scenarios. Recent SLAM frameworks have been proposed to simultaneously track the camera and moving objects. However, they are often unable to estimate the canonical pose of the objects and exhibit low object tracking accuracy. To solve this problem, we propose TwistSLAM++, a semantic,…

read the publication
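For context, the sketch below shows the SE(3) twist exponential map used to propagate a moving object's pose between frames, the basic building block that twist-based dynamic SLAM systems rely on. It is illustrative only: it is not the TwistSLAM++ fusion scheme, and the pose and twist values are hypothetical.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(twist):
    """Exponential map from a 6-vector twist (v, w) to a 4x4 rigid transform."""
    v, w = twist[:3], twist[3:]
    theta = np.linalg.norm(w)
    W = skew(w)
    if theta < 1e-8:                      # near-zero rotation: essentially pure translation
        R, V = np.eye(3), np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + A * W + B * (W @ W)
        V = np.eye(3) + B * W + C * (W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

# Propagate a (hypothetical) object pose with a constant twist over dt seconds.
object_pose = np.eye(4)                                   # world_T_object
twist = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.2])          # (v, w): forward motion + yaw
dt = 0.1
object_pose = se3_exp(twist * dt) @ object_pose
print(np.round(object_pose, 3))
```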

awards

  • 2019

    Product of the Year at the NAB Show in Las Vegas

  • 2017

    Technology Innovation Award at the NAB Show in Las Vegas