Hello Jérôme, what vision of augmented reality does ARtwin support?
ARtwin is a 3-year European project that brings together seven partners: Artefacto, b<>com, the Czech Technical University in Prague, Holo-Light, Nokia, Q-Plan and Siemens. We want to make augmented reality applications practical in complex environments. For this, we believe it is essential to have the lightest possible terminals, which means reducing energy consumption and therefore the embedded computing power. We believe that this computing power can and must be moved into the cloud, and even to the edge, in order to carry out the complex processing required, such as spatial computing or 3D rendering. It is therefore essential to offer shared, seamless augmented reality services at the scale of a factory or a construction site by sharing a 3D map of the real environment between the different terminals. To that end, the project has developed the so-called “AR Cloud” platform, which relies on private 5G connectivity developed jointly by Nokia and b<>com to provide the very low latency and high bandwidth needed for communication between Augmented Reality devices and these remote services.
More concretely, an Augmented Reality system displays contextual information registered to the real environment. For this information to appear fixed relative to the real world, the position and orientation of the Augmented Reality device in space must be estimated accurately, with an accuracy that GPS cannot currently provide. To achieve this, AR systems build a 3D digital map of the space from the observations captured by their cameras while simultaneously using this map to locate themselves in space. This is called SLAM (Simultaneous Localization And Mapping).
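The two halves of the SLAM loop described above — locating the device against the current map, then extending the map with new observations — can be illustrated with a deliberately simplified 2D sketch. This is a toy model under stated assumptions: real SLAM systems match visual features under uncertainty and refine everything with bundle adjustment, whereas here landmarks are identified exactly by name and the pose is a simple average. All class and landmark names are hypothetical.

```python
class TinySlam:
    """Toy 2D illustration of the SLAM loop: localize against the
    current map, then extend the map with newly observed landmarks."""

    def __init__(self):
        self.map = {}  # landmark id -> (x, y) in world coordinates

    def localize(self, observations):
        """observations: {landmark_id: (dx, dy)} relative to the device.
        Estimate the device position by averaging over known landmarks."""
        known = [(lid, rel) for lid, rel in observations.items()
                 if lid in self.map]
        if not known:
            return None  # cannot localize without overlap with the map
        xs, ys = [], []
        for lid, (dx, dy) in known:
            mx, my = self.map[lid]
            xs.append(mx - dx)  # device position implied by this landmark
            ys.append(my - dy)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def update_map(self, pose, observations):
        """Anchor landmarks not yet in the map at the estimated pose."""
        px, py = pose
        for lid, (dx, dy) in observations.items():
            if lid not in self.map:
                self.map[lid] = (px + dx, py + dy)

slam = TinySlam()
slam.map = {"door": (0.0, 0.0), "window": (4.0, 0.0)}  # prior map
obs = {"door": (-1.0, -2.0), "window": (3.0, -2.0), "pillar": (0.5, 1.0)}
pose = slam.localize(obs)    # device localized from "door" and "window"
slam.update_map(pose, obs)   # "pillar" is new and gets added to the map
```

The point of the sketch is the interdependence: the map is needed to localize, and a pose is needed to place new landmarks in the map — which is exactly why the two problems must be solved simultaneously.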
First, the idea of this AR Cloud platform is to distribute at the edge, in the form of microservices, the computing effort of localizing devices and of creating and updating the 3D maps that underpin SLAM. This makes it possible to offer augmented reality services to any type of device, including those with few embedded resources, or even augmented reality glasses with a simplified form factor.
Second, this AR Cloud platform makes it possible to share a common 3D map of a given real place with all AR devices, so that each device can use it to locate itself and can also update the map in real time from its own observations, keeping the map valid as the real environment changes. At b<>com, we are working in particular on the implementation of the AR Cloud platform and on spatial computing services, while the remote 3D rendering services are developed by Holo-Light.
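To make the offloading idea concrete, the exchange between an AR device and an edge localization microservice might look like the following message shapes. These are purely illustrative: the actual ARtwin/SolAR wire format is not specified here, and every field and identifier name below is an assumption. The key design point from the interview is real, though: the device sends compact extracted features rather than raw video frames, and receives a pose expressed in the shared map's coordinate frame.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class LocalizationRequest:
    """Hypothetical payload an AR device sends to an edge service."""
    device_id: str
    map_id: str            # which shared 3D map to localize against
    timestamp_ms: int
    features: list = field(default_factory=list)  # image features, not frames

@dataclass
class LocalizationResponse:
    """Hypothetical reply: a 6-DoF pose in the shared map frame."""
    position: tuple        # (x, y, z)
    orientation: tuple     # quaternion (w, x, y, z)
    confidence: float

req = LocalizationRequest(
    device_id="ar-glasses-07",
    map_id="factory-hall-3",
    timestamp_ms=1_700_000_000_000,
    features=[[0.12, 0.55], [0.80, 0.33]],
)
payload = json.dumps(asdict(req))  # sent over low-latency private 5G
resp = LocalizationResponse((1.0, 2.0, 0.0), (1.0, 0.0, 0.0, 0.0), 0.93)
```

Because every device localizes in the same `map_id` frame, two devices that receive poses from the service are automatically registered in a common coordinate system, which is what enables the shared AR experiences described above.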
The AR Cloud platform and the services deployed on it are developed with the SolAR framework. Our developments are mainly open source, both to seed a European community around this AR Cloud theme and to facilitate verification of their compliance with the regulations in force on the protection of confidential data and user privacy. Indeed, today's AR devices film continuously and can constantly analyze users' environments, practices, and activities. Moreover, deploying these services and storing the data on a private or sovereign cloud infrastructure, rather than on infrastructure subject to foreign laws governing access to data (e.g. the CLOUD Act), is becoming a key issue for Europe's digital independence. This independence is all the more essential in sensitive areas such as defense, health and industry.
Four innovations developed within the framework of the ARtwin project, linked to digital space mapping, have been selected by the European Commission's Innovation Radar.
What applications are already developed?
ARtwin aims to provide the European Industry and Construction 4.0 ecosystem with a sovereign AR Cloud platform that meets its growing need to increase productivity, improve product quality, and reduce time and costs. To this end, the concept developed in ARtwin is based on a system able to continuously maintain a digital twin as close to reality as possible, while providing seamless augmented reality services at the scale of a plant or construction site.
Two experiments have been under way in an industrial setting since March at the Siemens laboratory in Munich. The first aims to provide AR maintenance assistance to operators in the factory. The second makes it possible to plan the layout of an assembly line by simulating and evaluating different configurations in augmented reality.
At the same time, an experiment led by Artefacto is planned for June on an Eiffage construction site. It will overlay the digital model of the building in Augmented Reality during the construction phase, to detect possible defects as the work progresses.
What is the major benefit of AR Cloud technology for manufacturers?
Today, massive adoption of AR technologies is only possible if the systems can work anywhere (not merely in a restricted environment), anytime, and on any type of terminal, just like a smartphone. For AR devices to work anywhere, they need 3D maps at a potentially "global" scale: indoors and outdoors, day and night, and so on. Today's AR devices are not capable of hosting maps at the scale of a factory or construction site. The only solution for mass adoption of Augmented Reality is therefore to move this 3D mapping to the edge and the cloud. In short, without an AR Cloud platform running unified, interoperable 3D maps, I don't see how mass adoption of AR will be possible.
What other AR services for industry are you developing at b<>com?
In addition to these open-source developments enabling the deployment of an AR Cloud platform, we are developing high value-added technological building blocks at b<>com to extend the 3D localization and reconstruction services. For example, we are researching how to generate semantic 3D maps, not only to improve mapping accuracy and localization, but also to allow the maps to be used for automated quality control and compliance checking. Similarly, we are working on real-time analysis of the activity of users equipped with augmented reality terminals, based on machine learning approaches applied to the images acquired by the onboard cameras. Analyzing the user's activity will make it possible, for example, to display the right information in augmented reality at the right time with regard to the operator's needs, to verify compliance with assembly or maintenance procedures, or to capture an expert operator's know-how in order to create low-cost augmented reality training content for less experienced operators.