Digital twins in XR are virtual representations of real-world entities that enable post-hoc or even real-time analysis of those entities. For humans, data such as pose data is collected and applied to a digitized version of the person - an avatar. This avatar then reproduces human movements in XR for applications such as monitoring, post-hoc analysis, or telepresence. While such a digital twin could support audiovisual communication, latency often rules out this specific application during space missions. However, a digital twin in XR remains useful for monitoring and analysis in space operations and in areas like assembly, integration, test & verification (AIT/AIV), mission control centers, or concurrent design facilities (CDF), and it can also enrich communication by providing an immersive view.
In this project we specifically investigate digital twins for monitoring and post-hoc analysis in XR. This way, teams could obtain immersive information about the movements and pose of astronauts (or other teams) in space, or teams could collaborate more easily (e.g., in remote CDF and AIT/AIV activities) by displaying the avatars in virtual or augmented reality. To achieve this, we will not transmit full (compressed) visual data, but only features and meta-information (e.g., facial expression parameters or simple eye and lip coordinates). This information could be acquired by simple RGB cameras, an RGB-D camera, or more sophisticated tracking technology (e.g., sensor data from within an astronaut's suit). A deep learning pipeline extracts the required data, which is then transmitted. The receiving party uses this information to animate a pre-created high-fidelity avatar in AR or VR. Thus, we can provide the involved parties with an animated high-quality visual avatar while saving data rate.
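To make the expected data-rate savings concrete, the following sketch serializes a hypothetical per-frame feature packet (landmark coordinates instead of pixels) and compares its size to one raw RGB frame. The packet layout, landmark counts, and function names are illustrative assumptions, not the project's actual wire format:

```python
import struct

# Hypothetical packet layout (an illustrative assumption, not the
# project's actual format): per frame we transmit 33 body-pose landmarks
# plus 20 facial landmarks (eye/lip coordinates), each as three
# little-endian float32 values (x, y, z).
NUM_BODY_LANDMARKS = 33
NUM_FACE_LANDMARKS = 20

def pack_features(landmarks):
    """Serialize a list of (x, y, z) tuples into a compact binary packet."""
    payload = b"".join(struct.pack("<3f", *p) for p in landmarks)
    # Prepend a minimal header: the landmark count as uint16.
    return struct.pack("<H", len(landmarks)) + payload

def unpack_features(packet):
    """Recover the landmark list on the receiving side for avatar animation."""
    (count,) = struct.unpack_from("<H", packet, 0)
    return [struct.unpack_from("<3f", packet, 2 + 12 * i) for i in range(count)]

# One feature packet per frame vs. one uncompressed 640x480 RGB frame:
landmarks = [(0.5, 0.5, 0.0)] * (NUM_BODY_LANDMARKS + NUM_FACE_LANDMARKS)
packet = pack_features(landmarks)
raw_frame_bytes = 640 * 480 * 3

print(f"feature packet: {len(packet)} bytes")      # 638 bytes
print(f"raw RGB frame:  {raw_frame_bytes} bytes")  # 921600 bytes
```

Even against compressed video the feature stream stays several orders of magnitude smaller, which is what makes the approach attractive under the constrained downlinks of space missions.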