Projects

Although the I3TC is essentially an infrastructure unit of the TU Ilmenau, it also provides the framework for interdisciplinary projects in the field of immersive technologies.

Projects: CO-HUMANICS | 6G NeXT | AUDICTIVE APlausE-MR | AUDICTIVE ECoClass-VR | AUDICTIVE QoEVAVE | AVSPACE | Digital Twins | MetaReal | MULTIPARTIES | NeuroSensEar | PoQuMo8K

Project CO-HUMANICS

Co-Presence of Humans and Interactive Companions for Seniors

A central human need is an independent lifestyle with regular social interaction with others. The technology research pursued in the CO-HUMANICS project is intended to help fulfill this need as comprehensively as possible, even in times of pronounced individualization, spatial separation from relatives and friends, and an aging society.

The CO-HUMANICS project is concerned with basic and applied research on technology-supported social co-presence. Such co-presence can be realized by augmented and mixed reality (AR/MR) techniques and by robot-based telepresence, in which spatially distant persons are virtually present in a person’s real environment.

The project follows a user-centric design approach to ensure that the needs and prior experiences of all envisaged user groups are reflected over the whole project duration.

funded by the Carl Zeiss Foundation

Spokesperson: Prof. Dr.-Ing. Alexander Raake

Project duration: 2021 - 2026

Website: tu-ilmenau.de/co-humanics

Project 6G NeXT

Future industrial and media services and applications, especially in the area of multimedia, are expected to generate an enormous amount of data for industry, media and private users that will require faster and more reliable transmission than is possible with today’s mobile networks. The funding project “6G Native Extensions for XR Technologies” (short title: 6G NeXt) aims to develop an infrastructure with an integrated network and software layer to enable new processing speeds and to implement the dynamic distribution of complex computing tasks (split computing). The latest software technologies combine computing and connectivity into an overall system with capabilities that go far beyond the edge cloud familiar from 5G.

In the pioneering 6G NeXt project, an infrastructure is being developed to identify the requirements of a future 6G network by researching and implementing, on the basis of new system architectures, two challenging use cases from innovative and forward-looking industries in Germany. The performance and efficiency of a new network generation will be determined by high-performance radio interfaces with application-optimized radio protocols as well as by ultra-fast software stacks, intelligent media processing and the deep integration of artificial intelligence (AI) to optimize the overall system. The focus of the project is on an implementation with open interfaces, easy integrability, sustainable development and optimized economic efficiency in order to achieve broad social acceptance of the new 6G technology.

 

Objective

6G NeXt aims to develop a scalable, modular and flexible infrastructure that enables a variety of industrial and end-user use cases which extend the requirements of today’s 5G networks in terms of intelligence, performance and efficiency. Two particularly challenging applications with different requirements will be developed as examples:

  • A novel anti-collision system for aviation, using the example of drones at airports with mixed air traffic. The flight paths of aircraft are monitored in real time and collision risks are predicted using algorithms; see the sketch after this list. In case of danger, evasive maneuvers are calculated centrally and, unlike in today’s solutions, the aircraft are also controlled via 6G. These applications require low latency, synchronization of data streams and the ability to dynamically distribute computing tasks (split computing).
  • An interactive end-to-end transmission of real-time 3D holographic video with photorealistic content and realistic 3D depth for video conferencing and monitoring/inspection of objects by drones. This application requires high bit rates upstream and downstream as well as distributed and intelligent video processing.  
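
As a toy illustration of predicting a collision risk from monitored flight paths (not the project’s actual algorithms), the following Python sketch computes the closest point of approach of two aircraft assumed to fly straight at constant velocity; all positions, velocities and the safety radius are made-up values.

    import numpy as np

    def closest_approach(p1, v1, p2, v2):
        """Time and distance of closest approach for two constant-velocity tracks."""
        dp, dv = np.subtract(p2, p1), np.subtract(v2, v1)
        denom = np.dot(dv, dv)
        t = 0.0 if denom == 0 else max(0.0, -np.dot(dp, dv) / denom)
        return t, float(np.linalg.norm(dp + dv * t))

    # Hypothetical drone and aircraft tracks (positions in m, velocities in m/s).
    t, dist = closest_approach(p1=(0, 0, 100), v1=(20, 0, 0),
                               p2=(500, 50, 100), v2=(-15, 0, 0))
    if dist < 150.0:  # assumed safety radius
        print(f"collision risk in {t:.1f} s, minimum separation {dist:.0f} m")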

One focus of the project is the development of a high-performance, high-speed backbone layer that allocates computing capacities according to requirements such as latency, energy consumption and cost. Additional services and extensions optimize the cloud infrastructures commonly used today.
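
To illustrate how such requirement-driven allocation of computing capacities might look, the minimal Python sketch below scores candidate compute nodes (device, edge, backbone cloud) against an application’s latency limit and a weighted energy/cost budget; the node names, weights and numbers are purely hypothetical and not part of the 6G NeXt design.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        latency_ms: float  # expected round-trip latency to this node
        energy_j: float    # estimated energy per task
        cost_eur: float    # estimated monetary cost per task

    def place_task(nodes, max_latency_ms, w_energy=1.0, w_cost=1.0):
        """Pick the feasible node with the lowest weighted energy/cost score."""
        feasible = [n for n in nodes if n.latency_ms <= max_latency_ms]
        if not feasible:
            raise RuntimeError("no node satisfies the latency requirement")
        return min(feasible, key=lambda n: w_energy * n.energy_j + w_cost * n.cost_eur)

    # Hypothetical example: a task that must complete within 20 ms.
    nodes = [
        Node("on-device", latency_ms=1, energy_j=5.0, cost_eur=0.0),
        Node("edge-cloud", latency_ms=8, energy_j=1.5, cost_eur=0.02),
        Node("backbone-cloud", latency_ms=35, energy_j=0.8, cost_eur=0.01),
    ]
    print(place_task(nodes, max_latency_ms=20).name)  # -> edge-cloud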

funded by the Bundesministerium für Bildung und Forschung (BMBF) as part of the research program Communication Systems "Souverän. Digital. Vernetzt."

Network coordinator: Mandy Galkow-Schneider (T-Labs, Deutsche Telekom AG)

Project duration: 15.10.2022 - 14.10.2025

Website: 6gnext.com

Partners:

  • Deutsche Telekom AG, T-Labs Spatial Computing Team
  • Fraunhofer FOKUS
  • Technische Universität Berlin, Group of Mobile Cloud Computing
  • Technische Universität Ilmenau, Group of Audiovisual Technology (AVT)
  • Wildau University of Applied Sciences, Group of Aeronautical Engineering
  • Company SeeReal Technologies
  • Company Volucap (Volumetric Capture)
  • German Research Center for Artificial Intelligence (DFKI), Research Area Intelligent Networks
  • Company Logic Way
  • Schönhagen Airfield

 

The following projects use the infrastructure of the I3TC.

Project AUDICTIVE APlausE-MR

The project "Audiovisual Plausibility and Experience in Multi-Party Mixed Reality" (APlausE-MR) aims to investigate human audiovisual perception and cognition as well as social interaction in distributed multi-party mixed reality (MR) communication scenarios. A major goal of the project is to gain a robust understanding of the factors influencing the plausibility and quality of virtual audiovisual experiences while jointly being immersed in realistic Interactive Virtual Environments (IVEs).

funded by the Deutsche Forschungsgemeinschaft (DFG) within the DFG Priority Program Auditory Cognition in Interactive Virtual Environments (AUDICTIVE).

Project Coordinator: Prof. Dr.-Ing. Alexander Raake

Project duration: April 2021 - March 2024

Website: APlausE-MR

Project AUDICTIVE ECoClass-VR

To improve the validity of research on cognitive performance in classroom-like scenarios, ECoClass-VR plans to successively increase the realism of the research paradigms involved with regard to cognitive tasks and audiovisual scenes. For this purpose, two existing research paradigms on different cognitive performances - selective attention and listening comprehension - will be transferred from their audio-only focus to IVE-based complex audiovisual scenes. In addition, a third paradigm specifically developed for auditory cognition research with IVEs is examined. This paradigm allows for an investigation of the performance of audiovisual scene analysis for scenes of varying complexity and is adapted to IVE-based research on classroom scenes. The main target group of the research in ECoClass-VR will be children. Considering the sensitivity of this participant group for empirical research, adult subjects will be considered first for the methodological research. Based on the evaluation of the initial three paradigms during the first part of the project, the most suitable paradigm for the prototypical final study with children will be identified.

funded by the Deutsche Forschungsgemeinschaft (DFG) within the DFG Priority Program Auditory Cognition in Interactive Virtual Environments (AUDICTIVE)

Spokespersons: Prof. Dr.-Ing. Alexander Raake (TU Ilmenau, Department of AVT), Prof. Dr. Janina Fels (RWTH Aachen), Prof. Dr. Maria Klatte (TU Kaiserslautern)

Project duration: January 2021 - December 2023

Website: AUDICTIVE ECoClass-VR

Project AUDICTIVE QoEVAVE

Interactive virtual environments (IVEs) aim to replace real-world sensory input with corresponding streams of artificial stimulation. If successful, such a replacement makes the technology transparent and allows the user to interact naturally in a virtual world. IVEs bring new challenges to quality evaluation and render current evaluation approaches in the audio and video communities partially inapplicable. The QoEVAVE project aims at finding and closing the gaps in current quality evaluation methodologies for audio and video, and examines the feasibility of inferring quality from human behavior in an IVE. IVEs are multimodal and allow 3- or 6-degrees-of-freedom movement in the virtual scene. State-of-the-art research shows that, compared to a uni-modal scenario, multimodal sensory stimulation has significant effects on object localization, attention and quality evaluation.

Regardless, quality evaluation today is mostly conducted within a single sensory modality and without interaction. The QoEVAVE project draws inspiration from the virtual reality (VR) community and its long history of using indirect methods to investigate immersion, presence, and performance in IVEs. More specifically, the project builds upon the foundation of quality of experience (QoE) research and integrates methodologies from the VR community to develop the first comprehensive QoE framework for IVEs. The aim is to achieve an integrated view of IVE quality perception as a cognitive process and of cognitive performance on specific tasks as an indicator of IVE quality. In summary, the project recognizes the divergence between the VR community and the media technology community and aims to unify the field with regard to QoE evaluation in IVEs.

funded by the Deutsche Forschungsgemeinschaft (DFG) within the DFG Priority Program Auditory Cognition in Interactive Virtual Environments (AUDICTIVE)

Spokesperson: Prof. Dr.-Ing. Alexander Raake

Project duration: Jan. 2021 - Dec. 2023

Website: QoEVAVE

Project AVSPACE

Audiovisual Feedback to Augmented Manual Activities During Space Walks

In space, it is silent and there is no acoustic feedback. However, such feedback has a major impact on the quality of work in space. Precise audiovisual feedback can ensure that tasks are performed optimally, for example when using a tool or disconnecting a cable. In this project, we support working in the silence of space with an augmented reality (AR) auditory and visual feedback system. Astronauts can use the system during their spacewalks. It then outputs realistic synthetic sounds according to the current task or activity. The astronaut is also supported by visual elements to provide the highest level of feedback possible. The system will enhance activities in space through audiovisual feedback. The audio component provides confirmation to the astronauts during the activity, which can lead to a more natural and precise execution of the task. The visual elements additionally provide confirmation during the work, display additional information, and can guide the astronaut step by step through the process.

The project is a collaborative effort between the Department of Virtual Worlds and Digital Games (VWDS) and the Department of Electronic Media Technology (EMT) and is funded by the European Space Agency (ESA).

Project coordinator: Dr. Tobias Schwandt

Project duration: 2022 - 2023

Website: AVSPACE

Project Digital Twins

Digital Twins of Humans for Space Operations with XR Telepresence

Digital twins in XR are virtual entities that represent real-world entities, enabling these entities to be analyzed post-hoc or even in real time. With respect to humans, various data such as pose data are collected and applied to a digitized version of the human - an avatar. This avatar then reproduces human movements in XR for applications such as monitoring, after-the-fact analysis, or telepresence. While such a digital twin can be applied to audiovisual communication, latency often prevents this specific application during space missions. However, a digital twin in XR can be useful for monitoring and analysis in space operations - in areas such as assembly, integration, test & verification (AIT/AIV), mission control centers, or concurrent design facilities (CDF) - but also for communication, by providing an immersive view.
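
As a rough illustration of how captured pose data can drive such an avatar, the short Python sketch below applies one frame of joint rotations to a simple skeleton representation; the joint names and data layout are illustrative assumptions, not the project’s actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Joint:
        name: str
        rotation: tuple = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)

    @dataclass
    class Avatar:
        joints: dict = field(default_factory=dict)

        def apply_pose(self, pose_sample: dict) -> None:
            """Copy one frame of captured joint rotations onto the avatar."""
            for joint_name, rotation in pose_sample.items():
                self.joints.setdefault(joint_name, Joint(joint_name)).rotation = rotation

    # Hypothetical single frame of motion-capture data.
    frame = {"left_elbow": (0.0, 0.0, 0.38, 0.92), "head": (0.0, 0.17, 0.0, 0.98)}
    avatar = Avatar()
    avatar.apply_pose(frame)
    print(avatar.joints["left_elbow"].rotation)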

The project is funded by the European Space Agency ESA.

Project coordinator: Dr. Tobias Schwandt

Project duration: Jan. - Dec. 2023

Website: Digital Twins

Project MetaReal

Immersive Knowledge Access, Collaborative Exploration and Intelligent Retrieval in a Digital Copy of the World

In MetaReal, 3D reconstructions of real, existing cultural assets can be experienced by several people, independently of time and place, using virtual reality technologies. Through the use of augmented reality, visitors on site are also integrated into a shared experience with the virtual visitors. MetaReal thus enables an immersive and collective experience in the manner of a walk-in Wikipedia, in which visitors enrich and expand their knowledge through their interaction with the environment.

The project is funded by the Free State of Thuringia as part of the state program ProDigital.

Project coordinator: Prof. Dr. Wolfgang Broll

Project duration: Jan. 2020 - June 2024

Website: MetaReal

Project MULTIPARTIES

Speech Communication and Quality of Experience in Augmented Reality-based Multi-Person Conferencing

In recent years, and especially in the context of the coronavirus pandemic, video conferencing has seen an unprecedented spread. It has been shown that many aspects of interpersonal communication are lost and that, at the same time, users experience a significantly increased cognitive load. MULTIPARTIES therefore aims to develop novel, interactive technologies for the realization of collaborative telepresence systems in augmented reality (AR): using realistic augmented reality avatars with expressive gestures and facial expressions in combination with spatial audio, the project creates a shared communication space. This gives participants the impression of being together in one place (telepresence and co-presence). The collaborative project also enables the intuitive use of body language and selective acoustic perception and communication. The success of MULTIPARTIES can be measured both by user-centered evaluation of the technologies and by comparison with other systems.

funded by the BMBF as part of the KMU-innovativ programme

Project coordinators: Prof. Dr. Wolfgang Broll (VWDS), Prof. Dr. Alexander Raake (AVT), Dr. Stephan Werner (EMT)

Project duration: 2022 - 2024

Website: Multiparties

Project NeuroSensEar

Neuromorphic acoustic sensing for high-performance hearing aids of tomorrow

More than 11% of people in the EU are affected by hearing loss, but only 41% of them use a hearing aid, due to continued problems with speech understanding and with fitting the devices. The NeuroSensEar project aims to significantly improve the acceptance and provision of hearing aids by substantially increasing their performance and by greatly facilitating and automating their adaptation to the patient and to different listening situations. To achieve this, principles of biological information processing are integrated into hearing aid technology and interactive outputs for better listening comprehension are investigated, so that persons with hearing impairment largely regain their ability to hear. In the long run, this will help to reduce the economic costs and the severe social consequences of hearing loss in terms of sustainable and efficient health care. As a breakthrough, the project aims to solve two main problems of current hearing aids: 1. hearing comprehension in difficult listening situations with many sound sources and low signal-to-noise ratios; 2. the lifelong ability to recognize, learn and act in new listening situations, through continuous adaptation to the wearer, their hearing, and their changing environment.

funded by the Carl Zeiss Foundation

Contact: Prof. Martin Ziegler and Dr. Claudia Lenk (FG Micro- and Nanoelectronic Systems)

Project duration: 01.10.2023 - 30.09.2028

Website: NeuroSensEar

Project PoQuMo8K

Perception-oriented Quality Modeling for 8K Live Video

PoQuMo8K addresses the development of a technical solution for the user-centric quality evaluation and encoding optimization of the novel video format 8K UHD-2. This format enables a stronger sensation of reality and a much higher level of immersion than before. When used in live applications, it enables a highly immersive view of events such as concerts, sports, and lectures. Encoding and streaming of 8K content in high quality and in real time is challenging and requires cutting-edge video compression technologies. Even more than for lower resolutions, encoder developers and service providers have to ensure that the delivered video yields high visual quality for viewers. Therefore, a solution for quality evaluation of 8K media that reflects human visual perception is needed. PoQuMo8K addresses this need with a video quality algorithm for automated, real-time, perception-based quality evaluation, used both for live-service monitoring and as part of the encoding of 8K video in live application scenarios.
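
As a simplified illustration of automated, frame-wise quality monitoring (not the perception-based model developed in PoQuMo8K), the following Python sketch compares decoded frames against their reference and flags frames whose PSNR falls below a threshold; the frame sizes and the threshold are arbitrary assumptions.

    import numpy as np

    def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio between a reference and a decoded frame."""
        mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def monitor(frame_pairs, threshold_db: float = 38.0):
        """Yield (frame index, PSNR) for frames falling below the quality threshold."""
        for i, (ref, dec) in enumerate(frame_pairs):
            score = psnr(ref, dec)
            if score < threshold_db:
                yield i, score

    # In practice the frames would be full 8K images (7680x4320);
    # tiny random frames keep this example fast.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    dec = np.clip(ref.astype(int) + rng.integers(-20, 20, ref.shape), 0, 255).astype(np.uint8)
    for idx, score in monitor([(ref, ref), (ref, dec)]):
        print(f"frame {idx}: {score:.1f} dB (below threshold)")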

funded within the scope of R&D cooperation projects of companies

Project partner: Spin Digital Video Technologies GmbH (https://spin-digital.com)

Project Manager: Prof. Dr.-Ing. Alexander Raake

Website: PoQuMo8K